12/5/13

The “Genius” of Cultural Relativism

Why is it that refusing to judge is often seen as a form of virtue? This semester I’ve been leading discussion groups in a class that explores the nature of genius. This week we explored the idea of the “evil genius,” a significant cultural trope in our society that often appears in other guises such as the mad scientist or the supervillain. We asked the students whether it was possible for genius to be evil. While most lined up on one side or the other of the debate, a few vocal students protested, and one defended his protest with a classic “Who am I to judge?” line: “What some might call evil, others might call good.”

Now as an abstract statement this might be defensible. There are certainly cases, particularly involving violence, where one person’s (or one country’s) evil is another’s greater good. However, this student (and many others like him) was using the fact of a multiplicity of perspectives to conclude that we cannot and should not make distinctions between perspectives. “That’s just their culture” is disingenuous, but it is seen as a modern, savvy, and politically correct response. After all, it’s better than “They’re just inferior” or “They’re just savages,” right?

Don’t get me wrong. I’m glad that cultural imperialism is not—at least overtly—the norm in higher education. The problem is that many students conclude that the best response to lessons about pluralism and diversity is to adopt a position of cultural relativism, and many teachers either don’t know how to correct the trend or think the same way their students do.

Of course, none of us are really cultural relativists. We are cultural relativists insofar as we are personally unaffected by, and distanced from, the cultures we are reluctant to judge. We are cultural relativists insofar as we reside in a culture that allows us the privilege to treat other cultures as thought experiments. Yet insofar as we are privileged, we should instead use that privilege to question both other cultures and our own thoroughly, in order to make judgements for positive change.

So a historical shift has taken place from explicit cultural imperialism to an implicit cultural imperialism under the guise of appreciating and valuing cultural diversity. Religion plays a significant role in maintaining this separation. Echoing my own past religious experience, students who profess a strong Christianity usually fail to see a connection between their ideological ethics and their practical ethics, their way of operating in the world. Granted, this is true to a certain extent of all students, due in part to the infiltration of Christian values into American life, but it is more easily visible in the religious. These students are quick to defend Christianity from perceived attacks and extremist misrepresentation, yet fail to see the implications that the Christianity of Jesus they profess has for their practical, ethical lives.

I don’t think this is all their fault. The training to connect an ideological Christian ethic with reality is remarkably sparse within the religious community. Christianity is about personal salvation, after all, and it is rarely in the institutional interest to advance anti-institutional claims such as equal treatment for the LGBTQ community or universal health care. If anything, religious students are implicitly told not to make their religion a big deal in public for fear of becoming one of those extremists on the quad who scream at scantily clad women that they’re going to hell. Higher education maintains a tacit agreement with religion: students may keep their faith unquestioned, and even use it to make their decisions, as long as they don’t make it overly obvious.

This strongly contributes to the “Who am I to judge?” scenario above, which stems from an inability to engage complex issues because of a faulty, undeveloped means of reasoning. Refusing to make any judgement at all may be better than trumpeting an overtly culturally biased one, but I’m not sure it makes a lasting difference if the underlying mechanism of unjustified belief remains in place. If that counted as success in the 20th century, it is no longer enough.

There has been much talk lately of the dim future of the humanities, and if pluralism and cultural diversity are the best things they have to offer, the analysis may be correct. Cultural diversity should absolutely be taught, but not in a way that allows students to keep their ideologies sacred. We should at least not pretend that this makes for a productive and successful citizenry. What it makes is a body of people who profess love, care, and community support while maintaining bias and bigotry against others. Education is not about what not to say or what to think but, as David Foster Wallace claimed, about “how to think.”

08/5/13

Is science the key to morality?

The only one of the “New Atheists” I have ever read is Sam Harris. I recently finished his The Moral Landscape: How Science Can Determine Human Values. I think it was the seeming audacity of the title that drew me to the work. As a student of religion (and the humanities more generally), I am reluctant to believe claims that science can directly replace the position that religions have traditionally held in society, even as I am a failure at religion myself. I have written before on this topic, as well as on the relation of scientific knowledge to the senses.

After reading The Moral Landscape, I looked at my notes on the other Harris book I read back in 2007, The End of Faith: Religion, Terror, and the Future of Reason, the work that first put Harris on the map. Though the earlier work talks more specifically about religion, both books contain some of the same ideas, namely that religion is an illogical and insufficient guide for morality and does more harm than good (or at least enough harm to outweigh the good). Even reading him back then, as a Christian, I conceded that he seemed to have a genuine concern about the growing violence in the world and its connection with forms of religion. However, I had several general objections at the time, all of which I now consider insufficient (and all of which he anticipates in The Moral Landscape).

First, I objected that Harris criticizes faith for not being testable, when the very definition of faith—at least in one Pauline Christian interpretation—is belief in things unseen, belief despite lack of evidence. Harris also noted that the extent to which religious adherents are tolerant is the extent to which they don’t believe what their tradition tells them. I am much more inclined to agree with this statement now than I was as a Christian.

The other major objection I lodged is embarrassingly common among religious adherents: if you take away a person’s religion, what else will they have to give them a reason to live? It is easy to see that this is not an adequate defense of religion; it is simply a plea to allow people to continue believing something that cannot be proven. The frequent complaint lodged against atheists is that it is just mean to pick on someone’s beliefs if they aren’t hurting anyone and give the person comfort. One response is that it does hurt society when people who don’t existentially rely on religion continue to affirm belief in it, both because of the systemic forms of intolerance and violence such belief can support and because of the legitimacy it lends religion in general, including the groups we would label “fundamentalist.”

My conclusion in my review of The End of Faith was that, despite the good arguments Harris made, science was simply not advanced enough to replace religion as a source of values. Religion has traditionally been that source, and that gives it a historical advantage. Looking back, that amounted to dragging my heels and applying to science a standard from which I exempted religion because of its lengthier history. My reading of The Moral Landscape affected me in a different way.

The gist of The Moral Landscape is that our brain, our consciousness, is the primary determinant of how we view, interact with, and understand our world. As that is the case, it is science that offers us the best method for understanding the way we operate, particularly the way we interact with the world and each other. We call the standards that guide us morals, and many think those are given by God or a religious tradition, but for Harris, we must look to science for keys to a more sustainable well-being than religion has offered.

At the beginning of the work, I found myself making the same critique: science doesn’t lay out an exact map of morality. I am much less confident than Harris in the ability of science to help solve moral quandaries, especially “science” in the generalized way he seems to use it. His focus on the brain seems a little too cold and clinical at times. For example, he explains that the chemicals oxytocin and vasopressin are involved in the way we emotionally bond to others. Children raised in orphanages do not experience the same surge of these chemicals when interacting with adoptive parents as other children do with biological parents. Like Harris, I find it clear that this altered chemistry affects the emotional and psychological responses of these children; but solving these problems on a chemical or biological level would look much different from solving them on a psychological one, and would involve looking at the human in a different way. At the least, this shows that while our morality may depend in part on the human brain—and a complete picture of morality may not be possible without it—it does not depend solely on the brain.

However, the critiques that Harris makes of our current moral hang-ups are pointed, and they offer experts in religion a significant challenge. He strongly criticizes the kind of moral and cultural relativism that seems to prevent any critique of a particular value system. The idea that we cannot criticize the head-to-toe veiling of women is preposterous, Harris argues, on any system that purports to value societal well-being. He dismisses the response that these women may be happy with their situation by contending that even if this were the case, it is quite clear that we often do not know what is best for us.

This is dangerous territory for Harris, who might be accused of playing God, but no more or less so than the major religious traditions themselves. What is overwhelmingly practical about his approach, however, is that it does not claim to have the right answers, although it certainly admits their possibility. Rather, Harris sets broad parameters: a world in which everyone’s well-being was maximized would surely be better than a world in which everyone’s misery was maximized. We know the direction to go, although we may not have the definitive answer to every moral dilemma. Maximizing well-being is good; maximizing misery is bad.

The study of religion, and that of morality in general, is heavily influenced by anthropology and its story of the noble savage: the cultures and tribes that we cannot judge because they are culturally independent. Who are we to say they are unhappy, even if they are sacrificing each other to appease bloodthirsty deities? This complex is in part rooted in a reaction against a past history of Western imperialism, to be sure. However, Harris suggests it is also connected to a confusion between ontology and epistemology. Our experiences are subjective, but this does not mean we can know nothing about them, particularly in a comparative sense. Harris takes this approach much farther than I can, claiming that there are right and wrong answers to questions of morality. In a conditional sense, I would agree. In a universal sense, I cannot, if only because I don’t see us being privileged with anything approaching that level of knowledge in the near future. However, this doesn’t and shouldn’t stop us from making moral judgements.

As I prepare to teach a class on ethics, Harris’s commitment to “changing people’s ethical commitments” resonates with me. Where we differ is that Harris thinks our ethical commitments can and should be grounded in science: we should be nice to one another because doing so rewards us with the highest level of such-and-such chemical in our brains, and the presence of that chemical is, across multiple experiments, the best indicator of subjective happiness. I am skeptical that we can ever explicitly base our morality on this. As Harris seems to admit on some level, we may need a more elaborate story, some sort of Nietzschean tragedy, to found our morality. I think, though, that we might be happier founding our morality at the level of social construction, with the help of scientific insight of course. Brain chemicals just don’t make the same story that Joseph Campbell’s hero myth does. This doesn’t prevent us from criticizing the inadequacy of our current stories and searching for better ones, ones more inclusive of current culture.

In any case, there is much to recommend in Harris’s book and little to fear.

02/26/13

Eusociality, Multilevel Selection, and my Smartphone

Harvard Emeritus Professor E. O. Wilson published an interesting opinion piece in the New York Times over the weekend, entitled “The Riddle of the Human Species.” (I subsequently found/remembered another piece he wrote last year, conveying similar information in more religious language.) Wilson is one of the few scientists I like to read because he writes accessibly and is conversant with the other side of the aisle, i.e., the humanities.

The article opens with the idea that the humanities (history, philosophy, art, religion, etc.) cannot give us the full picture of humanity and that science must contribute to the endeavor. I appreciate this approach because science is often presented, by both sides, as an exclusive harbinger of truth, one that cannot or doesn’t know how to share. It can contribute a valuable piece of the puzzle, Wilson says, in helping to determine why we are the way we are.

He continues, “A majority of people prefer to interpret history as the unfolding of a supernatural design, to whose author we owe obedience. But that comforting interpretation has grown less supportable as knowledge of the real world has expanded.” There is a lot to comment on in this one sentence. First, it is in one sense astonishing that the majority of people on earth “prefer” a supernatural explanation for the way things are to a non-supernatural one, scientific or not. I’ve never thought about it quite this way, but perhaps one reason is that supernatural explanations are great equalizers: on their surface, they require no specialized knowledge. On the one hand, you have a complex explanation of the evolution of the human species as in part a result of eusocial behavior and multilevel selection; on the other, “God made the world.” The latter is far more immediately accessible.

I might make a comparison with my smartphone. I have very little idea of how it works. If someone asked, I might offer up lame suggestions about electricity and microprocessors, but I don’t know how it all fits together. One could argue that I treat it as supernatural. It just works, and when it doesn’t, I don’t know why, and my lack of knowledge makes me extremely frustrated because it should just work. Its lack of functionality exposes my severe lack of understanding. If I knew just a little bit more, I might be able to deal with problems—at least smaller ones—myself, and I would likely be less frustrated and less dogmatic about its reliability. But most of the time, I am satisfied to treat it like magic. This is not to say, necessarily, that detailed knowledge of how electricity works with the components of the phone amounts to objective knowledge of how the phone works, but it is a more justifiable and reliable understanding than “It just works.”

Wilson attributes some of the success of humanity to eusociality, “cooperatively rear[ing] the young across multiple generations.” This requires protection: creating a “home base” in which to harbor the weakest, watched over by a smaller number while others venture forth to forage. This transition, in turn, may have been enabled by a transition to meat-eating, which allowed less work by fewer people for more energy gain.

These elements required alliances and group formation so that some could go out and hunt while others stayed behind. The alliances, in turn, required constant negotiation and inference: staying up to date on the feelings and associations of others and being aware of one’s own. Wilson identifies these group formations as based in part on individual competition and cooperation within groups and in part on the same across groups.

This background provides a lead-up to the last three paragraphs of the article, which are the most interesting to me. Wilson comments that although violence—as a result of competition within and across groups—has been a part of society as long as we have record, we do not have to conclude that it is part of our nature. “Instead,” he claims, “they are among the idiosyncratic hereditary traits that define our species.” What’s the difference? Rather than explaining our violence by man’s sinful nature, or its secular equivalent of intrinsically good and bad people, we can locate the reasons for competition within a meaningful explanation and look for alternatives to the kinds of violence we collectively believe cause more harm than good. We are the way we are because we became that way, not because we were made that way.

For Wilson, this biological genealogy means a couple of things. First, as people begin to process the connections between science and the humanities, it will make a substantial difference in the way we understand our history, which will come to include pre-history as well. We may also take better care neither to treat the world as a temporary home that will soon be abandoned, as in traditional Christian theology, nor as an object we can control at will, as in certain earnest scientific communities.

The moral of the story, for me, is that science doesn’t have to be pitted against the humanities in a life-and-death competition to explain the universe. Both offer necessary avenues to the fullest explanation of the human species. Religion is an integral part of the development of human understanding as well, but it is gradually losing its explanatory value. For Wilson, it has no place left; for most, letting go will take more time. Even if its explicit value disappears from the scene, however, its legacy will live on in its cultural influence for many years.