09/15/13

Human Nature: The Good, the Bad…and the Neutral?

I’m teaching a course on ethics to college sophomores this semester, and it has been an intriguing experience so far. I set the stage the first day by explaining my background in the study of religion and thus my proclivity to discuss the influence of religion on ethical behavior. I also told students they would not only need to articulate what they believe about a given scenario, but why they believe it, and if the “why” adequately justifies the “what.” Standard fare, but especially important in an ethics course.

Our first foray into exploring these issues was a discussion on our moral valuation of human nature. I instructed students to stand and physically place themselves in a line according to whether they thought human nature was good, neutral, bad, or somewhere in between. The point of the exercise is to indicate that the way we view the nature of humanity (or whether we believe such a thing exists) affects how we understand our capacity for ethics and our ethical decision-making processes.

In both my sections of the course, about three-quarters of students put themselves somewhere in the middle between the extremes of good and bad. The remaining quarter thought human nature lay at one extreme or the other. Only a few thought that humanity was by nature good (although this is certainly as defensible as saying it is bad), and even fewer of those could articulate their response. They primarily defended their positions with the (acknowledged) idealistic hope that humanity should be good, despite fears that it is not.

On the other side, those who explained their justification best were those who think our nature is “bad.” One student gave a short testimony when I asked, responding that because she is a Christian, she believes that humanity is bad unless it is saved by Jesus Christ. Another student responded that although she was not religious, she also thought humanity was bad when left to its own devices, but is tamed through constructive social influence. Both of these positions are informed by a Western Christian cultural influence, the first more obviously, and the second by a Hobbesian perspective suggesting that without a “social contract” we would all be clubbing each other over the head at the slightest provocation.

The exercise served its purpose well, as many students reflected that their minds were changed when they heard justifications from other students and had to think how to defend their beliefs. I was reminded, though, of how much more difficult a task it is to explore other ethical possibilities when we have a dogmatic view of the world, and a decision on the morality of human nature is one of the most fundamentally dogmatic of all.

If I had participated in the experience myself, I would have placed myself directly in the middle, not because I think human nature is both good and bad, as some students commented, but because I doubt that there is such a thing we can productively label as human nature. There are multiple problems with assigning a moral value to our nature as humans. I certainly cannot rule out genetic or biological predispositions and adaptations that encourage us to act in particular ways, but it makes little sense to give them a moral valuation on the natural level. Doing so is the product of lazy thinking.

One could try to justify the badness of human nature by the many terrible tragedies that have taken place in the course of human history, but those on the other side of the spectrum could also amass a great number of advancements and improvements that point to the innate goodness of humanity. While perhaps the most emotionally appealing, this is not the most decisive forum for discussion. Those approaching the problem from a cherry-picking historical perspective are informed not by the state of nature itself, but by their situatedness, which appears to them as the natural state of human affairs and which in turn is used to interpret and judge all new stimuli.

But the larger point is that it is self-defeating to place a moral value on the nature of humanity. If indeed there are commonalities to be observed among us, their very existence suggests a futility to our approval or disapproval of them. They simply are. The stigmatization of our nature is responsible for a long history of self-revulsion in Western Christian thought, and serves to keep the institution in a position of authority, just as a state of “terror” makes the business of war much more manageable. Certainly, if our notions of good and bad are actually divinely influenced, there is conceivable justification for labeling attitudes, dispositions, and even things as good or bad, but there can be little evidence for a divine origin other than the historical and social manifestations of the labeling process, which cannot be definitively linked to an a priori state of humankind. It is a theological argument, not a philosophical or historical one. Even if such a link could be made between our notions of morality and divine ones, it would in any case indict the Divine for capriciousness in intentionally imbuing humanity with moral deficiency. To do otherwise would be to break the divine moral code.

This is not to say that the actions or even thoughts of humanity cannot be morally labeled. They can and often should be. However, the explanatory value of judging an ethical situation on the basis of the badness of human nature is deceptive and weak. It is a poor substitute for the difficult ethical work of evaluating the intricacies of the entanglements we find ourselves in. “That’s just human nature” has never solved an ethical dilemma; it has merely exempted us from the dirty work of wrestling with it.

06/26/13

Anti-Christian Bias in Academia is Responsible for Religious Bigotry. Part Two…

I posted recently about Rebecca Hamilton’s blog commentary on George Yancey’s research about anti-Christian bias among the well-educated. Hamilton concluded that anti-Christianity is widespread in the higher education system and that this is responsible for increasing religious bigotry. Although her reaction is inflammatory, her sentiment that there is a connection between higher education and loss of religious belief seems accurate. I disagree, however, with her suggestion that the higher education system is responsible for religious bigotry.

Speaking anecdotally, I would most likely still be a practicing Christian had I not gone back to school to earn a graduate degree. I don’t think that I once experienced any sort of unjustified intolerance toward Christianity from any professor. My experience of deconversion, insofar as education was a part of it, came largely from wrestling with texts that challenged the historical and ideological viability of the Christian tradition. Since I studied religion directly, it would be difficult for me to comment on how much anti-Christian bias a student in the sciences, for example, might absorb. There is a fine line for some between being challenged and being unfairly discriminated against. One of my main goals as a teacher is to encourage students to question their long-held world views and expose inconsistencies in thought and practice. Usually just exposing students to a variety of other world views and teaching them to think critically is sufficient to provoke crises. For a Christian (as well as most other students), college can thus be a complex existential experience. Many make it through relatively unscathed, but a sufficient number do not that it is common practice to go to a Christian school to avoid the conflict.

But why would there be, or seem to be, anti-Christian bias in the academy? For the same reason that there would seem to be an anti-educational bias in Christianity. The ideals of Christianity conflict with the ideals of humanistic or scientific inquiry. Christianity gives an answer to the question of life and living—God—that other forms of inquiry can neither accept nor ignore. To be certain, there are many individuals who live out their lives maintaining a balance between sometimes contradictory world views, but they do so by compromising in one or more areas. The extent to which these institutions—Christianity and (public) higher education—mix is the extent to which one or the other cedes ground. And that is not a bad thing. But its effect is negated if one or both parties must pretend that either position is neutral or irrelevant. In other words, discrimination and self-bias are inbuilt in both higher education and Christianity. These self-protective aspects cannot be removed without compromising the integrity of their structures.

What this means to me is that we should not lament that these systems conflict or attempt to neutralize their clashes. Rather, if we are searching for answers, the best way forward, a better society, etc., we should highlight points of conflict as points of leverage toward common truths. I realize that sounds platitudinous, but it is surely a better step forward than the wary pluralism of much liberal doctrine.

We must make a distinction between the ethical treatment of those who espouse world views different from our own and the challenging of those world views. They are not the same thing, yet very few can resist eliding one into the other. In our rush toward fixity, toward systematization, we deny ourselves the opportunities to better understand ourselves and our world. These opportunities will necessarily involve giving and receiving offense, but their rewards, I have decided, exceed the discomfort they cause.

06/24/13

Anti-Christian Bias in Academia is Responsible for Religious Bigotry. Part One…

Rebecca Hamilton, a member of the Oklahoma House of Representatives who also blogs at Patheos, recently posted that anti-Christian bias in academia is “one of the major reasons for the sudden increase in religious bigotry and Christian bashing in America today.” For evidence, she points to a talk given in March by Dr. George Yancey promoting his forthcoming book, Too Many Christians, Not Enough Lions.

It’s clear that Hamilton is drawing conclusions from the data that support her prior conclusions about the status of Christianity in America. I’ve talked elsewhere about the Christian construction of adversity as persecution. With that said, I was not surprised at the claim that there is anti-Christian bias in higher academia. For that reason, I had to watch the video to see how accurately Hamilton represented Yancey’s study.

There are gaps in her interpretation (although to her credit, these were put there by Yancey). First, the seemingly most damning surveys he completed were not of those working in higher education, but those with an advanced degree. This suggests that the survey says more about the correlation between levels of education and anti-religious bias, a much broader spectrum than just those in academia. Many other studies have suggested a correlation between wealth/education and a corresponding lack of religiosity (except in the United States). However, it also questions Hamilton’s viral conception of anti-Christianity being inculcated into the young by anti-Christians and spread throughout society. It means something different if religious deconversion is the result of education in general rather than simply the bias of educators.

At the end of her article, Hamilton laments that Yancey doesn’t say that “to try to make assumptions about the intelligence of a group of people based on something like religious preference is illogical in the first place.” As I watched the video, though, sociologist Yancey does suggest that religious background is a factor among others that shouldn’t matter in the hiring of a candidate. Both seem to share the belief that religious affiliation should not be relevant to faculty employment in higher education. I want to suggest, not that religious affiliation should be relevant, but that it is relevant for employment in higher education (as well as elsewhere, but perhaps less so in other areas).

There are at least two ways to examine the relevance of religious affiliation for employment in higher education. The first may just be a clarification. There is a difference between the legality of a distinction and its significance. It is fairly well-known, and Yancey makes clear, that one cannot ask questions about religious affiliation in the hiring process. Yancey’s survey question asked only whether it would make a difference if one did find out about a candidate’s religious affiliation. The affirmative responses he received seem to justify the law’s existence. However, as with any law, its creation of a blanket prohibition does not entail that all discrimination—in the morally neutral sense of the word—based on religion is irrelevant. The law is in existence precisely because there are cases in which religion carries undue weight, becoming a nearly exclusive determinant of the appropriateness of a candidate for a position. Unfortunately, to prevent unwarranted and inappropriate discrimination, the law hinders all distinctions.

The position that religion is irrelevant for hiring purposes is also interesting because it contradicts the importance of religion to the candidate. Right now the cards are stacked in favor of the potential employee. But the extent to which religion is a defining portion of the individual’s identity is also the extent of its relevance to the hiring committee. In other words, part of the reason Yancey’s survey showed that individuals were more likely to count religious affiliation against fundamentalists and evangelicals but not Catholics was because the former groups are perceived to be more likely to “bring religion to work,” so to speak. The extent to which these systems come into conflict is significant to all parties.

I was disappointed in the open-ended responses in Yancey’s second survey of “cultural progressives.” Many respondents suggested that Christians should be thrown to the lions, a riff on the ancient Christian apologist Tertullian’s protest against Christian treatment by the Roman majority. The apparent ferocity of the statements might be tempered by the protection of anonymity the survey offered, and thus any correlations between the sentiments of these respondents and corresponding actions are dubious. Insofar as I understood the statements, though—without a clear understanding of the question asked by Yancey or the context—they erode any sort of ethical or moral high ground the “culturally progressive” respondents might have over whatever construction of Christianity they have in mind.

I’m glad, then, that these respondents can no more represent “academics” as a whole than fundamentalists or extremists can represent Christians as a whole. However, their existence cannot be denied. There is bigotry in Christianity as well as academia, and the key is not to generalize—“Academics are anti-religious,” or “Christians are ignorant”—but to examine the specific cases and their relation to the institutions as a whole. To what extent or in what ways does Christianity promote uncritical thinking? To what extent or in what ways does higher education inculcate a devaluation of religious traditions? These are questions worth exploring to exemplify both the significance of social and institutional construction and the heterogeneity of interpretation, the diversity of behavior within categories we would prefer to think of as homogeneous.

None of this is to say I think that laws preventing employers from discriminating on the basis of religion should be removed. I think they serve to prevent unwarranted discrimination. It is also hypocritical to hold that, while folks like Hamilton show through their writing that Christianity is their primary identity factor, others must pretend that identity is nonetheless irrelevant to their employment. When I was a Christian in the higher education system, my religious affiliation certainly made a difference in my research, even though I tried to pretend—like others—that it did not.

I haven’t discussed the particular conflicts between religious identity and the higher education system, which are the most significant factors Hamilton (and Yancey) are reacting to. I’ll save that for later.

What do you think? Should/does religious affiliation matter for employment? Why and from what perspective?

04/08/13

Smart people can be religious too, can’t they?

Being charitable to the positions, beliefs, and arguments of others is a hallmark of thorough thinking, and it is a good marker to determine the quality of online content. Blogs and comments are often dominated by clear but one-sided opinions on a particular subject, which allows them to gain a quick following by confirming the opinions of their own group. If one’s goal is to start and maintain a community of like-minded people for the benefits a feeling of belonging provides, this is effective. Usually, however, such blogs are constructed as if intending to speak to those on the other side of the fence, in which case their manner of argument is poor and ineffective, because, in the language of Stephen Covey, they seek first to be understood before they understand.

I cringe at these types of arguments, regardless of which side of the fence they land on, because they pretend to be something they are not. Being charitable doesn’t mean not making claims of value or judgment; it simply means a considered investigation of the side you are arguing against, putting it in the best possible light. Unfortunately, academic training seems to make one prone to the opposite problem, being so charitable that one does little other than summarize the state of affairs. This may be helpful if the greater public is unaware of a factor that may change the nature of a discourse, and often it functions as a plea for moderation against the more one-sided folks. Only rarely, at least in my field, do scholars make challenging claims. It’s simply the way we were raised.

I would like to think that people who study religion have to be more charitable than most, because they are often dealing with the impact of beliefs and actions that are self-founding; in other words, they cannot be verified or justified by outside reasoning. I have come to wonder, though, whether touting the pluralism of religious scholarship is not simply bad faith. Perhaps scholars use arguments against bias to avoid upsetting their audience, or even more critically, to avoid upsetting themselves. I know this was true in my case. I survived as a Christian for at least two years only by maintaining a separation between my religious life and my academic life, even though the latter deals almost exclusively with the religion I practiced. It eventually became an untenable separation for me, the exact reasons for which remain a mystery, especially as many others are able to operate in both worlds, the religious and the academic.

Indeed, I have had numerous conversations with friends who are believers about the fact that there are many intelligent people, many intelligent scholars even, who hold very strong religious beliefs. It may seem silly even to have that conversation, but the nature of the majority of the discourse, in which atheists think Christians are stupid, or at least Christians think atheists think they are stupid, and Christians think atheists are all the devil’s servants destined for hell, or at least atheists think Christians think they are, makes it a practically inevitable conversation. In addition, because I quit religion while in higher education, friends often assume I think that my current position is the “smarter” one.

Many different names come up in the conversation about smart Christians, with C. S. Lewis always high on the list. I’ll return to him another time, but I came across another brief argument by a Christian academic that reinforces my contention that one cannot justify religious belief from a non-theological scholarly methodology. Gary Gutting, a philosopher at Notre Dame, wrote an opinion piece, “On Being Catholic,” in the New York Times, where he says, “I try to articulate a position that I expect many fellow Catholics will find congenial and that non-Catholics (even those who reject all religion) may recognize as an intellectually respectable stance.” What follows is part personal testimony and part justification of a liberal approach to an orthodox tradition.

Gutting argues, as liberal Christians often do, that while the church may not provide fundamental truths, it is a helpful tool for understanding the human condition. While he doesn’t go into detail here, the “tools” other Christians cite are primarily explanatory ones, such as man having a sinful nature, which then explains why people do bad things, reinforcing the idea that if only there were more Christians, there would be less evil in the world. Gutting also aligns with other liberal Christians in highlighting the ethic of love as a “powerful force for good” and the lens through which Biblical teachings should be interpreted. He anticipates the counterargument that he is promoting a watered-down version of the faith by contending that Catholicism itself makes room for such diversity of belief.

None of this is a clear justification of his belief as a Catholic or a reconciliation with his life as an academic. In the end, he offers two reasons not to abandon the flawed institution of the Catholic Church. First, the Catholic tradition is, as he says, “the only place I feel at home. Simply to renounce it would be…to deny part of my moral core.” This is where the heart of Gutting’s argument lies. He can’t give up religion because it would be giving up part of himself. I understand his argument and have felt that way myself, but it is not the intellectually respectable stance he claimed it would be. It is rather a conversation-stopper, an argument that maintains a foundational ground without question out of (a very real) fear.

By holding both that the church is flawed and yet that its ideals are right or that its heart is in the right place, Gutting keeps those flaws at a distance from himself. Yet he is left with two choices. One would be to articulate more clearly which beliefs constitute his moral core and why exactly they are best served by Christianity. If simply because that is the tradition he grew up in, fine, but that is not the reasoned argument he is making. The other option would be to seriously question whether the flaws in the Church are also deeply embedded in his moral core as well. The change in my life, from a place where I felt like Gutting to where I am now, was facilitated by the realization that my moral worldview was not, in practice, supported by the theological underpinnings I had been told it was. It was then that I realized my moral core was tied more to the particularities of my social world—which did include Christianity—and my dispositions than to a divine Creator.

Gutting’s second reason not to abandon his belief is contingent upon the first. He doesn’t want to abandon his faith to the conservatives. Again, I recognize the position, and it is one I held for a period of time. The lines are not as clear here. I am not willing to say, as many nonreligious folks do, that all religion does more harm than good. So I understand the sentiment of wanting to reclaim a rich tradition from seeming perversions. But it could also be that the unwillingness of “liberal” religious folks to abandon their tradition helps maintain the space that allows conservative and extreme factions to enact their violence against others who think differently. What would happen if all liberal Christians abandoned their Christianity for another system centered on love and morality, but without the theological underpinnings? I know it’s far-fetched, but where would that leave conservative factions? Without enough support to survive.

Though Gutting claims Christianity is not the only way to truth, I don’t see him taking the route I suggested. But that means that he, and others like him, have a lot more work to do than making generalizations about “love” and “my belief,” which exclude nearly all of what religious traditions have historically been about. His argument is not justifiable in the manner he proposed. Rather, it is evasive precisely where it needs to be specific. It takes for granted both the theological propositions and the social conditions required for him to profess such a faith. I don’t think it is necessarily impossible to make a reasoned argument that takes these factors into serious account, but I have yet to see one.

02/27/13

Punishment for Moral Failure

Those irreligious folks over at Patheos asked an interesting question for the latest iteration of their values posts: How/should we punish people for moral failure? Wisely, they limit this to adults in your personal and professional network. In processing the question, I thought of potential moral failures that exist on some sort of governmental level, but this really is a different question, because the problem cannot be located in an individual. Indeed, that is both the benefit and the detriment of bureaucracy. The figures we try to locate as the real problem (the President, the CEO, etc.) rarely bear more than a portion of the blame we wish them to carry.

This question is constructed, though, so that we can assume an individual is to blame for a moral faux pas. And this individual is not someone clearly under our moral authority, such as children would be. Is it ever okay to punish someone (personally) for a moral failure in this situation? I would like to be able to say yes, but I don’t think so.

There are perhaps some constructed situations in which it would be appropriate. If, for example, you have an accountability partner of some sort, where the partnership is based on mutual motivation toward a common goal, then it would be incumbent on you to point out the moral failure of the partner when he or she fails to fulfill the aim in view of which the partnership was formed. A marriage would be a formal example, but most of the informal partnerships I can think of would be constructed around goals or aims that aren’t specifically moral, such as exercising regularly or staying away from particular foods. As a Christian teenager, I’m fairly certain I was involved in groups designed to be accountable for one’s sexual purity. In these types of situations in which the accountability was established beforehand, punishment would presumably take the form of whatever was decided beforehand as well.

The more complex situation, though, is one in which someone in your circle of social influence commits a moral transgression about which they had no explicit contract with you. What, if anything, should you do? Well, the situation is obviously much different if you personally are the victim of that failure or transgression, and it is different if the victim is not competent, so let’s set those contingencies aside for a moment.

There is only one situation I can think of in my personal experience. Many years ago, an acquaintance decided to get married. A relative of the acquaintance who did not approve of the marriage, on ostensibly Christian moral grounds, took it upon him/herself to punish the acquaintance by cutting off contact with him/her and not attending the wedding. I am/was aware of the perceived moral failure (in fact, I think the response was a greater moral failure because of its pride and self-righteousness, but I didn’t punish that one myself), but wasn’t privy to the details of the situation. From my vantage point, however, it seemed that the acquaintance was only temporarily hurt and then moved on while the punisher maintained a strong sense of indignation and self-righteousness that was compounded by the fact that his/her punishment was ineffective.

The evidence is anecdotal, but it tells me that “punishing” a peer for a moral failure is unlikely to be effective if the goal is to chasten the individual’s behavior. If the intent is to distance oneself from a perceived moral impurity, which may be legitimate in certain cases, then the “punishment,” in the form of a withdrawal of relationship, is not primarily punishment but a cessation of association, which may or may not have a chastening effect and should be a point of indifference to the initiator anyway.

If a moral failure is regulated in some other social or legal sphere, such as a physical assault, then your personal punishment is unlikely to be significant in comparison. In addition, if you are aware of a moral failure that is also a legal transgression and decide to punish the person yourself rather than inform legal authority, it is unlikely that you would be sufficiently protected from blame, if the transgression was uncovered, by explaining that you punished the moral failure yourself.

In short, then, we live in a society in which there is some overlap between moral failure and institutional punishment, as there should be. It seems to me that if a moral failure is a legally punishable offense, the institutional punishment takes precedence over your personal punishment (although, as mentioned above, this might be augmented by termination of the relationship with the offender, the aim of which would primarily be to preserve oneself and not “punish” the other). If, as in my example above, the moral failure is not legally punishable, the scope of any punishment is going to be limited and will be of significant cost to the punisher as well. Assuming that the punisher and the offender are peers, I consequently see little ground or benefit for aiming at punishment.

If, as in the case above, both parties are members of a common religious or social institution that regulates such behavior, the punisher could of course remind the offender of the requirements of the institution, if those are clear. However, in the case of divorce, for example, although strong prohibitions are made against it, Christians get divorced at least as much as others, so the practical basis for personal punishment would be slim. The institution can punish as an institution, but I would argue greater moral failures come from individuals attempting to embody the institution and enact its punishment in its place.

Outside of institutional logic or the scenarios constructed above, I cannot see a situation in which it would be safe to assume that the moral failure for which the offender would be punished is understood and shared by the offender. There is no objective reference against which to administer punishment. Common decency is too platitudinous to support personal punishment for moral failure. In my capacity solely as an individual, in relation to peers, who am I to judge?