Pulling Ourselves Up By Our Bootstraps

A group of notable Christian leaders calling themselves the Circle of Protection, with Jim Wallis as their spokesperson, recently sent a letter to Congress asking it to protect the poorest in our nation from being affected by budget cuts. Wallis, a long-time advocate for social justice in political circles, argues that the budget is a moral issue and that Congress needs to be reminded that cuts to vital aid programs affect people personally. I recognize that some political posturing goes along with public statements such as this, but I value the efforts of Wallis and those in this group for reminding Christians and others that “a nation will be judged by how it treats the poorest and most vulnerable.” I’d argue that we’re better off if we judge each other on this principle rather than leaving the judging to God, but the statement is valuable in trying to craft the kind of religion Wallis and some others want to see. He has been a social justice advocate for years, and I’ll give his group the benefit of the doubt that they are not only making a public statement but are engaged in their own communities as well.

In another article published recently, Amia Srinivasan exposes the “sin” of being dependent on the state. There is a connection between apathy toward continued funding for aid programs and our stigmatization of their use. I’ve been more aware of the prevailing negative attitudes toward the “poor” in Idaho than I was in California. I’ve heard comments at the gym, located downtown near a shelter, about how the people milling around should stop smoking cigarettes and “go get a job,” as if it were simply a matter of making the decision. (If that were the case, I wouldn’t be an underemployed PhD.) One might think that we stigmatize receiving state support just to be mean, or because we feel guilty, or because we’re scared that if we don’t make people feel terrible for relying on it, they’ll just stay on it forever. But there’s more to it than that.

Srinivasan’s article points out that we stigmatize only particular kinds of state support, and don’t recognize the state’s contribution to the wealthy (and the rest of us) in maintaining our socio-economic status. Thus, much of the article is devoted to poking holes in the myths that support our inaccurate judgments of state support. Many people who rely on government aid, for example, already have full-time jobs that don’t pay enough to make ends meet, and we subsidize those people in minimum-wage jobs so that we can have cheap t-shirts from Wal-Mart. Wealthier folks, on the other hand, are protected by favorable tax laws (and have the means to take advantage of them), and have the ability to grease the wheels of the legislature to keep those laws in their favor.

Much of this has been heard before, and reinforcing the polarization between rich and poor only goes so far. It is already joining the ranks of such played-out dichotomies as Democrat and Republican, pro-life and pro-choice, etc. The more original question the article asks is why Americans stigmatize only certain types of state dependence and not others. Srinivasan suggests a couple of possibilities. First, the state relies on the wealthy just as the wealthy rely on the state. We’ve been told at least since the Reagan era that big corporations and their leaders are what drive America’s economy. America and the wealthy are codependent, while the poor just drag the state down (and work at the companies with huge profits that don’t “trickle down”). Second, we have the myth of the self-made man or woman, which ignores the fact that many of today’s wealthy inherit their wealth. Indeed, even if they “worked hard,” they began with social advantages the poor could not dream of. While self-reliance and rags-to-riches stories are embedded in American culture, they are largely myths. In their current form, they merely reinforce class divisions.

What struck me most in reading the article, though, is the idea that the biggest obstacle to destigmatizing the poverty we see around us every day may be pride: the indoctrinated belief of the aspiring middle class that we, too, will be wealthy someday if we try hard enough. Thus we blame the poor for their poverty, giving them a minimal amount of support so that we can maintain the outline of a dream that will not become reality for the vast majority of us, while those for whom wealth is a reality benefit from cuts to the bottom half.

This, I would argue, is one of the greatest hypocrisies in American Christianity, which extends strong ideological support to the poor while attempting to limit state support for them. I had a conversation a few years ago with a person who fits the self-made profile well, having come from humble beginnings to successful entrepreneurship. This person argued, as a Christian, that the government shouldn’t be able to force people to do things, like help support the impoverished, that they would otherwise do on their own. Many share this sentiment, yet I imagine they wouldn’t want the same principle applied to infrastructure or small business tax breaks. Stigmatizing government support of the poor allows those even just above the poverty line to look down on those just below it for “mooching” from the system. But it’s not because they cheated the system and we worked hard for what we have. It’s because we know that we are no different. We have been able to take even more advantage of the system, and it is only circumstance that put us where we are today. The line separating “us” from “them” is ever so thin.

I agree that taxation of the wealthy will not solve the country’s economic imbalances. Part of the reason it is so controversial, however, is that it pokes a hole in the myth that we deserve what we have and we earned it all ourselves, a myth so vital to our nation that many of us will continue to support it without ever seeing any benefit from it.


Punishment for Moral Failure

Those irreligious folks over at Patheos asked an interesting question for the latest iteration of their values posts: How/should we punish people for moral failure? Wisely, they limit this to adults in one’s personal and professional network. In processing the question, I thought of potential moral failures that exist at some governmental level, but that really is a different question, because the problem cannot be located in an individual. Indeed, that is both the benefit and the detriment of bureaucracy. The figures we try to identify as the real problem (the President, the CEO, etc.) rarely bear more than a portion of the blame we wish them to carry.

This question is constructed, though, so that we can assume an individual is to blame for a moral faux pas. And this individual is not someone clearly under our moral authority, such as children would be. Is it ever okay to punish someone (personally) for a moral failure in this situation? I would like to be able to say yes, but I don’t think so.

There are perhaps some constructed situations in which it would be appropriate. If, for example, you have an accountability partner of some sort, where the partnership is based on mutual motivation toward a common goal, then it would be incumbent on you to point out the moral failure of the partner when he or she fails to fulfill the aim for which the partnership was formed. A marriage would be a formal example, but most of the informal partnerships I can think of are constructed around goals or aims that aren’t specifically moral, such as exercising regularly or staying away from particular foods. As a Christian teenager, I’m fairly certain I was involved in groups designed to hold members accountable for their sexual purity. In situations like these, in which the accountability was established beforehand, punishment would presumably take whatever form was decided beforehand as well.

The more complex situation, though, is one in which someone in your circle of social influence commits a moral transgression about which they had no explicit contract with you. What, if anything, should you do? Well, the situation is obviously much different if you personally are the victim of that failure or transgression, and it is different if the victim is not competent, so let’s set those contingencies aside for a moment.

There is only one situation I can think of in my personal experience. Many years ago, an acquaintance decided to get married. A relative of the acquaintance who did not approve of the marriage, on ostensibly Christian moral grounds, took it upon him/herself to punish the acquaintance by cutting off contact with him/her and not attending the wedding. I am/was aware of the perceived moral failure (in fact, I think the response was a greater moral failure because of its pride and self-righteousness, but I didn’t punish that one myself), but wasn’t privy to the details of the situation. From my vantage point, however, it seemed that the acquaintance was only temporarily hurt and then moved on while the punisher maintained a strong sense of indignation and self-righteousness that was compounded by the fact that his/her punishment was ineffective.

The evidence is anecdotal, but it tells me that “punishing” a peer for a moral failure is unlikely to be effective if the goal is to correct the individual’s behavior. If the intent is instead to distance oneself from a perceived moral impurity, which may be legitimate in certain cases, then the “punishment,” in the form of a withdrawal of relationship, is not primarily punishment at all but a cessation of association. It may or may not have a chastening effect, and that effect should be a point of indifference to the initiator anyway.

If a moral failure is regulated in some other social or legal sphere, as a physical assault is, then your personal punishment is unlikely to be significant in comparison. In addition, if you are aware of a moral failure that is also a legal transgression and decide to punish the person yourself rather than inform the legal authorities, you are unlikely to be protected from blame, should the transgression be uncovered, by explaining that you punished the moral failure yourself.

In short, then, we live in a society in which there is some overlap between moral failure and institutional punishment, as there should be. It seems to me that if a moral failure is a legally punishable offense, the institutional punishment takes precedence over your personal punishment (although, as mentioned above, this might be augmented by termination of the relationship with the offender, the aim of which would primarily be to preserve oneself and not “punish” the other). If, as in my example above, the moral failure is not legally punishable, the scope of any punishment is going to be limited and will be of significant cost to the punisher as well. Assuming that the punisher and the offender are peers, I consequently see little ground or benefit for aiming at punishment.

If, as in the case above, both parties are members of a common religious or social institution that regulates such behavior, the punisher could of course remind the offender of the requirements of the institution, if those are clear. However, in the case of divorce, for example, although strong prohibitions are made against it, Christians get divorced at least as often as others, so the practical basis for personal punishment would be slim. The institution can punish as an institution, but I would argue that greater moral failures come from individuals attempting to embody the institution and enact its punishment in its place.

Outside of institutional logic or the scenarios constructed above, I cannot see a situation in which it would be safe to assume that the moral failure for which the offender would be punished is understood and shared by the offender. There is no objective reference against which to administer punishment. Common decency is too platitudinous to support personal punishment for moral failure. In my capacity solely as an individual, in relation to peers, who am I to judge?


Eusociality, Multilevel Selection, and my Smartphone

Harvard Emeritus Professor E. O. Wilson published an interesting opinion piece in the New York Times over the weekend, entitled “The Riddle of the Human Species.” (I subsequently found/remembered another piece written last year, conveying similar information with more religious language.) Wilson is one of the few scientists I like to read because he writes accessibly and is conversant with the other side of the aisle, i.e., the humanities.

The article opens with the idea that the humanities (history, philosophy, art, religion, etc.) cannot give us the full picture of humanity and that science must contribute to this endeavor. I appreciate this approach because science is often presented, by both sides, as an exclusive arbiter of truth, one that cannot or doesn’t know how to share. It can contribute a valuable piece of the puzzle, Wilson says, in helping to determine why we are the way we are.

He continues, “A majority of people prefer to interpret history as the unfolding of a supernatural design, to whose author we owe obedience. But that comforting interpretation has grown less supportable as knowledge of the real world has expanded.” There’s a lot to comment on in this one sentence alone. First, it is in one sense astonishing that the majority of people on earth “prefer” a supernatural explanation for the way things are to a non-supernatural one, scientific or not. I’ve never thought about it quite this way before, but perhaps one reason is that supernatural explanations are great equalizers: on their surface, they require no specialized knowledge. On the one hand, you have a complex explanation of the evolution of the human species as in part a result of eusocial behavior and multilevel selection; on the other, “God made the world.” The latter is more immediately accessible.

I might make a comparison with my smartphone. I have very little idea of how it works. If someone asked, I might offer up lame suggestions of electricity and microprocessors, but I don’t know how it all fits together. One could argue that I treat it as supernatural. It just works, and when it doesn’t, I don’t know why and my lack of knowledge makes me extremely frustrated because it should just work. Its lack of functionality exposes my severe lack of understanding. If I knew just a little bit more, I might be able to deal with problems—at least smaller ones—myself, and I would likely be less frustrated or dogmatic about its reliability. But most of the time, I am satisfied to treat it like magic. This is not to say, necessarily, that a detailed knowledge of how electricity works with the components of the phone is equivalent to an objective knowledge of how it works, but it is a more justifiable and reliable understanding than, “It just works.”

Wilson attributes some of the success of humanity to eusociality, “cooperatively rear[ing] the young across multiple generations.” This requires protection, creating a “home base” in which a smaller number harbor and watch over the weakest while others venture forth to forage. This transition, in turn, may have been enabled by a transition to meat-eating, which allowed less work by fewer people for more energy gain.

These elements required alliances and group formation so that some could go out and hunt while others stayed behind. The alliances, in turn, required constant negotiation and inference, staying up to date on the feelings and associations of others and being aware of one’s own. Wilson identifies these group formations as based in part on individual competition and cooperation within groups and in part on the same across groups.

This background provides a lead-up to the last three paragraphs of the article, which are the most interesting to me. Wilson comments that although violence—as a result of competition within and across groups—has been a part of society for as long as we have records, we do not have to conclude that it is an immutable part of our nature. “Instead,” he claims, “they are among the idiosyncratic hereditary traits that define our species.” What’s the difference? Rather than explaining our violence by man’s sinful nature, or its secular equivalent of intrinsically good and bad people, we can locate the reasons for competition within a meaningful explanation and look for alternatives to the kinds of violence we collectively believe cause more harm than good. We are the way we are because we became that way, not because we were made that way.

For Wilson, this biological genealogy means a couple of things. First, as people begin to process the connections between science and the humanities, it will make a substantial difference in the way we understand our history, which will come to include pre-history as well. We may also take better care neither to treat the world as a temporary home that will soon be abandoned, as in traditional Christian theology, nor as an object we can control at will, as in certain earnest scientific communities.

The moral of the story for me is that science doesn’t have to be pitted against the humanities in a life-and-death competition to explain the universe. Both offer necessary avenues toward the fullest explanation of the human species. Religion is an integral part of the development of human understanding as well, but it is gradually losing its explanatory influence. For Wilson, it has no place left; for most, letting go will take more time. Even if its explicit explanatory value disappears from the scene, however, its legacy will live on in its cultural influence for many years.


Less Violent than Ever Before?

Most of us weigh the present more heavily than the past when thinking about change over time, and this distorts our view to a certain extent. Since we are currently experiencing the present and rely on evidence for the past, the former seems to have more depth and texture, as if everything in the past were just a prelude to now. Societal measurements based on this linear narrative of history—common to the monotheistic religious traditions—tend to place the present at either the zenith or the nadir of humanity. This presents multiple problems when looking at the past. Scholars have often been guilty of presentism, judging the past by the norms and values of the present, which appear so obvious to us that we assume they must have been apparent to past generations as well. Criticism of pre-modern societies for their acceptance of slavery is but one example.

Violence is an important subject often placed in this historical narrative. Are we becoming less or more violent as a people? In light of our access to information about violence around the world in almost real time, it may seem that violence is increasing. A recent book by Harvard Professor of Psychology Steven Pinker, The Better Angels of Our Nature, argues that violence has actually decreased over several periods of human history, and that we are now at a comparatively low point.

I have not read Pinker’s book yet. To be honest, I might have dismissed it entirely had I not watched an interview with Pinker about the book. It seemed to me that any thesis claiming the present is less violent would be primarily opportunistic, designed to sell books. For example, one could claim that rates of violence have gone down based simply on the dramatic population increase of the past century, but that would have little functional value, and might even justify a more passive stance toward instances of violence in the present. Hearing his discussion of the book changed my view a bit.

In the wake of the recent Newtown massacre, Pinker’s thesis appears even more controversial than before, and the Center for Inquiry recorded a podcast with him to ask questions about the book. It’s worth a watch. Pinker said several things worthy of note in the conversation. The primary one, of course, is his thesis that we are living in a less violent time than the media or common belief suggests. He noted, as many have, that rates of violence are much higher in the United States than in other first-world countries, and that even if all instances of gun violence were removed, the US would still have a higher violence rate. Guns aside, we would still beat and club each other to death more than people in most other first-world countries do. In other words, gun control isn’t the only answer to the problem of violence. He went further and suggested that violence is more of a problem in the southern and western portions of the US, and connected that to their relatively recent frontier history. (This followed up on a comment that I think deserves further exploration; namely, that we are really two countries: the old Northeast, and the “new” South and West.)

Pinker also suggested that social media may help contribute to a decrease in violence, just as the popularization of the printing press and the decreased cost of printing led to a greater dissemination of information, which increased knowledge, expanded social spheres, and may have helped decrease acts of violence. He also made some interesting comments about violence against women that I may discuss in a future post.

What I liked most in the interview, though, was Pinker’s insistence on the media’s preference for particular types of violence, namely mass shootings. Looked at from a disinterested scholarly viewpoint (which only someone unconnected to the events can manage), the number killed in the Newtown shooting was quite small; one and a half times that number are killed in the US every day. Those isolated cases are not treated with the same importance. Pinker noted that even the largest single terrorist attack in history, 9/11, killed around 2,800, while around 16,000 are killed in the US each year.

The point is not just raw numbers, but a realization of our skewed criteria for recognizing and privileging violence, which is not just a media problem. What the book may suggest, then, is a change in the way we prioritize types of violence, zooming in on some instances with hyper-focus while virtually ignoring others. The interview didn’t talk much about solutions, but alluded briefly to the idea that media coverage could be more closely aligned with a broader take on violence. The problem is not easy to solve, but intense focus on cases of mass violence may do more to entrench our beliefs about violence than to push us to make change. It is not that these cases are unimportant, but that this particular type of violence should be placed in its position within the much broader range of violence that takes place daily in our country and others.


Yoga is Worship of the Sun God

Religion Dispatches posted another iteration of a question I have brought up in introductory religion classes many times before. A group of parents is thinking of bringing suit against a school district because of a yoga class, which they allege is intrinsically religious. They claim that the poses represent a form of worship to the sun god, while the school district argues that it is a form of exercise, stretching, and focus for the children. The article talks more about the legal implications of the case, but I’d like to think about the theoretical question. Can an action or a symbol be intrinsically religious?

We could take the cross as an example. If you encounter someone wearing a necklace with that symbol on it, you are perhaps more likely to consider it a reference to Christianity than a lower-case “t.” We are culturally conditioned, religious or not, to understand it as a religious symbol, and its presence can prompt a number of positive or negative reactions. However, in different contexts the symbol can obviously be something different: a letter of the alphabet, a plus sign, the representation of a traffic intersection, etc. Historically speaking, the stylized cross by which we represent Christianity is an execution tool. It would be like wearing a miniaturized hangman’s noose or a small electric chair around our necks. The cross was mystified and theologized by Paul and others, becoming the nexus of understanding for resurrection, divinity, and eternal life in the Christian tradition.

Yoga, and Eastern religion in general, has been debated in the West since its popularization, which exploded in the latter half of the twentieth century. Yoga is now widespread in many different forms, many or most of which are not explicitly religious. (I tried out a week of Bikram yoga a few years ago. It was only superficially religious, mostly at the beginning and end of the sessions. On the other hand, I have never sweat so much in my entire life. You sweat through all your sweat and then it’s just pure water coming out of you. Like a miracle. Wait a second…) In the last few years, some Hindu groups have advocated programs to take back yoga, to reassociate it with its Hindu roots and use it as a positive tool to promote Hinduism. Others, however, have criticized this approach, claiming that Hinduism has no exclusive claim over yoga, or that yoga was not originally Hindu anyway.

When I bring this up in the classroom, nearly all of my students agree that there is nothing intrinsically Hindu about yoga, while acknowledging this may be due in part to their having come to know yoga from “Yoga Booty Ballet” or face yoga for fuller lips rather than as a spiritual practice. But is there a line that can be drawn as to when a symbol becomes intrinsically religious?

There are several avenues we could explore. From a semiotic standpoint (semiotics being the study of signs and symbols), the only way a signifier can become equivalent to what is signified (e.g., the cup becoming the blood of Christ) is through supernatural means. This was attested frequently in early Christianity with the relics of martyrs, for example: a fragment of bone or a bloody cloth became and contained the power of the martyr and could perform miracles. From a non-religious standpoint, however, a symbol is always excessive, meaning it can point to a number of possible signifieds, or meanings, although not all are equally likely in every case.

So much for symbols. What about actions performed by the body? The principle that governs the decision seems much the same. The parents planning to bring suit against the school district for providing yoga “characterized the ‘Sun Salutation,’ a basic series of yoga poses in which the student stretches his or her hands to the ceiling, as ‘a movement sequence that worships the sun god Surya,’ and claimed that ‘yoga, including its physical practice, is very religious indeed.’” So the human body forms itself into a variety of poses that can become symbols, in this case of reference to the sun god. The parents in question are claiming an equivalence between the poses the children perform and the worship of Hindu deities.

If this is true, what of those who perform yoga for health reasons with no pretense of religious activity? Are they being misled? These questions get more contentious (rightly so) when dealing with children. The parents seem to be suggesting that the school children are unwittingly worshipping Hindu deities. Author Katherine Stewart is correct that the protest has much to do with “other” religious traditions being practiced in schools, and not the dominant Christian one, as there are after-school activities such as the Good News Club that have not as of yet generated similar lawsuits in the school district. There have certainly been times when courts have decided that certain activities, deemed religious, cannot be practiced in schools. Although I’m not an expert in that area, it’s likely that most of those cases involved the additional use of language to circumscribe or explain certain types of action, such as the words of prayer to an explicitly Christian God that would accompany a bowing of the head and folding of the hands.

The parents protesting yoga are able to say it is intrinsically religious because they have imbued it with a semiotic equivalence. They have, just as a sincere worshipper does, given it that power. However, due in part to its popularization, in the current cultural climate most participate in the activity without positing any such equivalence with worship. As the equivalence is not a majority belief or practice, it seems there is little theoretical ground for a lawsuit. Whether it will succeed in the courts is another matter…