David Eller On Morality and Religion
Once again cultural anthropologist Dr. David Eller has granted us access to a large amount of text from his excellent book, Atheism Advanced: Further Thoughts of a Freethinker, pp. 365-390. If you want to learn about morality, this is very good, as is the whole of chapter 10, "Of Myths and Morals: Religion, Stories, and the Practice of Living."
On Morality and Religion by David Eller.
There is no doubt much more stress in Western/Christian
cultures on morality than on myth. Again,
Christians would insist that they do not have “myth” but that they definitely
have morality, or even that their religion is
morality above all else. Atheists, often
taking their lead from Christianity and literally “speaking Christian,” tend to
allow themselves to be swept along with Christian thinking on this
subject. Atheists do not much trouble
ourselves with myths (for us, all myths are false by definition, since myths
refer to supernatural/religious beings and we reject the very notion of such
being). But we trouble ourselves very
much with morality, down to trying to prove that we “have morality too” or that
we can “be good without god(s).”
Given the amount of time and energy that Christians and atheists alike—and not just them but philosophers, politicians, lawyers, and social scientists—have devoted to the problem of morality, it is remarkable that so little progress has been made. As the famous early 20th-century moral philosopher G. E. Moore wrote almost one hundred years ago, morality or ethics “is a subject about which there has been and still is an immense amount of difference of opinion…. Actions which some philosophers hold to be generally wrong, others hold to be generally right, and occurrences which some hold to be evils, others hold to be goods” (1963: 7). Surely any topic that has resisted progress and agreement for so long must be being approached in the wrong way.
In this section, we are not going to try to adjudicate all of the various and diverse theories of morality. That is a fool’s errand, and the failure to settle the question over the past 2,400 years probably indicates that it is a question that is not going to be settled, that cannot be settled. Nor are we going to defend any particular morality. Rather, we are going to interrogate the very concept of morality and its relation to religion. We will find that there is a core of misunderstanding about morality—roughly the same core as in the misunderstanding of myth, ritual, belief (see Chapter 11), and religion as a whole. But we will also find some guidance from our earlier discovery that religion and all its elements are more about doing than knowing, and this will be particularly relevant since morality is fundamentally about human action, not human knowledge. This will ultimately bring us back to the question of myth and morality and finally to the lesson for atheism about both.
What is Morality?
One of the basic reasons why humans have made so little headway on the matter of morality is that hardly anyone attempts, and no one as far as I have determined has ever succeeded, to define what they mean by "morality." Scholars and theorists of morality either seem to jump directly to their discussions without explaining exactly what they are talking about—presumably on the assumption that we all already know—or give some unhelpful definition such as the painfully circular one I find in my Webster's dictionary: "a moral discourse, statement, or lesson; a doctrine or system of moral conduct; particular moral principles or rules of conduct." Turning to philosophers for assistance, we find the Stanford Encyclopedia of Philosophy offering this handy meaning: "a code of conduct put forward by a society or some other group, such as a religion, or accepted by an individual for her own behavior." The Dictionary of Philosophy and the Internet Encyclopedia of Philosophy do not even contain entries for it, deferring to "morals" or "moral philosophy."
We all—at least “we all” in the Western/Christian
tradition—do have a thumbnail sense of what we mean by morality, perhaps
encapsulated by Michael Shermer’s definition of “right and wrong thoughts and
behaviors in the context of the rules of a social group” (2004: 7). But this and the other above definitions do
not get us anywhere, since they merely substitute one unknown (“morality”) with
another unknown (“right and wrong thoughts and behaviors”) or by the very same
term (“morality is about morals”). Let
us at the outset clarify one bit of language: let us not use the words “right”
and “wrong” in the context of morality at all.
These are words that belong to the context of propositions or
fact-claims, where “right” means “correct,” and “wrong” means “incorrect.” Thus, it is “right” to assert “The earth is
round” or “Two plus two equals four,” and it is “wrong” to assert “The earth is
flat” or “Two plus two equals five.”
Only a proposition can be right or wrong; a single word or concept
cannot be. It makes no sense to say
“Polygamy is wrong” or “Abortion is right”; it would be equivalent to asserting
that polygamy is false or that abortion is true, which is meaningless. The proper distinction for moral discourse is
“good versus bad” or “proper versus improper” or “moral versus immoral.”
As we encounter it, “morality” is a singularly vague and
profitless concept. It does not tell us what is moral, since different
societies, religions, and other groups seem to disagree profoundly and loudly
about it. It does not even tell us what
a “moral concern” is.
So, “morality” as a concept does not convey much meaning. Worse yet, at the same time that it does not
mean enough, it threatens to mean too much.
What I am suggesting is that there are many behavioral concerns or norms
or rules for humans that would not fall within the range of someone’s (or maybe
anyone’s) “morality.”
Having found no “essence” to “morality,” let us
acknowledge that, like language and religion before, there is probably no such
“thing” as “morality.” Rather—just as
there is no such thing as “language” but only “languages” and no such thing as
“religion” but only “religions”—so there are “moralities” but no “morality.” As
Nietzsche rightly put it in Beyond Good and Evil (section 18):
Strange as it may sound, in
all “science of morals” hitherto the problem of morality itself has been lacking: the suspicion was lacking that
there was anything problematic here.
What philosophers called “the rational ground of morality” and sought to
furnish us was, viewed in the proper light, only a scholarly form of faith in the prevailing morality, a new
way of expressing it, and thus itself
a fact within a certain morality, indeed even in the last resort a kind of
denial that this morality ought to be
conceived as a problem—in any event the opposite of a testing, analysis,
doubting, and vivisection of this faith….
[I]t is precisely because they were ill informed and not even very
inquisitive about other peoples, ages, and former times, that they did not so
much as catch sight of the real problem of morality—for these come into view
only if we compare many moralities.
In
other words, when people—including professional philosophers—have talked about
“morality,” they have generally and un-self-consciously meant “their morality”
(usually Christian morality) as if it
were the only one on earth or the only one of any interest. The other “moralities” have either escaped
their grasp completely or have simply been filed away under “false morality” in
the same way that religions have dismissed other religions as “false religion.”
It might be more useful, therefore,
to talk about “moral system” than “morality.”
There are many moral systems, which collectively we can refer to as
“morality” (just as there are many religious systems which collectively we can
refer to as “religion”), which vary dramatically from each other but presumably
have some thing(s) in common to belong to the same category. What a future theory of morality will be, then, is not advocacy or exegesis of
any one particular moral system, much less the invention of yet another moral
system, but a description of what actual moral systems have in common and the
range of variation between them, as well as an explanation of what makes moral
systems possible or why humans have such things. We can suggest that there are three dead-ends
down which a future theory of morality should not go. One is an appeal to the “goodness” of acts
and/or actors. Besides being hopelessly
circular (moral = good?), such an approach falls into the familiar Western
habit of reifying an adjective into a noun.
As an adjective, “good” is a judgment made by somebody for some purpose:
candy is “good” to eat but “bad” to build houses out of. And it is “bad” to eat if you are a
diabetic. But to transform the judgment of “good” into the quality of “goodness” is to commit a
tragic fallacy. There is no more such a
“thing” as “goodness” than there is such a “thing” as “tallness” or “coldness.”
A second path to avoid is the issue
of “pleasure” and “pain.” Some moral
theories (which are nothing but disguised moral systems) maintain that morality
is all about maximizing pleasure and minimizing pain. But that is frivolous, first since a moral
system often explicitly interferes
with pleasure (e.g., it may be very pleasurable to have premarital sex or for
homosexuals to have gay sex) and second because a moral theory/system like
utilitarianism mires us in the relative and cumulative pleasures and pains of
actions in nonsensical ways: in other words, if my action causes me “one unit”
of pain but gives two other people “one unit” of pleasure each, is it thereby
moral? The whole approach is off base
and not the way that real people make their “moral” decisions.
The third path to avoid is the question of
“universalizability.” Many philosophers,
from Kant to Nielsen, insist that a moral principle must be a universal or at
least universalizable principle. Kant
was the first to make this urgent: the basis of his moral philosophy was what
he called the “categorical imperative,” that one should act in such a way that
the principle or “maxim” of one’s action could be universalized. So, if I do not kill, I am saying, “It is
always and everywhere bad to kill.”
Nielsen offers an updated version of the central claim: “For an act to
be moral or for an attitude to be moral, it must be universalizable. By this is meant the following. If A is morally right for X, it is similarly
morally right for anyone else in like circumstances” (63). But the qualification “in like circumstances”
is the death of “universal morality” since it is no longer universal, or merely
trivially so: in other words, if I say, “You should never kill, except if you
are a soldier in the line of duty facing enemy combatants,” we are no longer
talking about universals but situationals.
Otherwise the “universal” moral claim is that it is bad to kill unless you are a soldier and unless you are at war and unless your victim is an enemy
combatant, etc. But that is a mockery of
“universalization.” Even worse,
“universalizability” is much easier and therefore much less useful than these moral
theorists think: a Muslim father who kills his daughter for having sex before
marriage (a so-called “honor killing”) would presumably have no trouble
universalizing that action. “It is good
for a father to kill a daughter who has dishonored her family” is not only a
possible but the actual maxim of his action.
Therefore, people can and easily do universalize all kinds of things
that we might find morally reprehensible.
It is no help at all.
Finally, many people, especially religionists, hold that
there is an essential dependence of morality on religion; that is the claim
that gets atheists scrambling to “defend” their morality. However, if morality is a vague and
contradictory term, and if there is no “essence” to morality—and, as we have
seen, if there is no “essence” to religion either—then the morality-needs-religion
or the morality-equals-religion position is exposed as the nonsense that it
is. Like ritual and myth and like belief,
“morality” is neither unique to nor universal to religion. This is what makes attempts like Stephen Jay
Gould’s “non-overlapping magisteria” (1999) so mind-boggling. His basic assumption is that science is about
facts and that religion is about morality; then he admits that there are other
kinds of and foundations for morality than religion, including reason, culture,
philosophy, and nature. So not all
morality is religious, and not all religion is moral.
As any informed person knows, the Judeo-Christian
scriptures do not use the words “morality” or “moral” even once, at least in the King
James translation. Of course, there is a
preoccupation with “correct” behavior, but much of this behavior is “ritual”
rather than “moral.” For instance, the
well-known (and generally ignored by Christians) dietary laws in the early books
do not say that certain foods are “immoral” but rather that they are “unclean”
or “abominable.” Even “sin” is often not
a matter of “immorality” but of “impurity” which one can eliminate with a
blood-sacrifice (although how that works
is never explained) or with the simple passage of time, i.e., impurity “wears
off.” And hardly ever does their god explain why certain acts are forbidden
or demanded; there is little or no “informational” content and a lot of
“imperative” command.
It may prove, on closer inspection, that “morality” is
not only a Christian preoccupation but a relatively late Christian
preoccupation. It is not found in all
religions. In fact, the 19th-century
ethnologist E.
B. Tylor wrote in his classic 1871 Primitive
Culture that the “moral element which among the higher nations forms a most
vital part, is indeed little represented in the religion of the lower races,”
such that “morality” or “ethics” was most often not a central part of religion.
There is empirical evidence to support this claim. Nadel (1954) explicitly states that the Nupe
religion was “altogether silent” on ethical matters, that it offered no
portrayal of the “ideal person,” that it contained no myths of good and evil,
and that it promised no supernatural rewards for good behavior. Similarly, Nuer religion diverges from our
popular expectation of “morality”: Evans-Pritchard finds that “the ethical
content of what the Nuer regard as grave faults may appear highly variable, and
even altogether absent” (1956: 188). The
author continues: “It is difficult also for the European observer to understand
why Nuer regard as grave faults, or even as faults at all, what seem to him
rather trivial actions” (189). No doubt
the feeling would be mutual: the Nuer might find it hard to understand why
American Christians get so exercised about women’s breasts or smoking, etc.,
just as the Christians would be surprised to learn that the Nuer regard as
seriously bad such things as “a man milking his cow and drinking the milk,
or a man eating with persons with whom his kin are at a blood-feud.” It shows again that what is a moral priority
in one culture might not even be a moral issue in another, including the
highest moral priorities in one culture: “Homicide is not forbidden, and Nuer
do not think it wrong to kill a man in a fair fight. On the contrary, a man who slays another in
combat is admired for his courage and skill” (195).
The impenetrability of “morality” in
the modern philosophical and religious analysis seems like sufficient evidence
that we have been barking up the wrong tree.
Morality is not a “thing” at all but rather a category under which many
and various “moral systems” are classified.
There appears to be no “essence” to morality, such that we could say
unequivocally that this action or even this issue is a “moral” one. And morality most assuredly does not begin
and end with religion.
I submit that everyone—theists and
atheists alike—has been asking the wrong question about “morality” until
recently. The questions that have been
asked perennially have included “What is the moral thing to do?” or, as Nielsen
asks in the title of his book, “Why be moral?”
But the first question resists answer because it is simply an appeal to
some particular “moral system”: what is the moral thing to do for a Christian, for instance (and even
on this they cannot agree)?  And the
second question resists answer because it is a nonsense question. As Shermer has aptly expressed it, “asking
‘Why should we be moral?’ is like asking ‘Why should we be hungry?’ or ‘Why
should we be horny?’ For that matter, we
could ask, ‘Why should we be jealous?’ or ‘Why should we fall in love?’” (2004:
57). In other words, we have been
looking for the wrong thing in the wrong place.
The question that we should ask, the
only question that makes sense and is important, is “What is it about humans
that makes them a ‘moral’ species?” Even
this one is not quite right, since we do not even know what “moral” means nor
do we find it in every human group. Let
us reformulate the whole investigation along lines that Nielsen suggests
(without accepting his conclusion). Let
us describe “morality” as a generalization of “moral systems,” each of which is
distinguished by a certain kind of talk, which he and we will call “moral
language”:
Moral language is the
language we use in verbalizing a choice or decision; it is the language we use
in appraising human conduct and in giving advice about courses of action; it is
the language we use in ascribing or excusing responsibility; and finally, it is
the language we use in committing ourselves to a principle of action. Moral language is a practical kind of discourse that is concerned to answer the
questions: “What should be done?” or “What attitude should be taken toward what
has been done, is being done, or will be done?”
Moral language is most particularly concerned with guiding choices as to
what to do when we are faced with alternative courses of action.
As a form of practical
discourse, morality functions to guide
conduct and alter behavior and
attitudes (1989: 39).
Now
the questions that seem important to pose and to answer are “Why are humans
concerned about appraising and advising conduct?” and most profoundly “Why do
humans need to guide and alter their behavior?” and “Why do humans need
‘external,’ principled guidance for
their behavior?” In short, why are we
the kind of beings for whom “moral concerns” are possible and necessary?
Until very recently, there was only
one conceivable answer—the Kant/Lewis answer.
Humans have “free will” to choose between courses of action, this “free
will” installed by a god but constrained (though not very effectively) by
divine rules and divine rewards and punishments. In a word, “morality” was a supernatural phenomenon. But atheists obviously cannot accept this
position, partly because we do not accept god(s) and the supernatural at all
and partly because then it would be true that atheists could not have
“morality.” Fortunately, recent research
has suggested that “morality” has a perfectly natural basis and is a perfectly natural phenomenon.
A great deal of literature has
accumulated over the last couple of decades to support this claim.
The details of the research into the
evolution of morality are too vast and too varied to explore in depth here, and
we do not need to. What we need to
illustrate is that it is possible to
give a natural, evolutionary explanation, so that non-natural, supernatural,
and “creationist” explanations are not necessary or welcome.
These scientists and philosophers
disagree on a variety of issues, but there is a consistent core to their
messages. The core of the core is that
“morality” is not utterly unique to humans but has its historical/evolutionary
antecedents and (therefore) its biological bases. In other words, “morality” does not appear
suddenly out of nowhere in humans but emerges gradually with the emergence of
certain kinds of beings living certain kinds of lives. This is not to assert that animals have
full-blown “morality” any more than they have full-blown language. It is to assert that morality, like language,
is not an all-or-nothing thing but rather the kind of phenomenon that a being
can exhibit more-or-less of until we cross a threshold into a full human
version.
The key to the evolutionary theory
of morality is that particular sorts of beings—especially social beings who live for long periods of time in groups of their
own kind—tend reasonably to develop interests in the behavior of others and
capacities to determine and to influence that behavior. This might start most obviously with
offspring: parents of many species, from birds to apes, show concern for their
offspring, disadvantage themselves for their offspring (for instance, by
spending time feeding them), and even put their own lives at risk for their
offspring (the notorious problem of “altruism”). Other species may show these same behaviors
toward adult members of the “family,” or toward adult members of the larger
social group, or ultimately, in humans, to all members of the species and
perhaps to other species as well. In
this regard, human “morality” is an extension of more “short-range” helping
behaviors.
While Western culture and philosophy
has been focused on competition and selfishness (for interesting reasons beyond
the scope of this book), scientists have discovered lately that nature does not
always operate on the principle of selfishness.
Lynn Margulis’ 1987 Microcosmos:
Four Billion Years of Microbial Evolution was one of the first biology
texts to suggest that cooperation and symbiosis might be equally important
processes in the natural world and may have even formed the first complex
animal cells. We know today that humans
have “good bacteria” in our digestive tract without which we could not survive. Cooperative or helping (that is,
“un-selfish”) and eventually reciprocal behavior now being seen as possible and
valuable, it becomes easier to see how “moral” behavior could evolve.  As de Waal puts it:
Evolution forms animals that
assist each other if by doing so they achieve long-term benefits of greater
value than the benefits derived from going it alone and competing with
others. Unlike cooperation resting on simultaneous
benefits to all parties involved (known as mutualism), reciprocity involves
exchanged acts that, while beneficial to the recipient, are costly to the
performer (2006: 13).
With
such costly but pro-social behaviors, we have taken a long step toward
“morality.” Or, as Shermer puts it,
human morality or the capacity and tendency to have “moral sentiments” or moral
concerns evolved out of the “premoral” feelings and tendencies of pre-human
species. De Waal and other
animal-watchers have gathered an enormous amount of data on pre-human
“morality,” including sharing, signs of “fairness,” gratitude, self-sacrifice,
sympathy and comforting, and many more. Sufficient
data has built up that O’Connell (1995) has catalogued hundreds of reported
cases of “empathy” and “moral” behavior in chimps. And it has been observed in an extraordinary
variety of species, from birds to elephants to primates.
The link between social living and
“morality” or at least “premoral behavior” can be appreciated readily
enough. As de Waal reminds us, social
living depends on social “regularity,” which he characterizes as a “set of
expectations about the way in which oneself (or others) should be treated and
how resources should be divided” (2006: 44).
Individuals without some sense of what to expect from others—and of what
others expect of him or her—would not be properly “social.” And this social regularity entails some
method for handling exceptions and deviations: “Whenever reality deviates from
these expectations to one’s (or the other’s) disadvantage, a negative reaction
ensues, most commonly protest by subordinate individuals and punishment by
dominant individuals” (44-5).
Thus, a certain amount of regularity
and predictability in behavior is a requisite for social co-existence and for
the eventual formation of “morality.”
However, it is only one component.
In fact, in keeping with our analysis of religion itself, let us regard
“morality” not as a single monolithic skill or interest but a composite
phenomenon of multiple skills and interests, all evolved, and many evolved for
other non- or pre-moral purposes. Behavioral
regularity is one building block, but ants and bees achieve this goal without
“morality” or even “learning”; it appears to be instinctual. To reach premoral behavior, and ultimately
human morality, a variety of other pieces must be in place. One of the essential ones is a certain degree
of “intersubjectivity,” the ability to understand (and therefore hopefully
predict) the thoughts and feelings of others; we might call this, at least at a
fairly high level of development, the notion of “agency” that we introduced in
Chapter 3. I am an agent, and the other
members of my family/group/species are agents, with thoughts and feelings
similar to mine. Beyond the mere
awareness of others’ thoughts and feelings is the capacity to share them in some way, what de Waal
calls “emotional contagion.” Borrowing
the term from Hatfield, Cacioppo, and Rapson (1993), de Waal proposes that, as
beings approach “moral” status, they develop the capacity to experience the experiences of
others. Fortunately, some of the most
fascinating recent work has identified a basis for this phenomenon in so-called
“mirror neurons” in the brain.
Discovered in the 1990s in monkey
brains, mirror neurons, as the name suggests, imitate or mimic the activity
of other parts of the brain—or of other brains.
Experiments have shown that “neurons in the same area of the brain were
activated whether the animals were performing a particular movement…or simply
observing another monkey—or a researcher—perform the same action” (May 2006:
3). What is truly remarkable, as the
last words indicate, is that this process operates across species. If this work is correct, and it is very
promising, then it provides a literal biological explanation for empathy:
individuals with mirror neurons, including humans and other primates, can
actually feel what others feel. This
research gives the old saying, “It hurts (or pleases) me more than it does you,”
some new seriousness.
In addition to the above-mentioned
premoral competences, Hauser names a few others. One is the ability to inhibit one’s own
actions. Most species have difficulty
preventing themselves from acting, but humans and some primates can do so,
although not perfectly. Another is
memory, which is crucial for preserving and learning from previous interactions
with the same individuals: can this other individual be trusted, and has he/she
reciprocated before? A third is the
ability to detect and respond to “cheaters” or those who violate expectations. A fourth is “symbolic” thought, ultimately in
the form of language and even quite abstract thought about “rules” and
“principles.” Hauser admits that few if
any animals meet all of these qualifications, but then neither do very young
human children—proving that “morality” must be developmentally achieved by each
human individual. However, many or most
of these talents exist in non-human species, and by the time these talents all
appear together in one species, namely humans, we have a patently un-mysterious
and un-supernatural “moral” sensibility.
The fact that non-humans do not have human morality, de Waal reminds us,
is no reason to discount the natural, pre-human roots of “morality”:
To neglect the common ground
of primates, and to deny the evolutionary roots of human morality, would be
like arriving at the top of a tower to declare that the rest of the building is
irrelevant, that the precious concept of “tower” ought to be reserved for its
summit.
“Morality” is not supernatural; neither is there a “moral
law” that all humans share. Rather,
there are inter-“personal,” social, behavioral regularities and concerns and
the biological bases for them which add up in humans to a “moral” sense. Since there is no “moral law” but rather “moral
habits” or “moral interests” or “moral concerns,” the foundation on which C. S.
Lewis built his elaborate argument for Christianity comes crashing down. No “moral law,” no need for a moral
law-giver, and no need for his god.
“Morality” is little more than what the premoral tendencies and
capacities feel like to an incorrigibly social and painfully self-aware species
like humanity.
In a way, our initial question has been answered. Why do humans have morality? Because all social species have morality-like
traits, and humans simply have more of them.
But there is another dimension to this question. Why do humans need “morality” in ways that other social species seem not to? In other words, what is “morality” doing for
us that something else accomplishes
for them?
It appears that, in the course of evolution, humans have
gained some abilities and lost some abilities.
We have gained self-awareness and language and formal abstract thought,
but we have lost many instincts that, for most species, make these wonderful
qualities unnecessary. Most species are
born knowing more or less “what to do.”
Of course, there are species, including non-primates like lions and
dolphins and birds of prey, that must “learn” some of the critical skills (like
hunting) without which they would not survive; this is why such animals raised
in captivity must be “taught” by humans how
to be wild animals. So, humans are
hardly the only species that needs to learn to be itself. But no
other species needs to learn so much nor needs to learn it so badly.
In the words of American anthropologist Clifford Geertz,
humans are very “incomplete” beings.
They require non-biological, “extra-somatic” resources to complete their
inadequate biological and instinctual birthright. The encompassing term for these resources is culture.
Hence Geertz describes culture as “a set of control mechanisms—plans,
recipes, rules, instructions (what computer engineers call ‘programs’)—for the
governing of behavior” (1973: 44).
Because we are not born with innate control mechanisms, humanity “is
precisely the animal most desperately dependent upon such extragenetic,
outside-the-skin control mechanisms, such cultural programs, for ordering his
behavior.” Culture is how humanity
settles its potentially rich but actually quite indefinite nature into specific
plans of meaning and plans of action.
Even Nielsen agrees that “morality” exists because humans “need some
social mechanism” to guide our behavior, although he still thinks that
“morality” is a matter of “curbing personal desires” rather than informing
those desires in the first place (1989: 70).
Given the relative and quite real inadequacy of human
inborn guidance, it is no surprise that humans have had to invent their own, nor
that humans have invented such a diversity of them; the source of cultural
diversity is multiple solutions to the same general challenges. And humans have had to invent solutions to
many problems other than the so-called “moral” ones. Everything from what foods to eat and how to
prepare them, what clothes (if any) to wear, what name to call things, and of
course what values and rules to adopt in relation to each other are problems to
solve, offering an endless opportunity for creativity and an almost endless
variety of answers. But let us focus on
the interpersonal behavioral area.
There are arguably three relevant facts about humans:
they must behave, they must behave in relation to each other (that is, they
must “interact”), and they must behave within some shared standard or norm or
code of action. As we considered in
Chapter 7, many species handle their interaction problem through
“ritualization,” a more or less instinctive and therefore “universal” (within
the species) set of behaviors that are intended not so much to communicate as
to elicit responses. Humans also
ritualize, although, as with language and “moral” behavior, they do so
relatively more self-consciously than other species. Humans have to invent—and have invented many
times—what John Skorupski (1976) has characterized as an “interaction code”
that will govern their interpersonal behavior, that will tell them what to do
and what to expect others to do, that will create and maintain regularity in
their interactions.
According to Skorupski, the interaction code provides
individuals in social situations with more or less clear and definite (and
often quite clear and definite)
behavioral guidelines: it amounts to directing individuals that such a person
in such a situation should perform such an action. By this process, relations—and not always equal relations, debunking the assertion
that equality or “fairness” or “justice” is central to “morality”—are formed
and perpetuated. As Skorupski explains,
social interaction demands “that people should use the code to establish the relationship which
ought—in accordance with other norms—to hold between them, to maintain it, to
re-establish it if it is thrown out of equilibrium and to terminate it
properly” (83-4). The interaction code thus
specifies how “the people involved in an interaction [depending on] their
relative standing or roles, and their reciprocal commitments and obligations,”
should comport themselves in order to achieve mutual understanding, acceptance,
and ongoing successful interaction (77).
The interaction code of a society takes many forms, from
small and trivial to grand and momentous.
At the low end of the interactional spectrum are little, mundane social
forms and scripts, the minutiae of daily life; some examples would be greetings
(“How are you?” “Fine, thanks, how are you?”), thanks, apologies,
hand-shakings, and such familiar yet meaningful gestures. All competent members of society know how and
when to perform them. At a slightly
higher level, for slightly more formal situations, are matters of
“etiquette.” These include writing
thank-you notes, using the correct fork, wearing the appropriate clothing, and
so on. Certain situations or social
contexts have their own specialized interaction-code requirements, such as the
workplace, the courtroom, the classroom, a wedding or funeral, and many
more. On occasions of greater
seriousness, such as the meeting of two heads of state, a yet more formal level
of interaction like “protocol” appears.
Protocol specifies exactly where individuals should stand, where and
when they should sit, what they should say to each other—potentially every detail of
their interaction. The point, obviously,
of protocol is to minimize the possibility of misunderstanding (and perhaps
war) by controlling and predetermining as much of the interaction as
possible. “Religious ritual” is a
particular version of protocol for religious occasions and purposes, in which
the behavior is more or less completely specified; some religions, like ancient
Hinduism, codify ritual behavior to an extreme extent, specifying each hand
gesture and spoken word. When religious
ritual becomes totally “frozen” and unchangeable, we might speak of “liturgy.”
Somewhere within this continuum of interaction-code
behavior is what we commonly think of as “morality” or “moral behavior.” It is not necessarily at the “high end” and
it is not necessarily restricted to only one position on the continuum; it may
be “spread” throughout the possible range of human actions. Whatever else “moral behaviors” share with
each other (and there seems to be very little), members feel that they are
“important” and “must be done” lest the person be perceived as a “bad” person
or as one who wants to see the society shocked, offended, or damaged.
Comprehending “morality” as only one aspect of a much
more inclusive interaction-concerned system of behavior solves many problems
for us. First, it illustrates why some
kind of interest in “correct” behavior (whether or not that interest is “moral”
in the familiar Western-Christian sense) is universal among humans. As Nielsen put it earlier, humans face
multiple behavioral options and, as social beings, are constantly “advising”
and “appraising” their own and each other’s behavior: why did that other person
do that, and what should I do now? The
interaction code provides ready-made, prefabricated solutions to most important
social dilemmas, and it provides a way to function effectively in the
group. If the individual performs the
proper interaction-code behavior, others will know that he or she understands
his or her situation and will be understood in turn. Even more, since human social interactions
are very frequently hierarchical, performing your expected interaction-code
behavior indicates that you accept
your place in the system: for example, if a Japanese inferior bows deeply to
his superior, or if a peasant prostrates himself before the king, then both
have submitted themselves to the authority of their better. Or when my students call me Dr. Eller and I
call them by their first name, we have executed our unequally-ranked
relationship.
Second, as we suggested, some parts of the interaction
code are more formal and elaborate than others: the clothing you wear to a
friend’s party is less rigidly determined than what you would wear to a
wedding, which is less rigidly determined than what you would wear to a dinner
at the White House, and so on. The
specificity and restrictiveness of the behavior is a sign of the gravity of the
situation and the inequality of the participants. Part of this formality and elaboration is ensuring
that the interaction is done correctly, but another part is self-referential,
that is, a way “of marking out, emphasizing, underlining the fact of code
behavior” (87). In other words, in
insignificant social situations, one can “improvise” more freely, but in very
serious ones, it is better and safer to do what everyone else does and what has
always been done before. Also, the more
formal and elaborate the behavior, the more apparent it is that something
“special” is going on—that this is not everyday, voluntary behavior but an
enactment of a code that we have all bought into.
This, further, explains why
religious rituals are decidedly repetitive, formal, and compulsory. All interactions are social, that is, performed
with or to agents who advise and appraise us and who, hopefully, respond to
us. Religious rituals are interactions
too, only with non-human and super-human
agents whose advice, appraisal, and response are singularly important to
us. As Skorupski asserts, “to a large
extent religious rites are social
interactions with authoritative or powerful beings within the actor’s social
field, and…their special characteristics are in large part due to the special
characteristics these beings are thought to have” (165). Since the “stakes” of the interaction are
often great—we are hoping to receive some major benefit from it, such as health
or long life or good fortune or something—we are particularly concerned to “do
it right.” Also, since the inequality
between the participants is so great—teachers are higher than students, and
kings are higher than peasants, but gods are higher than anyone—the pressures
to perform the interaction precisely in conformity with the interaction code
are maximized. My students may praise me
for my wonderful teaching, but when one talks to one’s ancestor spirits or
god(s) or whatever, one may really pour on the superlatives.
Finally, it is worth noting that the interaction code,
because it is so ubiquitous and so essential to human social life, is often
invisible or opaque to participants and, even when it is perceived, not much understood. In fact, humans can often perform the
prescribed actions with little comprehension of what they are doing or
why. People who choose the correct fork
at a formal dinner need not know the history of forks or of formal dinners nor
why this particular fork is good to use.
It might actually be detrimental if individuals had to reason out which
fork, which suit, which words to use in every situation. Rather, performing the interaction code is
more like mastering a skill than learning a body of knowledge. We do not “know” the code; we “do” the code. It is about acting rather than thinking,
which is one reason why it is often disparaged as “empty ritual.” But as we have seen, and as it is easy and
important to see, action and ritual are not empty. They are full, if not of “meaning,” then of consequence: if one of my students walks
up to me and shouts, “Give me your book!” I know precisely what he or she
means, but he or she is unlikely to have that wish fulfilled.