Before Virtue: Ethics as Evolutionary Expertise
Introduction
Traditional cognitive accounts of moral behavior, which draw from rational models of decision processes (Hastie & Dawes, 2001), emphasize conscious reasoning using ethical principles as a basis for moral action (Beauchamp & Childress, 2001; Kohlberg, 1981). More recently, these principle-based approaches have come under criticism from two very different sources.
First, evolutionary accounts of moral behavior suggest that how we respond to morally relevant stimuli is largely ‘shaped up’ or biased by the forces of natural selection (de Waal, 1996; de Waal, 2006; Hauser, 2006; Tremlin, 2006; Wright, 1994). Second, advocates of virtue ethics, including philosophers, educators, and psychologists (Peterson & Seligman, 2004; Seligman, 2002), suggest that moral activity is best understood in light of the “historically formed character of identifiable persons” (Jordan & Meara, 1990), with emphasis on community-relevant individual qualities, traits, or mature habits (MacIntyre, 1984).
This paper examines an unexpected convergence between these latter two models, and explores the implications of this convergence both philosophically (with reference to arguments about free will and conscious choice) and practically (with reference to how one chooses the good).
In outline, the paper argues that virtue ethics can best be understood (psychologically, cognitively, and neurologically) as a form of expertise (Jordan & Meara, 1990; Klein, 1998). Here the paper explores in depth the prerequisites for moral choice, the biasing effects of practice on choice, and what, from this perspective, it means to choose the good. Then, virtue (expertise) as a cognitive process is assessed relative to other cognitive processes through the lens of natural selection. Such accounts are widely covered in recent work and will be reviewed only briefly here (Krebs, 1998; Tremlin, 2006; Wright, 1994).
Finally, the claim is made that taken together, evolutionary and expertise accounts (properly understood) not only enable a better formulation of the relation between ethical concepts and behavior but also, when this new understanding is put into practice, better enable people in particular contexts to make what we think of as morally relevant conscious choices. That is, evolutionary and expertise oriented approaches fit the data better together than either does alone, and are together more compelling than alternate, competing models. Together, these models provide a more comprehensive understanding of what it means to be human agents.
Expertise
In 1992 Dreyfus & Dreyfus offered a phenomenology of ethical experience that focused attention on ethical behavior rather than reasoning or judgment. In this model they emphasize the development of skills, and equate moral development with other kinds of skill development, leading to expertise. In some detail they describe not the development of context-free universal principles that guide judgment (as might Mandelbaum or Kohlberg or Piaget), but rather the development of more and more refined intuitive assessments that guide behaviors. They thus propose a developmental model of moral behavior focused on skill acquisition, or what they call “skillful coping.”
In this model, they describe five stages of moral development that parallel the development of other types of skill acquisition, such as the ability to play chess or drive a car. In the first stage, the novice relies on a very precise set of rules that help compensate for a lack of experience. For example, when my daughter, Lindsay, took driver’s training to obtain her driver’s license, she was instructed to repeat the acronym SMILE after getting into the car. This stood for Seat, Mirrors, Ignition, Lights, and Emergency brake. We would get into the car, and I would watch her head tilt and hands move toward each of these instruments as she subvocally repeated each letter. She was methodical and deliberate. She would then say, “Okay, I’m ready.” If any of these needed attention, she would correct the problem and then repeat the whole procedure from the beginning.
Likewise, we see pre-school teachers instructing young children on explicit rules of moral behavior. Keep your hands to yourself. Do not hit. Do not lie. Share toys. Even professionals engage in this type of rule-based guidance. As a clinical psychologist I am required to take a course in ethics every year, and in these courses we often rehearse basic rules: get a release before sharing legally protected information; report child abuse; do not have sex with your client. We rehearse these rules often; notice their simple, explicit, imperative structure. Do this; don’t do that. As a novice in any domain, and certainly in the domain of moral behavior, we rely on these types of rules to guide us. As we become more experienced, the rules become second nature, more intuitive, and in some cases more flexible.
Sometimes professionals also attempt to identify and learn the principles that stand behind rules or standards. Rules about confidentiality relate to principles of fidelity and integrity, for example. However, it is important to notice that the process does not and cannot work the other way around: knowing the general principles (fidelity, for example), we could not deduce the particular standards by which a profession or community operates, except in the most general sense. Some professions translate fidelity into confidentiality, while others translate it into information sharing. Rules and standards arise out of the historical experience of a profession or a community and only then can they be associated with moral principles. Understanding how and when to use confidentiality is a moral as well as professional skill, learned only through training and experience.
The advanced beginner, in stage two, draws on a series of experiences that may lead to the modification of certain rules, or at least a bit of flexibility in their application. The advanced beginner becomes less focused on individual pieces of a behavioral sequence, and begins to see the way discrete behaviors fit together. The rule against breaching confidentiality becomes more nuanced. The person at this stage may notice patterns or structures that they did not recognize earlier. Sometimes white lies, such as “yes, I love that outfit,” may not be explicitly true, but may tell a larger truth, such as “I love you no matter what you wear.” Exceptions are thus identified: do not break confidentiality except in the case of harm or danger to the client or to others. Even so, it takes yet more practice and skill to identify what counts as an exception.
Dreyfus and Dreyfus call stage 3 the stage of competence. Here the person develops a hierarchical perspective on decision-making to help manage a growing body of relevant information. This process involves “detached planning, conscious assessment of elements that are salient with respect to the plan, and an analytical rule-guided choice of action” (p. 115). A teenage client is engaged in risky behaviors: do I violate our trust and inform her parents, or do I maintain confidentiality? A more vulnerable sense of perspective taking is involved (I can take the teenage client’s perspective, but also that of the parents); as a moral actor I feel more vulnerable because the rules of the game (protect confidentiality) do not describe which perspective is most appropriate or salient.
Stage four, proficiency, involves less of a detached or analytical perspective, and more of an emotionally laden holistic perspective. By this point a person has had many experiences of success and failure, is highly invested in successful outcomes, and tends to trust his or her automatic reactions, though these are still rather deliberate and involve some second-guessing. There remains a conscious decision process for the proficient actor.
The expert, at stage five, however, with years of experience involving a variety of different situations, actions and outcomes, “knows by feel and familiarity when an action … is required,” and “knows how to perform the action without calculating and comparing alternatives” (p. 116). Dreyfus and Dreyfus write: “beginners make judgments using strict rules and features, but that with talent and a great deal of involved experience the beginner develops into an expert who sees intuitively what to do without applying rules and making judgments at all” (p. 117). Thus “the budding ethical expert would learn at least some of the ethics of his community by following strict rules, would then go on to apply contextualized maxims, and, in the highest stage, would leave rules and principles behind and develop more and more refined spontaneous ethical responses” (p. 118).
Thus, what Dreyfus and Dreyfus call ethical expertise is something akin to what Alasdair MacIntyre calls virtue. MacIntyre suggests that virtues are “acquired human qualities” or dispositions that we “exercise” in order to achieve certain human goods or ends (MacIntyre, 1984). He emphasizes the importance of communal traditions that provide a context for the development and sustenance of certain practices, the exercise of which contributes to the good of a human life. In this sense, and in this context, virtues are best understood as a particular form of expertise.
When properly formed, the practices that undergird virtues in a particular domain come to be experienced much like intuition. We know, or have a sense of, what to do such that what we do is consistent with who we understand ourselves to be. That is, good intuition in a given situation is not the exercise of magic or the positing of unexplainable inspiration; rather, good intuition is the result of years of experience in a given domain of life that draws on a vast store of organized information about that domain. The term intuition may be used to denote that aspect of expertise that appears to occur almost instantaneously at a preconscious level (Klein, 1998). In fact, however, as an aspect of expertise, good intuition is possible only because of this elaborate knowledge base on which it draws.
This proposal clearly places Dreyfus and Dreyfus (1992) over against two dominant traditions, one in philosophy and one in psychology. What they reject, and what is common to these traditions, is an emphasis on conscious deliberation as the root of moral judgments. That is, much of modern moral philosophy and cognitive psychology claim that ethics is centrally about the reasons we give for the decisions we make; it is about having good reasons for what one does rather than about the doing.
This formulation should sound familiar: it dominates how we think about ethics today, and directly influences ethical training in many professions. I was at a conference recently on professional ethics in psychology in which ethics was defined as “thinking about reasons in terms of values.” Of course, central to any discussion that follows such a definition will be a focus on thinking and reasons.
Thus, cognitive psychology proposes rational decision models, including those for making morally relevant choices. Rational models recommend deliberate, conscious processes that include identifying options, weighing or evaluating these options on the basis of some identifiable criteria (often using rudimentary mathematical formulas), examining the relative weights of various factors and then choosing based on the outcome of this evaluation (Hastie & Dawes, 2001). In these models, what it means to be rational is to use this type of deliberative process.
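To make the shape of such a model concrete, the following is a minimal sketch of a weighted-sum decision procedure of the kind rational models recommend. The options, criteria, and weights are hypothetical illustrations, not drawn from Hastie and Dawes.

```python
# A minimal sketch of a weighted-sum ("rational") decision procedure.
# The options, criteria, and weights below are hypothetical illustrations.

def choose(options, weights):
    """Return the option whose criteria ratings score highest under the given weights."""
    def score(ratings):
        return sum(weights[criterion] * rating for criterion, rating in ratings.items())
    return max(options, key=lambda name: score(options[name]))

# Hypothetical example: deciding whether to disclose information about a client.
weights = {"client_welfare": 0.5, "legal_duty": 0.3, "relationship_trust": 0.2}
options = {
    "disclose":     {"client_welfare": 0.9, "legal_duty": 0.9, "relationship_trust": 0.2},
    "not_disclose": {"client_welfare": 0.4, "legal_duty": 0.3, "relationship_trust": 0.9},
}
print(choose(options, weights))  # -> "disclose" under these illustrative weights
```

The point of the sketch is simply that, on the rational model, being ethical means running something like this procedure deliberately and consciously; the question pursued below is whether this is how morally relevant choices are actually made.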
Interestingly, to do this well, the ethical decider must set aside three other potential influences on decision-making: habits developed on the basis of prior decisions; influential people whose views might shape one’s decision; and religious or other cultural factors that might in some significant way sway the decision maker (Hastie & Dawes, 2001). According to the model, these three factors are not logically relevant to decision equations, introduce bias and irrationality into the decision process, and thus should be excluded.
What we see in both the philosophical and cognitive approaches is a great suspicion of habit, of context, and of tradition, with a corresponding embrace of more abstract, universally applicable, conscious, deliberate thought. In some sense this formulation appears obvious. Isn’t “being ethical” about making moral choices for good reasons, or “thinking about reasons in terms of values”? What could be objectionable about this?
The objection is that to the extent that ethics is about deliberation and decision, it is so secondarily. By this statement I intend only to make an empirical claim, though one that may have philosophical implications. Embarrassingly for the cognitive scientist, what people actually do is not well predicted by conscious, deliberative decision processes, or by what they think or claim they will do. Rather, and as a further embarrassment, what people do is better predicted by their habits, their context, and their traditions. Whether you will get up and exercise tomorrow morning at 6:00 AM is better predicted by whether you did it yesterday morning than by what you tell yourself you will do tomorrow morning, regardless of how much you think or reason about it. Your behavior in this and other regards is also better predicted by what those around you are doing and whether the behavior is valued in the history and traditions of your community. Thus, the problem with rational decision models, and any moral philosophies partnered with them, is that they do not accurately reflect how we actually deliberate and decide.
Admittedly, we often do use deliberative processes effectively, especially when we lack experience in a domain, or, perhaps more often, when we critique what we have done in the past in preparation for future action. In such situations we attempt to slow the process down, and increase our deliberative and consultative activities in order to establish new and effective practices. Even this, however, does not necessarily improve outcomes, though it may, depending on the history of our thinking skills, the quality of our consultants, and the faithfulness of our practice.
Indeed, most of us do not use this type of deliberative process in any domain where we have experience, and in fact we rarely do this at all, despite the persuasive efforts of cognitive scientists. There are at least three empirically verified reasons for this failure (Ericsson et al., 2006; Klein, 1998). One reason for in-the-moment rejection of rational models is the relative speed with which most decisions, including morally relevant decisions, must be made. Conscious decision processes are just too slow. Humans have a well-developed cognitive system that responds quickly to changing circumstances and serves us well in most situations with which we are familiar. We make the vast majority of our morally relevant decisions in a fraction of a second, from the decision to compliment a spouse, to hugging our child, to whether or not to fire a gun or to cheat on an exam or to give money to a homeless person or to go to war. The reasons we give for the actions we take in these situations are typically post hoc (Haidt, 2001). The decisions themselves emerge almost viscerally, especially if they have personal relevance. In fact, ask yourself this question: how easy or difficult is it to reason someone out of a position they have taken seemingly spontaneously but with conviction in a domain they care about?
Second, we do not use rational decision processes well because we typically lack the kinds of information necessary to construct representative models of real life contexts. Such contexts are often complex, fluid, rapidly changing and situation dependent, and critical information, such as the relative importance of different kinds of information, is unavailable. Rather, we survey the context, we size it up, we gather what information we can, and we in some sense experience our response as we respond. We often see this kind of process occur when professionals debate an ethical issue. If the issue is presented as a hypothetical dilemma, for example, the first thing that happens is that the respondents seek more information, more details. We get responses such as, “well, it depends on x or y.” Professionals quickly shift the conversation from general principles to issues relevant to their profession, about which they have expertise. However, in most contexts we often lack critical pieces of information and yet must act.
Finally, we often do not know how to weigh discrepant or disconfirming information. Psychologically, moment-to-moment, we tend to accept as relevant to us information that confirms our perspectives, and we filter out and ignore discrepant information – unless the discrepancy alerts us viscerally to a danger. In situations where we perceive danger, we tend to overweight the new information, giving it more importance than we should. That is, we tend to overgeneralize evidence of danger. Cognitive scientists know this, and use it as evidence that we should use their models to compensate, but they provide no mechanism by which to do this in the daily process of decision-making.
In contrast, with experience experts develop rapid pattern recognition, nuanced perceptual discriminations, metaphorical reasoning, and the rapid identification of adequate outcomes at the expense of best outcomes, all at a preconscious level (Klein, 1998). In this sense, expert decisions are delivered to consciousness rather than discovered by it. Writes Todd Tremlin, “One of the most significant findings of cognitive psychology is how much of our thinking takes place below the level of awareness. Representations are constructed in mental workshops outfitted with specialized machinery of all sorts, each contributing to the project at hand. Most of this work is automatic, rapid, and incorrigible; only the finished product is made available, by means of a mental dumbwaiter, to conscious inspection” (Tremlin, 2006, p. 72).
The development of expertise thus structures the brain for complex, pre-conscious processing on the basis of habits, past experience, training and context. These processes are somewhat less efficient than those hard-wired in by genetics or early experience, which I will discuss in a moment, but nevertheless function in much the same way, in that they do not require conscious processing to solve problems or to make judgments. In moral expertise, the conscious component – “I ought to do that” or “it would be unjust to respond this way” (regardless of the level of abstract reasoning) – typically occurs after the “decision” has been made. The reasons given (for example, in Kohlberg’s work) are after-the-fact rationalizations that do not provide insight into the actual cognitive processes involved.
For example, Stanley Fish discussed how judges make decisions in an article about constitutional theory (Fish, 2008). He writes, “When Professor Lief Carter asked a number of judges to talk about their interpretive theories, he found that ‘the conversation would quickly drift from the theoretical points’ he had introduced to anecdotal accounts of practice and opinion writing. ‘Most of the time,’ said one judge, ‘you reach the result that’s fair and then build your thinking around it.’”
I notice this when watching professionals engaged in discussions of moral dilemmas. For example, when asked whether it is permissible for a therapist in a psychiatric hospital to read the private diary of a teenager recently admitted for attempted suicide, I’ve observed therapists respond ‘yes’ or ‘no’ or ‘maybe’ within seconds, though most hem and haw. If you ask therapists why they responded ‘yes’ (or ‘no’), they will fumble about trying to generate a reason. Those who respond ‘yes’ will eventually justify their response by referring to the salience of the suicide attempt; that is, they respond to the safety issue. Those who respond ‘no’ tend to discuss the relevance of the privacy of a teenage diary and the violation of mutual trust that would ensue by reading it. But generating reasons is hard work, and much too slow for on-the-ground responses. And even after lengthy conversation, few people are swayed by alternative arguments, and everyone realizes that no principle or lengthy deliberation can settle the matter.
The third group, those who respond ‘maybe,’ asks for more clinically relevant information. The problem of course is that the underlying ethical principles contradict each other, and at some level clinicians sense the contradiction, and pause, if they have time. New information may sway the therapist toward one side or the other; for example, the therapist encounters the mother holding the diary above her head as the teenager swears angrily at the mother for bringing the diary to the hospital, and screams “I’ll never trust you again.” Or, the teenager says to the therapist, “if you want to understand me, you’ll have to read this,” and hands the diary over. Again, based on clinical experience, therapists rapidly respond with a decision based on how the new information shapes the unique moment. Once the moment is over, therapists can give reasons for their behavior, but it is typically after-the-fact rationalizing.
Moral expertise in this sense is like playing the violin well or moving between parallel bars with precision in gymnastics; moral wisdom (virtue, expertise) is a complex skill that is employed almost automatically when the context is familiar. Nevertheless, the process is somehow mysterious. The mystery is due to the convergence of several factors each of which resists analysis: individual idiosyncrasies in genetics and experience, uniqueness of context, and chaotic neural processes. We cannot fully measure the starting point, we cannot fully know the forces acting to define the current situation, and we cannot predict the outcome of chaotic interactions of multiple variables.
However, we may be able to bias complex human processes that are in ongoing interaction with the environment: this biasing is what we mean by practicing the virtues, or developing moral expertise. In addition, a focus on biasing effects takes seriously (and functions in conjunction with) our evolutionary history. Both processes, training up through experience and shaping up through evolution, shape the neural structures of the brain and bias outcomes. Understanding how this happens is to understand better how we choose, and thus, in the broadest sense, who we are.
So how do we bias outcomes? The simple answer is practice. In fact, we emphasize practice so much in so many domains of life, including morally salient contexts, in order to overcome the cumbersome, ponderous inefficiency of conscious thinking. The more complex the task, the more problematic conscious thought becomes, and the more important are just those factors held in suspicion by rational models: habit, context, tradition and mentors. Our brains are extraordinarily flexible and creative, but, as William James understood over 100 years ago, they are most dependable, and most innovative, when primed with a rich array of associations within a given domain (Calvin, 1996; Dreyfus, Dreyfus, & Athanasiou, 1986; James & Allport, 1985). Appropriate associations in the sense James means are trained up by associating with appropriate experts: by practicing what one is taught. “To acquire ethical expertise one must have the talent to respond to those ethical situations as similar that ethical experts respond to as similar, and one must have the sensibility to experience the socially appropriate sense of satisfaction or regret at the outcome of one’s action…. Without a shared ethical sensibility to what is laudable and what is condemnable one would go on doing what the experts in the community found inappropriate, develop bad habits, and become what Aristotle calls an unjust person” (Dreyfus & Dreyfus, 1992, p. 119).
Is conscious thought ever useful? Of course. Its use, however, is in the critical evaluation of past performance and in the formulation of effective future practice. The point of practice is to bias neural networks to respond, in the moment, as practiced. We create what one neurophysiologist called “bumps and ruts” in our neural networks that bias us toward our next thought or action. These biasing influences include our immediate context, short- and long-term memories, our emotional state, our neural health, the focus of our attention and other factors (Calvin, 1996).
Evolution and Moral Behavior
One especially important additional factor is our evolutionary history. Practice fine-tunes our behavioral repertoire, and frees conscious processes to innovate, but our fundamental moral concerns are not generated in a vacuum. We have evolved, and an increasingly rich and sophisticated analysis of our evolutionary history suggests that what we care about, or perhaps more pointedly, how we go about caring, functions within an array of evolved parameters that bias our moral thought and action in fundamental ways (Hauser, 2006). Perhaps counterintuitively, understanding these evolutionary biases, and the relative degree of flexibility of subsequent evolved parameters, may actually increase the importance of practice, and the related importance of habit, tradition, experience, training, and good teachers.
For example, Hauser (2006) speaks of an “evolved capacity of all human minds that unconsciously and automatically generates judgments of right and wrong” (p. 2). Moral judgments are not inevitable, however. “They color our perceptions, constrain our moral options, and leave us dumbfounded because the guiding principles are inaccessible, tucked away in the mind’s library of unconscious knowledge” (p. 2).
Support for such claims comes from two directions: the study of the kinds of moral judgments that people make, and extensive studies of nonhuman animal behavior. I’ll provide two examples of moral judgment studies, one focused on violence and the second on sharing.
First, you are perhaps familiar with the trolley/footbridge dilemma (Hauser, 2006; Thomson & Parent, 1986). Imagine that you see a runaway trolley headed towards five people standing on the track, who cannot get out of the way. However, if you flip a switch, you can divert the trolley onto a parallel track. The problem is that one person is standing on the alternate track and would be hit if you flip the switch. Is it morally permissible to flip the switch, saving five but killing one? In study after study, approximately 90% of respondents say yes, it is permissible under these circumstances.
Now consider the footbridge version. You are standing on a footbridge. The trolley is coming down the track, approaching the five innocent people. Only now, in order to stop the trolley you must push a person standing next to you off the bridge and into the path of the trolley, saving the five but killing the one. May you push the innocent person standing next to you? Notice that the result is the same in each scenario: five are saved, one dies. But in the footbridge situation only about 10% say it is permissible. These results hold for thousands of respondents across English-speaking societies in the United States, Canada, England and Australia, and do not differ by age, ethnicity, educational background, religion, or experience with moral philosophy. Currently, Hauser is expanding the research to include people whose native language is Hebrew, Arabic, Indonesian, Chinese and Spanish, and the general pattern is the same (Hauser, 2006).
In addition, fMRI studies reliably show that the footbridge scenario activates, in a way that the original trolley scenario does not, emotion centers in the brain (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001). The activation of those centers appears to alter our moral calculus in reliable and predictable ways. Finally, when questioned, people struggle to offer anything like a coherent explanation for their decision (Hauser, 2006).
A second kind of example involves human sharing, or reciprocal behavior. In these games, a subject, called a proposer, is given a day’s salary or its cultural equivalent (Henrich et al., 2001). The proposer is then invited to give to a second subject, called a respondent (who, by the way, watches the whole process), any amount of the original gift the proposer desires. The respondent may then accept or reject the offer. If the respondent accepts, he or she gets the offered amount, and the proposer keeps the rest. If the respondent rejects the offer, neither gets anything. Rational economic models suggest that the proposer should offer as little as possible, and the respondent should accept anything offered, as anything is better than nothing. It is a big-win/little-win versus nobody-wins option.
Oddly, the average offer of the proposer to the respondent is about 44% of the original gift. Although there is variability across industrial and other societies, in every society studied, from the United States to Europe to South America to Africa, people share more than they ought to based on logical models of self-interest. In addition, even in one-trial studies, offers of less than 20% are rejected 40% to 60% of the time. In multi-trial studies, proposers who consistently made low offers tended to be punished by observers when the roles were reversed (Hauser, 2006). Humans evidence a reliable cross-cultural sense of fairness. One study concludes, “long-run evolutionary processes governing the distribution of genes and cultural practices could well have resulted in a substantial fraction of each population being predisposed in certain situations to forgo material payoffs in order to share with others, or to punish unfair actions” (Henrich et al., 2001, p. 77).
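To make the contrast between the rational prediction and observed behavior explicit, here is a minimal sketch of a single ultimatum-game round. The stake, offer sizes, and acceptance rules are hypothetical illustrations that simplify the findings just described; the actual studies used a day’s wages or its cultural equivalent.

```python
# A minimal sketch of one round of the ultimatum game described above.
# Stake, offers, and the respondents' acceptance rules are hypothetical illustrations.

def ultimatum_round(stake, offer, respondent_accepts):
    """Return (proposer_payoff, respondent_payoff) for a single round."""
    if respondent_accepts(offer, stake):
        return stake - offer, offer
    return 0, 0  # a rejected offer leaves both players with nothing

# The "rational" prediction: accept any positive offer, since something beats nothing.
rational = lambda offer, stake: offer > 0

# A fairness-minded rule of the sort the cross-cultural data suggest:
# reject offers below roughly 20% of the stake, forgoing payoff to punish unfairness.
fairness_minded = lambda offer, stake: offer >= 0.2 * stake

print(ultimatum_round(100, 5, rational))          # (95, 5): low offer accepted
print(ultimatum_round(100, 5, fairness_minded))   # (0, 0): low offer rejected
print(ultimatum_round(100, 44, fairness_minded))  # (56, 44): a typical observed offer accepted
```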
When we turn to the non-human animal world, we also see striking evidence of what must be considered at the least pre-moral behavior. Rhesus monkeys care for disabled siblings; chimpanzees celebrate the birth of baby chimpanzees, intervene for one another to reduce conflict, form coalitions in conflicts, keep track of favors and slights, marshal coordinated revenge against opponents, appear to recognize themselves in mirrors, and, as with capuchin monkeys, actively share food (de Waal, 1996). Elephants return year after year to explore the bones of relatives (Moss, 2000). Dolphins exhibit complex social groups that evidence desire and emotion, judgment, play, and goal directed activity (Pryor & Norris, 1991). Frans de Waal, a student of primate behavior for decades, summarizes, “Many non-human primates … seem to have similar methods to humans for resolving, managing, and preventing conflicts of interests within their groups. Such methods, which include reciprocity and food sharing, reconciliation, consolation, conflict intervention, and mediation, are the very building blocks of moral systems” (Flack & de Waal, 2000, p. 3).
There is thus no question that these most highly developed mammals think, reason, plan, choose among options, and have beliefs about the world. What they do when they do these things is to engage in behaviors, and for reasons, that are at least precursors to human moral behavior.