Is “Nonreductive Physicalism” an Oxymoron?
My view of human nature is physicalist—in the sense of not dualist. As do many philosophers of mind these days, I call my position nonreductive physicalism. But when I began using this term ten or so years ago, the “nonreductive” part, I realized, was just a placeholder. I had no adequate answer to the question: if humans are purely physical, then how can it fail to be the case that all of their thoughts and behavior are merely the product of the laws of neurobiology? But doesn’t reductionism just have to be false? Otherwise we are not holding our positions for reasons—we are determined to do so. And in fact, we can’t make sense of a meeting such as the one at which this paper was presented—we must have just been taking turns making noises at one another.1
I believe that I now have the resources to provide an answer to the reductionists, due in large measure to the collaboration of my colleague in neuropsychology, Warren Brown.2 However, our solution to the problem took us three hundred pages, so I can’t give an adequate argument in this short essay. I’ll focus on one aspect of the issue, the role of downward causation, and then shamelessly promote our book to provide the rest of the story. The other significant ingredient in our argument is the development of what we call a post-Cartesian, and particularly post-Cartesian-materialist, account of the mental. A Cartesian-materialist account attempts to understand the mental almost entirely as related to the brain—inside the head. We argue instead for a concept of the mental as essentially embodied and constituted by action-feedback-evaluation-action loops in the environment, and “scaffolded” by cultural resources.3
Understanding Downward Causation
The topic of downward causation (and its opposite, causal reductionism) is an interesting one in its own right. But it would also be an interesting topic from the point of view of the sociology of knowledge. What I mean by this is, first, there are many ardent reductionists among philosophers and scientists, and I would state their position not in terms of “I have good grounds for this thesis,” but rather: “I can’t imagine how reductionism can fail to be true.” On the other hand, one can do a literature search in psychology and cognitive neuroscience and find hundreds of references to downward causation. Presumably these scientists would not use the term if they thought there was anything controversial about it.
Meanwhile in the philosophical literature there was an article in 1974 by Donald Campbell on downward causation, but scarcely any mention of the topic again until the 1990s, when it began to show up in philosophy of mind. I believe that the most common stated position on the relation of mental phenomena to the brain among current philosophers of mind is nonreductive physicalism. Yet Jaegwon Kim has been remarkably effective in using the concept of downward causation as one of the horns of a “five-horned dilemma”: you either have (1) to be a dualist, or (2) countenance some spooky form of causal overdetermination, or (3) accept an even spookier concept of downward causation, or (4) give up on the causal closure of the physical in order to avoid (5) reductionism. He has convinced a surprising number of philosophers that “nonreductive physicalism” is an oxymoron.
So I take the thesis of downward causation to be the denial of the thesis of causal reductionism. And we have scholars on both sides, some saying that reductionism must be true, others that it must be false. Ludwig Wittgenstein claimed that when we find ourselves saying that it just must be this way, we should be suspicious that our thinking has been captured by mental images rather than being motivated by arguments. So in this essay I’ll do four things. First, I trace the source of the mental imagery that makes it seem that reductionism must be true. Second, I present a short history of developments in philosophy that have shown us how to get out of this particular Wittgensteinian fly-bottle.4 This account will end with the suggestion that downward causation is best understood in terms of “context-sensitive constraints” imposed by global characteristics of a dynamical system. Third, I illustrate this claim by applying it to the behavior of an ant colony. And, finally, I mention some of the additional issues that the nonreductive physicalist needs to deal with.
The Atomist-Reductionist Fly-Bottle
When I first began teaching, my students tended to be innate reductionists. That is, when I presented them with the model of the hierarchy of the sciences, and a corresponding hierarchy of complex systems, I never had to explain why reductionists held the position they did. Within an interval of about fifteen years, though, I’ve found that many students are innate anti-reductionists; thus it has become important to be able to explain why causal reductionism seems necessarily true to so many. There is a worldview change going on now, and reductionism has been one of the central features of the modern worldview.5
To understand how reductionism could have gone unchallenged for so long we need to see its origin in early modern physics. Aristotelian hylomorphism (the thesis that material things are composed of matter and an activating principle called a form) had to be rejected due to the new astronomy; an alternative theory of matter was found in ancient atomism. Reductionism was the outcome of combining the atomism that early modern physicists took over from Epicureanism with the notion of deterministic laws of physics. Early modern atomism consisted of the following theses: First, the essential elements of reality are the atoms. Second, atoms are unaffected by their interaction with other atoms or by the composites of which they are a part. Third, the atoms are the source of all motion and change. Fourth, insofar as the atoms behave deterministically they determine the behavior of all complex entities. Finally, in consequence, complex entities are not, ultimately, causes in their own right.
When modern scientists added Newton’s laws of motion it was then reasonable to assume that these deterministic laws governed the behavior of all physical processes. All causation is bottom-up (this is causal reductionism) and all physical processes are deterministic because the ultimate causal players, the atoms, obey deterministic laws. The determinism at the bottom of the hierarchy of the sciences is transmitted to all higher levels.
When we recognize that all of the assumptions in this early modern picture have been called into question, the reductionist dogma loses some of its grip on the imagination. Atoms modeled as tiny solar systems have given way to a plethora of smaller constituents whose “particle-ness” is problematic. The original assumption that the elementary particles are unaffected by their interactions has certainly been challenged by the peculiar phenomenon of quantum nonlocality. Particles that have once interacted continue to behave in coordinated ways even when they are too far apart for any known causal interaction in the time available. Thus, measuring some characteristic of one particle affects its partner, wherever it happens to be. The main point of my paper will be that when we consider parts from levels of complexity above the atomic and sub-atomic, the possibilities for the whole to effect changes are dramatic, and in the case of complex dynamical systems, the notion of a part shifts from that of a component thing to a component process or function.
Scientific ideas about the ultimate source of motion and change have gone through a complex history. For the Epicureans, atoms alone were the source of motion. An important development was Newton’s concept of inertia: a body will remain at rest or continue in uniform motion in a straight line unless acted upon by a force. In Newton’s system, initial movement could only be from a first cause, God, and the relation of the force of gravity to divine action remained for him a problem. Eventually three other forces were added to the picture. Big-bang cosmology played a role too. The force of the initial explosion plays a significant part in the causes of motion, and it is an open question whether there can be an explanation of that singularity.
There is also the problem that we no longer know how to define determinism. For the Epicureans, determinism was in nature itself. After the invention of the concept of laws of nature, we have to distinguish the claim that things or events in nature determine subsequent events from the claim that the laws of nature are deterministic. But much has changed during the modern period. The concept of a law of nature began as a metaphor: God has laws for human behavior and for non-human nature. While it was thought that nature always obeyed God’s laws, God presumably could change or override his own laws. By Laplace’s day the laws of nature were thought to be necessary. But today, with multiple-universe cosmologies and reflection on the anthropic issue, there is much room, again, to imagine that the laws of our universe are contingent: it can be asked why the universe has laws and constants, from within a vast range of possibilities, that belong to a very small set that permits the evolution of life.
Jeremy Butterfield argues that the only clear sense to be made of determinist theses is to ask whether significant scientific theories are deterministic. This is more difficult than it first appears, however. It may appear that the determinism of a set of equations is simply the mathematical necessity in their transformations and their use in predictions of future states of the system. One problem, though, according to Butterfield, is that “there are many examples of a set of differential equations which can be interpreted as a deterministic theory, or as an indeterminate theory, depending on the notion of state used to interpret the equations.”6
Second, even if a theory is deterministic, no theories apply to actual systems in the universe because no system can be suitably isolated from its environment. The only way around this problem would be to take the whole universe as the system in question. If the idea of a theory that describes the relevant (essential, intrinsic) properties of the state of the entire universe and allows for calculation of all future states is even coherent, it is wildly speculative.
A third problem, raised by Alwyn Scott, is that many important theories dealing with higher levels of complexity (such as those governing the transmission of nerve impulses) can be shown not to be derivable from lower-level theories, and especially not from quantum mechanics.7
Finally, William Bechtel has called into question the pervasive emphasis on laws in scientific explanations. He argues that most scientific explanation proceeds by identifying a phenomenon (e.g., vision), then by identifying the system involved in the phenomenon, and by decomposing the system into its functional parts. There is no need to refer here to any basic laws of nature. And if the decomposition itself sounds reductionistic, it is not, because the explanatory task is only complete when one understands how the functions of the parts are organized into the phenomenon of interest. So the existence of deterministic laws in some aspects of physics, or even of deterministic laws in neuroscience such as the Hodgkin-Huxley equations, has little or no relevance for explaining cognitive phenomena.8
So, given all of these developments, we might say that the assumption of complete bottom-up determinism has had the rug pulled out from under it.
Developing a Concept of Downward Causation
So the worldview that made causal reductionism appear to be obviously true has been called into question in a variety of ways. I now want to consider the alternative. I believe that the most cogent arguments against causal reductionism are those showing that in many complex systems the whole has reciprocal effects on its constituents.
Donald Campbell and Roger Sperry both used the term “downward causation” in the 1970s. Sperry often spoke of the properties of the higher-level entity or system overpowering the causal forces of the component entities.9 Campbell’s work has turned out to be more helpful. Here there is no talk of overpowering lower-level causal processes, but instead a thoroughly non-mysterious account of a larger system of causal factors having a selective effect on lower-level entities and processes. Campbell’s example is the role of natural selection in producing the remarkably efficient jaw structures of ants and worker termites.10
As I mentioned earlier, downward causation is often invoked in current literature in psychology and related fields, yet it received little attention in philosophy after Campbell’s essay in 1974. However, in 1995 Robert Van Gulick spelled out in more detail an account based on selection. The reductionist’s claim is that the causal roles associated with special-science classifications are entirely derivative from the causal roles of the underlying physical constituents. Van Gulick argues that even though the events and objects picked out by the special sciences are composites of physical constituents, the causal powers of such an object are not determined solely by the physical properties of its constituents and the laws of physics.11 They are also determined by the organization of those constituents within the composite. And it is just such patterns of organization that are picked out by the predicates of the special sciences.
These patterns have downward causal efficacy in that they can affect which causal powers of their constituents are activated. “A given physical constituent may have many causal powers, but only some subsets of them will be active in a given situation. The larger context (i.e., the pattern) of which it is a part may affect which of its causal powers get activated. . . . Thus the whole is not any simple function of its parts, since the whole at least partially determines what contributions are made by its parts.”12 Such patterns or entities are stable features of the world, often in spite of variations or exchanges in their underlying physical constituents. Many such patterns are self-sustaining or self-reproducing in the face of perturbing physical forces that might degrade or destroy them (e.g., DNA patterns). Finally, the selective activation of the causal powers of such a pattern’s parts may in many cases contribute to the maintenance and preservation of the pattern itself. Taken together, he says, these points illustrate that “higher-order patterns can have a degree of independence from their underlying physical realizations and can exert what might be called downward causal influences without requiring any objectionable form of emergentism by which higher-order properties would alter the underlying laws of physics. Higher-order properties act by the selective activation of physical powers and not by their alteration.”13
A likely objection to Van Gulick’s account is this: the reductionist will ask how the larger system affects the behavior of its constituents. To affect a constituent must be to cause it to do something different from what it would have done otherwise. Either this is causation by the usual physical means or it is something spooky. If it is by the usual physical means, then those interactions must be governed by ordinary physical laws, and thus all causation is bottom-up after all.
The next (and I believe the most significant) development in the concept of downward causation is well represented in the work of Alicia Juarrero.14 She describes the role of the system as a whole in determining the behavior of its parts in terms similar to Van Gulick’s account of the larger pattern or entity selectively activating the causal powers of its components, and she draws on the theory of dynamical self-organizing systems to explain how. Juarrero says:
The dynamical organization functions as an internal selection process established by the system itself, operating top-down to preserve and enhance itself. That is why autocatalytic and other self-organizing processes are primarily informational; their internal dynamics determine which molecules are “fit” to be imported into the system or survive.15
She addresses the crucial question of how to understand the effect of the system on its components. Her answer is that the system constrains the behavior of its component processes. The concept of a constraint in science suggests “not an external force that pushes, but a thing’s connections to something else . . . as well as to the setting in which the object is situated.”16 More generally, then, constraints pertain to an object’s connection with the environment or its embeddedness in that environment. They are relational properties rather than primary qualities in the object itself. Objects in aggregates do not have constraints; constraints only exist when an object is part of a unified system.
From information theory Juarrero employs a distinction between context-free and context-sensitive constraints. In successive throws of a die, the numbers that have come up previously do not constrain the probabilities for the current throw; the constraints on the die’s behavior are context-free. In contrast, in a card game the constraints are context-sensitive: the chances of drawing an ace at any point are sensitive to history. She writes:
Assume there are four aces in a fifty-two card deck, which is dealt evenly around the table. Before the game starts each player has a 1/13 chance of receiving at least one ace. As the game proceeds, once players A, B, and C have already been dealt all four aces, the probability that player D has one automatically drops to 0. The change occurs because within the context of the game, player D’s having an ace is not independent of what the other players have. Any prior probability in place before the game starts suddenly changes because, by establishing interrelationships among the players, the rules of the game impose second-order contextual constraints (and thus conditional probabilities).
. . . [N]o external force was impressed on D to alter his situation. There was no forceful efficient cause separate and distinct from the effect. Once the individuals become card players, the conditional probabilities imposed by the rules and the course of the game itself alter the prior probability that D has an ace, not because one thing bumps into another but because each player is embedded in a web of interrelationships.17
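Juarrero’s distinction can be made concrete with a short sketch in Python (my own illustration, not hers; the function names and trial counts are purely illustrative). A die’s probabilities are context-free, fixed regardless of history; in a deck dealt without replacement, the probability that the next card is an ace is conditional on the history of the game, and it collapses to zero once all four aces are out, without any force being impressed on anything.

```python
import random

def p_next_is_six(history):
    """Context-free constraint: however many sixes have come up
    before, the next throw's chance of a six is unchanged."""
    return 1 / 6

def p_next_is_ace(cards_drawn, aces_drawn):
    """Context-sensitive constraint: in a deal without replacement,
    the chance that the next card is an ace depends on how many
    cards, and how many aces, have already been dealt."""
    return (4 - aces_drawn) / (52 - cards_drawn)

def simulated_ace_rate(cards_drawn, aces_drawn, trials=50_000, seed=7):
    """Empirical check: shuffle, keep only deals matching the stated
    history, and measure how often the next card is an ace."""
    rng = random.Random(seed)
    deck = [1] * 4 + [0] * 48          # 1 marks an ace
    hits = total = 0
    for _ in range(trials):
        rng.shuffle(deck)
        if sum(deck[:cards_drawn]) == aces_drawn:
            total += 1
            hits += deck[cards_drawn]
    return hits / total if total else float("nan")

# Before any card is dealt, the chance the next card is an ace is
# 4/52 = 1/13; once all four aces are out it is 0 -- not because
# anything pushed the deck, but because of the web of
# interrelationships set up by the deal.
```

Nothing in the die’s history constrains its future; everything in the deal’s history constrains the deck’s. That asymmetry is what Juarrero means by a second-order, context-sensitive constraint.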
Thus, a better term for this sort of interaction across levels might be “whole-part constraint” rather than downward causation.
Alwyn Scott, a specialist in nonlinear mathematics, states that a paradigm change (in Thomas Kuhn’s sense) has occurred in science beginning in the 1970s. He describes nonlinear science as a meta-science, based on recognition of patterns in kinds of phenomena in diverse fields. This paradigm shift amounts to a new conception of the very nature of causality.18
The goal of this paper is to show the applicability of the notion of downward causation (or whole-part constraint) to the problem of relating psychology to neurobiology. In light of Professor Goetz’s paper, it is also to show that such downward causation does not violate the causal closure of the physical. In the terms I have developed here, it is to understand human beings, with their immense neural complexity, and enmeshed in an immensely complex cultural environment, as complex dynamical systems. Such systems are beyond human capacity to describe fully. What I shall do instead is to provide an easily grasped example of a dynamical system. Since Campbell’s original paper focused on ants it is appropriate to follow in his footsteps. I will show the applicability of dynamical systems theory to the behavior of an ant colony.
Harvester ant colonies consist of a queen surrounded by interior workers deep inside the burrow, and other worker ants that only enter chambers near the surface. The worker ants are specialized: some forage for food, others carry away trash, and still others carry dead ants away from the colony. Deborah Gordon has shown that the ants manage to locate the trash pile and the cemetery at points that maximize the distances between cemetery and trash pile, and between both of these and the colony itself.19
Ant colonies show other sorts of “intelligent” behavior. If the colony is disturbed, workers near the queen will carry her down an escape hatch. “A harvester ant colony in the field will not only ascertain the shortest distance to a food source, it will also prioritize food sources, based on their distance and ease of access. In response to changing external conditions, worker ants switch from nest-building to foraging, to raising ant pupae.”20 Colonies develop over time. Successful colonies last up to fifteen years, the lifespan of the queen, even though worker ants live only a year. The colonies themselves go through stages: young colonies are more fickle than older ones. Gordon says: “if I do the same experiment week after week with older colonies, I get the same results: they respond the same way over and over. If we do the same experiment week after week with a younger colony, they’ll respond one way this week, and another way next week, so the younger colonies are more sensitive to whatever’s different about this week than last week.”21 Younger colonies are also more aggressive: “if older colonies meet a neighbor one day, the next day they’re more likely to turn and go in the other direction to avoid each other. The younger colonies are much more persistent and aggressive, even though they’re smaller.”22
While these shifts in the colonies’ “attitudes” over time have yet to be explained, the coordination of the functions of the worker ants, such as changing from foraging to nest-building, has been. Ants secrete pheromones that serve as chemical signals to other ants. E. O. Wilson has shown that fire ants have a vocabulary of ten signals, nine based on pheromones, that code for task recognition.23 Gradients in pheromone trails make it possible to indicate directionality. Gordon’s explanation for the colony’s ability to adjust task allocation according to colony size and food supply depends on the ants’ ability to keep track of the frequency of encounters with other ants of various types. So, for example, “[a] foraging ant might expect to meet three other foragers per minute—if she encounters more than three, she might follow a rule that has her return to the nest.”24
It is tempting to try to explain the behavior of the colony reductionistically. Knowledge of some of the “ant rules” gives the impression that the behavior of the colony is entirely determined bottom-up. One can imagine that each ant has built-in laws governing its behavior, and one can imagine a molecular-neural level account: “smell of fourth forager within one minute causes return to the nest.” So the typical causal agent is not “the system as a whole” or “the environment” but a few molecules of a pheromone embedded in the ant’s receptor system. If one had all of the information about the rules, the initial placement of the ants, and the pheromone trails one could predict or explain the behavior of the whole colony.
Now consider an alternative, systems-theory description of the phenomena. The colony as a whole is certainly describable as a system. It is bounded but not closed; it is a self-sustaining pattern. The shift in perspective required by a systems approach is to see the colony’s components as a set of interrelated functional systems—not a queen plus other ants, but rather an organization of processes such as reproduction, foraging, nest-building. It is a self-organized system that runs on information; it produces and maintains its own functional systems in that the relations among the ants constrain them to fulfill the roles of forager, nest-builder, etc. All have the same DNA; differentiation occurs only in the context of the colony. In addition it has a high degree of autonomy vis-à-vis the environment.
The colony displays a number of emergent, holistic properties. In addition to its relative stability there is the “intelligence” displayed in the placement of the trash pile and cemetery, and in the ability to prioritize food sources. Accidents of the environment such as the location of food sources affect the foraging system as a whole, which in turn constrains the behavior of individual ants.
The crucial shift in perspective is from thinking in terms of causes (that is, nothing will happen unless something makes it happen) to thinking in terms of both bottom-up causes and constraints (that is, a variety of behaviors are possible and the important question is what constricts the possibilities to give the observed result). It is a switch from viewing matter as inherently passive to viewing it (at least the complex systems in question) as inherently active. In contrast to the assumption that each lower-level entity will do only one thing, the assumption here is that each lower-level entity has a repertoire of behaviors, one of which will be selected due to its relations to the rest of the system and to its environment. In fact, ant behavior when extracted from its environment (its colony) is a good visual model: drop an ant on the table and it runs helter-skelter. It can be coerced into going one way rather than another (these would be context-free constraints), but in the colony it also responds to context-sensitive constraints that train its behavior to that of other ants in ways sensitive to history and to higher levels of organized context.
From this point of view, the genetically imprinted rules in the individual ants’ nervous systems are not (primarily) to be understood as causal laws; they are receptors of information regarding such things as the density of the forager population. The holistic property of the system, forager density, increases the probability that a given forager will encounter more than three other foragers per minute, and thus increases the probability that the ant in question will return to the nest. It is a non-forceful constraint on the ant’s behavior.
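The point can be illustrated with a toy simulation (my own sketch in Python; the geometry, numbers, and rule threshold are stand-ins, not Gordon’s actual model). Every forager follows the identical local rule, return to the nest after meeting more than three other foragers in a minute; but whether the rule fires depends on forager density, a property of the colony as a whole.

```python
import random

def fraction_returning(n_foragers, side=10.0, radius=1.0,
                       threshold=3, trials=2000, seed=2):
    """Crude one-minute snapshot: scatter foragers on a square patch
    and count, for one focal ant, how many others fall within its
    encounter radius. The ant's rule ("more than `threshold` meetings
    means go home") is fixed; its firing rate is set by density."""
    rng = random.Random(seed)
    returning = 0
    for _ in range(trials):
        pts = [(rng.uniform(0, side), rng.uniform(0, side))
               for _ in range(n_foragers)]
        x, y = pts[0]                      # the focal forager
        meetings = sum(1 for (px, py) in pts[1:]
                       if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2)
        if meetings > threshold:
            returning += 1
    return returning / trials

# Same ants, same rule; only the colony-level density differs:
sparse = fraction_returning(10)    # rule almost never fires
dense = fraction_returning(200)    # rule fires for most foragers
```

No ant’s rule is altered between the two runs; the holistic variable, density, selects which member of the unchanged behavioral repertoire is activated. This is downward causation understood as non-forceful, context-sensitive constraint.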
Note that the reductionist’s question is: if you take all the components and place them in exactly the same positions in the environment and allow the system to run again, will the entire system follow exactly the same path? The reductionist assumes that it must do so unless there is some source of genuine indeterminacy involved at the bottom level. The systems theorist asks a different question: given that no two complex systems (e.g., two ant colonies) are ever identical, why is it the case that, starting from so wide a variety of initial conditions, one finds such similar patterns emerging? That the world is full of such phenomena is now a widely recognized fact, but it is counter-intuitive on a bottom-up account. I claim that the fact of higher-order patternedness in nature, patterns that are stable despite perturbations, and despite replacement of their constituents, calls for a paradigm shift in our perceptions of (much of) the world.
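The systems theorist’s question, why widely different starting points settle into the same pattern, is the question of an attractor. The logistic map gives a minimal illustration (a generic textbook example, not a model of ant colonies): for a suitable parameter, every trajectory in the unit interval is drawn to the same fixed point.

```python
def settle(x0, r=2.5, steps=200):
    """Iterate the logistic map x -> r*x*(1-x). For r = 2.5 the map
    has a stable fixed point at x* = 1 - 1/r = 0.6, and every
    starting value in (0, 1) is drawn to it."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Wildly different initial conditions end in the same place:
endpoints = [round(settle(x0), 6) for x0 in (0.01, 0.30, 0.50, 0.77, 0.99)]
```

Rerunning one trajectory from identical conditions is the reductionist’s test; explaining why many non-identical trajectories converge on one stable pattern is the systems theorist’s, and “attractor” names the answer.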
From Ants to Actions
Scott Kelso has argued that the language needed to connect the levels of psychology and cognition to those of neuroscience is specifically that of nonlinear dynamical systems.25 I have introduced some of that language here, and have applied it to a very simple complex system. But the level of complexity involved in an ant colony is comparable to that of the very simplest of multicelled organisms—those without a nervous system. The cells making up these organisms, like the ants, are restricted to local communication via the diffusion of molecules. This means that both ant colonies and simple organisms lack the high degree of coupling of their components that produces the most interesting cases of self-organizing and increasingly flexible and goal-directed systems.
To get from ants to human conscious choices it is necessary first to consider the ways in which all complex organisms differ from simple ones. The variables that lead to increases in the capacity for self-causation include modifiability of parts (i.e., context-sensitive constraints on components), neural complexity, behavioral flexibility, and increasing ability to acquire information. In systems terms, this involves functional specialization of components and a high level of flexible coupling of those components.
As we move from rudimentary animal behavior toward humans, we see a vast increase in brain size, tighter coupling (number of axons, dendrites, synapses), structural complexification, recurrent neural interconnections, and complex functional networks that are hypothesized to be the source of consciousness. But still there is the question of what distinguishes intelligent, self-conscious, and morally responsible choice from the flexibility and autonomy of the other higher animals. Brown and I argue that the two crucial developments are symbolic language and the related capacity to evaluate one’s own behavior and cognition. So in chapter 4, “How Can Neural Nets Mean?,” we consider the charge that a physicalist cannot make sense of meaning.26 We argue that the supposed mysteries of meaning and intentionality are a product of Cartesian assumptions regarding the inwardness of mental acts and the passivity of the knower. If instead we consider the mental in terms of action in the social world, there is no more mystery to how the word “chair” hooks onto the world than there is to how one learns to sit in one. We consider what is known so far about the neural capacities needed for increasingly complex use of symbols. Symbolic language—in fact, quite sophisticated symbolic language—is a prerequisite for both reasoning and morally responsible action.
In chapter 5, “How Does Reason Get Its Grip on the Brain?,” we turn to the role of reason in human thought and action. A powerful argument against physicalism is the lack, so far, of a suitable account of “mental causation,” that is, of the role of reason in brain processes. The problem is often formulated as the question of how the mental properties of brain events can be causally efficacious. We reformulate the problem, instead, as two questions: how is it that series of mental/neural events come to conform to rational (as opposed to merely causal) patterns? And what difference does the possession of mental capacities make to the causal efficacy of an organism’s interaction with its environment?
In chapter 6, “Who’s Responsible?,” we turn to a central theme of the book, a philosophical analysis of the concept of morally responsible action. Here we adopt an account of moral agency worked out by Alasdair MacIntyre. Morally responsible action depends (initially) on the ability to evaluate one’s reasons for acting in light of a concept of the good. We then investigate the cognitive prerequisites for such action, among which we include a sense of self, the ability to predict and represent the future, and high-order symbolic language.
In chapter 7, “Neurobiological Reductionism and Free Will,” we bring to bear our argument to the effect that organisms are (often) the causes of their own behavior—the argument I have made briefly in this paper—together with our work on language, rationality, and responsibility, in order to make the claim to have eliminated one of the worries that seems to threaten our conception of ourselves as free agents, namely neurobiological reductionism—the worry that “my neurons made me do it.”
2 Nancey Murphy and Warren S. Brown, Did My Neurons Make Me Do It?: Philosophical and Neurobiological Perspectives on Moral Responsibility and Free Will (Oxford: Oxford University Press, 2007). Much of the content of this essay is excerpted from this book.
10 Donald T. Campbell, “‘Downward Causation’ in Hierarchically Organised Biological Systems,” in F. J. Ayala and T. Dobzhansky, eds., Studies in the Philosophy of Biology: Reduction and Related Problems (Berkeley and Los Angeles: University of California Press, 1974), 179–186.