Nonreductive Physicalism and Free Will
First I wish to extend my thanks to the Metanexus Institute for planning this conference, for locating it in my favorite European country, and in particular for inviting me to be on the program with the person I’ve long believed to be the most sophisticated scholar writing on the free-will problem.
In this presentation I shall first briefly outline the history of Christian scholarship arguing against dualism and in favor of a physicalist anthropology, along with even more abbreviated comments on issues in Judaism and Islam. Then I turn to the distinction between reductionist and anti-reductionist forms of physicalism. I claim that reductionism in general has been one of the most significant assumptions of the modern worldview; we are only in this generation working out suitable nonreductive understandings of complex phenomena. The developments here involve definitions of downward causation and of emergence, and the development of a new set of concepts for describing complex dynamical systems.
The major focus of the paper will be on the most difficult aspect of distinguishing nonreductive from reductive physicalism, that of free will. While I shall not be able to provide here a full treatment of free will, I shall argue, first, that there is no such thing as the free-will problem; it is an anachronistic reading of philosophical history to assume that there is a single problem. What many of the assorted free-will problems do have in common is the opposition of free will to determinism of some sort or another. The sort of determinism that is of particular interest to physicalists is neurobiological determinism.
I shall argue, however, that neurobiological determinism is only a worry if neurobiological reductionism is true. The latter decidedly is not true, as I shall attempt to show in the brief time allotted. In making my argument I shall note briefly the contrast between my approach and that of Robert Kane in his influential book, The Significance of Free Will.1
2 Science, Biblical Studies, and Physicalism
To many a reader of today’s media it would appear that Christians have once again bowed to the authority of science; they are renouncing the dualist anthropology that has characterized their teaching from the beginning, in order to adopt the physicalism that is consistent with current science, particularly cognitive neuroscience. It seems to be the case that only those of us who attended seminary sometime in the late twentieth century, and, more precisely, a seminary of a liberal sort, are aware of the fact that the dualism-physicalism issue is already a century old in Christian biblical studies and church history.
In 1911, biblical scholar H. Wheeler Robinson argued persuasively that writers of the Hebrew scriptures were not dualists; their concept of human nature was monistic.2 Later translators read dualism back into the texts by employing, first, Greek anthropological terms, and then later translating these Greek terms into modern languages as they had been understood by Greek philosophers. By the middle of the twentieth century it was commonplace to argue that New Testament authors also presupposed a monistic and physicalist account of human nature. Nonetheless, already in the second century, dualism began to appear in Christian teaching. The Epistle to Diognetus (written in approximately 130) described humans as possessing an immortal soul. By the time of Augustine, in the early fifth century, dualism of a modified Platonic sort was taken as the orthodox position.
Contemporary Jewish scholars appear to be divided on the question of dualism versus physicalism. A persuasive book, though, is Neil Gillman’s The Death of Death: Resurrection and Immortality in Jewish Thought.3 Gillman argues that the only conception of human nature that fits comfortably with the Jewish understanding of life and of Jews’ relation to God is a physicalist account, along with an emphasis on afterlife understood in terms of bodily resurrection. I had the opportunity to lecture on this topic in Iran (thanks, again, to Metanexus Institute). I found that all of the Muslim scholars I addressed there were either dualists or else they held a more complex tri-partite account. Nonetheless, there are ample precedents in the history of Islam for a physicalist account of human nature.
So the Abrahamic faiths have plenty of historical precedent for accepting a physicalist account of human nature, and it can be argued that in so doing they are not bowing to science at all, but rather recovering a more authentic version of their own early teachings.4 What all religious believers need to worry about, however, is the extent to which a physicalist ontology is believed to entail a reductionistic account of human life. In the (in)famous words of Francis Crick: “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules.”5 As already noted, the aspect of this reductionist view that I shall address here is the question of free will.
3 Background to the Free-Will Problem
There have been numerous sources of worry about free will: divine determinism; divine foreknowledge; social determinism; and a variety of forms of physical determinism, based on the roles of physics, genetics, and neurobiology. I shall not address the theological problems. Social determinism appeared to be a significant problem during the behaviorist era, but today physical determinism is the main focus of the philosophical literature. Although genetic determinism is treated as a legitimate worry by some, most of us are aware that there is little, if anything, in human life that is strictly determined by the genes, and also that the human genome packs nowhere near enough information to determine the precise “wiring” of an individual’s brain.
Much of the philosophical literature focuses on what I shall argue is an “indeterminate” sort of determinist worry. Until recently it was thought to be a truism that all events in the physical world are determined by prior physical causes. Now the standard view is that all except certain quantum events are deterministic. Most of the current philosophical literature is structured by the compatibilist-libertarian debate. All agree that if determinism is true, then all human choices are determined by prior causes. Compatibilists argue that determinism may well be true, but it is a conceptual error to suppose that this rules out free will. Libertarians argue that free will requires that our choices, somehow, not be determined. A variety of authors agree that this debate has reached a stalemate. For example, Galen Strawson, in his article on free will in the new Routledge Encyclopedia of Philosophy, sees little chance of progress in settling this issue: “The principal positions in the traditional metaphysical debate are clear. No radically new option is likely to emerge after millennia of debate.”6
Now, I want to try to shake up this stalemate in two ways: first, by calling into question the general determinist thesis. There is now no consensus on what the concept of determinism amounts to. For the Epicureans, determinism was in nature itself. After the invention of the concept of laws of nature we have to distinguish between claims that things or events in nature determine subsequent events and the claim that the laws of nature are deterministic. But much has changed during the modern period. The concept of a law of nature began as a metaphor: God has laws for human behavior and for non-human nature. While it was thought that nature always obeyed God’s laws, God presumably could change or override his own laws. By Laplace’s day the laws of nature were thought to be necessary. But today with multiple-universe cosmologies and reflection on the anthropic issue there is much room, again, to imagine that the laws of our universe are contingent: It can be asked why the universe has laws and constants, from within a vast range of possibilities, that belong to a very small set that permit the evolution of life.
Jeremy Butterfield argues that the only clear sense to be made of determinist theses is to ask whether significant scientific theories are deterministic. This is more difficult than it first appears, however. It may appear that the determinism of a set of equations is simply the mathematical necessity in their transformations and their use in predictions of future states of the system. One problem, though, is that “there are many examples of a set of differential equations which can be interpreted as a deterministic theory, or as an indeterminate theory, depending on the notion of state used to interpret the equations.”7
Second, even if a theory is deterministic, no theories apply to actual systems in the universe because no system can be suitably isolated from its environment. The only way around this problem would be to take the whole universe as the system in question. If the idea of a theory that describes the relevant (essential, intrinsic) properties of the state of the entire universe and allows for calculation of all future states is even coherent, it is wildly speculative.
A third problem, argued by Alwyn Scott, is the fact that many important theories dealing with higher levels of complexity (such as those governing the transmission of nerve impulses) can be shown not to be derivable from lower-level theories, and especially not from quantum mechanics.8
So the general claim that all natural events (apart from quantum events) have deterministic causes is too vague to play the role that it so often has in the free-will literature. It is necessary to specify what is supposed to be determined by what. And a legitimate worry is that human thought and behavior are determined by neurobiological processes or laws. For instance, the Hodgkin-Huxley laws that describe the transmission of nerve impulses are strict (deterministic) laws.
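For readers who want to see what such a “strict” law looks like in practice, here is a minimal sketch, not part of the original argument, of integrating the Hodgkin-Huxley equations with the standard 1952 parameter values (the function names and numerical choices such as step size are my own for the illustration). The point it makes concrete is the determinism: the same initial state and input current always yield exactly the same voltage trajectory.

```python
import math

# Standard Hodgkin-Huxley (1952) parameters; V is displacement from rest in mV.
C = 1.0                               # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3     # maximal conductances, mS/cm^2
E_NA, E_K, E_L = 115.0, -12.0, 10.6   # reversal potentials, mV

def rates(V):
    """Voltage-dependent rate constants for the gating variables m, h, n."""
    a_m = 0.1 * (25.0 - V) / (math.exp((25.0 - V) / 10.0) - 1.0)
    b_m = 4.0 * math.exp(-V / 18.0)
    a_h = 0.07 * math.exp(-V / 20.0)
    b_h = 1.0 / (math.exp((30.0 - V) / 10.0) + 1.0)
    a_n = 0.01 * (10.0 - V) / (math.exp((10.0 - V) / 10.0) - 1.0)
    b_n = 0.125 * math.exp(-V / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(I_ext=10.0, dt=0.01, steps=2000):
    """Euler integration from a fixed initial state; returns the voltage trace."""
    V, m, h, n = 0.0, 0.05, 0.6, 0.32   # approximate resting-state values
    trace = []
    for _ in range(steps):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
        I_ion = (G_NA * m**3 * h * (V - E_NA)
                 + G_K * n**4 * (V - E_K)
                 + G_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(V)
    return trace

# Strictness in action: two runs from identical conditions are identical.
assert simulate() == simulate()
```

Nothing in these equations leaves room for alternative outcomes given the same state, which is precisely why they feed the determinist worry discussed in the text.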
We can see that one obvious route to escape neurobiological determinism would be to show that indeterministic quantum events play a role in neural processes. This is what Kane did so elegantly in his Significance of Free Will. But at this point I have chosen to take a different route. I shall argue that the worry about neurobiological determinism is misplaced. Rather, what we need to worry about is neurobiological reductionism, and the antidote to reductionism, in general, is the recognition of what has been called in the literature downward causation or whole-part constraint. Causal reductionism presupposes the notion of the hierarchy of complex systems, such that higher-level systems are composed of lower-level parts. Causal reductionism, then, is the thesis that all causation is “bottom-up,” from part to whole. Downward causation is so called because it represents the claim that the whole has reciprocal effects or constraints on its parts.
4 Downward Causation
My argument, in brief, will be the following: Neurobiological determinism is only a threat to free will if neurobiological reductionism is true, that is, if our thoughts and behavior are entirely determined by neurobiological processes (or biological processes more generally). This would be an instance of bottom-up causation. However, causal reductionism in general has been called into question by philosophers and scientists in the past generation. The most cogent arguments against causal reductionism are those showing that in many complex systems the whole has reciprocal effects on its constituents. If it can be shown that organisms, in general, impose downward constraints on their own parts, including their neural systems, then the reductionist threat to free will is defused. However, more will need to be said about the differences between genuine human free will and the partial causal autonomy of other complex organisms.
I believe I can take it for granted that this audience is familiar with arguments for downward causation, largely because of Arthur Peacocke’s work. Thus, I shall be rather brief. First I want to show why causal reductionism seemed unavoidable in early modern physics. But when we recognize that all of those early assumptions have been called into question the reductionist dogma loses some of its grip on the imagination. Next I shall present some recent developments in the understanding of downward causation.
Reductionism was the apparently necessary outcome of combining the atomism that early modern physicists such as Pierre Gassendi took over from the Epicureans with the notion of deterministic laws of physics. Early modern atomism consisted of the following theses: First, the essential elements of reality are the atoms. Second, atoms are unaffected by their interaction with other atoms or by the composites of which they are a part. Third, the atoms are the source of all motion and change. Fourth, insofar as the atoms behave deterministically (the Epicureans countenanced spontaneous “swerves,” but Laplace and his followers did not) they determine the behavior of all complex entities. Finally, in consequence, complex entities are not, ultimately, causes in their own right.
When modern scientists added Newtonâ€™s laws of motion it was then reasonable to assume that these deterministic laws governed the behavior of all physical processes. In our terms, all causation is bottom-up (causal reductionism) and all physical processes are deterministic because the ultimate causal players (the atoms) obey deterministic laws. The determinism at the bottom of the hierarchy of the sciences is transmitted to all higher levels.
The tidy Laplacean worldview has fallen apart in more ways than I can catalogue here. Atoms modeled as tiny solar systems have given way to a plethora of smaller constituents whose ‘particle-ness’ is problematic. It is unknown whether these will turn out to be composed of even stranger parts such as strings. The original assumption that the elementary particles are unaffected by their interactions has certainly been challenged by the peculiar phenomenon of quantum nonlocality. Particles that have once interacted continue to behave in coordinated ways even when they are too far apart for any known causal interaction in the time available. Thus, measuring or otherwise tampering with one particle affects its partner, wherever it happens to be. The thesis of this section of my paper is that when we consider parts from levels of complexity above the atomic and sub-atomic, the possibilities for the whole to effect changes are dramatic, and the notion of a part shifts from that of a component thing to a component process or function.
Scientific ideas about the ultimate source of motion and change have gone through a complex history of changes. For the Epicureans, atoms alone were the source of motion. An important development was Newton’s concept of inertia: a body will remain at rest or continue in uniform motion unless acted upon by a force. In Newton’s system, initial movement could only be from a first cause, God, and the relation of the force of gravity to divine action remained for him a problem. Eventually three other forces, electromagnetism and the strong and weak nuclear forces, were added to the picture. Big-bang cosmology played a role, too. The force of the initial explosion plays a significant part in the causes of motion, and it is very much an open question whether there can be an explanation of that singularity.
And finally, there is the problem mentioned above, that we no longer know how to define determinism. So we might say that the assumption of complete bottom-up determinism has had the rug pulled out from under it.
Now I shall give just a brief overview of familiar work on downward causation and then add a few recent developments. Donald Campbell and Roger Sperry both used the term “downward causation” in the 1970s. Sperry often spoke of the properties of the higher-level entity or system overpowering the causal forces of the component entities.9 Campbell’s work has turned out to be more helpful. Here there is no talk of overpowering lower-level causal processes, but instead a thoroughly non-mysterious account of a larger system of causal factors having a selective effect on lower-level entities and processes. Campbell’s example is the role of natural selection in producing the remarkably efficient jaw structures of worker termites.10
While downward causation is often invoked in current literature in psychology and related fields, until recently it received little attention in philosophy after Campbell’s essay was published in 1974. In 1995 Robert Van Gulick spelled out in more detail an account based on selection. The reductionist’s claim is that the causal roles associated with special-science classifications are entirely derivative from the causal roles of the underlying physical constituents. Van Gulick replies that even though the events and objects picked out by the special sciences are composites of physical constituents, the causal powers of such an object are not determined solely by the physical properties of its constituents and the laws of physics. They are also determined by the organization of those constituents within the composite. And it is just such patterns of organization that are picked out by the predicates of the special sciences. These patterns have downward causal efficacy in that they can affect which causal powers of their constituents are activated. “A given physical constituent may have many causal powers, but only some subsets of them will be active in a given situation. The larger context (i.e. the pattern) of which it is a part may affect which of its causal powers get activated. . . . Thus the whole is not any simple function of its parts, since the whole at least partially determines what contributions are made by its parts.”11
Such patterns or entities are stable features of the world, often in spite of variations or exchanges in their underlying physical constituents. Many such patterns are self-sustaining or self-reproducing in the face of perturbing physical forces that might degrade or destroy them (e.g. DNA patterns). Finally, the selective activation of the causal powers of such a pattern’s parts may in many cases contribute to the maintenance and preservation of the pattern itself. Taken together, these points illustrate that “higher-order patterns can have a degree of independence from their underlying physical realizations and can exert what might be called downward causal influences without requiring any objectionable form of emergentism by which higher-order properties would alter the underlying laws of physics. Higher-order properties act by the selective activation of physical powers and not by their alteration.”12
A likely objection to be raised to Van Gulick’s account is this: The reductionist will ask how the larger system affects the behavior of its constituents. To affect a constituent must be to cause it to do something other than it would otherwise have done. Either this is causation by the usual physical means or it is something spooky. If it is by the usual physical means, then those interactions must be governed by ordinary physical laws, and thus all causation is bottom-up after all.
The next (and I believe the most significant) development in the concept of downward causation is in the work of Alicia Juarrero.13 She describes the role of the system as a whole in determining the behavior of its parts in terms similar to Van Gulick’s account of the larger pattern or entity selectively activating the causal powers of its components. Juarrero says:
The dynamical organization functions as an internal selection process established by the system itself, operating top-down to preserve and enhance itself. That is why autocatalytic and other self-organizing processes are primarily informational; their internal dynamics determine which molecules are “fit” to be imported into the system or survive.14
She addresses the crucial question of how to understand the causal effect of the system on its components. Her answer is that the system constrains the behavior of its component processes. The concept of a constraint in science suggests “not an external force that pushes, but a thing’s connections to something else by rods . . . and the like as well as to the setting in which the object is situated.”15 More generally, then, constraints pertain to an object’s connection with the environment or its embeddedness in that environment. They are relational properties rather than primary qualities in the object itself. Objects in aggregates do not have constraints; constraints only exist when an object is part of a unified system.
From information theory Juarrero employs a distinction between context-free and context-sensitive constraints. In successive throws of a die, the numbers that have come up previously do not constrain the probabilities for the current throw; the constraints on the die’s behavior are context-free. In contrast, in a card game the constraints are context-sensitive: the chances of drawing an ace at any point are sensitive to history:
assume there are four aces in a fifty-two card deck, which is dealt evenly around the table. Before the game starts each player has a 1/13 chance of receiving at least one ace. As the game proceeds, once players A, B, and C have already been dealt all four aces, the probability that player D has one automatically drops to 0. The change occurs because within the context of the game, player D’s having an ace is not independent of what the other players have. Any prior probability in place before the game starts suddenly changes because, by establishing interrelationships among the players, the rules of the game impose second-order contextual constraints (and thus conditional probabilities).
. . . [N]o external force was impressed on D to alter his situation. There was no forceful efficient cause separate and distinct from the effect. Once the individuals become card players, the conditional probabilities imposed by the rules and the course of the game itself alter the prior probability that D has an ace, not because one thing bumps into another but because each player is embedded in a web of interrelationships.16
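Juarrero’s card-game point can be restated in elementary probabilistic terms. The sketch below is my own illustration, not her formalism, and the function names are invented for the example. It contrasts a context-free constraint, where history is irrelevant, with a context-sensitive one, where the relationships established by the dealing of the cards change the conditional probabilities without any force being impressed on player D.

```python
# Context-free constraint: successive throws of a fair die. The chance of a
# six is 1/6 no matter what has come up before; history is irrelevant.
def p_six(history):
    return 1 / 6

# Context-sensitive constraint: drawing from a deck without replacement.
# The chance that the next card is an ace depends on the entire history
# of prior draws -- on the card's embeddedness in the game.
def p_ace(drawn):
    aces_left = 4 - sum(1 for card in drawn if card == "ace")
    cards_left = 52 - len(drawn)
    return aces_left / cards_left

# Before any cards are dealt, the chance that a given card is an ace is 4/52.
assert abs(p_ace([]) - 4 / 52) < 1e-12

# Once the 39 cards dealt to players A, B, and C contain all four aces, the
# probability that a remaining card is an ace drops to zero. No force was
# impressed on player D; only the web of interrelationships changed.
dealt_to_others = ["ace"] * 4 + ["other"] * 35
assert p_ace(dealt_to_others) == 0.0

# The die, by contrast, is unmoved by its history.
assert p_six([]) == p_six(["six", "six", "six"])
```

The shift from the prior probability to zero happens purely because of the relational structure the game imposes, which is the sense in which constraints here are informational rather than forceful.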
Alwyn Scott, a specialist in nonlinear mathematics, states that a paradigm change (in Thomas Kuhn’s sense) has occurred in science beginning in the 1970s. He describes nonlinear science as a meta-science, based on recognition of patterns in kinds of phenomena in diverse fields. This paradigm shift amounts to a new conception of the very nature of causality.17
The picture presented here is of a world in which many systems come into being, preserve themselves, and adapt to their environments as a result of a tri-level process. Lower-level entities or systems manifest or produce variation; higher-level structures select or constrain the variation. Note that with the recognition of the role of downward causation in organisms’ behavior, the question of determinism or indeterminism at the lower levels of the hierarchy of complexity is irrelevant. Downward causation can select among variants produced either deterministically or indeterministically at the lower level. For example, in evolution the lower level (the level of the genes) produces a range of variants, and the higher level, the environment, selects among them. It happens that some genetic variation comes from mutations involving genuinely indeterministic processes and some from macro-level deterministic processes. So, in general, there are two parts to causal stories of this sort: first, how the variants are produced, and second, the basis upon which and the means by which the selection takes place. A fairly insignificant part of the story is whether the lower-level processes that produce the variants are deterministic or indeterministic.
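The tri-level picture can be made concrete with a toy variation-and-selection model. The sketch below is my own illustration (all names, parameters, and the fitness rule are invented for the example): a lower level supplies variants either deterministically or stochastically, and one and the same higher-level selection function chooses among them in both cases. Selection is indifferent to how the variants were produced.

```python
import random

TARGET = 42  # a stand-in "environment": fitness is closeness to this value

def fitness(x):
    return -abs(x - TARGET)

def select(variants):
    # Higher-level selection: the environment picks among whatever variants
    # the lower level supplies, indifferent to how they were produced.
    return max(variants, key=fitness)

def deterministic_variants(x):
    # Lower level, deterministic: a fixed spread around the current value.
    return [x - 2, x, x + 2]

def indeterministic_variants(x, rng):
    # Lower level, indeterministic: random mutations. The current value is
    # kept among the candidates, so selection can never make things worse.
    return [x] + [x + rng.randint(-3, 3) for _ in range(4)]

def evolve(variant_source, generations=50):
    x = 0
    for _ in range(generations):
        x = select(variant_source(x))
    return x

# The deterministic route climbs straight to the target...
assert evolve(deterministic_variants) == TARGET

# ...and the stochastic route, under the very same selection step, improves
# generation by generation toward the same niche.
rng = random.Random(0)
result = evolve(lambda x: indeterministic_variants(x, rng))
assert fitness(result) >= fitness(0)
```

The selection function never inspects the origin of the variants, which is the sense in which the determinism or indeterminism of the lower level is “a fairly insignificant part of the story.”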
It is likely that brain processes depend significantly on indeterministic quantum-level events. But for addressing the issues of neurobiological reductionism and free will we do not need to know whether this is true. Thus, I conclude, the long and tedious debate between libertarians and compatibilists is focusing entirely on the wrong issue. The issue is not determinism versus indeterminism, but rather reduction versus downward causation.
5 Fleshing out Free Will
What I have done so far is to rebut one of the most serious contenders for an argument against free will. In the process I have called into question the rationale for couching the debate in terms of compatibilism versus incompatibilism or libertarianism. Also, I pointed out above that an account of free will really requires an investigation of what the differences are that lead us to attribute free will to mature humans while we do not attribute it to animals or even to small children.
There are a number of valuable contributions to be found in the long history of the free-will literature. For example, a major tradition defines free will as being able to act for a reason. There are also various accounts of free will defined as autonomy, and these are distinguished by the authors’ perceptions of the greatest threats to human autonomy. One threat, of course, is the threat of external control, but there are also various internal factors such as passions and appetites. A final example: recently Harry Frankfurt has helpfully distinguished first-order and second-order desires, and claimed that we are free when we have the second-order desire to have our own first-order desires. For instance, if I have a desire for revenge, but my higher-order desire is not to have this first-order desire, then I am not free.
Warren Brown and I approached this set of issues in our book Did My Neurons Make Me Do It?18 by adopting Alasdair MacIntyre’s account of morally responsible action, and arguing that if one has the capacities MacIntyre describes for moral responsibility, this is the equivalent of having free will. Of course, definitions are debatable, and we are open to having our work dismissed on the grounds that we have simply side-stepped the issue. Nonetheless, allow me to present our position.
MacIntyre describes the capacity for morally responsible action as the ability to evaluate that which moves one to action in light of a concept of the good. Spelling this out more fully, MacIntyre says:
as a practical reasoner I have to be able to imagine different possible futures for me, to imagine myself moving forward from the starting point of the present in different directions. For different or alternative futures present me with different and alternative sets of goods to be achieved, with different possible modes of flourishing. And it is important that I should be able to envisage both nearer and more distant futures and to attach probabilities, even if only in a rough and ready way, to the future results of acting in one way rather than another. For this both knowledge and imagination are necessary.19
Brown and I claim that the person who can do all of this is in possession of free will. Notice that this incorporates the ingredients in concepts of free will drawn from the literature. One ingredient is what we call self-transcendence, the ability to make oneself the object of observation, reflection, and evaluation. This is what Frankfurt was calling attention to in his recognition of our ability to evaluate our own desires. MacIntyre broadens this insight to include an evaluation of all of the sorts of factors that shape our actions. He notes the role of sophisticated language in enabling this ability. In order to evaluate a motive for acting, one must be able to formulate sentences complex enough not only to describe the motive, but also to state an evaluation of the motive so described.
A second ingredient, of course, is reason: not the mere reasonableness of higher animals, but the ability to enunciate principles against which to judge our own lower-level cognitions and motivations. Regarding autonomy, MacIntyre focuses on development of the ability to form our own moral judgments independent of social conformity; that is, not only the ability to evaluate our motives in light of social norms, but also to evaluate social norms themselves. This is an instance of third-order self-transcendence.
From MacIntyre’s description of morally responsible action, Brown and I extracted the following cognitive components:
1. A symbolic sense of self (as MacIntyre says, the ability to imagine “different possible futures for me”).
2. A sense of the narrative unity of life (“to imagine myself moving forward from . . . the present”; “nearer and more distant futures”).
3. The ability to run behavioral scenarios (“imagination”) and predict the outcome (“knowledge”; “attach probabilities . . . to the future results”).
4. The ability to evaluate predicted outcomes in light of goals.
5. The ability to evaluate the goals themselves (“alternative sets of goods . . . different possible modes of flourishing”) in light of abstract concepts.
6. The ability to act in light of 1 through 5.
Because each of these capacities is complex, and because each of them has to be presupposed in order to argue for humans’ morality and free will, Brown and I devoted three chapters to analyzing the capacities for self-reflection, symbolic language, and reasoning. The pattern in each chapter was to begin with precursors of each capacity as found in lower animals, and then to reflect on the stages of complexification needed to reach adult human capacities.
The goal of our book was to show that increased understanding of cognitive neuroscience is not only not a threat to traditional ideas of higher human capacities, but rather that such increased knowledge helps us to understand how these capacities emerge from our complex neural systems, enmeshed in our natural and social environments.
I shall sketch out one example. The capacities for self-evaluation and for imagining one’s own future behavior both depend on the more basic capacity to form a self-concept.