Materialism, Mind, and Meaning: Warning, Spoilers Ahead

“I now ask this human being brave enough to stand next to me to pick two twinkling points of light in the sky above us. It doesn’t matter what they are, except that they must twinkle. If they don’t twinkle, they are either planets or satellites. Tonight we are not interested in planets or satellites… Now then: whatever heavenly bodies those two glints represent, it is certain that the Universe has become so rarefied that for light to go from one to the other would take thousands or millions of years…But I now ask you to look precisely at one, and then precisely at the other.”
“OK,” I said, “I did it.”
“Even if you’d taken an hour, something would have passed between where those two heavenly bodies used to be, at, conservatively speaking, a million times the speed of light.”
“What was it?” I said.
“Your awareness,” he said. “That is a new quality in the Universe, which exists only because there are human beings. Physicists must from now on, when pondering the secrets of the Cosmos, factor in not only energy and matter and time, but something very new and beautiful, which is human awareness….”
“I have thought of a better word than awareness,” he said. “Let us call it soul.”

–Kurt Vonnegut, Timequake, 1997

Introduction

In an earlier work, his 1981 reflections entitled Palm Sunday, Kurt Vonnegut describes his “ancestral religion” (the term dripping with irony) as the firm belief that no God had anything to do with the creation of the world. All of the moral worth to be found in the world is located squarely in the way we treat each other and our Earth, given what we know about the material nature of ourselves and our planet.

The passage quoted above, taken from the final pages of his final novel, Timequake, might therefore appear to be a typical late-life conversion from atheism to religion, or at least to agnosticism. I think neither is the case. Kurt Vonnegut came to believe, and may in fact always have believed, something I want to argue for here: that believing the world is a material place, and that human minds subsist in brains operating according to physical laws like everything else, leaves us, despite gut feelings to the contrary, with

• mysteries about what it is to possess a mind;

• wonder at the various fruits of human cognition; and,

• the belief that having a mind marks something special, and unusual, in the universe.

To argue for these conclusions, I’m going to discuss what I take to be the best scientific theory of what it is to have a mind. In doing so, we’ll get somewhat acquainted with open controversies among philosophers who try to understand and defend this theory.

A Million Times the Speed of Light

Many philosophers today believe something called “semantic externalism.” This view simply says that when we form beliefs about something or other, the content of our thought is the thing we are thinking about. The very thing itself, out there in the world, rather than some mental image or concept, is the subject of our mental state. A great deal of work needs to be done to reconcile this view with the fact that we often form beliefs about fictional things like Santa Claus and Captain James T. Kirk, but in broad contours, the view is widely held.

This view can explain what Vonnegut’s character, Kilgore Trout, is up to when he asks Vonnegut to look at two distant stars, themselves distant from one another. If the content of my belief “hey, that’s a nice star” is the location of the light source itself, and the same goes for the next star I look at, then the object that is “the content of my consideration” has moved faster than physics tells us anything can move.

I’m not going to defend semantic externalism, nor am I going to try to argue that this is in fact what Trout has in mind (the reader will no doubt find something fishy – sorry – with Trout’s argument). Rather, I’m going to start by telling you about a similar mystery of the mind that, according to some philosophers, is solved by the theory of mind I’ll be discussing.

Related to Trout’s mystery is this one: when a person thinks about a problem, they typically try to proceed from a group of facts or claims to other ones. Doing so requires being attentive to things like subtlety of meaning, and only making moves that are justified or defensible. Things like rocks, bodies of water, and other collections of mere matter don’t do any of these things. They merely get pushed around by the sum of the physical forces acting on them. They don’t do anything for reasons, or pay attention to the meanings of things.

Minds, it was claimed, are the only things that operate this way. Then the 20th century gave us a mathematical theory of what it is to be a computer, along with ever-cheaper computers: machines first found only at universities and large corporations, and now costing less than some sewing machines. Some have argued that with these developments, humankind had its first glimpse of a solution to this mystery.

A Very Brief History of Computers

It is common to believe that the computer came into existence around World War II; after all, the first construction of electronic computers, and their application to a concrete problem, were part of Allied codebreaking efforts. In fact, one of the key figures in the development and use of codebreaking computers, Alan Turing, invented the modern mathematical concept of the computer a few years before WWII, though for a very different purpose.

Turing was interested in a problem posed by the great German mathematician David Hilbert, known as the Entscheidungsproblem: is there a definite procedure by which it can be determined whether an arbitrary string of symbols is a theorem of pure logic?1 Turing showed that the answer is, in fact, no. This result will have importance for us later on.

Also, by investigating systems of logic related to Turing’s machines, mathematicians proved “soundness” results for certain domains. Soundness results showed that if you started with symbols that stood for true statements, and followed specified rules that mentioned only properties of the form of those symbols, then you would end up with other true statements. Computers are, it turns out, just machines that manipulate symbols according to rules that mention only properties of the form of those symbols.
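
To make the idea concrete, here is a minimal sketch in Python (my own toy illustration, not any historical system): a rule that derives new symbol strings purely by pattern matching on their form, yet – as a soundness result would guarantee – derives only truths when given truths.

```python
# Toy rule-following that mentions only the *form* of symbols, never
# their meaning. Conditionals are encoded as tuples ('if', P, Q).

def modus_ponens_closure(premises):
    """From 'P' and ('if', P, Q), keep deriving Q by pattern matching."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            if (isinstance(s, tuple) and s[0] == 'if'
                    and s[1] in derived and s[2] not in derived):
                derived.add(s[2])
                changed = True
    return derived

# The machine never consults what 'rain' or 'wet' mean; yet if the
# premises encode truths, everything derived encodes a truth too.
premises = {'rain', ('if', 'rain', 'wet'), ('if', 'wet', 'slippery')}
print(modus_ponens_closure(premises))  # includes 'wet' and 'slippery'
```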

Computers and the Mind

Philosophers of mind, reflecting on the proofs of these mathematicians and logicians, realized that computers might be the key to the problem of how material objects can think rationally. After all, computers do something that is very close to rational thinking: if you set them up right at the start, and then build in the right rules, they can (according to soundness proofs) behave as though they were taking into account things like meaning and truth. In other words, if you set up the program just right, then a computer can act as though it is sensitive to the meaning of the symbols it is manipulating.2

Now, even those philosophers who believe everything I’ve said so far think that the computer theory of mind is incomplete. Although it tells us how our brains, composed of regular old matter, can reason (namely, by being computers), it doesn’t give us any kind of hint as to how or why we have conscious experience. There are many famous arguments that the computer theory of mind can never answer these concerns, and that it is, therefore, a fundamentally flawed theory.

I don’t find these arguments convincing, though I’m not going to spend time here explaining why. For one thing, I concede that the computer theory of mind doesn’t have a solution to questions of consciousness. I’m not ready, though, to give up hope that it might be part of an as-yet-undeveloped theory that will answer such questions. Another reason these questions shouldn’t cause us to reject a computer theory of mind is that understanding consciousness is just as pressing for any materialist theory whatsoever.

Given the vast successes that humankind has had in understanding the natural world through a materialist lens, it is premature to allow questions of consciousness to overthrow the scientific paradigm. I mention these issues because it is important, and I will return to this later, to recognize that having a theory of mind doesn’t imply that we’ve answered all the questions we can pose about the relationship between mind and matter.

I’ve talked about two mysteries. One of them I think is more or less solved; the other I think is so far from being solved that it shouldn’t be allowed to spoil our theorizing about the mind. Given all this, you might be wondering how I have any hope of showing that human minds are unique in nature and mysterious in any way. After all, in our contemporary, technological world, what is more mundane than a computer?

Lady Lovelace’s Objection

In my own research, I’ve argued that Alan Turing was troubled by an objection to his theory that machines could think. Turing not only contributed to the mathematical and technical concepts of computers, but also gave a very influential account of what it would mean to say that machines can think. He thought that if a computer gave every indication of being a person through text-only communication to a judge who knew it might be a computer trying to fool them, then the computer deserved to be called a thinking being.

In his discussions of thinking things, though, Turing took very seriously the claim that no computer can think, whether or not we agree with his concept of how this would be tested (the procedure just described with a judge and computer communicating by text is now referred to as the “Turing test”).

In some places (but not all) he referred to the objection as “Lady Lovelace’s objection,” after Ada Lovelace, the 19th-century mathematician who collaborated with Charles Babbage on what would have been the world’s first programmable computer (if only more funds, and better-machined gear parts, had been available). Lovelace wrote, in her notes on Babbage’s Analytical Engine, that computers can’t think, because they (only) do what we tell them to.

Turing worried that when we look inside some computers, especially ones designed and used by people, we find instructions, given by a programmer, that set out what the machine will do in a very specific sense: the programmer can anticipate exactly what the computer will do in every circumstance. The computer inside the ATM at the bank is like this. Not only does it seem impossible for such a computer to be creative, but if it were to do something interesting, we would ascribe responsibility to its programmer, not to the computer itself.

Turing realized that an objector to the computer theory of mind might reason, from considerations like these, that no computer can ever produce creative, original work. Everything a computer does is preordained by its creator (this worry will be familiar, of course, to the theologically knowledgeable reader). Turing’s response to this worry has two parts.

The first has to do with an important mathematical result in the Turing paper mentioned earlier. Turing proved that there is no general method by which we can figure out whether an arbitrary string of symbols is a theorem of logic. He proved this by first showing that there is no general method for telling whether an arbitrary computer, on some input, will ever finish computing.3 Since the question of whether a computer will ever finish computing on a certain input can be expressed as the question of whether a string of symbols is a theorem, there can’t be any general method for deciding the latter, either.
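
To give the flavor of the argument, here is a sketch in Python of the diagonal construction now usually associated with this result (a modern reconstruction for illustration; Turing’s own 1936 proof was couched in his machine formalism):

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical general predictor: True iff the program, run on the
    given input, eventually finishes. Stubbed only so the sketch parses;
    Turing showed no correct, fully general version can exist."""
    raise NotImplementedError

def contrary(program_source: str) -> None:
    """A program built to defeat the predictor."""
    if halts(program_source, program_source):
        while True:       # predictor says "halts", so loop forever
            pass
    else:
        return            # predictor says "loops", so halt at once

# Run 'contrary' on its own source code: whatever 'halts' predicts,
# 'contrary' does the opposite. So no such 'halts' can be correct in
# every case.
```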

Turing’s result, now called “the unsolvability of the halting problem” (to “halt” is to finish computing), contains the seed of a solution to Lady Lovelace’s objection. It says that no matter how knowledgeable you are, and whatever tools or methods you have available, there is at least one computer that you could not predict, even if you were told exactly how the computer works and what its input is.

In other words, being a computer might seem to imply a determinism that rules out originality and creativity, but this is not in fact the case. Turing even imagined how we might build computers that were unpredictable even by people intimately familiar with how they worked.

So, we are in a slightly better position than we were a moment ago. The computer theory of mind leaves the door open for us to do unpredictable things that couldn’t be anticipated (short of miracles) by someone who knew exactly what “program” we were running.

The Problem of Downward Causation

In this section I will formulate a problem that has been posed against any materialist theory of mind. A contemporary philosopher who has developed and clarified this problem in a long series of writings is Jaegwon Kim.4 The problem goes like this: Suppose materialism is right, and physicists do discover the ultimate laws of the universe governing matter and energy. Now, of course, people depend on lots of other things besides physics to understand the world and make predictions – things such as geology, meteorology, and even literary criticism and gossip. But deep down we know that the laws at the level of things like atoms and photons are the real causes of things.

This view of the world combines two ideas that Kim has given technical terms for. First, the idea that physicists identify ultimate causes that make everything else happen is called “supervenience” (in more detail, the idea can be put this way: suppose God fixed the position and velocity of all the particles in the universe, and set the laws of physics to what they in fact are. Then God would have thereby produced all of the continents, tornados, deconstructionist poetics, and hearsay that exist). So, on this view, geology, meteorology, literary criticism, and gossip all supervene on physics.

The second idea is the “exclusion principle.” This says that if you come up with a complete account of what caused some physical event, then there can’t be some other cause of the event. Or, to put it in concrete terms, a collection of atoms and the forces they obey can’t cause an earthquake in addition to the earthquake’s being caused by high pressure in a fault line. On this principle, we may use terms like “fault lines” and “high pressure,” but these terms are merely helpful to us in making sense of the world and making predictions. They don’t name real causes.

Putting these two ideas together, the exclusion principle and supervenience, it seems that there is no place in the world for minds to cause events. We may appeal to beliefs and desires in explaining why people do what they do, but in fact there are only atoms, and the other exotica of physics in the void, bouncing against each other. Such a view threatens fundamental concepts of ourselves whether we are theologically inclined or not. It prevents us from understanding human beings as different from earthquakes, or different from arbitrarily circumscribed collections of particles behaving according to laws that don’t mention anything at all that we care about.

In the companion articles in this issue of The Global Spiral, two solutions to this problem are presented. I think they are both well-conceived in that there must be something wrong with the conclusion reached just above, that we never actually make things happen (like deciding to go to work in the morning) by thinking about it.

As you might expect, I think that the computer theory of mind offers a unique way to respond to the problem of downward causation. The next two sections may seem tangential, but they figure centrally in a proposal for solving the problem of downward causation with the computer theory of mind.

What are Computers, Really?

There are all kinds of things in the world that carry information. Stop signs, billboards, and pieces of paper in books all contain symbols that stand for things. So do computers. There is one major difference, though, between modern computers and these other examples in which symbols occur. Computers manipulate their symbols all by themselves. On the other hand, there is a similarity between computers and these other examples of objects that display information. The information they display is determined by interaction with the environment they are in. The reason a green light means “go” isn’t that that’s part of what green lights are; it has to do with societies, laws, and the behavior of people.

On the other hand, there are what we might call the “internal symbols” of computers. These are the microscopic bits of metal, fluctuating between magnetic or electrical states, that are interpreted by the computer itself as being a 1 or a 0. These objects don’t get their meaning by interacting with societies, laws, or people. Their meaning is defined by the proper functioning of the computer. If my laptop is working correctly, then the presence of magnetic fluctuations in its hard drive will be interpreted as things like instructions for what to do when certain inputs are received (such as signals from my mouse and keyboard).

For thirty years, computer scientists and philosophers have worried about the following problem: why should computer parts be treated any differently from stop signs or billboards? That is, what makes the bits of my hard drive stand for instructions, if not an act of interpretation by people?

Some have given a clear, if controversial, answer to this question: there isn’t any difference. I have two gripes with this position. First, it makes the whole idea of the computer theory of mind impossible. If being a computer is only a matter of interpretation, then that can’t be what it is that makes something a mind – since then everything (or nothing) would have a mind!

Also, this position leads to some very strange conclusions, ones its adopters have cheerfully embraced. On this view, pails of water, walls, and any other medium-sized (or larger) physical object implements every computer program! One popular presentation of this idea goes like this: consider the list of instructions that my laptop is running as I type this. Each part of my computer encodes particular instructions only because I interpret it to. Now consider the wall behind me. It has at least as many parts as my laptop (if we count paint molecules, for example). So, my wall is running all the same software that my computer is – or at least, it is if I choose to interpret it as such!
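
A toy sketch in Python makes the skeptic’s maneuver vivid (the mapping below is my own illustration of the general strategy, in the spirit of arguments associated with Putnam and Searle):

```python
# Given ANY object that passes through distinct states over time, we can
# cook up a mapping "interpreting" those states as the successive steps
# of any computation we like.

def trivial_interpretation(physical_states, computation_trace):
    """Pair each physical state with a computational state, by fiat."""
    assert len(physical_states) >= len(computation_trace)
    return dict(zip(physical_states, computation_trace))

# Ten arbitrary "wall states" (say, thermal microstates at ten instants):
wall_states = [f"wall-state-{t}" for t in range(10)]

# The execution trace of a program we care about -- a countdown, say:
trace = [("counter", n) for n in range(9, -1, -1)]

mapping = trivial_interpretation(wall_states, trace)
print(mapping["wall-state-0"])  # ('counter', 9): the wall "computed" it!
# The skeptic's point: nothing rules this mapping out except our refusal
# to take it seriously.
```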

There are Computers, Really.

Here’s a short response that can be given to a person who thinks that being a computer is just a matter of interpretation. Contemporary airplanes contain computers that figure prominently in their operation. Suppose I offer a person who thinks my (inert, painted) wall is a computer a choice: they can fly in an airplane that has a large chunk of my wall “plugged in” to it, or in another that has what I claim to be an “actual computer” plugged in. Surely they’ll choose the latter, thereby admitting that there is more to being a computer than interpretation.

While this response is compelling, I think the skeptic has an answer. They can claim that a thing’s suitability for flying a plane has to do with its non-computational properties, like having the right sort of plug and data storage for the needs of a plane. But I don’t think that answer settles the matter. The wall behind me isn’t a computer, no matter how hard I try to interpret it otherwise. Motivated by the gut feeling that objects are computers or not regardless of external interpretive attitudes, I’ve tried to figure out the best philosophical justification for this claim.

There has been a great deal of thinking and writing on this matter recently. In very broad terms, I’ve found two kinds of answers given. The first says that some things, but not others, have just the right sorts of parts, working with one another, to match up to a special “program description.” Only those things are computers.

The problem with this story, although it is intuitively appealing, is that only some computers have labels indicating what their parts are. I know that the CPU of my laptop is a different part than my hard drive because they detach easily from one another and have labels showing that they were manufactured by different companies. Neurons and parts of brains don’t have these features, nor do walls.

Another story starts with the idea that computers have parts that have a function – to put it another way, there is something that a CPU is for. To put it yet another way, there are identifiable circumstances in which a CPU breaks: it malfunctions if it gets too warm, for example. Philosophers say that when an object has certain behaviors in which it does what it is supposed to do, it has a proper function.

As we’ve said already, manufactured computers get their proper functions from their designers. But it is difficult to see how the parts of naturally occurring things, like us, could get proper functions. Of course, some theologians believe that God gives material things a proper function. Philosophers of biology, most notably Ruth Millikan, have argued, though, that evolution explains how things in the natural world can have a proper function without a designer. If some organisms gain selective, reproductive success because they have a part that performs in particular ways, then, some have argued, their descendants’ parts will have that performance as their proper function.5
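
Here is a toy simulation in Python of that selection story (my own illustration, not Millikan’s formalism): organisms whose part performs better leave more descendants, so later generations’ parts owe their presence to that performance.

```python
import random

random.seed(0)

# Each organism is reduced to one number: how well its "part" performs.
population = [random.random() for _ in range(100)]

for generation in range(50):
    # Differential reproduction: better-performing parts leave more
    # offspring, copied with a little mutation.
    parents = random.choices(population, weights=population, k=100)
    population = [min(1.0, max(0.0, p + random.gauss(0, 0.02)))
                  for p in parents]

print(f"average performance: {sum(population) / len(population):.2f}")
# The part's performance is *why* its bearers are here -- the sense in
# which that performance is the part's proper function.
```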

I won’t claim that this is the only view of proper function by philosophers of biology, nor that this view has no loose ends to be tied up. However, it gives us a suggestion as to how the computer theory of mind might capture a truth about human beings: perhaps our brains evolved to be computers, and as such have, as their proper function, to treat their own parts as implementing rules for manipulating symbols in ways that appear reasonable. This view has the elegant consequence that natural computers are very much like artefactual ones: both require design, so long as the cumulative effects of random change and differential reproductive success can be called design.

A Special Solution to a Special Problem

Now I can present a solution to the problem of downward causation. Suppose my computer opens an attachment, sent to me by a colleague, in OpenOffice. I can offer two distinct causal stories about what has happened. First, I could (in principle, though not in practice) enumerate all the particles that compose my computer, and their interactions with the rest of the world. I could then describe the opening of the attachment as causally necessitated by those facts.

On the other hand, I could discuss the program my computer is running, and the file that was sent, and explain that the program has the capacity to open attachments matching the file’s type. This would involve the claim that my computer, in undergoing the causal unfolding mentioned in the previous explanation, was doing what it was supposed to do. That is, I can describe the computer as accomplishing what it is supposed to do, and also say how it accomplished it.

The solution to the problem becomes a bit clearer when put in the technical terms mentioned before. Having a proper function is something that can’t be captured by describing where all the bits of something are right now, and how they are interacting with one another. Instead, having a proper function has to do with having the right kind of history. So, supervenience fails for computers. Similarly, the exclusion principle fails for computers. There can be two different explanations for why a computer does what it does since one may mention those things for which supervenience does hold, while the other may mention the things for which it does not.

In short, if the computer theory of mind is right, and our brains are computers, then we can believe that human beings, and their minds, are the result of entirely physical processes set in motion, perhaps, at the big bang – but that, unlike most things in the universe, we are subject to causal explanations that transcend mere mention of physical forces. Human minds are extremely unusual, special things: they are unique evolutionary products that have computational properties unparalleled by any other naturally occurring physical object. In fact, there still is no consensus on whether artificial computers can in principle be manufactured that have the same computational abilities as human brains.

Speculation

I promised to show that one could be a scientifically respectable materialist, and yet hold views that many think require some sort of theism. To review, I argued first that a materialist theory of mind

• leaves us with wonder at the fruits of human creativity.

I showed that because of what Turing proved, we should expect very complicated computers, such as ourselves, to be capable of surprising each other. The life of the mind is not rendered dull, boring, or predictable by being a material object behaving according to deterministic laws.

Then I claimed that the computer theory of mind provided a solution to the question of how it could be that people, and not the atoms that compose them, cause their own behavior – I answered the question “how can we be the authors of our actions, if we are composed of matter that follows physical law?” In other words, I said that despite being composed of matter,

• having a mind marks something special and unusual in the universe.

This is because objects with minds have behaviors that cannot be explained only as the result of physical causes interacting with one another. Unlike almost all other objects, they have behavior that can be called rational, due to their having evolved to implement extremely complicated computers.

I should add some caveats to forestall concerns that I’ve found often come up in discussions of the computer theory of mind. Nothing in the theory says that every computer, or even most computers, have minds. It says only that running the right program – which may require a degree of complexity beyond anything we can build – is enough to have a mind.

Now I’d like to briefly consider some open questions. Even if the computer theory of mind is correct, there are still significant mysteries that we haven’t even begun to address. I will argue for this by describing some of them, and sketching different answers that we can imagine but cannot yet assess.

I have said nothing at all about what makes human brains, the products of evolution operating according to physical principles, conscious. Here are a few questions that we don’t have an inkling of answers to yet:

• Is being a computer running a particular program enough to have conscious experience?

• If two computers run the same program, and one of them has thoughts, does the other one have the same thoughts?

A strange fact about computers is that they can be running very similar programs, yet doing very different things. For example, suppose I write a clever program for sorting a list of things into alphabetical order. Now suppose that two different people use my program: the first owns a dry-cleaning business, and sorts customers. The second runs a shipping company, and uses the program to sort destinations for packages. The two programs could even include symbols that are identical – such as “Grimsby.” In one case, this refers to a person; in the other, to a city in England.
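
The point is easy to make concrete (a toy sketch in Python; the names are invented for illustration):

```python
def alphabetize(items):
    """Sort strings into alphabetical order -- pure symbol shuffling."""
    return sorted(items)

# Nothing in the program's structure says what "Grimsby" names.
dry_cleaning_customers = ["Mirza", "Grimsby", "Aldous"]   # people
shipping_destinations = ["York", "Grimsby", "Dover"]      # towns

print(alphabetize(dry_cleaning_customers))  # "Grimsby" = a customer
print(alphabetize(shipping_destinations))   # "Grimsby" = a city
```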

Some philosophers have argued that because there is no way to determine what a symbol refers to from the structure of the computer program it occurs in, the computer theory can’t be right about the mind. This seems a little rushed to me.

Here is a different answer, consistent with both the example just given and the general principle extracted from it. Being a computer program of some complicated type may imply being a mind, but it does not determine what that mind is thinking about. What this means is that two people might “run the same program,” yet be thinking about different objects. In fact, this view is strongly implied by semantic externalism.

We don’t yet have any idea why some physical processes give rise to consciousness. Some philosophers, in trying to make progress on this question, have suggested that the character of our experience is a direct result of the content of our experience. So if I am thinking about a tree in front of me, the conscious experience I have is determined by the tree and my relation to it. Semantic externalism, along with “representationism about consciousness,” implies that if the computer theory of mind is right, then the conscious experience a person has is not fixed by which program they are running.

Representationism and semantic externalism do not disprove the computer theory of mind. What would disprove the computer theory of mind would be a demonstration that one computer, running a program, had experiences, while another computer running the same program did not merely fail to have the same experiences, but had no experiences at all. To summarize: the computer theory of mind can allow that people running the same program have the same experiences, or different experiences, but not that one has experiences and the other has none – for then the theory would fail to describe the conditions under which we have experiences.

Given that we have no idea which properties of matter give rise to experience, I have no idea how such a counterexample could be constructed. I hope to have shown, though, that since the questions I have raised about human consciousness could be answered in different ways, the theory – even granting that it is correct – leaves open mysteries concerning the nature of human minds.

Trout, Revisited

Kilgore Trout claims that a complete understanding of physics leaves something very precious out of an account of nature. In particular, it leaves out the human mind. I’ve tried to show that Trout’s claim is correct in a number of ways. First, an understanding of the world in physicalist terms contains a danger: that we will expect the surprise and delight we take in each other to be somehow spoiled, or made impossible.

Second, a physicalist understanding of the world threatens to make us think that there is only one story to tell about why things happen. I think that the computer theory of mind gives us a clear, unique way to see why this conclusion fails in some cases, including, possibly, the case of ourselves.

Finally, having a mind means having special referring relations to the world, and conscious experience of it. It is mysterious why this is the case, and precious that it is.


Endnotes

1 Theorems of pure logic can be thought of as sentences that are true regardless of either how they are interpreted or how the world turns out to be. For example, the statement, “if it is raining then it is raining” is true whether or not it is raining, or even regardless of what “raining” refers to.

2 Here I am expanding on John Haugeland’s “formalist’s motto”: “if you take care of the syntax, the semantics will take care of itself.”

3 Of course, there are many examples of computers and inputs to them that are eminently predictable. What Turing showed was that there is no single method that will always be successful at this task no matter which computer and inputs it is asked to predict.

4 See, for example, Jaegwon Kim, “Explanatory realism, causal realism, and explanatory exclusion”, Midwest Studies in Philosophy, 12:225–240, 1988, and Jaegwon Kim, “Explanatory Exclusion and the Problem of Mental Causation,” in: MacDonald, (ed.), Philosophy of Psychology: Debates on Psychological Causation. Oxford: Blackwell, 1995.

5 For a presentation of these ideas in the context of proper psychological function, see for example Ruth Garrett Millikan, “Truth rules, hoverflies, and the Kripke-Wittgenstein paradox”, The Philosophical Review, 99(3):323–353, 1990.


Bibliography

Jaegwon Kim, “Explanatory realism, causal realism, and explanatory exclusion”, Midwest Studies in Philosophy, 12:225–240, 1988.

Jaegwon Kim, “Explanatory Exclusion and the Problem of Mental Causation,” in: MacDonald, (ed.), Philosophy of Psychology: Debates on Psychological Causation. Oxford: Blackwell, 1995.

Ruth Garrett Millikan, “Truth rules, hoverflies, and the Kripke-Wittgenstein paradox”, The Philosophical Review, 99(3):323–353, 1990.