H+: Ship of Fools: Why Transhumanism is the Best Bet to Prevent the Extinction of Civilization

Transhumanism is the thesis that we can and ought to use technology to alter and improve human biology.1 Some likely targets for the technological makeover of human nature include making ourselves smarter, happier, longer-lived and more virtuous. The operative assumption, of course, is that intelligence, moods, longevity and the virtues each have deep roots in our biology. By altering that biology, transhumanists propose to improve human nature to the point of creating a new genus: posthumans.2,3 Notice that transhumanism encompasses a moral thesis. It does not say that we will create posthumans; rather, it makes a moral claim: we ought to create posthumans.4 The hint of an argument based on the accrual of moral benefits is perhaps obvious from what has been said: to the extent that we value the development of intellectual, emotional and moral virtue5, becoming posthuman is imperative. I won’t pursue that line of argument directly here. Rather, I want to explore the objection that transhumanism is an ill-advised experiment because it puts us at unnecessary risk. My reply will be that creating posthumans is our best bet for avoiding harm. In a nutshell, the argument is that even though creating posthumans may be a very dangerous social experiment, it is even more dangerous not to attempt it: technological advances mean that there is a high probability that a human-only future will end in extinction.

1.  Unprecedented Dangers of 21st Century Technologies

In a widely read piece, “Why the Future Doesn’t Need Us”, Bill Joy argues that one of the main differences between previous technologies and 21st century technologies is the possibility of self-replication.6 Another relevant aspect of 21st century technologies is that they leave very little industrial footprint. For example, it is reasonably easy to monitor which countries are part of the nuclear club with the aid of spy satellites. The industrial infrastructure necessary to make nuclear bombs is so large that a country has to go to extraordinary lengths to hide its activities should it wish to keep a nuclear development program secret.

Not so with genetic technologies. True, it helps to have millions of dollars in equipment and a well-trained research team to conduct genetic experiments, but it is not necessary. Even as I write this, private citizens are using genetic technologies in their basements and garages with no government oversight. This burgeoning movement is referred to as ‘biohacking’. For a few thousand dollars and a small room to work in, one can become a biohacker. A recent article in the Boston Globe explains:

The movement is getting much of its steam from synthetic biology, a field of science that seeks to make working with cells and genes more like building circuits by creating standardized biological parts. The dream, already playing out in the annual International Genetically Engineered Machine competition at MIT, is that biology novices could browse a catalog of ready-made biological parts and use them to create customized organisms. Technological advances have made it quite simple to insert genes into bacteria to give them the ability to, for example, detect arsenic or produce vitamins.7

In some ways this is a feel-good story, in that it promises the democratization of science. Just as computer do-it-yourselfers started to democratize the computer industry in the 1970s, so too will genetic do-it-yourselfers democratize the biological sciences. However, the potential downside is noted in the same article: “But the work also raises fears that people could create a deadly microbe on purpose, just as computer hackers have unleashed crippling viruses or broken into government websites.” Worries here are fueled by the fact that information about how to construct novel pathogens in animal models is openly published. Little original insight would be needed to apply the same strategies to constructing novel human pathogens.8

The analogy with computer hacking is, in some ways, apt. We are all familiar with computer hackers taking down our favorite websites, or with a virus-infected computer slowing to a crawl. On the other hand, the analogy fails to convey the magnitude of the risk posed by biological viruses designed by biohackers. I can live without my computer or my favorite website (at least for a while, even if I wouldn’t be very happy), but a biohacker who creates a pathogen or a series of pathogens may wipe out human civilization.

Sometimes it is suggested that there are always survivors when a virus or some other pathogen attacks a population, and so even the worst form of bioterrorism will not kill off the human species. In response, it should be pointed out that this is simply empirically false: there is evidence that pathogens can cause the extinction of a species.9 A bio-misanthropist worried that a single virus was not virulent enough to wipe out the entire human population might be well-advised to create two or more viruses and release them simultaneously. Furthermore, it is not clear that one would need to kill every last human to effectively bring civilization to a halt for the foreseeable future.10

2.  Intellectual and Moral Foibles

Fortunately, we are short on examples of biohackers, terrorist organizations or states creating pathogens that destroy human civilization. To illustrate the general points I want to make, a somewhat analogous case involving a naturally occurring rabbit virus will have to serve.

Reasoning that it would be nice to have some rabbits to hunt, in 1859 Thomas Austin released 24 rabbits into the wild in Australia. The old adage “be careful what you wish for” seems apropos, for by 1900 there were over two million rabbits in Australia. Through competition, this invasive species is estimated to have caused the extinction of about 12% of all Australian mammal species, and the massive rabbit population has had a continuing and significant impact on Australian agriculture. To combat the rabbit problem, in 1989 scientists in Australia imported a sample of a deadly virus, rabbit calicivirus (RCD), from China. A number of biological technologies were used during intense clinical testing of RCD on rabbits and other species in the early 1990s, and the results showed no indication of transmission to other species. So, in 1994 a high-security test site for field trials of RCD was established on Wardang Island off the coast of South Australia. As expected, the test rabbits in the quarantine area quickly became infected with the disease, and so this part of the field trial was a success. In October 1995, however, the virus unexpectedly broke out of the containment area and infected the island’s entire rabbit population beyond the test site. On October 10th, 1995, the Australian government’s premier scientific agency, the CSIRO, issued the following communiqué concerning RCD: “Containment plans are in place in the unlikely event of spread to the mainland.” What the experts at the CSIRO described as an “unlikely event” transpired shortly thereafter: rabbits across many parts of the Australian mainland became infected and died.11 Nor did non-government-sanctioned spreading of the virus stop there: private individuals in New Zealand, against the express wishes of their government, illegally imported and released RCD, leading to the death of much of the local rabbit population. Animated public debate followed the incident, with a certain amount of consensus that there had been a moral failure, although there was disagreement about how blame should be apportioned. Some blamed the individuals who imported the RCD against the express wishes of the government; others blamed the government for not supplying funds for more conventional methods of rabbit population control.

There are two lessons to be drawn from this example. The first is that ignorance can lead to biological mishaps. The Australian scientists thought they had their experiment contained, but containment failed twice: first when the virus escaped the quarantine area on Wardang Island, and second when it reached the mainland. The second lesson is that moral failures can also lead to biological disasters. With respect to biohackers, the parallel worry is that, through some unforeseen problem, a deadly biological agent such as a virus or bacterium might escape into the environment. There is also the worry that some misanthropic biohacker may hope to destroy all of humanity. (And should it be objected that this would lead to the demise of the biohacker himself, we are all too familiar with deranged killers murdering dozens of innocent victims only to then turn the weapon on themselves.)

3.  Transhumanism: The Most Dangerous Experiment Save Any Other

I want now to turn to our options for dealing with civilization-ending threats precipitated by 21st century technologies. Broadly construed, our options appear to be three: we eliminate the technologies, as suggested by Joy; we permit them for world-engineering purposes only; or we permit them for both world- and person-engineering purposes. I’ll refer to these, respectively, as the ‘relinquishment’, ‘steady-as-she-goes’ and ‘transhumanist’ futures. I want to say a bit more about these options, along with some assessment of the likelihood that each will succeed in saving us from a civilization-ending event.

Option: relinquishment.  Starting with relinquishment, let us think first about what it means to forgo any use of 21st century technologies for both world-engineering and person-engineering purposes. Notice that the question is not whether we ought to permit the development of 21st century technologies. The reason, of course, is that it is already too late for that: we have developed at least one such technology, genetic engineering, to the point where it could potentially be used to end civilization.

Now it may be thought that these extrapolations about the possible effects of genetic engineering are a little histrionic. Perhaps, but the fact of the matter is that very few have studied the problem of the extinction of civilization. Among those who have thought about the problem in any detail, there is almost universal agreement that the probability here is significant, and certainly not where we would like it, namely at zero.12,13 And it is not just tweedy academics who take seriously the possibility of bioterrorism and other technological disasters.

On December 5th, 2008, while I was in the middle of writing this paper, the following headline appeared in my inbox: “U.S. intel panel sees WMD attack in next five years”.14 Former senators Bob Graham and Jim Talent headed the panel. According to the report, the panel “acknowledges that terrorist groups still lack the needed scientific and technical ability to make weapons out of pathogens or nuclear bombs. But it warns that gap can be easily overcome, if terrorists find scientists willing to share or sell their know-how”.15 Also of relevance is the report’s suggestion that “the United States should be less concerned that terrorists will become biologists and far more concerned that biologists will become terrorists.” And our concern should only increase, since every year it becomes a little easier to acquire and apply the relevant technical advances.

So, relinquishment requires us not only to stop future developments but also to turn back the hands of time, technologically speaking. If we want to keep ourselves completely immune from the potential negative effects of genetic engineering, we would have to destroy all the tools and knowledge of genetic engineering. It is hard to imagine how this might be done. It would seem to demand, for example, dismantling all genetics labs across the globe and burning books that contain information about genetic engineering. Even this would not be enough, since knowledge of genetic engineering is in the minds of many. What would we do here? Shoot everyone with a graduate or undergraduate degree in genetics and allied disciplines, along with all the basement biohackers we can round up? Think of the alcohol prohibition experiment in the early part of the twentieth century in the U.S. Part of the reason prohibition was unsuccessful was that the knowledge and rudimentary equipment necessary for brewing were ubiquitous. It is these same two features, the availability of knowledge and equipment, that have made biohacking possible. And where would a relinquishment policy be implemented? If it is truly a viable and long-term strategy, then relinquishment will have to be adopted globally. Naturally, very few countries with advanced genetic technologies are going to be enthusiastic about genetically disarming unless they have some pretty good assurances that all other countries will also genetically disarm. This leads us to the usual disarmament impasse. In addition to national interests, the relinquishment strategy has to contend with large commercial and military interests in developing and using 21st century technologies.

I would rate the chances of relinquishment succeeding as a strategy at pretty close to zero. In addition to the aforementioned problems, it seems to fly in the face of the first law of the ethics of technology: technology evolves at a geometric rate, while social policy develops at an arithmetical rate. In other words, changing societal attitudes takes much longer than it takes technology to evolve. Think of the environmental movement. It is almost fifty years since the publication of Silent Spring, a book often linked with the start of the contemporary environmental movement, and only now are we seeing the first portents of a concerted international effort to fight global warming. And unlike pollution, genetic research has the potential to be virtually invisible, at least until disaster strikes. Bill Joy, as noted, calls for relinquishment. But how relinquishment is to be implemented, Joy does not say. It is much like the environmentalist who proposes to stop environmental degradation by stopping pollution. As far as a concrete plan goes, it is missing just one thing: a concrete plan.
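The force of this “first law” can be put in a back-of-the-envelope form (a toy sketch only; the growth parameters r and c are placeholders of my own, not figures drawn from any study):

\[
T_n = T_0 \, r^{\,n} \ (r > 1) \qquad \text{versus} \qquad P_n = P_0 + c\,n ,
\]

where \(T_n\) stands for technological capability after \(n\) years and \(P_n\) for the reach of social policy. For any \(r > 1\), however modest, and any positive \(c\), however generous, the ratio \(T_n / P_n\) grows without bound, so on this picture regulation falls ever further behind the thing it is meant to regulate.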

Option: steady-as-she-goes.  The only two options that seem to have any likelihood of being implemented are the steady-as-she-goes and transhumanist futures. Recall that the steady-as-she-goes option says it is permissible to develop 21st century world-engineering technologies, but not to use them for person-engineering purposes. The name stems from the fact that, as noted, enormous resources are at present devoted to the development of genetic and nanotechnologies for world-engineering purposes, and so the proposal is simply to continue with our current norms.

There are at least two problems with the steady-as-she-goes policy. First, there is the worry about how effective a ban on person-engineering is likely to be. The likelihood of an effective ban will depend on what policies are adopted, and little thought has gone into this. A notable exception here is Fukuyama, who has made some suggestive recommendations as to how national and international agencies might be built to contain the development of person-engineering.16 If implemented, Fukuyama’s recommendations may well reduce the number of attempts at person-engineering, but he has little to say about the seemingly inevitable underground activities of person-engineering. The problem, then, is that Fukuyama’s version of the steady-as-she-goes strategy may reduce the gross number of person-engineering experiments, but the outcomes of the underground experiments may prove less benign. Unlike the transhumanists, a rogue group working clandestinely in violation of a world ban on person-engineering is less likely to be worried about ensuring that its posthuman progeny are as virtuous as possible.

The second, and for our purposes primary, problem with the steady-as-she-goes strategy is that it says nothing about how we are to address the dual-use problem: the development of 21st century technologies for peaceful purposes necessarily brings with it the prospect that the same technology can be used for civilization-ending purposes. While I don’t agree with Joy about what to do about these threats, I am in full agreement that they exist, and that we would be foolhardy to ignore them. Interestingly, this is where Fukuyama is weakest: he has almost nothing to say about the destructive capabilities of 21st century world-engineering, or about how the institutions he proposes would control their deadly use. A world where we continue to develop 21st century technologies is one in which the knowledge and limited equipment necessary for individuals to do their own world-engineering, and so potentially to pursue their own civilization-ending projects (accidentally or purposively), will only become more widespread. So, at worst Fukuyama’s proposal is foolhardy; at best it is radically incomplete.

Option: transhumanist future.  The transhumanist future is one where both world-engineering and person-engineering are permitted. Specifically, as noted, the transhumanist view is that we should create persons who are smarter and more virtuous than we are. The application to our problem is obvious: our fears about the misuse of 21st century technology reduce to fears about stupidity or viciousness. The worry is that, like the Australian research scientists, we may be the authors of an accident, but this time one of apocalyptic proportions: the end of civilization. Likewise, our moral natures may cause our demise. Or, to put a more positive spin on it, the best candidates among us to lead civilization through such perilous times are the brightest and most virtuous: posthumans.17

It is worth pointing out that there is no need to deny what Fukuyama claims: there are real dangers in creating posthumans. The problem with the transhumanist project, says Fukuyama, comes when we think seriously about what characteristics to change:

Our good characteristics are intimately connected to our bad ones: If we weren’t violent and aggressive, we wouldn’t be able to defend ourselves; if we didn’t have feelings of exclusivity, we wouldn’t be loyal to those close to us; if we never felt jealousy, we would never feel love. Even morality plays a critical function in allowing our species as a whole to survive and adapt…. Modifying any one of our key characteristics inevitably entails modifying a complex, interlinked package of traits, and we will never be able to anticipate the ultimate outcome.18

So, although Fukuyama sees the pull of transhumanism, and how it might look “downright reasonable”, the fact that the traits we might hope to modify are interconnected means that “we will never be able to anticipate the ultimate outcome.”

What Fukuyama fails to address in any systematic way is the fact that there are even greater dangers associated with not creating posthumans. So, the prudential and moral reason for creating posthumans is not that doing so is without risk; rather, it is that doing so is less risky than the alternative: steady-as-she-goes. If forced to put hard numbers to these scenarios, I would venture to suggest there is a 90% chance of civilization surviving the next two centuries if we follow the transhumanist path, while I would put the chances of civilization surviving a steady-as-she-goes policy at less than 20%. But then, I am an optimist.

It might be objected that it is foolhardy or worse to try to put such numbers to futures where so much is uncertain. I have some sympathy with this objection. Thinking about a future where so much is uncertain is hardly analogous to putting odds on a horse race. On the other hand, far more is at stake in thinking about our future, and so we have no choice but to estimate the various risks as best we can. If it were protested that it is simply impossible to make any meaningful estimate, then this would prove too much, for then there would be no reason to think that the transhumanist future is any more risky than any other future. In other words, the complaint that the transhumanist future is risky has traction only if we have some comparative evaluation in mind. Surgery that has only a 1 in 10 chance of survival is not risky, comparatively speaking, if the chances of survival without the surgery are zero. Anyone who criticizes transhumanism for putting civilization at risk, as Fukuyama does, must explicitly or implicitly hold that the chances of survival in a non-transhumanist future are greater. This is what transhumanists deny.
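The comparative structure of the argument can be made explicit with a minimal decision rule (a sketch only, reusing the avowedly speculative 90% and 20% figures above; S = civilization survives, T = the transhumanist future, Q = steady-as-she-goes):

\[
\text{prefer } T \text{ to } Q \iff P(S \mid T) > P(S \mid Q), \qquad \text{here } 0.9 > 0.2 .
\]

The surgery case has the same shape: an operation with \(P(\text{survival}) = 0.1\) is the prudent choice when the chance of survival without it is \(0\), since \(0.1 > 0\). The objection from risk therefore needs the reverse inequality, not merely the observation that \(P(S \mid T) < 1\).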

This line of thinking is further reinforced when we consider that there is a limit to the downside of creating posthumans, at least relatively speaking. One traditional concern about increasing knowledge is that it seems always to bring with it a greater destructive capacity. One way this point is made is in terms of ‘killing capacity’: muskets are a more powerful technology than bows and arrows, tanks more powerful than muskets, and atomic bombs more destructive still. The knowledge that made these technical advances possible brought a concomitant increase in the capacity for evil. Interestingly, we have almost hit the wall in our capacity for evil: once you have civilization-destroying weapons, there is not much worse you can do. There is a point at which the one-upmanship of evil comes to an end: when everyone is dead. If you will forgive the somewhat graphic analogy, it hardly matters to Kennedy whether his head is blown off with a rifle or a cannon. Likewise, if A has a weapon that can kill every last person, there is little difference between that and B’s weapon, which is twice as powerful.

Posthumans probably won’t have much more capacity for evil than we have, or are likely to have shortly. So, at least in terms of how many persons can be killed, posthumans will not outstrip us. This is not to say that there are no new worries attached to the creation of posthumans, but the capacity for the greatest evil, the destruction of civilization, is something we already have or soon will have. In other words, what we should focus on in contemplating the creation of posthumans is their upside. They are not likely to distinguish themselves by their capacity for evil, since we have already pretty much hit the wall there, but by their capacity for good.

Conclusion

I suspect that those who think the transhumanist future is risky often have something like the following reasoning in mind: (1) If we alter human nature then we will be conducting an experiment whose outcome we cannot be sure of.  (2) We should not conduct experiments of great magnitude if we do not know the outcome.  (3) We do not know the outcome of the transhumanist experiment.  (4) So, we ought not to alter human nature.

The problem with the argument is (2). Because genetic engineering is already with us, and it has the potential both to destroy civilization and to create posthumans, we are already entering uncharted waters: we must experiment. The question is not whether to experiment, but only which social experiment we will conduct. Will we try relinquishment? This would be an unparalleled social experiment in eradicating knowledge and technology. Will it be the steady-as-she-goes experiment, in which, for the first time, governments, organizations and private citizens will have access to knowledge and technology that (accidentally or intentionally) could be turned to civilization-ending purposes? Or will it be the transhumanist social experiment, in which we attempt to make beings brighter and more virtuous in order to deal with these powerful technologies?

I have tried to make at least a prima facie case that transhumanism promises the safest passage through the dangers of 21st century technologies. Since we must experiment, it would be foolhardy or worse not to put more thought and energy into the problem of our uncertain future. To the extent that we do not, one can only lament the sad irony that “steady-as-she-goes” seems an all too apt order for a ship of fools.


Endnotes

1. This is a shorter version of a paper of the same title: http://www.nmsu.edu/~philos/documents/ship-of-fools-dec-15th-2008.doc Many thanks to Natasha Vita-More for editorial assistance

2. Walker, M. 2002a. “Prolegomena to Any Future Philosophy”. Journal of Evolution and Technology, 10.

3. _____ 2008. “Cognitive Enhancement and the Identity Objection”, Journal of Evolution and Technology, 18: 108-115. http://jetpress.org/v18/walker.htm.

4. _____ 2002b. “What Is Transhumanism? Why Is A Transhumanist?”, Transhumanity, http://transhumanism.com/index.php/weblog/more/26/.

6. Joy, B. 2000. “Why the Future Doesn’t Need Us”, Wired, http://www.wired.com/wired/archive/8.04/joy.html

7. Johnson, C. 2008. “Accessible science: Hackers aim to make biology household practice”, Boston Globe, September 15, http://www.boston.com/news/science/articles/2008/09/15/accessible_science/

8. Chyba, C.F. and Greninger, A.L. 2004. “Biotechnology and bioterrorism: An unprecedented world”, Survival 46, 2: 143–162.

9. Wyatt, K.B., Campos, P.F., Gilbert, M.T.P., Kolokotronis, S., Hynes, W.H., et al. 2008. “Historical Mammal Extinction on Christmas Island (Indian Ocean) Correlates with Introduced Infectious Disease”, PLoS ONE 3(11): e3602. doi:10.1371/journal.pone.0003602.

10. Bostrom, N., and Milan Cirkovic (eds.). 2008. Global Catastrophic Risks, Oxford: Oxford University Press.

11. Later, RCD was purposely released by Australian scientists and had a dramatic effect on the rabbit population.

12. Rees, Sir Martin. 2003. Our Final Hour: A Scientist’s Warning: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future In This Century—On Earth and Beyond, New York: Basic Books.

13. Leslie, J.  (1996) says there is at least a 30% chance of human extinction, while Sir Martin Rees (2003) sees our chances as 50/50 of surviving the next century. The most sustained academic discussion can be found in Bostrom and Cirkovic (2008).

14. World Tribune, 2008. “U.S. intel panel sees WMD attack in next 5 years”, December 5th, http://www.worldtribune.com/worldtribune/WTARC/2008/me_terror0770_12_05.asp

15. Ibid.

16. Fukuyama, F. 2002. Our Posthuman Future: Consequences of the Biotechnology Revolution, New York: Farrar, Straus and Giroux.

17. Walker, M. 2008. “Cognitive Enhancement and the Identity Objection”, Journal of Evolution and Technology, 18: 108-115. http://jetpress.org/v18/walker.htm.

18. Fukuyama, F. 2004. “Transhumanism,” Foreign Policy, Sept-Oct.