Rethinking Our Utopian Choices: Why Whether the Future Needs Us Matters (Part 2/2)

When Bill Joy discusses Kurzweil’s ideas with his futurist friend, he is told that such “changes would come gradually, and that we would get used to them”. Joy remains uneasy with prospects like the process of securing a tremendously long life in a silicon body. His unease stems from the way the threats of robotics, genetic engineering, and nanotechnology differ drastically from those of earlier technologies.

In arguing this, Joy is drawing on his work as a software designer, and I do not take him to be playing the role of a Luddite. He notices a difference not merely of degree but of kind in the technologies of the 21st century. Of these technologies, “[s]pecifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once – but one bot can become many, and quickly get out of control.” Joy describes how his work in computer networking has been plagued with out-of-control replication, replication that (thankfully) has been limited to a single computer or a limited computer network. The hazards of such uncontrolled self-replication in the physical world would be substantial.

The construction of weapons of mass destruction, whether they be nuclear, biological, or chemical, requires access to rare raw materials and highly protected information. Weapons of knowledge-enabled mass destruction are, aside from their tendencies toward self-replication, within the reach of anyone who has the capacity to think. Joy’s argument appears even more frightening post-9/11, where we might conjure the image of a lone religious fundamentalist in a crop duster infecting field upon field of Florida produce with self-replicating nano-materials or viruses.

It could be that we need to take stock of what has changed amidst the whirlwind of research surrounding the technologies of genetics, nanotechnology, and robotics. How caught up in the spell of their seeming inevitability are we? Do we realize the stakes and the possible consequences? As Joy suggests: 

Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science’s quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own. 

And this ‘quest’, which in the Transhumanist or Extropian understanding has become one of sheer technological progress, may be blind to the very subjects intended to benefit from it. Joy finds it disturbing that we lack a sense of concern for how we might co-exist and integrate with tools capable of replacing our species. This is not a Luddite’s position but a wise consideration of the dynamic between ourselves and our technologies.

To argue, as Kurzweil does, that we must plough straight ahead (i.e. “I don’t think we can stop it. I think there are profound dangers and I would not say I’m sanguine. But I would say we can deal with them. I don’t think Bill Joy’s solution of relinquishment is the right approach.”) and to assume that the transition to a robotic existence would simply sweep the human up with it is clearly misguided. Furthermore, such a position operates as if this technological process were outside the control of human agency and our ability to heed and learn from past errors. “I believe we would all agree…” Joy writes “…that golden rice, with its built in Vitamin-A, is probably a good thing, if developed with proper care and respect for the likely dangers in moving genes across species boundaries”. Joy’s discomfort with Kurzweil’s predictions stems from his vision of computer software (some of which he helped create) acting autonomously, and from the dangers that such autonomy entails.

It should be noted that only in a passing quip does Joy evoke abstract concepts such as ‘human nature’ or ‘human dignity’. These do not appear to be his primary concern and he is certainly not interested in reinforcing them. He is interested in his responsibility as a software developer to engage with theories that appropriate his inventions in order to initiate what he takes to be dangerously risky policies. Drawing on Eric Drexler’s Engines of Creation, Joy outlines the possible risks of uncontrolled replicators: 

“‘Plants’ with ‘leaves’ no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous ‘bacteria’ could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop – at least if we make no preparation. We have trouble enough controlling viruses and fruit flies. Among the cognoscenti of nanotechnology, this threat has become known as the ‘gray goo problem.’ Though masses of uncontrolled replicators need not be gray or gooey, the term ‘gray goo’ emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in an evolutionary sense, but this need not make them valuable. The gray goo threat makes one thing perfectly clear: We cannot afford certain kinds of accidents with replicating assemblers.”

The line ‘They might be superior in an evolutionary sense, but this need not make them valuable’ rings over and over in my mind. I am perturbed because this was precisely my reaction to a movie about extraterrestrial life called Is There Anybody Out There?, which I saw a couple of years ago at the Planétarium de Montréal. The movie, although aimed primarily at children, was disturbing. It crushed the dreams of all the young viewers in the audience of ever meeting E.T., explaining that if life is found on other planets, it likely had to survive some of the most inhospitable conditions we can imagine. The movie concludes that if life does exist outside of Earth, it would likely be something like an extremophile microbe. Extremophiles (such as the photosynthetic hypoliths that live under rocks in cold deserts) are usually unicellular organisms that thrive in poisonous, radioactive, and extremely frigid or hot environments. Astrobiologists expect to find such extremophiles in places such as the water ocean of Jupiter’s moon Europa. Because these extremophiles, which are unable to communicate, or think, or reason, were capable of surviving the death of entire planetary ecosystems, they are superior to us in an evolutionary sense.

It is in this light that we can situate the impetus for Joy’s sense of responsibility amidst the clamoring of Kurzweil, Moravec, and Minsky that evolution through technological necessity is a utopian premise. Joy writes, near the end of the essay:

Do you remember the beautiful penultimate scene in Manhattan where Woody Allen is lying on his couch and talking into a tape recorder? He is writing a short story about people who are creating unnecessary, neurotic problems for themselves, because it keeps them from dealing with more unsolvable, terrifying problems about the universe. He leads himself to the question, “Why is life worth living?” and to consider what makes it worthwhile for him: Groucho Marx, Willie Mays, the second movement of the Jupiter Symphony, Louis Armstrong’s recording of “Potato Head Blues,” Swedish movies, Flaubert’s Sentimental Education, Marlon Brando, Frank Sinatra, the apples and pears by Cézanne, the crabs at Sam Wo’s, and, finally, the showstopper: his love of Tracy’s face. Each of us has our precious things, and as we care for them we locate the essence of our humanity. In the end, it is because of our great capacity for caring that I remain optimistic we will confront the dangerous issues now before us. 

When Joy writes of an ‘essence’ of our humanity, he is speaking from a sense of wisdom, not out of adherence to abstract notions of a fixed human essence.

If we acknowledge that the human and the technological operate in a continuous feedback loop, one influencing the other, we find ourselves responsible for discerning what is beneficial and what is hazardous to our species. This does not mean our species must remain stagnant. Joy never expresses the wish that we remain as we are. He merely advocates “simple common sense – an attribute that, along with humility, many of the leading advocates of 21st century technologies seem to lack”. In a wonderful example, he describes the warnings of his grandmother, a nurse in the First World War, against the overuse of antibiotics. By no means was she an “enemy of progress”, but she possessed a sense of responsibility.

When Joy writes that the only realistic alternative to erecting a series of shields to protect us from dangerous technologies is limiting the development of “technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge”, one could either become outraged (as Kurzweil or Max More do) or discern a sense of responsibility. Kurzweil responds that “Brave New World is a vision of what would be required to implement Joy’s recommendation … The only way you can stop technology advancement — even focused on one area like nanotechnology — would be to have a totalitarian, state-enforced ban.” But his argument, while persuasive, misses the point of Joy’s essay. Each new technology is different from its predecessor. Our society changes as we begin to interact differently. Technologies that emerge carry the mark of these social shifts. Robotics, genetic engineering, and nanotechnology carry tremendous new risks and possibilities. To presume that we will deal with them using the same tactics as we have earlier technologies is an error. “This time,” Joy writes, “unlike during the Manhattan project – we aren’t in a war, facing an implacable enemy that is threatening our civilization; we are driven, instead, by our habits, our desires, our economic system, and our competitive need to know”.

We can, through articles like “Why the Future Doesn’t Need Us”, criticize the advocates of a given technological utopia without turning our backs on science and technology. Joy’s essay, bolstered by his work as a computer technician, is a roadmap for scientists and technicians detailing how they might use their own expertise to discover a deeper sense of personal responsibility and agency in criticizing Kurzweil’s fatalistic rhetoric – without being an enemy of science and technology. 

—— cybject is only a thought experiment ——

~ by dccohen on January 10, 2010.

3 Responses to “Rethinking Our Utopian Choices: Why Whether the Future Needs Us Matters (Part 2/2)”

  1. A really interesting article. Thank you very much.

  2. I just stumbled upon this blog and find your support of Joy and your dismissal of Kurzweil to be as naive and myopic as you claim Kurzweil is in his response to Joy. The theory that you can prevent technological advancement in any way has never been proven; the march of progress is inexorable.

    The argument that technological progress, and ultimately the Singularity, can be stopped or controlled exposes your lack of understanding with regard to the exponential progression of science and technology and clearly ignores the failed attempts at controlling knowledge throughout history. Every institution that attempted to maintain power by controlling knowledge, information, or discovery has failed; even with the tools of mass fear, threatened death, and total control of the educational system, they failed. Even Bill Joy himself admits to the inability to control the technology he created and no longer controls in any feasible way.

    I am not arguing for the Singularity; that would be pointless. It’s coming regardless of our actions, regardless of our acceptance of it. The real argument now is how we survive it.
