Why the future doesn't need us.
Our most powerful 21st-century technologies - robotics, genetic engineering, and nanotech - are threatening to make humans an endangered species.
By Bill Joy
From the moment I became involved in the creation of new technologies, their
ethical dimensions have concerned me, but it was only in the autumn of
1998 that I became anxiously aware of how great are the dangers facing
us in the 21st century. I can date the onset of my unease to the day I
met Ray Kurzweil, the deservedly famous inventor of the first reading machine
for the blind and many other amazing things.
Ray and I were both speakers at George Gilder's
Telecosm conference, and I encountered him by chance in the bar of the
hotel after both our sessions were over. I was sitting with John Searle,
a Berkeley philosopher who studies consciousness. While we were talking,
Ray approached and a conversation began, the subject of which haunts me
to this day.
I had missed Ray's talk and the subsequent panel that Ray and John had
been on, and they now picked right up where they'd left off, with Ray saying
that the rate of improvement of technology was going to accelerate and that
we were going to become robots or fuse with robots or something like that,
and John countering
that this couldn't happen, because the robots couldn't be conscious.
While I had heard such talk before, I had always
felt sentient robots were in the realm of science fiction. But now, from
someone I respected, I was hearing a strong argument that they were a near-term
possibility. I was taken aback, especially given Ray's proven ability to
imagine and create the future. I already knew that new technologies like
genetic engineering and nanotechnology were giving us the power to remake
the world, but a realistic and imminent scenario for intelligent robots
surprised me.
It's easy to get jaded about such breakthroughs.
We hear in the news almost every day of some kind
of technological or scientific advance. Yet this was no ordinary prediction.
In the hotel bar, Ray gave me a partial preprint of his then-forthcoming
book The Age of Spiritual Machines, which outlined a utopia he foresaw
- one in which humans gained near immortality by becoming one with robotic
technology. On reading it, my sense of unease only intensified; I felt sure
he had to be understating the dangers, understating the probability of a
bad outcome along this path.
I found myself most troubled by a passage detailing
a dystopian scenario:
THE NEW LUDDITE CHALLENGE
First let us postulate that the computer scientists succeed in developing
intelligent machines that can do all things better than human beings can
do them. In that case presumably
all work will be done by vast, highly organized systems of machines and
no human effort will be necessary. Either of two cases might occur. The
machines might be permitted to make all of their own decisions without
human oversight, or else human control over the machines might be retained.
If the machines are permitted to make all their own decisions, we can't
make any conjectures as to the results, because it is impossible to guess
how such machines might behave. We only point out that the fate of the
human race would be at the mercy of the machines. It might be argued that
the human race would never be foolish enough to hand over all the power
to the machines. But we are suggesting neither that the human race would
voluntarily turn power over to the machines nor that the machines would
willfully seize power. What we do suggest is that the human race might
easily permit itself to drift into a position of such dependence on the
machines that it would have no practical choice but to accept all of the
machines' decisions. As society and the problems that face it become more
and more complex and machines become more and more intelligent, people
will let machines make more of their decisions for them, simply because
machine-made decisions will bring better results than man-made ones. Eventually
a stage may be reached at which the decisions necessary to keep the system
running will be so complex that human beings will be incapable of making
them intelligently. At that stage the machines will be in effective control.
People won't be able to just turn the machines off, because they will be so
dependent on them that turning them off would amount to suicide.
On the other hand it is possible that human control over the machines may
be retained. In that case the average man may have control over certain
private machines of his own, such as his car or his personal computer,
but control over large systems of machines will be in the hands of a tiny
elite - just as it is today, but with two differences. Due to improved
techniques the elite will have greater control over the masses; and because
human work will no longer be necessary the masses will be superfluous,
a useless burden on the system. If the elite is
ruthless they may simply decide to exterminate the mass of humanity. If
they are humane they may use propaganda or other psychological or biological
techniques to reduce the birth rate until the mass of humanity becomes
extinct, leaving the world to the elite. Or, if the elite consists of
soft-hearted
liberals, they may decide to play the role of good shepherds to the rest
of the human race. They will see to it that everyone's physical needs are
satisfied, that all children are raised under psychologically hygienic
conditions, that everyone has a wholesome hobby to keep him busy, and that
anyone who
may become dissatisfied undergoes "treatment" to cure his "problem." Of
course, life will be so purposeless that people
will have to be biologically or psychologically engineered either to remove
their need for the power process or make them "sublimate" their drive for
power into some harmless hobby. These engineered human beings may be happy
in such
a society, but they will most certainly not be free. They will have been
reduced to the status of domestic animals. [1]

[1] The passage Kurzweil quotes is from Kaczynski's Unabomber Manifesto,
which was published jointly, under duress, by The New York Times and
The Washington Post to attempt to bring his campaign of terror to an
end. I agree with David Gelernter, who said about their decision:
"It was a tough call for the newspapers. To say yes would be giving in
to terrorism, and for all they knew he was lying anyway. On the other hand,
to say yes might stop the killing. There was also a chance that someone
would read the tract and get a hunch about the author; and that is exactly
what happened. The suspect's brother read it, and it rang a bell.
"I would have told them not to publish. I'm glad they didn't ask me. I
guess."
(Drawing Life: Surviving the Unabomber. Free Press, 1997: 120.)
Bill Joy, cofounder and Chief Scientist of Sun Microsystems, was cochair of the presidential commission on the future of IT research, and is coauthor of The Java Language Specification. His work on the Jini pervasive computing technology was featured in Wired 6.08.