Chapter 6
When Technology and Humanity Cross
A. The Ethical Dilemmas of Robotics
The rapid advancements in technology that the world has witnessed over the past
century have made a reality of many of mankind’s wildest dreams. From crossing land,
air, and sea at extreme speeds to sending and receiving information instantly via the
Internet, the technological advancements of recent years have become cornerstones of
modern society. One dream that has yet to be fulfilled by advancements in technology is
the development of human-like and self-aware robots, often referred to as androids. While
robotic technology has come a long way since its initial attempts, a robot largely
indistinguishable from a human is still far from a reality. However, as technology continues
to develop and evolve exponentially, many people believe it is only a matter of time. If and
when truly "living" robots come about, one can foresee a slew of ethical dilemmas
developing.
A complete consensus on the definition of the word “robot” has yet to be reached.
However, it is commonly accepted that robots possess some combination of the following
attributes: mobility, intelligent behavior, and the ability to sense and manipulate their
environment.
The term “robot” truly extends to more than just androids. The commonly accepted first
use of the word was in 1920 in the form of a play written by Karel Capek. The play was
entitled R.U.R. (Rossum's Universal Robots) and involves the development of artificial
people. These people are referred to as robots, and while they are given the ability to
think, they are designed to be happy as servants. The word “robot” in Capek's
play comes from robota, the Slavic word for “work.”
While the word “robot” was not used until 1920, the idea of mechanical humans
has been around as far back as Greek mythology. One example that closely relates to
the servant robots seen in Capek's play is the servants of the Greek god Hephaestus, the
god of fire and the forge. It is recorded that Hephaestus had built robots out of gold which
were “his helpers, including a complete set of life-size golden handmaidens who helped
around the house”. Another example of robots in Greek mythology comes from the story
of Pygmalion, who is said to have crafted a statue, Galatea, that came to life.
Beyond the ancient myths which speak of humanoid robots, one of the milestones
in the design and development of such robots came with the discovery of Leonardo Da
Vinci's journals which contained detailed plans for the construction of a humanoid robot.
Inspired by the ancient myths, the robot was designed in the form of an armored knight
and was to possess the ability to sit up, wave its arms, move its head, and open its mouth.
The journals in which the plans were found date back to 1495. It is unknown if this robot
was ever built by Da Vinci, but merely conceiving it was a milestone in the timeline of
robotic history. The Modern State of Robots From Da Vinci to the current day the
development of humanoid robots has continued to approach the goal of a robot that is
indistinguishable from a human. However, despite the massive recent advancements in
technology and even the exponential growth of computing power of the past decades,
this dream is still far from a reality.
In a comprehensive article in the New York Times, Robin Marantz Henig discusses
her experiences with what are often labeled “social robots.” These robots are by no means
what the servant robots of Greek mythology have led many people to hope for; rather,
they are infant versions, at best, of the long-hoped-for androids. Henig writes that these
machines are not the docile companions of our collective dreams, robots designed to
flawlessly serve dinners, fold clothes, and do the dull or dangerous jobs that humans do
not want to do. Nor are they the villains of our collective nightmares, poised for robotic
rebellion against humans whose machine creations have become smarter than the
humans themselves. They are, instead, hunks of metal tethered to computers, which
need their human designers to get them going and to smooth the hiccups along the way.
Despite the disappointment that many people feel when they are given the chance
to interact with the latest robots, some major players in the robotic industry are quite
optimistic. Rodney Brooks is an expert in robotics and artificial intelligence. In an article
written in 2008, Brooks explains that it is no longer a question of whether human-level
artificial intelligence will be developed, but rather how and when. While it is true that
androids are not the only robots which have a great impact on human lives, their
development introduces a set of unique ethical issues which industrial robots do not
evoke. Working under the assumption that it is only a matter of time until androids are an
everyday reality, it is proper to begin thinking about what these ethical issues are and
how they may be dealt with in the coming years. The overarching question that results is
what exactly these robots are. Are they simply piles of electronics running advanced
algorithms, or are they a new form of life?

What Is Life?

The question of what constitutes life is one on which the world may never come to a
consensus.
From the ancient philosophers to the common man on the street, it seems that
everyone has an opinion on what a living organism consists of. One of the more prevailing
views throughout history has been that of Aristotle. The basic tenets of Aristotle’s view
are that an organism has both “matter” and “form.” This differs from the philosophical
position known as materialism, which has become popular in modern times and finds its
roots among the ancient Indians. Materialism does not entertain any notion of organisms
having a “form” or “soul”; rather, organisms are made simply of various types of “matter.”
These two views are at odds with one another and the philosophical position society
adopts will inevitably have a huge impact on how humans interact with robots.

Aristotle

The view articulated by Aristotle and his modern-day followers describes life in terms of
unity, a composite of both “matter” and “form.” One type of “matter” which Aristotle speaks
of could be biological material such as what plants, animals, and humans consist of.
Another type of “matter” could also be the mechanical and electronic components which
make up modern-day robots. Clearly it is not the “matter” alone which distinguishes
whether an object is a living organism, for if it were, Aristotle’s view would differ little from
materialism. The distinguishing characteristic of Aristotle’s account is his inclusion of “form.”
The term simply means whatever it is that makes a human a human, a plant a plant, and an
animal an animal. Each of these has a specific “form” which is not the same as its
“matter” but is a functioning unity essential to each living organism in order for
it to be just that: living. The word used to describe the “form” of a living organism is
“psyche” or “soul.”
Unlike Aristotle's philosophical view, which was embraced by various religions,
perhaps most notably by the Roman Catholic Church and more specifically by St. Thomas
Aquinas, materialism often finds itself at odds with most religious views in the world.
Catholicism being a prime example of this, one will not find a favorable description of
materialism when looking at the opening lines of its definition in the Catholic
Encyclopedia. The encyclopedia's entry begins by defining materialism as “a
philosophical system which regards matter as the only reality in the world, which
undertakes to explain every event in the universe as resulting from the conditions and
activity of matter, and which thus denies the existence of God and the soul.” Why does it
matter that materialism is at odds with Catholicism and most other religions? More
specifically, what does this have to do with robots and androids? It is relevant because if
materialism is correct, then humans should have the power to develop new forms of life.
If it is true that everything in the universe is simply material and the result of material
interactions, then nothing should be stopping us from creating androids and recognizing
them as just as valid a life form as humans.
The decision of what level of life robots are to be considered is an essential one.
In 1942, Isaac Asimov introduced to the world of science fiction what are known as the
Three Laws of Robotics, published in his short story “Runaround.” The laws
Asimov formulated are:

1. A robot may not injure a human being or, through inaction, allow a human being to
come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders
would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.
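
Read as an engineering specification rather than as fiction, the three laws amount to a
strict priority ordering over a robot’s possible actions. The following is a minimal sketch
of that ordering in Python; the Action fields are hypothetical stand-ins for judgments
(Will this harm a human? Does it disobey an order?) that a real robot would somehow
have to make reliably, which is precisely the hard part.

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool          # would carrying this out injure a human?
        abandons_human: bool       # would it, through inaction, let a human come to harm?
        defies_lawful_order: bool  # does it disobey a human order that is itself
                                   # consistent with the First Law?
        endangers_self: bool       # does it put the robot's own existence at risk?

    def permitted(a: Action) -> bool:
        if a.harms_human or a.abandons_human:
            return False  # First Law outranks everything else
        if a.defies_lawful_order:
            return False  # Second Law, subordinate only to the First
        if a.endangers_self:
            return False  # Third Law, last in priority
        return True

    # An order to harm a person fails the First Law check no matter what else is true:
    print(permitted(Action(True, False, False, False)))  # False

Even this toy makes the laws’ hidden assumption visible: every branch presumes the
robot can already predict harm to humans, which no current system can do.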
While these laws are part of science fiction history, the current state of robotic
technology demands that they be considered in a new light. As with many ideas once
confined to the world of science fiction, Asimov’s laws may now make the transition into
reality. At first glance these three laws seem an excellent way to ensure the safe
development of this supposed new life form. However, Asimov’s laws presuppose that
human life is of greater value than that of the androids being developed. If we work under
the assumption that androids should be considered just below humans, Asimov’s laws
may hold true. But what if we hold to the conclusion materialism reaches, that androids
should be placed at or above the level of humans? In that case, Asimov’s laws could not
be applied: we could not both regard androids as equal forms of life and implement laws
which place them in direct submission to humans. How can it be that an android should
give its life for a human if an android has a right to life equal to that of a human? Imagine
an army made up of both androids and humans. Should the android always give its life
to save a human’s life? Would human soldiers be willing to die for an android? As much
as people may believe in materialism and conclude that robots will one day be a life form
equal to humans, I find it hard to believe that many people would actually die for a robot.

Robot Code of Ethics

While it remains true that robotics technology is not yet at a point where ethical codes
for robots are necessary, this has not stopped some countries from being proactive and
taking the first steps in the development of a robot code of ethics.
South Korea is considered one of the most high-tech countries in the world, and
it is leading the way in the development of such a code. Known officially as the Robot
Ethics Charter, the code is being drawn up “to prevent human abuse of robots—and vice versa”.
The main focus of the charter is said to be on the social problems that the mass integration
of robots into society is bound to create. In particular, it aims to define how people are to
properly interact with robots: in Stefan Lovgren’s words, “human control over robots and
humans becoming addicted to robot interaction”. Beyond the social problems robots may
bring with them, there is also an array of legal issues, the primary one in the charter being
what information robots collect and how it is distributed. To many it seems as
though South Korea’s Robot Ethics Charter is the beginning of a modern-day
implementation of Asimov’s Three Laws of Robotics. However, many robot designers
such as Mark Tilden think this is all a bit premature. Tilden claims that we are simply not
at a point where robots can be given morals and compares it to “teaching an ant to yodel”.
Tilden goes on to claim that when we do reach that point, the interactions will be less than
pleasant, stating that “as many of Asimov's stories show, the conundrums robots and
humans would face would result in more tragedy than utility”. Despite Tilden’s and others’
pessimistic view of what the future holds for the human-robot relationship, technology will
slow down for no one. It is only a matter of time before other countries will follow in South
Korea’s footsteps and create their own code of ethics for robots and their interactions with
humans.
B. Humans, Morals, and Machines
Technology has begun to change our species’ long-standing experiences with
nature. Now we have technological nature: technologies that in various ways mediate,
augment, or simulate the natural world. Entire television networks, such as the Discovery
Channel and Animal Planet, provide us with mediated digital experiences of nature: the
lion’s hunt, the monarch’s migration, or a climb high into the Himalayan peaks. Video
games, like Zoo Tycoon, engage children with animal life. Zoos themselves are bringing
technologies such as webcams into their exhibits so that we can, for example, watch
animals from the comfort of our home or a cafe. Inexpensive robot pets have been big
sellers in the Wal-Marts and Targets of the world. Sony’s higher-end robot dog AIBO sold
well. Real people now spend substantial time in virtual environments (e.g., Second Life).
In terms of the physical and psychological well-being of our species, does it matter that
we are replacing actual nature with technological nature? To support our provisional
answer that it does matter, we draw on evolutionary and cross-cultural developmental
accounts of the human relation with the natural world and then consider some recent
psychological research on the effects of technological nature.
Scientists are already beginning to think seriously about the new ethical problems
posed by current developments in robotics. Experts in South Korea were drawing up an
ethical code to prevent humans abusing robots, and vice versa. A group of leading
roboticists called the European Robotics Network (Euron) has even started
lobbying governments for legislation. At the top of their list of concerns is safety. Robots
were once confined to specialist applications in industry and the military, where users
received extensive training on their use, but they are increasingly being used by ordinary
people. Robot vacuum cleaners and lawn mowers are already in many homes, and
robotic toys are increasingly popular with children. As these robots become more
intelligent, it will become harder to decide who is responsible if they injure someone. Is
the designer to blame, or the user, or the robot itself? The ethical or moral sense for
machines can be built on a utilitarian base. There are special cases that will require
modifications of the core rules based on the circumstances of their use. Doctors,
for example, do not euthanize patients to spread the wealth of their organs, even if it
means a net positive with regard to survivors. They have to conform to a
separate code of ethics, designed around the needs and rights of patients, that
restricts their actions. The same holds for lawyers, religious leaders, and military
personnel, who establish special relationships with individuals protected by
specific ethical codes. The simple utilitarian model will certainly have overlays depending
on the role that these robots play. They will act in accord with whatever moral or ethical
code we provide them and the value determinations that we set. They will run the numbers
and do the right thing. In emergency situations, our autonomous cars will sacrifice the few
to protect the many. When faced with dilemmas, they will seek the best outcomes
independent of whether they themselves are comfortable with the actions. So, as with all
other aspects of machine intelligence, it is crucial that these systems are able to explain
their moral decisions to us. They will need to be able to reach into their silicon souls and
explain the reasoning that supports their actions. We need them to be able to explain
themselves in all aspects of their reasoning and actions. Their moral reasoning will be
subject to the same explanatory requirements that we would demand of explaining any
action they take.
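
To make the idea of a utilitarian base with role-specific overlays concrete, here is a toy
sketch in Python. The options, utility numbers, and forbidden-action overlay are invented
purely for illustration; a real system would need defensible ways of estimating outcomes,
which is the hard and contested part. Note that the function also returns a stated reason,
in the spirit of requiring machines to explain their moral decisions.

    # Toy model: a utilitarian core plus a role-specific "overlay" that removes
    # forbidden options before the numbers are run.

    def choose_action(options, utility, forbidden):
        """Pick the highest-utility permitted option, with a stated reason."""
        allowed = [o for o in options if o not in forbidden]
        best = max(allowed, key=lambda o: utility[o])
        reason = (f"chose {best!r} with utility {utility[best]}; "
                  f"overlay vetoed: {sorted(forbidden)}")
        return best, reason

    options = ["treat_patients_normally", "harvest_one_to_save_five"]
    utility = {"treat_patients_normally": 1, "harvest_one_to_save_five": 5}

    # A bare utilitarian agent "runs the numbers" and picks the grim option...
    print(choose_action(options, utility, forbidden=set()))
    # ...while a doctor-robot's overlay takes that option off the table first.
    print(choose_action(options, utility, forbidden={"harvest_one_to_save_five"}))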
Today’s emerging technologies, like Artificial Intelligence (AI), augmented and
virtual reality, home robots, and cloud computing, to name only a few of the sophisticated
technologies in development today, are capturing the imaginations of many. The
advanced capabilities of today’s emerging technologies are driving many academics,
entrepreneurs, and enterprises to envision futures in which their impacts on society will
be nothing short of transformative. Whether these emerging technologies will realize
these ambitious possibilities is uncertain. What is certain is that they will intersect and
interact with powerful demographic, economic, and cultural forces to upend the conditions
of everyday life.
The article “Is Google Making Us Stupid?” by Nicholas Carr discusses the effects
that the Internet may be having on our ability to focus, the difference in the knowledge that
we now have, and our reliance on the Internet. The points made throughout Carr’s
article are very thought-provoking, and his sources lend them weight. Carr
discusses the effects that the Internet has on our minds; he feels that the Internet is bad
for the brain. Carr writes that he spends much of his leisure time on the Net.
He feels that he cannot concentrate on long passages of reading because his brain
is used to the fast, millisecond flow of the Net: “For more than a decade now, I’ve been
spending a lot of time online, searching and surfing.” The supporting idea is that his mind
now “expects to take in information the way the Net distributes it--in a swiftly moving
stream of particles.” His brain wants to think as fast as the Internet goes. In summary,
the article is split into two pieces. The first is Nicholas Carr’s longing for his brain to be
one with the Internet, a man-made machine. The second part of the article is Google’s
standpoint on how our brains should be replaced by artificial intelligence.
C. Why the Future Does Not Need Us
Suppose that, with the accelerating improvement of technology, computer scientists
succeed in developing intelligent machines that can do all things better than human
beings can. In that case, presumably all work will be done by vast, highly organized
systems of machines, and no human effort will be necessary. Either of two cases might
then occur: the machines might be permitted to make all of their own decisions without
human oversight, or else human control over the machines might be retained.
If the machines are permitted to make all their own decisions, we cannot make
any conjectures about the results because it is impossible to guess how such machines
might behave. We only point out that the fate of the human race would be at the mercy of
the machines. It might be argued that the human race would never be foolish enough to
hand over all power to the machines. But this does not require that the human race
voluntarily turn power over to the machines, nor that the machines willfully seize power.
The human race might easily permit itself to drift into a position of such dependence on
the machines that it would have no practical choice but to accept all of the machines’
decisions.
As society and the problems that it faces become more and more complex and
machines become more and more intelligent, people will let machines make more of their
decisions for them, simply because machine-made decisions will bring better results than
man-made ones. Eventually a stage may be reached at which the decisions necessary
to keep the system running will be so complex that human beings will be incapable of
making them intelligently. At that stage the machines will be in effective control. People
will not be able to just turn the machines off because they will be so dependent on them
that turning them off would amount to suicide.
On the other hand, it is possible that human control over the machines may be
retained. In that case the average man may have control over certain private machines
of his own, such as his car or his personal computer, but control over large systems of
machines will be in the hands of a tiny elite, just as it is today, but with two differences.
Because of improved techniques, the elite will have greater control over the masses; and
because human work will no longer be necessary, the masses will be superfluous, a
useless burden on the system. If the elite are ruthless, they may simply decide to
exterminate the mass of humanity. If they are humane, they may use propaganda or other
psychological or biological techniques to reduce the birth rate until the mass of humanity
becomes extinct, leaving the world to the elite. Or, if the elite consist of soft-
hearted liberals, they may decide to play the role of good shepherds to the rest of the
human race. They will see to it that everyone’s physical needs are satisfied, that all
children are raised under psychologically hygienic conditions, that everyone has a
wholesome hobby to keep him busy, and that anyone who may become dissatisfied
undergoes “treatment” to cure his “problem.” Life will be so purposeless that people will
have to be biologically or psychologically engineered, either to remove their need for the
power process or to make them “sublimate” their drive for power into some harmless hobby.
These engineered human beings may be happy in such a society, but they will most
certainly not be free. They will have been reduced to the status of domestic animals.
Theodore Kaczynski, an American domestic terrorist also known as the
Unabomber, killed three people during a nationwide bombing campaign targeting those
involved with modern technology and wounded many others. One of his bombs gravely
injured David Gelernter, one of the most brilliant and visionary computer scientists.
Kaczynski’s actions were murderous and criminally insane, but his vision describes
unintended consequences, a well-known problem with the design and use of technology,
and one that is clearly related to Murphy’s law: “Anything that can go wrong, will.” Our
overuse of antibiotics has led to what may be the biggest such problem so far: the
emergence of antibiotic-resistant and much more dangerous bacteria. Similar things
happened when attempts to eliminate malarial mosquitoes using DDT caused them to
acquire DDT resistance; malarial parasites, likewise, acquired multi-drug-resistant genes.
The cause of many such surprises seems clear: The systems involved are
complex, involving interaction among and feedback between many parts. Any changes
to such a system will cascade in ways that are difficult to predict; this is especially true
when human actions are involved. Biological species almost never survive encounters
with superior competitors. Ten million years ago, South and North America were
separated by a sunken Panama isthmus. South America, like Australia today, was
populated by marsupial mammals, including pouched equivalents of rats, deer, and
tigers. When the isthmus connecting North and South America rose, it took only a few
thousand years for the northern placental species, with slightly more effective
metabolisms and reproductive and nervous systems, to displace and eliminate almost all
the southern marsupials.
In a completely free marketplace, superior robots would surely affect humans as
North American placentals affected South American marsupials (and as humans have
affected countless species). Robotic industries would compete vigorously among
themselves for matter, energy, and space, incidentally driving their price beyond human
reach. Unable to afford the necessities of life, biological humans would be squeezed out
of existence.
This is a textbook dystopia, and Moravec goes on to discuss how our main job in the
21st century will be “ensuring continued cooperation from the robot industries” by passing
laws decreeing that they be “nice,” and how seriously dangerous a human can be once
transformed into an unbounded superintelligent robot. Moravec’s view is that the robots
will eventually succeed us, and that humans clearly face extinction.
Accustomed to living with almost routine scientific breakthroughs, we have yet to
come to terms with the fact that the most compelling 21st-century technologies–robotics,
genetic engineering, and nanotechnology–pose a threat different from the technologies
that have come before. Specifically, robots, engineered organisms, and nanobots share
a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once–
but one bot can become many, and quickly get out of control. For instance, the sending
and receiving of messages across computer networks already creates opportunities for
out-of-control replication, as computer viruses and worms demonstrate. But while
replication in a computer or a computer network can be a nuisance, at worst it disables a
machine or takes down a network or network service.
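
Before turning to those physical-world risks, it is worth making the arithmetic of
self-replication concrete. A minimal sketch, assuming an idealized replicator in which
every copy makes one more copy per generation:

    # Idealized doubling: each generation, every existing copy replicates once.
    population = 1
    for generation in range(30):
        population *= 2
    print(population)  # 1073741824 -- over a billion copies after 30 generations

Real replicators are limited by raw materials and error rates, but the exponential shape
of this curve is why one escaped self-replicator differs in kind from one bomb.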
Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk
of substantial damage in the physical world. Each of these technologies also offers untold
promise: The vision of near immortality that Kurzweil sees in his robot dreams drives us
forward; genetic engineering may soon provide treatments, if not outright cures, for most
diseases; and nanotechnology and nanomedicine can address still other ills. Together, they
could significantly extend our average life span and improve the quality of our lives. With
each of these technologies, a sequence of small, individually sensible advances leads to
an accumulation of great power and, concomitantly, great danger.

What was different in the 20th century? Certainly, the technologies underlying the
weapons of mass destruction
(WMD)–nuclear, biological, and chemical (NBC)–were powerful, and the weapons an
enormous threat. But building nuclear weapons required, at least for a time, access to
both rare–indeed, effectively unavailable–raw materials and highly protected information;
biological and chemical weapons programs also tended to require large-scale activities.
The 21st-century technologies–genetics, nanotechnology, and robotics (GNR)–are so
powerful that they can spawn whole new classes of accidents and abuses. Most
dangerously, for the first time, these accidents and abuses are widely within the reach of
individuals or small groups. They will not require large facilities or rare raw materials.
Knowledge alone will enable their use; thus, we have the possibility not just of weapons
of mass destruction but of knowledge-enabled mass destruction (KMD), this
destructiveness hugely amplified by the power of self-replication.

Failing to understand the consequences of our inventions while we are in the rapture of
discovery and innovation seems to be a common fault of scientists and technologists; we have long
been driven by the overarching desire to know that is the nature of science’s quest, not
stopping to notice that the progress to newer and more powerful technologies can take
on a life of its own.

Because of the recent rapid and radical progress in molecular electronics–where
individual atoms and molecules replace lithographically drawn transistors–and related
nanoscale technologies, we should be able to meet or exceed the Moore’s law rate of
progress for another 30 years. By 2030, we are likely to be able to build machines, in
quantity, a million times as powerful as the personal computers of today. (The arithmetic
is simple: at Moore’s law’s historical pace of a doubling roughly every 18 months, 30
years gives about 20 doublings, and 2^20 is just over a million.) As this enormous
computing power is combined with the manipulative advances
of the physical sciences and the new, deep understandings in genetics, enormous
transformative power is being unleashed. These combinations open up the opportunity to
completely redesign the world, for better or worse: The replicating and evolving processes
that have been confined to the natural world are about to become realms of human
endeavor. Given the incredible power of these new technologies, should we not be asking
how we can best coexist with them? And if our own extinction is a likely, or even possible,
outcome of our technological development, should we not proceed with great caution?

How soon could such an intelligent robot be built? The coming advances in computing
power seem to make it possible by 2030. Once an intelligent robot exists, it is only a small
step to a robot species–to an intelligent robot that can make evolved copies of itself.

Genetic engineering promises to revolutionize agriculture by increasing crop yields while
reducing the use of pesticides; to create tens of thousands of novel species of bacteria,
plants, viruses, and animals; to replace reproduction, or supplement it, with cloning; to
create cures for many diseases, increasing our life span and our quality of life; and much,
much more. We now know with certainty that these profound changes in the biological
sciences are imminent and will challenge all our notions of what life is. Technologies,
such as human cloning, have in particular raised our awareness of the profound ethical
and moral issues we face. If, for example, we were to reengineer ourselves into several
separate and unequal species using the power of genetic engineering, then we would
threaten the notion of equality that is the very cornerstone of our democracy. Awareness
of the dangers inherent in genetic engineering is beginning to grow, as reflected in the
Lovinses’ editorial. The general public is aware of, and uneasy about, genetically modified
foods, and seems to be rejecting the notion that such foods should be permitted to be
unlabeled. But genetic engineering technology is already very far along. As the Lovinses
note, the USDA has already approved about 50 genetically engineered crops for unlimited
release; more than half of the world’s soybeans and a third of its corn now contain genes
spliced in from some other forms of life.

Unfortunately, as with nuclear technology, it is
far easier to create destructive uses for nanotechnology than constructive ones.
Nanotechnology has clear military and terrorist uses, and you need not be suicidal to
release a massively destructive nanotechnological device–such devices can be built to
be selectively destructive, affecting, for example, only a certain geographical area or a
group of people who are genetically distinct.

The effort to build the first atomic bomb was
led by the brilliant physicist J. Robert Oppenheimer. Oppenheimer was not naturally
interested in politics but became painfully aware of what he perceived as the grave threat
to Western civilization from the Third Reich, a threat surely grave because of the
possibility that Hitler might obtain nuclear weapons. Energized by this concern, he
brought his strong intellect, passion for physics, and charismatic leadership skills to Los
Alamos and led a rapid and successful effort by an incredible collection of great minds to
quickly invent the bomb. Physicists proceeded with the preparation of the first atomic test
called Trinity despite a large number of possible dangers. They were initially worried,
based on a calculation by Edward Teller, that an atomic explosion might set fire to the
atmosphere. A revised calculation reduced the danger of destroying the world to a
three-in-a-million chance. Oppenheimer, though, was sufficiently concerned about the result of
Trinity that he arranged for a possible evacuation of the southwest part of the state of
New Mexico. There was the clear danger of starting a nuclear arms race. Within a month
of that first, successful test, two atomic bombs destroyed Hiroshima and Nagasaki. Some
scientists had suggested that the bomb simply be demonstrated rather than dropped on
Japanese cities–saying that this would greatly improve the chances for arms control after
the war–but to no avail. With the tragedy of Pearl Harbor still fresh in Americans’ minds,
it would have been very difficult for President Truman to order a demonstration of the
weapons rather than use them as he did–the desire to quickly end the war and save the
lives that would have been lost in any invasion of Japan was very strong. The overriding
truth was probably very simple: As the physicist Freeman Dyson later said, “The reason
that it was dropped was just that nobody had the courage or the foresight to say no.”

It is important to realize how shocked the physicists were in the aftermath of the bombing of
Hiroshima on August 6, 1945. They described a series of waves of emotion: first, a sense
of fulfillment that the bomb worked, then horror at all the people that had been killed, and
then a convincing feeling that on no account should another bomb be dropped. Another
bomb was dropped, on Nagasaki, only three days after the bombing of Hiroshima. In
November 1945, three months after the atomic bombings, Oppenheimer stood firmly
behind the scientific attitude, saying, “It is not possible to be a scientist unless you believe
that the knowledge of the world, and the power which this gives, is a thing which is of
intrinsic value to humanity, and that you are using it to help in the spread of knowledge
and are willing to take the consequences.”

In our time, how much danger do we face not
just from nuclear weapons but from all of these technologies? How high are the extinction
risks? The philosopher John Leslie has studied this question and concluded that the risk
of human extinction is at least 30 percent, while Ray Kurzweil believes we have a better
than even chance of making it through, with the caveat that he has always been accused
of being an optimist. Not only are these estimates not encouraging, but they do not include
the probability of many horrid outcomes that lie short of extinction. Faced with such
assessments, some serious people are already suggesting that we simply move beyond
the Earth as quickly as possible. We would colonize the galaxy using von Neumann
probes, which hop from star system to star system, replicating as they go. This step will
almost certainly be necessary billions of years from now (or sooner if our solar system is
disastrously impacted by the impending collision of our galaxy with the Andromeda galaxy
within the next three billion years), but if we take Kurzweil and Moravec at their word, it
might be necessary by the middle of this century. What are the moral implications here?
If we must move beyond Earth this quickly for the species to survive, who accepts the
responsibility for the fate of those who are left behind? And even if we scatter to the stars,
is it not likely that we may take our problems with us or find, later, that they have followed
us? The fate of our species on Earth and our fate in the galaxy seem inextricably linked.

Another idea is to erect a series of shields to defend against each of the dangerous
technologies. The Strategic Defense Initiative, proposed by the Reagan administration,
was an attempt to design such a shield against the threat of a nuclear attack from the
Soviet Union. But as Arthur C. Clarke, who was privy to discussions about the project,
observed: “Though it might be possible, at vast expense, to construct local defense
systems that would only let through a few percent of ballistic missiles, the much-touted
idea of a national umbrella was nonsense.” Luis Alvarez, one of the greatest experimental
physicists of the century, remarked that the advocates of such schemes were very bright
guys with no common sense. Similar difficulties apply to the construction of shields against robotics
and genetic engineering. These technologies are too powerful to be shielded against in
the time frame of interest; even if it were possible to implement defensive shields, the
side effects of their development would be at least as dangerous as the technologies we
are trying to protect against. These possibilities are all, thus, either undesirable or
unachievable or both. The only realistic alternative is to limit the development of the
technologies that are too dangerous by limiting our pursuit of certain kinds of knowledge.

We have been seeking knowledge since ancient times. Aristotle opened his
Metaphysics with the simple statement: “All men by nature desire to know.” We have, as
a bedrock value in our society, long agreed on the value of open access to information
and recognize the problems that arise with attempts to restrict access to and development
of knowledge. In recent times, we have come to revere scientific knowledge. It was
Nietzsche who warned us, at the end of the 19th century, not only that God is dead but
that “faith in science, which after all exists undeniably, cannot owe its origin to a calculus
of utility; it must have originated in spite of the fact that the disutility and dangerousness
of the ‘will to truth,’ of ‘truth at any price’ is proved to it constantly.” It is this further danger
that we now fully face: the consequences of our truth-seeking. The truth that science seeks
can certainly be considered a dangerous substitute for God if it is likely to lead to our
extinction.

Our Western notion of happiness seems to come from the Greeks, who defined
it as “the exercise of vital powers along lines of excellence in a life affording them scope.”
Clearly, we need to find meaningful challenges and sufficient scope in our lives if we are
to be happy in whatever is to come. We must find alternative outlets for our creative
forces, beyond the culture of perpetual economic growth; this growth has largely been a
blessing for several hundred years, but it has not brought us unalloyed happiness, and
we must now choose between the pursuit of unrestricted and undirected growth through
science and technology and the clear accompanying dangers.
Activity: Film Viewing
Watch the movie “Artificial Intelligence,” also known as “A.I.,” by Steven
Spielberg, and answer the following questions.
1. At the beginning of the movie, Professor Hobby states that “to create an artificial
being has been the dream of man since the birth of science.” There’s probably an
element of truth to this. Why do we have this fascination?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
______________
2. One of the scientists at Cybertronics asks, “If a robot could genuinely love a
person, what responsibility does that person hold toward that mecha in return?”
Professor Hobby responds, “In the beginning, didn’t God create Adam to love
him?” What is implied by Professor Hobby’s answer?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
_________
3. Consider some of the imagery at the Flesh Fair: motorcycles, cowboy hats, heavy
metal music, flannel shirts. What statement does this make about the kind of
humans who oppose robots?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
______________
4. The owner of the Flesh Fair states that child mechas like David were built to
disarm humans by playing on human emotions. Nevertheless, the human
spectators feel sympathy for David, particularly because he pleads for his life.
What abilities would a robot have to exhibit before we would consider it an equal
with humans?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
____
5. Gigolo Joe tells David that his mother does not love him, but only loves what he
does for her. Is it plausible to think that a normal human could love a robot as
though it were a real human?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
PART III.
SPECIFIC ISSUES IN SCIENCE, TECHNOLOGY AND SOCIETY
Introduction
This section provides an overview of how writing evolved through time and how the
Internet came into being. A discussion of how information became accessible and
inexpensive through the invention of the printing press by Johannes Gutenberg is also
presented in this part. Emphasis is given to the influence of social media on people’s lives.
Further, this section of the module discusses different issues that concern society’s
health and well-being. Basic concepts and ideas on biodiversity, climate change, the use
of gene therapy, and nanotechnology are also presented here.
Learning Outcomes
At the end of this section, the students are expected to:
1. illustrate how the information age and social media have made an impact on our lives.
2. explain the interrelatedness of society, environment, and health.
3. discuss the costs and benefits (both potential and realized) of nanotechnology to
society.
4. describe gene therapy, its various forms, and its potential benefits and detriments to
global health.
5. identify the causes of climate change and discuss how to apply concepts of STS to
this specific environmental issue.