Bodies Matter: How the Turing Test Is Too Narrow


Lucy Hodgman
Math/Theater 209
3/12/2007

SPOILER ALERT:

If anyone follows Battlestar Galactica or plans to start watching it, you should know that this paper includes important plot and character details from the miniseries through the third (current at the time of writing) season.

Imagine you are face-to-face with another person, having a conversation. It might be someone you have just met, or it might be someone you have known for a while. The conversation goes just as you would expect any conversation to go: in other words, this person passes the Turing test with flying colors. No surprise, right? It certainly would be a surprise if you learned that this person you were talking with was actually artificially created. This is what the human population faces in Battlestar Galactica, a science-fiction television show whose central theme is the interaction between humans and a species of humanoid robots called cylons. In other circumstances the difference between human and cylon might be negligible, but the humans are at war with the cylons, and being able to tell the difference between one's own kind and the enemy is vital. The cylons do effortlessly pass the Turing test, unfortunately for the humans, who struggle to tell who is cylon and who is human.

Battlestar Galactica is a remake of a 1970s series. The original is not held in nearly as much esteem as the new show, possibly because in the old show the cylons were like robots in any other TV show or movie: they were clunky metal. The plot twists and mysteries raised by humanoid cylons did not exist. In the new series, much of the interest comes from the questions, some subtle and some obvious, that arise from the existence of an enemy that is physically indistinguishable from a friend. I will give an example of one of the more subtle questions, but first I need to offer some more background on cylons.

There are twelve cylon models; that is, there are twelve distinct human forms that a cylon can take. Of each of the twelve models there are many copies; the audience and the human characters do not know how many copies of each model exist. When one copy's body dies, its consciousness is downloaded into a new body on a cylon baseship. (Much of the series takes place in outer space, because the cylons have attacked the humans' twelve colonies on planets, and the humans are now fleeing from the cylons, who are trying to kill those who are left.)

The difficulty is that although the humans know that there are twelve models, they did not create these models; it is unclear how they arose, but they may have been created by or evolved from the earlier, clunky metal cylons called centurions, which still exist, presumably for labor purposes. The humans do not know how to recognize a cylon they have not yet been tipped off about, giving the cylons an enormous advantage: they can infiltrate human ships (including the big warship for which the show is named). Furthermore, some of the cylons are programmed not to know that they themselves are cylons; they are given false memories and fully believe that they are human, but in crucial moments their cylon side takes over and launches an attack against the humans they are among.

This leads to the example I mentioned earlier. In light of these sleeper agents, the question of whether someone is a cylon and therefore an enemy, or human and therefore a friend, is not as clear-cut as it might be. One of the main characters from the outset of the show, a pilot named Sharon whom the audience grows to love, turns out to be a cylon sleeper agent. The audience learns this at the end of the miniseries, but Sharon and the rest of the crew take longer to figure it out.

Sharon is as shocked and horrified as anyone else (if not more so) as this realization eventually dawns on her. She is consequently both friend and enemy to the crew of the Galactica, who do eventually turn on her ruthlessly after she commits an atrocious and blatant violent crime while in cylon mode. Shortly thereafter she is murdered by a crewmate, whose subsequent punishment is mild.

Meanwhile, another human character, Helo, stranded on one of the twelve colonies after the nuclear attack while the rest of the show carries on in space, comes upon another copy of Sharon. Helo, however, does not know she is a cylon. This copy of Sharon, unlike the one on the Galactica, knows she is a cylon, but she has also been implanted with the memories of the Sharon whom Helo knew on the Galactica. She is therefore adeptly able to pretend that she is the same Sharon he has always known, and she makes up a story about how she too came to be stranded on the planet. To make a very long story short, the two fall in love and conceive a child; Helo finds out that Sharon is a cylon but, after a brief period of horror and considering killing her, decides he doesn't care; the two return to the Galactica; and after much time and much wariness from the crew, this copy of Sharon is accepted as a pilot and the couple gets married. They are only marginally accepted by everyone else, and Sharon complains (surely correctly) that she has to fight for acceptance every single day. Once again the line is blurred between cylon and human. Or perhaps more accurately, we are able to see how fine the line has always been.

What does all of this mean in the context of the Turing test? It does not refute it: since any cylon (of a model as yet undiscovered by the humans) can pass easily as human in face-to-face conversation with a human, surely one could pass as human under the more limited constraints of the Turing test.

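Those constraints are worth spelling out: the test reduces everyone involved to an anonymous stream of typed text, so the judge never sees a body at all. The Python sketch below is only an illustration of that setup; the sample question and the evasive reply are borrowed from Turing's 1950 paper, while the toy interlocutors and the coin-flipping judge are hypothetical stand-ins, not anything from Turing's own formulation.

```python
import random

def imitation_game(questions, human_reply, machine_reply, judge):
    """Return True if the judge fails to identify the machine."""
    # Hide both parties behind anonymous, text-only channels.
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # randomize which channel hides the machine
        channels = {"A": machine_reply, "B": human_reply}
    # The judge's only evidence: transcripts of typed answers.
    transcripts = {c: [reply(q) for q in questions] for c, reply in channels.items()}
    accused = judge(transcripts)  # the channel the judge believes is the machine
    truly_machine = "A" if channels["A"] is machine_reply else "B"
    return accused != truly_machine

# Hypothetical stand-ins; the question and reply come from Turing (1950).
questions = ["Please write me a sonnet on the subject of the Forth Bridge."]
human = lambda q: "Count me out on this one. I never could write poetry."
machine = lambda q: "Count me out on this one. I never could write poetry."
coin_flip_judge = lambda t: random.choice(sorted(t))
print(imitation_game(questions, human, machine, coin_flip_judge))
```

Nothing in the transcripts carries any trace of embodiment, which is exactly the point: a cylon who survives face-to-face scrutiny has already cleared a higher bar.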
This paper is therefore not an attempt to attack the Turing test per se. It instead intends to propose challenges to Turing's assertion that embodiment is useless in the attempt to create artificial intelligence. Turing plainly stated, "I certainly hope and believe that no great efforts will be put into making thinking machines with the shape of the human body." It is easy enough to see, and agree with, his reasons behind this statement. Even if one takes a reductionist view, in which cognition is reducible to nothing but a material phenomenon, it is difficult to describe thinking in a meaningful way without imagining it as separate from the brain. The Turing test does this while avoiding dualism (in which thought, or the soul, is an entirely different substance from ordinary matter). Cognition is seen as a program, and this seems like a fair way to approach it. Turing did not want a critic to dismiss an artificial intelligence as unintelligent on the basis of its looks, when intelligence is something that could theoretically be a pattern or program instantiated in any material. Don't judge a book by its cover! It seems simple enough.

It is not that simple, though. As French points out, our experiences in the world hugely impact the development of our intelligence. He notes that a person with eyes on his knees instead of in his head would have a very different conception of the world, and he claims that this person would not pass the Turing test. Why discriminate against him? Clearly, his difference has not made him less intelligent; it has only given him a differently framed intelligence than we are used to seeing. We would use our own intelligence, upon encountering him face-to-face, to determine that he is in fact still a creature of reason. In so determining this, we would be using knowledge of his physical structure: seeing how his body is arranged would help us deduce why he interacts with us in the way that he does. It relates to how he interacts with the world.

Based on his concept of non-embodied artificial intelligence development, Turing offered several possible dates for when a machine might be able to pass his test. During the past half-century, it has become clear that he was overly optimistic. What we have learned, if anything, is that intelligence is much more complex and intricate than we had assumed. No one fully understands how it works; people even disagree over its definition. If we do not know what intelligence is or how it functions, it is a stretch to see how we might be able to create it in a bottom-up way with explicit rules. We do not know what those rules might be.

Moreover, people are generally rather bad at knowing what they know. The study of this is called metacognition, and it is interesting because in varying situations people misjudge what they know. A striking example is a condition called blindsight, in which damage to the visual cortex of the brain results in "vision without consciousness" (Blackmore 264). Patients have a blind spot where they swear they can see nothing, but experimental results show that they are better than chance at determining what is in that blind spot. In short, we simply do not always know what we know, and it would be ill-informed to assume that we can create artificial intelligence based on what we consciously know (or think we know) about our cognitive processes.

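"Better than chance" is a statistical claim, and it can be made precise. The sketch below shows the usual arithmetic behind it; the trial counts are hypothetical, invented for illustration rather than taken from Blackmore or any actual blindsight study.

```python
from math import comb

def p_value_if_guessing(correct, trials, chance=0.5):
    """One-sided probability of at least `correct` hits by pure guessing."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical numbers: a patient who insists she sees nothing still names
# the stimulus correctly on 70 of 100 two-alternative trials.
print(p_value_if_guessing(70, 100))  # about 4e-05: far too unlikely to be luck
```

The striking part is the mismatch: the verbal report says "no vision," yet the guesses carry information the patient does not know she has.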
Until we learn much more about how our brains work, the more reasonable and expeditious way to create intelligence would be through the same mechanisms by which we ourselves become intelligent. Granted, everyone has to start with a structure that is ready to learn. Learning itself, however, is the crucial part; trying to build a ready-made intelligent creature from the ground up would be extremely difficult at this point. It is possible to make a non-embodied AI agent that learns or adapts; Pinker gives an example of this when discussing a computer program mimicking the development of an eye. This works because the parameters of the program mimic the environment that the light-sensitive organ is in, as well as the evolutionary process that we understand ourselves to have gone through.

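For a sense of what such a program involves, here is a toy sketch of simulated evolution. It is not the program Pinker describes, just a generic mutate-and-select loop; the "curvature" parameter and the acuity function are hypothetical stand-ins for real optical properties.

```python
import random

def acuity(curvature):
    """Hypothetical fitness: vision is sharpest at an ideal curvature of 1.0."""
    return -(curvature - 1.0) ** 2

def evolve(generations=300, pop_size=50):
    # Start from a population of nearly flat light-sensitive patches.
    population = [random.uniform(0.0, 0.1) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the simulated environment keeps the most acute half.
        survivors = sorted(population, key=acuity, reverse=True)[: pop_size // 2]
        # Reproduction: each survivor leaves two slightly mutated offspring.
        population = [c + random.gauss(0, 0.01) for c in survivors for _ in range(2)]
    return max(population, key=acuity)

print(round(evolve(), 3))  # drifts close to the ideal curvature of 1.0
```

The trick here is that the whole "environment" fits inside one fitness function. That is what makes the eye example tractable, and it is exactly what is missing in the cases discussed next.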
But there are situations in which we cannot as easily translate an adaptive process from our physical environment to a virtual one, and that is where embodiment comes in. We all know that there are things that take forever to explain to someone but only a moment to demonstrate; similarly, there are things we talk about as something you have to experience for yourself. Also, as Boden explains, "[l]anguage ... has many characteristics arguably due to the fact that we are bodily creatures moving face-forward in a material world. Countless linguistic expressions are metaphors, living or dead, grounded in our bodily experience" (233). Presumably, trying to catalogue and notate these expressions would be difficult and time-consuming. How better for an AI-creature to understand them than to develop them for itself, interacting with the same world that we do? From here it follows that such an AI-creature would be more likely to pass the Turing test, since its learning and development would have more closely followed that of a human.

In addition to embodied learning making an entity more likely to be confused with a human, people also judge intelligence based on what they see, making a humanoid robot likely to be taken for a human with human-like qualities. In Battlestar Galactica, the cylons did not always look like humans. A common sentence heard in the first season of the show, as the humans start to catch on and spread rumors, is "The cylons look like us now." It follows naturally (so naturally that no one actually mentions it) that their intelligence is also like ours. And throughout the show this proves to be true: although we are never entirely sure of their motivations, the cylons are not blind killers. Their goals are as intricate as the humans'. They are not unswervingly faithful to their own kind, as the second copy of Sharon shows us with her active and conscious choice to fight for the humans and marry one of them.

On closer examination, cylons are not precisely like humans. They are similar enough that they can easily appear this way for their own purposes, but once the human characters or the audience know that a certain person is a cylon, certain things make the difference clear. For instance, cylons have greater stamina and strength than humans. Also, at least one cylon model, Leoben, claims to be able to see the future and the past, and events lend at least some credibility to his claim. When the humans realize this, do they suddenly see cylons as less intelligent? Of course not. Cylons simply have a slightly different sort of intelligence than humans. The underlying qualities are the same: they can reason, innovate, learn, and manipulate. This, on top of the fact that they look like humans, means there is no question in the humans' minds that the cylons are intelligent. (Whether they should have any rights is a subtler moral question, which I will discuss later.) Ironically, whatever the cylons' reasons for looking like humans, that appearance is itself one of the instruments of their skill at manipulation.

Along these lines, in "The Soul of the Mark III Beast" by Terrel Miedaner, one character tries to convince another that humans are machines and that machines are a form of life themselves. Dirksen, the defensive character, maintains that she differentiates between breaking a machine and killing an animal; Hunt counters that Dirksen eats meat and that therefore her aversion "isn't so much to killing per se as it is to doing it [her]self"; it has nothing to do with respect for life and everything to do with the animal's resistance to death: its struggling, looking pathetic, and pleading.

Hunt aims to prove this to Dirksen by offering her the chance to smash a robotic beetle. Throughout the excerpt, both he and the author talk about the machine using language that implies life and a mind. Dirksen immediately finds her task difficult but continues, determined. The excerpt ends thus:

Dirksen pressed her lips together tightly, raised the hammer for a final blow. But as she started to bring it down there came from within the beast a sound, a soft crying wail that rose and fell like a baby whimpering. Dirksen dropped the hammer and stepped back, her eyes on the blood-red pool of lubricating fluid forming on the table beneath the creature. She looked at Hunt, horrified. "It's ... it's ..."

"Just a machine," Hunt said, seriously now. "Like these, its evolutionary predecessors." His gesturing hands took in the array of machinery in the workshop around them, mute and menacing watchers. "But unlike them it can sense its own doom and cry out for succor."

"Turn it off," she said flatly.

Hunt walked to the table, tried to move its tiny power switch. "You've jammed it, I'm afraid." He picked up the hammer from the floor where it had fallen. "Care to administer the death blow?"

She stepped back, shaking her head as Hunt raised the hammer. "Couldn't you fix ..."

There was a brief metallic crunch. She winced, turned her head. The wailing had stopped, and they returned upstairs in silence.

This is not about intelligence, directly. It is, however, about intuitions that arise from interactions with other beings. It also highlights the way we decide how much respect to give something. Something that we perceive as alive or (at least partially) sentient may not get as much respect as something that we perceive as intelligent, but it gets more than something we perceive as lifeless. Anyone would smash an alarm clock before smashing a purring, warm metal beetle that scurries away when you try to hit it with a hammer but trusts you when you pick it up (as the creature in Miedaner's story does).

A related case of compassion appears in the third season of Battlestar Galactica, when the crew figures out a way to potentially kill off the cylons once and for all. Ever since the attack on the colonies, the humans who are left (only about 40,000) have been fleeing from the cylons, trying to keep their species alive.

Most of the people in power agree that this opportunity to kill the cylons is just what they need. It would ensure the safety of the human species, and they would finally be able to breathe again. Only one character publicly objects: Helo. He claims that to kill all of the cylons would be genocide. Even if they are machines, he says, it is clear that they are people. To demolish their entire species would be to do exactly what they had tried to do to the humans; it would be an act of genocide. Other characters disagree, saying that to leave the cylons be when they have the chance to kill them, given that the cylons instigated the war in the first place, would be crazy. Furthermore, they do not believe that the cylons are people: most humans believe that cylons, being machines, are distinct from humans, despite their indistinguishable behavior.

What makes the genocide argument all the more interesting in this case are, of course, Helo's personal feelings for Sharon, his cylon wife. Sharon would (we know) be immune to the killing, but this does not change Helo's sense of right and wrong. (Sharon, of course, agrees with him, but she is not involved in the official argument.) He has come to see cylons as a different type of person: not human, perhaps, but still people.

Other humans have done as much as they can to keep the cylons in the category of "other." Even for them, this is not always easy. When Starbuck, a pilot, talks to Sharon while Sharon is being held in the brig of the Galactica, she remarks that sometimes when she looks at Sharon, she still sees the pilot she used to know: the Sharon who botched her landings, and the Sharon who was having a supposedly clandestine affair that everyone actually knew about. She just does not see a machine. In cases where there is personal history, it is difficult to think of cylons as different from humans. That, combined with the inability to physically discern friend from enemy, goes to show just how blurry the line is between what we want to kill and what we cannot bear to, and how much an emotional connection (as with the metal beetle) can make a difference.

None of this is meant to, or can, criticize the Turing test internally. If intelligence truly is separable from embodiment, then the Turing test is right on target. Rather, I have hoped to bring into question Turing's assumption that embodiment is the wrong road to go down. Granted, part of his reservations may have been about feasibility: he speculated that manufactured bodies would have something like the unpleasant quality of artificial flowers. Whether he considered this inevitable regardless of technological advances or simply true at the time, it seems rather defeatist and possibly irrelevant. Miedaner's thought experiment convincingly shows that the specific plausibility of a body is not crucial; the interaction between emotional displays and bodies (such as running away whimpering when someone attempts to strike you) is what causes us to relate to other creatures.

The example of the cylons obviously cannot dispel worries such as Turing's, since cylons are science fiction. Nevertheless, they provide a thought experiment complementary to Miedaner's, suggesting that the more something looks and acts like us, the more seriously we take it. These aids to detection, along with the power of interaction with the environment in developing intelligence, provide persuasive reasons for going beyond the narrow and misguided track of focusing on software only. Yes, if we want to understand intelligence, we will need to understand the software; but if we just want to create it, there is a more direct way.

References

Battlestar Galactica. Executive producers David Eick and Ronald D. Moore. SCI FI, 2003-2007.

Blackmore, Susan. Consciousness: An Introduction. New York: Oxford University Press, 2004.

Boden, Margaret A. "Could a Robot Be Creative -- And Would We Know?" Thinking about Android Epistemology. Ed. Kenneth M. Ford, Clark Glymour, and Patrick J. Hayes. Menlo Park, CA: American Association for Artificial Intelligence, 2006. 217-239.

French, Robert M. "Subcognition and the Limits of the Turing Test." The Turing Test: Verbal Behavior as the Hallmark of Intelligence. Ed. Stuart Shieber. Cambridge, MA: MIT Press, 2004.

Miedaner, Terrel. "The Soul of the Mark III Beast." The Mind's I. Composed and arranged by Douglas R. Hofstadter and Daniel C. Dennett. New York: Basic Books, 1981. 109-113.

Pinker, Steven. How the Mind Works. New York: W. W. Norton & Company, 1997.

Extra bit! Part of the cast of Battlestar Galactica: so far in the series, the audience (and the characters) know that two of these people are cylons (the rest are assumed to be human, but we may be proved wrong). Bet you can't tell which are which!

http://scifi.about.com/library/graphics/bgs22.jpg
