The Turing Test and the Chinese Room


Phil1000: Intro: Survey of Philosophy

Week 7 Lecture Notes

Could a Machine Think?

 

Maybe some of you use Siri as a personal assistant on your iPhone, or maybe you give commands to your Amazon Echo device in your living room. These are examples of recent machines that excel at answering questions from actual human beings. IBM’s computer “Watson” (2007) (p. 83) became known as the first major breakthrough in this kind of intelligent technology, and it is an early example of what we now call a smart device. This week you need to consider whether you think the likes of Siri, Echo, and Watson are intelligent, conscious beings capable of having thoughts and feelings, or whether they are “just” machines: parts put together to imitate the abilities of human cognitive faculties.

 

Yes, Machines Could Think!

On p. 83 Rachels presents a controversial argument that machines could be conscious beings. It is called the Piecemeal-Replacement Argument. As in many areas of philosophy, the argument is based on a thought experiment. At this point, you might be thinking that philosophers are crazy! Why can’t they just present things in “normal” terms and use everyday scenarios? The problem is that things are not so straightforward when we talk about abstract issues, such as what it means to be a conscious being. So don’t let the thought experiments in philosophy overwhelm you; they can be a fun way to get a grasp on a really difficult topic. As for the argument itself, the first part is meant to convince you that you could become a machine by replacing, piece by piece, all the parts of your (organic) brain with tiny chips. The second part presents an analogy that attempts to convince you that if scientists assemble a robot in a lab, this robot will be a thinking being like you.

 

Rachels covers two objections to the Piecemeal-Replacement Argument (and additional objections to these objections):

1) The Objection That Computers Can Do Only What They Are Programmed To Do (p. 85)

2) The Tipping Point Objection (p. 86)

 

The Turing Test

Make sure to watch the video “Turing Machine and the Chinese Room,” which is part of this week’s required material.

 

Around the middle of the twentieth century, the mathematician Alan Turing predicted that people would soon start thinking about computers as “thinking beings” in the context of artificial intelligence (AI). Turing asked whether there was a way to tell if a machine was thinking, and out of that question the Turing Test was born (p. 88).

 

This is how the experiment goes: place the participants, a human being and a computer, in separate rooms and let them communicate with an “interrogator” only via typed messages. The interrogator’s task is to determine, based only on the typed replies he receives, which respondent is the real person and which is the computer. The thought is that if he cannot reliably do this, then we have created a machine that counts as having a mind and the mental properties of a person. Have any of you watched Blade Runner? If so, you might be able to find some interesting parallels here.
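
To make the setup concrete, here is a minimal sketch of the imitation game in Python (my own illustration, not anything from Rachels or Turing): an interrogator exchanges typed questions with two hidden respondents, one a person and one a very simple program, and then guesses which label hides the machine.

import random

def machine_reply(message):
    # Stand-in chatbot (hypothetical): it just deflects the question back.
    return "Interesting. What makes you ask about '" + message + "'?"

def human_reply(message):
    # In the real test a hidden person types this reply.
    return input("(hidden human) reply to '" + message + "': ")

def imitation_game(rounds=3):
    # Randomly hide the machine behind label A or B.
    repliers = [machine_reply, human_reply]
    random.shuffle(repliers)
    respondents = {"A": repliers[0], "B": repliers[1]}

    for _ in range(rounds):
        question = input("Interrogator, type a question: ")
        for label in ("A", "B"):
            print(label + ": " + respondents[label](question))

    guess = input("Interrogator, which respondent is the machine (A/B)? ").strip().upper()
    if respondents.get(guess) is machine_reply:
        print("Correct: you spotted the machine.")
    else:
        print("You were fooled; on this exchange the machine 'passed'.")

imitation_game()

Nothing about this little program is intelligent, of course; the point is only to show the structure of the test: the interrogator sees typed text and nothing else.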

 

While we are on the subject of movies, if you want to watch a brilliant interpretation of what Turing and his work might have been like, I suggest The Imitation Game (currently on Netflix, FYI). The movie is a portrait of Turing as a lonely, somewhat awkward genius, and it highlights the work that went into breaking the Enigma codes during World War II. Besides the focus on code breaking, the movie pays attention to the heartbreaking fact that, despite his work for the war effort, Turing was ultimately condemned for being gay, which led to him ending his own life in 1954.

 

Another example often brought up in these discussions is ELIZA, a natural language processing computer program created by Joseph Weizenbaum at MIT between 1964 and 1966. What is worth noting about ELIZA is not the sophistication of her design, which was in fact quite modest, but rather the way people came to depend on ELIZA as a personal therapist. Ultimately, Weizenbaum concluded that humans should not experiment with AI because it is too dangerous.
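
To give you a sense of how simple ELIZA’s machinery really was, here is a tiny ELIZA-style responder in Python (an illustration of the general technique, not Weizenbaum’s actual rules): a handful of patterns, each paired with a reply template, plus a step that “reflects” pronouns back at the speaker.

import re

# First/second-person words to swap so replies point back at the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# (pattern, reply template) pairs, tried in order; the last is a catch-all.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my exams"))  # -> Why do you feel anxious about your exams?
print(respond("My brother never calls me"))      # -> Tell me more about your brother never calls you.

A few rules like these were enough to make people feel heard, which is exactly why Weizenbaum found the reaction to ELIZA so unsettling.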

 

Objections to the Turing Test

I will briefly recap some main objections to the Turing Test. Keep in mind that the test is an application of Behaviorism, the (today discredited) idea that thinking and understanding are to be analyzed in terms of behaviors and behavioral dispositions. Today the predominant views are instead cognitivist: we try to explain a person’s psychological dispositions by appeal to the symbols and representations in his or her mind.

 

1) The Chinese Room Argument (p. 91): This argument is a famous thought experiment by the American philosopher John Searle. Imagine that you are locked in a room filled with books. Through a slot in the door you are passed notes covered in symbols that you do not understand. The books around you contain lists of rules (algorithms), and your instructions tell you to pass back notes with the correct answering symbols, which you find by looking the incoming symbols up in the books. Now consider this: outside the door is a woman who speaks only Chinese. She is the one passing you the notes, and you are, per your instructions, answering her with symbols you look up. When the woman gets your replies, she genuinely believes that she is communicating with someone who speaks Chinese too.

 

Searle says that this is just how a computer works: your function in that room is the same as the function of a computer. The computer gets input, consults its available data files, and outputs an answer. However, the computer can never be said to truly understand Chinese; it is not actually thinking, but simply simulating thought. This leaves the problem of where meaning comes from. According to Searle, computers can only work on the syntactic properties of symbols. This is fancy language, but it just means that computers can only be concerned with symbols and the structures of language, not with semantic properties, which concern the meaning of language. In short, Searle concluded that a computer could never be a properly thinking system like the human mind.
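
Searle’s point that a program traffics only in the shapes of symbols can be made concrete with a toy sketch (my illustration, with made-up entries, not something from the textbook): the “room” below matches each incoming string against a rule book and hands back the paired string, and at no point does anything in the program represent what those strings mean.

# A toy "Chinese Room": the rule book is a bare lookup table from input symbols
# to output symbols. The entries are hypothetical placeholders; nothing here
# encodes the meaning of any of them.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def room(note):
    # A purely syntactic step: match the shape of the input, return the paired shape.
    return RULE_BOOK.get(note, "对不起，我不明白。")

print(room("你好吗？"))  # A fluent-looking reply, yet no understanding occurred anywhere.

The replies look fluent to the woman outside the door, but, as Searle would put it, all the program has is syntax; the semantics is supplied entirely by the people reading and writing the notes.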

The disagreement between the Turing Test and the Chinese Room comes down to this: the Turing Test implies that a computer like the ones we have considered would have mental states, whereas the Chinese Room seems to show that this is just not the case.

 

The “What More Do You Want?” Objection (to the Chinese Room Argument) (p. 93): Some people claim that we are actually doing great at creating AI that matches human intelligence. Their reasoning goes that if we could create machines like the ones in science fiction movies, then what more do you want? By the way, has anyone watched the HBO show Westworld? I personally find it a bit creepy, but it does raise some of the most important questions that we could ask about the use and creation of AI. The robots on Westworld could fool you into thinking they actually have consciousness, and (hypothetically speaking) if that is how far our progress in AI has taken us, what should we make of it? We haven’t talked about ethics yet (what the right thing to do is), but AI, in the sense that Westworld portrays it, raises some really serious ethical questions. If this issue intrigues you, check out Christopher Orr’s excellent article in The Atlantic, “Sympathy for the Robot” (it is optional, as in you won’t get quizzed on it, but it is an interesting read).

 

2) Objections from Suilin Lavelle’s video: a) the Turing Test is language-based, b) the Turing Test is too anthropocentric (focused on human intelligence), and c) the Turing Test does not take into account the inner states of a machine.

 

Finally, in the Daniel Dennett interview (please watch the video (second link) and, if you like, have a look at Dennett’s bio (first link)), pay attention to what Dennett says about the Turing machine and Descartes. You will not get quizzed on Descartes’ actual philosophical outlook, which is something we could devote a whole section of the semester to, but only on what Dennett says in the video.

 
