John Searle formulated the Chinese Room Argument in the early 1980s as an attempt to prove that computers are not genuinely cognitive systems. In short, though the emergence of artificial and computational systems has rapidly expanded the possibilities of machine knowledge, Searle uses the Chinese Room Argument to show that computers are not cognitively independent. Searle developed two areas of thought concerning the independent cognition of computers: the definitions of weak AI and strong AI. In essence, these two types of AI have a fundamental difference. Weak AI was defined as a system that merely simulates the human mind, whereas strong AI holds that an appropriately programmed computer literally is a mind.
The assumption is that the person is capable of understanding Chinese simply because he can manage to assemble a set of answers to questions that would be indistinguishable from those of a person who speaks Chinese. The problem is that the person in the room does not understand any of the answers, but is simply following instructions. Searle uses the system's ability to pass the Turing test as a parameter in the thought experiment, even though the person would still not understand Chinese. Searle proceeds to refute the claims of strong AI one at a time by positioning himself as the one who manipulates the Chinese symbols. The first claim is that a system that can pass the Turing test understands the input and output. Searle replies that, as the "computer" in the Chinese room, he gains no understanding of Chinese by simply manipulating the symbols according to the formal program, in this case the complex rules (Searle, 1980). It is not necessary for the operator to have any understanding of what the interviewer is asking, or of the replies that are produced. He may not even know that there is a question-and-answer session going on outside the room. The second claim of strong AI, which Searle objects to, is the claim that the system explains human understanding. Searle asserts that since the system is functioning, in this case passing the Turing test (Bridgeman, 1980)
Through this, proponents of the computational view argue: if a human and a machine receive the same input and respond with the same output, how are they any different from one another? When given the same purpose, humans and machines produce the same response, and therefore machines may have minds. The computational theory of mind claims that "computers behave in seemingly rational ways; their inner program causes them to behave in this way and therefore mental states are just like computational states." The argument continues: "If logic can be used to command, and these commands can be coded into logic, then these commands can be coded in terms of 1s and 0s," thereby giving modern computers logic. Through this, how is one to tell that robots don't have minds if they use logic just as humans do? When the purposes of humans and machines are the same, they may process information differently in order to fulfill that purpose, yet still produce the same output. Because humans and machines receive the same input and return the same output, on this view they both have minds, along with the functions and processes needed to produce that output.
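The claim that logical commands can be coded in 1s and 0s can be made concrete with a short sketch. This is purely illustrative (the function names are my own labels, not from the essay): the basic truth-functional gates, written over the integers 0 and 1, from which any logical command can be composed.

```python
# Logical commands reduced to 1s and 0s: the elementary gates a computer's
# hardware implements, expressed over the integers 0 and 1.

def AND(a: int, b: int) -> int:
    return a & b          # 1 only when both inputs are 1

def OR(a: int, b: int) -> int:
    return a | b          # 1 when at least one input is 1

def NOT(a: int) -> int:
    return 1 - a          # flips 0 to 1 and 1 to 0

# Composite commands are built from the gates, e.g. the command
# "if a then b" rendered as NOT(a) OR b:
def IMPLIES(a: int, b: int) -> int:
    return OR(NOT(a), b)

print(IMPLIES(1, 0))      # the command "if 1 then 0" evaluates to 0
```

On this picture, a computer's "logic" is nothing over and above such bit manipulations, which is exactly the point Searle's argument presses on.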
One of the hottest topics that modern science has focused on for a long time is the field of artificial intelligence, the study of intelligence in machines or, according to Minsky, "the science of making machines do things that would require intelligence if done by men" (qtd. in Copeland 1). Artificial intelligence has many applications across many areas. "We often don't notice it but AI is all around us. It is present in computer games, in the cruise control in our cars and the servers that route our email" (BBC 1). Different goals have been set for the science of artificial intelligence, but according to Whitby the most frequently cited statement of the goal of AI is provided by the Turing test. This test is also called the imitation game.
The most prominent example of the concept of a machine being intelligent in the manner of this so
Even with the correct programming, a computer cannot freely think for itself with its own conscious thought. John Searle is a philosopher of mind and language at UC Berkeley. Searle's Chinese Room Argument is directed against the premise of strong AI. He argues that even though a computer may have the ability to compute using syntax (weak AI), it is not able to understand the meaning behind the words it is communicating. Semantics convey both intentional and unintentional content in communication, even though a computer could be programmed to recognize which words would convey the correct meaning of a symbol.
The essay "Watson Doesn't Know It Won on Jeopardy!" was written by John Searle on February 23, 2011 and probes how IBM's computer Watson has no human understanding whatsoever. Searle begins by clearing up common misconceptions about what a computer actually is. He explains that a computer is simply a machine that manipulates symbols according to a program's instructions, and that the computational power of a computer is not human understanding; it is in fact a measure of how fast the computer can manipulate symbols. Searle then explains the process of how a computer works in human terms. He explains that a computer does not understand human language at all. A computer just has a program (in binary) that tells it how to manipulate symbols.
Understanding the notion of the Chinese room requires a bit of explanation. Imagine you are a solely English-speaking person in a room by yourself, armed with a pencil, and the only things on the walls are a series of instructions and rules. There is a door in the room, and on the other side is a Chinese speaker. This Chinese speaker slides cards under the door on which are written Chinese symbols and sentences. The instructions written on the walls allow you to respond appropriately to each symbol, well enough that the Chinese speaker is fooled into thinking you have a formidable grasp of Chinese. Now imagine that instead of a Chinese speaker outside the room, there is an English speaker, and the same things are written. You would still respond appropriately, convincing the other that you are a native English speaker, which of course you are. Searle holds that the two positions differ in that, in the first instance, you are "manipulating uninterpreted formal symbols," simply an instantiation of a computer program, while in the second instance you actually understand the English being given to you.
John Searle (1980, in Cooney, 2000) provides a thought experiment, commonly referred to as the Chinese room argument (CRA), to show that computers programmed to simulate human cognition are incapable of understanding language. The CRA asks us to consider a scenario in which Searle, who is illiterate in Chinese, finds himself locked in a room with a book containing Chinese characters. Additionally, he has another book with a set of instructions written in English (which he understands) that allows him to match and manipulate the Chinese characters so that he can provide appropriate written responses (in Chinese) to incoming questions, which are also written in Chinese. Moreover, Searle has a pile of blank paper on which he jots down his answers. Eventually, Searle becomes so proficient at providing responses that the quality of his answers matches that of a native Chinese speaker. Thus, Searle in the CR functions as a computer would: he is the system, the books are the program, and the blank paper acts as storage.
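The mapping above (operator as system, books as program, paper as storage) can be sketched as a toy program. This is a deliberately minimal illustration, not anything from Searle's text: the "rule book" is a plain lookup table pairing incoming symbol strings with outgoing ones, and the placeholder strings stand in for Chinese sentences the operator cannot read.

```python
# A toy rendering of the Chinese room: the rule book is a lookup table
# pairing incoming symbol strings with prescribed replies. The operator
# needs no notion of what any symbol means. The strings are placeholders,
# not real Chinese sentences.

RULE_BOOK = {
    "符号序列甲": "符号序列乙",   # rule: when shown pattern A, write pattern B
    "符号序列丙": "符号序列丁",   # rule: when shown pattern C, write pattern D
}

def operator(incoming: str) -> str:
    """Follow the rule book mechanically; no understanding is involved."""
    # Unmatched input gets a default symbol, also chosen without comprehension.
    return RULE_BOOK.get(incoming, "？")

# From outside the room the replies look competent, yet nothing in this
# program represents what any symbol is about.
print(operator("符号序列甲"))
```

The point of the sketch is that nothing changes if the table grows large enough to pass a Turing test: the operator's procedure is still pure symbol matching, which is exactly the intuition the CRA trades on.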
Searle begins his argument very similarly to Lycan, but instead of defining AI outright, he uses the terms "strong" and "weak" to distinguish two views of AI. Searle says, "According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool" (Searle, p. 344). In turn, Searle says that "according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."
These AI will be able to solve mysteries that no human can. When this becomes a reality, Kelly believes it will take humans to a cultural edge. Since he believes AI is its own form of thinking, he describes AI as
According to the creators of the experiment, proponents of strong artificial intelligence, those who claim that adequate computer programs can understand natural language or possess other properties of the human mind rather than simply simulating them, must admit either that the room understands the Chinese language or that passing the Turing test is not sufficient proof of intelligence. For the creators of the experiment, none of the components of the experiment understands Chinese; therefore, even if the set of components passes the test, passing does not confirm that the person actually understands Chinese, since, as we know, Searle does not know that language.
He does not understand Chinese at all (neither writing, reading, nor speaking).
The argument known as the "Chinese room" was proposed by John Searle and later reproduced in his other works. The argument is directed against the position in the philosophy of mind called functionalism. More precisely, it is directed against machine functionalism, or, as Searle puts it, against the strong version of artificial intelligence.
* Developments in computer science led to parallels being drawn between human thought and the computational functionality of computers, opening entirely new areas of psychological thought. Allen Newell and Herbert Simon spent years developing the concept of artificial intelligence (AI) and later worked with cognitive psychologists on the implications of AI. The effective result was more of a framework conceptualization of mental functions
If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
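This premise can be stated as a quantified conditional. The rendering below is my own formalization of the sentence above, with the predicate names chosen only for readability:

```latex
\exists p \;\forall s \;\bigl(\mathrm{Runs}(s, p) \rightarrow \mathrm{UnderstandsChinese}(s)\bigr)
```

The Chinese room is then a counterexample strategy: Searle in the room is a system $s$ that runs any candidate program $p$, yet the consequent fails, so no such $p$ exists.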
In the future, we may be able to build a computer that is comparable to the human brain, but not until we truly understand one thing. Lewis Thomas talks about this in his essay, "Computers." He says, "It is in our collective behavior that we are most mysterious. We won't be able to construct machines like ourselves until we've understood this, and we're not even close" (Thomas 473). Thomas wrote this essay in 1974, and although we have made many technological advances