(Not) Mere Semantics: A Critique of the Chinese Room
The Roman Stoic Seneca is often quoted as saying that it is the power of the mind to be unconquerable (Seneca, 1969). In recent times, Searle has produced a similar rhetoric, at least insofar as strong AI might ‘conquer’ and reducibly explain mental states. This essay will attempt to do two things: 1) Examine three central objections to Searle’s Chinese Room Argument (CRA), namely the Systems Reply (SR), the Deviant Causal Chain (DCC), and what I have termed the Essence Problem. The CRA is found to survive the first two, while being damaged by the third for its question-begging form. And, 2) it will propose a
The Chinese Room
Searle’s 1980 essay, “Minds, Brains, and Programs”, is
The latter, more specifically, states that thoughts are certain kinds of computation and that, since universal Turing machines can compute any computable function, they can in principle be programmed to instantiate a human mind.
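To make the computationalist claim concrete, here is a minimal sketch (mine, not from any author cited here) of what “running a program” amounts to on a Turing machine: nothing more than rule-driven symbol rewriting. The bit-flipping rule table is a hypothetical example chosen only for brevity.

```python
# Illustrative sketch: a minimal one-tape Turing machine simulator.
# The point is that computation, at bottom, is purely formal symbol
# manipulation -- the very feature Searle's argument targets.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Run a Turing machine. `rules` maps (state, symbol) to
    (new_state, symbol_to_write, head_move), head_move being -1, 0, or +1."""
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Hypothetical rule table: invert every bit, halt on the blank symbol.
FLIP_RULES = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine("1011", FLIP_RULES))  # prints "0100"
```

Note that nowhere in the machine’s operation does the *meaning* of “0” or “1” figure; only shapes and rules do.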
Searle’s argument can be put propositionally as:
1. If Strong AI is true, then there is a program for Chinese such that if any computing system runs that program, that system thereby comes to understand Chinese.
2. I could run a program for Chinese without thereby coming to understand Chinese.
3. Therefore Strong AI is false. (Cole, 2014)
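The schema above is a straightforward modus tollens; rendered in simple notation (the symbols S, R, U are my shorthand, not Cole’s):

```latex
% S : Strong AI is true
% R : I run a program for Chinese
% U : I thereby come to understand Chinese
\begin{align*}
  &\text{(1)}\quad S \rightarrow (R \rightarrow U)
     && \text{(premise 1)}\\
  &\text{(2)}\quad \Diamond(R \wedge \neg U)
     && \text{(premise 2: running without understanding is possible)}\\
  &\text{(3)}\quad \therefore\ \neg S
     && \text{(modus tollens: (2) falsifies the consequent of (1))}
\end{align*}
```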
It should be pointed out, though, that Searle’s precise position has come under scrutiny, and what counts as the ‘success’ of the paper may change depending on how these reading differences are resolved (Harnad, 2001).
• “Weak AI”: the claim that computers merely simulate thinking rather than literally think.
• It would seem that much of the battle over the CRA’s validity turns on differing intuitions about whether semantic content is reducible to syntactic frameworks. (Can computationalism provide a scientific theory which might elucidate the essential nature of content?)
• Systems Reply (Fodor and Block)
• Searle (1991a) deftly produces an argument to block the systems objection, namely that the individual internalise all the elements of the system. So he concludes, “If he doesn’t understand, then there is no way the system could understand because the system is just a part of him.”
could perform multiple tasks effectively from its possessed knowledge without requiring direct programming. This large task domain is important because, with one, an A.I. can excel at multiple tasks, making its uses vastly greater than those of a standard programmed algorithm (Bostrom). But compared to humans, an A.I.’s task domain is minuscule. Humans have a massive task domain: they can perform a copious number of tasks even with limited knowledge of a subject. For example, “a human who can read Chinese characters would likely understand Chinese speech, know something about Chinese culture and even make good recommendations at Chinese restaurants” (LeGassick). Conversely, it would require several completely distinct A.I.s to perform each of those tasks. Researchers still do not fully understand why human brains have such large task domains and are struggling to translate this skill into algorithmic terms. Experts assume A.I. is currently on track to reach human-level intelligence, not just in specific tasks but all around, by 2040-2050 (Bostrom). Another skill
Even with the correct programming, a computer cannot freely think for itself with its own conscious thought. John Searle is a philosopher of mind and language at UC Berkeley. Searle’s Chinese Room Argument is directed against the premise of Strong AI. He argues that even though a computer may have the ability to manipulate syntax (Weak AI), it would not be able to understand the meaning behind the words it is communicating. Semantics convey both intentional and unintentional content in communication, though a computer could be programmed to recognize which words would convey the correct meaning of a symbol. This,
John Searle (1980, in Cooney, 2000) provides a thought experiment, commonly referred to as the Chinese Room Argument (CRA), to show that computers programmed to simulate human cognition are incapable of understanding language. The CRA asks us to consider a scenario in which Searle, who is illiterate in Chinese, finds himself locked in a room with a book containing Chinese characters. Additionally, he has another book with a set of instructions written in English (which he understands) that allows him to match and manipulate the Chinese characters so that he can provide appropriate written responses (in Chinese) to incoming questions, which are also written in Chinese. Moreover, Searle has a pile of blank paper which he uses to jot down his answers. Subsequently, Searle becomes so proficient in providing responses that the quality of his answers matches that of a native Chinese speaker. Thus, Searle in the CR functions as a computer would: he is the system, the books are the program, and the blank paper acts as storage.
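As an illustration (mine, not Searle’s), the room’s procedure can be caricatured as a purely syntactic lookup: the “rule book” pairs input strings with output strings, and the operator matches shapes without any access to meaning. The rule entries below are invented placeholders, not a real program for Chinese.

```python
# Illustrative sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is a lookup table; the operator shape-matches a question
# and copies out the scripted answer, understanding none of it.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # invented placeholder entries
    "你会说中文吗？": "当然会。",
}

def chinese_room(question: str) -> str:
    """Return the scripted response for a question, by shape-matching alone.

    No step here involves the meaning of any symbol -- which is exactly
    Searle's point: syntax, however fluent, is not semantics.
    """
    return RULE_BOOK.get(question, "请再说一遍。")  # default: "please repeat"

print(chinese_room("你好吗？"))
```

From the outside, the answers may be indistinguishable from a native speaker’s; from the inside, only table lookup has occurred.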
Their dissension stems not from one being fundamentally right or wrong, but from different assumptions. Kim accuses Searle of “Causal Over-Determination”: he sees Searle as claiming that not only does m(F) cause m(G), but also that F causes G in an equally real way. Since m(G) is the true cause of G, the F-to-G causation must be illusory. Searle could likewise accuse Kim of “Causal Over-Distinction,” arguing that m(F) is indistinguishable from F, and that the two together act as one cause of [m(G)+G].
This, he argues, shows that Alan Turing is wrong about machine intelligence, because a machine cannot truly think -- it is just programmed to simulate the knowledge it is given. Another example of Searle’s convincing argument is: “All the same, he understands nothing of Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.” Searle clarifies that even if the person memorizes the instruction book, does all the calculations in his head, and works outdoors, he will still not understand Chinese. This convinces me because what Searle is showing is that the person would not understand Chinese, because he would not get the meaning of
A prominent question that has popped up in the new ‘digital age’ is the debate on whether a machine can think. There have been many books, articles, and speeches on this topic that have either been for it or against it. Paul and Patricia Churchland, both professors of neurophilosophy at the University of California, San Diego (UCSD), have made their voices heard on whether machines can think or not. Their answer to this question was that a machine could think and they had many arguments to back them up. Their arguments are as follows: the Turing test, the Luminous Room argument, and the Brain Simulator Reply.
Clark and Chalmers begin with a case to illustrate why the mind is extended whereby a person has the option to use their mind (a), use a physical computational aid (b), or a futuristic
John Searle formulated the Chinese Room Argument in the early 1980s as an attempt to prove that computers are not cognitive operating systems. In short, though the emergence of artificial and computational systems has rapidly expanded the possibilities of knowledge, Searle uses the Chinese Room Argument to show that computers are not cognitively independent.
Many are disconcerted by the idea that humans and Minds can be described as systems which operate based on interpretations of symbols, much like machines, computers, and robots: things that we have created yet do not think of as “thinking” themselves. We, as human beings, are comforted in the notion that we are born into this world with a fully capable Mind, a soul or spirit, and are thereafter free to choose our fate as we will. Although it seems plausible that we are born with Mind, I cannot subscribe to such a simplistic version of thinking about our true capacity for affecting outcomes.
The power of the mind plays an important role in reflecting our ideas of certain things, which can be supported from “a purely mental
This paper will make critiques of arguments made by Fred Adams and Kenneth Aizawa in their article The Bounds of Cognition, as well as Sean Allen-Hermanson’s Superdupersizing the Mind: Extended Cognition as the Persistence of Cognitive Bloat. The purpose of this paper will be to address a few of the attacks in defence of Clark and Chalmers’ extended mind theory (EMT) by critiquing each author’s respective arguments.
In his paper “Computing Machinery and Intelligence,” Alan Turing sets out to answer the question of whether machines can think in the same way humans can by recasting the question in concrete terms. In simple terms, Turing redefines the question by asking whether a machine can replicate the cognition of a human being. Yet some, such as John Searle, may object to the notion that Turing’s new question effectively captures the nature of machines’ capacity for thought or consciousness. In his Chinese Room thought experiment, Searle outlines a scenario implying that machines’ apparent replication of human cognition does not yield conscious understanding. While Searle’s thought experiment demonstrates that a Turing test is not sufficient to establish that a machine can possess consciousness or thought, it does not prove that machines are absolutely incapable of consciousness or thought. Rather, given the ongoing uncertainty of the debate regarding the intelligence of machines, there can be no means to confirm or disconfirm the conscious experience of machines, nor, by extension of the same principle, the consciousness of humans.
Rene Descartes’ “Discourse on the Method” focuses on distinguishing human rationality from that of animals and machines. He does so by explaining how neither animals nor machines possess the same mental faculties as humans. Descartes distinguishes human rationality from that of non-humans even though he agrees the two closely resemble each other in their sense organs and physical functions (Descartes, pp22). Nevertheless, the mechanical lacks a necessary aspect of the mind, which consequently separates it from humans. In “Discourse on the Method,” Descartes argues that the noteworthy difference between humans and the mechanical is that machines only respond to the world through their sense organs, whereas humans possess the significant faculty of reasoning, which allows them to understand external inputs and information obtained from the surrounding environment. This creates a dividing ‘line’ separating humans from non-humans. In this paper, I will first distinguish the differences between human and mechanical mentality with regard to Descartes’ “Discourse on the Method”. Secondly, I will theorize a modern AI that could possess the concept of an intellectual mind, and then hypothesize a powerful AI that lacks the ability to understand its intelligence. Lastly, I will explain why I disagree that there can be no such machine equivalent to the human mind. For humans don’t possess all the
Substantial study has been made of the subject and of Turing’s overly optimistic point of view; yet we experience difficulty when trying to combine the idea of advancement in technology with what makes us human: the capability of thinking. Conventionally, we have firmly held to the idea that the act of thinking is the official stamp of authenticity which differentiates humans from the rest of beings, and so while trying to decide whether a computer can think, we are closely scrutinizing the foundation of our nature as beings to its core. But before we dive into why I disagree with Turing, we must ask what exactly thinking is. Some have tried to define thinking as having conscious thoughts; but thinking and consciousness are not interchangeable terms. While thinking is a state of consciousness, consciousness is not thinking. Even as we process information necessary for reasoning, much of our brain activity and processing takes
Artificial intelligence, or AI, is a field of computer science that attempts to simulate characteristics of human intelligence or senses. These include learning, reasoning, and adapting. This field studies the designs of intelligent