While proponents of strong AI believe that machines can imitate human consciousness so well that they might be perceived as actually possessing it, others such as John Searle believe otherwise. Another logical stance on the question is to accept that machines can behave physically as though they have a conscious mind, while arguing that machines will never be able to possess that consciousness as a human does. This second viewpoint is known as weak artificial intelligence, or narrow artificial intelligence: weak AI acknowledges that machines can simulate consciousness computationally, but holds that simulating consciousness is different from actually possessing it.
One of the most widely known arguments against this strong-AI position is John Searle’s Chinese Room Argument.
Searle believed that merely manipulating symbols does not ensure that computers are able to think or understand; in other words, knowing syntax does not mean understanding semantics, and performing a function does not mean understanding it.
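To make the syntax-versus-semantics point concrete, here is a minimal sketch (a toy grammar over invented nonsense tokens, not an example of Searle’s own): a program that fully “knows” the syntax of a tiny language, producing only well-formed strings, while attaching no meaning to any symbol it manipulates.

```python
import random

# A toy context-free grammar over meaningless tokens. The program "knows"
# the syntax (which strings are well-formed), but no symbol means anything:
# the tokens could be swapped for any others without changing the program.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["blorf"], ["zindle"]],
    "VP": [["grems", "NP"], ["flubs"]],
}

def generate(symbol="S"):
    """Expand a nonterminal into a randomly chosen well-formed string."""
    if symbol not in GRAMMAR:              # terminal token: emit as-is
        return [symbol]
    out = []
    for part in random.choice(GRAMMAR[symbol]):
        out.extend(generate(part))
    return out

print(" ".join(generate()))  # e.g. "zindle grems blorf": grammatical, meaningless
```

The program can generate grammatical output indefinitely, yet nothing in it represents what any token means, which is precisely the gap Searle points to.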
There are, of course, many criticisms of Searle’s Chinese Room Argument; the main ones include the Systems Reply, the Robot Reply, the Brain Simulator Reply, the Other Minds Reply, and the Intuition Reply.
In Searle’s Chinese Room Argument, the person inside the room is stipulated not to understand Chinese. In response, the Systems Reply makes the point that while the person alone might not know any Chinese, the system as a whole understands Chinese. The Virtual Mind Reply is similar to the Systems Reply in holding that the person inside the room need not understand Chinese by him- or herself, but it asks instead whether running the system creates understanding: even if the person in the room knows no Chinese at the outset, running the program might create an agent that does understand Chinese. The Robot Reply accepts that the person in the room, or a computer running the same program, does not understand the language, but suggests that “giving a computer a body” would change the situation: being able to interact with the environment through sensors might enable the computer to learn. The Brain Simulator Reply asks us to consider a program that simulates the actual sequence of neuron firings in the brain of a native Chinese speaker.
Even with the correct programming, a computer cannot freely think for itself with its own conscious thought. John Searle is a philosopher of mind and language at UC Berkeley, and his Chinese Room Argument is directed against the premise of strong AI. He argues that even though a computer may have the ability to manipulate syntax (weak AI), it would not be able to understand the meaning behind the words it is communicating. Semantics conveys both intentional and unintentional content in communication, and though a computer could be programmed to recognize which words would convey the correct meaning of a symbol, this,
In an effort to prove that computers will never evolve into systems similar to humans, Fish (2011) also presents an illogical argument that man will never be able to create a machine comparable to mankind. This is a paradox in itself, since he tries to prove mankind’s superiority by assuming a limit on the scope of man’s intelligence in creating complex machines with cognitive abilities.
The documentary Aliens of the Deep Sea presented us with six different experiments aimed at studying different aspects of octopuses’ intelligence. I will focus on just one of those experiments and attempt to apply Jackendoff’s First Fundamental Argument, which holds that a language user’s mind can be viewed as an internal computational system containing an unconscious set of rules.
In addition, Searle would say that Kim is making a fundamental mistake in thinking of mental states as being caused by physical states (related temporally) rather than existing simultaneously with them. This limits Kim’s thinking, since two events related causally in this way cannot, by definition, be the same event. Searle suggests that we allow a sort of “permanent causation,” by which a molecular structure doesn’t cause hardness but rather is hardness.
The reply states that a computer can only derive semantics from syntax if given enough connections to assist with the derivation. However, since the reply is essentially an incomplete argument against Searle’s argument, it is not a very devastating criticism. If the Chinese Room were put inside the robot and I were given all of this syntactic information about the world, I still would not know what the symbols mean, regardless of what the robot is actually doing, because I am only shuffling symbols; I am unable to gain any meaning from them (Cole, 4.2). The actions of the robot do not prove that the computer operating it is thinking; they only mean that it is still able to run through its own
The main conclusion is that a computer may be able to produce a language but is not able to genuinely understand that language. There have been many skeptics with different points of view about this theory, as well as many who support the argument. Searle states, “The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have.” (Cole,
The Chinese Room is a thought experiment, originally proposed by John Searle and popularized by Roger Penrose, which attempts to counter the validity of the Turing Test and the belief that a machine can come to think. Searle confronts the analogy between mind and computer when it comes to addressing the issue of consciousness: the mind involves not only the manipulation of symbols (a syntax) but also their meaning (a semantics). In his text Minds, Brains, and Programs, Searle attacks this analogy, and with the Chinese Room experiment he showed how a machine can perform an action without understanding what it does or why it does it. Therefore, according to Searle, the logic used by computers is nothing more than one that
John Searle (1980, in Cooney, 2000) provides a thought experiment, commonly referred to as the Chinese Room Argument (CRA), to show that computers, programmed to simulate human cognition, are incapable of understanding language. The CRA asks us to consider a scenario in which Searle, who is illiterate in Chinese, finds himself locked in a room with a book containing Chinese characters. Additionally, he has another book with a set of instructions written in English (which he understands) that allows him to match and manipulate the Chinese characters so that he can provide appropriate written responses (in Chinese) to incoming questions, which are also written in Chinese. Moreover, Searle has a pile of blank paper which he uses to jot down his answers. Eventually, Searle becomes so proficient at providing responses that the quality of his answers matches that of a native Chinese speaker. Thus, Searle in the CR functions as a computer would: he is the system, the books are the program, and the blank paper acts as storage.
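Read as a machine, that mapping can be sketched in a few lines of Python. This is only a hypothetical toy, with two invented question-and-answer rules standing in for the instruction book; it illustrates that every component of the mapping can be present while nothing in the system represents the meaning of any character.

```python
# Toy Chinese Room with invented rules, for illustration only.
# Mapping from the essay: operator = processor, English rule book = program,
# blank paper = storage.

RULE_BOOK = {                        # the "program": pure shape-matching rules
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

class ChineseRoom:
    def __init__(self, rule_book):
        self.rule_book = rule_book
        self.scratch_paper = []      # the "storage": answers jotted down

    def respond(self, question):
        # The operator matches the question's shape against the book and
        # copies out the listed reply; no meaning is represented anywhere.
        answer = self.rule_book.get(question, "？")
        self.scratch_paper.append(answer)
        return answer

room = ChineseRoom(RULE_BOOK)
print(room.respond("你好吗？"))      # fluent-looking output, zero understanding
```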
In particular, in 1974, Ned Block created the China Brain thought experiment to criticize functionalism and variable realizability. Suppose that the entire population of China were ordered to simulate the workings of a single brain, with each person acting as a neuron. To simulate neuron firing, each individual is handed a two-way radio and unique instructions on whom to contact depending on who called him or her. All of this would then be connected to a body, which would provide the sensory input and express the behavioral output for the China “Brain”. Block argues that since this brain has sensory input, behavioral output, and internal mental states, the China Brain would be conscious on a functionalist description of the mind, since it contains all of the required elements. However, Block thinks it would be absurd for the China Brain to have any sort of experiences at all; it just doesn’t seem intuitively correct for this being to have consciousness or be able to experience qualia. Thus, Block argues that functionalism and variable realizability are
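A minimal sketch of Block’s setup, with invented contacts and firing thresholds: each citizen follows only a local rule (radio certain people once enough calls have come in), just as a neuron fires when its inputs cross a threshold. The simulation runs and produces behavior, which is exactly Block’s point: nothing in code like this settles whether the system experiences anything.

```python
from collections import defaultdict

# Invented toy network: whom each "citizen" radios when they fire,
# and how many incoming calls it takes to make them fire.
CONTACTS = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
THRESHOLD = {"A": 0, "B": 0, "C": 2, "D": 1}

def run(initially_firing):
    """Propagate 'firing' through the population, one round at a time."""
    calls = defaultdict(int)        # calls received so far by each citizen
    fired = set()
    frontier = set(initially_firing)
    while frontier:
        fired |= frontier           # everyone in the frontier fires now
        nxt = set()
        for person in frontier:
            for target in CONTACTS[person]:
                calls[target] += 1  # a radio call arrives
                if calls[target] >= THRESHOLD[target] and target not in fired:
                    nxt.add(target)
        frontier = nxt - fired
    return fired

print(run({"A", "B"}))  # all four citizens eventually "fire": behavior, not qualia
```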
John Searle developed two areas of thought concerning the independent cognition of computers: the definitions of weak AI and strong AI. In essence, these two types of AI have fundamental differences. Weak AI was defined as comprising systems that simply simulate the human mind, in contrast to AI systems that were characterized as an
Substantial study has been devoted to the subject and to Turing’s overly optimistic point of view; yet we experience difficulty when trying to reconcile the idea of technological advancement with what makes us human: the capability of thinking. Conventionally, we have firmly held to the idea that the act of thinking is the official stamp of authenticity that differentiates humans from the rest of beings, and so in trying to decide whether a computer can think or not, we are closely scrutinizing the foundation of our nature as beings to its core. But before we dive into the subject matter of why I disagree with Turing, we must ask what exactly thinking is. Some have tried to define thinking as having conscious thoughts; but thinking and consciousness are not mutually interchangeable terms. While thinking is a state of consciousness, consciousness is not thinking. Even as we process information necessary for reasoning, much of our brain activity and processing takes
In his paper “Computing Machinery and Intelligence,” Alan Turing sets out to answer the question of whether machines can think in the same way humans can by recasting the question in concrete terms. In simple terms, Turing redefines the question by asking whether a machine can replicate the cognition of a human being. Yet some, such as John Searle, may object to the notion that Turing’s new question effectively captures the nature of machines’ capacity for thought or consciousness. In his Chinese Room thought experiment, Searle outlines a scenario implying that machines’ apparent replication of human cognition does not yield conscious understanding. While Searle’s thought experiment demonstrates that a Turing test is not sufficient to establish that a machine can possess consciousness or thought, this argument does not prove that machines are absolutely incapable of consciousness or thought. Rather, given the ongoing uncertainty of the debate regarding the intelligence of machines, there can be no means to confirm or disconfirm the conscious experience of machines, nor, by extension of the same principle, the consciousness of humans.
no, a machine could not be conscious. I propose that those who argue the “yes” case, that a machine
The Representational Theory of Mind proposes that we, as both physiological and mental beings, are systems which operate on symbols and on interpretations of the meanings of those symbols, rather than beings which operate on physiological processes alone (chemical reactions and biological processes). It offers that humans and their minds are computing machines: mental software (the mind) running on physical hardware (the body). It suggests, too, that we are computing machines that function as something other than a bare computing machine, just as any other machine running a program does.
Developments in computer science would lead to parallels being drawn between human thought and the computational functionality of computers, opening entirely new areas of psychological thought. Allen Newell and Herbert Simon spent years developing the concept of artificial intelligence (AI) and later worked with cognitive psychologists on the implications of AI. The effective result was more of a framework conceptualization of mental functions with