After reading both the Star Trek synopsis and chapter 3 of Hasker's textbook "Metaphysics: Constructing a World View," it is clear that the view of the mind-body issue displayed by Picard in the synopsis of Star Trek: The Next Generation is dualism. Maddox is part of Starfleet's scientific research division and comes aboard the starship Enterprise with orders to dismantle Data. In the episode, the conflict between Picard and Maddox stems from two different views of what makes a machine a machine and what makes a human a human. The android Data is Picard's friend; Picard holds that Data is an individual and ought to be able to make choices as a real individual would. In conclusion, I believe that Picard was depicting the dualist view.
A machine may have something like a human brain, but it cannot have a human mind; it cannot have emotions and feelings, and it is void of ethical choices and moral standards. It cannot be considered a human being, because it is a machine. Data has the capacity to make choices because he has a brain; moreover, since the concept of a soul is a truly metaphysical one and its existence is up for debate, the JAG officer was right to abstain from giving any judgment on the issue of the mind and body. Hasker has pointed out that since the properties that distinguish people from ordinary material objects are shared to some degree by other creatures, some of the same questions arise concerning those creatures as arise concerning human beings. Data is treated like, and acts like, a human being in most circumstances. He has formed relationships and endeavors to be more than just his nuts and bolts, very much like a human being. The JAG officer focuses on this matter when she says, "We have been moving around fundamental issues such as: does Data have a soul? I don't know that he has. I don't know that I have." I do believe we have a commitment to really give Data a chance in this matter. Just because these machines aren't human doesn't mean we aren't committed to treating them with the proper respect. I think that A.I. is progressing as a human creation, and we must be the ones responsible for the wellbeing of A.I. in the future and guarantee the ethical
In his writing, “A Contemporary Defense of Dualism,” J.P. Moreland argues that the mind and brain are separate from each other. At first thought, it seems that both are the same. However, the mind deals with ideas, thoughts, and hopes, while the brain is made up of neural processes. Throughout the entire argument, Moreland tries to refute the theory of physicalism, which is the idea that the only things that exist are composed of matter. On the physicalist view, the soul does not exist and the brain controls everything.
In essence, Cartesian Dualism attempts to solve the mind-body problem – that is, what is the relationship between the mind and the body? The answer, according to this theory, is that the mind and the body are two distinctly different substances that constitute each person. Here, “mind” can be described as a nonphysical thing that thinks and “body” as a living physical thing that does not think. The mind can also exist independently of the body, and both can causally affect one another.
An aspect of dualism is presented when Captain Picard mentions that he thinks Data has the option to say no to being disassembled. According to Hasker (1983), “Dualism begins by taking quite seriously the fact that human beings have both physical properties and mental properties” (Hasker, 1983). He continues by observing that this is opposed to other positions, such as idealism and behaviorism, which collapse the two together into one (Hasker, 1983). Dualism is made clearer when Hasker states that “physical properties are properties of the body, while mental properties are properties of the mind” (Hasker, 1983). Picard applies dualism to illustrate that Data is
William Hasker uses multiple categories to list the different views on the mind/body problem in his book Metaphysics: Constructing a World View. Captain Picard, Data’s commanding officer and the defending attorney in Data’s case, would be classified as a materialist. Picard views Data as having rights just like the rest of the humans who work for Starfleet. In the trial Picard says, “We too are machines, just machines of a different type” (Scheerer, 1989). This statement alone shows Picard’s materialist thinking. Materialism does not hold that souls exist; thus, although Data is man-made and would not have a soul, neither would human beings, which makes him their equal. Picard believes that since Data is intelligent, self-aware, and conscious of his existence and actions, Data is sentient.
each other. Both have strong followings of loyal fans that live and breathe these classic
Based on my reading of Hasker (1983), I believe the view of the mind-body problem exhibited by Picard in the synopsis from Star Trek: The Next Generation was the dualism view. Picard stated that “Data has rights, among them the right to refuse to undergo an experimental procedure such as this.” Here, Picard ascribes both physical and mental attributes to the robot (Data). This view correlates with Hasker’s (1983) statement that “Dualism begins by taking quite seriously the fact that human beings have both physical properties and mental properties…” (p. 65). Hasker (1983) further indicates that even though the mind and body are different, they are not detached but continuously interacting (dualist interactionism), which is
Star Wars and Star Trek: two universes set in space. One is set in our galaxy and those close by, the other in a galaxy far, far away. They use different types of weapons and ammunition. One universe uses ships with several crew members, while the other uses one-man ships as well as ships with multiple crew members. The number of races and planets between the two universes is vastly different. Also, one is more popular than the other. Both of these universes are similar in concept, but Star Wars has better weapons, better races, and more creativity in general.
The synopsis of the Star Trek episode “The Measure of a Man” deals with the thought that an android could have physical and mental properties. In order to fully understand or evaluate this, we have to have a clear understanding of the mind/body problem and its proposed solutions. Humans are material objects consisting of physical and mental properties. Examples of physical properties are height, weight, color, shape, and size; mental properties include awareness, consciousness, feeling, thinking, emotion, and the senses. The problem arises because these properties continuously interact, whether intentionally or unintentionally. Hasker discusses several mind/body solutions such as idealism, materialism, behaviorism, dualism, and
Star Wars versus Star Trek. Marvel versus DC. Gandalf versus Dumbledore. Batman versus Superman. Almost everyone who could be asked could pick a side on these topics and their even more narrowly debated parts: Jedi versus Sith, Captain Kirk versus Captain Picard, etc. Superhero movies are almost always a hit every year at the summer box office. Yet superheroes were not always everyone's favorite, nor socially accepted in America. This was originally subject matter that stayed within the confines of a certain social class, the nerds and geeks. Nerd culture rose within mainstream culture and media through a staunch refusal to be put to the side and belittled when it did hit big. It has stayed large and long enough that most people will tell you, of superheroes and space pilots alike, that "all culture is nerd culture now" (Rosenberg, Alyssa).
The current paradigm states that “consciousness is organically based and cannot be emulated by A.I.” (Boss). Indeed, Descartes believed that the conscious mind and body are two separate things and that, to be human, you need both (Matravers). Robot Ash certainly has a body that is separate from the A.I. software, but that software is still not a conscious mind; rather it is, albeit an
Mind-body dualism is usually seen as the central issue in the philosophy of mind. The problem with mind-body dualism is that it is unknown whether the mind really is a separate entity from the human body, as Descartes states in his argument, or whether the mind is the brain itself. Descartes believed that in a person there existed two major components: the physical body and a nonphysical component, which was called the mind or soul. As a scientist, Descartes believed in mechanical theories of matter; however, he was also very religious and did not believe people could merely be mechanical creatures that ran like “clockwork.” And so it was Descartes who argued that the mind directed thoughts. To account for this, he split the world into two parts,
Imagine, for a second, a not-so-distant future produced not by humans but by a dystopian society engineered by humanity's most amoral computational artificial intelligence, built without empathy by its equally emotionless robotic predecessors. Robots that make robots, which make more robots, which could make more robots to divide and diversify. Robots that learn and develop based on their interactions, and robots that respond to a variety of external stimuli. Each robot has the capability to learn and store informational data. This matrix of machines uses the remains of our biological and chemical energies, humans: young, old, babies, adults, and everyone else who could no longer contribute to their robotic overlords, as batteries to power themselves as they systematically replace human life with their robotic and psychopathic need for efficiency. To perfection, for flesh tears and withers, but metal is eternal. But don't worry, these billions of robots have been provided with a manual of the Laws of Robotic Interactions with Humans ... to share.
Another issue brought forward in the movie is whether robots should be given the same rights as humans. The movie shows us that the robots have three laws that they live by, the first being that they must protect humans from any harm. This first law has a few issues, in that sometimes humans do not need to be protected; for example, people who have committed a crime need to be punished, not protected. The second law tells the robot it is to obey every order given unless it violates the first law. Even if the order is unethical, the robot must still obey it. The third law states that the robot must protect itself unless doing so would violate the first two laws. Giving them the same rights as humans would set them free from their laws. Robots cannot function as humans because they lack the ability to have compassion or emotion. Robots do not have the ability to make ethical decisions.
There are different types of artificial intelligence that many institutes and companies believe in. However, they all share the same conception of a self-aware, conscious, human-like computer system, the type of AI people fear. Setting aside that an AI is a computer much like a phone or laptop, one must consider: if something is sentient like a person or animal, should it be treated like one? And if not, is it discrimination or racism when it is treated like an item? Slavery treated people differently based on race or social status, with slaves at the bottom. Some might argue that machines don’t have a “soul” or aren’t alive. There are hundreds of questions and ideas that have no clear
Lately there have been more and more smart machines taking over regular human tasks, and as this trend grows, the bigger picture is that robots will take over many of the tasks now done by people. But many people think there are important ethical and moral issues that have to be dealt with along the way. Sooner or later there is going to be a robot that interacts in a humane manner, but there are many questions to be asked, such as: how will they interact with us? Do we really want machines that are independent, self-directed, and have affect and emotion? I think we do, because they can provide many benefits. Obviously, as with all technologies, there are dangers as well. We need to ensure that people always