Kant, a central figure in philosophy and ethics, “argued that morality must ultimately be grounded in the concept of duty, or obligations that humans have to one another, and never in the consequences of human actions” (Tavani, 47). This argument serves as the foundation for deontological ethics, which holds that morality takes the form of duties: humans have a moral duty to do what is right and a moral duty to refrain from doing wrong. Looking at Robot & Frank with the imagined knowledge that Robot perhaps has deontological ethics ingrained in its programming is important because it shows some of the issues that would appear if we use deontological ethics as the base of our future robots’ ethical reasoning.
Instead of prescribing behavior out of duty, virtue-based ethics gives people (or robots) the tools and knowledge to live a virtuous life, which they are then expected to live. Virtue-based ethics sheds light on the importance of value alignment in artificially intelligent machines and the potential dangers that can arise if the wrong values are instilled in future robots. One of the most interesting aspects of Robot is that it seems to live in a completely different ethical world, which is why Robot and Frank get along so well. Like Frank, Robot does not subscribe to any set of social rules or ethical values; it devotes all of its time and energy to its goal of bettering Frank. Robot simply makes choices based on which decisions align with its morals (bettering Frank), even if it hurts other humans along the way. This is a major obstacle that will become more prevalent as the question of which artificial moral agents to implement in robotic machines leaves science fiction and enters reality. Humans, with regard to virtue-based ethics, predominantly focus their attention on their own (and others’)
In the movie, robots are seen as gifts that exist for the common good of society. Del, however, does not perceive robots as saints that aid lazy humans with their undesirable tasks; to him they are senseless, cold, emotionless structures, the creation of man’s absurd idea that a mere piece of metal can replace the beating, living heart of a
In his 2011 Chronicle Review article “Programmed for Love,” Jeffrey R. Young interviews Professor Sherry Turkle about her experience with what she calls “sociable robots.” Turkle has spent 15 years studying robotics and its emergence into society. After extensive research and experimentation with the robots, she believes they will soon be programmed to perform specific tasks that a human would normally do. While this may seem like a positive step forward to some, Turkle fears the worst. The article states that she finds this concept “demeaning, ‘transgressive,’ and damaging to our collective sense of humanity” (Young, par. 5). She attributes this to her personal and professional experience with the robots. Turkle and her
Not only that, these sociable robots inadvertently change the way we view the reality around us. In today’s society, what was once taboo, like talking to an inanimate object, is now acceptable because of new technology. Even the
In “Death by Robot,” Robin Henig discusses what goes into robots’ decision making and the types of decisions a robot will have to make, including the difficult ones. For one, she describes the algorithm that takes effect when a robot is in a sticky situation. For example, when a patient asks the robot for medicine, the robot has to check with its supervisor, but the supervisor is not reachable. The robot is then caught in a “hypothetical dilemma”: it is commanded to keep its patient pain-free, but only if it can get permission from the supervisor to give the patient medicine. Henig also discusses what experts in the emerging field of robot morality are doing so that robots are able to
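The permission check in the dilemma above can be sketched as a simple rule. This is only an illustrative sketch: the function name and action strings below are invented for the example, not drawn from the article.

```python
# Hypothetical sketch of the caregiving robot's dilemma: the robot's goal
# is a pain-free patient, but giving medicine requires supervisor approval.
def handle_medicine_request(supervisor_reachable: bool,
                            permission_granted: bool) -> str:
    """Return the action a rule-following robot would take."""
    if not supervisor_reachable:
        # The dilemma: the goal (relieve pain) conflicts with the
        # constraint (permission required), so a naive rule-follower stalls.
        return "wait and keep trying to reach the supervisor"
    if permission_granted:
        return "give medicine"
    return "withhold medicine"
```

The sketch makes the conflict explicit: when the supervisor is unreachable, neither branch of the robot's instructions tells it what to do, so it can only stall.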
“Just as the sun will rise tomorrow morning, so too will robots in our society.” Frank Mullin aptly captures the growing role of robot pets worldwide. Robot pets are the adorable synthetic toys that warm the hearts of thousands with their almost life-like movements. Once just a thought and a dream, robot pets now grace the shelves of department stores. Along with their wide popularity comes a question: should robotic pets replace real pets? They interact differently, and are frankly just programmed to do what one sees. Adopting robotic pets deprives people of the interactions they experience with real pets and does not nourish responsibility. For now, robotic pets should be left on the shelves because they will never provide
Robot and Frank is an unpredictable film about Frank, a retired man. The movie is set in the near future in New York, where everyone uses technology and robots to help with simple day-to-day tasks. It opens with Frank trying to steal from his own home until he sees a picture of himself with his two children, Hunter and Madison. Hunter and Madison are grown-ups living on their own, but they are always trying to check up on Frank. Frank, on the other hand, unintentionally causes all sorts of trouble for his children due to his failing memory. So his son, Hunter, presents him with a robot butler. At first, Frank is irritated with his new robot servant, but over time he realizes that the robot is actually worthwhile: it cleans the house, cooks, keeps a set schedule for Frank, wakes him up in the morning, tries to build his interest in gardening, and walks around town with him. Frank's cognitive condition even improves as the robot keeps him busy. Frank, it turns out, is a semi-retired burglar, and he later trains his robot to help him with new burglaries. The two keep their planning secret until Frank is almost caught and, unfortunately, has to erase the robot's memory. The movie ends with Frank living in a nursing home, where, at this point, he is unable to recognize Hunter. When his family comes to visit, he leaves Hunter a note saying, "Check the robot's garden under the tomatoes. Have fun kids!", indirectly implying that he hasn't forgotten
Isaac Asimov was an author best known for his popular science fiction novels and short stories, works particularly focused on hard science fiction, meaning stories that are realistic and consistent with the science of their day. In the process of writing these stories he devised “The Three Laws of Robotics,” which state: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
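The Three Laws form a strict priority ordering, which can be sketched in a few lines of code. This is purely an illustration under invented assumptions: the predicate functions are hypothetical stand-ins, since Asimov's stories never specify how a robot would actually compute "harm."

```python
# Sketch of the Three Laws as a strict priority ordering.
# harms_human and endangers_robot are hypothetical predicates supplied
# by the caller; nothing here comes from Asimov's actual texts.
def should_obey(order: str, harms_human, endangers_robot) -> bool:
    """Decide whether a robot may carry out an order under the Three Laws."""
    if harms_human(order):
        return False  # First Law overrides any order from a human
    # Second Law: obey human orders. The Third Law (self-preservation)
    # is subordinate to the Second, so danger to the robot alone is
    # deliberately ignored here and cannot justify refusal.
    return True
```

For instance, an order that endangers only the robot itself would still be obeyed, because self-preservation ranks last in the hierarchy.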
Humans develop ethical abilities, called virtues, through training; we often absorb ethical behavior from being around our families and communities. Through virtues we learn how to be generous, courageous, honest, cheerful, and cooperative. These virtues come from everyday living conditions as well as from different social settings. Ethics also teaches us that the habits we have embraced can help us excel in everyday life. “However, PharmaCARE’s virtue ethics with the Colberians were wrong because even though the executive managers own the native land, they should have treated the people with some dignity and respect” (Halbert & Ingulli, 2012).
Moral Dilemmas and Technology

In the short essay “Here’s a Terrible Idea: Robot Cars with Adjustable Ethics Settings,” writer Patrick Lin lays out the inherent issues with the possible future scenario of allowing the user of a self-driving car to decide how the car should handle crash-avoidance situations. The user could choose to ensure the safety of their own life over anyone else’s, to ensure that the majority of people involved leave the situation unharmed, to ensure that the least amount of legal fees possible is incurred, and infinitely many more possibilities. Crucial ethical dilemmas like these will inevitably keep entering society’s consciousness as the pace of technological development continues to increase.
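The "adjustable ethics setting" Lin criticizes can be imagined as a user-chosen policy that ranks crash maneuvers. The sketch below is hypothetical: the policy names, field names, and scoring scheme are all invented for illustration and do not come from Lin's essay.

```python
from enum import Enum

class EthicsSetting(Enum):
    """Hypothetical owner-selectable crash policies."""
    PROTECT_OWNER = "protect_owner"        # minimize risk to the car's user
    MINIMIZE_TOTAL_HARM = "minimize_harm"  # fewest people injured overall
    MINIMIZE_LIABILITY = "minimize_fees"   # lowest expected legal cost

def choose_maneuver(setting: EthicsSetting, maneuvers: list) -> dict:
    """Pick the crash-avoidance maneuver that best fits the chosen policy.

    Each maneuver is a dict of predicted outcomes; lower scores are better.
    """
    keys = {
        EthicsSetting.PROTECT_OWNER: lambda m: m["owner_injury_risk"],
        EthicsSetting.MINIMIZE_TOTAL_HARM: lambda m: m["total_injuries"],
        EthicsSetting.MINIMIZE_LIABILITY: lambda m: m["expected_legal_cost"],
    }
    return min(maneuvers, key=keys[setting])
```

The unsettling part, and Lin's point, is that the same situation yields different victims depending on a dial the owner turned before the trip began.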
In William Lycan’s reading “Robots and Minds,” he discusses the debate over the moral rights of robots. His claim is that if robots are complex enough, we should consider them to have minds and moral rights, even if we never know for sure. He states that, with all the knowledge we have today, there is no compelling reason to believe that these machines don’t have consciousness. He also brings up two hypothetical examples, Harry and Henrietta. In this paper, I will explain why I believe that both robots should be given the same rights as humans if they are made with this complexity.
Imagine, for a second, a not-so-distant future shaped not by humans but by a dystopian society engineered by humanity's most amoral artificial intelligences, built without empathy by their equally emotionless robotic predecessors. Robots that make robots, which make more robots, which could make more robots to divide and diversify. Robots that learn and develop based on their interactions, and robots that respond to a variety of external stimuli. Each robot has the capability to learn and store information. This matrix of machines uses the remains of our biological and chemical energies, humans (young, old, babies, adults, and everyone else who could no longer contribute to their robotic overlords) as batteries to power themselves as they systematically replace human life with their robotic and psychopathic need for efficiency. To perfection, for flesh tears and withers, but metal is eternal. But don't worry: these billions of robots have been provided with a manual of the Laws of Robotic Interactions with Humans ... to share.
Virtue: when I hear that word, I think of value and morality, and that only good people can be virtuous. When I hear the word ethics, I think of good versus evil, of wrong and right. Now, when the two are put together, you get virtue ethics. You may wonder what virtue ethics could possibly mean; it may seem like just two words put together to form some type of fancy theory. This paper will discuss virtue ethics and the philosophy behind it.
Lately, more and more smart machines have been taking over regular human tasks, and as this trend grows, the bigger picture is that robots will take over many tasks now done by people. Many people, however, think there are important ethical and moral issues that have to be dealt with along the way. Sooner or later there will be a robot that interacts in a humane manner, but many questions need to be asked: How will robots interact with us? Do we really want machines that are independent and self-directed, and that have affect and emotion? I think we do, because they can provide many benefits. Obviously, as with all technologies, there are dangers as well. We need to ensure that people always
After watching the movie I, Robot, I find that many ethical issues arise from the technology shown in it. The movie takes place in 2035 and is about robots programmed with Three Laws: First Law, a robot must never harm a human being or, through inaction, allow any harm to come to a human; Second Law, a robot must obey the orders given to it by human beings, except where such orders violate the First Law; Third Law, a robot must protect its own existence unless this violates the First or Second Laws. Humans use these robots to do common tasks for them. Some of the ethical questions raised by this movie include: do robots have the ability to make emotional or ethical decisions, and are they entitled to the same rights as
Based on the trailer for I, Robot (2004), the people represent the norm while the robots represent the Other. The people assert superiority over the robots, which were designed to serve as helpful servants to be trusted with the world. Because robots are believed not to be human, they are expected to be programmed for certain duties and jobs. The term basic repression indicates that our individual nature makes us human, while surplus repression signifies how androids are created to perform the tasks assigned by humans. To say it another way, human beings are normal to society, while robots are depicted as unnatural objects of otherness. As the trailer develops, the androids that we expected to take on demanding roles start to disobey the world. Just like in the plot of Ridley Scott’s cult-classic film Blade Runner (1982), robots always seem to escape human authority's control. As the robots in the trailer begin to rebel against their makers, we can infer that the humans must resolve this conflict through extermination. In Scott’s film, the androids, known as replicants, function as pleasure models, protectors, or obedient workers controlled by humans until they start to question their purpose in life. The similarity between these two movies delivers the notion that the Other does not want to be restrained; as they develop throughout each movie, they embrace the belief that in order to live like humans, they must