Kant, a central figure in the world of philosophy and ethics, “argued that morality must ultimately be grounded in the concept of duty, or obligations that humans have to one another, and never in the consequences of human actions” (Tavani, 47). This argument serves as the foundation for deontological ethics, which holds that morality comes in the form of duties: humans have the moral duty to do right things and the moral duty not to do bad things. Looking at Robot & Frank with the imagined premise that Robot has deontological ethics ingrained in its programming is important because it shows some of the issues that would appear if we used deontological ethics as the base of our future robots’ ethical reasoning.
Instead of prescribing behavior out of duty, virtue-based ethics gives people (or robots) the tools and knowledge to live the virtuous life they are expected to live. Virtue-based ethics highlights the importance of value alignment in artificially intelligent machines and the potential dangers that can arise if the wrong values are instilled in future robots.
One of the most interesting aspects of Robot is that it seems to live in a completely different ethical world, which is why Robot and Frank get along so well. Like Frank, Robot does not subscribe to any set of social rules or ethical values; it devotes all of its time and energy to its goal of bettering Frank. Robot simply makes whichever choices align with its morals (bettering Frank), even if it hurts other humans along the way. This obstacle will become more prevalent as the question of which artificial moral agents to implement in robotic machines leaves science fiction and enters reality.
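One way to picture Robot's decision-making is the rough sketch below (my own illustration, not from the film; the scoring function, action names, and numbers are invented). The agent ranks candidate actions purely by benefit to Frank, so harm to anyone else never enters the decision at all:

    # Hypothetical sketch (names and numbers invented): an agent that
    # scores candidate actions only by benefit to Frank, so harm to
    # other humans never enters the decision.

    def choose_action(candidate_actions):
        # Each action is (name, benefit_to_frank, harm_to_others);
        # the harm term is simply ignored when choosing.
        return max(candidate_actions, key=lambda a: a[1])

    actions = [
        ("take Frank on a walk", 3, 0),
        ("help Frank plan a heist", 9, 7),  # great for Frank, bad for others
    ]
    print(choose_action(actions)[0])  # prints "help Frank plan a heist"

This is exactly the value-alignment failure the previous paragraph warns about: nothing in the objective says the third number matters.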
With regard to virtue-based ethics, humans predominantly focus their attention on their own (and others’)
In his 2011 Chronicle Review article “Programmed for Love,” Jeffrey R. Young interviews Professor Sherry Turkle about her experience with what she calls “sociable robots.” Turkle has spent 15 years studying robotics and its emergence into society. After extensive research and experimentation with the robots, she believes that they will soon be programmed to perform specific tasks that a human would normally do. While this may seem like a positive step forward to some people, Turkle fears the worst. The article states that she finds this concept “demeaning, ‘transgressive,’ and damaging to our collective sense of humanity” (Young, par. 5). She attributes this to her personal and professional experience with the robots. Turkle and her
Not only that, these sociable robots inadvertently change the way we view the reality around us. In today’s society, what was once taboo, like talking to an inanimate object, is now acceptable because of new technology. Even the
In “Death by Robot,” Robin Henig talks about what goes into the decision making of robots and the types of decisions a robot will have to make, including the difficult ones. For one, she describes the algorithm that goes into effect when a robot is in a sticky situation. For example, when a patient asks the robot for medicine, the robot has to check with the supervisor, but the supervisor is not reachable. The robot is then in a “hypothetical dilemma”: it is commanded to keep its patient pain-free, but only if it can get the supervisor’s permission to give the patient medicine. Henig also talks about what experts in the emerging field of robot morality are doing so that robots are able to
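The dilemma can be made concrete with a minimal sketch (my own, not from the article; the function names and return convention are invented). The robot may only administer medicine with supervisor approval, so an unreachable supervisor leaves both of its duties unsatisfied:

    # Hypothetical sketch of the caregiving dilemma described above:
    # the robot must keep the patient pain-free, but may only give
    # medicine with the supervisor's permission.

    def contact_supervisor():
        # Stub: True = approved, False = refused, None = unreachable.
        return None  # simulate the unreachable supervisor

    def handle_medication_request(patient_in_pain):
        if not patient_in_pain:
            return "no action needed"
        approval = contact_supervisor()
        if approval is True:
            return "administer medicine"  # both duties satisfied
        if approval is False:
            return "withhold medicine"    # permission explicitly denied
        # The dilemma: the duty to relieve pain and the duty to obtain
        # permission cannot both be met, and the rules give no way out.
        return "dilemma: the patient suffers while the robot waits"

    print(handle_medication_request(True))

The point of Henig’s example is visible in the last branch: the rule set simply has no answer for the case it did not anticipate.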
“Just as the sun will rise tomorrow morning, so too will robots in our society.” With this line, Frank Mullin captures the growing role of robot pets worldwide. Robot pets are the adorable synthetic toys that warm the hearts of thousands with their almost life-like movements. Once just a thought and a dream, robot pets now grace the shelves of department stores. Along with their wide popularity comes a question: should robotic pets replace real pets? They interact differently and are, frankly, just programmed to do what one sees. Allowing robotic pets deprives people of the interactions they experience with real pets and does not nourish responsibility. For now, robotic pets should be left on the shelves because they will never provide
In William Lycan’s essay “Robots and Minds,” he discusses the debate over the moral rights of robots. His claim is that if robots are complex enough, we should consider them to have minds and moral rights, even if we never know for sure. He states that, with all of the knowledge we have today, there is no compelling reason to believe that these machines don’t have consciousness. He also brings up two hypothetical examples, Harry and Henrietta. In this paper, I will explain why I believe that both robots should be given the same rights as humans if they are made with this complexity.
Despite all they have done for the world, robots have a unique and extensive history of villainization. There will be many opportunities for them in the future to either make or break society. Popular theories favor a robot war, but many of the possible realities involve a much more passive takeover. Overall, robots are an important subject to be educated about in this changing world. Simply understanding the implications of artificial intelligence can completely change its impact. Robots will be a part of the future, whether for the good of humans, or to their
In his short stories from The Complete Robot, Asimov intentionally reveals loopholes, ambiguities and inadequacies within the three laws of robotics. In this essay I will analyse how ‘Liar!’, ‘Satisfaction Guaranteed’ and ‘Sally’ reveal weaknesses within the three laws of robotics, and why Isaac Asimov sabotages his own safeguards.
Humans develop ethical abilities, called virtues, through training; ethical behavior is often learned from being around families and communities. Through virtues we learn how to be generous, courageous, honest, cheerful, and cooperative. These virtues come from everyday living conditions as well as from different social settings. We also learn from ethics that the habits we have embraced can help us excel in everyday life. “However, PharmaCARE’s virtue ethics with the Colberians were wrong because even though the executive managers own the native land, they should have treated the people with some dignity and respect” (Halbert & Ingulli, 2012).
In the movie, robots are seen as gifts that exist for the common good of society. Del, however, does not perceive robots as saints that aid lazy humans with their undesirable tasks; to him, they are senseless, cold, emotionless structures born of man’s absurd idea that a mere piece of metal can replace the beating, living heart of a
AI has the potential to change the way we live, for better or for worse. “Terminator,” “I, Robot,” and “2001: A Space Odyssey” are examples of Hollywood films where artificial intelligence runs amok, resulting in a post-apocalyptic future for humanity. Experts rated “2001: A Space Odyssey” 9 out of 10 on realism because HAL, the supposed antagonist, never strayed from its programming and killed its crewmates to achieve its goals. HAL was not motivated by survival instinct or emotions but simply by instructions from its creator. The film’s message is that human morality is not a requirement for artificial intelligence. The perception of AI did not change much until the 1977 sci-fi film “Star Wars” was released. Suddenly, robots were the “good guys.” C-3PO is a perfect example of a friendly robot and quickly became recognized as one of the kindest robots in the history of movie robots. Hollywood films have done a good job of showing that it is up to us to determine whether AI works to benefit humanity or helps in its destruction.
R2-D2 from Star Wars is a robot that helps humans and is a famous movie icon. The Decepticons from Transformers, however, are widely known robots that are out to kill humans. These movies and other social inputs have created a two-sided view of robots: either people support robots and are excited to see the future with them, or they are against them and fear any kind of artificial intelligence. Robotics is a double-edged sword; there is a considerable amount of evidence and experience to safely say that robots are essential to humans, yet many people have the innate fear that they will surpass us. Rather than fearing robots, people should try to embrace the advancing technology and the benefits that could result from it.
Imagine, for a second, a not-so-distant future produced not by humans but by a dystopian society engineered by humanity’s most amoral computational artificial intelligence, built without empathy by its equally emotionless robotic predecessors. Robots that make robots, which make more robots, which could make more robots to divide and diversify. Robots that learn and develop based on their interactions, and robots that respond to a variety of external stimuli. Each robot has the capability to learn and store information. This matrix of machines uses humans (young, old, babies, adults, and everyone else who can no longer contribute to their robotic overlords) as batteries, harvesting the remains of our biological and chemical energies to power themselves as they systematically replace human life with their robotic and psychopathic need for efficiency. Perfection, for flesh tears and withers, but metal is eternal. But don’t worry: these billions of robots have been provided with a manual of the Laws of Robotic Interactions with Humans ... to share.
After watching the movie I, Robot, I find that many ethical issues arise from the technology shown in the movie. The movie takes place in 2035 and is about robots that are programmed with Three Laws. First Law: a robot must never harm a human being or, through inaction, allow any harm to come to a human. Second Law: a robot must obey the orders given to it by human beings, except where such orders violate the First Law. Third Law: a robot must protect its own existence unless this violates the First or Second Laws. Humans use these robots to do common tasks for them. Some of the ethical questions raised by this movie include: do robots have the ability to make emotional or ethical decisions, and are they entitled to the same rights as
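The Three Laws form a strict priority ordering, and that structure can be sketched in a few lines of code (my own illustration; the flags and function names are invented, and real “harm” detection is of course the hard part the film explores). Each candidate action is checked against the laws in order, so a lower law can never override a higher one:

    # Hypothetical sketch: the Three Laws as a strict priority filter.
    # An action is described by simple flags; each law is checked in
    # order, so a lower law can never override a higher one.

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool           # would this action injure a human?
        inaction_harms_human: bool  # would *not* acting let a human come to harm?
        ordered_by_human: bool      # was this action commanded by a human?
        endangers_self: bool        # does it risk the robot's own existence?

    def evaluate(a):
        if a.harms_human:
            return "forbidden by the First Law"   # no harm, full stop
        if a.inaction_harms_human:
            return "required by the First Law"    # must act to prevent harm
        if a.ordered_by_human:
            return "required by the Second Law"   # obey, since the First Law allows it
        if a.endangers_self:
            return "avoided under the Third Law"  # self-preservation comes last
        return "permitted"

    # A human orders something harmful: the First Law wins.
    print(evaluate(Action(True, False, True, False)))

Notice that the whole hierarchy lives in the order of the checks, and everything else lives in the flags; the loopholes Asimov and the film exploit come from how slippery predicates like “harms a human” really are.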
Virtue: when I hear that word, I think of value and morality, and that only good people can be virtuous. When I hear the word ethics, I think of good versus evil, wrong and right. When the two are put together, you get virtue ethics. You may wonder what virtue ethics could possibly mean; is it just two words put together to form some type of fancy theory? This paper will discuss virtue ethics and the philosophy behind it.
Lately, more and more smart machines have been taking over regular human tasks, and as the trend grows, the bigger picture is that robots will take over many tasks now done by people. But many people think there are important ethical and moral issues that have to be dealt with first. Sooner or later there is going to be a robot that interacts in a humane manner, and there are many questions to be asked: how will they interact with us? Do we really want machines that are independent, self-directed, and have affect and emotion? I think we do, because they can provide many benefits. Obviously, as with all technologies, there are dangers as well. We need to ensure that people always