Stuart Russell's Ethical Dilemmas

Android Seventy-Two steps out of its pod and approaches the desk with the same calculated footsteps it has always taken to sit in front of the same scientists, whose ideas also remain the same. "It's just not safe!" one of them shouts, as if Android Seventy-Two weren't there, waiting patiently for another test. "Letting that thing on the streets would spell catastrophe!" The radars in Android Seventy-Two's head detect anger in the reddening of the man's face and the stiffness of his posture, and it wishes it could show that it feels the same way. It doesn't matter how many tests they run or how many hurdles Android Seventy-Two jumps over, because they will always label it a "thing," "catastrophic," or a "danger to humans."
Those who view AIs as dangerous call for strict laws and regulations because they don't want to lose control of the AIs. One example of such strictness comes from Stuart Russell, who argues that an AI's only purpose should be to learn human values, but never to understand those values, giving it "no purpose of its own and no innate desire to protect itself" (58). AIs would thus understand their existence only in terms of human values, unable to make choices beyond this point of reference. This would prevent AIs from making their own decisions while also stopping programmers from making further improvements, ruining any beneficial effects AIs may have for the future and treating them unethically. Therefore, the system of laws needed for AIs must be strict, but not so suffocating that they cannot develop or have rights. Ashrafian asserts that people should enforce a Roman-like system of laws that sets AIs at a lower status than humans, but with the ability to gain rights (325). Even though this would also start AIs at a lower status, as Russell suggests, it still gives them the ability to grow and gain more rights in society, no longer hindered by rigid laws. Additionally, given the intention to make AIs with intelligence equal or superior to humans', it would not be ethically correct to trap these beings in an oppressive cycle of never being allowed to have rights. In "A Defense of the Rights of Artificial Intelligences," Eric Schwitzgebel and Mara Garza, a professor of philosophy and a researcher of artificial moral cognition respectively, propose that "it is approximately as odious to regard a psychologically human-equivalent AI as having diminished moral status on the ground that it is legally property as it is in the case of human slavery" (108). Thus, there is no morally correct way to create life in these machines and then give it no rights.