Implications of a ban on Lethal Autonomous Weapon Systems: A critical literature review
Abstract
The primary objective of this study is to summarise the findings of a review of recent literature on lethal autonomous weapon systems. It addresses concerns articulated by some scholars who call for banning autonomous weapons. This paper argues that such a ban would be challenging to implement for several reasons and could hinder the future development of artificially intelligent systems. Finally, the paper concludes with recommendations on realising the benefits of autonomous weapons through regulatory frameworks that neither prohibit the weapons outright nor compromise human rights.
Keywords
lethal decision-making; automation; killer robots; autonomous weapons; human rights; drones
Introduction
Militaries around the world have used technological weapons for hundreds of years, and research indicates that the use of artificial intelligence in warfare has increased significantly in recent years with the advent of unmanned vehicles such as drones (Kanwar, 2011, p.616). Robotic science offers today’s world many unconventional weapons, including autonomous weapons that can make lethal decisions without a human in the loop. Krishnan (2009) defines an autonomous weapon as a computer-based system that can accomplish a mission by ascertaining and engaging targets without human intervention. Such systems are known, in short, as Lethal Autonomous Weapon Systems, or LAWS.
Singer describes operations in Iraq as they were being conducted in 2008 under the threat of Improvised Explosive Devices (IEDs): “The Explosive Ordnance Disposal, EOD, teams were tasked with defeating this threat, roving about the battlefield to find and defuse the IEDs before they could explode and kill.” Robots such as the PackBot and Talon were used to disarm IEDs, saving the lives of soldiers and civilians. The proliferation of technology on the battlefield can be seen in today’s combat environment on the ground, at sea and in the air, and it will continue to grow. Singer states that “man’s monopoly of warfare is being broken” because digital weapons such as the PackBot, Talon, SWORDS, Predator, Global Hawk and many others are a “sign” that “we are entering the era of robots of war.” He supports his thesis of technological proliferation in weaponry with quantifiable data on the industry’s rapid growth to meet demand: “in 1999, there were nine companies with federal contracts in homeland security. By 2003, there were 3,512. In 2006, there were 33,890.” Singer then provides a history of robots, current trends, and what we can expect in the future. The book also offers a glimpse of what the author believes future battlefields will look like and the changes he thinks U.S. policy makers and military leaders need to address, including the law of war, the role of robots in war, and the level of authority robots should have to fight wars.
The US has discussed AI issues with Japan, Korea, Germany, Poland, the UK, and Italy, and the issue is also being debated in the UN, the G-7 and the Organisation for Economic Co-operation and Development (Source 7). During these meetings it was recommended that the US government develop a government-wide strategy on international engagement related to AI, including strategies that account for AI’s influence on cybersecurity. Artificial intelligence in weaponry also raises the concern of “meaningful human control” in life-or-death situations. Removing soldiers from direct contact could take the humanity out of fighting: if soldiers no longer have to face the people they may have to kill, who is to say they will not shoot anyone they suspect of doing something unsavoury? This concern should be addressed before artificially intelligent machines are deployed in place of an army. On a different note, AI may be able to predict attacks before they even occur. An AI monitoring tool called iAWACS, or internet airborne warning and control system, can comb through tweets and images and predict the general mood of digital conversation (Fung). This can help prevent attacks or riots in an area. However, there is always the concern of cybersecurity and the chance that hacking would feed the monitors false information, leading them off the trail of a genuine threat.
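The internals of iAWACS are not publicly specified, but the idea of scoring the "general mood" of digital conversation can be illustrated with a minimal lexicon-based sketch. The word lists, scoring rule, and threshold below are invented for demonstration and are far simpler than any deployed monitoring system.

```python
# Illustrative sketch of lexicon-based mood monitoring over short posts.
# The word lists and threshold are hypothetical, not taken from iAWACS.

NEGATIVE = {"riot", "attack", "angry", "violence", "threat"}
POSITIVE = {"calm", "peace", "celebrate", "happy", "safe"}

def mood_score(posts):
    """Return a score in [-1, 1]; strongly negative values suggest unrest."""
    hits = 0    # signed count of sentiment-bearing words
    total = 0   # total sentiment-bearing words seen
    for post in posts:
        for word in post.lower().split():
            if word in NEGATIVE:
                hits -= 1
                total += 1
            elif word in POSITIVE:
                hits += 1
                total += 1
    return hits / total if total else 0.0

def flag_unrest(posts, threshold=-0.5):
    """Raise a flag when the aggregate mood falls below the threshold."""
    return mood_score(posts) < threshold
```

A real system would of course need robust language models, image analysis, and provenance checks; the hacking concern raised above corresponds here to an adversary simply flooding the input with benign-looking posts to drag the score back toward zero.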
Assault-style and automatic weapons have been a common denominator in almost all fatal mass shootings. It is unnecessary for civilians to purchase automatic weapons for non-military use, and a ban on civilian purchases of militarised weapons would drastically decrease the number of fatal mass shootings.
Others counter that banning assault weapons is not the way to slow the crime rate: a ban would leave guns in the hands of people intent on harming others while keeping them out of the hands of people who would use them for hunting, because bad actors will always be able to get their hands on weapons.
The use of drones to carry out military attacks is an important current topic. While keeping our soldiers safe is a primary concern, sparing the lives of civilians and limiting the destruction of local infrastructure matters too, from both a rebuilding and an ethical point of view. In the article “The Drone Wars: International Law Will Not Make Them Humane,” co-authored by the historian Arthur Herman and the law professor John Yoo, the authors discuss the history of technological advances in warfare and detail the factors that have kept those advances under control. Through historical examples of technology in warfare, they argue that international law alone will not make drone wars humane.
As a quick initial definition of an autonomous weapon system: we are not talking about systems like remotely piloted drones, where a pilot guides the aircraft throughout its mission and a human operator actually pulls the trigger to initiate a lethal strike.
The U.S. military has been, and remains, a world leader in remote targeted killings ever since the fateful events of September 11, 2001. The drone has become dominant in U.S. national security strategy, which has shifted from counterinsurgency in the city to counterterrorism from the skies. Now, as the world yearns for the next big thing to change everyday life, mankind turns its gaze to artificial intelligence used to carry weapons of war. For these reasons, drone usage by the general public in the United States should be limited.
To understand autonomous weapons, one must first understand the basis of artificial intelligence (A.I.). At their core, A.I.s are algorithms (step-by-step procedures based on mathematical data) that can handle tasks which would otherwise require human intelligence. This means they can perform reasoning tasks such as problem solving, prediction and diagnosis. By contrast, the A.I.s portrayed in films and fantasy novels often demonstrate human-level intelligence. To put this into perspective, consider a scenario in which an A.I. is assigned to drive a car from point A to point B while a car accident is unfolding between the two points. A realistic A.I. would still try to drive the car from point A to point B, because its objective is defined narrowly: it pursues the goal it was given and notices nothing outside it.
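The point-A-to-point-B scenario above can be made concrete with a minimal sketch of such a narrow, goal-directed algorithm: a breadth-first path planner on a grid. The grid and cell markings are invented for illustration. Note what the planner does not do: it has no concept of a nearby accident unless that cell is explicitly marked as blocked in its map.

```python
# Minimal sketch of a "realistic" narrow A.I.: breadth-first route planning
# from A to B on a grid. '#' marks a blocked cell; everything else is open.
from collections import deque

def plan_route(grid, start, goal):
    """Return a shortest list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None  # no route exists within the algorithm's map
```

The algorithm optimises exactly one thing: the shortest open route in its map. Anything a human driver would react to, but which is not encoded as a `#` cell, is simply invisible to it, which is the gap between realistic A.I. and the fictional, human-level kind.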
When discussing the potential of “robot weapons,” it is imperative to critically assess drone warfare as their natural predecessor. According to a 2013 Gallup Poll, 65% of people in the United States support drone attacks on terrorists abroad. Drone warfare is attractive because it promises to keep soldiers safe from dangers on the ground. However, drone pilots have, on average, much higher rates of PTSD than their counterparts in the field, which is one of the main arguments for implementing “robot weapons” in their place.
Eleven years ago, the United States Air Force launched a missile from a drone for the first time at a test range in the Nevada desert (Drone Test). The use of armed drones has risen dramatically since 2009, and drone strikes are now almost a daily occurrence. In 2011 the use of drones continued to rise, with strikes in Afghanistan, Pakistan, Yemen, Libya and Somalia. Proponents of armed drones argue that their ability to watch and wait, with highly accurate sensors and cameras, gives increased control over when and where to strike, both increasing the chances of success and reducing the risk to civilians.
Hardening our systems could be an option to prevent other nations from hacking them. Consider, for instance, the storyline of Call of Duty: Black Ops II: the United States fields autonomous weapon systems and drones, the antagonist hacks them all and turns them against the country, causing worldwide panic because of the drones deployed across the globe. This fear is one that I personally share. Another film that depicts autonomous robots is I, Robot. What if those drones were hacked, or fed a single piece of corrupted data, went rogue and started emptying their payloads into cities? As society develops and becomes more technologically advanced, one day everything will be technology-based. And even though we are not yet technologically ready for autonomous drones, fully autonomous unmanned combat aircraft are feasible. But what moral and legal lines would be crossed? Advocates of a ban on autonomous weapons often claim that today’s technology cannot reliably discriminate between civilian and military targets and therefore cannot comply with the laws of war. In some situations, that is true; in others, it is not.
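One regulatory safeguard discussed in this literature, short of a full ban, is enforcing “meaningful human control” by construction: the weapon may act autonomously only when its target classification is both unambiguous and high-confidence, and must defer everything else to a human operator. The sketch below is a hypothetical illustration of that policy; the labels, threshold, and decision names are assumptions, not any fielded system’s logic.

```python
# Hedged sketch of a human-in-the-loop engagement policy: a hypothetical
# classifier output (label + confidence) is mapped to one of three decisions.

def engagement_decision(label, confidence, threshold=0.95):
    """Map a classifier verdict to 'hold', 'refer_to_human', or 'engage_candidate'."""
    if label == "civilian":
        return "hold"              # never engage a suspected civilian
    if label == "military" and confidence >= threshold:
        return "engage_candidate"  # still subject to rules of engagement
    return "refer_to_human"        # all uncertain cases go to an operator
```

The design choice worth noticing is asymmetry: a “civilian” verdict halts action at any confidence, while a “military” verdict permits autonomous action only above a strict threshold, so classifier uncertainty always fails toward human judgment rather than toward a strike.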
Autonomous military killer robots act on their own: they have no human driver and are instead programmed to act in certain ways in specific situations. I think military autonomous robots should be banned worldwide because they could be hacked, mistake civilians for enemies, and make lethal mistakes.
Unmanned aerial vehicles (UAVs), commonly known as drones, are already a key element of modern warfare, and they are now set to take on a much larger role in our society. Although they were created for military use, today they are more generally used as surveillance or spy vehicles. Drones are a controversial topic: while the government claims they serve safety purposes, many citizens of the United States perceive them as an unconstitutional breach of privacy. To better understand the issue at hand, I will analyse both the benefits and the drawbacks of these devices with the help of the documentary “Speciesism: The Movie,” directed by Mark Devries, and the “Why They Can Watch You:” article.
“Killer Robots”, or, officially, “Lethal Autonomous Robots” (LARs), are robots that, once activated, can operate without any further human intervention. This new class of weapons system is capable of identifying and attacking a human target on its own. Delegates should note that these robots differ from non-autonomous armed robots and from peaceful autonomous robots used for civilian purposes.
It could even mean that war becomes more brutal, or more efficient in its destructive nature. Hiram Maxim, the inventor of the machine gun, said that it would make war so wicked that it would bring war to an end (Coker et al.). The same claim was made by the Wright Brothers, inventors of the aeroplane, and Guglielmo Marconi, inventor of the radio. The idea that technology will make war “impossible” is still discussed today whenever autonomous military technologies come up. Other problems arise from the laws and principles set out by various government organisations, which argue that machines can never replace human moral judgement. Even if its judgement of a situation is wrong, a machine cannot reflect on the morals and ethics of that situation. Machines are programmed to do specific tasks and will work toward completing them; they will not stop to think about what could be wrong with their actions, or about the laws of the battlefield that would stop a human soldier from performing such actions. Even if there were a way to incorporate ethical and legal standards into weapon design and a robot’s code, “a machine, no matter how good, cannot completely replace the presence of a true moral agent in the form of a human being possessed of a conscience and the faculty of moral judgment” (Anderson and Waxman). Some view these developments in automated technology as a crisis for the laws of war, and this concern drives much of the current debate over regulation.