# Compare The Arguments Against Autonomous Weapons

In order to understand autonomous weapons, one must first understand the basis of artificial intelligence (A.I.). At its core, an A.I. is an algorithm (a step-by-step procedure based on mathematical data) that can handle tasks which would otherwise require human intelligence. This means it can perform reasoning tasks such as problem solving, prediction, and diagnosis. By contrast, the A.I.s portrayed in films and fantasy novels often involve machines that demonstrate human-level intelligence. To put this into perspective, consider a scenario in which an A.I. is assigned to drive a car from point A to point B while a car accident has occurred somewhere between the two points. A realistic A.I. would still try to drive the car from point A to point B because its…
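The contrast above — an algorithm that only executes its procedure versus a human who can respond to the unexpected — can be made concrete with a minimal sketch. This is a hypothetical illustration, not a real driving system; the function name, the route representation, and the `route_events` parameter are all assumptions introduced for the example.

```python
# Hypothetical sketch: a narrow, rule-following "A.I." as described above.
# It simply executes its algorithm (drive from A to B) and has no notion of
# events outside that procedure, such as an accident along the route.

def drive(start, goal, route_events):
    """Step through a fixed route; the algorithm ignores anything
    it was not explicitly programmed to handle."""
    position = start
    log = []
    while position < goal:
        position += 1  # advance one step along the route
        if position in route_events:
            # A human driver might stop to help; this algorithm has no
            # rule for accidents, so it keeps executing its procedure.
            log.append((position, "accident ignored"))
        else:
            log.append((position, "driving"))
    return log

# The accident at position 3 does not alter the A.I.'s behavior at all.
trace = drive(start=0, goal=5, route_events={3})
```

The point of the sketch is simply that the accident appears in the log but never changes the control flow: the algorithm reaches point B regardless.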
Stuart Russell, the director of the Center for Intelligent Systems at the University of California, Berkeley, raises two points in support of banning autonomous weapons: the development of A.I. weapons that outperform human soldiers, and the availability of A.I. weapons (Russell, 2015). Put differently, a human soldier can decide whether or not to shoot someone, such as a civilian, in an extreme environment. An A.I. weapon would only act within its algorithm, so unless that algorithm accounts for every possible scenario, it will produce unfavorable results. However, a game-theoretic formulation by G. Arslan and J. S. Shamma invalidates this. Russell stated that an A.I. is restricted by its algorithm, so it cannot reason like a human in complex scenarios; here, however, it is possible for the A.I. to make much more complex decisions through negotiation mechanisms.

Consider an example: a young boy, strapped with a bomb, walks toward a squad of soldiers. The soldiers are preoccupied with other things, so they do not notice the boy. An autonomous turret near the soldiers recognizes the explosives and prepares to shoot the hostile. The turret’s A.I. is only able to distinguish between lethal and nonlethal targets. However, if a drone with an A.I. built to identify age were present, it could prompt a more favorable outcome. It could stop the turret’s A.I. from killing the boy and proceed with one of two courses of action: the turret’s A.I. could shoot the boy in a nonlethal spot to neutralize him, or it could alert the soldiers to the incoming child. From there, the soldiers can either reason with the boy or disable him. Referring back to Arkin’s point, this is considered the optimal level of human control: the A.I.s make it easier to accomplish certain goals, but the hard decisions are left for humans to make. This also allows for more human control in a sense, since there are more algorithms
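The layered turret-and-drone scenario above can be sketched as two cooperating decision rules: a narrow turret algorithm, and a second A.I. that can veto it and defer the hard call to humans. Every name, field, and threshold here is an illustrative assumption; no real weapon system is being described.

```python
# Hypothetical sketch of the layered decision described in the essay:
# a turret A.I. that only classifies lethal vs. nonlethal targets, plus a
# drone A.I. that can also estimate age, veto the turret, and hand the
# final decision to the soldiers. All names and the age threshold are
# illustrative assumptions, not a real system.

def turret_decision(target):
    # The turret's algorithm knows only one distinction:
    # is the target carrying explosives or not?
    return "engage" if target["carrying_explosives"] else "ignore"

def drone_override(target, turret_action):
    # A second A.I. with an age model can negotiate a less lethal
    # outcome and leave the final choice to the human soldiers.
    if turret_action == "engage" and target.get("estimated_age", 99) < 18:
        return ["neutralize_nonlethally", "alert_soldiers"]
    return [turret_action]

boy = {"carrying_explosives": True, "estimated_age": 10}
actions = drone_override(boy, turret_decision(boy))
```

The design choice mirrors the essay's argument: adding a second algorithm does not remove humans from the loop but gives them more options, which is what the passage frames as the optimal level of human control.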