Whether it is in a laboratory, a boardroom, the halls of Congress, or at the precipice of a perilous situation, weighing the value of one life against another and making life-and-death decisions is an excruciating process with no easy answers (Kaplan, 2016). Nonetheless, as the discussion about autonomous cars and the ethical dilemmas they present rages on, I find it interesting that we insist upon imposing higher moral and ethical standards on the programming and decision-making abilities of an autonomous car than on human drivers (Parnell, 2016). Moreover, as we look to do so, we are also departing from the ethical philosophy that has been utilized for many years to pioneer many of the safety features, laws, and regulations that enhance automobile safety.
However, there was little discussion about the significant consequences associated with the human driver's split-second decision (Lin, n.d.). Moreover, when humans are compelled to react instinctually to life-threatening stimuli, their decisions are driven primarily by self-preservation rather than by a cognitive moral philosophy (Taflinger, 1996). As a result, I believe deontology should be the moral philosophy used to address the moral dilemmas associated with autonomous cars. Manufacturers, scholars, and lawmakers should gather data assessing how most humans would react in these scenarios and write code for autonomous vehicles to respond in a similar manner in these excruciatingly painful situations. Additionally, a deontological moral philosophy would enable auto manufacturers and lawmakers to continue to leverage the same moral philosophy they have used for the past fifty years, as well as the data from past crash tests and experience, to innovate and regulate automobile safety features such as seat belts, crumple zones, anti-lock brakes, air bags, and automatic emergency braking systems, given that each of these safety systems was designed to protect the passengers inside the vehicle (Rynkiewicz, 2015). This would enable consumers to reap the many other safety benefits of autonomous vehicles while having a clear understanding that, when faced with a split-second perilous decision, the vehicle will mirror the instinctual actions of a human driver, follow a deontological moral philosophy, and attempt to safeguard the lives of the car's passengers.
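The data-driven approach proposed above, surveying how most human drivers react and encoding the majority response, could be sketched roughly as follows. This is only an illustrative sketch; the scenario names, actions, and survey structure are invented for the example and do not come from the cited sources.

```python
from collections import Counter

# Hypothetical survey data: for each emergency scenario, the action each
# surveyed human driver said they would instinctively take.
survey_responses = {
    "pedestrian_in_lane": ["brake", "brake", "swerve", "brake"],
    "oncoming_vehicle":   ["swerve", "swerve", "brake", "swerve"],
}

def majority_policy(responses):
    """Map each scenario to the action most surveyed humans reported taking."""
    return {scenario: Counter(actions).most_common(1)[0][0]
            for scenario, actions in responses.items()}

policy = majority_policy(survey_responses)
print(policy["pedestrian_in_lane"])  # -> "brake", the most common human reaction
```

A real system would of course need far richer scenario descriptions and vastly more data, but the sketch shows the core idea: the vehicle's coded response is fixed in advance to mirror the typical human one.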
Each generation has given humanity something that the majority of us thought was impossible and would never happen. Some people believe that self-driving cars are good for the future, while others think they will make us depend on technology too much. In truth, self-driving cars have a lot of potential and many unanswered questions. Google has been demonstrating its driverless technology over the past few years by bringing computerization into what has, for over a hundred years, been solely a human activity: driving an automobile. It has done this by retrofitting Toyota, Lexus, and Nissan vehicles with cameras and sensors. "Major car manufactures already market and sell high-end vehicles with features like automated braking, self-parking, lane-departure warning, and variable speed cruise control" (Guerra). There is no doubt that self-driving cars have potential, but the technology has serious questions to address. "With the news that driverless cars are coming to our roads, should we be discussing what will happen when the cars has to choose between the safety of its occupant and the safety of the road users" (Wise). Will the car drive itself off a bridge to avoid an accident? Will it run onto the sidewalk to avoid hitting a pedestrian? These are the serious questions people are raising.
Right now, self-driving cars and trucks are hitting the road and will soon be available on the general market. Major companies like Google, Tesla, Uber, and Delphi are leading the autonomous car industry, and in the past few years they have made great strides in improving this technology. The concerns surrounding this technology must be addressed before it reaches the general public. Given the current state of automobiles that don't need drivers, the American consumer needs to be mindful of the moral decisions this technology is handling: it is still an emerging technology, the laws that will shape it are still being made, and it is unclear who is choosing who lives and who dies.
Automobiles are a crucial innovation that plays an increasingly significant role in society today. Cars provide people with a primary method of transportation and have emerged as a basic necessity in day-to-day life. Although they have a variety of positive effects, regular vehicles also come with many detrimental aspects. These adverse effects can be eliminated by a new, transformative type of automobile known as the autonomous, or self-driving, vehicle (AV). Regular vehicles lack the safety, the equal access to transportation, the efficiency, and the environmental benefits that AVs offer. Society would become more advanced by modifying transportation so that only AVs are on the road and no regular vehicles whatsoever. The AVs could permissibly have different moral algorithms from each other, depending on consumer and manufacturer choice; any moral algorithm that
Self-driving cars date back to 1925 and the creation of the Houdina Radio Control: a car operated by radio from a second car, using a transmitter and an antenna. Since then, the futuristic dream of autonomous cars has transformed into the reality of the cars we see now. These cars are nothing short of the new technological advances that have occurred over the past decade. However, with these advances, many question whether these cars are ready to be sold, due to the fatal accident that occurred in May 2016 involving an autonomous Tesla and a white tractor-trailer. Because the Tesla was unable to detect the white trailer due to technological issues, it failed to stop, and since the driver was not prepared to steer, the collision proved fatal, leading to the death of the Tesla's owner. With the increase of these cars on the road from companies like BMW, Daimler, Ford, Apple, Uber, and Google, this poses a serious threat not only to the people operating these autonomous vehicles but also to the surrounding drivers. I believe that autonomous cars should not be put on the road and that these cars are not beneficial to the population.
Self-driving cars are, without a doubt, the future of the automobile industry. Although this technology could be extremely beneficial, some tough decisions come along with it. It seems preposterous to use the term "adjustable ethics" when discussing situations that can lead to life or death. How can ethics, an extremely human characteristic, be transferred into a machine without a real understanding of the world around it? By definition, adjustable ethics are decisions made based on the situation surrounding a person and on how that person would apply their moral beliefs to it. Although innovation has created incredible technology that could save many lives and practically eliminate crash fatalities, there seems to be no concrete answer
"We envision in the future, you can take your hands off the wheel, and your commute becomes restful or productive instead of frustrating and exhausting," said Jeffrey Zients, director of the National Economic Council, adding that highly automated vehicles "will save time, money and lives" (3). Driverless cars are a growing technological development that has been gaining popularity in the automotive industry. The technology is being used in applications such as collision and crash avoidance, but it also raises fears about safety. The moral dilemma surfaces from putting human lives in the hands of, essentially, a robot chauffeur. I plan to explore safety ethics through a utilitarian perspective, including the line-drawing technique, and a Kantian perspective.
With the introduction of self-driving cars, an unspoken dilemma has emerged that very few individuals have discussed: should a self-driving car kill its owner to save the lives of others? In the first article critique, the Insurance Journal poses the question of what ethical system a self-driving car will adhere to and, in addition, whether the driver or owner should be able to choose and adjust that ethical system. In the article, the author reaches out to Ameen Barghi of Oxford University for the options. Barghi stated that there are two philosophical approaches to autonomous cars: utilitarianism, which says to do what will produce the greatest happiness for the greatest number of people, and deontology, which argues that some values are simply always true (Insurance Journal, 2015). Barghi adds that once an ethical system has been chosen, it can be further detailed, such as whether the decisions will be based on rule or act utilitarianism (Insurance Journal, 2015).
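The contrast Barghi draws between the two ethical systems can be made concrete with a small sketch. The option names, the forbidden-action rule, and the casualty figures below are hypothetical illustrations, not anything specified by Barghi or the Insurance Journal; the point is only to show how the two philosophies can select different actions from the same facts.

```python
def utilitarian_choice(options):
    """Act utilitarianism: pick whichever option minimizes total lives lost."""
    return min(options, key=lambda o: o["lives_lost"])

def deontological_choice(options, forbidden=("actively_harm_bystander",)):
    """Deontology: never violate a fixed rule, regardless of the outcome.
    Among rule-permitted options, the tie-break here is simply the first listed."""
    permitted = [o for o in options if o["action_type"] not in forbidden]
    return permitted[0] if permitted else None

# Hypothetical dilemma: stay in lane (two occupants at risk) or swerve
# into a single bystander.
options = [
    {"name": "stay",   "action_type": "allow_harm",              "lives_lost": 2},
    {"name": "swerve", "action_type": "actively_harm_bystander", "lives_lost": 1},
]

print(utilitarian_choice(options)["name"])    # -> "swerve": fewest lives lost
print(deontological_choice(options)["name"])  # -> "stay": the rule forbids swerving
```

An "adjustable ethics setting," as discussed elsewhere in this debate, would amount to letting the owner choose which of these two functions the vehicle calls.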
The article "Here's a Terrible Idea: Robot Cars with Adjustable Ethics Settings" presents a very interesting dilemma I had not considered before. In theory, it makes sense that an intelligent vehicle could have the capability to determine whom and where to strike should the decision be needed. Unfortunately, the ethical robot car idea is problematic and comes with numerous moral issues.
Recently, a pedestrian was struck and killed by an autonomous vehicle in Tempe, AZ while crossing the road. This incident further highlights the difficulties engineers face when trying to implement concrete standards to which artificially intelligent machines must adhere. "It's one thing to automate driving on a well-striped, high-quality, cars-only road. But machine learning is harder when you add 'unpredictable' people, poorly striped lanes, low quality pavement, inclement weather, and other inconsistencies. Algorithms thrive on order—and city streets have less of it" (Tomer). In this unfortunate scenario, the technology involved failed to respond to an unexpected situation when the victim stepped from the curb onto the road in an area that was not designated as a crosswalk. In alternate scenarios, the vehicle in question may even be required to make a decision, similar to the aforementioned trolley problem, in which no outcome would satisfy all parties involved. Engineers must decide what actions would be taken in the event that saving one person would almost certainly guarantee the death of another, and which life should be prioritized, if
In today's world of manual driving, one would probably drive faster than the speed limit in the case of an emergency. Would an autonomous car programmed to follow the laws and regulations of driving have the situational awareness to justify speeding? Perhaps an emergency "speed to the hospital" mode could be installed, but that could give rise to a whole new set of issues. In another scenario, imagine a two-lane road with the cars in the two lanes traveling in opposite directions. Suddenly, a squirrel runs out into the middle of the lane. In the choice between harm to oneself and other humans versus harm to a squirrel, most humans would likely opt to sacrifice the squirrel. The self-driving car, on the other hand, might be programmed to slam on its brakes, possibly resulting in a crash with the human drivers behind it. If there were no oncoming traffic, a human driver could simply swerve into the other lane to avoid the squirrel, whereas a driverless car that autonomously follows the laws of traffic may be prohibited from crossing the double yellow line that divides the two lanes. It is situations like these that highlight the key issue of how a self-driving car should react when ethics and law conflict.
I would choose deontology for the programming of autonomous vehicles. "Deontology refers to moral philosophies that focus on the rights of the individuals and on the intentions associated with a particular behavior rather than its consequences" (Ferrell et al., 2013, p. 159). Why should some other life be taken by the actions of the autonomous vehicle? The rights of the individuals in the other lanes in the video would take preference over the autonomous vehicle's rights. Since the autonomous vehicle would be programmed not to break the law or speed and to follow safe following procedures, the vehicle should be able to withstand the impact. However, even if the impact is fatal, I believe the rights of the others in the vehicle should
On a Sunday night, a woman named Elaine Herzberg was walking her bike across the street when she was hit and killed by an autonomous vehicle. While the SUV did have a driver behind the wheel, the car at the time was in self-driving mode as part of an experiment run by the company Uber. The car was reportedly going 40 miles per hour in a 35-mile-per-hour zone and showed no signs of stopping as it sped toward the pedestrian. This is a tragic loss of life, and even though the car was still being tested, it shows the dangerous consequences technology can have for people. This is especially concerning since Uber's overall goal is to have driverless cars drive people around, though it is clear that much more research must be conducted before reaching that point.
For example, if your robot car chooses to veer toward one innocent bystander in an effort to save the four people in the car from driving off a cliff, how can we guarantee that the four people will actually survive? What if, in spite of the vehicle's effort to save the four people, it instead leads to the death of everyone involved? Because there is no way of predicting the actual consequences, this is one of the many faults of robot cars. If the vehicle is programmed so that the safety of the consumer is prioritized, it would be in line with ethical egoism; it would maximize only the consumer's well-being. For example, in an instance where your car is faced with the choice of saving you or a group of four innocent bystanders, it would save you. Because this option is unsettling, it could lead to an undesirable lawsuit against the autonomous vehicle manufacturer. Whether the car is programmed to maximize utility or to prioritize the consumer's life, each option leads to further drawbacks for robot cars.
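The uncertainty objection above, that "saving the four" is never guaranteed, is exactly why any utility-maximizing program must work with expected rather than certain outcomes. A minimal sketch of such an expected-fatalities calculation follows; the survival probabilities are entirely invented for illustration, and real values would be unknowable in practice, which is the essay's point.

```python
# Hypothetical cliff scenario. Each option exposes groups of people to risk,
# expressed as (number_of_people, probability_each_dies) pairs.

def expected_deaths(groups):
    """Sum expected fatalities over (people_at_risk, probability_of_death) pairs."""
    return sum(n * p for n, p in groups)

# Option A: swerve toward the bystander. The bystander very likely dies,
# and the four occupants may still not be saved.
swerve = expected_deaths([(1, 0.9), (4, 0.2)])

# Option B: do not swerve; the four occupants almost certainly go off the cliff.
no_swerve = expected_deaths([(4, 0.95)])

print(round(swerve, 2), round(no_swerve, 2))  # -> 1.7 3.8
```

A utilitarian program would pick the swerve here, yet if the assumed probabilities are even slightly wrong, the "optimal" choice can change, which illustrates why programming on predicted consequences is so fraught.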
In their article "Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?", Bonnefon, Shariff, and Rahwan (2015) argue that the development of autonomous vehicles (AVs) comes with a slew of significant moral problems that can benefit from the use of experimental ethics. Bonnefon et al. list the expected benefits that AVs will provide, such as improving traffic efficiency, reducing pollution, and, most importantly, a predicted reduction in traffic accidents of up to ninety percent (1). However, the authors point out that, in spite of all the good that will follow from the deployment of AVs, there will be unavoidable
First of all, making a moral decision in an emergency situation may be impossible for an autonomous car. Newcomb (2014) mentions the "tunnel problem," a thought experiment about autonomous cars in an emergency. An autonomous car is traveling on a single-lane highway when a boy crossing the road trips and falls just as the car is entering a tunnel. The car must either strike the side of the tunnel entrance, sacrificing the lives of its passengers, or hit the boy and kill him. In this situation, whichever choice the autonomous car makes, it will bear the blame of immorality. Specifically, if an autonomous car was set as