
Brant Carthel

Sec. 912
ENGR 482
4/25/2016

Morality of Self-Driving Cars

Introduction
Every day, human drivers must make ethical and moral decisions while behind the
wheel of a vehicle. Should a driver break the law and go over the speed limit when the flow of
traffic is driving over the speed limit? Many would say yes, since this is the safer option. How
can one program a car to make these logical decisions? Beyond achieving a high level of
artificial intelligence for the self-driving car, each moral situation would have to be individually
programmed. But an issue arises: how could the programmers possibly anticipate every
situation that could present itself? If the user's morality differs from that of the programmer,
the user might find the actions chosen by the self-driving car unacceptable in certain situations.
Humans are not perfect; we learn by making mistakes. Accidents are bound to happen
with self-driving cars, since they are made by humans, whether the fault lies with the decisions
made by the program or a human driver. Who then is responsible for the accident? The
programmer, the user, or the company that created the car? Utilitarian ethics will be applied to
the issue to determine what action someone who follows that theory would take. The
conclusions derived from this ethical theory will seek to answer two main questions: who is
responsible for the self-driving car's actions, and how should the car respond in morally
ambiguous situations? The conclusion drawn will state that the company should be responsible
for the actions of its self-driving cars, and that the software should be programmed to choose
the lesser of two evils, should the situation arise.
Utilitarian Ethics
Rule utilitarianism looks at the consequences of an action; the moral action is the one that
produces the most happiness or well-being for all people affected, not just for oneself.[2]
Kantian ethics differs in that Kant believed in universalized rules which can be applied to any
situation regardless of the circumstances. Rule utilitarianism does not have universal rules;
rather, its only rule is that the outcome of a moral action should provide the greatest good to
all people involved. To apply Utilitarian ethics to the morality of
self-driving cars, the first situation to consider would be a common hypothetical moral dilemma
called the Trolley Problem.[1] To apply this ethical experiment to self-driving cars, imagine the
self-driving car is driving at 70 mph down the road. Suddenly, a motorcyclist swerves from the
opposite lane and is sure to make a head on collision with the car. At the same time, a family of
5 is set up on the side of the road selling lemonade which the car would unavoidably crash into
if it were to swerve. Is it ethically permissible to stay the course, purposefully injuring/killing
the motorcyclist and possibly the person operating the self-driving car? Applying utilitarian
ethics to this version of the Trolley Problem, the outcome is clear. A utilitarian would see that,
to maximize the well-being or happiness of the most people, the car must stay the course,
killing two people rather than five. Now consider that the
President of the United States is on the motorcycle and there are five convicts on the side of the
road. Still, a strict rule utilitarian would choose to stay the course. One could argue that
killing the president would harm the well-being of millions of Americans; however, there
might also be millions of Americans whose well-being would increase if the president had
been a bad one. The fact of the matter is that there would be no way of knowing the exact
second-hand outcomes of either decision, so the rule utilitarian in this case should look only
at the immediate consequences of his or her actions and act accordingly. In 2007,
psychologists Fiery Cushman and Liane Young, along with the biologist Marc Hauser,
presented a test to thousands of web users and found that 89% of the participants would
choose the Utilitarian solution to the problem, that is, whichever choice ended in the fewest
lives lost.[6]
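To make the rule concrete, the reasoning above can be reduced to a comparison of estimated
outcomes. The following Python sketch is purely illustrative and is not any manufacturer's
actual decision logic; the Action class, the casualty estimates, and the function name are
hypothetical, and a real system would face uncertainty that this toy rule ignores.

from dataclasses import dataclass

@dataclass
class Action:
    name: str                     # e.g. "stay_course" or "swerve"
    expected_lives_lost: float    # estimated casualties if this action is taken

def utilitarian_choice(actions):
    # Pick the action whose immediate outcome costs the fewest lives,
    # mirroring the rule-utilitarian reading of the Trolley Problem above.
    return min(actions, key=lambda a: a.expected_lives_lost)

options = [
    Action("stay_course", expected_lives_lost=2),   # motorcyclist and, possibly, the passenger
    Action("swerve", expected_lives_lost=5),        # the family at the roadside stand
]
print(utilitarian_choice(options).name)             # -> stay_course

Run on the two options from the scenario above, this rule selects staying the course, matching
the strict rule-utilitarian choice described.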
How would a rule utilitarian decide who holds the blame for accidents caused by
self-driving cars? To decide, the well-being of all those involved in the situation must be
assessed for each possible placement of blame. To simplify matters, only three candidates for
blame will be considered: the user, the company, or the programmer. In the case
of the user, giving the blame would decrease the well-being of that user, the damages being
monetary. If the company were to be blamed, the well-being of the people at that company
might be decreased by a bad public reputation, but perhaps not; the effect of receiving blame
for occasional accidents would likely be minimal. If the accidents were frequent enough,
however, blaming the company would push it to improve its software or to dissolve, and in
either case the well-being of a much larger group of people would be increased. Blaming the
programmer would at worst result in the programmer being fired, decreasing his or her
well-being, but with no other obvious or direct effect on the well-being of others. Clearly, the
best choice for a rule utilitarian would be to place the blame on the company.
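The same kind of comparison underlies the blame question. The sketch below is a hypothetical
illustration only: the parties, the attributes, and the numeric well-being scores are arbitrary
placeholders chosen to mirror the qualitative argument above, not measured quantities.

from dataclasses import dataclass

@dataclass
class BlameOption:
    party: str
    direct_harm: float        # well-being lost by whoever receives the blame
    societal_benefit: float   # well-being gained by everyone else (e.g. safer software)

    def net_well_being(self):
        return self.societal_benefit - self.direct_harm

options = [
    BlameOption("user", direct_harm=3, societal_benefit=0),
    BlameOption("programmer", direct_harm=4, societal_benefit=0),
    # Blaming the company pressures it to improve its software or dissolve,
    # which the argument above says benefits a much larger group of people.
    BlameOption("company", direct_harm=2, societal_benefit=10),
]
print(max(options, key=lambda o: o.net_well_being()).party)   # -> company

With any scores that reflect the argument above, the company comes out as the blame
assignment with the greatest net well-being.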
Variations on the Trolley Problem and General Moral Consensus of the Public
The choice for a devout follower of Utilitarian ethics appears to be concrete in both
situations of the Trolley Problem previously stated. Consider now that the driverless car is
driving safely atop a large cliff. Suddenly, the cliff drops off up ahead, and there is a person
to your left and another to your right, so were you to swerve to avoid the drop-off you would
surely kill one of the two. The choice becomes: should you sacrifice yourself for the life
of one other person? In this case, the choice for a devout Utilitarian becomes less certain.
Which person's happiness or well-being would be most harmed if he or she were killed?
Looking strictly at the rules of Utilitarianism laid out by John Stuart Mill provides only a
simple argument: the moral action is that which causes the maximum happiness to the
maximum number of people. So the conclusion for this particular situation from a Utilitarian
perspective remains unclear, unless some assumptions are made about Utilitarianism which
aren't explicitly mentioned by Mill. A recent paper published by Jean-Francois Bonnefon, Azim
Shariff, and Iyad Rahwan showed that most non-philosophers, people probably not concerned
with which ethical theory they follow, believed that cars should be programmed to avoid
hurting bystanders.[4] The researchers, led by psychologist Jean-Francois Bonnefon of the
Toulouse School of Economics, presented a series of different
vehicle collision scenarios to approximately 900 randomly chosen people. They found that 75%
of the participants in the experiment thought that the car should always swerve and kill the
passenger in the car, even if there was only one bystander. The conclusion drawn from the
research is significant in that it shows that most people's morality favors the duty not to do
bad things, i.e., not to kill. Among the people and philosophers debating moral theory, the
solution to the Trolley Problem is complicated by various arguments and variations on the
problem that appeal to one's moral intuitions but point to different
answers.
The very fact that the Trolley Problem is debated frequently and fiercely, with no clear
consensus at least among philosophers, illustrates the tension between the human moral duty
of self-preservation and the moral duty not to do bad things. As previously stated, the general
consensus of the public seems to be that following the moral duty not to do bad things, at
least in this case, would be the correct moral decision.
Criticisms of the Utilitarian Approach
Philosophers are far from finding a solution to the Trolley Problem despite the multitude
of papers that debate every tiny ethical detail and present multiple variations on the Trolley
Problem. Former UCLA philosophy professor Warren Quinn explicitly rejected the Utilitarian
approach to the problem, arguing that humans have a duty to respect other persons rather
than to follow the Utilitarian imperative to maximize happiness.[5] The conclusion drawn from
Quinn's paper is that an action that directly and intentionally causes harm is ethically worse
than an indirect action that happens to lead to harm. The action of swerving to kill a bystander
and save yourself would be seen as a direct action which intentionally causes harm, which
Quinn's paper argues is morally wrong. The results of the experiment from the study led by
Jean-Francois Bonnefon mentioned earlier

seem to align well with Quinn's paper. The general public seems to agree that, in the case of
the Trolley Problem where either the passenger or a bystander would be killed, directly
causing harm to someone is usually the morally wrong choice.
Conclusion
Summarizing the conclusions drawn from the studies and the Utilitarian application to the
Trolley Problem gives a relatively clear answer to the Trolley Problem applied to driverless cars
and to how the public seems to want the cars to be programmed. When looking at the traditional
Trolley Problem, the choice of a Utilitarian is clear and perhaps trivial for one familiar with
Utilitarian ethical theory. The autonomous vehicle should be programmed to make the
decision which results in the fewest lives lost, which aligns with the Utilitarian moral ground
of maximizing happiness. When looking at whom to blame for accidents involving
autonomous vehicles, three parties were considered: the passenger, the programmer, or the
company which produced the vehicles. By applying the Utilitarian approach of maximizing
happiness for the greatest number of people, the conclusion drawn was that the company
should be held responsible for accidents caused by its autonomous vehicles.
The variation on the Trolley Problem which involves sacrificing the passenger in order to save
the life of one bystander could not be given a clear answer by the Utilitarian approach, since
the ethical theory does not specify which person's happiness or well-being is more important.
The solution to this specific variation on the Trolley Problem was instead given by studies of
the public's choice for this situation and by the arguments of philosophers. The conclusion
from both the public and the philosopher Warren Quinn was that intentionally causing harm
to another person is morally wrong, even in the case which would violate the moral duty of
self-preservation.
In reality, cars will very rarely be in a situation like the Trolley Problem, where there are
only two courses of action and the computer can accurately predict whether each choice will
lead to death. But in a future where more and more autonomous vehicles are being deployed
onto the roads, it's not implausible that the software will at some point have to make a choice
between causing harm to the passenger inside or to a bystander. Driverless cars will need to
be able to recognize these situations and react accordingly. Autonomous car manufacturers,
such as Google, have not yet revealed their decisions on this issue. Given that there appears to
be a general consensus in public opinion, perhaps one day an acceptable solution will be
found.

References
[1] http://www.oswego.edu/~delancey/trolley.pdf
[2] http://www.utilitarianism.com/ruleutil.htm
[3] http://qz.com/536738/should-driverless-cars-kill-their-own-passengers-to-save-a-pedestrian/
[4] http://qz.com/536738/should-driverless-cars-kill-their-own-passengers-to-save-a-pedestrian/
[5] http://philosophyfaculty.ucsd.edu/faculty/rarneson/Courses/QuinnonDDE.pdf
[6] http://www.theatlantic.com/health/archive/2014/07/what-if-one-of-the-most-popular-experiments-in-psychology-is-worthless/374931/
