Are autonomous vehicles challenged by ethical dilemmas?

While truly autonomous vehicles might be decades away, technological advances continue unabated. Computer programmers and specialists in artificial intelligence have worked overtime to figure out clever ways to have the vehicle sense and avoid hazards at the earliest possible moment. This often includes the use of hypothetical, no-win scenarios such as the infamous ethical dilemma known as the trolley problem.

What is the “Trolley Problem”?

The trolley problem is a thought experiment first posed decades ago: a philosophical no-win scenario built around an impossible choice. As originally framed, the experiment puts that choice in the hands of an observer.

A runaway trolley is approaching a Y split in its track. If the trolley continues forward, it will hit and kill five people tied to the tracks, unable to move. If the observer pulls a lever, the trolley will divert to a different track, on which one person likewise cannot move. Tragedy will certainly ensue, but which is the more ethical resolution? To do nothing and allow five people to die, or to act, directly causing the certain death of one individual?

A similar, modern version of this dilemma is something programmers and AI experts must work to solve. Even a driver with a lifetime of experience would struggle with such a decision. Will it be possible to imbue a computer relying on machine learning with the decision tree necessary to reach a conclusion? The machine will likely choose the solution that results in fewer fatalities – the lesser of two evils. But that means the programmers must allow the car’s computer to decide on, and then take, an action that directly leads to the death of a human pedestrian.
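To make the “lesser of two evils” logic concrete, here is a minimal, purely hypothetical sketch of such a decision rule. The function name, the action labels, and the idea of reducing each outcome to a single fatality count are illustrative assumptions only; real autonomous-vehicle planning systems are vastly more complex and do not work this way.

```python
def choose_action(outcomes):
    """Pick the action whose predicted outcome has the fewest
    expected fatalities. `outcomes` maps an action name to a
    predicted fatality count (hypothetical inputs)."""
    # min() with key=outcomes.get returns the action (dict key)
    # whose associated fatality count is smallest.
    return min(outcomes, key=outcomes.get)

# Trolley-style example: stay the course (five at risk) or divert (one).
decision = choose_action({"continue": 5, "divert": 1})
print(decision)  # divert
```

The unsettling part is not the arithmetic, which is trivial, but the fact that encoding any such rule means the programmers have decided in advance which life the machine will take.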

Is it possible to “win” a lose-lose situation?

Programmers and automakers alike take a two-pronged approach to resolving situations like this:

  1. Advanced sensors: With advanced sensors spread throughout the car, programmers hope to give autonomous vehicles more time to analyze a situation and develop a satisfactory resolution. Imagine the trolley problem again – if the observer had a mile or more to act, perhaps he or she could take actions beyond simply pulling the lever.
  2. Early identification: The fact that many of these hypothetical situations are being identified as potential programming hazards gives artificial intelligence experts a good deal of time to work toward a resolution. Understanding that a challenging scenario exists is the first step to resolving it.

While vehicle manufacturers hope that self-driving cars will reduce the number of collisions on the road, they might simply shift liability. If an autonomous vehicle causes a crash, who is at fault? The owner of the vehicle? The manufacturer? The programmer? These can be complex legal proceedings that require skilled guidance.
