Imagine the following case: A self-driving car with one occupant drives down a small one-way road in the city, and out of nowhere a child crosses the street. There is not enough time to brake and no space to avoid a crash by steering. The only two possibilities the car has are either to hit the child or to swerve and hit the wall. To make the dilemma even harder, assume that both decisions lead to the certain death of either the child or the passenger. Which decision would you make? Which one is ethically correct? Is either of them?
With the evolution of ever more autonomous machines, and self-driving vehicles in particular, many problems will be solved, but new ones will arise at the same time. An autonomous vehicle makes many decisions for us: when to accelerate or brake, when to steer, when to indicate, and when to change lanes. Combined with powerful sensors and a sophisticated decision-making system, these capabilities enable cars to act, on average, much better and faster than humans. But as the saying goes: “With great power comes great responsibility.” This especially applies to intelligent machines like autonomous cars, which can only act as their programmers told them to. There are not only technical and legal problems that self-driving cars, or rather their manufacturers, will have to face; there are far more uncomfortable questions they are going to have to answer.
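The point that such a car can only act as its programmers specified can be illustrated with a toy, rule-based decision step. This is a minimal sketch for illustration only; all names and thresholds (Obstacle, decide_action, BRAKING_DISTANCE_M, MIN_CLEARANCE_M) are invented and not part of any real vehicle software:

```python
# Purely illustrative sketch of a hard-coded decision rule.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float            # distance to the obstacle in metres
    lateral_clearance_m: float   # free space for an evasive manoeuvre

BRAKING_DISTANCE_M = 30.0  # assumed minimum distance to brake safely
MIN_CLEARANCE_M = 2.0      # assumed minimum space needed to steer around

def decide_action(obstacle: Obstacle) -> str:
    """Return the action a simple rule set would take."""
    if obstacle.distance_m >= BRAKING_DISTANCE_M:
        return "brake"
    if obstacle.lateral_clearance_m >= MIN_CLEARANCE_M:
        return "steer"
    # Neither rule applies: the dilemma from the introduction.
    # Whatever is returned here was fixed by a programmer in advance.
    return "unresolved"

print(decide_action(Obstacle(distance_m=10.0, lateral_clearance_m=0.5)))
```

The last branch is exactly where the ethical question surfaces: some behavior must be chosen at programming time, long before any concrete situation occurs.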
MIT wants to dig a little deeper into these questions and asks people to state their choices for scenarios like the one above. In their Moral Machine you are asked to make these hard choices yourself.
Clearly the whole discussion centers on very extreme situations which will hopefully never happen, but we cannot rule them out. What we can do, however, is try to avoid them, and that will clearly be the approach manufacturers take.