How should a self-driving car react when its passenger is at risk?
In "The ethical dilemma of self driving cars - Patrick Lin" linked below by TED-Ed poses the question if you were a programmer deciding what to crash into between an obstacle which has a high likely hood of death to the rider of the self driving car, a motorcycle which would most likely kill its rider, and a SUV which is a middle ground between the two.
I believe there is one really important detail that needs to be considered: who is consenting to self-driving cars being on the road? It isn't the motorcyclist, and it isn't the person in the SUV. The only parties who have actually agreed to let an AI on the road are the company that made the car and the people riding in it. This leads to another question: is it okay to decide that you are going to risk other people's lives to protect your own? I don't believe that it is, and thus I believe that the self-driving car should run into the obstacle, because its rider and designer do not have the right to run into anything else.
I do not believe that this is the choice the companies will make. They are trying to maximize their profits, and a car that is safer for its rider sells better than one that is ready to sacrifice them.