> I think it's an open question how nice society will be to participants unable to retaliate.
I suspect that the implied liability carried by the first batch of commercially sold self-driving vehicles will lead to extensive sensor logs and 360° dash cam recordings. In the event a human driver tries to game a self-driving vehicle and causes an accident, that data logging will supply far more evidence than traffic courts and insurance companies have ordinarily dealt with, and will likely lead to new legal precedents.
If the LIDAR logs show a human driver recklessly accelerating toward the self-driving car (presumably assuming it would get out of the way), and the correctly chosen path of least damage was to stay put because of merging cars on either side, then most attorneys and insurance companies will find it hard to challenge sub-second LIDAR data that can be reconstructed as an overlay on the video log.
One potential development of this push for self-driving vehicles that might catch us by surprise is ubiquitous, extensive data logging in vehicles, both human- and computer-driven. To sell the tech at mass scale, all sorts of sensors must become much cheaper than they are even today, and they won't necessarily be limited to computer-driven cars. Just as cell phone cameras made solid-state imaging and bulk photo and video storage cheaper than the mass market imagined possible only 30 years ago, and people now put those camera sub-assemblies to use well outside of cell phones, cheap automotive sensors will likely spread beyond their original purpose.
I believe you are right about all of these legal issues, but a very human question remains. If a machine is driving, who decides who dies?
As shitty as the human race is much of the time, people in life-or-death situations regularly show a lack of self-preservation when given the choice between saving themselves or someone else. Human history is littered with countless real and imagined heroes who gave their lives to help the species. How does this relate to self-driving cars?
A real situation these cars will deal with is weighing the potential death of their occupant against that of another person. Humans regularly swerve off the road to avoid hitting pedestrians and animals. Sometimes they die doing this. If a machine goes on trial for killing a pedestrian rather than risking a 10% chance of death for its passenger, what do we do? A human in most such situations would have risked their own life to avoid the certain death of another, but it's unethical to sell (and nobody would buy) a machine that could willingly sacrifice its owner under any circumstances.
I believe this dilemma is unsolvable. This and similar situations will quickly result in laws that require a human to always be present, negating most of the advantages of automated driving.
You are correct; it is unsolvable through technical means. This dilemma requires a cultural shift, one that could take several generations, before people accept sensor- and machine-made decisions that embody the policies of whatever monied interests lobby for the necessary legislative scaffolding. Not that this will prevent companies from trying technical solutions: it may push sensor boundaries much further out, and at far greater granularity, for example. Scanning out to the mechanical limits of braking distance at the currently measured road traction, modeling every detected element, simming projected versus actual paths microsecond by microsecond, and maneuvering ultra-defensively. Not even limiting itself to reacting to unfolding emergencies, but proactively easing out of potentially developing risk, like noticing it might get boxed in by human-driven vehicles (perhaps identified by their erratic steering, without even requiring an IFF-type system) and easing ahead to join a "herd" of self-driving cars.
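The "scan out to the limits of braking distance" idea above can be sketched with back-of-the-envelope physics. This is purely illustrative: the function names, the safety margin factor, and the friction numbers are my own assumptions, not anything a vendor actually ships.

```python
# Illustrative sketch: how far out would a planner need to model the
# world, given current speed and measured road traction?
# Uses the standard kinematic stopping-distance formula d = v^2 / (2*mu*g).
# All names and constants here are assumptions for the sake of example.

G = 9.81  # gravitational acceleration, m/s^2

def braking_distance(speed_ms: float, friction: float) -> float:
    """Minimum stopping distance in meters at speed `speed_ms` (m/s)
    on a surface with friction coefficient `friction`."""
    return speed_ms ** 2 / (2 * friction * G)

def scan_horizon(speed_ms: float, friction: float, margin: float = 1.5) -> float:
    """Sensor range the planner would want: braking distance scaled by
    a safety margin for reaction latency and modeling error (assumed 1.5x)."""
    return braking_distance(speed_ms, friction) * margin

# At highway speed (30 m/s, ~67 mph) on dry asphalt (mu ~ 0.7):
print(round(braking_distance(30.0, 0.7), 1))  # -> 65.5 (meters)
print(round(scan_horizon(30.0, 0.7), 1))      # -> 98.3 (meters)
```

The point of the margin factor is the comment above about granularity: the vehicle cannot just see to its stopping distance, it has to model and re-sim everything somewhat beyond it, continuously, as traction estimates change.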