While lots of companies in Silicon Valley work on self-driving cars, many people remain skeptical of them. The introduction of such cars raises technical, legal and, most importantly, moral problems.
'Self-driving' comes in various levels. A parking assistant may help drivers who otherwise can't park decently. Cars that drive automatically on clear roads, but under the oversight of a driver, can fail because the supervising driver gets bored and stops concentrating on the traffic. Fully autonomous cars, which do not need a driver at all, are still beyond the state of the art.
The Society of Automotive Engineers defines the full driving automation of a vehicle as:
the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver
Autonomous cars depend on sensors for situational awareness. Sensors can fail. They can be fooled by weather phenomena or deliberate attacks. Autonomous cars need immensely complex software to make decisions. Paraphrasing Tony Hoare:
There are two types of computer programs. One is so simple that it obviously contains no errors. The other is so complicated that it contains no obvious errors.
All self-driving cars will have bugs. Their software is extremely complicated, and it will have errors. They will receive updates that bring more errors and unpredictable consequences. Even Microsoft, one of the biggest software companies in the world, recently botched a regular Windows 10 update. It deleted user data on the local disk and in the 'cloud', where it was supposed to be safe. Who will be liable when an autonomous car bluescreens and causes an accident?
Beyond the technical and legal problems, autonomous cars also create moral ones. They need rules to decide what to do in extreme situations. A kid jumps into the path of the car. Should it continue straight on and hit the kid? Should it veer left into a group of chatting seniors? Or to the right, where a policeman is issuing parking tickets? What set of rules should the car's software use to decide such situations?
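Purely as an illustration, with every category and cost below invented for this example, such a 'set of rules' could boil down to a hard-coded harm table that the software minimizes over. The unease of writing such numbers down is exactly the problem:

```python
# Toy illustration only: a crude, hard-coded 'harm table'.
# Every category and cost here is invented for this example.
HARM_COST = {"kid": 100, "senior": 30, "policeman": 60}

# Each maneuver maps to the people it would hit.
MANEUVERS = {
    "straight": ["kid"],
    "left": ["senior", "senior", "senior"],  # the chatting group
    "right": ["policeman"],
}

def choose_maneuver(maneuvers):
    """Pick the maneuver whose victims carry the lowest total 'cost'."""
    return min(maneuvers, key=lambda m: sum(HARM_COST[v] for v in maneuvers[m]))

print(choose_maneuver(MANEUVERS))  # -> 'right'
```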
Researchers asked people around the world how a 'Moral Machine' should behave. Respondents faced thirteen different scenarios, each with two possible outcomes, and had to click on their preferred option. If it is inevitable that one person is killed so another can survive, would you prefer the woman or the man to live on? The older person or the younger one? The passengers of the car, or the pedestrians who cross the road in spite of a red light?

Check out the scenarios and decide for yourself.
Preliminary results of the large but not representative study were recently published in Nature. Respondents preferred to spare strollers, kids and pregnant women the most. Cats, criminals and dogs lost out.

The study found cultural and economic differences. People from Asian countries with a Confucian tradition showed a higher preference for old people to survive. Countries with a Christian tradition preferred younger ones more. People in Latin America showed a stronger preference for sparing women than people in other cultures did. As an older male person in Europe, I am not really comfortable with these results.
Inevitably, the inclusion of such preferences in decision-making machines will at some point be legislated. Would politicians regulate them in their own favor?
The people who took the test disfavored 'criminals'. Should the 'Moral Machine' decisions be combined with some kind of social scoring?
The Chinese government is currently implementing a social credit system for all its citizens. A person's reputation will be judged by a single number calculated from several factors. A traffic ticket will decrease one's personal reputation; behaving well toward one's neighbors can increase it. Buying too much alcohol is bad for one's score, publicly praising the political establishment is good. A bad reputation will have consequences: those with low ratings may not be allowed to fly or to visit certain places.
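As a minimal sketch of the mechanics, with event names, weights and the threshold invented for illustration rather than taken from the actual Chinese system, the whole idea reduces to adjusting a single number and gating privileges on it:

```python
# Hypothetical sketch of a single-number reputation score.
# Event names, weights and the threshold are invented for illustration.
SCORE_DELTA = {
    "traffic_ticket": -50,
    "helped_neighbor": +20,
    "bulk_alcohol_purchase": -30,
    "praised_government": +10,
}

NO_FLY_THRESHOLD = 600

def update_score(score, events):
    """Apply each event's delta to the current reputation score."""
    for event in events:
        score += SCORE_DELTA.get(event, 0)
    return score

score = update_score(700, ["traffic_ticket", "bulk_alcohol_purchase", "helped_neighbor"])
print(score, "may fly" if score >= NO_FLY_THRESHOLD else "grounded")  # -> 640 may fly
```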
The concept sounds horrible, but it is neither new nor especially Chinese. Credit scores are regularly used to decide whether people can get a loan for a house. Today's credit scoring systems are black boxes. The data they work with is often out of date or plain wrong. The companies that run them do not explain how their judgments are made. The U.S. government's No-Fly List demonstrates that a state-run system is not much better.
The first wave of the computer revolution created stand-alone systems. The current wave combines them into new and much larger ones.
It is easy to imagine a future scenario where each person gets a wirelessly readable microchip implant for identification. In a live-or-die situation, the autonomous car could read the chip implants of everyone involved, request their reputation scores from the social credit system, and take the turn that results, in sum, in the least reduction of 'social value'. The socially 'well-behaved' would survive; the 'criminals' would die.
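Combining the two sketches above, the whole dystopian decision fits in a few lines. The implant IDs, the scores and the score lookup are hypothetical stand-ins for infrastructure that, thankfully, does not exist:

```python
# Dystopian sketch: pick the maneuver that destroys the least 'social value'.
# All IDs and scores below are made up; fetch_social_score is a stand-in
# for a lookup against the social credit system.

def choose_maneuver(maneuvers, fetch_social_score):
    """maneuvers maps each option to the implant IDs of the people it would hit."""
    def social_value_lost(option):
        return sum(fetch_social_score(pid) for pid in maneuvers[option])
    return min(maneuvers, key=social_value_lost)

scores = {"kid-01": 900, "senior-07": 650, "criminal-42": 120}
maneuvers = {"straight": ["kid-01"], "left": ["senior-07"], "right": ["criminal-42"]}
print(choose_maneuver(maneuvers, scores.get))  # -> 'right': the 'criminal' dies
```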
Would we feel comfortable in such a system? Could we trust it?