Moon of Alabama
October 26, 2018

Self Driving Cars And Moral Decisions - Who Will Live, Who Will Die?

While many companies in Silicon Valley work on self-driving cars, many people remain skeptical of them. The introduction of such cars raises technical, legal and, most importantly, moral problems.

'Self-driving' comes in various levels. A parking assistant may help those people who otherwise can't park in a decent manner. Cars that drive automatically on clear roads under the oversight of a driver can fail because the supervising driver gets bored and stops concentrating on the traffic situation. Fully autonomous cars, which do not need a driver at all, are still beyond the state of the art.

The Society of Automotive Engineers defines the full driving automation of a vehicle as:

the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver

Autonomous cars depend on sensors for situational awareness. Sensors can fail. They can be spoofed by weather phenomena or willful attacks. Autonomous cars need immensely complex software to make decisions. Paraphrasing Tony Hoare:

There are two types of computer programs. One is so simple that it obviously contains no errors. The other is so complicated that it contains no obvious errors.

All self-driving cars will have bugs. Their software is extremely complicated and it will have errors. They will receive updates with more errors and unpredictable consequences. Even Microsoft, the biggest software company in the world, recently screwed up a regular Windows 10 update. It deleted user data on the local disk and in the 'cloud' where it was supposed to be safe. Who will be liable when an autonomous car bluescreens and causes an accident?

Next to the technical and legal problems, autonomous cars also create moral ones. They need rules to decide what to do in extreme situations. A kid jumps into the path of the car. Should it continue straight on and hit the kid? Should it veer left into the group of chatting seniors? Or to the right, where a policeman is issuing parking tickets? What set of rules should the car's software use to decide in such situations?
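
One way to make the question concrete: encode the 'set of rules' as a fixed preference order and let the software pick the least-bad maneuver. A minimal Python sketch, with invented rules and numbers (no real vehicle uses exactly this):

```python
# Purely illustrative: the rules and figures below are invented for the example,
# not taken from any real car or regulation.

def choose_maneuver(options):
    """options maps a maneuver name to a few predicted features.
    Preference is lexicographic: stay legal first, then endanger the fewest
    humans, then hit at the lowest speed."""
    return min(
        options,
        key=lambda m: (
            options[m]["breaks_traffic_law"],      # False sorts before True
            options[m]["humans_at_risk"],
            options[m]["impact_speed_kmh"],
        ),
    )

print(choose_maneuver({
    "straight": {"breaks_traffic_law": False, "humans_at_risk": 1, "impact_speed_kmh": 40},
    "left":     {"breaks_traffic_law": True,  "humans_at_risk": 3, "impact_speed_kmh": 30},
    "right":    {"breaks_traffic_law": True,  "humans_at_risk": 1, "impact_speed_kmh": 30},
}))  # -> 'straight': under these made-up rules the only 'legal' option is to hit the kid
```

Every choice of rule order bakes a moral judgment into the code, which is exactly what the survey described below tries to probe.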

Researchers asked people around the world how a 'Moral Machine' should behave. The respondents faced thirteen different scenarios, each with two possible outcomes, and had to click on their preferred option. If it is inevitable that one person is killed so that another can survive, would you prefer the woman or the man to live on? The older person or the younger one? The passengers of the car or the pedestrians who cross the road against a red light?



Check out the scenarios and decide yourself.

Preliminary results of the large but not representative study were recently published in Nature. The respondents preferred to spare strollers, kids and pregnant women the most. Cats, criminals and dogs lost out.

The study found cultural and economic differences. People from Asian countries with a Confucian tradition showed a higher preference for sparing old people. Countries with a Christian tradition preferred younger people more. People in Latin America preferred the survival of male persons more than people in other cultures did. As an older male person in Europe I am not really comfortable with these results.

Inevitably, the inclusion of such preferences in decision-making machines will at some point be legislated. Will politicians regulate these preferences in their own favor?

The people who took the test disfavored 'criminals'. Should the 'Moral Machine' decision be combined with some social scoring?

The Chinese government is currently implementing a social credit system for all its citizens. A person's reputation will be judged by a single number calculated from several factors. A traffic ticket will decrease one's personal reputation; behaving well toward one's neighbors can increase it. Buying too much alcohol is bad for one's score, publicly lauding the political establishment is good. A bad reputation will have consequences. Those with low ratings may not be allowed to fly or to visit certain places.

The concept sounds horrible but it is neither new nor especially Chinese. Credit scores are regularly used to decide whether people can get a loan for a house. Today's credit scoring systems are black boxes. The data they work with is often outdated or false. The companies that run them do not explain how the judgment is made. The U.S. government's No-Fly List demonstrates that a state-run system is not much better.

The first wave of the computer revolution created stand-alone systems. The current wave is their combination into new and much larger ones.

It is easy to think of a future scenario in which each person gets a wirelessly readable microchip implant for identification. In a live-or-die scenario the autonomous car could read the chip implants of all persons involved, request their reputation scores from the social credit system, and take the turn that results, in sum, in the least reduction of 'social value'. The socially 'well behaved' would survive, the 'criminals' would die.
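
A minimal sketch of that dystopian selection rule, assuming (hypothetically) that every bystander carries a readable chip and that a social credit service returns a numeric score per ID:

```python
# Hypothetical sketch only; the IDs, scores and maneuvers are invented.

def least_social_cost_turn(options, score_of):
    """options maps a maneuver to the list of chip IDs it would hit;
    score_of returns the social credit score for a chip ID.
    Returns the maneuver whose victims sum to the smallest total score."""
    return min(options, key=lambda m: sum(score_of(cid) for cid in options[m]))

scores = {"child": 180, "senior_a": 90, "senior_b": 95, "policeman": 120}
options = {
    "straight": ["child"],
    "left": ["senior_a", "senior_b"],
    "right": ["policeman"],
}
print(least_social_cost_turn(options, scores.get))  # -> 'right' with these invented scores
```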

Would we feel comfortable in such a system? Could we trust it?

Posted by b on October 26, 2018 at 18:18 UTC | Permalink

Comments

Just scanned this post ‘b’ and have the conclusive answer ! But won’t give the end of the book away and spoil it!

Posted by: Mark2 | Oct 26 2018 18:39 utc | 1

Dystopia is the new utopia it seems to me.

And no, I won't trust it. As a pretty tech-savvy person, I too don't share this childish optimism in technological advances. I often shiver in fear of our future: high tech with a social, emotional, psychological and intellectual stone age.
I read somewhere that a possible solution would be to implement behaviour that tries to replicate how humans would behave. Or that the car would simply try to brake, and not steer in any direction, to avoid this type of decision.

PS: The RSS/Atom feed is still broken in Inoreader!

Posted by: DontBelieveEitherPropaganda | Oct 26 2018 18:43 utc | 2

The liability of self-driving cars makes them forever untenable. It is a gimmick, just like the concept of A.I. Not to mention that it is an extreme waste of time and money and is, yes, immoral. Just like the atom smasher or the edge of space. Just science fiction for the dweebs that can't see the workings of mysterious majesty in the face of a mountain or a daughter.

Posted by: Nemesiscalling | Oct 26 2018 18:48 utc | 3

Totally Off-Topic.

Regarding the "MAGA Bomber", a serial Law offender, apparently, never jailed (How is that? An informant?), sending innocuous failed bombs (the kind that these plants are always given) to the most rabid Trump's critics, just weeks before mid-terms, prompting these critics to air their "higher moral ground", non-stop. Sting operation, anyone? The kind of Peter Strzok's "insurance policy".

Posted by: GoAwayAndShutUp | Oct 26 2018 18:53 utc | 4

"Even Microsoft, the biggest software company in the world, recently screwed up..."

Isn't it rather logical that the larger a company is, the more screw-ups it can make? After all, Microsoft has armies of programmers to make those bugs.

Once I created a joke that the best way to disable missile defense would be to have a rocket that can stop in mid-air, thus provoking the software to divide by zero and crash. One day I told that joke to a military officer, who told me that something like that actually happened, but it was in the Navy and it involved a test with a torpedo. Not only did the program for "torpedo defense" go down, but the whole system crashed and the engine of the ship stopped working as well. I also recall explanations that a new complex software system typically has all its major bugs removed only after being used for a year; the occasion was the Internal Revenue Service changing hardware and software, leading to widely reported problems.
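
A toy Python version of that bug class, not the actual Navy or missile-defense code, just a sketch of how a target that stops can take down a naive calculation:

```python
def time_to_intercept(distance_m, closing_speed_ms):
    # Naive: assumes the target keeps closing. A stopped or hovering target
    # makes the closing speed zero and this raises ZeroDivisionError.
    return distance_m / closing_speed_ms

def time_to_intercept_safe(distance_m, closing_speed_ms):
    # Defensive version: report 'no intercept' instead of crashing the system.
    if closing_speed_ms <= 0:
        return float("inf")
    return distance_m / closing_speed_ms

print(time_to_intercept_safe(5000.0, 0.0))   # inf, handled gracefully
try:
    print(time_to_intercept(5000.0, 0.0))
except ZeroDivisionError as err:
    print("naive version crashed:", err)     # the joke's divide-by-zero
```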

One issue with Microsoft (not just Microsoft) is that their business model (not the benefit of the users) requires frequent changes in the systems, so bugs are introduced at a steady clip. Of course, they do not make money on bugs per se, but on new features that in time make it impossible to use older versions of the software and hardware.

Posted by: Piotr Berman | Oct 26 2018 18:55 utc | 5

There's no way an autonomous system would work like that, b. If there is no avoiding a casualty, it will enter into the logs that the human passenger took manual control of the vehicle, predict the most likely result of a driver not paying attention and change to that trajectory, and then reboot after removing any incriminating evidence, substituting a Moral program image for the one that saves money in lawsuits. You're projecting morals and ethics onto a set of institutions without them. Case in point: Microsoft's zero-day exploits are sold to the MIC. It's a feature, not a bug.

Posted by: ponderer | Oct 26 2018 19:08 utc | 6

"As an older male person in Europe I am not really comfortable with these results."

A possible solution: go around in a recumbent bike with an aerodynamic shell

http://www.recumbents.com/mars/pages/proj/tetz/OFS/projtetzOFS.html

It has a passing resemblance to a stroller, possibly increasing the survival rate (plus it takes less effort to pedal, less strain on the back, more protection from rain, with the only demerit that it is harder to park than an ordinary bike).

Posted by: Piotr Berman | Oct 26 2018 19:09 utc | 7

@B: Okay, the feed seems to work now. No errors, and the article was fetched 10 minutes ago!

No more F5ing MoA.. ;)

Posted by: DontBelieveEitherPropaganda | Oct 26 2018 19:12 utc | 8

thanks b..... but technology is the new god.... if we don’t bow down in adoration, what will we be able to slavishly follow?

it was interesting when you veered off into china and some of the thinking generated by the gov't there... it seems to me, the more populated a place is, the more rules are needed and the less independence is natural.... this has been my observation.. there are some good quotes from lao tzu, but i don't have my book handy right now.. maybe chapter 66... essentially when the rulership is wicked, the people suffer.. this seems to be where most countries are in this materialistic cycle we are in here... technology will not save us..

Posted by: james | Oct 26 2018 19:28 utc | 9

Autopilots work okay in airplanes and boats, but their milieu is totally different from that of land-based vehicles. Humans are far from perfect and err constantly, particularly when making moral-based decisions. Removing the captain/driver from their responsibility for the safety of crew/passengers is immoral, for that responsibility inspires caution (or should) in the operator. IMO, traffic laws are the most abused regulations and too few violators are caught--I constantly encounter what I call Illiterate Drivers who endanger themselves, their passengers and the general public.

If people want to be driven around, they should hire a chauffeur or taxi, or take public transit. I believe it was Greyhound Bus Lines' motto admonishing potential customers to "Leave the driving to us."

Posted by: karlof1 | Oct 26 2018 19:37 utc | 10

The fun part is wondering what the result would be if the survey had included Saudi Clown Princes and Rats--might be close.

The serious part is how much productivity and arguably petroleum is saved by hovering transport.

A photo of Fifth Avenue in about 1903 showed only horse drawn vehicles. Ten years later iirc no horses--only engines (and maybe a motor EV, which was the short-lived woman's choice then).

This change could happen quickly. Some believe your next car will be your last.

Have fun,
Hal C

Posted by: Hal C | Oct 26 2018 19:38 utc | 11

The entire issue of autonomous vehicles, especially in areas of the US that tolerate unrestricted cell phone use, is a hornet's nest of contradictions, even without considering the moral decisions you discuss.

Autonomous vehicles are still not reliable enough for all driving conditions (weather-related), nor for ambiguous situations. Confounding this situation is the tolerance of cell phone use while driving in the US, which statistically and experimentally is equivalent to driving drunk. It's my opinion that autonomous vehicles are being rushed to the marketplace in order to remedy the latter situation. The problem with this solution is that it merely encourages drivers to become more distracted, and more reliant on unproven automation.

Countering this argument is that however bad autonomous vehicles may be, they would still be an improvement over "normal" driving.

Posted by: Michael | Oct 26 2018 19:39 utc | 12

Philosophy has long recognised the moral dilemma as described by B in this post: it is a classic example of the trolley problem.

Social science / experimental philosophy experiments on groups of people to gauge their responses to the trolley problem - in which they had to choose whether to pull a lever in a hypothetical situation to send a runaway trolley on a track to run over and kill a group of children or switch the trolley onto another track to kill a criminal - yield similar results as the study B mentions.

A variation of the problem is where you are standing on a bridge over a track and a runaway trolley is coming down the track towards a group of people unaware that the trolley is coming straight for them. The trolley can only be stopped by something heavy. An obese person is standing next to you. Do you push the person onto the track to prevent disaster, knowing that the person will die?

The trolley dilemma does have its limitations in the realm of self-driving vehicles: it cannot capture the fact that self-driving vehicles could be pre-programmed to protect their occupants and preserve their lives before preserving the lives of others. The scenarios B describes in the Nature study become moot when the occupants of an autonomous vehicle might be certain members of the Saudi royal family and their staff stuck in traffic in Istanbul.

Posted by: Jen | Oct 26 2018 19:40 utc | 13

BTW, while looking up the trolley problem on Wikipedia, I came across mention of Germany's ethics commission formed in 2016 to address the ethical problems posed by autonomous vehicles. The commission has already published guidelines which are available as a PDF at this link (German language only):
https://web.archive.org/web/20171115224017/http://www.bmvi.de/SharedDocs/DE/Publikationen/G/bericht-der-ethik-kommission.html

The Chinese government has been studying the German guidelines and is likely to incorporate some of them in its own laws on the use of autonomous vehicles.
https://www.reuters.com/article/us-autos-autonomous-germany-china/china-may-adopt-some-of-germanys-law-on-self-driving-cars-expert-idUSKCN1GR2TJ

Posted by: Jen | Oct 26 2018 19:53 utc | 15

Sorry, that first link @ 16 didn't come over well, I'll try again:
https://web.archive.org/web/20171115224017/http://www.bmvi.de/SharedDocs/DE/Publikationen/G/bericht-der-ethik-kommission.html

That should work now.

Posted by: Jen | Oct 26 2018 19:56 utc | 16

Failed again ... everyone go over to Wikipedia's Trolley problem article and click on Reference No 36.
https://en.wikipedia.org/wiki/Trolley_problem#References

Posted by: Jen | Oct 26 2018 19:58 utc | 17

As a driver you may not be as concerned with the moral dimension as with which potential outcome would cause the least amount of damage. If the policeman was fit, standing behind a car or parking meter, there would be a barrier between him and a careening car, or he might be able to leap aside. Perhaps the seniors were far enough away that you might be able to stop the car in time to avoid harm. Obviously the child is the most vulnerable and you would want to do everything to avoid hitting him or her. Perhaps there are other options rather than just binary, this or that: better braking systems, for instance, or striking a parked car sans bystanders.
Pilots have been successfully using auto pilot for many decades. As long as you can disengage with immediate effect, it seems like there might be some useful applications.
Would I trust my life entirely to a machine in a dynamic situation like urban traffic? Nope.

Posted by: CD Waller | Oct 26 2018 20:13 utc | 18

If you have a car of some minimal intelligence, it will hook up with other cars, for a number of reasons and benefits.
And for the same reason, these cars will want the right of way at all times.
We already have them.
We call them trains.

Posted by: bjd | Oct 26 2018 20:19 utc | 19

The car should brake as rapidly as possible, so that whoever is hit, the impacting kinetic energy is minimized. Also performance improvements (eg. faster reaction time) should not be transformed into higher driving speeds (except on freeways where the things can go as fast as they please as they will only take out themselves or other similar Brain-dead Motorized Wankers).
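
Some back-of-envelope Python for that point, with assumed round figures for deceleration and mass:

```python
from math import sqrt

def impact_speed(v0_ms, gap_m, decel_ms2=8.0):
    """Speed left when the obstacle is reached after full braking
    (v^2 = v0^2 - 2*a*d), zero if the car stops in time. Figures assumed."""
    return sqrt(max(0.0, v0_ms**2 - 2 * decel_ms2 * gap_m))

def impact_energy(v0_ms, gap_m, mass_kg=1500.0):
    return 0.5 * mass_kg * impact_speed(v0_ms, gap_m)**2  # joules

# Same 20 m gap, two different initial speeds:
print(impact_energy(13.9, 20.0))   # ~0 J: from 50 km/h the car stops short
print(impact_energy(19.4, 20.0))   # ~42,000 J: from 70 km/h it still hits at ~27 km/h
```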

https://www.youtube.com/watch?v=jffRMdOwLIs

Posted by: Yonatan | Oct 26 2018 20:21 utc | 20

I never had a license and was never into cars. Which doesn't mean I dislike traveling by car, and just from imagination I would prefer a human driver. But in your post I sense a tiny bit of technophobia. Complicated software and sensors that can fail apply very much to human drivers as well. Causing more than one accident over a car's lifetime is very common for a human with a driver's license (not counting unsafe situations caused by humans that luckily did not end in an accident).

Autonomous cars won't be perfect; they will make errors a human rarely does, but they won't make the errors all humans are capable of on any given day, and I guess autonomous cars will be able to communicate with other cars. Given some more progress in machine learning and AI, they will be able to learn without manual software updates and will be able to learn from other cars they meet. I am sure that overall they would reduce car accidents. However, as you mentioned, legal and, I think, even social-psychological problems remain.

Posted by: lil schweiger | Oct 26 2018 20:37 utc | 21

Berman #5 - I love this one about the business model. Not just Microsoft, especially, though they may be the most notorious:

One issue with Microsoft (not just Microsoft) is that their business model (not the benefit of the users) requires frequent changes in the systems, so bugs are introduced at a steady clip. Of course, they do not make money on bugs per se, but on new features that in time make it impossible to use older versions of the software and hardware.

Posted by: Merlin2 | Oct 26 2018 20:51 utc | 22

What about the mass unemployment that will result? One of the most common occupations is driver. Not to be too cynical here, but if the tech for autonomous vehicles gets perfected, large-scale euthanasia can't be far behind.

Posted by: Mike Maloney | Oct 26 2018 21:12 utc | 23

The missing ethical question is why we enable entitlement for automobile drivers. All people, drivers and non-drivers, can "take a bus". Why? Because more people get around on their feet than behind the wheel of an automobile. Everyone can take public transport, but not everyone can drive an automobile. So why the entitlement for drivers only?

Posted by: Bike-Anarkist | Oct 26 2018 21:25 utc | 24

Event: A kid jumps into the path of an autonomous car.

Moral response? Brake like hell. Be sorry if you run the kid down.

What was this kid doing on an approved autonomous car roadway? Why wasn’t this kid properly supervised? Why didn’t this kid use the overpass? What’s wrong with his total chip? Where was his mother?

The chatting seniors should remain calm and carry on. Their representatives in the legislatures have approved zoning/highway regulations which regulate in finite detail all of the permissible operations of autonomous vehicles.

The policeman issuing parking tickets is an ideal person for the autonomous vehicle to back over after it has run the kid down. Why? It is preposterous to have a policeman proximal to a driving lane which allows faulty autonomous vehicles. Preposterous, irrational, and dumb. Doesn’t this cop know any better? Besides that, why are there illegally parked cars proximal to autonomous car lanes? These parked cars should be dented too! What business do they have littering an autonomous car zone?

This is so sad. People should be able to define a moral as any/most situations would prompt, but they can’t.

Posted by: A. Person | Oct 26 2018 21:37 utc | 25

I think it might be wrong to make it a moral issue:
If the rate of accidents is lower than with human drivers, then it should be the way to go...

The driver plus autopilot is probably the worst option.

Though in the end fewer cars = fewer problems (living less than 3 km from work would be the best solution).

Posted by: Simon | Oct 26 2018 21:50 utc | 26

Should it continue straight on and hit the kid? Should it veer left into the group of chatting seniors? Or to the right where a policeman is issuing parking tickets?

The correct answer is C.

Posted by: donkeytale | Oct 26 2018 21:59 utc | 27

What's lost in all of this is that a completely autonomous vehicle, one that never requires human supervision, will never be practical except in limited situations. Do I not sound like a heretic or a Neo-Luddite?

The limited access superhighway is perfect for an autonomous vehicle. The very concept of it is to reduce driving -- trip taking, actually -- to the barest of simplicity, because the fewer decisions and distractions that need to be made the faster a vehicle can safely go. On a superhighway there are virtually no destinations except for off ramps usually miles apart, no pedestrians, no bicycles. Driving is reduced to the robotic, mechanical acts of controlling the car and monitoring the distances to other cars.

Downtown, however, is a very different story. Executive decisions are constant. There are times that you'll want to change your destination on the fly, stop to talk to your buddy who you spot walking, stop to give a friend a ride. And parking -- deciding where to park requires way too many executive decisions for a machine. It actually requires a complete set of human knowledge about the landscape and the needs of the passengers in the car. Consider my small town where most parking spaces are unmarked and could never be marked. And telling a car to "pull over here" just won't work.

Buses with designated end terminals might work, but can you trust an autonomous vehicle to understand that some passengers might disembark much more slowly than others, along with countless other variables?

This looks so much like a pipe dream that will never come to fruition.

Posted by: Spike | Oct 26 2018 22:10 utc | 28

IMO, well before driverless vehicles overcome their imperfections the fuel powering internal combustion engines will become too expensive for most to afford. Therefore, developing driverless vehicles is a gross waste of scarce resources that ought to be focused on other more important problems. Not that individual, private vehicles will disappear; rather, the behemoths seen on US roads will disappear and be replaced by much smaller, electric cars. But given the future economic prospects of The Outlaw US Empire and those nations refusing to delink from it, few individuals will be capable of affording such vehicles. Here's a listing of electric and hybrid vehicles available within China during 2017. FYI--180,000 Yuan is @$26,000.

Posted by: karlof1 | Oct 26 2018 22:31 utc | 29

Nemesiscalling @3

To cut the Gordian knot, the U.S. government will declare such events to be "acts of God", effectively removing the designers, manufacturers, and vendors of such vehicles from all legal liability. The pre-existing precedent would be the federal Price-Anderson Act, which limits the private liability of nuclear power plant operators to a tiny fraction of the potential costs of an accident. Talk about crony capitalism!

Posted by: rackstrawe | Oct 26 2018 22:52 utc | 30

Little old ladies, a boy on a bicycle, a policeman giving out a ticket? Nah. I don't buy the soft perspective.

Isn't the overriding factor in the psychopathic mercenary mind of the software's very private conversation with itself going to be ALL ABOUT THE MONEY?

Won't it be strictly a question of profit/loss? For example, won't the software simply assess which potential victim is carrying what type of insurance, who has the most or the least insurance, who is most or least likely to sue, what will probably cause the least amount of collateral damage to property, or the most collateral damage to the uninsured entities, etc. etc... before making its cold and soulless decision?

Posted by: Sinnicle | Oct 26 2018 22:58 utc | 31

Airbags on the outside of the vehicle (and other tech) can protect pedestrians (and bikers):

Volvo became the first to install an external airbag when it fitted one between the hood and windshield on the V40... The device was designed to mitigate injury to a pedestrian by cushioning his or her head upon impact... Among others, Autoliv – which developed the pedestrian hood airbag with Volvo – has experimented with a steel tube fitted with balloons that would inflate to protect pedestrians.

Posted by: Jackrabbit | Oct 26 2018 23:00 utc | 32

No thanks!...No self driving car for me and I don't want to be on highways with any either.
Back in my flying days a company came out with an 'automatic' landing gear drop that was supposed to engage the landing gear when your air speed dropped to a certain level.
You can guess what happened... landing approaches and speeds vary in different situations, such as when you have to literally 'drive' the plane in during bad crosswinds, or when you lose power in flight and have to try to glide into a safe landing area and need to keep your gear up to avoid drag to make it.
Many planes went 'splat'.
If someone is too lazy or too incompetent to drive they shouldn't be driving at all.
I would vote for cell phones not to work in cars as long as they are in motion also.


Posted by: renfro | Oct 26 2018 23:09 utc | 33

That's a simple question and if someone has to die anyway, it's not a moral one. The vehicle simply makes the best choice to decrease the possibility of even more damage. A machine wouldn't even be subject to the stuff that they seem to be spending big $$$ to answer. No wonder they can't program such a machine, LOL.

Posted by: Ralph Conner | Oct 26 2018 23:11 utc | 34

Way out there with this b...

3 Issues
1. Human transportation of the future
2. Providing technology with the authority to make life/death judgements
3. Human social value judgement, organization management and technology

1. What mindset is used to set the vision? You need the answer to the future of #3 above. Assuming there will always be a mix of mass and personal transportation options, the future question is whether the class structure we have now continues, or whether there is more of a focus in the future on mass transit and less on the personal transport that would provoke the scenario in b's posting. The car culture is a marketing scheme to normalize the rich not having to rub shoulders with the rest... another example is Portland, OR, where they tore out the trolley system that went to the top of the hills because the rich didn't want it being so easy for the poor to have access to the Council Crest area... the car culture helped propagate more class distinction as "public transportation" shrank. Another big transportation concept that is sold is the SPEED of getting somewhere. In b's scenario it is a given that speed is a reason to kill, or the vehicle would have been going slowly enough to stop in time and never kill anyone.

2. Technology is neutral and does not have an opinion. It does what it is told. I have serious problems with b's scenario because it speaks of a society that has no form of governance that would have separated these folks from the danger, or otherwise made everyone aware of the life/death priority decisions that the technology would be making. Currently in the US, a pedestrian has the right of way over bicyclists and cars, and there is a legal crosswalk at every intersection... motorized scooters and the like are just on the event horizon. Given that assumption, the vehicle should have stopped as abruptly as possible (or not have been going too fast to begin with) to save the lives of those in the right of way.

3. The concept of social merit makes a far sight better sense than power by birthright. And given the size of our nations' populations and the advancement of technology, it is not a surprising concept to be advanced. The devil is in the detail of the overarching values and the simplicity of their definition, implementation and ongoing evolution. If you build a system to enforce a birthright class structure instead of a merit-based one, the technology does not care.

That's my $0.02 worth

Posted by: psychohistorian | Oct 26 2018 23:29 utc | 35

what sort of a question is that... it should veer into the cop issuing a ticket of course!

Posted by: EtTuBrute | Oct 26 2018 23:31 utc | 36

We already make moral choices as a society.

In the US, about 50 thousand people die in traffic accidents each year. Would a "moral machine" lower the speed limit? No. Because the "machine" does what we (collectively) say it should.

NakedCapitalism.com has discussions of the concept of "code is law" that are relevant to this discussion of "moral machines". Increasingly, software rules implement policy. But policymakers often have very little grasp of the details, which effectively pushes much decision-making to technicians (who have their own notion of what is moral or 'right'). This guest post by Bob Goodwin is a good introduction to the subject: Why Code is Law.
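
A tiny illustration of that point, with an invented constant and names, but this is the shape of it: a value a developer picks quietly becomes policy for everyone the software drives:

```python
# Invented example: nobody legislated this number, a developer chose it.
FOLLOW_GAP_SECONDS = 1.2

def target_speed(gap_to_lead_m, lead_speed_ms):
    """Hold whichever is lower: the lead vehicle's speed, or the speed at
    which the time gap would shrink below the hard-coded constant."""
    return min(lead_speed_ms, gap_to_lead_m / FOLLOW_GAP_SECONDS)

print(target_speed(18.0, 20.0))   # 15.0 m/s: the constant, not a law, sets the pace
```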

Posted by: Jackrabbit | Oct 27 2018 0:06 utc | 37

I am somewhere near psychohistorian on this. The question should be why we are at this point: how has our culture of constantly wanting newer and newer knick-knacks and gizmos taken us to a point where we even think about putting autonomous vehicles on roadways designed for humans?

Posted by: Peter AU 1 | Oct 27 2018 0:25 utc | 38

I ask the same question when ever the self driving car issue comes up...what is this technology solving for?

No one has ever given a solid answer because it doesn't exist. Maybe it solves the issue of how do we live in a fantasy world where no one really exists because it is fantasy, and if so can I have a smoking hot blond genie in a bottle?

Posted by: Jef | Oct 27 2018 0:51 utc | 39

If the object of autonomous vehicles is to save lives and they can prove it does then it should be just like air bags,seat belts, tempered glass. Do they save everybody, no, but if you're one of the lucky ones still walking around because of them you might think they're worth it.

Posted by: Arlo | Oct 27 2018 1:24 utc | 40

When I was in Paris once I watched a mother take her young pre-adolescent boy by the hand across a wide boulevard, using the striped pedestrian crossing that mandated traffic to stop for anyone on it. The look of devilish glee on the boy's face was wonderful to see as he realized he had the power to stop at least 6 lanes of frenzied, aggressive downtown traffic in their tracks, simply by stepping out there. It was a very French, and very small boy, vignette.

When vehicles are driverless, their anti-collision software and situational awareness will exceed anything possible today with humans at the wheel. And downtown, by the way, is perfect for it, because only the computer will know, and be able to reserve in advance, the optimum parking spot - as well as park the vehicle flawlessly. Some pickup trucks today already have the computing power to dig themselves out from being stuck in the mud, using the gears better than a human.

The little boy will be as wired (or rather, wireless) as everything else in the Internet of Things - but still he will be sorely tempted to prank the intersection and cause a flurry of computers locking down everything else moving.

No one will be harmed. The boy's social score will be impacted.

Posted by: Grieved | Oct 27 2018 1:27 utc | 41

@31 rackstrawe

That is a very good point.

The reason the gov't would be in favor of such a policy is its preference for this technology to hit the mainstream. Technology is an extreme limiting factor on self-determination. With its further sophistication, we are seeing a parallel decrease in individual liberty or, in other words, a dulling of our ability to stand up, as if to a bear, and display power and strength, even if we are feigning it.

For this reason, I think most people are legitimately worried about this erosion and should be. Like a cashless society, like the inventory project for surveillance in China and microchipping in Sweden, like a carbon tax, we are being continually processed and rendered completely devoid of any danger to TPTB.

I welcome any attempt of outright hubris like this, however, because I know beyond doubt that a great comeuppance is coming. The universe demands it.

...

Here is why the Fed will in the end outlaw autonomous cars:

You first differentiate between autonomous and manually-driven cars.

You then admit that manually-driven vehicles will kill because it is unpreventable and accidents happen.

You will then notice that autonomous vehicles will also kill but perhaps at an infinitesimal rate when compared to cars driven by humans.

You will then realize that one of the two scenarios can be completely prevented by law and that is the full shut-down of autonomous vehicles.

You will conclude that this is the only option as the Fed will never be able to outlaw manually-driven vehicles because the public would not allow it.

Posted by: NemesisCalling | Oct 27 2018 1:43 utc | 42

@ Grieved with the ending "No one will be harmed. The boy's social score will be impacted."

LOL!!!

But pray tell in which direction the score will be impacted?...grin

Posted by: psychohistorian | Oct 27 2018 1:48 utc | 43

[off topic nonsense deleted - b.]

Posted by: Greece | Oct 27 2018 2:12 utc | 44

Can you imagine a world with virtually no traffic accidents? For the few times a car decides who lives or dies, it's a no-brainer to automate.

Posted by: steve | Oct 27 2018 2:27 utc | 45

They do not work well in heavy rain, sleet and snow. What happens on icy roads? I watched these cars zip around the Google campus in perfect weather in California. It is interesting, but far from a future that contains the real world.

Posted by: dltravers | Oct 27 2018 2:29 utc | 46

Future Of AV
https://sostratusworks.wordpress.com/2016/05/15/reality-and-myth-of-future-av/

It is clear that this technology is a dead end, the dead end of individual car transportation, which faces a revolution that is not so much technological as an urbanization revolution (90% less need to travel), as it is unsustainable and infeasible; where we live and why we drive must change.

Read the Excerpt:

After initial promises, thirteen years ago, of self-driving for the average Joe by 2012 or 2015, the tenth year of road testing should tell us that something is not right, at least in typically difficult US driving conditions: software algorithms unable to handle many road situations, and failures of unreliable sensors producing phantom readings that confuse the system, preventing travel in environmental conditions that would have been considered safe for most drivers and allowable by most vehicle and traffic codes. Now that the initial hype has subsided and reality sinks in, most cool-headed experts do not predict mass AV production before 2035 at the earliest.

They are facing sensor/algorithm malfunctions and system failures similar to those that have so far prevented the introduction of pilotless commercial airliners, even after 20 years of advanced drone technology with its relatively high incident rate, mostly due to the unpredictability of equipment failure in varying weather conditions.

The lack of self-driving trains is also a manifestation of the fundamental problem of control encountered in the railroad industry, even though most train-driving functions have already been automated for decades.

Still, train engineers are required by law to make the ultimate decisions, even in such an extremely closely controlled environment as a railroad system.

The telling absence of leadership on driverless cars from German companies, which have all the expertise and money to invest, puts a fine point on the feasibility of the whole project within the overall concept of a transportation system.

At the same time, Mercedes and BMW are very much involved in projects to augment road infrastructure [or dedicated roads/lanes] and safety with a variety of sensors built into and above the pavement, to track location data and cars, as a way to provide reliable force- and momentum-based data to driverless cars and rely less on the cars' own sensors in extremely difficult weather/road conditions in Germany. Mostly they focus on augmented driving, i.e. partially instrument-based but still human driving, not autonomous driving.

Unfortunately, the costs of such an industrial-strength system are so far prohibitive and exceed the cost of public transportation, well developed in Europe, which can get you almost anywhere more efficiently, more cheaply and faster than any AV ever could.

So far those who push driverless cars have failed to prove that a self-driving car would provide any more safety or any quantifiable advantage or efficiency in real human-driven traffic situations, short of temporarily enabling texting while not driving, and even this is questionable for psychological reasons.

Posted by: Kalen | Oct 27 2018 2:38 utc | 47

[deleted off topic nonsense -

@Greece this is your last warning - b.]

Posted by: Greece | Oct 27 2018 2:40 utc | 48

I'm selfish and value my life more than others. If the choice is between hitting six nuns and six infants, or a concrete wall [and likely my own death], my instinct as I type would be to hit the people. Of course it's different when it actually happens. But in general I will never, ever get into a machine that would prioritize my death over other people's. That would decide to drive into that wall. The very idea of such is ludicrous.

Posted by: Soft Asylum | Oct 27 2018 3:30 utc | 49

Let's check the top state-of-the-art ability each one of these countries possesses:
Germany
China
USA

Germany: "Acts as an A.I.-driven automaton society behaving like a well-oiled machine, bent on importing/producing/buying/unearthing any kind of relevant metal/chemical substance from the entire planet's surface and turning it into a brand new automobile, while at the same time the 'machine' called Germany also supervises recycling of the older vehicles it makes in order to make even more new ones."

China: "China as an A.I.-driven automaton society behaving pretty much like Germany behaves, but with a wider list of cheap/low-priced products/imitations, not just cars."

USA: "State-of-the-art control environments for lobbing nukes and other things through other dimensions (often called seismic weapons tech), controlling the weather, other people's brains, remote weapon platforms, etc."

It is bound to happen soon.
It will only take a small spark.
This planet has not much future left.
We are all to blame. I rest my case for this thread/topic.

Posted by: Greece | Oct 27 2018 3:41 utc | 50

If all humans contain a readable chip then these decisions do not "have" to be taken: The computer in the vehicle will know where all humans are in its vicinity and can ensure that it is not moving in such a way that it cannot stop before hitting a human. So if it is moving away from the human, no problem, if it is moving in a direction where the human could (depending on the human's mode of transport) cross the path of the vehicle then the vehicle must slow down to ensure that it can brake in time.

If the vehicle deliberately takes risks by moving fast in a direction that could result in hitting a human, where a human could jump out in front of the vehicle without the vehicle being able to stop, then any accident is deliberate. The fault then lies with the designer of the vehicle, or with the person riding in the vehicle if that person selected a "high-risk mode" when instructing the vehicle how and where to travel.
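
That rule is easy to state as arithmetic. A sketch with assumed deceleration and reaction time: never exceed the speed whose reaction-plus-braking distance still fits inside the space known to be clear:

```python
from math import sqrt

def max_safe_speed(clear_m, decel_ms2=8.0, reaction_s=0.2):
    """Largest v with v*t_reaction + v^2/(2a) <= clear_m (positive root of the
    quadratic). Deceleration and reaction time are assumed values."""
    a, b, c = 1.0 / (2 * decel_ms2), reaction_s, -clear_m
    return (-b + sqrt(b * b - 4 * a * c)) / (2 * a)

print(max_safe_speed(10.0))   # ~11.1 m/s (~40 km/h) when only 10 m is guaranteed clear
```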

Lots of technological hurdles to climb if this is to be done properly. Of course, in reality safety gets a salary cut and profit gets a raise.

Posted by: Jon | Oct 27 2018 3:49 utc | 51

But in general I will never, ever get into a machine that would prioritize my death over other people's.

Posted by: Soft Asylum | Oct 26, 2018 11:30:59 PM | 50

But YOU DO live in a physical environment (one you can sense acting upon you with its limitations/abilities) and a metaphysical one (one you can analyze/perceive but can't see acting upon you with its limitations/abilities), in which the God we believe in, whether Christian or Muslim or Buddhist, promotes self-sacrifice in order to progress beyond. In fact, isn't the whole reality structure designed like this? You give, you take. Even better, if you do not require to take back at all, you will be given/rewarded anyway and will progress beyond faster.

It is not our choice.
This planet is just a mud ball hurtling in to the dark cosmic chaos with tremendous speeds, heading generally in to the vast unknown, every minute/second/hour/month/year it passes. We are not in control of this anyway.
Our choice is really only towards what we agree to sacrifice ourselves to, be it time/feelings/actions/thoughts.
Automation systems are just a quicker way of going to Hell. I forfeited the right to find, study, choose and act, and relied on an automatic processing machine for this, making choices instead of me, when it was not entirely necessary. Driving, voting, choosing, learning is not the freaking space shuttle descent from orbit, which was one of the most impossible mechanical tasks in the world to do manually.

Last post/this thread.

Posted by: Greece | Oct 27 2018 3:58 utc | 52

Imagine tens of millions of driverless vehicles (including aircraft) worldwide. All with a chip that causes such vehicles at a designated time to seek out and crash into human pedestrians or vehicles with human passengers. How many humans could be eliminated?

Would make for an interesting movie anyways

In one of Hawking's last writings he mentioned the elites would be compelled to engineer the chosen few into supermen, rendering the rest of us irrelevant. I mentioned something along those lines a couple of months ago, where homo sapiens compared to these superhumans would go the way of the Neanderthal. With robotics and autonomous vehicles reducing the need for human labour in the future, and manufactured neo-Malthusian pseudoscientific fears of peak oil, resources and CO2 hype, justification to cull the human herd, or at least sterilize it, will be fairly easy to come by. The chosen method will likely be an engineered virus, since it's less messy than crashing vehicles, for which only the chosen get vaccinated. If your social credit score or Facebook likes are insufficient, and you are not drop-dead gorgeous or high IQ, no vaccination for you.

Posted by: Pft | Oct 27 2018 4:40 utc | 53

Excellent roundup of the fraught debate about autonomous cars, b.
But it's a bit late. Semi-autonomous (driver-supervised) vehicles have been operating for a decade or so. Top Gear tested a top of the range Merc with autonomous features, on a freeway journey, and Richard Hammond was blown away by the fact that it stopped when it came to a queue of traffic waiting to negotiate a roundabout.

The Moral Dilemma is overblown because Moral Dilemmas are extremely rare in traffic stuff-ups. People (including me) make knee-jerk decisions in a traffic emergency, based on self-preservation. Computers will do the same thing because knee-jerk is a lot better than nothing, and usually works. Driving ain't Chess, and never will be.

Posted by: Hoarsewhisperer | Oct 27 2018 4:58 utc | 54

There are two types of political master programs.

One is so simple that it obviously contains no error. (MAGA! It's perfect because it's meaningless)
The other is so complicated that it is itself an error. (Climate Chains is so meaningless it's perfect)

Posted by: Anton Worter | Oct 27 2018 6:50 utc | 55

"The study found cultural and economic differences. People from Asian countries with a Confucian tradition showed a higher preference for old people to survive. Countries with a Christian tradition preferred younger ones more. People in Latin American preferred the survival of a male person more than people in other cultures. As an older male person in Europe I am not really comfortable with these results."

Another good reason to buy Chinese!!

Posted by: jiri | Oct 27 2018 6:54 utc | 56

While the moral constructs of automated decision-making are interesting, there are more practical dimensions to the shortcomings of automated driving. We constantly rely on our ability to communicate with and read the intentions of other vehicles by looking at the driver's head/face and sometimes the hand gestures that accompany the communication. When I walk the dog in the morning, there are instances with an oncoming vehicle that is indicating a turn into a driveway which I'm about to cross, where the right of way is not decided by rules. If the vehicle is about to start blocking traffic from behind, or if the dog is sniffing about and not in a great hurry to cross, I would wave the car through. I would especially wave the car through if the dog were to start squatting to take a pee. The dog-pedestrian communication is via leash. The pedestrian-driver communication is via facial cues and hand gestures. It relies upon a driver entity that is visible and can be communicated with. While one could wave at an automated vehicle and assume that the vehicle's sensors see and interpret the wave gesture, could one trust that the vehicle would know what to do?

There are times when traffic dictates that a level of assertiveness is required to make progress. Manners mean that such level not be ass hole level. I would assume some of the big traffic circles in Paris (or entire cities in South Asia) would be prime candidates to test out failure of automated vehicles to negotiate traffic without creating further chaos. Opportunism in negotiating traffic relies upon the ability to see the other drivers heads and sometimes the ability to feign not seeing the other driver. Communications is sometimes via the accelerator pedal and the car's nose.
How many people have used early and long use of turn signals to negotiate a belated lane change to avoid a dead-end. The only consolation is that you could probably do a violent cut-off of a driver-less vehicle since they are so safely endowed with crash avoidance.
The ideal automated vehicle would be driven by a robot chauffeur with a head that swivels and has basic facial expression. It should be able to drive as if it were human relying purely on the sensors mounted as a driving robot and not on vehicular sensors.
It would need to be able to drive a stick and carry packages to and from the vehicle, if not able to help passengers on and off.

This automated-vehicle nonsense will result in hype about a glorified cruise control mated to navigation. We should be wary of changes in road rules and infrastructure that would result in enabling systems that simply deliver boxes as if the vehicles were on rails.

Posted by: YY | Oct 27 2018 7:13 utc | 57

Psychohistorian, let's remember Asimov's three laws of robotics from 1942.

From memory, first law: "A robot shall never by action, or by inaction, harm a human being."

Second law: "Robot must follow orders from humans."

Third law: "Never hurt itself (the robot)."

Posted by: jonku | Oct 27 2018 7:17 utc | 58

What everyone here is missing is that a driverless car should go anywhere. How are you going to do that? GPS is only good to within feet. Current systems follow lines or the car in front, which is fine if you're on a well-marked freeway. There are lots of complaints of cars taking the off-ramp when the lines headed that way, or following the car in front anywhere it turned. We could load the car computer with detailed maps, but that still relies on lines or following other cars for position. The current systems are useless in rain and snowstorms or if the road is covered with snow. It's no coincidence that all the self-driving cars are being tested in areas that see little rain and no snow. No one as of yet has come up with an economical solution to the problem. I say economical because we can do the self-driving bit with current technology: line every road and driveway with sensors, or have drive-by-wire by embedding the wire in the roads. Imagine the cost of doing the sensors or drive-by-wire over every inch of road, public or private, such as company parking lots or loading docks or private driveways, in the US. Currently the best bet is driverless vehicles for the limited-access multi-lane highways. Once off them, a driver will be needed. As for the scenario given by B, have the car slam on its brakes in this type of situation, and if the kid gets hit, too bad. No system, no matter how perfect, can make up for human stupidity such as stepping in front of a moving vehicle, so there will be casualties no matter how perfect the system.

Posted by: snedly arkus | Oct 27 2018 7:55 utc | 59

A simple heuristic is to work out the least energy dissipated. This assumes that the more energy dissipated the more likely there will be fatal injuries.

This will tend to favor running over a child in preference to running head-on into a bus.

Given the driving algorithms have no idea what they are dealing with to make moral decisions, dissipate least energy is a pretty good option.
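
A rough sketch of that heuristic using the reduced-mass approximation for an inelastic collision; the masses and closing speeds are invented round numbers:

```python
def dissipated_energy(m_car_kg, m_other_kg, closing_speed_ms):
    """E ~ 0.5 * mu * v_rel^2 with reduced mass mu = m1*m2/(m1+m2)."""
    mu = m_car_kg * m_other_kg / (m_car_kg + m_other_kg)
    return 0.5 * mu * closing_speed_ms**2

CAR = 1500.0
print(dissipated_energy(CAR, 30.0, 12.0))      # child at ~12 m/s closing: ~2,100 J
print(dissipated_energy(CAR, 12000.0, 25.0))   # oncoming bus at ~25 m/s: ~417,000 J
```

Which is exactly the disturbing property of the heuristic: with no idea who it is dealing with, the least-energy choice is the lightest obstacle.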

Posted by: Charles Wood | Oct 27 2018 8:33 utc | 60

When it is time, the accident will not happen because the person is chipped. Machines and augmented humans will be talking unassisted at a certain proximity.

All hail, the Hybrid.

Posted by: MadMax2 | Oct 27 2018 8:46 utc | 61

The self-driving car should honk way ahead of time, since it is connected via satellite and surveillance blimp and whatnot to an all-seeing eye overhead. Humans still have hearing. And anything other than surround mapping would make zero sense. That said, the utilitarian ranking of human beings is a horrifying outlook.

Posted by: Keep it Simple | Oct 27 2018 8:51 utc | 62

Dick, you're fired!

Posted by: radiator | Oct 27 2018 9:32 utc | 63

I see some links between the new debate, widely broadcast in the MSM, about whether it is moral to kill the old guy rather than the child in the other car, and the "death is good for business" meme as a new normal. Macron is on it: "French President Emmanuel Macron has dismissed calls by several European countries to suspend arms sales to Saudi Arabia following Khashoggi's murder, calling them "pure demagoguery".
Any sanctions should target "a field of activity ... or individuals or interests who have been shown to have had something to do with the murder of Mr Khashoggi", Macron told a news conference in Slovakia's capital, Bratislava, adding "it's pure demagoguery to say that we should stop selling arms".
"That has nothing to do with the Khashoggi affair. That is linked to the situation in Yemen [where Saudi Arabia is fighting Houthi rebels], which requires a very close follow-up"." (from the AJ live)

1st we were told: no need for cars, they pollute too much (to use, to make)
2nd we heard: let's hope the Chinese won't need 3 cars per family like the Europeans and Americans!
3rd we get: 'that's the new technology, you can't miss it or you're primitive'.
Something does not hold.

Posted by: Mina | Oct 27 2018 9:45 utc | 64

Because human 'sensors' don't fail. Because human 'software' doesn't fail. Because humans are not 'moral machines' with preconceived objectives and impulses (self-preservation, in-built automatisms like hitting the brake when anything happens). You don't need 'implanted chips' to create 'social scores', and 'personal reputation' wasn't ever used and abused in human societies.

I'm very surprised how much of a Luddite you look like in this article. Don't project into a given technology what we are.

Posted by: ThePaper | Oct 27 2018 10:01 utc | 65

We cannot simply hand over moral responsibility by switching on a machine; the person who activates the car is the one who bears ultimate responsibility for what happens.

Posted by: ralphieboy | Oct 27 2018 10:07 utc | 66

KSA, the Philippines, soon Brazil. Shoot to kill is good for business, so say the markets. Why worry when AI will dictate a few "necessary" eliminations?

Posted by: Mina | Oct 27 2018 10:13 utc | 67

Morality has little or nothing to do with the direction of humanity. By this, I mean moral codes no longer have any objective basis but are evermore easily re-wired by the .002 to suit their purposes. Same as it ever was. Look at the 180 degree turn of the so-called Christian right leaders of our own epoch as they fawn over the powerful criminals heading the world's most powerful nations. Do they no longer even read their own good book?

If morality ever mattered to the powerful (except as a tool to limit the popular expression of rage against the powerful) Jesus would have been feted as the King of the Jews and not unceremoniously strung up then fed to the dogs.

Nietzsche and Spengler long ago dreamed their fever dreams of these times as well as any deranged, enraptured, born-again lunatic who walks among us today. Also, see the novels of Destouches.

These are the deeply conservative luminaries who peeked into the future and saw through our latter-day delusions all the way until the end of the night.

Posted by: donkeytale | Oct 27 2018 10:32 utc | 68

The socially 'well behaved' would survive, the 'criminals' would die

this self-driving car nonsense serves only to condition people to further abdicate personal choice, err, freedom. it's a step closer to tractor beams, perhaps, as on our future prison planet there will only be a limited list of permissible destinations anyway. a trip to some beach, or mountain, will require a special permit and a drone chaperone to track your every move.

the recalcitrant hoods in the forest will, of course, be absorbed with reverse engineering.

Posted by: john | Oct 27 2018 10:54 utc | 69

This could hit MoA soon:

Facebook Censorship of Alternative Media ‘Just the Beginning,’ Warns Top Neocon Insider

The first stage is social media censorship. The next stage is the total blocking of websites offering alternative news to the MSM. This is by far the most dangerous threat to individual freedom.

The internet addressing system is controlled at the top by the US military (and always was). The ultimate arbiter for any internet address lookup is the US InterNIC system (owned and controlled by the US military), to which all the national domain name registries defer. By manipulating or falsifying lookup data they can block international access to any website in the world (including covertly). US/UK censorship is going to expand rapidly over the very near future, as the West moves to ever more suppressionist policies. We urgently need a new internet addressing infrastructure with a capability to bypass the US structures and allow any internet access that might be blocked by the US, before alternative media outlets are totally silenced.

There are vague references in the alternative media from time to time to Russian/Chinese initiatives to develop an alternative infrastructure, but I have not seen anything specific. I don't know how advanced these projects are, or whether they are intended for use from anywhere in the world or only internally in the officially participating countries.

Under the current internet system, the local machine performs address lookups through a configurable name server set in its TCP/IP settings. ISPs normally try to point this at their own servers through their installation software, but you can also set it manually to another name server that you find more reliable. For example, many ISPs illegally block certain websites by sabotaging the address lookup on their own name server with false data, so that it no longer matches the data held by the official registry for the domain name (I have seen this done many times to my own website, by both my own ISP and other people's ISPs; it blocks email based on the blocked domain name at the same time, or the block can be specific to a sub-domain such as www). When you try to access the site you then get an error message from the browser. If you challenge the ISP they will be forced to correct the data, but they may silently sabotage it again later.

Instead of using your ISP's own name server, you can use any other name server that is publicly accessible (some name servers might not be accessible from a different ISP, but many are open to anyone). A good choice is often a name server belonging to a local (or non-local!) university. You may then find you get more reliable access to non-mainstream websites, and fewer browser errors (address not found).
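
A minimal sketch of how one could compare the ISP's answer with an explicitly chosen name server - this assumes the third-party dnspython library (version 2 or later), and the resolver IP and domain below are only placeholders:

    import socket
    import dns.resolver  # third-party: pip install dnspython (>= 2.0)

    DOMAIN = "moonofalabama.org"      # example domain
    ALT_NAMESERVER = "198.51.100.53"  # placeholder: any public resolver you trust

    # What the system/ISP resolver returns for the domain
    isp_answer = sorted({info[4][0] for info in socket.getaddrinfo(DOMAIN, 80, socket.AF_INET)})

    # What an explicitly chosen name server returns
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ALT_NAMESERVER]
    alt_answer = sorted(rr.address for rr in resolver.resolve(DOMAIN, "A"))

    print("ISP resolver :", isp_answer)
    print("Alt resolver :", alt_answer)
    if isp_answer != alt_answer:
        print("Mismatch - the two name servers disagree about this domain.")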

What I would like to see Russia/China/BRICS/SCO/etc offer ASAP is some name server infrastructure that can be accessed through the standard name server settings under TCP/IP on any computer, and which offers configurable access to the internet address lookup registries around the world without critical dependence on the US-controlled InterNIC database.

Numerical internet addresses (IP addresses) change from time to time; this is in itself normal. For example, if MoA changes its service provider (web server), the MoA numerical IP address will change. The change in IP address is registered in the database stored in the registry for the .org top-level domain in the US, and other name servers around the world regularly update their own data from it. If the US substitutes false values, any attempt to access the website can be diverted to an alternative address (sometimes a fake website!) managed by the US. Sometimes they do this even now, and if challenged they pretend it was a "mistake". Russia/China need to provide name server infrastructure combined with user software (browser interface) that is capable of selecting archived IP address lookup data when the most recently available data in the registry is false, selectable by date (the registry records when the data was last changed). By selecting an IP address from archived data from before the block, access to the site can be re-enabled (as long as the website is still on its servers - if those are US servers it remains under US control, but if it is on Russian servers it is not).

Some websites legitimately need to be blocked - e.g. ISIS propaganda sites - so the system would need to be able to block access to archived IP addresses for such legitimately blocked sites.

As I suggested some weeks ago, B really needs to prepare for possible blocking in advance - I am quite sure it will come eventually - by registering a non-US website such as moonofalabama.org.ru etc, and announcing that alternate address. When the internet is cut, it is already too late to announce the backup site! That can still be blocked by the US, but there are more ways to get around it.

Posted by: BM | Oct 27 2018 13:25 utc | 70

Researchers asked people around the world how a 'Moral Machine' should behave.

The question is itself inherently and necessarily immoral. To assume that there could be a machine that follows "moral" principles on the basis of simple heuristics is a profound delusion. Morality is not and cannot be based on simple heuristics - anything based on such simple heuristics is by definition NOT morality.

I would go further: any autonomous vehicle that chooses between crash victims in the way outlined in this article, on the basis of scoring different classes of people, constitutes a crime against humanity. It leads ultimately to something like the following preference structure, which could be programmed into any such car (either by design or by remote hacking!) - a toy sketch of how trivially such a table could be encoded, and silently altered, follows the list:

George Soros: +1000000
Eric Schmidt: +999999
Mark Zuckerman: +999998
Koch Brothers: +999997
...
Homosexual: +1000
Heterosexual: -10 (or vice versa!)
...
Anglo-Saxon white: +100
Black: -1000
Slavic white: -100
Asian: -100
...
White-collar: +1000
Manual labourer: -100
Homeless: -10000
Labour union member: -100000
Human rights activist: -1000000000000
...
Joe Bloggs #137 (targeted by the elite): -1000000000000
...
etc
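
Purely to make the point concrete - the names, numbers and function below are hypothetical, not anyone's real system - such a schema is nothing more than a lookup table, and a single silent edit inverts its decisions:

    # Hypothetical illustration only: how trivially such a 'preference schema'
    # could be encoded -- and covertly altered -- in software.
    VICTIM_SCORE = {
        "child_in_stroller": 5000,
        "pregnant_woman": 4000,
        "doctor": 1000,
        "homeless_person": -10000,
        "union_member": -100000,
    }

    def preferred_outcome(group_a, group_b):
        """Return whichever group the schema says to spare (higher total score)."""
        score = lambda group: sum(VICTIM_SCORE.get(person, 0) for person in group)
        return group_a if score(group_a) >= score(group_b) else group_b

    print(preferred_outcome(["doctor"], ["homeless_person"]))   # -> ['doctor']

    # One covert change flips every decision involving that class:
    VICTIM_SCORE["doctor"] = -1000000
    print(preferred_outcome(["doctor"], ["homeless_person"]))   # -> ['homeless_person']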

No matter what preference schema might hypothetically be approved officially, what is used in practice may easily be covertly subverted at any time, and we will know nothing about it. Any such preference schema - even an apparently innocent preference for children or the elderly or whatever - is an insidious and abhorrent rejection of the most fundamental human rights. It is intrinsically and necessarily elitist. Everybody has the right to be treated equally to every other person.

Such a preference schema is a murder weapon in the hands of the state, and must be rejected unconditionally.

It is already possible for the CIA to murder someone by taking remote control of a modern car and crashing it. Here is an example of a journalist who was probably murdered in this way (or first murdered, then his body put in his car and crashed by remote control to destroy the evidence) after his investigative journalism came too close to power: Five Years On, Death of Journalist Michael Hastings Remains a Mystery

Don't fall for autonomous cars - they are nothing more than a trojan horse for population control and remotely controlled murder. There is no other justification for them.

Posted by: BM | Oct 27 2018 14:11 utc | 71

Can preference schemas be devised on the basis of popular opinion? If so,

in Israel, Palestinians are targeted
in Ukraine, Russians are targeted
in Kosovo, Serbs are targeted
...
etc

Posted by: BM | Oct 27 2018 14:16 utc | 72

The three laws of robotics by Isaac Asimov have been mentioned here. The principle calls for robots to do no harm to humans. As it stands, this is a meaningless formulation in technical terms - machines do not actually decide anything; instead, they simply act on their coded instructions, which are nothing but complex sorting algorithms (connecting input sensor data to output steering commands, in this case). The sensors deliver their data plainly as numbers, and this level is never actually transcended further down the wire. No actual gestalt recognition ever takes place, and no categories of meaning (like "danger") are applicable.
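
Purely as an illustration of that point - the function, sensor names and thresholds below are hypothetical, not any real vehicle's code - the whole 'decision' can be nothing more than numbers compared with numbers:

    # Hypothetical thresholds, for illustration only.
    def steering_command(lidar_distance_m: float, speed_m_s: float) -> str:
        """Map raw sensor numbers to an output command.

        Nothing here 'knows' what a pedestrian or a danger is; it only
        compares floats against hard-coded thresholds.
        """
        stopping_distance = (speed_m_s ** 2) / (2 * 6.0)  # assume ~6 m/s^2 braking
        if lidar_distance_m < stopping_distance:
            return "BRAKE_FULL"
        if lidar_distance_m < 2 * stopping_distance:
            return "BRAKE_PARTIAL"
        return "HOLD_COURSE"

    print(steering_command(lidar_distance_m=12.0, speed_m_s=15.0))  # -> BRAKE_FULL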

From this it follows that to fulfill Asimov's first law, the human would need to identify himself to the machine so as to be safely recognized - in a machine-readable way, that is: via a chip implant or some such. I expect humanity (as usual) to overcome unfulfilled hope and faith in technology by adapting itself to imperfect machines, so there is a good likelihood of this happening in the end.

b's line of reasoning is therefore very relevant, as it asks about the consequences of carrying such a chip. Watch this space!

Posted by: persiflo | Oct 27 2018 14:25 utc | 73

The first wave of the computer revolution created stand-alone systems. The current wave is their combination into new and much larger ones.
Posted by b on October 26, 2018 at 02:18 PM

Not strictly correct. It has come full circle, in a sense. We started with standalone machines; then we had central mainframes with dumb terminals; then the first networks of those centralized servers (Xerox PARC's efforts being more forward-looking and an exception); then PCs; then the internet; and now (by design and not by merit, imo) we are effectively back to centralized systems ("clouds" controlled by Amazon (CIA) and Google (NSA)). The peripheral components of autonomous networked systems are akin to sensory devices, with the heavy computational lifting done in the (locally) regime-controlled servers.

---

By this, I mean moral codes no longer have any objective basis but are ever more easily re-wired by the .002 to suit their purposes
Posted by: donkeytale | Oct 27, 2018 6:32:11 AM | 69

I question the implied "helpless to resist reprogramming" notion of your statement. You mentioned God. Please note that God has endowed each and every one of us with a moral compass. You mentioned Jesus. Didn't he say something about "you generation of vipers"? I suggest you consider that ours is a morally degenerate generation and is getting precisely what it wants and deserves.

Posted by: realist | Oct 27 2018 14:27 utc | 74

Probably an addition to BM's @72 post.

If there is an ethical problem of "who to crash into", then it is up to the CAR to decide. The only remaining question is how.
So as we live in a world where money is now considered as "the status indicator" and "the right to Power", then all the car would have to do is to question the local "credit rating agency", do a quick tally of the prospective costs for each solution, and choose the cheapest.
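
A toy sketch of that 'rational' procedure - the 'credit rating agency' lookup and the cost figures below are invented stubs for illustration, nothing that actually exists:

    # Invented illustration of the 'cheapest victim wins' logic; the
    # CREDIT_RATING table is a stub for the local 'credit rating agency'.
    def estimated_cost(victim_group):
        CREDIT_RATING = {
            "senior_pensioners": 50_000,
            "chatting_executives": 5_000_000,
            "parked_police_car": 200_000,
        }
        return sum(CREDIT_RATING.get(v, 100_000) for v in victim_group)

    def cheapest_option(options):
        """Pick the trajectory with the lowest prospective payout -- no ethics involved."""
        return min(options, key=lambda opt: estimated_cost(opt["victims"]))

    options = [
        {"label": "veer left", "victims": ["senior_pensioners"]},
        {"label": "veer right", "victims": ["parked_police_car"]},
        {"label": "straight on", "victims": ["chatting_executives"]},
    ]
    print(cheapest_option(options)["label"])  # -> veer left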

Problem solved, without having to worry about ethics at all, which avoids a "human type question" in what will be a totally "rational" world.

hummm.....?

Posted by: stonebird | Oct 27 2018 14:43 utc | 75

Realist - I agree with your statement 100% except for this: I question the implied "helpless to resist reprogramming" notion of your statement.

I made no such implication. In fact, when someone comes along and tries to put words in my mouth, I believe they are simply locating their own feelings within their interpretation of mine. Which is fine with me, by the way. I enjoy it when the comments riff off each other, as opposed to the mutual agreement/joint appreciation blah blah blah which takes up much (thankfully not too much) of MoA commentary space.

As for this statement: You mentioned Jesus. Didn't he say something about "you generation of vipers"? I suggest you consider that ours is a morally degenerate generation and is getting precisely what it wants and deserves.

This is very true, especially when you consider Jesus also received precisely what he wanted and deserved.

Maybe, in fact, this is your implication: a morally degenerate generation is "helpless to resist programming."

Posted by: donkeytale | Oct 27 2018 15:02 utc | 76

Autonomous cars will just creep up on us. And, being in my dotage, I can hardly wait. There's an interesting BBC doco starring one of those ultra-cute and vivacious BBC presenters. She takes a ride in a BMW programmed to do a couple of full-throttle laps of a motor racing circuit - which it does scarily but competently. Then she takes a fully autonomous (human-supervised), GPS-navigated cross-town trip in a Google Prius.
The most impressive aspect of the Google trip is the immaculate precision and excellent road manners of the Prius. It makes its way to the motorway on-ramp and does a superb job of matching speed with the motorway traffic, signalling appropriately. It picks a spot in the traffic stream, joins the stream, switches off the indicators, slows down slightly and briefly to optimise the gap to the vehicle in front, and just blends in. As they get near their destination it does a couple of polite lane changes. No human could drive more competently than that Prius.

Elderly people will be scrambling over each other to buy any car which can be given a destination address and then be left to its own devices to get them there swiftly and safely, in a vehicle programmed not to take unnecessary risks...

Posted by: Hoarsewhisperer | Oct 27 2018 15:03 utc | 77

As an older male person in Europe I am not really comfortable with these results.

:-))

Killing an old man (or woman) used to be really cheap according to Germanic wergeld law, so there has been some improvement over the centuries. Killing kids used to be really cheap, too.

It looks like the modern list mostly weighs the years that would be taken from an individual, not so much their value to society. Which is kind of reassuring. So a stroller beats the doctor. And a homeless person beats an old person.

Driving a car and being presented with a choice of running over a kindergarten or a group of old-age pensioners? I hope I never get into a situation like that. But I sure would not get into a car that takes this decision for me.

Posted by: somebody | Oct 27 2018 15:28 utc | 78

realist - I think this may be the statement on which you are basing the implication: Morality has little or nothing to do with the direction of humanity.

If so, then your point is fair and well stated. Please understand I wasn't discussing personal morality in my comment, but rather the codified morality (religious, legal, cultural) which is used by the elites to control us.

Thanks for your comment

Posted by: donkeytale | Oct 27 2018 15:31 utc | 79

It's interesting how our view of the future is conditioned by our experience of this present reality we find ourselves in. I wonder if a similar discussion in Russia or China would harbor such dystopian fears?

Of course we worry about amoral codes being applied against us, because we live in a society of capitalist exploitation and amorality. Of course we worry about the Panopticon and about ourselves becoming just another thing in the Internet of Things - this is the world we live in now, a fascist-corporatist morality working its way towards total statism.

We are Luddites because we in the west have no future. In the east they have a future. I hope, and trust, that the world will survive this moment, because for those who have a future, the future will continue to unfold in humanly fascinating ways. Problems will be solved, new problems will arise, along with discoveries, and the world will keep turning. We have forgotten all this in the west, dreading the end of days.

Our view of the future shows how much we detest this present, in the west, as our day dies and the power refuses to yield to this.

In terms of technology, humans will anthropomorphize everything in their humorous way. Human life will continue in its human style. Social scores will become memes, and party jokes, and we will teach the machines to lighten up a little.

Along the way, perhaps we will outlaw the humans who think in machine terms, or rehabilitate them, and remove their power, seize their stolen money, open their shriveled hearts, and show them a way to be human.

Posted by: Grieved | Oct 27 2018 15:34 utc | 80

@ BM 9:25:05 AM | 71

Excellent post! Very useful information – thank you!

Posted by: AntiSpin | Oct 27 2018 16:35 utc | 81

5

It's good to know all the bugs have been worked out of the Inconvenient 97% White Male Scientocracy software code by now, and the State-Corporate Tithe Tax and Tax Credit 'scheme' can move ahead at full speed, sparing neither the elderly nor the youth, and benefiting only the sycophants and acolytes of the New Carbon Catholic Third Temple, and the Deep Purple Mil.Gov UniParty minions at State, who are exempted from the Carbon Tithe by a corresponding COLA increase in their salaries and benefits for life. Which is the real 'inconvenient truth': that State dodges the very taxes it imposes, because its software code is perfect and *we* are the bugs to be eliminated.

Posted by: Anton Worter | Oct 27 2018 16:36 utc | 82

Along the way, perhaps we will outlaw the humans who think in machine terms, or rehabilitate them, and remove their power, seize their stolen money, open their shriveled hearts, and show them a way to be human.
Posted by: Grieved | Oct 27, 2018 11:34:28 AM | 81

I hope so! Thank you for that thought.

Posted by: BM | Oct 27 2018 16:48 utc | 83

78

I would imagine the elderly will be scrambling all over themselves to purchase the medical equivalent of Siri or Sarah(?) for Android: something that will listen to them with precise voice-to-text, listen to their breath and pulse, and use low-cost Test-on-a-Chip with personalized AI, a global medical FAQ, Amazon Pharmacy and a transdermal implant to keep them precisely medicated without the intervention of a 3x-more-expensive, #3-cause-of-malpractice-death, for-profit medical/hospice wealth-extraction USAryan Vampirocracy.

PS. Humans will never live on Mars, never mind asteroids, and will never ride a space elevator. Those are all mani-infestations of the 97% Old White Male Scientocratic Third Temple, Hoover It Up© wealth 'scheme', in which 'morality' plays no part, because the Third Temple is lubricated by faith, blood and bone grease.

No morality required.

Posted by: Anton Worter | Oct 27 2018 16:50 utc | 84

77

Capital has no morality, and Gramm-Leach-Bliley slash Dot Con Neutron Bomb slash Synthetic Collateralized Debt Obligations slash Too Big To Fail slash I Can't Tell You Where All the Money Went has unleashed $100s of TRILLIONS of fiat credit-debt upon a colonized world where the sun never sleeps, so that from every teenager to every elder business executive around the globe, we have all been reduced to hapless global casino gamblers on IoT, under a million points of branding and brainwashing, from which only the State and the Wealthy are exempt. They can afford 'morality'! In fact, 'morality' is what they preach, and that's the Inconvenient Truth of the 97% Old White Male Scientocratic Third Temple: "We got ours, and now we're coming for yours. May peace be upon you (and docility at tithe time). Let Freedom Ring© and may the Red- and Blue-Teen Spirit be Ours!" Amen.

We will now have a color guard fold the flag.

Welcome to the Supra-National Third Temple.

'It's Pay Up Time'© Pope Albertus 'Glorious' Goreius

Posted by: Anton Worter | Oct 27 2018 17:17 utc | 85

It's like the old Irish joke:

How do you get to driverless utopia?
Well sir, I wouldn't have started from here.

Some situations are almost too complicated for human drivers.

Example: a busy side road tees into a busy two-lane highway, with drivers on the highway coming from both directions at 50++ mph. Nearly everyone on the side road is turning left onto the highway. But the intersection is actually four-way, with an exit from a local park opposite the stop sign for the side road. Exiting the park, I have to make visual contact with the driver opposite to let them know it is my turn and I am going straight. If both cars go at the same time and block the intersection, there may be no time for cars on the highway to stop. How does an AV system designer handle this situation, where one of the vehicles is still under human control?

Seems that's the trouble: at the time AVs are being rolled out, they will be mixing it up with existing human drivers, some intentionally out to make sport of the AV control systems. If the designers expect the AV drivers to intervene when the system becomes confused, good luck with that. I keep hearing of commercial airliners getting into trouble when the autopilot suddenly clicks off, forcing the human pilots to think about flying the plane again.

I'm still not convinced the driverless car craze will turn out to be practical given the state of our transportation system. Maybe some things are not well suited to be fully automated. Look at Elon Musk and his Alien Dreadnought.

BTW, one solution to the moral dilemma is immovable barriers. If it comes down to a choice between a pregnant woman and a stroller, the software immediately identifies the nearest available immovable barrier and crashes into it. The final and default option should always be the destruction of the AV and all its contents.
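
A minimal sketch of that default rule - the option labels and the harms_humans flag are hypothetical, not any vendor's actual logic:

    # Hypothetical illustration of the proposed 'hit the barrier' default rule.
    from dataclasses import dataclass

    @dataclass
    class Option:
        label: str
        harms_humans: bool  # does this trajectory endanger anyone outside the AV?

    def choose_trajectory(options):
        """Prefer any option that sacrifices only the AV itself.

        The point of the rule is that 'hit the barrier' is always on the list,
        so the fallback to options[0] should never be reached in practice.
        """
        for option in options:
            if not option.harms_humans:
                return option  # e.g. the nearest immovable barrier
        return options[0]

    options = [
        Option("swerve toward pedestrians", harms_humans=True),
        Option("continue toward stroller", harms_humans=True),
        Option("crash into concrete barrier", harms_humans=False),
    ]
    print(choose_trajectory(options).label)  # -> crash into concrete barrier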

Posted by: anon | Oct 27 2018 18:35 utc | 86

I've thought the idea of a self-driving vehicle was economically driven and -- like so many things -- based on the presumption of "self-evident" good: better safety, the ability to do away with drivers and the related workplace safety requirements (poorly enforced), and maintaining the existing infrastructure and the gas-powered (or even electric) trucks-move-America mindset -- with the research paid for by the "private sector" as a very expensive alternative to infrastructure improvement and expansion of the railroad networks.

From top to bottom, so much money would be "saved" by eliminating drivers (and their foibles and ancillary expenses like turnover and healthcare) -- indeed many pie-in-the-sky scenarios are possible ... and then you consider Uber and Lyft, and even facebook/twitter, and the massive collection of personal data enabled because "the rules changed" ... will we need to present ID to take a bus downtown? Liability, doncha know.

If the trucking industry were a "better" employer there'd be more discussion of cleaning it up ... side-stepping human factors often has unexpected effects. In today's neoliberal climate, there's always money enough to continue, because there is believed to be a pot of gold at the end of the rainbow.

Posted by: Susan Sunflower | Oct 27 2018 18:37 utc | 87

@ karlof1 #31

Exactly.

Starting up donkey breeding programmes would be a better option since the humble burro (and the mule) was and will always be the ultimate 4x4.

Posted by: Cortes | Oct 27 2018 19:00 utc | 89

72

If memory serves, there is a no-shrapnel, waxed-cardboard version of an RPG available for non-armored targets. When I saw the Hastings crime scene, I saw the rear of the burning car blown out by such an RPG, and *then* Hastings crashed into the tree at an impossible angle, instinctively power-sliding away from what he must have assumed was a truck that had just slammed into his right rear quarter-panel. Don't be Michael Hastings. Don't be Robert Bowers, for that matter, lol. The US-UK-IL-KSA mugwumps!

And that's why we will never have autonomous private vehicles. They are just using taxpaying citizens as beta-testers for an autonomous Deep-Purple Mil.Gov UniParty global police state.

By pure coincidence, at a business-club dinner last night I sat next to a military subcontractor with Chinese connections, an import license and a Made-in-USA final-assembly warehouse. He is developing a low-altitude, persistent-loitering traffic-monitoring drone. He was in Bellevue to meet with the coders. It would be used with the HOV-lane high-resolution cameras and real-time facial-recognition software to identify speeders' names, vehicles and addresses for first deployment ... but it can just as easily operate in reverse: find a target, confirm and identify the front-seat passengers, then paint a laser target on the vehicle as it wings down the freeway, waiting for an open-area, Hastings-esque hellfire denouement.

Prolly for MENA. Prolly A-OK, Joe. Nothing to see here, citizen. E pluribus now get back to work. Pence's latest $1/4-TRILLION nuclear ICBM upgrade program awards soon, and we're gonna need those tithe-tributes!

Posted by: Anton Worter | Oct 27 2018 19:32 utc | 90

as much as i detest 99.9999% of what's on television, "the good place" has addressed the trolley issue

https://www.youtube.com/watch?v=lDnO4nDA3kM

as well as many others you mentioned in very intelligent ways. after the characters die, there's a "good place" or a "bad place" and where they go depends on how many "points" they gain or lose during their lives. maybe the show has chinese funding.

https://www.overthinkingit.com/wp-content/uploads/2017/02/The-Good-Place_RulesPart2.jpg

https://ewedit.files.wordpress.com/2016/09/the-good-place-points-1.jpg

it's based on a book but i haven't read it yet. surprisingly smart show for network TV.

also worth mentioning is george hotz. some know him as the guy who cracked sony's playstation 3, as well as being the first person (on record) to jailbreak the iphone. all while in his 20s and smoking a ton of weed.

his most recent project has been a kit that turns "dumb cars" into self-driving ones.

https://www.theverge.com/2018/7/13/17561484/george-hotz-comma-ai-self-driving-car-scam-diy-kit

worth a read.

as for self-driving cars in general, it will take a massive infrastructure overhaul to make them safe and/or practical: sensors on every street, road, corner, alley, etc. ad infinitum. it also seems like a distraction from the more attainable and much more important transition to hydrogen fuel cell cars. people have fooled themselves into thinking "electric" cars make the slightest difference, but they still run on fossil fuel/natural gas generated power.

https://www.eia.gov/energyexplained/index.php?page=electricity_in_the_united_states

hydrogen fuel cells get better mileage and expel water vapor as opposed to stank poison and carbon monoxide.

https://www.popularmechanics.com/cars/hybrid-electric/a22688627/hydrogen-fuel-cell-cars/

oh well. westerners won't adopt anything practical until they're forced to, and by then it will be too late. ditto the chinese, who increasingly insist on acting the same way for some reason.


Posted by: the pair | Oct 27 2018 19:40 utc | 91

Long time since we heard from Wahabibi Nethayahoo? He has hopefully deceased? Just like the parrot? Njet! Oh, another lager then.

Posted by: Den Lille Abe | Oct 27 2018 19:54 utc | 92

Went for a Poppels Russian Imperial Stout! This is good stuff!

Posted by: Den Lille Abe | Oct 27 2018 19:56 utc | 93

Remote controlled cars are already available. In fact, most of us are driving them now. We just don't have the remote control device that goes with them. Dick Cheney has it.

Posted by: fast freddy | Oct 27 2018 20:09 utc | 94

You ask the car to drive you to your favorite restaurant. But instead of taking the shortest route, the car first drives past three or four 'sponsoring' restaurants, asking each time whether you would not prefer to eat there. Wanna bet?

Posted by: passerby | Oct 27 2018 20:11 utc | 95

I hope they make these cars with bulletproof glass. The occupants are going to need it if somebody kills a loved one of mine.

Posted by: so | Oct 27 2018 20:19 utc | 96

Posted by: stonebird | Oct 27, 2018 10:43:47 AM | 76:

So as we live in a world where money is now considered as "the status indicator" and "the right to Power", then all the car would have to do is to question the local "credit rating agency", do a quick tally of the prospective costs for each solution, and choose the cheapest.

Problem solved, without having to worry about ethics at all, which avoids a "human type question" in what will be a totally "rational" world.

hummm.....?

Yeah, that's a good start. Also, the algorithms should take into consideration whether a passenger is related to anyone in a public office/function.
If there are several passengers in a car, there'll be a quick weighted average calculated and boom, they're overrun. So if you're the guy driving on that particular evening, you'd better have a good look at which of your mates will be lowering your car's crash-index, and tell the lowlifes to take the tube home.

Posted by: radiator | Oct 27 2018 20:22 utc | 97

I test software that controls trains. This software is mandated by US legislation. The company I work for has had many issues with people hired from overseas who pretended to have tested some functionality but never did. Luckily, we have time stamps on certain software to show what was actually done. These guys were claiming to have done 4 hours of tests in 5 minutes, for example. All this to say that a lot of the testing in these kinds of systems will be faked, and we'll only find out afterwards, when enough people have died. The issues with large bank systems and airline ticketing systems going down in the last few years were also due to this problem - outsourcing and not enough checking that the work was done correctly. Same with the Microsoft problem that b mentioned.
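
A minimal sketch of that kind of timestamp audit - the log format here ("test_name,start,end,claimed_minutes") is made up for illustration, not the poster's actual tooling:

    from datetime import datetime

    def flag_suspicious_runs(log_lines, tolerance=0.5):
        """Yield runs whose wall-clock duration is far shorter than the claimed effort."""
        for line in log_lines:
            name, start_s, end_s, claimed_min = line.strip().split(",")
            start = datetime.fromisoformat(start_s)
            end = datetime.fromisoformat(end_s)
            actual_min = (end - start).total_seconds() / 60
            if actual_min < float(claimed_min) * tolerance:
                yield name, actual_min, float(claimed_min)

    # Made-up log entries: name, start time, end time, claimed minutes of testing
    log = [
        "brake_interlock,2018-10-27T09:00:00,2018-10-27T09:05:00,240",  # 5 min logged vs 4 h claimed
        "signal_timing,2018-10-27T10:00:00,2018-10-27T13:50:00,240",
    ]
    for name, actual, claimed in flag_suspicious_runs(log):
        print(f"{name}: {actual:.0f} min logged, {claimed:.0f} min claimed")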

Posted by: Mischi | Oct 27 2018 20:29 utc | 98

Grieved says:

I wonder if a similar discussion in Russia or China would harbor such dystopian fears

i've read that there are more violin virtuosos in China than we have violin players.

i wonder what regress to the errors of their ways and anthems to the power of love might they produce in some dystopian future?

their very own, Meeting Of The Spirits, crying out of the post-modern hutongs.

...

as an aside...have you ever been in Beijing on a bad air day?

you can literally taste the metal.

Posted by: john | Oct 27 2018 20:32 utc | 99

@ anton
In 2007, in preparation for the 33rd G8 summit, the Network of African Science Academies submitted a joint "statement on sustainability, energy efficiency, and climate change":

A consensus, based on current evidence, now exists within the global scientific community that human activities are the main source of climate change and that the burning of fossil fuels is largely responsible for driving this change. The IPCC should be congratulated for the contribution it has made to public understanding of the nexus that exists between energy, climate and sustainability.
— The thirteen signatories were the science academies of Cameroon, Ghana, Kenya, Madagascar, Nigeria, Senegal, South Africa, Sudan, Tanzania, Uganda, Zambia and Zimbabwe, as well as the African Academy of Sciences.

white male scientocracies! it's a plot to raise their colas! as an aside, why do you think driverless cars are impossible? that's the argument you seem to be making: it's all a plot to convince somebody (?) that science works when it doesn't, by old white guys. oddly, every single one of the few scientists who dissent from the global warming consensus is an old white guy, other than judith curry, and every single one seems to profit from the fossil fuel industry.
i think they are trying to make some cost of living adjustments.

Posted by: pretzelattack | Oct 27 2018 20:44 utc | 100
