Here's why self-driving cars may never really be self-driving

In an accident, the driver of an autonomous vehicle would likely still be responsible for damages

It sounds like the beginning of a bar room joke.

Two self-driving cars are headed down the highway when the lead car decides to speed up to avoid being rear-ended by the second. That car, in turn, slows down to avoid hitting the first. Then a third car suddenly comes between the two, prompting the slower car to change lanes to avoid an accident.

The problem: There are cars in the lanes on either side of it.

What's an autonomous car to do? The answer is no joke.

The scenario is called "a ripple factor," and it's one of many that researchers at Carnegie Mellon University (CMU) are studying to understand how embedded software could address the myriad unexpected situations that could cause accidents as self-driving vehicles speed toward reality.

The proof-of-concept software, called KeYmaera X, is being developed not only to figure out how an onboard vehicle computer should reason through all possible scenarios, but also to determine when it should hand control over to the person behind the wheel.
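
To make the idea concrete, the hand-over decision can be sketched as a runtime check: if the current situation falls outside the envelope the control software was designed and verified for, give the wheel back to the human. The Python below is only an illustration of that logic; KeYmaera X itself is a theorem prover used to verify such designs offline, and the sensor fields, thresholds and function names here are invented for the example.

```python
# Hypothetical sketch of a runtime hand-over monitor.
# KeYmaera X itself is a theorem prover used offline to verify control
# designs; the names, fields and thresholds here are invented for
# illustration only.

from dataclasses import dataclass

@dataclass
class DrivingState:
    speed_mps: float          # own speed, metres per second
    gap_ahead_m: float        # distance to the vehicle in front
    lane_clear_left: bool     # is the adjacent left lane free?
    lane_clear_right: bool    # is the adjacent right lane free?

# Envelope the controller was designed (and verified) for -- assumed values.
MAX_VERIFIED_SPEED = 33.0     # roughly 120 km/h
MIN_SAFE_GAP = 15.0           # metres

def inside_verified_envelope(s: DrivingState) -> bool:
    """True if the current situation is one the controller was designed for."""
    return s.speed_mps <= MAX_VERIFIED_SPEED and s.gap_ahead_m >= MIN_SAFE_GAP

def control_decision(s: DrivingState) -> str:
    if not inside_verified_envelope(s):
        # The software recognises its own limits and hands over.
        return "hand control to driver"
    if s.gap_ahead_m < 2 * MIN_SAFE_GAP and not (s.lane_clear_left or s.lane_clear_right):
        # Boxed in, as in the "ripple factor" scenario: brake rather than swerve.
        return "brake and hold lane"
    return "continue autonomous driving"

print(control_decision(DrivingState(30.0, 10.0, False, False)))  # -> hand control to driver
```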

A Tesla Model S involved in a fatal crash on May 7, 2016 had the top third of the car sheared off when it hit a tractor-trailer truck in Williston, Fla. The NHTSA determined Tesla's Autopilot semi-autonomous driving system was not at fault. (Image: Robert VanKavelaar/Handout via Reuters)

One problem with trusting autonomous vehicle software to control a two-ton car (or a 16-ton semi-tractor truck) is that each manufacturer programs its product differently from its competitors. And if a software glitch exists in one vehicle, it exists in the entire line of cars or trucks.

"Whether you trust it depends on how it's programmed," said André Platzer, an associate professor at CMU's School of Computer Science. "If one person makes one mistake in one circumstance, that doesn't mean all other people will make the same mistake in those same circumstances. You don't want a fatal mistake one human can make [replaced] with 50,000 mistakes a computer can make."

Platzer feels it is imperative to keep a driver alert and engaged behind the wheel of "driverless" vehicles. An expert in driverless cars, Platzer is involved in DARPA's High-Assurance Cyber Military Systems (HACMS) project, which learns from the military's experience in developing hardened technology for controlling autonomous vehicle systems.

The hard problem in creating an effective computer algorithm for self-driving vehicles is ensuring the vehicle is self-aware, and therefore capable of recognizing its own operating limitations.

"...The world is a complicated place and that makes the road a complicated place. Most of the time, roads are the same, but every once in a while the situation is a bit different," Platzer said. "Even if you take a million of these scenarios..., you've still not tried all the cases by testing alone."

The issue with self-driving vehicles extends well beyond safety; it's also a legal one. As autonomous vehicles gain in popularity, so do questions about liability. If an autonomous car crashes, who is at fault? The driver, even though the car was driving itself? The manufacturer? The developer who created the autonomous software?

"Suppose I write a piece of software and it has an inherent flaw. It starts causing injuries and property damage. Am I protected? The answer is: Unlikely," said Michael Overly, a partner and intellectual property lawyer with Foley & Lardner LLP. "People whose property was damaged or who were injured would sue for negligence."

Overly said a handful of proposed regulatory schemes would allow manufacturers to dodge liability in these cases. But a recent report from the American Association for Justice notes that the regulations would do nothing to advance public safety. Ultimately, it's going to be up to the courts to decide.

A Chrysler Pacifica Hybrid minivan with Waymo self-driving technology. (Image: Waymo)

When a consumer or business purchases a self-driving vehicle, they're essentially signing a contract, taking responsibility for what could happen when the software is activated, Overly said. But consider a pedestrian who is hit by such a vehicle. "There is no contract with that person," Overly said.

"For the foreseeable future..., this software will always require the driver of the car to ultimately be in charge of it," Overly said.

It's not just software flaws or safety omissions that could plague the auto industry with lawsuits. If the software allows a hacker to gain control of a vehicle, the liability would also likely fall on the manufacturer.

"Was the creator of the software acting negligently when he or she created it? That'll be a very high standard when there's software allowing a heavy object at high speed [that's] able to create a lot of harm," Overly said. "Lawyers look for the deep pockets, and who would that be?"

Currently, there are no regulations governing self-driving vehicle liability, Overly said. Part of the problem is that lawmakers, who once viewed autonomous vehicles as an issue that was five years out, are now faced with a nearly immediate problem.

"No one would have predicted how rapidly self-driving cars would have rolled out," Overly said. "My suggestion is that if the software is operating under federal guidelines, then liability should not be attached."

While U.S. legislators have yet to fully address the liability issue of self-driving vehicles, other nations have -- even if the technology is still several years down the road. For example, Great Britain expects to see the first driverless cars on its roads in 2020.

The S 500 Intelligent Drive by Mercedes-Benz on an autonomously driven journey. (Image: Daimler AG)

This month, the U.K.'s Department for Transport announced it plans to require owners of cars with advanced driver assistance systems (ADAS) to carry two-in-one insurance policies: one to cover the person when they're controlling the vehicle and the other for the car when it is in autonomous mode.

When a car is in self-driving mode, then, insurance companies will likely pursue the vehicle manufacturer for accident claims rather than the driver.

While today's semi-autonomous and fully autonomous vehicles rely on GPS, radar and cameras to detect what's around them and avoid accidents, tomorrow's vehicles will likely add vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) short-range communications. V2V communications will let vehicles know what the vehicles around them are about to do, while V2I communications will let cars know what's around the corner.

For example, V2I technology currently focuses on warning motorists about red lights, intersections, bad weather conditions, approaching curves and construction zones "so they can alter their behavior," according to Christopher Dolan, an attorney with the American Association for Justice.
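
As a rough illustration, a V2V broadcast boils down to each vehicle repeatedly announcing its position, speed, heading and braking state so that nearby cars can react before their own sensors could spot the danger. The sketch below is a simplified assumption of such a message and warning check, not any standardized format or manufacturer's protocol.

```python
# Simplified sketch of a V2V status broadcast -- the fields and logic are
# illustrative assumptions, not a standardized message format.

from dataclasses import dataclass

@dataclass
class V2VMessage:
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    braking: bool            # is the sender braking hard right now?

def should_warn_driver(msg: V2VMessage, own_heading_deg: float,
                       distance_m: float) -> bool:
    """Warn if a nearby vehicle on roughly the same heading is braking hard."""
    same_direction = abs(msg.heading_deg - own_heading_deg) < 20.0
    return msg.braking and same_direction and distance_m < 100.0

# A car 60 m ahead, travelling the same way, brakes hard -> warn the driver.
incoming = V2VMessage("veh-123", 37.77, -122.42, 25.0, 90.0, braking=True)
print(should_warn_driver(incoming, own_heading_deg=92.0, distance_m=60.0))  # True
```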

A Ford self-driving prototype being tested at the University of Michigan's Mcity proving grounds. (Image: Ford)

The U.S. National Highway Traffic Safety Administration (NHTSA), Dolan wrote recently, has declared regulatory jurisdiction over V2V technology pursuant to the 1966 National Traffic and Motor Vehicle Safety Act, with an initiative called Intelligent Transportation Systems (ITS).

"NHTSA estimates that a fully mature V2V system could eliminate as many as 81% of all light-vehicle crashes and 81% of all heavy-truck crashes annually," Dolan said.

In every scenario, however, many experts and industry pundits believe a self-driving car will always need to be able to hand over control to a real person behind the wheel.

Disengaging self-driving technology

Consumer Watchdog, a nonprofit public interest group, said the need for legislation to require a driver behind the wheel "is obvious" after reviewing the results from companies that have been testing self-driving cars in California since September 2014.

In California, companies permitted to test autonomous vehicle technology are required to file "disengagement reports" explaining when a test driver had to take control of the vehicle to avoid an accident.

In 2016, the state posted autonomous vehicle "disengagement" reports from 11 companies: Bosch, BMW, Delphi, Ford, GM, Google's spin-off Waymo, Nissan, Honda, Mercedes-Benz, Volkswagen and Tesla Motors.

Hyundai's Ioniq autonomous car in Las Vegas on Jan. 4, 2017. (Image: Martyn Williams)

Over the past two years, Waymo has led the industry, accounting for 97% of all autonomous test miles driven while reporting the fewest driver disengagements. In 2015, after driving more than 454,000 miles, Waymo's test drivers had to take control 341 times. In 2016, over more than 635,000 test miles, drivers took control just 124 times -- a drop of roughly 64% even as miles driven increased.

Bosch, which supplies automakers with autonomous driving technology, had 1,442 disengagements during 983 miles driven; parts supplier Delphi reported 178 disengagements over 3,125 miles; Ford reported 590 miles driven with three disengagements; Nissan had 4,099 miles driven and 28 disengagements; and Mercedes-Benz reported 673 miles driven with 336 disengagements. Volkswagen reported 10,416 miles driven and 260 disengagements in 2015 (it didn't report data for 2016), and Tesla said its fully autonomous test vehicles drove only 550 miles and had 182 disengagements.
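
Those raw counts are easier to compare once they are normalized by miles driven. A quick calculation over the approximate 2016 figures reported above (Volkswagen is omitted because it reported only 2015 data) puts Waymo at roughly 0.2 disengagements per 1,000 miles, while Bosch's small test fleet works out to well over 1,000:

```python
# Disengagements per 1,000 miles, computed from the approximate figures
# reported above (2016 California disengagement reports).
reports_2016 = {
    "Waymo":         (124, 635_000),
    "Bosch":         (1_442, 983),
    "Delphi":        (178, 3_125),
    "Ford":          (3, 590),
    "Nissan":        (28, 4_099),
    "Mercedes-Benz": (336, 673),
    "Tesla":         (182, 550),
}

# Sort from the lowest disengagement rate to the highest and print each.
for company, (disengagements, miles) in sorted(
        reports_2016.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    rate = disengagements / miles * 1_000
    print(f"{company:14s} {rate:8.1f} disengagements per 1,000 miles")
```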

Consumer Watchdog has said there's been "an unjustified rush to deploy self-driving autonomous vehicle technology that will threaten the safety of the nation's highways."

"We agree that some automated vehicle technologies, such as automatic emergency braking, will save lives. But those systems are designed to work with and augment a human diver's abilities and compensate for their shortcomings," said John Simpson, Consumer Watchdog's project director, referring to a announcement last year.

Consumer Watchdog said California's "disengagement reports" show self-driving vehicles are not always capable of seeing pedestrians and cyclists, traffic lights, low-hanging branches, or the proximity of parked cars. That suggests a risk of serious accidents involving pedestrians and other cars.

"The cars also are not capable of reacting to reckless behavior of others on the road quickly enough to avoid the consequences," the group said.

Even so, while autonomous vehicles may not be able to react to every situation, data clearly shows they can reduce the number of accidents due to human error.

For example, in January, a federal investigation by NHTSA that included crash rate data provided by Tesla found that vehicles using the company's advanced driver assistance system (ADAS) Autopilot features were actually 40% less likely to crash.

Tesla CEO Elon Musk has also claimed the Autopilot can cut the number of accidents caused by human error by as much as half.

Still, in November Tesla pushed an over-the-air software upgrade to vehicle owners that improved Autopilot, including a requirement that drivers keep their hands on the steering wheel.

Vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) electronic communications technology is helping to advance autonomous vehicles. (Image: NHTSA)

Tesla's system further reinforces driver engagement through a "strike out" strategy. Drivers who do not respond to the driver-monitoring system's visual alerts to keep their hands on the wheel may "strike out" and lose Autopilot functionality for the remainder of the trip.
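
Tesla has not published the exact thresholds behind that policy, but the escalation it describes amounts to a simple counter that resets when the driver responds and locks out Autopilot when too many alerts are ignored. The sketch below is a hypothetical illustration of that logic, with assumed limits, not Tesla's actual implementation.

```python
# Hypothetical sketch of a "strike out" escalation policy -- the alert
# limit and behaviour are assumptions, not Tesla's published design.

class AutopilotSession:
    MAX_IGNORED_ALERTS = 3   # assumed number of ignored alerts before striking out

    def __init__(self) -> None:
        self.ignored_alerts = 0
        self.autopilot_locked_out = False

    def on_hands_off_alert(self, driver_responded: bool) -> None:
        """Called each time the system asks the driver to hold the wheel."""
        if driver_responded:
            self.ignored_alerts = 0           # re-engagement resets the count
            return
        self.ignored_alerts += 1
        if self.ignored_alerts >= self.MAX_IGNORED_ALERTS:
            # "Strike out": Autopilot stays off for the rest of the trip.
            self.autopilot_locked_out = True

    def autopilot_available(self) -> bool:
        return not self.autopilot_locked_out

session = AutopilotSession()
for responded in (False, False, False):   # driver ignores three alerts in a row
    session.on_hands_off_alert(responded)
print(session.autopilot_available())      # False -- locked out until the next trip
```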

CMU's Platzer agrees that while human drivers cannot compete with the precision and speed of a computer, they have a greater ability to react to the more challenging situations that arise on the road. The problem to be solved is how to create vehicles that can determine who should be in charge.

"Whenever a car gets into a challenging intersection, the computer should shut down, admitting it cannot handle it and allowing the human driver to take over for now -- that's completely acceptable," Platzer said. "But if a computer doesn't recognize the situation is outside the bounds of what it was designed for, then it's a situation where the human will always have to be in charge."
