In one bad week in March, two people were indirectly killed by automated driving systems. A Tesla vehicle drove into a barrier, killing its driver, and an Uber vehicle hit and killed a pedestrian crossing the street. The National Transportation Safety Board’s preliminary reports on both accidents came out recently, and these bring us as close as we’re going to get to a definitive view of what actually happened. What can we learn from these two crashes?
There is one outstanding factor that makes these two crashes look different on the surface: Tesla's algorithm misidentified a lane split and actively accelerated into the barrier, while the Uber system eventually identified the pedestrian crossing the street and probably had time to stop, but its emergency braking was disabled. You might say that if the Tesla driver died from trusting the system too much, the Uber fatality arose from trusting the system too little.
But you’d be wrong. The forward-facing radar in the Tesla should have prevented the accident by seeing the barrier and slamming on the brakes, but Tesla’s algorithm places more weight on the cameras than on the radar. Why? For exactly the same reason that the Uber emergency-braking system was turned off: there are “too many” false positives, and the result is that the cars brake needlessly far too often under normal driving conditions.
The crux of self-driving at the moment is figuring out precisely when to slam on the brakes and when not to. Brake too often, and the passengers are annoyed or the car gets rear-ended. Brake too infrequently, and the consequences can be far worse. Indeed, this is the central problem of autonomous vehicle safety, and neither Tesla nor Uber has it figured out yet.
Tesla Cars Drive Into Stopped Objects
Let’s start with the Tesla crash. Just before the crash, the car was following another vehicle using its traffic-aware cruise control, which holds a set speed while keeping an appropriate following distance to the car ahead. As the Tesla approached an exit ramp, the car ahead kept right, while the Tesla moved left, got confused by the lane markings at the lane split, and accelerated to its programmed speed of 75 mph (120 km/h) without noticing the barrier in front of it. Put simply, the algorithm got things wrong and drove into a lane divider at full speed.
To be entirely fair, the car’s confusion is understandable. After the incident, naturally, many Silicon Valley Tesla drivers recreated the “experiment” in their own cars and posted videos on YouTube. In this one, you can see that the right stripe of the lane-split is significantly harder to see than the left stripe. This explains why the car thought it was in the lane when it was actually in the “gore” — the triangular keep-out-zone just before an off-ramp. (From that same video, you can also see how any human driver would instinctively follow the car ahead and not be pulled off track by some missing paint.)
More worryingly, a similar off-ramp in Chicago fools a Tesla into the exact same behavior (YouTube, again). When you place your faith in computer vision, you’re implicitly betting your life on the quality of the stripes drawn on the road.
As I suggested above, the tough question in the Tesla accident is why its radar didn’t override and brake in time when it saw the concrete barrier. Hints of this are to be found in the January 2018 case of a Tesla rear-ending a stopped firetruck at 65 mph (105 km/h) (!), a Tesla hitting a parked police car, or even the first Tesla fatality in 2016, when the “Autopilot” drove straight into the side of a semitrailer. The telling quote from the owner’s manual: “Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h)…” Indeed.
Tesla’s algorithm likely doesn’t trust the radar because the radar data is full of false positives. There are myriad non-moving objects on the highway: street signs, parked cars, and cement lane dividers. The angular resolution of simple automotive radars is low, which means that at speed the radar also “sees” stationary objects that the car was never going to hit anyway. To prevent the car from slamming on the brakes at every roadside billboard, Tesla’s system therefore places more weight on the visual information at speed. And because Tesla’s “Autopilot” is not intended to be a self-driving solution, the company can hide behind the fig leaf that the driver should have seen it coming.
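To make that concrete, here is a deliberately naive Python sketch of what “ignore stationary radar returns at speed” might look like. The numbers and data structures are made up for illustration, not anything from Tesla’s actual stack; the point is only that the same rule which suppresses phantom braking for signs and parked cars on the shoulder also throws away the return from a concrete barrier dead ahead:

```python
from dataclasses import dataclass

@dataclass
class RadarTarget:
    range_m: float            # distance to the return
    closing_speed_mps: float  # how fast we are approaching it
    in_lane: bool             # whether it appears to be in our path

def targets_to_brake_for(targets, ego_speed_mps):
    """Naive filter: ignore stationary returns above ~50 mph to avoid phantom braking."""
    braking_targets = []
    for t in targets:
        # A return closing at roughly our own speed is stationary relative to
        # the road: a sign, a parked car, a lane divider... or a stopped firetruck.
        stationary = abs(t.closing_speed_mps - ego_speed_mps) < 1.0
        if stationary and ego_speed_mps > 22.0:  # ~50 mph: defer to the cameras
            continue                             # <-- the barrier gets dropped right here
        if t.in_lane and t.range_m < 60.0:
            braking_targets.append(t)
    return braking_targets

# A concrete barrier 40 m ahead while travelling at 75 mph (~34 m/s):
barrier = RadarTarget(range_m=40.0, closing_speed_mps=34.0, in_lane=True)
print(targets_to_brake_for([barrier], ego_speed_mps=34.0))  # [] -- no braking triggered
```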
Uber Disabled Emergency Braking
In contrast to the Tesla accident, where the human driver could have saved himself from the car’s blindness, the Uber car probably could have braked in time to prevent the accident entirely, in a situation where a human driver couldn’t have. The LIDAR system picked up the pedestrian as early as six seconds before impact, variously classifying her as an “unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path.” Even so, by the time the car was 1.3 seconds and 25 m from impact, the system was sure enough to conclude that emergency braking maneuvers were needed. Unfortunately, they weren’t engaged.
Braking from 43 miles per hour (69 km/h) within 25 m (82 ft) is just barely doable if you slam on the brakes immediately, on a dry road with good tires. Once you add average human reaction time to the equation, however, there’s no way a person could have pulled it off. Indeed, the NTSB report mentions that the driver swerved less than a second before impact and hit the brakes less than a second after it. She may have been distracted by the system’s own logging and reporting interface, which she is required to use as a test driver, but her reactions were entirely human: just a little bit too late. (Had she perceived the pedestrian and anticipated her walking into the street earlier than the LIDAR did, the accident could also have been avoided.)
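As a sanity check on those numbers, here is a quick back-of-the-envelope calculation. The deceleration and reaction-time figures are my own assumptions, roughly 0.8 g on dry pavement and about 1.5 seconds of perception-reaction time, not values taken from the NTSB report:

```python
G = 9.81                 # gravitational acceleration, m/s^2
v = 43 * 0.44704         # 43 mph in m/s (about 19.2 m/s)
decel = 0.8 * G          # assumed peak deceleration: dry asphalt, good tires
reaction_time = 1.5      # assumed human perception-reaction time, seconds

braking_distance = v**2 / (2 * decel)   # ~23.5 m: just fits in the 25 m available
reaction_distance = reaction_time * v   # ~28.8 m travelled before braking even starts

print(f"braking only:        {braking_distance:.1f} m")
print(f"with human reaction: {braking_distance + reaction_distance:.1f} m")
```

Braking alone just barely fits in the available 25 m; add a human reaction and the required distance more than doubles.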
But the Uber emergency braking system was not enabled “to reduce the potential for erratic vehicle behavior”. Which is to say that Uber’s LIDAR system, like Tesla’s radar, obviously also suffers from false positives.
Fatalities vs False Positives
In any statistical system, like the classification algorithm running inside self-driving cars, you run the risk of making two distinct types of mistake: detecting a bike and braking when there is none, and failing to detect a bike when one is there. Imagine you’re tuning one of these algorithms to drive on the street. If you set the threshold low for declaring that an object is a bike, you’ll make many of the first type of errors — false positives — and you’ll brake needlessly often. If you make the threshold for “bikiness” higher to reduce the number of false positives, you necessarily increase the risk of missing some actual bikes and make more of the false negative errors, potentially hitting more cyclists or cement barriers.
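A toy classifier illustrates the point. The “bikiness” scores below are synthetic (random numbers, not data from any real perception system), but sliding a single threshold over them shows how false positives and false negatives trade off against each other:

```python
import random

random.seed(0)
# Synthetic "bikiness" scores: real bikes tend to score high, clutter scores
# low, but the distributions overlap -- and the overlap is where the tradeoff lives.
bikes   = [random.gauss(0.7, 0.15) for _ in range(1000)]  # ground truth: bike
clutter = [random.gauss(0.3, 0.15) for _ in range(1000)]  # ground truth: not a bike

for threshold in (0.3, 0.5, 0.7):
    false_negatives = sum(score < threshold for score in bikes)     # missed bikes
    false_positives = sum(score >= threshold for score in clutter)  # phantom braking
    print(f"threshold {threshold:.1f}: "
          f"{false_positives:4d} needless brakes, {false_negatives:4d} missed bikes")
```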
It may seem cold to couch such life-and-death decisions in terms of pure statistics, but the fact is that there is an unavoidable design tradeoff between false positives and false negatives. The designers of self-driving car systems are faced with this tough choice — weighing the everyday driving experience against the incredibly infrequent but much more horrific outcomes of the false negatives. Tesla, when faced with a high false positive rate from the radar, opts to rely more on the computer vision system. Uber, whose LIDAR system apparently generates too-frequent emergency braking maneuvers, turns the system off and puts the load directly on the driver.
And of course, there are hazards posed by an overly high rate of false positives as well: if a car ahead of you inexplicably emergency brakes, you might hit it. The effect of frequent braking is not simply driver inconvenience, but could be an additional cause of accidents. (Indeed, Waymo and GM’s Cruise autonomous vehicles get hit by human drivers more often than average, but that’s another story.) And as self-drivers get better at classification, both of these error rates can decrease, potentially making the tradeoff easier in the future, but there will always be a tradeoff and no error rate will ever be zero.
Without access to the numbers, we can’t really even begin to judge whether Tesla’s or Uber’s approach to the tradeoff is appropriate. Especially because the consequences of false negatives can be fatal and involve people other than the driver, this tradeoff affects everyone and should probably be discussed more transparently. If a company plays fast and loose with the false negative rate, drivers and pedestrians will die needlessly, but if it is too conservative, the car will be undrivable and erratic. Both Tesla and Uber, when faced with this difficult tradeoff, punted: they require a person to watch out for the false negatives, taking the burden off the machine.