The fatal collision took place in Williston, Florida, on May 7, when a tractor-trailer crossed the path of a 2015 Tesla Model S. Accounts say the car drove under the truck's trailer, which sheared off the vehicle's roof; the car then traveled another 100 feet before colliding with a utility pole.
Authorities have identified the driver of the Model S as Joshua Brown. Tesla has confirmed that Mr. Brown was using its Autopilot system at the time of the crash.
In a blog post, Tesla explains what went wrong:
"[T]he vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S."
A sad inevitability
The story of Mr. Brown's death is making headlines--and rightly so. Tesla is the first automaker to offer something akin to self-driving software, and this is the first fatality associated with it. Not surprisingly, Tesla opens its post with a plea for calm:
"This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles."
And as grim as it might seem, we all knew this was coming. Even as Tesla has touted the gee-whiz factor of its Autopilot software, it's always been obligated to include an important footnote--one that the company reiterates in its blog post: "Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled" (emphasis ours).
This tragic story is the result of flawed software, but there will be future tragedies, too--ones that might be blamed on software that works all too well.
As we discussed last fall (and as many news organizations have recently reported), autonomous driving systems will eventually have to include some kind of moral code. That embedded sense of ethics will force autonomous cars to make tough decisions: kill the driver, or kill the pedestrian? Kill the driver, or kill the group of cyclists?
In short, autonomous driving software will only make the world better, not perfect. Mr. Brown's death is a testament to that.
A setback for autonomous cars?
This incident may not have much of an effect on consumers, who are, by some accounts, growing even more wary of autonomous vehicles. (By coincidence, the researchers at AlixPartners released a study yesterday showing that 75 percent of motorists would agree to be driven around by an autonomous car. That's not just a reversal of previous surveys; it's also an example of very bad timing.)
However, Mr. Brown's death may cause the National Highway Traffic Safety Administration (NHTSA) to proceed with greater caution, and that could slow the rollout of upcoming autonomous cars.
In 2013, the agency published a list of guidelines for states that wished to legalize autonomous cars. That fall, one of NHTSA's lawyers suggested that talk of selling self-driving vehicles to the public was premature, but less than two years later, Tesla made Autopilot available to Model S owners. And in February 2016, NHTSA determined that Google's autonomous software qualified as a "driver" for legal purposes.