Body Counts, Benchmarks & Teslas

The National Highway Traffic Safety Administration (NHTSA) recently opened a formal investigation into Tesla’s driver-assistance system because it appears to have difficulty recognizing stationary emergency vehicles with flashing lights, such as police cars and fire trucks.

This investigation presents an opportunity for the NHTSA to answer an important question: What is the federal government’s position on deploying a machine driver that is less safe than the average human driver? The NHTSA ought not let this moment pass without answering it.

Even if the NHTSA were to adopt a regulatory position that prohibits deployment of a machine driver that is less safe than the average human driver, that is only part of the puzzle. Implementing this position would require identifying standards and metrics with which to measure safety performance. And beyond the U.S.-based accidents, the investigation will take place in the shadow of a recent report of an accident in England in which a Tesla injured six schoolchildren and a parent. How might the NHTSA investigation proceed?

Initially, Tesla undoubtedly will defend itself by asserting that some of the crashes at issue occurred through product misuse (when not the fault of another driver). Tesla owner’s manuals require that Tesla vehicles operate at all times with an attentive human driver able to override operational choices made in Autopilot or Full Self-Driving mode. This defense ought to fall on deaf ears.

Tesla is the ne’er-do-well of the self-driving car industry because Tesla owners sometimes operate their vehicles as if they had fully autonomous driving capability when they do not. Moreover, Tesla’s design does not incorporate effective safeguards against this improper use.

Essentially, Tesla allows its owners to pretend that they have fully autonomous vehicles when the automation is only partial. Tesla’s CEO, Elon Musk, has even granted an interview while riding in a Tesla without his hands on the wheel. Particularly given Tesla’s various shortcomings, the NHTSA ought to focus on the performance of Tesla cars in actual use (and not performance within their operational design domain). Assume the NHTSA properly rejects the operator misuse defense. What next?

Tesla may argue that, while these crashes are unfortunate, any reprimand is inappropriate because its current self-driving technology, though rated only Level 2 by the Society of Automotive Engineers, is nevertheless safer than a human driver when safety is measured by comparable miles traveled without a fatality. This is where the investigation will settle the issue of metrics and standards, identifying the appropriate data and the structure of the safety case to use for evaluation.

A key issue for metrics and standards will be how to determine the comparability of fatality-free miles traveled by machine drivers and by human drivers. Another will be whether safety ought to be measured not simply by fatalities but also by non-fatal injuries. This, in turn, will require a principle of comparison between fatalities and injuries (e.g., does the death of one person equate to injuries that put five persons in wheelchairs?).
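To see what such a principle of comparison demands in practice, consider a minimal sketch of a weighted harm metric. Every figure and weight below is a hypothetical assumption chosen only for illustration; the hard regulatory work lies precisely in justifying numbers like these.

```python
# Illustrative sketch only: how a regulator might normalize harm across
# fatalities and injuries. All figures and weights are hypothetical.

FATALITY_WEIGHT = 1.0
SERIOUS_INJURY_WEIGHT = 0.2  # hypothetical: 5 serious injuries ~ 1 fatality

def harm_per_100m_miles(fatalities: int, serious_injuries: int,
                        miles: float) -> float:
    """Weighted harm events per 100 million vehicle miles traveled."""
    weighted_harm = (fatalities * FATALITY_WEIGHT
                     + serious_injuries * SERIOUS_INJURY_WEIGHT)
    return weighted_harm / (miles / 100_000_000)

# Hypothetical comparison: a machine-driver fleet vs. a human baseline.
machine = harm_per_100m_miles(fatalities=3, serious_injuries=40,
                              miles=200_000_000)
human = harm_per_100m_miles(fatalities=1_300, serious_injuries=50_000,
                            miles=100_000_000_000)
print(f"machine driver: {machine:.2f} weighted harms per 100M miles")
print(f"human baseline: {human:.2f} weighted harms per 100M miles")
```

Under these invented inputs the ranking of machine and human drivers turns entirely on the chosen injury weight; change it and the comparison can flip. That is why the standard must be explicit and publicly disclosed.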

If, as many think is likely, the chosen data, metrics and standards show that current Tesla vehicles are not safer than human drivers when operated in Autopilot or Full Self-Driving mode without an attentive human driver ready to intervene, then the investigation might arrive at the money question: Will the NHTSA permit deployment of a machine driver today that is less safe than a human driver if that deployment will yield information that leads to a future with fewer highway fatalities?

This is the money question for regulators because two different cost/benefit calculations might justify deployment of fully self-driving cars. We do not yet know which benchmark the regulators will adopt (if indeed they adopt any standard).

First, a self-driving car might be safer than a human driver upon initial deployment, which might justify a regulator’s decision to allow deployment on a simple cost/benefit analysis. Even under a favorable balance of utilities, however, deployment might remain problematic if an overall safer system increased the risk to highway workers and public safety officers, such as police and firefighters. This possibility appears to concern two Senators who have called for the FTC to investigate Tesla.

Second, and more problematic, early deployment of a self-driving car that is less safe than a human driver might be justified on the belief that any incremental increase in highway fatalities in the near term will be offset by a greater decrease in highway fatalities in the long term. This is a “harm now, benefits later” justification. It is problematic for regulators because it treats current highway users as a means to the end of benefiting future highway users. Yet, as a matter of public policy, ought not our regulators make choices that benefit the majority of citizens? What is the proper balance between the personal rights of existing highway users and maximizing the utility of the group over time?
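The arithmetic behind this justification can be made explicit. The sketch below compares cumulative fatalities over a decade under two hypothetical scenarios: deploy a machine driver that starts out worse than the human baseline but improves each year, or wait and let humans drive those miles. Every rate, mileage, and improvement figure is an invented assumption, not data.

```python
# Illustrative sketch only: the "harm now, benefits later" trade-off.
# All rates, mileages, and timelines are hypothetical assumptions.

HUMAN_RATE = 1.3             # fatalities per 100M miles (approximate US figure)
EARLY_MACHINE_RATE = 2.0     # hypothetical: machine driver worse at launch
IMPROVEMENT_PER_YEAR = 0.25  # hypothetical: rate drop per year of fleet learning
MILES_PER_YEAR = 1_000       # hypothetical machine-driven miles, in 100M-mile units
YEARS = 10

def cumulative_fatalities(start_rate: float, improvement: float) -> float:
    """Total fatalities over YEARS, with the rate falling each year."""
    total, rate = 0.0, start_rate
    for _ in range(YEARS):
        total += rate * MILES_PER_YEAR
        rate = max(rate - improvement, 0.2)  # hypothetical floor on the rate
    return total

deploy_early = cumulative_fatalities(EARLY_MACHINE_RATE, IMPROVEMENT_PER_YEAR)
wait = HUMAN_RATE * MILES_PER_YEAR * YEARS  # humans drive those miles instead
print(f"deploy early: ~{deploy_early:,.0f} fatalities over {YEARS} years")
print(f"wait:         ~{wait:,.0f} fatalities over {YEARS} years")
```

With these invented numbers, early deployment produces fewer total fatalities; slow the assumed improvement schedule and the result reverses. The utilitarian case stands or falls entirely on a forecast.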

There is a third possibility, which surfaces another important question: Will the NHTSA permit deployment of a machine driver if the relative capabilities of a machine driver versus a human driver are unknown to a reasonable scientific certainty at the time a company wants to deploy at scale? The relative performance of a machine driver compared with the average human driver might be unknown, and unknowable to a moral certainty, at the time a company plans to deploy its self-driving cars. What then? Is a reasonable scientific certainty the same as a moral certainty?
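There is reason to think the answer may indeed be unknowable at the moment of scale-up. A simple Poisson calculation, sketched below, estimates how many fatality-free miles a fleet would need before one could claim, at 95% confidence, that its fatality rate falls below a human baseline of roughly 1.3 fatalities per 100 million miles (an approximate U.S. figure). Demonstrating a modest improvement, rather than perfection, requires far more mileage still; analyses such as RAND’s 2016 “Driving to Safety” study put the figure in the hundreds of millions to billions of miles.

```python
# Illustrative sketch only: why "safer than a human" may be statistically
# unknowable before deployment. The human baseline rate is approximate;
# the zero-fatality scenario is an optimistic assumption.

import math

HUMAN_RATE = 1.3e-8  # ~1.3 fatalities per 100 million miles, approximate

def miles_for_zero_fatality_demo(confidence: float = 0.95) -> float:
    """Fatality-free miles a fleet must log before we can claim, at the
    given confidence, that its fatality rate is below the human baseline.

    If the true rate equaled HUMAN_RATE, the chance of seeing zero
    fatalities in m miles is exp(-HUMAN_RATE * m); we require that
    chance to fall to at most (1 - confidence).
    """
    return -math.log(1 - confidence) / HUMAN_RATE

miles = miles_for_zero_fatality_demo()
print(f"~{miles / 1e6:,.0f} million fatality-free miles at 95% confidence")
```

A single fatality in the test fleet raises the required mileage substantially, and no pre-deployment fleet plausibly accumulates billions of representative miles. Hence the unknown.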

Deployment in the face of an unknown is problematic because it appears to be a significant gamble which might be taken only if we believe that self-driving car technology will, to a moral certainty, eventually exceed the safety performance of a human driver. Though many believe the technology will eventually “get there,” developing a self-driving car that is demonstrably better than a human driver in a full range of likely operating conditions (including rain and snow) has proven extremely difficult.

In one possible world, machine drivers eventually achieve better-than-human safety performance. But how do we know that machine driver technology will improve to the point of being safer than the average human driver in our future world? This is an epistemological question. If the anticipated safety improvements never materialize, then the consequentialist or utilitarian justification for early deployment seems to evaporate. Opting for deployment in the absence of knowledge also looks like gambling with the lives of highway users in order to make a profit for manufacturers of machine drivers. For this reason, acting under the shadow of ignorance appears morally questionable.

The NHTSA needs to adopt a benchmark for deployment of self-driving car technology so the public might better evaluate the morality of the actions and omissions that lie behind the body count in fatal accidents involving self-driving cars. My own evaluation of our regulatory system will depend on whether the regulators have allowed that body count to include incremental deaths resulting from premature deployment of self-driving technology, and on how openly they have disclosed their standard. I do not readily accept the proffered reasons to deploy technology that we reasonably believe is initially less safe than the average human driver, or to deploy in the face of an unknown. Without an open and robust public debate first taking place, I would not take the gamble. I suspect many others share my concern.

 

William H. Widen is a Professor at the University of Miami School of Law, Coral Gables, Florida.

 

Suggested citation: William H. Widen, Body Counts, Benchmarks & Teslas, JURIST – Academic Commentary, August 24, 2021, https://www.jurist.org/commentary/2021/08/william-widen-tesla-self-driving-crashes/.


This article was prepared for publication by Sambhav Sharma, a JURIST Staff Editor. Please direct any questions or comments to him at commentary@jurist.org.

Opinions expressed in JURIST Commentary are the sole responsibility of the author and do not necessarily reflect the views of JURIST's editors, staff, donors or the University of Pittsburgh.