Q&A: Who’s responsible when AI makes mistakes?

What happens when artificial intelligence gets it wrong? From self-driving cars to medical tools, rapid advances in AI are raising new ethical and legal concerns.

University of Virginia professor David Danks is uniquely positioned to speak about what happens when AI fails. He joined the University in January with dual appointments in the Department of Philosophy and the School of Data Science. We sat down with him to discuss the recent AI boom and the ethical challenges it raises. 


UVA professor David Danks holds appointments in the Department of Philosophy and the School of Data Science. (Contributed photo)

Q. What are your thoughts on accountability when AI makes a high-stakes mistake?

A. We’re facing a competition between two futures. One is a future in which we don’t solve this problem and instead just keep saying, “There always has to be a human who’s accountable.”

We’re starting to see some signs that this might be the future we’re moving toward, where a radiologist, for example, has to sign off on every AI diagnosis but isn’t given the time to actually second-guess it.

They’re essentially having to put their signature on diagnoses, and thereby be accountable for any errors, even though they didn’t get to contribute meaningfully to those diagnoses. So, we’re solving the accountability problem, but only by making humans the perpetual scapegoats.

Q. How could accountability be addressed with AI systems? 

A. A different future is one where we think carefully about issues of accountability and liability and recognize that the companies and organizations creating these systems should bear some accountability when the systems fail or go awry.

One way to do this would be through product liability law, which is very well established in essentially every country in the world.

If a company such as OpenAI does not want to assume liability, then they can put that into their contracts, and they can negotiate who bears the liability. This is a very standard sort of contract law. 

We need something more than just liability, though, because increasingly, we’re seeing AI systems that act in surprising and novel ways without direct prompting by a human. 

Q. When do you think the accountability issue will come to a head? 

A. I think the place where we’re most likely going to see this play out first is in areas like home and consumer robotics, and things like autonomous vehicles and software in cars.

Imagine you purchase a self-driving car. Who bears the liability when it gets into an accident while you’re not in the car? Is it you, as the owner? Is it the company that created the self-driving car system? Or do we say, “No, you need separate self-driving insurance on top of your regular insurance so that when things go wrong, you’re covered”? This is all going to have to get worked out in the next five to 10 years.

Much as we allow corporations in the United States to be legal persons, perhaps algorithms and systems need to start having some form of legal personhood, such that they can have insurance. 

If they do something wrong, there’s an insurance payout, or they might have property that could be seized as compensation when the AI makes a mistake.

Q. As these systems become more powerful, what should we do to make sure the tools are benefiting us?

A. The single biggest step that we can take collectively is actually to have an understanding of what we want these technologies to do.

The rhetoric and hype around AI emphasize the ways in which it will transform our lives, and companies are selling a very rosy picture of those transformations without people having a real understanding of the risks that are also there. 

And so, we’re stumbling toward a future that nobody really intended or had in mind.

Q. Do you think deploying this technology responsibly is possible, given the race that tech companies are in right now? 

A. I do believe that companies could be more responsible in how they develop and deploy these systems without bankrupting themselves. It’s massively more expensive to find and fix problems later than to address them early.

Most of the responsible AI practices that people have advocated for, myself included, are ultimately about knowing why you’re building whatever it is you are building.

I think a lot of the race narrative is actually self-generated by the companies, rather than reflecting actual economic pressures. 

Q. What advice do you have for individuals as we adjust to this changing technological landscape?

A. Perhaps the most important thing that people can do is be educated. Too many people use technology without thinking about why. Instead, we should all strive to think about whether AI will be useful for us and only use it if it will.
