What happens when artificial intelligence gets it wrong? From self-driving cars to medical tools, rapid advances in AI are raising new ethical and legal concerns.
University of Virginia professor David Danks is uniquely positioned to speak about what happens when AI fails. He joined the University in January with dual appointments in the Department of Philosophy and the School of Data Science. We sat down with him to discuss the recent AI boom and the ethical challenges it raises.
UVA professor David Danks holds appointments in the Department of Philosophy and the School of Data Science. (Contributed photo)
Q. What are your thoughts on accountability when AI makes a high-stakes mistake?
A. We’re facing a competition between two futures. One is a future in which we don’t solve this problem and instead just keep saying, “There always has to be a human who’s accountable.”
We’re starting to see some signs that this might be the future we’re moving toward, where a radiologist, for example, has to sign off on every AI diagnosis but isn’t given the time to actually second-guess it.
They’re essentially having to put their signature on diagnoses, and thereby be held accountable for any errors, even though they didn’t get to meaningfully contribute to the diagnosis. So, we’re solving the accountability problem, but only by making humans the perpetual scapegoats.
Q. How could accountability be addressed with AI systems?
A. A different future is one where we think carefully about issues of accountability and liability and recognize that the companies and organizations creating these systems should bear some accountability when the systems fail or go awry.
One way to do this would be through product liability law, which is very well established in essentially every country in the world.
If a company such as OpenAI does not want to assume liability, then they can put that into their contracts, and they can negotiate who bears the liability. This is a very standard sort of contract law.
We need something more than just liability, though, because increasingly, we’re seeing AI systems that act in surprising and novel ways without direct prompting by a human.
Q. When do you think the accountability issue will come to a head?
A. I think the place we’re going to see this play out first, or where it’s most likely to play out, is in fields like home and consumer robotics, and in things like autonomous vehicles, where software runs the car.
Imagine you purchase a self-driving car. Who bears the liability when it gets into an accident when you’re not in the car? Is it you, as the owner? Is it the company that created the self-driving car system? Or do we say, “No, you need separate self-driving insurance on top of your regular insurance so that when things go wrong, you’re covered?” This is all going to have to get worked out in the next five to 10 years.