In case you missed it, last week's disclosure by film director Morgan Neville that he used artificial intelligence to recreate the voice of deceased celebrity chef Anthony Bourdain in parts of his new documentary film, "Roadrunner," has caused a stir.
Some people believe Neville had an ethical obligation to inform the audience that he had done so. Others believe it was well within his artistic license.
Then there are those stuck in the middle, not sure where the line in the sand should be, or if there should even be a line at all.
In the film, Neville uses the AI-created voice to read a line from an email Bourdain wrote to a friend.
On one hand, the words coming out of Bourdain's mouth weren't made up out of thin air, which is sometimes the case when AI is used for nefarious purposes.
On the other, the director tried to give the impression that the voice was Bourdain's. There's also the fact that Bourdain couldn't give consent for his voice to be re-created.
University of Virginia media studies associate professor William Little, who teaches a course centered on the use of AI and film, said he understands the controversy. One thing to remember, he said, is that the manipulation of voice in film goes back a long way.
"The voiceover, where you can have someone who's actually dead narrating the film, who's speaking from beyond the story or beyond the grave and looking back on events, is a fairly typical film noir technique," Little said. "One could argue that Bourdain's AI-manipulated voice represents a version of a voiceover. He is speaking, after a fashion, from beyond the grave."
"Film will also use the device of the voice that is offscreen; you can't locate it. It might be a ghostly voice. Coincidentally, a perfect example of such a voice is the voice of the most famous AI in cinema, HAL in '2001: A Space Odyssey.' … You don't see lips moving, there's no body attached to it. You kind of sense where the voice is coming from, but you don't know exactly."
Still, Little understands the feeling of being duped, which some people who have watched the Bourdain film said they have experienced.
For a deeper dive into the debate, UVA Today caught up with Little, whose research interests focus on genre study of film, history and philosophy of film, and film and critical race studies.
Media studies associate professor William Little teaches a course centered on the use of artificial intelligence and film. (Photo by Sanjay Suchak, University Communications)
Q. You haven't seen the film, but from what you've heard about how the director used AI to mimic Anthony Bourdain's voice at certain junctures, never disclosing that he did so until after the fact and in a very casual way, what was your reaction?
A. I find this whole phenomenon rather fascinating, as it's bound up with a very complicated dynamic of betrayal. For me, that's the richest aspect of this particular issue.
In one regard, there's the idea that the director betrayed the audience's confidence in what they are hearing. Since Bourdain didn't actually vocalize the words at issue, the director broke a tacit contract with the audience.
Likewise, in some quarters there is concern that the director betrayed Bourdain and his family. He couldn't get Bourdain's consent, and there may be some question about whether he received the family's consent. So, I understand that audiences might feel betrayed. I'm not sure I totally agree with that point of view, but I understand it.
I would suggest that such a feeling of betrayal may be linked to a persistent, dystopian narrative about AI as constituting a threat of betrayal. It's the idea that AI, not to mention those who develop and implement it, is an untrustworthy tool. In this narrative, AI proves unfaithful to the purpose of improving or enhancing human experience. AI proves disloyal. It is turned against human beings, or, as numerous sci-fi films have dramatized, it turns on human beings.
There is evidence in the real world to support this suspicion about AI. For instance, the use of AI in surveillance, policing and job recruitment has been marked by racial bias.
All this is related, in an interesting way, to the one line in the Bourdain documentary that the director acknowledged was manipulated by AI. Bourdain says to a friend, "You're successful, and I'm successful and I'm wondering, 'Are you happy?'" I would argue that this line is also bound up with betrayal. In a sense, Bourdain is saying, "Life is letting me down somehow. It's not what it could or ought to be." This, to me, is much more fascinating than coming down firmly on one side or the other of the controversy about the director's use of AI.
Q. How does the fact that this was a documentary play into everything? A documentary is obviously much different than an action movie, right? You sort of have an expectation that what you are watching is "real." Should there be different expectations based on genres?
A. Good question. I mean, no documentary is objective. It is a creative work of nonfiction. The filmmaker makes creative decisions at every turn, from the look of the credits to editing, framing, lighting, sound and use of voice. The assumption that a documentary is real begs to be examined carefully. It depends on what you mean by "real."
Bourdain himself was extensively involved in documentary television productions (cooking show; travelogue show) that constitute a form of reality TV, a genre in which what constitutes the real is thoroughly subject to manipulation. To his credit, Bourdain acknowledged that his television work involved acting on his part, that he was performing a role.
So, it would be one thing if Neville had made up dialogue that Bourdain never produced anywhere, at any point, and claimed that it was Bourdain. That, I think, would have been a problem. But he did "say" those words. He delivered them through the medium of writing, not speech.