Generative AI (Gen AI) has developed so quickly and made such strides in its ability to imitate the real in recent years that it may not seem surprising that the latest cinematic front to worry about is its potential consequences for documentary film.
At a talk held at the genre’s premier annual gathering — the International Documentary Film Festival Amsterdam (IDFA) — documentarians discussed the future of truth on film in the age of AI and the threats it poses to future viewers’ ability to distinguish fact from fiction.
Warnings about AI tools and their implications for the medium have been sounded in the documentary world for longer than most. Concerns were first raised in 2021, when Roadrunner, a film about TV globetrotter Anthony Bourdain, courted controversy after it emerged that the filmmakers had failed to disclose that several lines of Bourdain’s voiceover were created by AI software trained on samples of his voice, made to sound as if he had actually recorded them before his death.
In 2023 the Archival Producers Alliance wrote an open letter in which the members called for “responsible use of technology in documentary filmmaking” and advocated for “transparency and the establishment of best practices to ensure that trust with viewers remains intact”.
As Alissa Wilkinson of The New York Times wrote in a recent article, the open letter pointed out many shocking examples of documentary films that had broken that trust by using artificially created historical voices presented as if audiences were “hearing authentic primary sources, when they were not”, AI-generated “historical images”, “fake newspaper articles” and “non-existent historical artefacts”.
While most of us may like to think we’re smart enough to tell fakery from truth when it comes to artificially generated images, in recent years the tools for producing them have become far more sophisticated and, most worryingly, accessible to anyone with a phone. That means millions of deepfake, AI-generated videos and photographs are circulating in an unregulated environment, posing serious headaches for future archivists, audiences and documentary filmmakers.
While documentary viewers long ago came to normalise the use of re-enactments in the genre, traditionally these scenes are either shot in a stylised way that makes clear they were not captured in the moment, or labelled as re-enactments. Their acceptance is part of a social contract between filmmakers and their audience: as long as they are marked out as distinct from real, archive and interview footage, no-one complains. The problem arises when footage generated from a prompt plugged into a tool begins to look and sound like genuine archival material.
How, then, can we trust that the slew of true-crime and other documentaries — produced with such speed that they often follow attention-grabbing global headlines by mere months — aren’t guilty of cutting corners and presenting fakery as truth?
The solution proposed last year by the Archival Producers Alliance, in a set of best practices, is that any use of AI should be clearly indicated as such, whether that’s to create a voice from written text, enhance an old photo or clean up a scratchy archival recording. These are simple ways to keep the historical record and machine creations separate and to guard against “muddying the historical record”.
That may seem like a neat, equitable solution to the documentary AI conundrum, but as Wilkinson observes there is another, stickier and bigger problem that AI raises not just for documentaries but “for all of us”. It’s a phenomenon known as the “liar’s dividend”, a term first coined by two law professors in 2019.
As audiences and society become more aware of how easy it is to create fake videos, the ability of those caught in compromising situations on video to claim that such videos are AI-generated also becomes easier. Knowing that AI can increasingly create convincingly real images and videos makes it harder to dismiss such claims, leading to a situation in which the once seemingly incontrovertible truth of tape cannot be trusted anymore.
As Wilkinson describes it: “Every video is now Schrödinger’s video: it’s both real and not real”. Taking the idea of the “liar’s dividend” one step further, the logical conclusion may be that “no claim that a video is real will ever be persuasive”.
The old idea that an unearthed video could surface to counter official narratives and untruths, combined with the ease of shooting and sharing video on a smartphone, points to a worrying future in which fake videos may be mistaken for genuine historical archive in years to come.
Because AI video generators like OpenAI’s recently released Sora 2 are trained on millions of hours of existing documentaries, their ability to mimic the visual vocabulary of the genre makes it more difficult to tell the real from the unreal. Even films that rely on seemingly impossible-to-deny archives, like police bodycam footage or soldier camera recordings, can, under the “liar’s dividend” logic, be dismissed by those refusing to acknowledge the ugly truth they are meant to reveal.
There are plans afoot among documentary makers to codify a new set of best practices to counteract the rising threat that AI-generated imagery poses to truth and to their livelihoods: protecting genuine archive material, developing a technical standard that can certify the source of online materials, and introducing new means of authenticating footage.
Whether they will be able to protect what’s left of the idea of “truth” in a world where anyone can make their own, only time, and the speed of developments in technology, will tell.