A furious political leader screaming a message of hatred at an adoring audience. A child crying over the massacre of her family. Men in prison uniforms, starved to the edge of death because of their identity. As you read each sentence, specific images likely appear in your mind, seared into your memory and our collective consciousness by documentaries and history books, news media and museum visits.
We understand the meaning of important historical images such as these, images we must learn from in order to move forward, largely because they captured something true about the world at moments when we were not there to see it with our own eyes.
As archival producers for documentary films and co-directors of the Archival Producers Alliance, we are deeply concerned about what could happen if we can no longer trust that such images reflect reality. And we are not alone: before this year's Oscars, Variety reported that the Motion Picture Academy is considering requiring contenders to disclose their use of generative AI.
Although such disclosure may be important for feature films, it is clearly crucial for documentaries. In the spring of 2023 we began to see synthetic images and audio used in the historical documentaries we were working on. Without transparency standards, we fear this blending of the real and the unreal could endanger the nonfiction genre and the indispensable role it plays in our shared history.
In February 2024, OpenAI previewed its new text-to-video platform, Sora, with a clip called "Historical Footage of California during the Gold Rush." The video was convincing: a gentle stream full of the promise of riches. A blue sky and rolling hills. A thriving town. Men on horseback. It looked like a western in which the good guy wins and rides off into the sunset. It looked authentic. But it was fake.
OpenAI presented "Historical Footage of California during the Gold Rush" to show how Sora, officially released in December 2024, creates videos from user prompts using AI that "understands and simulates reality." But that clip is not reality. It is a haphazard blend of imagery, both real and invented by Hollywood, along with the historical biases of the industry and of the archives themselves. Sora, like other generative AI programs such as Runway and Luma Dream Machine, scrapes content from the internet and other digital material. As a result, these platforms simply recycle the limitations of online media and undoubtedly amplify its biases. Yet watching the clip, we understand how an audience could be fooled. Cinema is powerful that way.
Some in the film world have greeted the arrival of generative AI tools with open arms. We and others see it as something deeply troubling on the horizon. If our faith in the truthfulness of visuals is shattered, powerful and important films could lose their claim to the truth, even if they use no AI-generated material at all.
Transparency, something akin to the food labeling that informs consumers about what goes into the things they eat, could be a small step forward. But no AI regulation appears to be cresting the next hill, riding to our rescue.
Generative AI companies promise a world in which anyone can create audiovisual material. This is deeply troubling when it comes to representations of history. The proliferation of synthetic imagery makes the task of documentarians and researchers (safeguarding the integrity of primary source material, digging through archives, presenting history accurately) even more urgent. It is human work that cannot be replicated or replaced. One need only look at this year's Oscar-nominated documentary "Sugarcane" to see the power of careful research, accurate archival imagery and well-reported personal narrative to expose hidden history, in this case the abuse of First Nations children in Canadian residential schools.
The speed at which new AI models are released and new content is produced makes the technology impossible to ignore. While it may be fun to use these tools to imagine and experiment, what results is not a true work of documentation, of humans bearing witness. It is only a remix.
In response, we need robust AI media literacy for our industry and the general public. At the Archival Producers Alliance, we have published a set of guidelines, endorsed by more than 50 industry organizations, for the responsible use of generative AI in documentary film, practices that our colleagues are beginning to integrate into their work. We have also issued a call for case studies of AI use in documentary film. Our goal is to help the film industry ensure that documentaries deserve that title, and that the collective memory they inform is protected.
We don't live in a classic western; no one is coming to save us from the threat of unregulated generative AI. We must work individually and together to preserve the integrity and diverse perspectives of our real history. Accurate visual records not only document what happened in the past, they help us understand it, learn its details and, perhaps most important in this historic moment, believe it.
If we no longer accurately preserve the highs and lows of what came before, the future we share may prove to be little more than a remix, too.
Rachel Antell, Stephanie Jenkins and Jennifer Petrucelli are co-directors of the Archival Producers Alliance.