AI has Hollywood on the ropes

OpenAI does not avoid copyrighted content; it puts the onus on studios to notify it of any violations and to opt out

Tilly Norwood, an AI-generated ‘actor’, reacts in an AI-generated image obtained by Reuters on October 1 2025. Picture: PARTICLE6/HANDOUT VIA REUTERS

A friend I’ve been working with on a script for a feature film recently sent me a series of clips made using Sora, OpenAI’s video-creation tool. Sora generates video from written prompts and, for those working in film, it’s a useful way to give a visual form to ideas on the page without spending years hustling for development funding.

Another friend, who runs a TV production empire creating and producing content for local television, regularly uses the tool for pitches to networks and streamers to show potential funders what his ideas might look like before they greenlight productions.

Sora is a quick, easy-to-use AI video generator that can create material resembling high-end, glossy Hollywood work, though it still cannot produce a full film without viewers noticing the difference. In advertising, where clients have traditionally forked out millions for short, glossily produced spots, Sora may pose a far more immediate threat: companies looking to slash costs and cut out the Madison Avenue men could theoretically produce ads by prompting an AI and receiving the results within minutes, slick, movie-like and ready to go to market.

The outcry over the debut of the world’s first completely AI-generated actress, Tilly Norwood, at a film festival in Switzerland last month is another alarm bell for the industry. If you can create your own performer who, unlike real ones, is subject to no restrictions beyond those you can imagine, why put up with the egos and high fees demanded by the real thing?

While tech companies like OpenAI have repeatedly paid lip service to the idea that they will not violate copyright in creating their products, the recent controversy surrounding the launch of Sora 2 shows they operate on a much more worrying principle. Winston Cho, writing in The Hollywood Reporter, described it as “ask forgiveness, not permission”.

When it came to Sora 2, OpenAI’s approach was not to avoid using copyrighted content or likenesses in the material the tool draws on, but to place the onus on Hollywood to notify the company of any copyright violations after the fact and to opt out of having its intellectual property used in the program.

Studios have to individually flag properties they don’t want Sora to have access to. OpenAI CEO Sam Altman faced a backlash from Hollywood on the release of Sora 2 earlier this month, but the company’s policy requires those claiming copyright violations to spend money and time informing OpenAI of which properties they don’t want used, while OpenAI devotes little or no resources to internally flagging potential copyright violations.

When it launched, Sora 2 featured clips that included recognisable images and characters from movies, TV shows and video games, including Bob’s Burgers, SpongeBob SquarePants, Pokémon and Grand Theft Auto, before agencies protested and scrambled to ensure that they opted out of the program.

According to one studio exec interviewed for The Hollywood Reporter piece, “This was a very calculated series of moves [Altman] made. They knew exactly what they were doing when they released this without protections and guardrails.”

As agencies seek legal advice and consider whether to pursue legal action, Sora 2 continues to rocket up the charts of free apps available for download and generate content that spreads across the internet and social media like a virus. Meanwhile, the industry is trying to create a strategy to protect itself, after the fact.

The tech industry’s “opt out”, or fair game, policy was, as lawyer Rob Rosen told The Hollywood Reporter, a “false bargain where they can do this unless you opt out. And if you didn’t, it’s your fault.”  For his part, Altman has written on his blog that OpenAI “is hearing from a lot of rights holders who are very excited for this new kind of ‘interactive fan fiction’ and think this new kind of engagement will accrue a lot of value to them”.  

Hollywood is in a Catch-22: if studios decide to sue OpenAI for copyright violation, they run the risk of being shut out of lucrative, copyright-approved partnerships with the company in future. If they do nothing, AI programs will continue to produce content, over which the studios have no control, generated by models clearly trained on their work.

While some mega media corporations that own studios could decide to develop their own video-generation products that allow users to create content based on their products and intellectual property for a fee, not all companies can afford to spend the money and time on such a strategy. What’s more, performers and directors, whose work is usually owned by the companies they make it for, seem to have few options available to them.  

For now, as Cho points out in his article, the situation is tense and uncertain and, once again, “Hollywood is fighting a battle with a well-capitalised AI industry, and it’s losing ground, much like what happened at the dawn of the internet when it was too slow to combat piracy”.

The future’s not ours to see, but increasingly it’s OpenAI and Silicon Valley’s world, and we just live in and with it.
