
'The Wizard of Oz' gets an AI makeover

The Wizard of Oz at Sphere is not so much a remake as a reinvention, where AI both enhances the movie and expands its reality.

Google CEO Sundar Pichai. Picture: REUTERS/ STEPHEN LAM

It begins with a tornado. Not the one that swept Dorothy out of Kansas in 1939, but a digital whirlwind that greeted an invitation-only audience for a preview inside the Sphere in Las Vegas — a 112m-tall dome where technology and storytelling collide with AI. The Wizard of Oz at Sphere is not so much a remake as a reinvention, where AI both enhances the movie and expands its reality.

Business Times was there to witness a world first: a Hollywood classic transformed into a 270-degree immersive experience, using AI to go beyond the frame, literally and figuratively. The preview was the curtain-raiser to the Google Cloud Next conference in Las Vegas.

“I’ve never felt so large and so small simultaneously — it’s like I’m in a quantum state,” said Google and Alphabet CEO Sundar Pichai, as he opened the event from a tiny stage beneath a giant screen.

Pichai described the project as a culmination of creativity and computation: “There were a lot of hard, technical challenges to solve. Over the last year, we’ve been pushing the boundaries of our frontier models — including Gemini and our state-of-the-art video generation model, Veo.”

The AI went beyond the standard approach of upscaling the original visuals, which would merely have converted low-resolution images into sharper copies of themselves. Instead, it reimagined them. Every scene was reconstructed using Google DeepMind’s generative models, which filled in details that the original film never captured. Where the 1939 cameras cut off characters at the knees or at the edges of the frame, AI now paints them in full. When Dorothy walks down the Yellow Brick Road, one doesn’t see her from only one angle. The entire environment unfolds around the viewer.

Ben Grossman, CEO of Magnopus studio, one of the creative partners on the project, said it was never about stretching the film. “The original movie is in a rectangle. Inside the Sphere, it would be the biggest rectangle in the world, but it would still be a rectangle,” he said. “We had to go beyond that. We needed to complete the characters that were cut off outside the field of view of the original cinema production. We had to generate performances that were never captured on camera.”

The process, known as outpainting, allowed AI models to generate missing body parts, facial features and costume details by referencing the film itself.
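The mechanics of outpainting can be sketched in a few lines. This is a minimal illustration, not the production pipeline: the frame dimensions, function name and fill step are all assumptions, and in a real system the masked canvas would be handed to a generative model (such as a diffusion model) to synthesise the missing regions.

```python
# Minimal sketch of the outpainting setup: the original frame is kept
# verbatim, and a mask marks the regions the generative model must invent.
import numpy as np

def prepare_outpaint_canvas(frame: np.ndarray, target_w: int):
    """Centre a frame in a wider canvas; return (canvas, mask).

    mask is True wherever the generative model must create new pixels.
    """
    h, w, c = frame.shape
    canvas = np.zeros((h, target_w, c), dtype=frame.dtype)
    mask = np.ones((h, target_w), dtype=bool)
    left = (target_w - w) // 2
    canvas[:, left:left + w] = frame   # original pixels preserved untouched
    mask[:, left:left + w] = False     # ...and excluded from generation
    return canvas, mask

# A 4:3 frame widened toward an immersive aspect ratio (illustrative sizes):
frame = np.random.randint(0, 256, (1080, 1440, 3), dtype=np.uint8)
canvas, mask = prepare_outpaint_canvas(frame, target_w=3840)
# model_fill(canvas, mask) would run here in a real pipeline (hypothetical call)
```

The point of the mask is fidelity: the model is constrained to extend the scene outward while the pixels the 1939 cameras actually captured stay exactly as shot.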


“Using generative AI techniques, we fine-tuned the models on the original Wizard of Oz data. So if you wanted to see Dorothy’s shoes in a scene where her feet weren’t visible, the AI knew how to show them,” said a DeepMind researcher.

Ralph Winter, head of physical production at Sphere, demonstrated how the system worked. “This is where the model is learning from the training from The Wizard of Oz data. It takes a video input and extends it. We’re not just upscaling pixels — we’re asking the AI to imagine what should be there, based on the intent of the filmmakers.”

Maintaining that intent was paramount. “You can’t just take it and do anything with it,” said producer Jane Rosenthal. “This is part of our cultural history. The key was to maintain the integrity of the original filmmakers.”

What resulted can hardly be called an upgrade. If anything, it is a new cinematic category.

Sphere’s executive chair, Jim Dolan, who had shepherded a series of boundary-shifting productions at the Sphere, put the venue in context: “This is not just a building. You go in and you’re inside the content. It’s experiential. We needed a property that would take advantage of every capability inside the venue.”

The capabilities were put to the test. Every close-up of Judy Garland’s face had to be enhanced using super-resolution models, trained on footage and Technicolor references from archival sources.
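The "upscaling pixels" baseline the team says it moved beyond can be sketched for contrast. Plain interpolation, as below, only enlarges the pixels already present; a learned super-resolution model instead predicts plausible new detail. The function name and toy image here are illustrative assumptions, not anything from the production.

```python
# Nearest-neighbour upscaling: bigger, but with no new information.
import numpy as np

def nearest_upscale(img: np.ndarray, scale: int) -> np.ndarray:
    """Enlarge an image by an integer factor: each source pixel
    simply becomes a scale x scale block of identical pixels."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

# A tiny 2x2 single-channel 'frame' blown up 4x — bigger, not sharper.
frame = np.array([[[10], [20]],
                  [[30], [40]]], dtype=np.uint8)
big = nearest_upscale(frame, 4)
```

A generative super-resolution model trained on period footage and Technicolor references, by contrast, can add texture (skin, fabric, film grain) that was never in the scanned pixels at all, which is why preserving the filmmakers' intent became the central constraint.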

“We even tracked down a Technicolor notebook from a cameraman on Gone With the Wind to work out how the lenses operated at the time,” said one researcher.

The AI “shoot” meant the entire film had to be broken down into individual shots, with new detail added and missing visuals invented — from eyelashes to background props. “It’s a little bit of an archaeological dig,” said Rosenthal. “But we had to do it with AI.”

Pichai described the outcome as “a glimmer into the future of what’s possible with AI in media and entertainment”. He pointed out that even 12 months ago such an undertaking would not have been feasible. “The capabilities just weren’t there.”

The feat was made possible by Google Cloud infrastructure, designed not just for scale but for speed.
