Welcome to Today’s AIography!

Netflix just open-sourced an AI tool that does something the VFX industry has been charging real money for, and almost nobody is covering it correctly. OpenAI quietly retired Sora 1 the same week it opened Sora 2.0 Beta to creators. Reuters made the Bollywood AI story global. ByteDance is rolling out Seedance 2.0 to a select group of creators through a soft-launch creator program. And a working AI filmmaker just revealed that he directed two episodes of an Amazon Prime show's second season, with Kling AI hidden in shots most viewers will never spot. Let's dive in!

In today’s AIography:

  • Netflix VOID Erases Objects From Video, Physics and All

  • Sora 1 Is Dead. Sora 2.0 Beta Just Opened to Creators.

  • Reuters Just Made Bollywood AI a Global Story

  • Dreamina's Seedance 2.0 Soft Launch Is the Real "Going Mainstream" Story

  • House of David Season 2 Hid Kling AI Inside an Amazon Prime Drama

  • AIography's AI Filmmaking & Content Creation Directory

  • Short Takes

  • One More Thing…

Read time: About 8 minutes

THE LATEST NEWS

Netflix VOID Erases Objects From Video, Physics and All

TL;DR: Netflix's AI team, working with researchers from INSAIT at Sofia University, just open-sourced VOID (Video Object and Interaction Deletion). It removes objects from video clips and automatically corrects the physical interactions those objects had with the rest of the scene. Take out a person holding a guitar, and the guitar falls naturally because gravity takes over. No existing video object remover does this. The whole thing is on Hugging Face under an Apache 2.0 license, free for commercial use.

Key Takeaways:

  • VOID's value is causal reasoning about scenes, not just pixel filling. Existing video inpainting tools (ProPainter, DiffuEraser, Runway's object removal, MiniMax-Remover, ROSE, Gen-Omnimatte) erase the object and patch the background. They don't reason about what the object was doing. VOID does, and beats every one of those tools head-to-head on the paper's benchmarks.

  • The technical innovation is the quadmask. Instead of a binary keep-or-remove mask, VOID uses a 4-value mask that encodes the primary object (remove), overlap regions, affected regions (things that will fall or move when the object is gone), and the background to keep. This gives the diffusion model a structured semantic map of physical causality.

  • The training data is the genius part. Real paired before-and-after video data doesn't exist at scale, so the team built it synthetically. They used HUMOTO (human-object interactions rendered in Blender with physics simulation, where you remove the human from the simulation and re-run physics forward to get a physically correct counterfactual) and Google Kubric for object-only collisions. Trained on 8x A100 80GB GPUs.

  • VOID is built on top of Alibaba's CogVideoX-Fun-V1.5-5b-InP base model, a 5B parameter 3D Transformer. The pipeline also uses Meta's SAM2 for segmentation and Google Gemini 3 Pro for scene analysis. So this is a Netflix-led project that integrates research output from Alibaba, Meta, and Google. That's a story by itself.

  • Practical specs: requires a GPU with 40GB+ VRAM (A100 territory), default resolution 384x672, max 197 frames per clip, two-pass inference where Pass 1 handles most clips and Pass 2 fixes object morphing on longer sequences. Apache 2.0 license. Free to use commercially. 529 GitHub stars in the first ten days.
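To make the quadmask idea concrete, here is a toy sketch. This is an illustration only, not VOID's actual mask format or API; the label values and the 8x8 scene layout are invented for the example:

```python
import numpy as np

# Toy illustration of the quadmask idea (not VOID's actual encoding).
KEEP = 0      # background to preserve untouched
REMOVE = 1    # the primary object being erased
OVERLAP = 2   # pixels where the object and an affected object meet
AFFECTED = 3  # regions that will move once the object is gone

quadmask = np.full((8, 8), KEEP, dtype=np.uint8)
quadmask[2:4, 3:5] = REMOVE    # e.g. the person to delete
quadmask[4:5, 3:5] = OVERLAP   # the hands gripping the guitar
quadmask[5:7, 3:5] = AFFECTED  # the guitar, which will fall

# A binary keep-or-remove mask collapses three of these roles into one
# bit; the quadmask keeps the causal roles distinct for the model.
binary_mask = quadmask == REMOVE
labels_present = np.unique(quadmask)
```

The point of the structure: the diffusion model is told not just *what* to erase, but which pixels are about to change as a consequence of the erasure.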

My Take:

This is the most quietly important release of the week and it's getting nowhere near the coverage it deserves. Object removal in video is one of those expensive, unglamorous post tasks that eats hours and produces visibly wrong results. VFX shops have been charging real money to fix shots where the obvious solution would be "just take that thing out," because the obvious solution creates physics problems that take weeks to fix by hand. VOID is the first system I've seen that actually understands what's hard about the problem. The quadmask is a different way of thinking about what you're asking the model to do. You're not telling it to fill in pixels. You're telling it which parts of the scene are about to change because of what you're removing. That distinction matters more than it sounds like it should.

The training pipeline is the part that made me sit up. Generating paired counterfactual videos by re-running Blender physics simulations after removing the actor is exactly the kind of synthetic data trick that makes you go "oh, of course, why didn't anyone do that years ago." It's also the kind of trick that's hard to copy. You need a physics engine, you need motion capture data, and you need someone to think of doing it that way.
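A one-dimensional toy version of that trick makes the structure obvious. The real pipeline re-runs Blender physics on HUMOTO scenes; everything below (the scenario, the numbers, the function) is invented purely for illustration:

```python
# Toy sketch of the counterfactual-pair idea behind VOID's training data:
# remove the "holder" from a simulation, re-run physics forward, and the
# before/after trajectories form one paired training example.

G = 9.8       # gravity, m/s^2
DT = 1 / 24   # one frame at 24 fps

def object_heights(start_height, frames, holder_present):
    """Height of a held object per frame, with or without its holder."""
    y, v = start_height, 0.0
    heights = []
    for _ in range(frames):
        if not holder_present:
            v += G * DT               # gravity acts once the holder is gone
            y = max(0.0, y - v * DT)  # fall, stopping at the floor
        heights.append(round(y, 3))
    return heights

# "Before": a guitar held 1.5 m up. "After": the holder is removed and
# physics is re-run forward, giving the physically correct counterfactual.
before = object_heights(1.5, 24, holder_present=True)
after = object_heights(1.5, 24, holder_present=False)
```

Scaled up with a real physics engine and motion-capture data, that (before, after) pair is exactly the supervision signal the paper describes.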

Watch what happens in the next thirty days. Someone is going to wrap this in a Resolve or Premiere extension, and the day that ships is the day a category of paid VFX work starts moving toward zero.

Sora 1 Is Dead. Sora 2.0 Beta Just Opened to Creators.

TL;DR: OpenAI confirmed this week that Sora 1 is retired, less than a year after launch. At the same time, Sora 2.0 Beta opened to a wider creator group with mixed first reactions. It's a single story being told in two halves, and most outlets have been running them as separate items, which has created real confusion about what's actually shipping.

Key Takeaways:

  • Sora 1 is officially gone. OpenAI is pushing existing users toward Sora 2.0 and treating the retirement as a graceful migration. The reality is the tool never recovered from the Disney deal collapse and the Chinese-model leapfrog earlier this year.

  • Sora 2.0 Beta is now in the hands of a broader creator pool. Early reactions are split. Some call the quality jump significant. Others say consistency and prompt adherence still trail Kling 3.0 and Seedance 2.0 in head-to-head tests.

  • Limitations are becoming clear in early use: tighter content filters than Sora 1, aggressive copyright blocking, and inconsistent results on photorealistic human faces. The character consistency issues that plagued Sora 1 appear partially addressed but not solved.

  • The strategic pivot: OpenAI is positioning Sora 2.0 as a creator tool first and a general-purpose video generator second. The framing matches the broader OpenAI shift toward owning the conversation rather than dominating raw model benchmarks.

  • The narrative gap: most outlets have still been writing "Sora is dead" headlines based on Sora 1's retirement, while creators with Sora 2.0 Beta access are quietly testing the new version. Expect a wave of corrected coverage in the next week.

My Take:

The Sora story has been muddled all week and that's nobody's fault but OpenAI's. Retiring a flagship product the same week you open the beta of its replacement is the kind of communications choice that only makes sense if you're trying to bury one story under another. I don't blame them for trying. Sora 1 was a high-profile stumble. The Disney walkaway, the Chinese tools eating its lunch, the New York Times calling it "underwhelming" within weeks of launch. Closing that chapter quietly while opening a new one is sound damage control.

The question for filmmakers is whether Sora 2.0 is actually competitive. From the early creator reactions I've seen, the answer is "maybe, eventually." Right now Kling 3.0 and Seedance 2.0 are the tools to beat, and Veo 3.1 is the free option that has gravity. Sora 2.0 needs a clear win in something to justify giving it another look. If that win is character consistency or creative control, OpenAI has a fighting chance. If it's just better-looking clips, that boat has sailed.

Reuters Just Made Bollywood AI a Global Story

Image via REUTERS/Priyanshu Singh

TL;DR: Reuters published "AI is rewiring the world's most prolific film industry" on April 4, and within 48 hours the story had been syndicated by The Hindu, The Hindu BusinessLine, Oman Observer, Caliber.az, and a dozen other outlets across South Asia, the Middle East, and Europe. This is the moment Bollywood's AI story stopped being a regional tech beat and became a global business story.

Key Takeaways:

  • The Reuters piece by Munsif Vengattil documents how Indian studios are deploying AI at a scale "unseen elsewhere": creating fully AI-generated films, using AI dubbing to release into 20+ regional languages simultaneously, and rewriting endings of test-screening flops with AI-generated alternatives.

  • Specific example cited: "Mahabharat: Ek Dharmayudh," an AI-generated series featured at Collective Artists, with characters Gandhari and Dhritarashtra rendered entirely with AI tools. This is not a tech demo. It's a real production going to audiences.

  • The economics matter. India produces roughly 1,800 films per year compared to Hollywood's 500. Mass dubbing into Hindi, Tamil, Telugu, Bengali, Marathi, and a dozen other languages used to be one of the largest single line items in a Bollywood production budget. AI dubbing collapses that cost.

  • The Reuters story has been picked up by outlets that don't normally cover AI: Oman Observer, Caliber.az, regional Hindu publications, business wire feeds across Asia. That's the syndication signature of a story that's about to hit Western trade press in the next news cycle.

  • Western AI filmmaking YouTube channels still aren't covering this. As of this writing on April 6, no major English-language AI filmmaking creator has published a deep-dive on Bollywood's AI adoption. The competitive gap is real.

My Take:

We saw the India angle two days ago in this newsletter, and at the time it felt like an ahead-of-the-curve call. Forty-eight hours later Reuters made it official, and the trade press cascade is already underway. Here's what's worth noticing: Reuters isn't a tech publication. When the wire services run a story like this, it's because their financial analysts are seeing real money move.

Bollywood isn't experimenting with AI to be trendy. They're doing it because the language fragmentation in Indian cinema makes localization the single biggest cost on every production, and AI dubbing solves it. That's a business problem with a technical solution, which is the only kind of AI story that actually matters.

Bollywood is now rewriting endings with AI, cutting production schedules, mass-dubbing films into a dozen languages, and shipping at a pace nobody else is attempting. Hollywood has spent the last year arguing over whether AI should touch a movie, and the argument still isn't settled. India spent the same year shipping movies.

The interesting question isn't whether they're right. It's what the math looks like when one country decides the debate is over and the other hasn't finished arguing.

Dreamina's Seedance 2.0 Soft Launch Is the Real "Going Mainstream" Story

TL;DR: ByteDance is rolling out Seedance 2.0 through Dreamina's Creator Partnership Program (CPP). Multiple working AI filmmakers got early access this week and are publicly posting test reels. The tool is "rolling out in select countries and regions only." This is the actual mechanism behind the "Seedance going mainstream" narrative everyone has been writing about: a coordinated, geo-fenced creator program rather than an open release.

Key Takeaways:

  • @JeffSynthesized (8.7K followers, working AI filmmaker) posted his Dreamina Seedance 2.0 flight test on April 5: "I joined Dreamina AI CPP and got early access to use Dreamina Seedance 2.0! This is my flight test. Just a heads-up: it's currently rolling out in select countries and regions only." 111 likes, 15 retweets, 5,700 views in the first 32 hours. Real demo footage attached.

  • @Uncanny_Harry (14.5K followers) and @Magiermogul are also in the program and have posted their own early test reels. Uncanny_Harry's announcement: "Cinematic AI action has arrived! I joined Dreamina AI CPP and got early access to use Dreamina Seedance 2.0! Dreamina Seedance 2.0 is now available on both Dreamina AI WEB and APP."

  • @TheoMediaAI (Theoretically Media, 11.8K followers) covered the rollout in a YouTube video framed as a "Sorta Release." That "Sorta" framing is the key tell. This isn't a press release and a download link. It's a wave of partnership-program access landing in select creators' hands at the same time, with everyone posting demos in coordinated fashion.

  • The CPP appears to be ByteDance's way of building public awareness for Seedance 2.0 outside traditional press cycles. Pick the creators you trust, give them early access, let their audiences see real demos, build the narrative organically. It's the same playbook Apple has used for years with iPhone reviewers, applied to AI video.

  • Practical specs from the demos already public: 15-second clips at cinematic quality, multi-input control (you can upload up to 12 reference assets), CapCut integration via Dreamina's "open board" canvas for storyboarding. The same Seedance 2.0 underlying model, with Dreamina as the consumer-facing product wrapper.

My Take:

The headlines have been calling Seedance 2.0 a "mainstream moment" for the last few weeks, and the narrative has been right but the explanation has been wrong. There hasn't been a single tipping-point post or article. What's actually happening is that ByteDance built a creator partnership program, picked credible AI filmmakers, gave them access, and let them post organically. Each demo lands as if it's an independent discovery. They're all part of the same coordinated rollout.

That's a smarter distribution play than buying ads or running press tours. Your creator program is only as good as the creators you can get, which makes it harder to copy than it looks. Watch which AI filmmakers Dreamina is picking, because that list is essentially ByteDance's bet on who matters in the space.

If you cover AI video and you're not in the program, that's probably worth a brief "should I be?" moment. Or maybe it's worth a brief moment of relief that you're not in someone's marketing apparatus. Both reactions are honest ones.

House of David Season 2 Hid Kling AI Inside an Amazon Prime Drama

TL;DR: Working AI filmmaker Jeff Synthesized revealed last week that he directed episodes 4 and 5 of season 2 of House of David, the Amazon Prime biblical drama, after spending months in Greece on the production. Buried in his announcement: "For those who know you might even spot some @Kling_ai, for those who don't you'll never know. This is the future of entertainment." A Hollywood show, on a major streaming platform, with hidden Kling AI shots that most viewers will never identify.

Key Takeaways:

  • The post: Jeff Synthesized directed episodes 4 and 5 of House of David season 2. House of David is an Amazon Prime original biblical drama about the rise of King David. Season 2 dropped earlier this month. The use of Kling AI was not announced in any official press materials.

  • The phrasing matters. "For those who don't, you'll never know" is the entire AI-in-production story compressed into a single sentence. The promise of the technology has always been invisibility. The shots that work are the shots you can't tell are AI. By that test, Kling 3.0 is now passing in real productions on real streaming platforms.

  • The post got 334 likes, 29 retweets, 29,496 views, with comments mostly from other working AI filmmakers congratulating him. No major trade press coverage. No Variety story. No Hollywood Reporter follow-up. The disclosure happened in plain sight on X and the trades didn't notice.

  • The case-study arc here is what to track. House of David is not an experimental short. It's a multi-million-dollar streaming production with episodes shot in Greece, a full crew, and a major distribution deal. Kling AI was apparently brought in for specific shots that were either too expensive or too dangerous to capture conventionally. That's the real "AI in filmmaking" story, not the AI-only short films that get most of the press.

  • The pattern this fits into: every disruption in production technology has followed the same arc. The new tool starts as an experiment, gets hidden inside one or two shots in a real production, then quietly becomes the default for that specific use case before anyone notices. Digital intermediate, motion capture, virtual production, and now AI video generation. The "hidden in a real show" stage is where AI video is right now.

My Take:

This is the part of the AI-in-filmmaking story that nobody is telling correctly. Most of the press coverage focuses on either fully-AI experimental shorts (which have niche audience appeal) or AI tool launches (which are interesting for a week and then forgotten). What actually changes the industry is what happens when AI tools get used invisibly inside conventional productions. That's where the budget impact lives. That's where the workflow integration lives. That's where the editor-and-cinematographer level adoption lives.

Jeff Synthesized's House of David disclosure is a small moment in a huge story. There are almost certainly other shows and films right now using AI video tools for specific shots without disclosing it, because there's no upside to disclosing and a meaningful downside in audience perception. The first major studio that openly announces "we used Kling for shot 47 because we couldn't afford a helicopter that day" will get hammered in the press. The first studio that does it quietly will save money and never tell anyone. Guess which one is more common right now.

The case-study arc to track over the next year is "where is AI hiding inside the regular productions you already watch?" The answer, increasingly, is everywhere.

ESSENTIAL TOOLS

AI Filmmaking & Content Creation Tools Database

Check out the Alpha version of our AI Tools Database. We will be adding to it on a regular basis.

Got a tip about a great new tool? Send it along to us at: [email protected]

SHORT TAKES

ONE MORE THING…

Video of the Week

"AI is rewiring the world's most prolific film industry" - Reuters

This week's pick isn't a creator video. It's the Reuters piece on Bollywood's AI adoption, embedded with footage of "Mahabharat: Ek Dharmayudh" and clips from other Indian AI productions. Watching this back-to-back with the average Western AI filmmaking YouTube tutorial is a useful exercise in perspective. The Reuters footage shows real productions, going to real audiences, in a country that produces more films per year than any other. The Western tutorials show creators experimenting with prompts. Both have value, but only one is currently moving the global film industry. If you only have time for one video this week, watch this one.

Hiring in 8 countries shouldn't require 8 different processes

This guide from Deel breaks down how to build one global hiring system. You’ll learn about assessment frameworks that scale, how to do headcount planning across regions, and even intake processes that work everywhere. As HR pros know, hiring in one country is hard enough. So let this free global hiring guide give you the tools you need to avoid global hiring headaches.

What did you think of today's newsletter?

Vote to help us make it better for you.


If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.

AIography may earn a commission for products purchased through some links in this newsletter. This doesn't affect our editorial independence or influence our recommendations—we're just keeping the AI lights on!
