Welcome to Today’s AIography!
Good morning, AI filmmakers.
Hollywood drew a line for the 99th Academy Awards. AI can't act. AI can't write. The rule does not touch editing or post-production. Mumbai is going the other way. Shekhar Kapur is directing an AI-native sci-fi series for a studio that says eighty percent of Indian films already use AI in pre-vis. Netflix posted a Product Manager role for AI Video at up to $545,000 a year, scoped from pitch to post. The pros stopped picking one video model and started routing each shot. And Google I/O is two weeks out.
In today's AIography:
The Oscars 2027 Rules and What They Mean for Post
Netflix Will Pay Up to $545,000 for an AI Video Product Manager
Shekhar Kapur Will Direct an AI-Native Sci-Fi Series for Mumbai's Studio Blo
The New Pro Move: Routing Shots Between Veo, Kling, Runway, and Seedance
Google I/O Lands May 19. The Veo Update Is the One to Watch.
AIography's AI Filmmaking & Content Creation Directory
Short Takes
One More Thing…
Final Thoughts
Read time: About 8 minutes
THE LATEST NEWS
TL;DR: The Academy of Motion Picture Arts and Sciences announced rule changes on May 1, 2026, for the 99th Academy Awards. Acting categories now only consider performances "credited in the film's legal billing and demonstrably performed by humans with their consent." Writing categories require human authorship. The Academy also reserved the right to ask any film to verify its AI usage. Editing, cinematography, sound, and the rest of the post-production categories were not addressed. This is the first time a major industry awards body has put a human-only fence around specific creative categories.
Key Takeaways:
Acting and writing get human-only locks. Acting needs legal billing plus a person on set, with consent. Writing needs a human author. No middle ground on either.
Editing, post, sound, and craft categories are untouched. AI tools can still assist throughout the cut. The eligibility line covers two specific categories, not the whole production.
The Academy can ask. Films can be required to verify their AI usage. The mechanics of that verification have not been spelled out, which means producers and union counsel will be the first to map it.
Announced May 1, broader coverage May 2. TechCrunch, the AP wire, and the Baltimore Chronicle all picked up the rule change inside 48 hours. The conversation is widening from "what's banned" to "where the eligibility line actually sits."
The 99th Academy Awards take place in March 2027. That's the cycle these rules cover. Productions in post right now are the first ones inside the new rules.
My Take:
Headlines are reading this harder than the rule actually is. The Academy did not ban AI from filmmaking. They blocked AI from acting and writing nominations. If you cut, color, mix, supervise post, or do VFX, nothing in the eligibility language touches your craft. Your AI-assisted workflows are still eligible in your categories.
What changed is the line for above-the-line credit. If you used an AI performance for a real role, that performance can't get nominated. If an AI-written draft was the working script, that screenplay can't get nominated. The cutting-room consequence is metadata. You will be asked, in writing, what AI was used and where. Producers will start passing that question down to every department. The skill that becomes hire-able this year is being able to answer it cleanly. Which tools, which shots, which steps, which version. The shops that already track that paperwork will move first.
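One way a post house might keep that paperwork answerable is a simple per-department usage log. A minimal Python sketch, assuming nothing about any official Academy format; the field names, tool names, and shot IDs here are hypothetical placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class AIUsageEntry:
    tool: str                 # hypothetical example: "Kling"
    version: str              # hypothetical example: "3.0"
    step: str                 # pipeline step: "pre-vis", "edit", "vfx", ...
    shots: list = field(default_factory=list)  # shot IDs touched

def usage_report(entries):
    """Group entries by pipeline step so a producer can answer, in writing:
    which tools, which shots, which steps, which version."""
    report = {}
    for e in entries:
        report.setdefault(e.step, []).append(
            f"{e.tool} {e.version}: shots {', '.join(e.shots)}"
        )
    return report

# Illustrative log for a two-department cut.
log = [
    AIUsageEntry("Kling", "3.0", "pre-vis", ["012", "014"]),
    AIUsageEntry("Runway", "Gen-4.5", "edit", ["033"]),
]
```

The point is not the code; it's that the answer to "what AI was used and where" becomes a query, not a scramble.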

Image generated with ChatGPT Image 2
TL;DR: Y.M. Cinema Magazine reported May 3 that Netflix is hiring a Product Manager for AI Video at $310,000 to $545,000 a year. The role is in Los Angeles with monthly travel to Los Gatos. Netflix's own description of the work covers "pitch to post," every stage of content production. The job listing names directors, editors, colorists, VFX artists, and cinematographers as the people the role serves.
Key Takeaways:
Salary band $310K to $545K. That's top-of-band Netflix product leadership, not a research-side experiment. The number tells you how Netflix prices the job.
"Pitch to post" scope. The role covers development, pre-production, production, post, VFX, finishing, and delivery. Not a niche tool team. A full-stack workflow team.
The customers in the listing are working creatives. Directors, editors, colorists, VFX artists, cinematographers. Netflix is hiring someone to ship tools that production crews actually use.
The brief calls out "model development to real filmmaking workflows." Netflix wants someone who can translate AI research into tools that hold up on a real shooting schedule.
Location matters. LA-based with monthly Los Gatos travel says the production crews and the engineering teams are close enough to ship together.
My Take:
Salary bands tell you what an organization actually believes. Netflix is paying half a million dollars for one person to own AI video product. That's not a hedge. That's a planted flag.
The "pitch to post" framing is the part that lands for working pros. Last year, the AI conversation at most studios was siloed inside a research lab or VFX. This listing puts AI video product management at the center of a working production stack. The hire will not be writing white papers. The hire will be shipping tools that an editor opens on a Monday morning and closes on a Friday afternoon. Multiply this hire across the major studios over the next twelve months. The people who can speak both fluent post and fluent AI will get those interviews. Build that vocabulary now.
TL;DR: The Hollywood Reporter ran a feature on May 1 about India's AI filmmaking surge. Mumbai-based Studio Blo announced that Shekhar Kapur, the director of Elizabeth and Bandit Queen, will direct a sci-fi series titled Warlord that will be created entirely with AI tools. Studio Blo co-founder Dipankar Mukherjee told THR that "around eighty percent of Indian films are already using AI extensively in pre-visualization." Studio Blo's typical timeline for a feature-length AI film is six to twelve months.
Key Takeaways:
A name director on an AI-native project. Shekhar Kapur has two Oscar-nominated features and a long studio track record. This is not a first-time AI filmmaker testing tools. It's a senior director designing an AI-first production.
Eighty percent in pre-vis, per Studio Blo. That number, if accurate to the broader Indian film industry, is bigger than what most US working editors are seeing in their cutting rooms.
Six to twelve months for a feature. Studio Blo's stated timeline is a fraction of a traditional VFX-heavy schedule. The math comes from cutting iteration time.
Project: Warlord. A sci-fi series, AI tools across the pipeline. Studio Blo is the production house. Shekhar Kapur is the director.
The contrast with the Oscars story is the framing. Hollywood drew a line on AI eligibility in two categories. India is moving faster on AI in production across the whole pipeline. Both are true at the same time.
My Take:
Mumbai is not waiting for Hollywood to figure out where the line is. Eighty percent of Indian films already using AI in pre-vis is a working-editor number, not a slide-deck number. That is craft hours saved, shot lists rebuilt overnight, location scouts replaced by reference frames a director can sign off on.
Shekhar Kapur attaching to an AI-native sci-fi series puts senior craft authority behind a production model that the West is still calling experimental. He has cut films the old way for forty years. He has now decided the new way is interesting enough to direct. That's the read. When craft leaders move, the work moves. The AIography audience should be reading world cinema as the leading indicator on AI production this year, not the trailing one.
TL;DR: Recent comparison and release-tracker pieces from Magic Hour, InVideo, and AI/ML API converge on the same practitioner finding. No single AI video model is best for every shot. Pros are picking models per shot. Kling and Seedance for cinematic look and longer sequences. Runway for editorial and workflow integration. Veo for narrative coverage and 4K finish. Sora for scene-to-scene continuity. The single-model loyalty era is over. The defensible asset is the routing pipeline.
Key Takeaways:
No model wins every category. Magic Hour's tracker maps Seedance to cinematic realism and longer sequences, Kling to physics and motion accuracy, Runway to editing and workflow, Sora to narrative continuity. Each is named for what it does best.
Multi-tool workflow is the default now. The trackers describe creators "combining generation tools, editing tools, animation tools into a single workflow." The skill is sequencing them.
The defensible asset is the routing knowledge. Anyone can buy a Kling subscription. The pro move is knowing which shot goes to which model.
The cost of switching models is dropping. Firefly inside Adobe routes between thirty-plus video models. Comfy workflows route between open-source models. CapCut routes Seedance natively. The friction of trying a second model is lower than it was six months ago.
The pricing math is now per shot, not per subscription. Pros think in cost per generation per model, then pick the cheapest model that lands the look the shot needs.
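That per-shot pricing logic is just a filter-then-minimize. A hedged sketch of the idea, where the prices and capability tags are made-up placeholders, not real rate cards or real model capabilities:

```python
# Hypothetical per-generation prices and look tags. Real numbers vary by
# plan and change with every model release; treat these as stand-ins.
MODELS = {
    "kling":    {"cost": 1.20, "looks": {"cinematic", "motion"}},
    "veo":      {"cost": 1.50, "looks": {"narrative", "4k"}},
    "runway":   {"cost": 0.95, "looks": {"editorial", "workflow"}},
    "seedance": {"cost": 0.80, "looks": {"cinematic", "long"}},
}

def route_shot(needed_look):
    """Pick the cheapest model that lands the look the shot needs.
    Returns None when no model on the card covers the look."""
    candidates = [(m["cost"], name) for name, m in MODELS.items()
                  if needed_look in m["looks"]]
    if not candidates:
        return None
    return min(candidates)[1]  # min on (cost, name) tuples -> cheapest
```

Swap in your own tags and current prices; the decision shape stays the same.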
My Take:
This is the shift I want every working editor to internalize this week. A year ago the AI video conversation was tribal. You were a Runway person, a Kling person, a Sora person. The argument was about which model was best. That argument is over. The pros are not loyal to a single model anymore. They route.
The reason is craft. A shot that needs cinematic motion realism goes to one model. A shot that has to lock into a Premiere timeline with a specific look goes to another. A shot that has to hit 4K for finish goes to a third. The editor's instinct of "this shot needs this lens, this lighting, this cut length" applies one layer up. This shot needs this model. The model is now a department.
That mental shift opens a hire-able skill. If you can sit in front of a producer and say "I'd send these three shots to Kling, those four to Veo, this one to Seedance, and we'll finish in Runway," you are the person who knows how AI video actually gets made. The release-tracker pages are the new sound-mixing reference shelves. Read them every two weeks and the routing intuition builds.
Try This Now:
Pick a 30-second sequence from a recent project. Break it down into individual shots. For each shot, pick which AI video model you would use, and write a one-line reason.
Bookmark the Magic Hour release tracker, the InVideo Kling vs Sora vs Veo vs Runway comparison, and the AI/ML API Veo 3.1 deep dive. Read them every two weeks.
Run the same prompt through two different models on the same shot. Compare what each one held and what each one drifted on. That comparison is your model-selection instinct getting trained.
Build a one-page "routing card" for yourself. List six common shot types: hero close-up, motion-blur action, dialogue medium, vista wide, mood transition, product insert. For each, write your default model and your backup. Update it every month as the rankings shift.
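The routing card from that last exercise is, structurally, a lookup table. A minimal Python sketch using the six shot types above; the default/backup picks are illustrative only, not recommendations — fill in your own:

```python
# Shot type -> (default model, backup model). Picks are placeholders;
# update them monthly as the rankings shift.
ROUTING_CARD = {
    "hero close-up":      ("veo", "kling"),
    "motion-blur action": ("kling", "seedance"),
    "dialogue medium":    ("veo", "runway"),
    "vista wide":         ("seedance", "kling"),
    "mood transition":    ("runway", "seedance"),
    "product insert":     ("runway", "veo"),
}

def pick_model(shot_type, fallback=False):
    """Return the default model for a shot type, or the backup
    when the default drifted on the look and you need plan B."""
    default, backup = ROUTING_CARD[shot_type]
    return backup if fallback else default
```

Keeping the card in a file you edit, rather than in your head, is what makes the monthly update a five-minute job.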
TL;DR: Google I/O 2026 runs May 19 and 20 at Shoreline Amphitheatre in Mountain View. Google has used I/O as the launch stage for major Veo announcements two years running. Industry trackers and preview write-ups expect another Veo update at this year's event, with longer multi-shot consistency and higher resolution as the rumored upgrades. The official Google brief promises "AI breakthroughs and updates in products across the company, from Gemini to Android and more."
Key Takeaways:
Dates and venue confirmed. May 19 and 20, Shoreline Amphitheatre, Mountain View. Two weeks out as of this newsletter.
Google's I/O track record on Veo. The 2024 and 2025 events both included major Veo reveals. The pattern is the basis for this year's expectations.
Industry trackers expect a new Veo build. Preview pieces, including the OpusClip and imagine.art tracker pages, point to longer multi-shot consistency and higher resolution as the likely upgrade points.
Gemini 2.5 video generation is also on the watchlist. OpusClip's preview frames the broader expectation as Gemini-level video features moving native, not just inside Vids and Flow.
The current ranking will move if Google ships. Magic Hour's tracker has Runway Gen-4.5 as the current top scorer in some benchmarks and Kling 3.0 leading on cinematic look. A Veo upgrade with longer consistency could reshuffle the top of the list.
My Take:
I'm setting a calendar reminder for the morning of May 19. So should you. Google I/O is now the single biggest two-day window in AI video. If a new Veo build lands and matches the early signals, the multi-model routing playbook in the previous story rewrites itself overnight. The shot that today goes to Kling for the cinematic motion run might be a Veo shot by Memorial Day.
The smart move two weeks out is not to panic-research. It's to pick one of your routing decisions and write down why you'd make it today. Then, on May 20, look at the new Veo features and ask whether your reason still holds. That's how pattern recognition builds for the next cycle. The models will keep moving. The question to track is which working-creative reasons hold across model generations and which ones break.
ESSENTIAL TOOLS
AI Filmmaking & Content Creation Tools Database
Check out the Alpha version of our AI Tools Database. We will be adding to it on a regular basis. Got a tip about a great new tool? Send it along to us at: [email protected]
Adobe Color Mode (Beta). Premiere's new ground-up color grading workspace. RTX-accelerated. Public beta now, GA later in 2026.
Firefly AI Assistant (Coming Soon). Adobe's creative agent. Multi-step orchestration across Premiere, Photoshop, Lightroom, Express, and Firefly. Public beta date TBD.
Avid Content Core. New cloud-native SaaS data layer for global media assets. Comes out of the Avid + Google Cloud partnership. Agentic AI in Media Composer is the headline.
NVIDIA Lyra 2.0. Single-image to walkable 3D scene. Apache 2.0. Exports Gaussian splats and meshes to Unreal and Unity.
Magic Hour AI Release Tracker. Up-to-date model rankings and release notes across the AI video model landscape. Useful as a routing reference.
InVideo Kling vs Sora vs Veo vs Runway. Side-by-side comparison piece, updated as new builds ship.
SHORT TAKES
Obsidian Studio's Cannes panel is May 18. The "humans over hype" AI cinematic studio (Kling AI partner, Imagine Entertainment partner) puts its full-stack workflow in front of Cannes buyers a day before Google I/O opens. Two big AI calendar moments inside one week.
Doug Liman's "Bitcoin: Killing Satoshi" hits the Cannes sales market this month. A $70 million AI-assisted feature with Gal Gadot and Casey Affleck, originally budgeted at $300 million the traditional way, is the first big production with public AI numbers attached. The Cannes pickup will be the first real commercial reaction.
SAG-AFTRA bargaining is live. Talks resumed April 27 under media blackout. Tilly Tax language and AI cost-parity provisions are the live items to track. The union's framing is to push for AI performances to cost as much as human ones.
Korea's "I Am Popo" is in theaters. The first fully AI-produced theatrical feature opened April 26. The production angle, not the box office, is the story. World cinema is shipping AI features inside the same week Hollywood is debating eligibility rules.
CapCut's Seedance 2.0 integration is the multi-model on-ramp for non-pros. Multimodal audio and video joint generation inside the most popular consumer video editor on the planet. The first place a working creator can route a Seedance shot inside an existing timeline they already use.
The AIography Skool community is running a routing-card swap. If you build a model-routing card from this week's Try This Now, post it. Other working pros' cards are the fastest way to test your reasoning against the field.
ONE MORE THING…
Video of the Week
Dave Clark on Seedance 2.0 Multi-Shot Consistency
AI filmmaker and Promise Studio co-founder Dave Clark (@Diesol on X) posted a working note this week on Seedance 2.0's realism, with an eye on lighting consistency across multiple shots in the same scene. The reason it stuck out is the framing. Not "this model is amazing." Instead, "lighting consistency is the tell." That's the working-editor read on a video model. Save it as the calibration test you run on every new build that ships.
FINAL THOUGHTS
The Week's Stories Share a Thread Worth Naming.
The Academy drew a line on AI in two categories. Mumbai poured AI into eighty percent of its pre-vis pipeline. Netflix budgeted half a million dollars for one person to own AI video product. The pros stopped picking one video model and started routing four. Google I/O is two weeks out and will likely move the rankings again.
What ties those together is that nobody is asking whether AI is in the cutting room anymore. The question moved up a level. Where does AI sit in the credits. Who gets paid for the AI work. Which model handled which shot. How fast can we ship it. The conversations have stopped being about whether the technology works. They're about who owns what when the technology works.
For a working editor, an assistant, a director, or a producer reading this, the move is the same one I keep coming back to. Test the tools. Build the routing instinct. Track the eligibility paperwork. Read world cinema as the leading indicator. Bookmark the trackers. Set the calendar reminder for May 19. The map of who is making the next movie keeps getting redrawn. Stay on it.
Stay sharp. Keep creating.
— Larry
10x the context. Half the time.
Speak your prompts into ChatGPT or Claude and get detailed, paste-ready input that actually gives you useful output. Wispr Flow captures what you'd cut when typing. Free on Mac, Windows, and iPhone.
What did you think of today's newsletter?
AIography is the AI filmmaking newsletter for filmmakers, editors, and content creators navigating the biggest technological shift since digital.
If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.
AIography may earn a commission for products purchased through some links in this newsletter. This doesn't affect our editorial independence or influence our recommendations—we're just keeping the AI lights on!




