
Welcome to Today’s AIography!
Good afternoon, AI filmmakers.
NAB 2026 opened in Las Vegas on Saturday and the platforms that power working editors all went on offense. Adobe rewired Premiere around a new Color Mode and dropped Firefly AI Assistant, a natural-language agent that reaches across Premiere, Photoshop, Lightroom, Illustrator, Express and Firefly. Avid handed Google Cloud a spot inside Media Composer. NVIDIA open-sourced a tool that turns one photograph into a walkable 3D set. Doug Liman's $70 million AI-assisted feature finally put real production numbers on the table. And a dead actor got a standing ovation at CinemaCon for an AI performance. Let's dive in.
In today's AIography:
Adobe Rewrites Premiere: Color Mode, Firefly AI Assistant, Kling 3.0 in Firefly, Frame.io Drive
NVIDIA Lyra 2.0: Build an Explorable 3D Set From a Single Image
Avid + Google Cloud: Agentic AI Moves Into Media Composer
Inside Doug Liman's $70 Million AI Feature: Real Numbers From the Cutting Room
AI Val Kilmer at CinemaCon: The First Full AI Performance in a Theatrical Release
Essential Tools
Short Takes
One More Thing…
Read time: About 8 minutes
THE LATEST NEWS
POST TOOLS
Adobe Rewrites Premiere: Color Mode, Firefly AI Assistant, Kling 3.0 in Firefly, Frame.io Drive
TL;DR: At NAB 2026 on April 15, Adobe made the biggest editor-facing move in years. Color Mode, a ground-up color-grading environment inside Premiere built specifically for editors, ships as a public beta and goes directly at DaVinci Resolve. Firefly AI Assistant, Adobe's new creative agent, orchestrates multi-step work across Premiere, Firefly, Photoshop, Lightroom, Illustrator, Express and more. Kling 3.0 and Kling 3.0 Omni are now integrated inside Firefly Video Editor, alongside 30+ other video models. Frame.io Drive mounts your cloud projects as a local drive. This isn't a feature update. It's a reorganization of what the tools are.
Key Takeaways:
Color Mode is a Premiere beta today, GA later in 2026. Adobe describes it as the first color-grading environment "built for editors." It keeps video front and center through the grade instead of pulling you into a scopes-first panel workflow. RTX-accelerated.
Firefly AI Assistant is an agent, not a feature. Natural-language prompts orchestrate work across Premiere, Photoshop, Lightroom, Illustrator, Express and Firefly. Adobe's own language: "orchestrate and execute complex, multi-step workflows." Public beta "coming soon," no firm date.
Kling 3.0 and Kling 3.0 Omni are inside Firefly. Plus 30+ other video models. Firefly becomes the routing layer: you prompt, Adobe picks the model, and the output stays inside your Creative Cloud project.
Frame.io Drive kills the upload/download dance. Mount Frame.io projects as local storage. Work on cloud media as if it lived on your machine. No more syncing.
After Effects 26.2 and Premiere Pro 26.2 shipped alongside. New Film Impact effects, sharper Object Masking, and a searchable Sequence Index panel.
My Take:
This is the first NAB in memory where Adobe didn't announce incremental AI features tacked onto existing tools. They announced a reorganization of what the tools are. Color Mode isn't a panel, it's a mode. Firefly AI Assistant is an agent over all of Creative Cloud, not a single tool. Kling inside Firefly isn't a partnership as much as a router, sending an editor's prompt to whatever model fits the shot. For anyone paying for a Creative Cloud subscription — and that's basically every working editor and filmmaker I know — the thing you're paying for every month is fundamentally different this week than it was last. DaVinci Resolve users, Adobe just said "we see you." AI video tool users, Adobe just said "you don't need to leave Premiere." It's the walled garden growing teeth.
Try This Now:
Update to Premiere Pro 26.2 via Creative Cloud and switch to the new Color Mode workspace. Cut a one-minute piece start to finish inside Color Mode and see where the workflow bends.
In Firefly Video Editor, generate a short clip using Kling 3.0 Omni. Specify camera angle, shot duration, and character consistency. Import directly into your Premiere timeline.
Mount a Frame.io project via Frame.io Drive. Open an asset as if it were local and time how much friction vanishes (a quick timing sketch follows this list).
Watch for Firefly AI Assistant's public beta. When it drops, the first thing worth testing is a multi-step task like "cut this 20-minute interview down to 8 minutes, color it, and export for YouTube."
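If you want a number instead of a feeling for that Frame.io Drive test, here's a minimal Python sketch that times a first read off the mounted volume. The mount path and filename are assumptions; substitute whatever Frame.io Drive actually shows on your machine.

```python
import time
from pathlib import Path

# Hypothetical mount point and asset -- substitute the path
# Frame.io Drive actually exposes on your machine.
ASSET = Path("/Volumes/Frame.io/MyProject/interview_cam_a.mov")

start = time.perf_counter()
with ASSET.open("rb") as f:
    first_chunk = f.read(64 * 1024 * 1024)  # pull the first 64 MB
elapsed = time.perf_counter() - start

print(f"Read {len(first_chunk) / 1e6:.0f} MB in {elapsed:.2f}s "
      f"({len(first_chunk) / 1e6 / elapsed:.0f} MB/s)")
```

Run it once cold and once warm; the gap between the two is the caching Frame.io Drive is doing for you.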
NVIDIA Lyra 2.0: Build an Explorable 3D Set From a Single Image
TL;DR: NVIDIA open-sourced Lyra 2.0 on April 14. The framework turns a single photograph into an explorable 3D world with real geometry. Apache 2.0 license. Built on Wan 2.1-14B. Exports to Gaussian splats and mesh models you can drop straight into Unreal Engine or Unity.
Key Takeaways:
One image in, walkable 3D out. Feed it concept art, a reference photo, or an AI-generated still. Get back a persistent 3D environment with actual spatial geometry. Not a parallax trick.
Gaussian splats and meshes. Both output formats are game-engine-ready. Unreal and Unity ingest directly. Real-time walkthroughs on reasonable hardware.
Apache 2.0, free to use. No licensing fees. No API costs for the base model. Clone, modify, ship.
Fixes the #1 AI filmmaking pain point: spatial consistency. Generate a location once, then shoot it from any angle without the scene rearranging itself between takes.
Built on Wan 2.1-14B. It uses camera-path generation to create roaming videos, then reconstructs 3D geometry from those paths.
My Take:
Every AI filmmaker I know has hit the wall where you generate a beautiful frame and then discover you can't move the camera without breaking the whole scene. Lyra eliminates that wall. For previz, location scouting, and virtual production, this is a genuine paradigm shift, and it's open-source. The first tools that wrap Lyra into something a non-coder can use are going to own the indie and educational market for the next twelve months. Watch for ComfyUI workflows and Blender plugins before the end of May.
Try This Now:
Clone the Lyra 2.0 repo at github.com/nv-tlabs/lyra. (A scripted version of these steps follows the list.)
Pull the model weights from Hugging Face.
Feed it a single reference: a Midjourney still, a concept art frame, or a real location photo.
Generate camera-path videos exploring the space from several angles.
Export the Gaussian splat or mesh into Unreal Engine or Unity and walk your scene in real time.
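Prefer to script it? Here's a minimal Python sketch of the same pipeline. The Hugging Face repo id, checkpoint layout, and inference entry point are assumptions, not the published interface; check the repo's README for the real names and flags before running anything.

```python
import subprocess
from huggingface_hub import snapshot_download

# Clone the repo (URL from the announcement).
subprocess.run(
    ["git", "clone", "https://github.com/nv-tlabs/lyra.git"], check=True
)

# Pull the weights. The repo id is an assumption -- check the
# README for the actual Hugging Face location.
weights_dir = snapshot_download(
    repo_id="nvidia/lyra-2.0", local_dir="lyra/checkpoints"
)

# Hypothetical inference entry point: single image in, camera-path
# videos and a Gaussian splat out. The real script name and CLI
# flags will almost certainly differ.
subprocess.run(
    [
        "python", "lyra/inference.py",
        "--image", "reference_still.png",
        "--checkpoint", weights_dir,
        "--export", "gaussian_splat",
        "--out", "my_set/",
    ],
    check=True,
)
```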
Avid + Google Cloud: Agentic AI Moves Into Media Composer
TL;DR: On April 16, Avid and Google Cloud announced a multiyear partnership to embed generative and agentic AI into Media Composer, plus a new cloud platform called Avid Content Core. The stated goal: turn what Avid calls a "mostly manual" production process into something AI-assisted at every layer.
Key Takeaways:
Media Composer gets AI. Google Cloud's generative and agentic AI move into the NLE that cuts virtually every major studio film and broadcast program. Not a bolt-on plugin, a platform-level integration.
Avid Content Core. A new cloud-native SaaS platform that acts as a unified data layer across facilities. AI-powered asset management at the infrastructure layer, not just inside the timeline.
"Agentic" is the keyword. Think AI agents that perform production tasks autonomously inside the edit environment, not just filters or generators.
Runs on Google Cloud's Vertex AI and Gemini models. Whatever Google ships on their enterprise side now has a path into Media Composer.
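Avid hasn't published what the integration surface will look like, but for a feel of the plumbing underneath, here's a bare-bones Vertex AI call in Python. The project id, model name, and prompt are placeholders; the agentic layer Avid describes would sit on top of requests like this, wired into bin and timeline data.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholder GCP project/region -- substitute your own.
vertexai.init(project="my-post-facility", location="us-central1")

# Model name is an assumption; use whatever Gemini model
# your project has access to.
model = GenerativeModel("gemini-1.5-pro")

shot_log = """\
A001_C003 interview_wide 00:02:14
A001_C004 interview_cu   00:01:47
A002_C001 broll_exterior 00:00:38
"""

# One production-flavored request. An agent would chain many
# calls like this against bin, metadata, and timeline state.
response = model.generate_content(
    "From this shot log, flag every interview clip over 90 seconds "
    "and propose a stringout order:\n" + shot_log
)
print(response.text)
```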
My Take:
Adobe made the louder NAB announcement this week, but Avid's move might be the more important one long-term. Adobe owns the editor's everyday toolkit. Avid owns the machine rooms at every major studio and network. When Avid says "agentic AI in Media Composer," that's AI reaching into the rooms where Oscar-winning films actually get cut. For anyone working on Avid (and most working editors still do), your tool is evolving under your feet. Start asking now what "agent" means for the people whose job was "assistant editor." Because Avid just told you.
Go from AI-overwhelmed to AI-savvy professional
AI will eliminate 300 million jobs in the next 5 years.
Yours doesn't have to be one of them.
Here's how to future-proof your career:
Join the Superhuman AI newsletter - read by 1M+ professionals
Learn AI skills in 3 mins a day
Become the AI expert on your team
PRODUCTION CASE STUDY
Inside Doug Liman's $70 Million AI Feature: Real Numbers From the Cutting Room
TL;DR: On April 15, TheWrap and World of Reel broke detailed production numbers on Doug Liman's "Bitcoin: Killing Satoshi," a $70 million AI-assisted feature that wrapped principal photography in March. Gal Gadot, Casey Affleck, Pete Davidson, and Isla Fisher star. Producers Ryan Kavanaugh, Matt Kavanaugh, Garrett Grant, and Lawrence Grey had budgeted the traditional version at $300 million before choosing to use AI. The physical shoot ran 20 days inside a converted West London car showroom wrapped in gray screen (blue and green tested subpar for AI compositing). Post-production is running 30 weeks with 55 AI artists at Acme AI & FX. First commercial test: the Cannes sales market in May.
Key Takeaways:
$70 million budget, down from a $300 million traditional estimate, a cut of roughly 77%. The producing team costed the non-AI version before making the call. This is the first public set of budget numbers on an AI-assisted feature that isn't a theoretical slide deck.
20-day physical shoot, 30-week post-production, 55 AI artists at Acme AI & FX. The cutting room is the movie. That's what the $70 million is actually funding. A 30-week post on an indie is already long. Running it with 55 AI specialists on top of a conventional edit crew is what the budget is buying.
Crew totals: 107 cast, 100 shoot crew, 54 non-shoot crew. An indie-feature footprint, not a $300 million tentpole. If you've ever staffed a shoot, those numbers describe a completely different production than what the script originally called for.
The "gray box." A converted former car showroom in West London, walls wrapped in gray screen. The team tested blue and green; both underperformed against AI compositing pipelines. Gray won. That's a working detail you won't find in a press release.
Cinematographer: Henry Braham (Guardians of the Galaxy Vol. 3, The Suicide Squad). Screenwriter Nick Schenk ("The Mule") wrote an early draft three years ago. Liman came on at Kavanaugh's request.
Acme AI & FX has 10 more projects queued, with studios planned for New York, Vancouver, and Spain. If this film lands at Cannes, the infrastructure is already booked up behind it.
Cannes sales market in May is the first real commercial test. Buyers will see the film and the cost structure pitched together.
My Take:
I've been waiting for these numbers. Not the theoretical "AI will save 50%" slide decks. Real numbers from a real production with real names on it. $300 million down to $70 million came from a producing team that actually budgeted both versions. That's the piece I trust.
30 weeks of post with 55 AI artists is a long schedule, but it's not a Pixar schedule. It's what you'd expect for a feature with this much compositing. And the gray-screen detail tells me these people did the work. Green and blue have been the industry standard for sixty years because chroma-keyers were built around them. An AI compositor doesn't key the way a chroma-keyer does. It segments. Gray gives it more signal to work with. Small detail, big tell. The production team knew what they were doing.
The line that stuck with me was from producer Lawrence Grey: "AI doesn't replace the human component. The human component is desperately needed in the process." Screenwriter Nick Schenk wrote an early draft three years ago. A name cinematographer shot the live-action. 55 AI artists in post. 107 cast on screen. The humans didn't leave. They just moved around. For a working editor, this is the first piece of reporting that gives you something to budget against, staff against, and pitch against. What you do with it depends on which chair you sit in.
PRODUCTION CASE STUDY
AI Val Kilmer at CinemaCon: The First Full AI Performance in a Theatrical Release
TL;DR: At CinemaCon in Las Vegas on April 15, "As Deep as the Grave" premiered a trailer featuring an AI-rendered Val Kilmer in a lead role (Father Fintan), with more than an hour of screen time built from his AI likeness. Kilmer's estate was paid. His daughter Mercedes endorsed it. The room applauded. The broader industry response has ranged from celebration to alarm.
Key Takeaways:
Full AI performance, not a cameo. Kilmer appears throughout the film as Father Fintan. Major dramatic scenes, dialogue, emotional arcs, all generated from his AI likeness.
Family-approved. Mercedes Kilmer actively endorsed the project. The estate was compensated.
Co-stars include Tom Felton and Abigail Lawrie in live-action roles alongside the AI Kilmer.
Industry split. GQ called it "a grim omen for the future of movies." Deadline's coverage was more measured. Variety ran multiple pieces framing it as tribute vs. precedent.
First of its kind at theatrical scale. There have been digital resurrections (Peter Cushing in Rogue One, a VFX recreation rather than AI) and AI touch-ups. There has never been a major theatrical release built around a deceased actor's full AI performance.
My Take:
This is the case study every future deal will get measured against. Consent, estate compensation, family endorsement — Kilmer's team did it about as carefully as you could imagine. And even with all of that, you've still got outlets calling it "grim." The reason this hits harder than Rogue One's digital Cushing is scope: a brief scene versus a leading role. Once a leading role is possible, the question isn't "should we bring back a dead actor." It's "which living actors can studios skip hiring?" That's a SAG-AFTRA question now, and it's why the April 27 talks matter more than the contract language suggests. For AI filmmakers working with digital likenesses: get the consent in writing, pay fair, credit clearly, document every step. The industry is going to audit cases like this for years.
ESSENTIAL TOOLS
AI Filmmaking & Content Creation Tools Database
Check out the Alpha version of our AI Tools Database. We will be adding to it on a regular basis.
Got a tip about a great new tool? Send it along to us at: [email protected]
SHORT TAKES
Interpositive's 40-30-40. Deadline reported on April 1 that Ben Affleck's AI firm Interpositive, now owned by Netflix, has been pitching concrete cost-cutting targets for AI-assisted productions: 40% savings on additional production units in cities outside the main location, 30% on art department, 40% on set dressing. Affleck's framing: AI will "disintermediate more laborious, less creative, more costly aspects of filmmaking." Those are the numbers SAG-AFTRA sees when it sits down April 27.
AI video adoption just crossed the majority line. 78% of marketing teams now use AI-generated video in at least one campaign per quarter, up from 30% in early 2024. That's a 2.6× jump in under two years. If you're an editor chasing brand work, AI video is table stakes now.
Dodge College students revolted over AI grants. Chapman University's Dodge College, the school that trained the Duffer Brothers, invited "AI actress" Tilly Norwood to speak and announced AI filmmaking grants. Film students pushed back hard. Dean Stephen Galloway doubled down. The generational fault line in film education is now visible from orbit.
Google Veo 3.1 is free for every Google account. Ten generations per month via Google Vids and Flow. No credit card. Best free AI video tool available right now if you have Workspace access.
Google Veo 4 is rumored for late April or Google I/O in May. Multiple sources point to a hard launch by end of May.
6 AI Prompts to Speed Up Indie Filmmaking. Loglines, shot lists, location scouting, scheduling. Practical, beginner-friendly, worth forwarding to the collaborator on your team who still thinks AI is a phase.
ONE MORE THING…
Video of the Week
"PI HARD" — Official Trailer, by AI OR DIE Productions
Bengt Tibert is an artist and creative director based in Warsaw, Poland, working with photography, video and AI. His latest work is a full action-comedy trailer made entirely with AI: Grok for the writing, Kling for video, Freepik for visuals, Fish Audio and ElevenLabs for voices, and Seedance for additional footage. It's ridiculous, self-aware, and surprisingly well-paced. The kind of thing that shows what a solo creator can pull off when they know the toolkit, and a good calibration check for where AI comedy shorts actually are this month.
FINAL THOUGHTS
This was a tools week that rewrote the job description.
Adobe, Avid, and NVIDIA, the three names that power professional post-production, all moved within 48 hours of each other. And they all moved in the same direction: toward agents, multi-model routing, and cloud-native workflows. The editor's toolkit isn't becoming AI-enabled. It's becoming AI-shaped. When Color Mode is the default workspace, Firefly AI Assistant is the prompt line, and Media Composer is running Google's agents, the person at the keyboard is doing a different job than they were a year ago.
Meanwhile the actual numbers on an AI-assisted feature finally went public. $300 million down to $70 million on Doug Liman's "Bitcoin: Killing Satoshi." 30 weeks of post. 55 AI artists. A gray screen in a converted West London car showroom. The Interpositive numbers in Short Takes, 30% off art department, 40% off set dressing, line up with what Liman's team actually built. These aren't theoretical savings anymore. They're line items in a real shooting schedule.
The tools accelerate. The numbers firm up. And the working filmmakers, the people actually making the movies, are learning a new job in real time.
Stay sharp. Keep creating.
— Larry
What did you think of today's newsletter?
If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.
AIography may earn a commission for products purchased through some links in this newsletter. This doesn't affect our editorial independence or influence our recommendations—we're just keeping the AI lights on!