The Last Human Host
Oscar night's anti-AI standing ovation happened four days after Netflix wrote a $600M check for AI post-production. Welcome to Hollywood's most expensive contradiction.

Welcome to Today's AIography!
Last night's Oscars were a four-hour argument with itself — Conan O'Brien called himself "the last human host," Will Arnett got a standing ovation for declaring "animation is more than a prompt," and Ben Affleck sat somewhere in that room having just sold his AI post-production startup to Netflix for $600 million. That tension between public resistance and private adoption defined the entire week. Apple moved Final Cut Pro to subscription with AI tools baked in. NVIDIA and ComfyUI made local 4K AI video generation real on consumer RTX GPUs. The WGA is heading into contract negotiations with AI as the central demand. And an open-source tool called MatAnyone 2 just made green screens optional. The rules are being written — right now, from both sides of the stage.
In today’s AIography:
Main Stories
• Oscars 2026: Hollywood's anti-AI reckoning meets the $600M reality check
• Apple Creator Studio moves Final Cut Pro to subscription with AI features
• Runway Characters: Real-Time AI Avatars Powered by a World Model
• WGA & SAG-AFTRA 2026 negotiations: AI is the central battleground
• MatAnyone 2: AI video matting that kills the green screen
Essential Tools
• Runway Characters — real-time AI avatars via GWM-1 world model
• Bloomway CinePro — public beta launches today
• Luma Dream Machine — Ray 3 Modify, Character Reference, Boards
• Picsart AI Playground — 90+ models in one interface
• FilmPilot.ai — AI pre-production tools for scripts
Short Takes
Video of the Week
Final Thoughts
Read time: About 8 minutes
THE LATEST NEWS
OSCARS / AI RESISTANCE
Oscars 2026: Hollywood Stages an Anti-AI Reckoning on the World's Biggest Stage
TL;DR: The 98th Academy Awards became an unexpected referendum on AI in filmmaking. Host Conan O'Brien opened by calling himself "the last human host." Will Arnett delivered an impassioned anti-AI speech during the animation awards. Autumn Durald Arkapaw made history as the first woman to win Best Cinematography — a deeply human achievement in a year obsessed with artificial creation.
Key Takeaways:
Conan O'Brien opened with AI jokes: "the last human host of the Academy Awards" — next year's host will be "a Waymo in a tuxedo"
Will Arnett brought the audience to its feet: "Tonight, we are celebrating people, not AI. Animation is more than a prompt. It's an art form and it needs to be protected."
Autumn Durald Arkapaw won Best Cinematography for Sinners — the first woman to win the category in the Oscars' 98-year history
Anti-tech undercurrent ran throughout the ceremony (The Guardian: "a notable undercurrent of anti-tech resistance")
Elephant in the room: Netflix paid $600M for Ben Affleck's AI post-production startup InterPositive just four days earlier
Best Picture: One Battle After Another (Paul Thomas Anderson) took six awards including Best Picture and Director
Ryan Coogler (Sinners) won Best Original Screenplay — human storytelling front and center
Why It's Important:
The Oscars are Hollywood's annual story about itself. And the story Hollywood chose to tell last night was: we are human, and that matters.
This wasn't subtle. From the opening monologue to the animation awards to the cinematography win, the ceremony repeatedly drew a line between human craft and AI generation. When Arnett said "animation is more than a prompt," he wasn't just defending animators — he was articulating the emotional core of Hollywood's AI anxiety: that the things we value about filmmaking are precisely the things AI threatens to commoditize.
But here's the contradiction: the same industry that cheered Arnett's speech just watched Netflix write a $600 million check for AI post-production tools. The same week Spielberg told SXSW he's "never used AI," David Fincher is reportedly using InterPositive's tech on his next Brad Pitt film.
For AI filmmakers, the signal is clear: Hollywood wants to be seen resisting AI publicly while adopting it privately. The Oscars are the public face. The deals are the private reality. If you're building with AI tools, you're on the right side of where the industry is going — but don't expect applause from the stage.
APPLE / EDITING TOOLS
Apple Creator Studio: Final Cut Pro Goes Subscription and Gets AI
TL;DR: Apple launched Apple Creator Studio, bundling Final Cut Pro, Logic Pro, Pixelmator Pro, Motion, Compressor, and MainStage into a subscription model. The update adds new AI features including a beat detector for music-based editing, plus premium AI capabilities across Pages, Numbers, Keynote, and Freeform.
Key Takeaways:
Subscription model replaces one-time purchases for Final Cut Pro and Logic Pro
New AI tools include a beat detector for music-based editing and AI-powered features across Apple's creative suite
Bundled apps: Final Cut Pro, Logic Pro, Pixelmator Pro, Motion, Compressor, MainStage
Available across Mac, iPad, and iPhone
Major shift from Apple's traditional buy-once model for creative professionals
Why It's Important:
This is Apple drawing its line in the AI filmmaking landscape — and doing it the Apple way: integrated, curated, and subscription-priced.
For 25 years, Final Cut Pro was a one-time purchase. That era is over. Apple is betting that AI features justify recurring revenue — and they're probably right, because continuous AI development requires continuous investment.
The beat detector feature is a perfect example of AI that assists rather than replaces. Matching edit points to music beats is tedious, mechanical work that editors have done manually for decades. Automating it doesn't eliminate the editor — it frees them to focus on story, pacing, and emotion. That's the kind of AI integration that Hollywood can actually get behind, especially after last night's anti-AI Oscars speeches.
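To make the mechanics concrete, here is a minimal sketch of beat-synced edit-point detection using the open-source librosa library. This is a generic illustration of the technique, not Apple's implementation, and the two-second minimum shot length is an arbitrary assumption:

```python
# Sketch: derive candidate cut points from a music track's beats.
# Generic approach, not Apple's Final Cut Pro feature. Requires: pip install librosa
import librosa

def beat_edit_points(audio_path: str, min_gap_seconds: float = 2.0) -> list[float]:
    """Return timestamps (in seconds) for cuts, snapped to detected beats."""
    y, sr = librosa.load(audio_path)                        # load the soundtrack
    _tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)

    # Keep only beats that are at least `min_gap_seconds` apart so every
    # shot gets a minimum screen time instead of cutting on every beat.
    cuts, last = [], float("-inf")
    for t in beat_times:
        if t - last >= min_gap_seconds:
            cuts.append(round(float(t), 3))
            last = t
    return cuts

# Example (hypothetical file name):
# print(beat_edit_points("soundtrack.wav"))
```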
The real question: will Apple's AI features keep pace with what's happening in the broader AI video ecosystem? NVIDIA is making local 4K AI video generation possible. Runway is hosting third-party models. Adobe has conversational AI in Photoshop. Apple's walled garden approach means tighter integration but potentially slower innovation. For editors committed to the Apple ecosystem, Creator Studio is the future. For everyone else, the open ecosystem is moving faster.
AI AVATARS / WORLD MODELS
Runway Characters: Real-Time AI Avatars Powered by a World Model

[Image: AI-generated with Nano Banana 2]
TL;DR: On March 9, Runway launched Characters — a real-time video agent API built on their General World Model (GWM-1). From a single reference image, you can create fully custom conversational AI avatars with photorealistic faces, natural lip-sync, expressive gestures, and configurable voice and personality. No fine-tuning required. BBC and Silverside are already using it in production.
Key Takeaways:
Single image to avatar — upload one photo and get a fully animated, conversational character. Photorealistic, animated, stylized, human or non-human
Built on GWM-1 — Runway's General World Model, their most ambitious architecture beyond clip generation. This is a world simulator, not a video generator
Full conversational expressiveness — natural facial expressions, eye movements, lip-syncing, and gestures during both speaking and listening states
API-first deployment — configure voice, personality, knowledge base, and actions. Characters can create support tickets, take orders, pull from enterprise knowledge bases
Real-time performance — maintains quality across extended conversations without degradation
Already deployed — BBC and Silverside are active partners. Available to all Runway customers today at dev.runwayml.com
Consumer access — preset avatars available in the Runway web app for hands-on testing
Why It's Important:
This is Runway's clearest signal yet that they're not just a video generation company anymore — they're building toward simulated worlds with intelligent characters inside them.
The step from clip generation to GWM-1-powered Characters is the shift from "generate a clip" to "generate a person who can talk to you in real time." That's a fundamentally different product category. Video generation is a rendering tool. Characters is an interactive media platform. Runway just jumped from competing with Kling and Sora on clip quality to competing with the entire digital human industry — and they're doing it with a world model backbone that nobody else in AI video has.
For filmmakers, the implications run deep. Need a talking head for a documentary interview re-creation? A branded spokesperson for a product video? A virtual host for an interactive experience? Characters handles all of that from one reference image, no motion capture studio, no facial rigging, no days of post-production cleanup. The API-first approach means you can integrate it directly into production pipelines — it's not a toy you prompt, it's infrastructure you build on.
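If you want a feel for what that kind of pipeline integration could look like, here is a hypothetical sketch in Python. The endpoint path, request fields, and response shape are illustrative assumptions, not Runway's documented Characters API; the real interface lives at dev.runwayml.com:

```python
# Hypothetical sketch of an avatar-session workflow. The endpoint, fields,
# and response keys below are assumptions for illustration only; consult
# dev.runwayml.com for Runway's actual Characters API.
import os
import requests

API_BASE = "https://api.example-runway-endpoint.dev"   # placeholder, not a real URL
API_KEY = os.environ.get("RUNWAY_API_KEY", "")

def create_character_session(reference_image_url: str, voice: str, persona: str) -> dict:
    """Request a real-time avatar session from a single reference image."""
    resp = requests.post(
        f"{API_BASE}/characters/sessions",               # hypothetical route
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "reference_image": reference_image_url,      # single-image setup per Runway's pitch
            "voice": voice,                               # configurable voice
            "persona": persona,                           # configurable personality
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()                                    # e.g. a session id plus a streaming URL

# session = create_character_session("https://example.com/host.png",
#                                    voice="warm-narrator",
#                                    persona="documentary interviewer")
```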
For the broader industry, pay attention to what Runway is really saying: the future of online interaction is real-time generated video, not text in a box. Characters today are customer support agents and brand mascots. Tomorrow they're NPCs in interactive narratives, virtual actors in AI-generated films, and digital humans that maintain persistent relationships with audiences. Runway is building the platform layer for all of it.
The question isn't whether AI characters become part of filmmaking. It's how fast filmmakers learn to direct them.
LABOR / AI CONTRACTS
WGA & SAG-AFTRA 2026 Negotiations: AI Is the Central Battleground

[Image: AI-generated with Nano Banana 2]
TL;DR: With the WGA contract expiring May 1 and SAG-AFTRA extending its current negotiations, both guilds are heading into 2026 talks with AI as the defining issue. The WGA wants expanded AI protections beyond 2023's initial provisions, including stronger guardrails against AI-generated scripts and mandatory disclosure. SAG-AFTRA is fighting digital twin exploitation and AI voice cloning.
Key Takeaways:
WGA contract expires May 1, 2026 — negotiations about to begin
SAG-AFTRA extended its current contract negotiations
AI is the #1 issue for both guilds, building on 2023's initial AI provisions
Current WGA provisions prohibit companies from giving writers AI-generated scripts for rewrite fees or requiring AI software use
Mandatory AI disclosure already required for written materials
2026 demands include expanded AI protections, improved healthcare, increased streaming residuals
Guild leaders "willing to fight" — echoes of 2023 strike energy
All three guilds (WGA, SAG-AFTRA, DGA) seeking AI progress in this cycle
Why It's Important:
The 2023 strikes gave Hollywood its first AI guardrails. The 2026 negotiations will determine whether those guardrails hold — or whether the technology has already outrun them.
In 2023, nobody was debating $600 million AI acquisitions or real-time AI avatars or 4K local video generation. The AI provisions negotiated during the strikes were written for a world that barely exists anymore. Sora didn't exist. Runway Characters didn't exist. InterPositive was still in stealth.
The WGA's current AI provisions are defensive: you can't force a writer to use AI, and you must disclose AI involvement. But they don't address what happens when a showrunner uses AI to generate a first draft privately, then hands it to a writer for "polishing." They don't cover AI tools that generate story outlines, character arcs, or dialogue suggestions that a writer then reshapes. The gray areas are enormous and growing.
For AI filmmakers and creators, these negotiations will define the legal and cultural framework you'll operate within. If the guilds secure strong protections, the line between "AI-assisted" and "AI-generated" becomes legally meaningful. If studios push back successfully, AI tools become standard infrastructure — useful for everyone but potentially threatening to traditional employment.
VFX / OPEN SOURCE
MatAnyone 2: AI Video Matting That Kills the Green Screen
TL;DR: MatAnyone 2, accepted to CVPR 2026, delivers pixel-level alpha mattes for human foreground extraction from video — no green screen, no manual rotoscoping required. The open-source framework runs locally with GitHub code and a HuggingFace demo already available.
Key Takeaways:
Pixel-level alpha mattes — not rough cutouts, but clean compositing-grade edges (hair, transparent fabrics, motion blur)
No green screen needed — works on any footage, any background
Real-time capable — approaching live background removal speeds
Open source — code on GitHub, demo on HuggingFace
Accepted to CVPR 2026 — peer-reviewed, not vaporware
Pairs with AI background generation — clean alpha channel + AI scene generation = complete virtual production pipeline
Why It's Important:
Green screens are a $100+ million industry segment. They require dedicated studio space, controlled lighting, specialized rigging, and skilled VFX artists for cleanup. Every filmmaker who's fought with green spill, uneven lighting, or hair fringing knows the pain.
MatAnyone 2 makes all of that optional.
The quality leap here is critical: previous AI background removal tools produced rough masks suitable for social media but nowhere near broadcast quality. MatAnyone 2 produces actual alpha mattes — the same kind of output that professional rotoscoping teams spend days creating manually. Hair strands. Transparent fabrics. Motion blur edges. The stuff that separates amateur compositing from professional work.
For indie filmmakers, this is transformative. Shoot anywhere. Composite anything. You don't need a studio, you don't need a green screen, and you don't need a rotoscoping team. Combined with NVIDIA's local 4K upscaling and open-source video generation models, you now have a complete compositing pipeline that runs on a single workstation.
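Here is what the "clean alpha channel plus generated background" step boils down to in practice: a minimal per-frame compositing sketch using standard alpha-over math with NumPy and OpenCV. It assumes you already have a matte (for example, from MatAnyone 2) and a background frame at the same resolution; the file names are placeholders:

```python
# Minimal alpha compositing: out = alpha * foreground + (1 - alpha) * background.
# Assumes a matte (e.g. from MatAnyone 2) and a background of equal resolution.
# Requires: pip install opencv-python numpy
import cv2
import numpy as np

def composite(fg_path: str, alpha_path: str, bg_path: str, out_path: str) -> None:
    fg = cv2.imread(fg_path).astype(np.float32) / 255.0           # original frame
    bg = cv2.imread(bg_path).astype(np.float32) / 255.0           # AI-generated background
    alpha = cv2.imread(alpha_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    alpha = alpha[..., None]                                       # broadcast over color channels

    out = alpha * fg + (1.0 - alpha) * bg                          # standard "over" operation
    cv2.imwrite(out_path, (out * 255.0).clip(0, 255).astype(np.uint8))

# composite("frame_0001.png", "matte_0001.png", "generated_bg.png", "comp_0001.png")
```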
The VFX industry should pay close attention. If MatAnyone 2 reaches production quality (and early demos suggest it's close), the demand for manual rotoscoping — one of the foundational VFX tasks — drops significantly. "Everything becomes post" includes the tools becoming post-production artists themselves.
ESSENTIAL TOOLS
Bloomway AI's CinePro enters public beta today (March 16). The platform uses "Keyframe Splitter & Linking" technology with 700+ cinema-grade camera movements, AI chapter segmentation for precise editing, and "One-Take" long sequence generation. Includes professional TTS narration. Positioned as a "Lean Cinema" workflow tool.
Luma AI shipped significant updates to Dream Machine in early March: Ray 3 Modify enables text-based video editing (describe changes in natural language), improved Character Reference controls for consistency across clips, and Boards — an expanded workspace for organizing multi-shot projects. Steady, useful iteration.
Picsart launched AI Playground with access to 90+ models from a single prompt: Google Veo 3.1, OpenAI Sora 2, Kling 3.0, Runway Gen-4.5, Luma Ray, GPT Image, Flux Kontext, Ideogram, Recraft, and ElevenLabs. One interface to test-drive every major AI video and image model. The model aggregation trend is accelerating.
VersusMedia launched FilmPilot.ai on March 12 with audience simulations, character art generation, script insights, and AI-powered pre-production tools. Built by veteran technologist Ryan Vinson. Targets the gap between script completion and production — the planning phase where AI can accelerate without replacing creative decisions.
AI Filmmaking & Content Creation Tools Database
Check out the Alpha version of our AI Tools Database. We will be adding to it on a regular basis.
Got a tip about a great new tool? Send it along to us at: [email protected]
SHORT TAKES
ONE MORE THING…
Video of the Week
Sexyy Red — "If You Want It" (Official Music Video, dir. Hidji World)
This is what AI in mainstream music video production looks like in 2026 — not as an experiment, but as a creative choice by a major artist with millions of followers.
Sexyy Red's "If You Want It" video, co-directed with Hidji World (known for surreal, tripped-out visuals), uses AI-generated choreography and deepfake effects throughout — including an AI-generated pregnant dance sequence that went immediately viral. The dancers twerk in the rain in movements that are clearly AI-generated, and rather than hiding it, the video leans into the uncanny quality as a stylistic decision.
It's not trying to pass as real. It's using AI's weirdness as an aesthetic. That's a meaningful shift — from "can you tell it's AI?" to "of course it's AI, and that's the point."
Love it or hate it, this is AI video in the wild: millions of views, mainstream distribution, and a creative team that chose AI for what it adds, not what it replaces. Watch it and decide for yourself.
FINAL THOUGHTS
Last night was a mirror.
The Oscars showed us an industry that's scared of AI and can't stop buying it. Conan called himself the last human host and got a laugh. Will Arnett said animation isn't a prompt and got a standing ovation. And somewhere, Netflix's servers are already running InterPositive's code, ready to automate chunks of post-production on its next 50 films.
I've said it before: everything becomes post. And this week proved it from every angle.
NVIDIA just made local 4K AI video generation a desktop reality. MatAnyone 2 killed the green screen. Apple is adding AI to Final Cut Pro. The WGA is heading into negotiations because AI isn't theoretical anymore — it's in the pipeline.
But here's what the Oscars also showed: human craft still wins. Autumn Durald Arkapaw didn't use AI to become the first woman to win Best Cinematography. Ryan Coogler didn't prompt-engineer Sinners. Paul Thomas Anderson directed humans, not avatars.
The pattern from every tech disruption I've lived through — analog to digital, linear to nonlinear, broadcast to streaming — is the same: the technology changes, the storytellers don't. The people who understand story, pacing, emotion, and audience will thrive with AI tools. The people who think AI replaces understanding will make expensive demos that nobody watches twice.
Learn the tools. Master the craft. They're not the same thing, but you need both.
See you Wednesday.
— Larry
If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.
AIography may earn a commission for products purchased through some links in this newsletter. This doesn't affect our editorial independence or influence our recommendations—we're just keeping the AI lights on!
What did you think of today's newsletter? Vote to help us make it better for you.