
Welcome to Today’s AIography!

Happy Tuesday. A lot moving this week.

The 79th Festival de Cannes opens today with five AI works in the Immersive Competition and a two-day AI for Talent Summit later this week. Google I/O is one week out and the rumor mill is already out of control. Krea AI dropped its first in-house foundation model this morning. Things are moving faster than dailies on a David Fincher film.

But the thing that genuinely caught me this week wasn't on X or YouTube. It was a piece of Chinese AI filmmaking we hadn't been watching, doing numbers that would put any Western AI account I follow into a coma.

Don’t miss our Must Watch AI Video. 2.9 million views on X in three days! The numbers on this one tell you something.

In today's AIography:

  • Cannes 2026 Opens Today With the Festival's Strongest AI Programming Yet

  • Google I/O Is One Week Away. Veo 4 (Or A New "Omni" Model) Expected

  • KREA 2 Launches Today As Krea AI's First In-House Foundation Model

  • GPT-5.5 Instant Ships As The New Default ChatGPT Model

  • Short Takes

  • Essential Tools

  • One More Thing

  • Final Thoughts

Read time: About 7 minutes

THE LATEST NEWS

Image credit: 79th Festival de Cannes

TL;DR: The 79th Cannes Film Festival opens today (May 12) and runs through May 23. Two AI-related programs are worth tracking this year. The Compétition Immersive selected five AI works for its 2026 program. Cannes Next is running a two-day AI for Talent Summit on May 15-16, with Darren Aronofsky, Google's James Manyika, OpenAI's Chad Nelson, and Luma AI's Josh DiCarlo on the speaker bench.

Key Takeaways:

  • Five AI works in the Compétition Immersive 2026: GAWD v. The People (an audience-as-court AI tribunal installation), LILI (narrative short set in an Iranian city), tAxI (multi-sensory installation in a self-driving cab), From Dust (a virtual reality opera with electronic vocal ensemble Sjaella), and Beyond The Vivid Unknown (an AI-driven reimagining of Koyaanisqatsi as a living system).

  • The AI for Talent Summit is invite-only, May 15-16 at Plage des Palmes. Speakers include Aronofsky, Manyika (President of Research and Labs at Google and Alphabet), Chad Nelson (OpenAI), Josh DiCarlo (Luma AI), Bonnie Rosen (Disney Accelerator), Scott Mann (Flawless), Guillaume Duchemin (La Fémis), Margarita Grubina (Respeecher), and Alex George (ElevenLabs).

  • Cannes Next, the Marché du Film's innovation program, runs in parallel May 12-20 with a full slate of programming around virtual production, AI, and creator-economy tracks.

  • Cinéma de Demain continues as the umbrella brand for forward-looking Cannes programming.

My Take:

The story is the scale, not a new category. Cannes has had AI work inside the Immersive Competition before. The difference this year is the concentration of programming, and especially the speaker bench at the AI for Talent Summit.

When Darren Aronofsky is sitting on a Cannes panel about AI tools alongside the President of Research at Google and the head of DreamLab at Luma, the credibility line has moved. The conversation at the top of the industry shifted from "is this legitimate filmmaking" to "which of the five Immersive Competition works was the strongest this year."

For a working filmmaker, the practical signal: festivals are how the industry credentials new categories of work. Cannes is the loudest signal in the chain. Toronto, Venice, Berlin, Sundance, and Tribeca all watch what Cannes does. Expect the rest of the circuit to expand their AI programming over the next twelve months.

Try This Now:

Take ten minutes today to review the five Compétition Immersive selections on the festival's site. They are the bar Cannes set for AI-collaborative work this year. Frame your own work in progress against what they did well or what they left untouched.

Image credit: Google

TL;DR: Google I/O 2026 runs Tuesday and Wednesday next week (May 19 and 20). Google has used I/O for major Veo announcements in 2024 and 2025, so a 2026 reveal is expected. A pre-event leak from the Gemini UI shows a "Powered by Omni" label near the video generation tab, hinting at either a new wrapper for Veo or a brand-new Gemini multi-modal model. Prediction markets give about 69 percent odds of a Veo 4 launch before June.

Key Takeaways:

  • I/O is Google's stage for media model reveals. Veo 1 launched there in 2024, Veo 3 in 2025.

  • The leaked "Omni" label suggests a unified video plus audio plus image model, not just a video generator.

  • Whether Omni is a new model or a wrapper for Veo is the open question. Either way, Veo 3.1 pipelines should be ready to retest by next Wednesday.

My Take:

I am not going to rebuild a working pipeline on a leak. But I am going to wait until next Wednesday before locking any new Veo 3.1-dependent shooting plans. The math is straightforward. A Veo 4 launch shifts the routing equation. An Omni launch shifts it more.

If you are running Veo 3.1 in production this week, finish what you started. Hold any fresh long-horizon work that depends on the current model behavior. Park new projects for seven days.

Try This Now:

Make a one-line note in your project folder of any Veo 3.1 quirks you have learned to work around. When the new model ships, that note is your retest cheat sheet. Without it, you will spend hours rediscovering what you already know.

TL;DR: Krea AI announced KREA 2 today (May 12), its first foundation model built entirely in-house rather than fine-tuned on third-party weights from Stability AI or another lab. The pitch is aesthetic diversity and precise style control. K2 Studio, the new style-template feature, lets creators build a reusable look using style guidelines, comment guidelines, and a like-and-dislike feedback loop.

Key Takeaways:

  • KREA 2 is Krea AI's inaugural foundation model, built from scratch rather than fine-tuned.

  • Trained for aesthetic diversity instead of default photorealism. Designed for creators who want a distinct visual look across a project.

  • K2 Studio turns "style" from a one-shot prompt into a trainable instrument: write guidelines, generate, like or dislike, iterate.

  • Early-access creators (including LudovicCreator on X) report consistent look generation across a campaign without needing to train a LoRA.

  • Available now at krea.ai.

My Take:

The style-template approach is the part working creators should pay attention to. Most image-gen platforms treat style as a one-shot prompt: type the right reference name, hope the model still knows it next week. K2 Studio treats style as a trainable instrument. Guidelines plus a feedback loop equals a reproducible look set.

That matters in production. If you need three weeks of social posts to feel like they came from the same hand, the current platforms cannot reliably deliver without either fine-tuning a LoRA (expensive, technical) or accepting drift across the campaign. KREA 2 is a bet that creators will pay for consistent style more than for incremental realism.

Krea building this in-house instead of fine-tuning on someone else's weights is also the right product call. It means they can iterate the model's training data toward their specific aesthetic-diversity bet, rather than inheriting whatever bias the base model came with. Expect more platforms to follow once they realize their differentiation is in the model itself, not in the UI wrapper.

Try This Now:

Open krea.ai. Use K2 Studio to write a style template for one of your active campaigns. Write your style guidelines and comment guidelines, run ten generations, like or dislike each, then generate an eleventh. See how close that eleventh comes to your intended look. That is your test of whether the feedback loop is doing real work.

TL;DR: OpenAI released GPT-5.5 Instant on May 5 as the new default model for ChatGPT. Lower latency than the prior default with the same reasoning floor. Every ChatGPT user is now running on it without opt-in.

Key Takeaways:

  • The reasoning floor did not change. The latency did.

  • The new default affects every workflow that uses ChatGPT through the standard interface or the default API tier.

  • No backward compatibility break, but timing-dependent workflows should retest.

My Take:

A quiet release with a real practical effect. If your filmmaking workflow uses ChatGPT for script drafts, prompt engineering, or story development (most working creators do), you are now running on a different default. Quality should hold because the reasoning floor did not move. The speed change is what to track.

If you have a pipeline that times out at certain thresholds, retest. If you batch requests, the new latency changes your batch sizing math. A few minutes of retest now saves real debug time later.

Try This Now:

Open your most-used ChatGPT workflow today. Time one full pass. Note the result. Compare against your memory of last week. That is your baseline for the new default.
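If your workflow is scripted rather than pasted into the web UI, the same baseline can be captured in a few lines of Python. This is a minimal sketch: `run_workflow_pass` is a hypothetical placeholder for whatever one full pass of your pipeline actually does (an API call, a batch of prompts), and the `time.sleep` stand-in should be replaced with your real call.

```python
import time

def run_workflow_pass():
    """Placeholder for one full pass of your workflow.

    Swap the sleep below for your real work, e.g. a call to
    your ChatGPT script or API batch. The sleep only stands in
    for model latency so this sketch runs on its own.
    """
    time.sleep(0.1)

def time_passes(n_runs=3):
    """Time n_runs full passes; return per-pass latencies in seconds."""
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_workflow_pass()
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    results = time_passes()
    # The middle value of three runs is a rough, outlier-resistant baseline.
    print(f"baseline latency: {sorted(results)[1]:.3f}s per pass")
```

Run it once now, keep the number with your project notes, and rerun after the next default-model change. The delta is your answer on whether batch sizing or timeouts need adjusting.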

SHORT TAKES

  • Autodesk Project Falcon is a free browser-based 3D kitbashing tool that exports to Blender, Maya, 3ds Max, Cinema 4D, and Houdini. We covered the launch a week ago. Worth a callback: it pairs cleanly as a pre-viz layer for AI image pipelines. Chrome recommended.

  • Google Future of Filmmaking XPRIZE opened on May 5 with a $3.5M prize pool. Indie filmmakers can submit. Worth a serious look if you are building toward a calling-card short this year.

  • Blender Quick Erosion Filter is a free add-on for terrain and landscape pre-viz. Useful for AI image pipelines that need source geometry.

  • OpenAI Voice Intelligence API (May 7) is still untouched tutorial territory. Neither Theoretically Media nor Curious Refuge has posted a workflow walkthrough. Open lane for anyone willing to build the first one.

ESSENTIAL TOOLS

AI Filmmaking & Content Creation Tools Database

Check out the Alpha version of our AI Tools Database. We will be adding to it on a regular basis. Got a tip about a great new tool? Send it along to us at: [email protected]

ONE MORE THING…

Must Watch AI Video

Zombie Cleaner (丧尸清道夫) From Mx-Shell

A three-and-a-half minute AI short called Zombie Cleaner (丧尸清道夫) by a Chinese creator named Mx-Shell crossed 2.9 million views on X in three days. His native Bilibili upload of the same short sits at 816,000 plays and is still climbing. On either platform, this is one short pulling numbers that the Western AI filmmaking accounts I follow do not pull.

It is a domestic answer to Love Death + Robots. Same anthology register, same horror cuts. This is a remastered version of an earlier piece, so Mx-Shell was already iterating in public before this one broke.

The tool stack is what working creators should track. He built it inside ByteDance's Xiaoyunque "Skylark" Creator Program, an invite-only track that pairs top-tier Seedance access with native Bilibili distribution. That is a vertical integration we do not have a Western analog for. The closest comparison is Adobe Firefly inside Premiere, which is one company at the post layer, not a tool-plus-platform-plus-audience play.

The Western AI filmmaking conversation found Mx-Shell through this one viral clip. Bilibili knew him before. If you only watch X and YouTube, you are reading the wrong half of the map.

Credit: Mx-Shell. Posts natively on Bilibili. The X version that crossed 2.9M is what put him in the Western feed.

FINAL THOUGHTS

Two patterns from this week's stories.

First: the bar at the top of the industry is moving. Cannes putting AI work across two competition slots plus a high-level summit with Aronofsky on the speaker bench is the most important institutional signal of the year so far. The working AI filmmakers who shipped real work in 2025 and early 2026 are now eligible for recognition that did not exist when they started.

Second: the tooling layer is shifting from third-party fine-tunes toward in-house foundation models. Krea building KREA 2 from scratch is a leading indicator, and other platforms will make the same move once they see where their real differentiation sits. If you are evaluating tools right now, ask which ones own their stack and which ones are renting someone else's weights.

If you have shipped something this year, the runway just got longer.

See you Thursday.

Stay sharp. Keep creating.

-Larry

Stop Drowning In AI Information Overload

Your inbox is flooded with newsletters. Your feed is chaos. Somewhere in that noise are the insights that could transform your work—but who has time to find them?

The Deep View solves this. We read everything, analyze what matters, and deliver only the intelligence you need. No duplicate stories, no filler content, no wasted time. Just the essential AI developments that impact your industry, explained clearly and concisely.

Replace hours of scattered reading with five focused minutes. While others scramble to keep up, you'll stay ahead of developments that matter. 600,000+ professionals at top companies have already made this switch.

What did you think of today's newsletter?

Vote to help us make it better for you.


AIography is the AI filmmaking newsletter for filmmakers, editors, and content creators navigating the biggest technological shift since digital.

If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.

AIography may earn a commission for products purchased through some links in this newsletter. This doesn't affect our editorial independence or influence our recommendations—we're just keeping the AI lights on!
