
Welcome to Today’s AIography!
Canva just launched Canva AI 2.0, calling it "the biggest launch ever" and a generational leap powered by the Canva Design Model, the in-house foundation model they quietly shipped back in October. They also rolled out Learn Grid (free education for everyone), Canva Offline, and a new Print Shop. At the Semafor World Economy Summit today, Runway CEO Cristóbal Valenzuela pitched studios on making 50 films with what they currently spend on one $100M blockbuster. The WGA ratification vote opens today. Google quietly expanded Veo 3.1 to anyone with a Google account, 10 generations a month, free. And Anthropic dropped Opus 4.7 the same morning. Every vendor wants to own the whole stack.
In today’s AIography:
Canva AI 2.0 — Nine New Capabilities Built On The Canva Design Model
Runway CEO Pitches Hollywood: 50 Films Instead Of One $100M Blockbuster
The Labor Clock Starts Today: WGA Ratification + SAG-AFTRA Talks April 27
Veo 3.1 Goes Free For Any Google Account
SeeGen AI Launches on Seedance 2.0
AIography's AI Filmmaking & Content Creation Directory
Short Takes (including Anthropic's Opus 4.7 drop)
One More Thing…
Final Thoughts
Read time: About 8 minutes
THE LATEST NEWS
Canva AI 2.0 — Nine New Capabilities Built On The Canva Design Model
TL;DR: Canva launched Canva AI 2.0 at Canva Create 2026, positioning it as "a generational leap forward" and "our biggest launch ever." It's built on the Canva Design Model, an in-house foundation model Canva quietly launched back in October 2025, which now powers the new Canva AI across the Visual Suite (Docs, Presentations, Sheets). Canva AI 2.0 rolls up nine new capabilities: four foundational (Conversational Design, Agentic Orchestration, Memory Library, Layered Object Intelligence) plus five workflow extensions (Connectors, Scheduling, Web Research, Brand Intelligence, Canva Code 2.0). Available today in research preview via a hidden "activate superpowers" sequence on the Canva homepage. General availability rolls out over the coming weeks.
The nine capabilities:
Conversational Design. Start with a conversation, a goal, or a rough idea. Canva AI generates a fully layered design with layout, hierarchy, and brand from the first output.
Agentic Orchestration. Canva AI takes actions on your behalf, coordinating images, text, fonts, colors, and layouts across an entire design.
Memory Library. Starts with an auto-generated "About Me" Canva Doc built from your existing designs. Core memories, shared memories, and a Knowledge Base will be added over the coming months.
Layered Object Intelligence. Every generated design comes fully layered and editable, not a flat locked render. Powered by the Canva Design Model that launched in October.
Canva AI Connectors. Slack, Gmail, Google Drive, Calendar today, more apps coming. Summarize Slack conversations, build a doc from your calendar.
Scheduling. Tasks run automatically in the background, even while you're offline. Weekly social posts, overnight research, morning briefing docs.
Web Research. Research on demand or scheduled, pulled directly into your design as structured, editable content.
Brand Intelligence. Auto-applies your fonts, colors, and style from the first output. Retroactively rebrands existing designs in one pass.
Canva Code 2.0. Built in collaboration with OpenAI and Anthropic. HTML and code import and editing, interactive experiences built from a prompt, publish to your own domain with forms collecting responses in Canva Sheets.
Also launched at Canva Create 2026 (non-AI-2.0):
Learn Grid. All-in-one education destination. Thousands of ready-to-teach learning resources, 16 languages, curriculum-mapped. Launching free for everyone, not just education users. Activities, worksheets, games, whiteboards, with answers feeding straight into Canva Sheets for tracking.
Canva Offline. One of the most-requested community features ever. Make any design available offline, keep editing without signal, sync when you're back online.
Canva Print Shop. New print product, 60+ items, brand kit carries through every design, one tree planted per print order (16M trees planted to date). Canva also now matches every US and Canada print order with renewable energy from its own solar farms.
Affinity Brand Kits connected to Canva. Craft in Affinity with free pro tools (now part of the Canva family), move seamlessly into Canva as brand templates the whole business can use.
Try This Now:
Research preview opens today via a hidden "activate superpowers" sequence on the Canva homepage. Watch the keynote replay for the unlock. If you're in the Canva ecosystem, test these three things on your actual work:
How well does Layered Object Intelligence hold up when you edit ONE element deep in a generated design? The pitch is only that element changes. Try a structural edit and see what else shifts.
Does Memory Library carry brand voice AND visual style, or just visual? Train it on your work, ask for copy, see if the voice lands.
Is Canva Code 2.0 HTML import bidirectional, or is it a one-way door? Bring clean HTML in, edit in Canva, export, edit externally, import again. See what breaks.
My Take:
The actual story is the Canva Design Model. If Canva has built a foundation model trained specifically on design structure (layout, hierarchy, layered objects), that's a durable moat against every "AI plus design" competitor stacking existing image models with template libraries on top. If it's marketing language for a fine-tuned diffusion layer with better layout heuristics, the moat is thinner. We won't know for weeks. Research preview means limited access and managed demos.
Two things worth watching. First, the Connectors list. Canva is not trying to be a design tool anymore. It's trying to be where business teams start and end the day, with design as the connective surface. Slack, Gmail, Calendar. They want to sit between every workflow and the finished visual output. That's a different market than creators. Second, the OpenAI and Anthropic collaboration on Canva Code 2.0. That's Canva partnering with the largest LLM vendors to make HTML editable inside the Canva canvas. Interesting tactical move. They're not building their own frontier LLM. They're building the design layer the frontier LLMs will generate INTO.
Memory Library is the lock-in mechanism. The longer you use Canva, the more it knows you, the harder it is to leave. That's deliberate. And Learn Grid going free-for-everyone is a sneaky competitive move against every $29-to-$99/mo AI education Skool community currently being sold to teachers. Watch the rollout.
PRODUCTION ECONOMICS
Runway CEO Pitches Hollywood: 50 Films Instead Of One $100M Blockbuster
TL;DR: At the Semafor World Economy Summit this week, Runway CEO Cristóbal Valenzuela made the boldest pitch yet for AI's role in changing Hollywood's economics. His exact words: "If you're spending a hundred million dollars on making one feature film, which is 90 minutes, imagine taking a hundred million dollars and spending it on, like, 50 movies. Same quality. Same amount of output, visually. But you make way more content." Runway is valued at north of $5 billion. Valenzuela framed filmmaking as a numbers game, solving what he called "a quantity problem," and argued that more shots on goal improves the chance of a hit.
Key Takeaways:
The math: $100M divided by 50 films equals $2M per film. That's indie-feature budget territory, not studio territory
Valenzuela isn't saying AI replaces filmmakers. He's saying the greenlight-versus-skip calculus flips when per-film production cost falls by a factor of 50
Supporting data point: Doug Liman's upcoming AI feature "Bitcoin: Killing Satoshi" was budgeted at $70M, reportedly down from an estimated $300M in traditional production. Studio-quality output at a 4x reduction, not 50x. Valenzuela's number is aspirational, not demonstrated
Same "portfolio approach" logic that's driven streaming commission models for the past several years, now applied to feature production instead of TV series
Runway and competitors (Kling, Luma, Veo, Seedance) all benefit from this pitch because it positions their tools as the enabling layer
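The pitch above is really two pieces of arithmetic: a fixed budget split across more films, and "more shots on goal" raising the odds of at least one hit. A back-of-envelope sketch, under our own simplifying assumption (not Runway's) that each film is an independent bet with the same hit probability p:

```python
def per_film_budget(total: float, films: int) -> float:
    """Split a fixed production budget evenly across n films."""
    return total / films

def chance_of_at_least_one_hit(p: float, films: int) -> float:
    """P(at least one hit) if each film hits independently with probability p."""
    return 1 - (1 - p) ** films

budget = 100_000_000  # the $100M blockbuster budget from the pitch

for films in (1, 4, 50):
    print(
        f"{films:>2} films: ${per_film_budget(budget, films) / 1e6:,.1f}M each, "
        f"P(>=1 hit at p=0.10) = {chance_of_at_least_one_hit(0.10, films):.0%}"
    )
```

At an assumed 10% hit rate, one film gives a 10% chance of a hit, four films about 34%, fifty films over 99%. The independence assumption is the weak point, and it's exactly where the attention-bottleneck counterargument in the take below bites: fifty films competing for the same audience are not independent bets.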
My Take:
The 50-films-for-$100M number is a sales pitch, not a forecast. I've seen enough sizzle-reel math from the craft side to know the difference. The actual ratio on a well-run AI feature right now is closer to 4-to-1, not 50-to-1, and that's with teams who know what they're doing.
But the underlying argument still matters. Production costs dropping by 4x, or 10x, changes what a studio is willing to greenlight. It changes what an indie producer can attempt. It changes what "enough money to make a real film" means for someone working on a laptop in Lagos, Lima, or Lansing. That's the piece to watch.
The uncomfortable counter is also real. Fifty films at $2M each that nobody watches isn't a win. The bottleneck in this industry isn't supply. It's attention. More content chasing the same finite audience time is a squeeze, not a democratization. The filmmakers who come out ahead in this era are the ones who make two films people actually finish, not fifty films nobody starts. Runway's pitch optimizes for volume. The actual question is whether AI lets you make better work, not more work.
The Labor Clock Starts Today: WGA Ratification + SAG-AFTRA Talks April 27
TL;DR: The Writers Guild ratification vote on the four-year AMPTP deal opens today (April 16) and closes April 24. The contract secures $321M into the WGA health plan, raises the streaming success bonus from 50% to 75% of base residuals, bans studios from training AI on scripts, and requires notification if studios license writers' work for AI training. It does not secure payment for training data already used. SAG-AFTRA resumes AMPTP talks April 27 under a mutually agreed media blackout. DGA follows May 11. Current SAG-AFTRA contract expires June 30.
Key Takeaways:
WGA vote window: April 16-24. Needs more than 50% of rank-and-file votes. Four-year term is a concession that could complicate future above-the-line bargaining
AI win is structural: studios cannot train AI on covered scripts, and must notify the WGA if they license written material elsewhere. AI loss is financial: no payment for training data already consumed
SAG-AFTRA leverage points: AI protections, digital replica rules, the Tilly Tax proposal (a surcharge on AI-character usage). Duncan Crabtree-Ireland has called airtight AI protections a hard line
The calendar matters. WGA sets the template, SAG-AFTRA negotiates the actor version, DGA picks up in May. Any AI protection language that holds across all three becomes the industry baseline
My Take:
The WGA deal is the clearest signal yet that unions can win structural AI protections but can't yet win paid licensing for training data already consumed. "Already consumed" is where most of the real economic value sits. A studio that trained on 20 years of your scripts before contract language existed isn't paying retroactively. That horse is out of the barn.
For SAG-AFTRA, the stakes are different. Digital replica, voice replica, body and likeness use in productions that may not have cast the actor at all. Watch the timing. If the AMPTP pushes SAG-AFTRA toward a similar four-year structure, that's the industry coordinating a long horizon to lock in these terms. A shorter deal gives actors a faster renegotiation window if AI tech keeps moving at its current pace.
Veo 3.1 Goes Free For Any Google Account
TL;DR: As of April 2, anyone with a standard Google account can generate up to 10 Veo 3.1 videos per month via Google Vids. Free, 720p, 8-second clips, native audio. No subscription required, no developer account, no Gemini API access. This is the consumer rollout, and it's the genuinely new development this week. AI video with synchronized audio has crossed the free-tier threshold. The separate Gemini API paid preview (for developers building video workflows at scale) already shipped last quarter.
Try This Now:
If you've been on the fence about Veo 3.1, test it today in Google Vids with any Gmail account:
Open Google Vids and select "Create with Veo 3.1" from the generation menu
Write a prompt with explicit audio direction. "Exterior, golden hour, distant traffic, leaves rustling." Audio detail matters because audio is native
Compare the native-audio Veo output against a matched Seedance 2.0 or Kling clip that requires a separate audio pass
Track how often the native audio removes a post step versus how often you still need to layer your own SFX
Key Takeaways:
Veo 3.1 free tier is 10 generations per month, 720p, 8 seconds per clip, synchronized audio, any Google account
This is the first free-tier AI video model from a major platform with native audio. That's the real step change
The Gemini API paid preview (with Scene Extension, reference images, first-and-last-frame transitions) remains a separate developer product. Not what opened up this week
Scene Extension itself has been available in the Gemini API since October 2025. Still worth testing, and we're breaking it down in this week's Reels, but it's not this week's news
My Take:
The free tier is the story most creators will actually feel. Ten generations a month isn't a production pipeline, but it's enough to build a sample, pitch a concept, or run a direct comparison against Seedance or Kling for a specific shot. For people who've been Seedance-first because it was accessible, this changes the landing page for a new creator looking to try AI video. Google Vids now holds the "free and good enough" slot. That matters for adoption. It matters less for working pros who are already paying for a generation stack.
The thing worth watching is whether Google pushes the 10-gen cap up as a growth lever, or keeps it tight to funnel serious users toward AI Pro and Ultra. Watch the cap. Watch the quality ceiling on that cap. Two different levers, two different business models.
SeeGen AI Launches on Seedance 2.0
TL;DR: SeeGen AI launched April 15 as a creative suite built with Seedance 2.0 capabilities: text-to-video, image-to-video, frame-level editing, multi-reference workflows (images, video clips, audio, text), all wrapped around ByteDance's current benchmark-leading video model. Operated by HAOAPP HONG KONG LIMITED, founded by CEO Ethan Liu. The company plans to expand beyond Seedance to include Kling and HappyHorse models in future releases.
Key Takeaways:
SeeGen is the third major distribution layer on top of Seedance 2.0 this month, after Dreamina's Creator Partnership Program and Seedance arriving on Runway (non-US Unlimited/Enterprise only)
Product pitch: multi-reference input workflows. Guide a generation with a combination of images, video clips, audio, and text, rather than text prompts alone
Frame-level editing defines scene composition at individual frame positions. Motion and camera replication copies the movement signature from one clip into another
Ethan Liu has publicly said future releases will incorporate Kling and HappyHorse. SeeGen is betting on being a multi-model wrapper, not Seedance-exclusive
My Take:
The proliferation of wrappers on top of Seedance 2.0 is its own story. Dreamina's invite-only creator program, Seedance on Runway internationally, Seedance in ComfyUI for local pipelines, and now SeeGen. Four entry points into the same underlying model, each with its own pricing layer. Most of these wrappers end up commodified because the model is the moat, not the interface. But the early ones get the first wave of user data and creator testimonials.
What caught my eye is the explicit plan to add Kling and HappyHorse. If SeeGen lands those integrations, it becomes a single interface for the three top video models on the current leaderboard. Watch for the Kling announcement specifically. That's the harder integration to pull off and the one that makes the product defensible.
ESSENTIAL TOOLS
AI Filmmaking & Content Creation Tools Database
Check out the Alpha version of our AI Tools Database. We will be adding to it on a regular basis.
Got a tip about a great new tool? Send it along to us at: [email protected]
SHORT TAKES
Anthropic Ships Opus 4.7 The Same Morning As Canva Create: Today's stack race gets a third character. Canva claims the Creative OS layer. Google expands the free video tier. Anthropic ships the agentic engine underneath. Opus 4.7 brings measurable gains on multi-step, tool-dependent workflows, plus vision input up to 2,576 pixels on the long edge (three times prior models), with pricing unchanged from 4.6 ($5 per million input, $25 per million output). Available across Claude products, the API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. If you're building agents or running coding workflows inside your production stack, this is the engine your creative tools will be riding on in six months whether you notice it or not.
Steven Soderbergh Reaffirms "A Lot Of AI" For Spanish-American War Film: Oscar winner using AI for dream spaces and war sequences. Film with Wagner Moura in active development.
The Tencent 30B "Hunyuan" Model Rumored For April Is An Agent LLM, Not Video: If you saw 30B Hunyuan video chatter, it's a misread. That rumor is a general-purpose language model for agentic workflows. HunyuanVideo 1.5 (the actual video product) is 8.3B params on Hugging Face.
HunyuanVideo 1.5 Generates On RTX 4090 In 75 Seconds: Tencent's open-source video model runs end-to-end in 75 seconds on a single consumer GPU. Apache 2.0 license, full training code available.
Seedance 2.0 Ecosystem Expands To Four Distribution Channels In Two Weeks: Runway (non-US), ComfyUI, Dreamina CPP, now SeeGen AI. US availability still geo-blocked due to copyright disputes.
ONE MORE THING…
Video of the Week
The Video of the Week this week isn't actually a video at all. It's a valuable community tip that solves a major limitation in Seedance 2.0, posted by @MrDavids1 with the actual fix from @simeonnz.
Note to self: even the most awesome new models have issues. The ByteDance team isn't going to fix every gap themselves, and community workarounds often land faster than official patches. When the model everyone's benchmarking leaves a wall in your workflow, the answer usually shows up on X or Reddit before it shows up in the docs. Props to simeonnz for solving it and to MrDavids1 for spreading it.
Worth a click if you're working in Seedance 2.0 this week.
Every vendor this week is pitching the same thing in a different suit. Canva wants to be your entire work surface. Design, docs, research, automation, interactive publishing, spreadsheets, print, education, the works. NAB wants to be your production stack. Google wants to be your video generation layer. Anthropic wants to be the engine under all of it. Everyone wants to own everything. Fine. That's what vendors do when the underlying tech stabilizes.
The question worth carrying into Canva AI 2.0, into NAB next week, into every press release for the next month: can I swap this out in six months if something better comes along? If no, the bundle is a lock-in, not a productivity win. Memory Library is Canva's most honest move in that direction. The more you use it, the more it knows you. Which is the same thing as saying the more it knows you, the harder it is to leave.
Labor won structural protection against AI training on new work. They didn't win compensation for the training that already happened. That's the shape of a lot of AI deals ahead. "We'll stop doing the thing you didn't want us to do. We won't pay for what we already did." Actors start their version of that conversation April 27.
Meanwhile the open-source floor keeps rising. HunyuanVideo 1.5 on a card you can buy at Micro Center. Seedance 2.0 in ComfyUI. Veo 3.1 free for anyone with a Gmail account. The commercial vendors bundle up while the free tools level up. That's a squeeze. The vendors who survive are the ones who give creators genuine leverage, not fancier versions of what you already had.
Tool cycles have a shape. This is one of them. New capability, vendor expansion, bundling, consolidation, commodity at the bottom and premium at the top. We're in the bundling phase. Consolidation comes next. Keep your exit costs low. Keep your hardware current. Keep your creative judgment sharp. No matter which bundle wins, the person who knows what a good shot looks like is still the person the market pays for.
See you next week.
What did you think of today's newsletter?
If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.
AIography may earn a commission for products purchased through some links in this newsletter. This doesn't affect our editorial independence or influence our recommendations—we're just keeping the AI lights on!







