What happens when the S&P moves 3% during your commute?

We are living in volatile times. While you cannot control the state of international affairs, you can position your portfolio accordingly.

Liquid is one of the fastest-growing trading platforms, letting users trade stocks, commodities, FX, and more 24/7/365 from their phone or computer.

Welcome to Today’s AIography!

A week ago, nobody in the AI video space had heard of HappyHorse. Today it sits atop every major video generation benchmark, and the person who built it also built the previous model that held the crown. Meanwhile, Canva acquired an agentic AI company 48 hours before its biggest annual event, NAB opens this week with "agentic" on every vendor's badge, and the open source video models are getting good enough to change the math for independent creators. Busy week!

In today’s AIography:

  • HappyHorse-1.0: The Anonymous Model That Beat Everything

  • Canva Acquires Simtheory and Ortto - Agentic AI Meets the Design Platform

  • NAB 2026 Opens This Week: The AI Tools That Will Actually Ship

  • Open Source Video Models Cross the "Good Enough" Line

  • Pika 2.5: The Social-First Bet on Effects Over Realism

  • AIography's AI Filmmaking & Content Creation Directory

  • Short Takes

  • One More Thing…

Read time: About 7 minutes

THE LATEST NEWS

HappyHorse-1.0: The Anonymous Model That Beat Everything

TL;DR: An AI video model nobody had heard of appeared anonymously on the Artificial Analysis Video Arena on April 7, climbed to #1 across multiple categories, and three days later Alibaba confirmed it was theirs. The team lead? The former Kuaishou VP who built Kling AI's technology.

Key Takeaways:

  • HappyHorse-1.0 appeared without attribution on April 7 and topped the Artificial Analysis text-to-video and image-to-video rankings, beating Seedance 2.0 and Kling 3.0 in blind user voting

  • Alibaba confirmed on April 10 that HappyHorse was built by their ATH Innovation Unit, a newly formed internal team

  • The project is led by Zhang Di, former Kuaishou VP and technical architect behind Kling AI, who returned to Alibaba in November 2025

  • The model has been internally deployed on Alibaba's Bailian MaaS platform, with public API access confirmed as coming soon (no date announced)

  • Early Chinese tech press reports suggest HappyHorse will land on third-party platforms like fal.ai, which would give Western indie creators direct access

My Take:
The chess move here is worth studying. Zhang Di didn't just leave Kuaishou. He built the model that was, until recently, the benchmark king. Now he's done it again, at a different company, in what appears to be a few months. The architectural knowledge for building a top-tier video generation model is concentrating in a very small number of people, and those people move between companies faster than product cycles.

The practical question is access. Alibaba's Bailian platform isn't where most Western creators go looking for tools. The interface is in Chinese, the documentation assumes you're building enterprise apps, and the pricing structure isn't designed for individual filmmakers generating test shots. If HappyHorse lands on fal.ai or a similar third-party API, it becomes relevant to anyone running generation workflows tomorrow. If it stays locked inside Alibaba's ecosystem, it's a benchmark number that doesn't change your Tuesday.

Worth noting: HappyHorse climbed the leaderboard through blind user voting, not lab benchmarks. Real people preferred its output over Seedance 2.0 and Kling 3.0 without knowing who made it. That's harder to argue with than a self-reported metric. Watch for API availability announcements over the next couple of weeks.

Canva Acquires Simtheory and Ortto - Agentic AI Meets the Design Platform

TL;DR: Canva acquired two companies in one week: Simtheory (an agentic AI collaboration platform) and Ortto (customer data and marketing automation with 11K+ customers across 190 countries). The full reveal drops at Canva Create in Los Angeles on April 16.

Key Takeaways:

  • Simtheory lets users build custom AI assistants and orchestrate multi-model workflows within a shared workspace

  • Ortto brings customer data infrastructure and marketing automation, serving 11,000+ customers across 190 countries

  • Canva's COO said Simtheory "accelerates our evolution from a design platform with AI tools to an AI platform with design and productivity tools at its core"

  • Both acquisitions were announced April 8-9, less than a week before Canva Create (April 16, 10 AM PT)

  • Canva is promising its "biggest evolution yet" at the event, and the acquisitions suggest the keynote will center on agentic capabilities

My Take:
Read that COO quote one more time: "from a design platform with AI tools to an AI platform with design and productivity tools." That's a company telling you it's inverting its entire product hierarchy. If Canva delivers, you wouldn't go to Canva to make a poster and then optionally use AI. You'd tell an AI agent what you want and the design would be one output among several.

The Ortto acquisition might actually matter more than Simtheory for content creators. Customer data plus design plus distribution in one platform means Canva could start competing with HubSpot and Buffer, not just Photoshop. Imagine creating a social post, targeting it based on audience data, and scheduling it, all without leaving one app. That's the pitch. Whether the execution matches is what Wednesday's keynote is for.

If you're weighing decisions about your social media or marketing automation stack right now, pause. Wait until after Create. What they announce could change the math.

NAB 2026 Opens This Week: The AI Tools That Will Actually Ship

TL;DR: NAB 2026 runs April 18-22 in Las Vegas, and the pre-show announcements have a single dominant theme: agentic AI in production workflows. Footage that tags itself, newsrooms that auto-generate graphics, live production that self-directs.

Key Takeaways:

  • Avid debuts Content Core on AWS: AI-powered content intelligence that connects across editorial and news platforms without making editors leave the tools they already know

  • Eluvio unveils EVIE (Eluvio Video Intelligence Editor): inline, frame-accurate AI video analysis with agentic orchestration for title libraries and live sports

  • Cuez introduces an agentic AI framework for live production plus a story-centric newsroom that auto-generates graphics from story metadata

  • Ross Video and HighField AI team up for context-aware graphics, where the system reads the story and produces matching lower thirds and full-screens

  • DigitalGlue shows creative.space: natural-language footage search that replaces keyword-based MAM with queries like "show me the wide shot of the bridge at sunset"

  • Panasonic brings AI-tracking PTZ cameras that follow subjects on their own, no dedicated operator needed

My Take:
NAB is where you separate what's actually shipping from what's still a conference slide. This year the vendors have collectively decided "agentic" is the word that sells. Eluvio doing frame-accurate analysis on title libraries, Cuez handling live production decisions, everybody pitching the same thing: the AI doesn't just assist, it acts. Whether it actually does what it says on the badge is a different question, and that's what the show floor is for.

Two announcements are worth watching closest if you work in post. Avid's Content Core running on AWS means cloud-native editorial intelligence in the workflows you already use. Not another bolt-on subscription you have to sell your supervisor on. Just smarter media management inside the NLE you're already fighting with. Eluvio's frame-accurate analysis is the other one. AI that actually understands timecode and can work at the frame level across entire title libraries. If you've ever spent three hours manually tagging footage for a sports recap, you know why that matters.

Both of these are products with release dates and pricing, not research demos. If you're at NAB, those booths first. If you're not, watch for hands-on reviews from the floor starting Friday.

Open Source Video Models Cross the "Good Enough" Line

TL;DR: Three open source video models have crossed the quality threshold where they're real alternatives to paid APIs: LTX 2.3, Wan 2.2, and HunyuanVideo 1.5. Each has different strengths. All are free to run locally. And HappyHorse's potential release could leapfrog the lot.

Key Takeaways:

  • LTX 2.3 (Lightricks) generates about twice as fast as Wan 2.2 in typical settings (and up to 8-18x faster depending on configuration), supports native 4K with audio, and handles 9:16 portrait/social content well

  • Wan 2.2 (Alibaba) has the best motion realism of any open source model, and recently got instruction-based video editing via Wan 2.7 VideoEdit

  • HunyuanVideo 1.5 (Tencent) runs 8.3 billion parameters on as little as 14GB of VRAM, so it works on consumer hardware like gaming PCs

  • All three run locally on a Mac Studio or NVIDIA workstation at zero cost per generation after the hardware purchase

  • HappyHorse-1.0's team has confirmed API access is coming, but no open source weight release yet. If Alibaba does release weights, it would immediately become the strongest free option

Try This Now:
If you have an NVIDIA GPU with 12GB+ VRAM or a Mac with 16GB+ unified memory, these run today:

  1. Install ComfyUI (free, open source), the standard workflow engine for local AI generation

  2. Download LTX 2.3 for speed, or Wan 2.2 for better motion

  3. Start with image-to-video: grab a still frame from your edit timeline and generate a 4-second extension

  4. Expect ~30 seconds per clip on LTX 2.3, ~60 seconds on Wan 2.2, both on an RTX 4090

  5. Seedance 2.0 also just arrived in ComfyUI this week if you want to compare against the current benchmark leader
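To get a feel for what those per-clip timings mean in practice, here's a quick back-of-envelope sketch of session length. The seconds-per-clip figures are the RTX 4090 estimates quoted in the steps above, not measured benchmarks, and the 50-clip batch is just an illustrative iteration run:

```python
# Rough wall-clock estimate for a local iteration session, using the
# per-clip timings quoted above (RTX 4090 estimates, not benchmarks).
SECONDS_PER_CLIP = {"LTX 2.3": 30, "Wan 2.2": 60}

def session_minutes(model: str, variations: int) -> float:
    """Total generation time in minutes for a batch of variations."""
    return SECONDS_PER_CLIP[model] * variations / 60

# A 50-variation iteration batch on each model:
for model in SECONDS_PER_CLIP:
    print(f"{model}: {session_minutes(model, 50):.0f} min for 50 clips")
```

So a 50-shot exploration pass is roughly a coffee break on LTX 2.3 and closer to an hour on Wan 2.2, which is worth knowing before you pick a model for high-volume iteration.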

My Take:
Here's where the economics get interesting. The paid APIs (Runway, Kling, Pika) charge per generation. Fine when you're testing or pulling one hero shot for a project. But if you're iterating through 50-100 variations trying to find the right movement, the right lighting, the right timing on a camera push, those per-generation charges add up in a hurry. I've talked to creators running $200-$300 monthly API bills just on generation costs alone.

Open source on local hardware costs you nothing per generation after the upfront buy. A decent NVIDIA RTX 4070 Ti Super runs about $800. That gets you unlimited generations, no monthly ceiling, no throttling during peak hours.

The gap between open source and closed model quality has been narrowing for months. LTX 2.3 is fast enough for actual production workflows, not just weekend experimentation. Wan 2.2's motion realism is approaching what you'd expect from a paid service. None of this makes Runway or Kling irrelevant - their interfaces, character consistency controls, and director-style features still matter plenty when you need precision. But for raw generation quality when you're budget-conscious or need high-volume iteration, the free options crossed the "good enough" line.
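The break-even arithmetic is easy to sanity-check. A minimal sketch, assuming the ~$800 GPU price and the $200-$300 monthly API bills quoted above (your own numbers will differ):

```python
# Break-even point for a one-time local GPU purchase vs. ongoing
# per-generation API spend. Figures are the estimates quoted in the
# text, not universal pricing.
GPU_COST = 800               # approx. RTX 4070 Ti Super, USD
API_BILLS = (200, 300)       # reported monthly API spend range, USD

def breakeven_months(gpu_cost: float, monthly_bill: float) -> float:
    """Months of API spend it takes to equal the one-time GPU cost."""
    return gpu_cost / monthly_bill

for bill in API_BILLS:
    print(f"${bill}/mo -> breaks even in {breakeven_months(GPU_COST, bill):.1f} months")
```

At those spend levels the card pays for itself in roughly three to four months, which is why the local option starts making sense specifically for high-volume iteration rather than occasional hero shots.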

Source: PikaLabs

Pika 2.5: The Social-First Bet on Effects Over Realism

TL;DR: While most AI video companies chase cinematic realism, Pika 2.5 doubles down on Pikaffects - one-click creative transformations that melt, inflate, crush, or LEGO-ify objects in video. The play: own the "scroll-stopper" niche.

Key Takeaways:

  • Pikaffects are one-click video effects: select an object, pick a transformation (melt, inflate, crush, LEGO-ify, explode), and Pika applies it with consistent physics

  • Pika 2.5 is faster and more prompt-accurate than previous versions

  • The positioning is deliberately social-first: 3-5 second hooks for TikTok, Instagram Reels, and YouTube Shorts

  • While Runway, Kling, Seedance, and now HappyHorse all compete on cinematic realism, Pika carved out the viral effect niche with minimal direct competition

  • Free tier available, so it's the lowest bar to clear for creators wanting to test AI video effects

Try This Now:

  1. Go to pika.art and sign in (free tier works)

  2. Upload a short video clip of any product, object, or scene

  3. Select an object and choose "Inflate" or "Melt" from the Pikaffects menu

  4. Use the 3-5 second result as an opening hook for a Reel or Short, before cutting to your actual content

  5. Takes about 90 seconds from upload to export. The surreal transformation grabs attention before the viewer's thumb can scroll.

My Take:
Pika looked at a market where everyone was racing toward photorealism and went the other direction. LEGO-ify your coffee cup. Melt a sneaker. Inflate a car. None of it is "cinematic" in any traditional sense, but open Reels for five minutes and you'll see why it works. Three seconds of "wait, what?" before your actual message is worth more than thirty seconds of perfectly realistic footage that reads as stock video. Nobody stops scrolling for realism. They stop for the unexpected. It's a narrower market than "replace your camera," but Pika can own it. And when every other company is fighting over the same benchmark charts, having a niche to yourself is underrated. I'd rather be the only company doing one thing well than the sixth-best company doing the same thing as everyone else.

ESSENTIAL TOOLS

AI Filmmaking & Content Creation Tools Database

Check out the Alpha version of our AI Tools Database. We will be adding to it on a regular basis.

Got a tip about a great new tool? Send it along to us at: [email protected]

SHORT TAKES

ONE MORE THING…

Video of the Week

MiniMax partnered with the Lisbon Loras collective to see what happens when you put professional European filmmakers in a room with Hailuo AI's Director models. The resulting work blends Portuguese visual storytelling with AI-generated imagery, using Hailuo 2.3's camera and narrative controls to maintain directorial intent instead of surrendering it to the algorithm.

What's worth paying attention to here isn't the technical quality (though it holds up). It's the attitude. These are filmmakers who came into the room with shot lists, storyboards, and a specific visual language they wanted to achieve. They used the AI tools the way they'd use a crane or a dolly or a particular lens, as equipment in service of a vision they already had. That's where AI filmmaking stops being a curiosity and starts being craft.

The distinction matters. "I generated this" is a tech demo. "I directed this, and the generation was one of the tools I used" is filmmaking.

FINAL THOUGHTS


A model nobody was tracking showed up last week and beat everything that had a name. The person who built it had already done it once before, at a different company, with a different model. Three days of speculation. Then Alibaba raised their hand.

I keep coming back to the timeline. Zhang Di left Kuaishou, went to Alibaba, and had a new model topping the charts in roughly four months. That's not a product cycle. That's barely enough time to set up your desk and get your badge photo taken. The iteration speed in AI video generation has outrun every framework we use to measure it.

And that's just the closed models. On the open source side, three models that didn't exist 12 months ago can now generate production-quality video on hardware you buy at Micro Center. The paid services still have better interfaces and consistency controls. But the gap in raw output quality is shrinking, and it's shrinking fast.

People keep asking me which model to learn. Wrong question. Learn the workflow. Learn how to evaluate what comes out the other end. Learn what a good shot looks like, what good pacing feels like, how to tell a story in 60 seconds or 60 minutes. The model will change. It changed this week. It'll change again. What you bring to it won't.

NAB opens Friday. The vendors will say "agentic" until the word loses all meaning. Some of what they're showing will actually ship. Avid's Content Core looks real. Eluvio's frame-accurate analysis looks real. The test is always the same: whether it survives contact with an actual deadline.

Canva's Create event is Wednesday. If they deliver on the agentic promise, the social media toolkit conversation changes for everyone who makes content. If they don't, it's another keynote with a countdown timer.

I've been in post long enough to know what happens when the tools change underneath you. Avid replaced the Moviola and the Steenbeck. Final Cut challenged Avid. Premiere came back from the dead. Every time, the editors who survived were the ones who understood story structure and pacing, not the ones who'd memorized keyboard shortcuts for one specific application. The tools changed. The craft didn't.

AI video generation is the same pattern at ten times the speed. The model on top today won't be on top next month. If your workflow depends on one model staying there, you're building on sand. If you can swap the generation engine without relearning everything around it, you're building on something that lasts.

The leaderboard reshuffled this week. It'll reshuffle again. The question isn't who's on top. It's whether you've built a workflow that doesn't care.

See you Thursday.

What did you think of today's newsletter?

Vote to help us make it better for you.


If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.

AIography may earn a commission for products purchased through some links in this newsletter. This doesn't affect our editorial independence or influence our recommendations—we're just keeping the AI lights on!
