Runway Breaks New Ground With Emotionally Expressive Characters
Film Tech Leaps Forward with New AI Tools: Five Big Developments Changing Production
Welcome to Today’s AIography!
This week is all about breakthroughs: Runway bringing emotion to AI characters, Adobe revolutionizing sound design, and Meta making waves with both CoTracker3 and its Blumhouse partnership. Plus, Krea's new platform might just solve our multi-platform headaches. Finally, don't miss our exciting announcement about AIography's new community platform!
In today’s AIography:
Runway Announces “Act One,” Brings Emotional Range to AI Characters
Blumhouse Partners with Meta to Test AI-Generated Short Films
Adobe’s Project Supersonic, First Text-to-Sound Tool for Editors
ANNOUNCEMENT - The Future of Filmmaking is HERE
Meta's CoTracker3: Advanced Motion Tracking for Modern VFX
Krea Unifies Leading AI Video Models Under Single Platform
Essential Tools
Short Takes
One More Thing…
Read time: About 5 minutes
THE LATEST NEWS
TL;DR: Runway's Act One is a new AI tool that lets filmmakers create characters with deeply expressive, human-like emotions, going beyond current capabilities. Incorporating features similar to Live Portrait, Act One brings emotional depth to full-body character performance, offering unprecedented control over AI-generated sequences. Take a look at this character-driven scene.
Key Takeaways:
Emotionally Expressive Characters: Act One lets characters exhibit realistic, full-body emotion, a significant leap in AI's ability to generate lifelike performances. Unlike Live Portrait, which focuses primarily on faces, it extends well beyond facial expressions.
Cinematic-Quality Visuals: The tool empowers filmmakers to craft highly detailed and dynamic scenes, rivaling the visual quality of traditional film production.
Impact on Storytelling: By simplifying the technical execution of complex sequences, Act One allows filmmakers to focus on narrative depth, enabling richer and more immersive storytelling.
Fusion of AI and Traditional Techniques: Act One seamlessly blends AI’s efficiency with human-like creativity, enhancing workflows without sacrificing artistic integrity. It offers more flexibility than Live Portrait, which focuses on animating portraits rather than full-body actions and interactions.
Accessible High-End Production: The tool democratizes high-quality production, making it accessible to independent filmmakers and smaller studios, breaking down barriers to creating professional-grade content.
Why It’s Important:
Act One is set to transform the filmmaking landscape by bridging the gap between AI-generated sequences and human-like emotional performance. Unlike Live Portrait, which animates still images of faces, Act One allows for emotion-driven action in characters beyond facial expressions, revolutionizing the way directors handle emotional depth and action sequences. As AI becomes a creative collaborator, filmmakers of all levels will find new ways to push the boundaries of storytelling and production.
TL;DR: Blumhouse Productions has partnered with Meta to explore AI-generated filmmaking through Meta's Movie Gen tool. Notable filmmakers including Casey Affleck and Aneesh Chaganty are creating short films using the system, providing critical feedback before its planned wider release. The collaboration marks one of the first major studio experiments with AI-generated content.
Key Takeaways:
Professional Testing: Established filmmakers are evaluating Meta's Movie Gen tool in real production environments, providing insights into its practical applications.
Creative Exploration: The system allows generation of 16-second video sequences from text prompts, enabling rapid visualization of creative concepts.
Industry Feedback: Meta is actively incorporating filmmaker input to refine the technology before its anticipated 2025 public release.
Production Integration: The experiment explores how AI generation tools might complement traditional filmmaking processes rather than replace them.
Studio Investment: Blumhouse's involvement signals growing interest from major production companies in AI's potential role in filmmaking.
Why It's Important:
This partnership represents one of the first systematic evaluations of AI video generation by a major production company. By involving established filmmakers in the development process, Meta and Blumhouse are working to ensure the technology addresses actual production needs rather than theoretical use cases. The results of this experiment could influence how AI tools are integrated into professional filmmaking workflows.
TL;DR: Adobe's Project Supersonic introduces sophisticated text-to-audio generation directly within video editing timelines. Moving beyond basic sound libraries, this tool allows editors and sound designers to generate context-aware audio effects at professional quality levels. The technology shows particular promise in its ability to understand visual context and generate appropriate sonic elements.
Key Takeaways:
Timeline Integration: The system generates sound effects directly within editing software, eliminating traditional sound library searches and streamlining the audio design workflow.
Context Recognition: Advanced object recognition automatically identifies visual elements requiring sound, suggesting appropriate audio options based on scene content.
Quality Output: Generated audio maintains professional standards at 48kHz, meeting broadcast and theatrical requirements without additional processing.
Iterative Design: Multiple sound variations can be generated for each prompt, allowing sound designers to explore different approaches efficiently.
Vocal Sketching: An innovative feature allows sound designers to roughly vocalize desired effects, which the system then translates into polished audio.
Why It's Important:
Project Supersonic represents a fundamental shift in sound design workflow. Rather than merely offering another sound library, it provides a generative tool that understands context and responds to specific needs. This could significantly impact both high-end productions, where it might speed up initial sound design passes, and independent projects, where access to extensive sound libraries has traditionally been a limiting factor. The technology's ability to maintain quality while offering creative flexibility suggests a future where sound design becomes more iterative and experimental without sacrificing professional standards.
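Project Supersonic is still an Adobe MAX "Sneak" with no public API, so nothing below calls Adobe's tool. As a rough sketch of the underlying text-to-sound-effect idea, here's how Meta's open AudioGen model (via the audiocraft library) turns a prompt into multiple effect variations; the prompt and file names are illustrative, and AudioGen outputs 16 kHz audio rather than the 48 kHz Supersonic reportedly targets.

```python
# Sketch of prompt-to-sound-effect generation using Meta's open AudioGen
# model via the audiocraft library: a stand-in for Adobe's unreleased
# Project Supersonic, not Adobe's actual tool. Requires: pip install audiocraft
from audiocraft.models import AudioGen
from audiocraft.data.audio import audio_write

model = AudioGen.get_pretrained("facebook/audiogen-medium")
model.set_generation_params(duration=4)  # seconds per generated clip

# Generate three variations of one effect, mirroring the iterative
# "multiple sound variations per prompt" workflow described above.
prompts = ["glass shattering on a concrete floor"] * 3
wavs = model.generate(prompts)  # tensor of shape [batch, channels, samples]

for i, wav in enumerate(wavs):
    # AudioGen outputs 16 kHz audio; broadcast delivery at 48 kHz would
    # still need resampling in a DAW or a separate resampler.
    audio_write(f"glass_take_{i}", wav.cpu(), model.sample_rate,
                strategy="loudness")
```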
ANNOUNCEMENT
The Future of Filmmaking Is Here - AIography - Your Hub for News, Tools, Learning, and More
We're excited to announce that we've started building a new community on the Skool platform, dedicated to all things AI filmmaking and creative media! You can find it here: AIography - AI Filmmaking Academy. The group isn't public yet, but it's free to join if you'd like to be part of this growing space.
We’ve also created a short, confidential survey, and we’d love your feedback! By filling it out, you’ll help shape this community into a valuable resource for filmmakers and creators looking to stay on top of the latest AI tools and trends. Your input will allow us to tailor the experience and ensure it’s as engaging and helpful as possible.
If you’re interested, please fill out the survey here: Join the Survey. Thanks in advance for your support! We can’t wait to build something incredible together.
Image: Meta Research
TL;DR: Meta's CoTracker3 represents a substantial advancement in point tracking technology, particularly in its ability to handle real-world footage through pseudo-labeling techniques. The system maintains tracking through complex occlusions and varying lighting conditions, addressing long-standing challenges in motion tracking for visual effects.
Key Takeaways:
Real-World Performance: Unlike previous systems trained primarily on synthetic data, CoTracker3 excels with actual production footage through innovative pseudo-labeling techniques.
Occlusion Handling: The system maintains consistent tracking when objects move behind others or temporarily disappear from frame, reducing the need for manual cleanup.
Architectural Efficiency: Streamlined design eliminates redundant processes found in older tracking systems, resulting in faster processing without sacrificing accuracy.
Dual-Mode Operation: Both real-time and offline tracking modes allow flexibility between immediate feedback and higher-precision results.
Scale Adaptability: The system performs consistently across both micro-movements and large-scale motion, making it suitable for various tracking tasks.
Why It's Important:
Motion tracking forms the foundation of modern visual effects work, and CoTracker3's improvements in handling real-world footage could significantly reduce the manual intervention typically required. While tracking technology has existed for decades, the ability to maintain accuracy through occlusions and lighting changes addresses key pain points in production pipelines. This could particularly impact projects where traditional tracking solutions require extensive artist cleanup.
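Unlike most tools in this issue, CoTracker3 is open source, so you can kick the tires today. Here's a minimal sketch based on the model's published torch.hub interface; the clip path and grid size are placeholders.

```python
# Minimal CoTracker3 point-tracking sketch using Meta's published
# torch.hub interface. Requires: pip install torch imageio[ffmpeg]
import torch
import imageio.v3 as iio

device = "cuda" if torch.cuda.is_available() else "cpu"

# "shot.mp4" is a placeholder path; any short clip works.
frames = iio.imread("shot.mp4", plugin="FFMPEG")         # (T, H, W, 3) uint8
video = torch.tensor(frames).permute(0, 3, 1, 2)[None]   # (B, T, C, H, W)
video = video.float().to(device)

# Offline mode processes the whole clip for higher precision; swap in
# "cotracker3_online" for the streaming, near-real-time variant.
cotracker = torch.hub.load("facebookresearch/co-tracker",
                           "cotracker3_offline").to(device)

# Track a regular grid of points across the clip. The visibility output
# flags frames where a point is occluded, which is what lets tracks
# survive objects passing in front of one another.
pred_tracks, pred_visibility = cotracker(video, grid_size=10)
print(pred_tracks.shape)       # per-point xy coordinates for every frame
print(pred_visibility.shape)   # per-point visibility flags
```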
"every video model in one place. we partnered with the top AI video providers to bring the best video models into Krea. now you can create with @Hailuo_AI #MiniMax, @LumaLabsAI, @runwayml, @pika_labs, and @Kling_ai. x.com/i/web/status/1…"
— KREA AI (@krea_ai), 10:07 AM • Oct 17, 2024
TL;DR: Krea has integrated AI video generation models from Hailuo, Luma AI, Runway, Pika Labs, and KlingAI into a unified platform. This consolidation eliminates the need for creators to switch between multiple applications, marking a significant step toward streamlining AI video production workflows.
Key Takeaways:
Comprehensive Integration: The platform incorporates major AI video models including Hailuo, Luma AI, Runway, Pika Labs, and KlingAI in one interface.
Workflow Efficiency: Users can access and compare outputs from different AI models without leaving the platform or managing multiple subscriptions.
Creative Flexibility: The range of available models allows creators to leverage each system's unique strengths for different aspects of their projects.
Resource Management: Consolidated access potentially reduces the overall cost compared to maintaining separate subscriptions to multiple platforms.
Standardized Interface: A unified control scheme helps creators focus on creative decisions rather than learning multiple user interfaces.
Why It's Important:
This consolidation addresses a growing challenge in AI video production where creators often need to master and maintain multiple platforms to access the best tools for specific tasks. While individual AI video models excel in different areas, having them accessible through a single interface could significantly reduce technical overhead and allow creators to focus on creative decisions rather than platform management.
ESSENTIAL TOOLS
Tools to Check Out
Our essential tools page/database is still under construction. Until then, check out and bookmark the following pages.
RunwayML - The first mover and leading vendor of AI video generation and editing tools. Take a look at their new Gen-3 model.
Luma Dream Machine - Another powerful AI video generator with lots of features.
PikaLabs - The closest competitor to Runway, and gaining fast.
Midjourney - Leader in still image generation.
Pixverse - Another good video generator. Simple to use.
Hedra - Generate expressive and controllable human characters
Krea.ai - Multiple AI generation models in one platform
ElevenLabs - Powerful AI voice generator
Suno - Currently considered the best AI music generator
Udio (beta) - Neck and neck with Suno for music generation
Claude 3.5 Sonnet - Anthropic's new model that's taking the chatbot world by storm.
ChatGPT - Well, you probably already know and are using this one.
SHORT TAKES
Haiper Releases Version 2.0 of its Video Generator
Sneak Peeks from Adobe MAX 2024: Innovations to Watch for Creative Professionals
Kaiber Launches Superstudio: A Unified AI Platform for Image and Video Creation
Suno Introduces 'Suno Scenes': Visual Storytelling Meets AI Music Creation
Claude 3.5 Major Update: AI Can Now Control Software Directly
ONE MORE THING…
This week's standout video is a stunning 30-second spec ad for Nike, crafted entirely with AI by French director Alexandre Tissier. Made in October 2024, the piece blends cutting-edge creativity and technology, showcasing how AI can revolutionize ad production. Using tools like Midjourney, RunwayML, Adobe, OpenAI, ElevenLabs, and more, Tissier delivers a visually striking and conceptually sharp piece that pushes the boundaries of what's possible in AI-driven filmmaking.
Be sure to check out the directors' work:
Instagram: Alexandre Tissier and Nitch Hiltgen.
For more, visit Alexandre Tissier's website.
What did you think of today's newsletter? Vote to help us make it better for you.
If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.
AIography may earn a commission for products purchased through some links in this newsletter. This doesn't affect our editorial independence or influence our recommendations—we're just keeping the AI lights on!