New Breakthroughs: From Adobe’s Firefly to Strada’s AI Search
The latest tools redefining video production & post.
Welcome to Today’s AIography!
Once again, we’ve packed this week’s newsletter with the latest news in AI and filmmaking—so much so that it was tough to fit everything in! From game-changing tools like Adobe’s Firefly Video Model to ethical debates in documentary filmmaking, there’s plenty to dive into. As always, we hope you enjoy reading, and don’t hesitate to send your feedback—we’d love to hear from you!
In today’s AIography:
Adobe's Firefly Video Model: Revolutionizing Video Editing with AI
OpenAI Unveils o1: A New Era of AI Reasoning and Problem-Solving
Strada Promises to Change Video Workflow with AI-Powered Search and Collaboration
Runway’s Gen-3 Alpha: Transforming Ordinary Videos into AI-Enhanced Spectacles
Flawless AI Unveils DeepEditor: Transforming Video Editing with AI
Essential Tools
Short Takes
One More Thing…
Read time: About 5 minutes
THE LATEST NEWS
Adobe's Firefly Video Model: Revolutionizing Video Editing with AI
TL;DR: Adobe is rolling out the Firefly Video Model later in 2024, adding AI-driven enhancements to video editing in Premiere Pro. This new generative AI tool allows editors to create B-roll, extend clips, and generate visual effects using only text prompts, offering a more streamlined and creative editing experience.
Key Takeaways:
Global Rollout: The Firefly Video Model will be introduced in beta towards the end of 2024, following the success of Adobe’s image-focused Firefly models.
AI-Generated Footage: Editors will be able to generate B-roll footage to fill gaps in their timelines or create entirely new sequences with just a few text prompts, saving time in sourcing stock footage.
Generative Extend Feature: A notable feature called Generative Extend will help editors seamlessly extend video clips to match timing needs without awkward transitions or reshoots.
Commercially Safe Content: Adobe says all AI-generated content is based on legally approved sources, minimizing the risk of copyright issues for professional filmmakers and content creators.
Why It’s Important: This tool represents a leap forward in AI-enhanced post-production, particularly for editors pressed for time or budget. By generating footage and extending clips within the editor on demand, Adobe is positioning AI as an essential tool for enhancing creativity and reducing manual labor in video editing. While this could lead to faster production cycles, it also raises questions about the future role of traditional filming for stock or B-roll footage.
OpenAI Unveils o1: A New Era of AI Reasoning and Problem-Solving
TL;DR: OpenAI has introduced the o1 model (aka “Strawberry”), designed to tackle more complex problem-solving in areas like science, coding, and mathematics. The model’s goal is to “think” more like humans, spending more time on complex reasoning tasks before delivering answers. There are two versions: the more robust o1-preview and the faster, streamlined o1-mini.
Key Takeaways:
Global Release: The o1 models are now accessible for ChatGPT Plus, Team, Enterprise, and API users, offering a range of problem-solving enhancements across industries.
Advanced Problem-Solving: These models excel at tasks that require deep reasoning and human-like strategies, including coding, mathematical proofs, and even scientific theory crafting.
Two Options: The o1-preview version offers a more powerful reasoning engine, while the o1-mini version is tailored for tasks requiring quick responses, especially in coding, making it more cost-efficient.
Enhanced Safety Measures: OpenAI has also equipped these models with advanced safety features to reduce risks like jailbreaking and improper use, ensuring a more secure AI experience for users.
Why It’s Important: For filmmakers and creative professionals, this model opens up new possibilities for integrating AI into more technical aspects of production, like coding or complex video effects simulations. The ability to solve complex problems more like a human could lead to innovations in film technology, pushing the boundaries of what AI can do in the creative process.
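For readers curious about the API access mentioned above, here is a minimal sketch of what a chat-completion request body for an o1 model might look like. The model name and message shape are assumptions based on OpenAI's announced interface, and early o1 releases reportedly do not accept a system message, so only a user turn is included; this builds the JSON payload without sending any request.

```python
import json


def build_o1_request(prompt: str, model: str = "o1-preview") -> str:
    """Build a JSON request body for OpenAI's chat completions endpoint.

    Note: o1 models reportedly ignore/reject "system" messages in early
    releases, so only a single "user" message is included here.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)


# Example: a prompt a filmmaker might send to the reasoning model.
body = build_o1_request("Outline a shot list for a 30-second product video.")
print(body)
```

The returned string could then be POSTed to the chat completions endpoint with an API key; parameters like temperature are omitted since o1's early API reportedly restricts them.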
Strada Promises to Change Video Workflow with AI-Powered Search and Collaboration
TL;DR: Strada has opened its beta program, introducing a powerful AI-driven platform that transforms video search and collaboration. The tool analyzes footage, generating metadata for things like faces, objects, actions, and even emotions, which makes it easier to locate clips in a sea of video content. Strada integrates seamlessly with popular NLEs (non-linear editors), offering creative teams a more efficient and organized post-production workflow.
Key Takeaways:
AI-Powered Video Search: Strada uses advanced AI to automatically tag footage with detailed metadata, such as recognizing faces, locations, objects, and even emotions, significantly reducing the time spent on logging and sorting clips.
Seamless Integration: The platform exports its AI-generated metadata directly into popular video editing software like Adobe Premiere Pro, Avid Media Composer, DaVinci Resolve, and Final Cut Pro, allowing editors to work efficiently within their existing workflows.
Collaboration-Friendly: Strada enables multiple team members to collaborate in real time, improving communication between editors, directors, and producers, which speeds up the decision-making process and eliminates redundancies.
Massive Time Savings: Early adopters of Strada report substantial time savings—up to five days a month—due to the AI’s ability to quickly find specific clips, making it a game-changer for large-scale productions with complex footage libraries.
Why It’s Important: Strada is poised to make a significant impact on video post-production workflows by solving one of the industry's most frustrating problems: sifting through hours of footage to find the perfect clip. By automating the logging process, this tool frees editors and creative teams to focus on what really matters—telling the story. For professionals in the filmmaking and video production space, adopting Strada could mean drastically cutting down on tedious tasks while enhancing overall productivity and collaboration.
Runway’s Gen-3 Alpha: Transforming Ordinary Videos into AI-Enhanced Spectacles
TL;DR: Runway's new Gen-3 Alpha model introduces a video-to-video tool that can transform existing footage using AI. Through simple text prompts, filmmakers can alter video settings, characters, or even entire scenes, opening up a world of creative possibilities in post-production.
Key Takeaways:
AI Video Editing: This feature lets users input text prompts to make sweeping changes to their video content—whether it’s adjusting the lighting, altering actors' performances, or changing the entire setting of a scene.
Full Video Suite: Gen-3 Alpha completes Runway's suite of tools, which already included text-to-video and image-to-video capabilities, making it a one-stop shop for AI-enhanced video creation.
Precise Controls: With new features like Motion Brush and Advanced Camera Controls, editors can fine-tune specific elements, achieving more precise results and greater creative control.
Short Clip Limitations: For now, AI-generated clips are capped at 10 seconds in length, so we’re not quite at the stage of creating full-length films this way—yet.
Why It’s Important: For filmmakers and post-production pros, Runway’s Gen-3 Alpha represents a potential game-changer in video editing. The ability to reshape footage with minimal effort could significantly streamline the editing process, saving time and resources. However, it also raises important questions about the role of traditional editing and visual effects in an increasingly AI-driven workflow.
Flawless AI Unveils DeepEditor: Transforming Video Editing with AI
Image: Flawless
TL;DR: Flawless AI has launched the beta version of DeepEditor, an AI-powered video editing (VFX) tool that allows users to adjust facial expressions, lip-sync movements, and even change languages in video content—all without reshoots or heavy visual effects work. The tool offers a range of innovative features, from expression adjustments to language localization, making it a versatile solution for filmmakers and editors alike.
Key Takeaways:
AI-Driven Facial Editing: DeepEditor enables precise manipulation of facial expressions and lip movements within existing footage, allowing editors to refine performances or make post-production adjustments without the need for reshoots.
Language Localization: The tool can sync lip movements to dubbed audio tracks, making it easier to localize content for international markets while preserving performance authenticity.
Versatile Modifications: Additional features like expression adjustments, age modification, and text-to-speech capabilities provide a wide range of creative options for post-production teams.
Cost and Time Efficiency: DeepEditor has the potential to significantly cut costs and save time by eliminating the need for complex visual effects or additional shoots, making it a valuable tool for film and TV editors.
Why It’s Important: DeepEditor introduces an exciting new level of flexibility in video post-production, allowing editors to fine-tune performances or localize content for different regions without extensive reshoots or heavy visual effects. For filmmakers and post-production professionals, this tool could streamline workflows and reduce costs, all while maintaining creative control over the final product. However, it also raises ethical questions about the manipulation of actor performances, something editors and filmmakers will need to carefully consider as AI technology evolves.
ESSENTIAL TOOLS
Tools to Check Out
Essential tools page/database still under construction. Until then, check out and bookmark the following pages.
RunwayML - The first mover and leading vendor of AI video-generation and editing tools. Check out their new Gen-3 feature.
Luma Dream Machine - Another powerful AI video generator with lots of features.
PikaLabs - The closest competitor to Runway, and coming on fast.
Midjourney - The leader in still-image generation.
Pixverse - Another good video generator. Simple to use.
Hedra - Generate expressive and controllable human characters.
ElevenLabs - Powerful AI voice generator.
Suno - Currently considered the best AI music generator.
Udio (beta) - Neck and neck with Suno for music generation.
Claude 3.5 Sonnet - Claude’s new model that’s taking the chatbot world by storm.
ChatGPT - Well, you probably already know and are using this one.
SHORT TAKES
Vidu AI: China’s New Contender in Text-to-Video Generation
LA Tech Week’s Culver Cup: AI Film Challenge with a $100K Prize
Cate Blanchett Sounds Alarm on AI at TIFF: A Call for Imagination in Innovation
Runway's Vision for General World Models: Expanding AI's Understanding of Reality
Navigating the AI Frontier: Documentarians Grapple with Ethical Use of AI
AI in TV Production: Revolutionizing Content Creation and Viewer Experience
Fran Drescher's Stand Against AI: SAG-AFTRA’s Fight for Actors’ Rights
ONE MORE THING…
This week’s film is an entry in the Runway Gen:48 3rd Edition AI Film Competition. It’s a visually striking piece from filmmaker YZA Voku. Winners of this edition of Gen:48 will be announced this Friday, Sept 20, at 9am on the Gen:48 website. We look forward to seeing what others have created!
What did you think of today's newsletter? Vote to help us make it better for you.
If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.
AIography may earn a commission for products purchased through some links in this newsletter. This doesn't affect our editorial independence or influence our recommendations—we're just keeping the AI lights on!
🦾 Master AI & ChatGPT for FREE in just 3 hours 🤯
1 Million+ people have attended, and are RAVING about this AI Workshop.
Don’t believe us? Attend it for free and see it for yourself.
Highly Recommended: 🚀
Join this 3-hour Power-Packed Masterclass worth $399 for absolutely free and learn 20+ AI tools to become 10x better & faster at what you do
🗓️ Tomorrow | ⏱️ 10 AM EST
In this Masterclass, you’ll learn how to:
🚀 Do quick Excel analysis & make AI-powered PPTs
🚀 Build your own personal AI assistant to save 10+ hours
🚀 Become an expert at prompting & learn 20+ AI tools
🚀 Research faster & make your life a lot simpler & more…