TLDR: Adobe Firefly has introduced significant updates, including a new ‘Generate Sound Effects’ feature, enhanced video capabilities with improved motion fidelity and advanced controls, and expanded integration with leading third-party AI models. These advancements aim to provide creators with more precise and versatile generative AI tools for video production, from visual composition to immersive audio.
Adobe Firefly has rolled out a comprehensive suite of new features and integrations, significantly bolstering its generative AI capabilities for video creators. The updates, announced on July 17, 2025, focus on enhancing both visual and auditory aspects of video production, making the platform more robust and versatile.
One of the standout additions is the ‘Generate Sound Effects’ feature, currently in beta. This innovative tool allows users to create custom sound effects using simple text prompts or by guiding the timing and intensity with their own voice. Whether it’s a lion’s roar or ambient nature sounds, the feature supports a wide range of audio generation, including impact and atmospheric sounds, and enables layering with existing audio or user-uploaded video. Adobe emphasizes that, like other Firefly generative AI models, this feature is commercially safe, providing creators with confidence in their outputs. Generated sound effects can be seamlessly integrated within the Firefly web app and exported directly to Adobe Express.
Beyond audio, Firefly’s video capabilities have received substantial upgrades. The Firefly Video Model now boasts improved motion fidelity, resulting in more natural movements, smoother transitions, and lifelike accuracy in generated videos, particularly beneficial for depicting animal motion, human behavior, and atmospheric elements. New frame-level video editing controls have been introduced, including ‘Composition Reference’ for guiding framing based on visual references, ‘Keyframe Cropping’ for refining shots across sequences, and ‘Style Presets’ to apply consistent aesthetics such as claymation, anime, or line art. Users can now also create videos in vertical, horizontal, or square aspect ratios, providing greater flexibility for various platforms.
Adobe has also significantly expanded its partner AI model offerings within the Firefly ecosystem. New models from Pika (Pika 2.2), Luma AI (Ray 2), and Topaz Labs (Image and Video Upscalers) join existing integrations with Google’s Veo 3, Moonvalley’s Marey, Runway’s Gen-4, OpenAI, Ideogram, and Black Forest Labs. This broader selection empowers users to choose the best AI model for specific creative needs across image, video, and sound, all accessible with a single Adobe sign-in. Another notable beta feature is ‘Text To Avatar,’ which can transform scripts into avatar-led videos, with options for avatars, accents, and backgrounds drawn from a library.
To further streamline the creative process, an ‘Enhance Prompt’ feature has been added, which refines user prompts with language that Firefly can better understand, leading to more accurate results. Adobe reiterates its commitment to ethical AI development; content created in Adobe apps will not be used for model training, and all AI-generated content in Firefly is tagged with Content Credentials as part of Adobe’s Content Authenticity Initiative, disclosing the model used to create the asset. Since its launch, Firefly models have been utilized to generate over 26 billion assets globally, underscoring its widespread adoption and impact on the creative industry.