TLDR: Runway ML has officially launched Aleph, a groundbreaking in-context video model that promises to revolutionize video editing and content creation. With a single text prompt, users can perform complex tasks such as altering environments, transforming objects, generating new camera angles, and applying diverse visual styles, marking a significant leap forward in generative AI capabilities.
Runway ML, a leader in artificial intelligence for creative applications, has announced the general availability of its highly anticipated in-context video model, Aleph. Released to the public and via its new API in late July and early August 2025, Aleph is poised to redefine the landscape of video production by enabling unprecedented levels of control and creativity through natural language prompts.
Aleph stands out as a state-of-the-art foundational model for multi-task visual generation. Its core innovation lies in its ability to interpret and execute complex video manipulations from a single text command. This means creators can now effortlessly add, remove, or transform objects within existing footage, generate entirely new camera angles from a static scene, and modify visual styles and lighting with remarkable precision. For instance, a user could simply type a prompt to ‘add snow to a summer scene’ or ‘change the building style to sci-fi,’ and Aleph would intelligently apply these changes, even adjusting environmental factors like lighting to ensure realism.
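Since Aleph is also exposed through Runway's API, a prompt-driven edit like the one above can be scripted. The snippet below is a minimal, hypothetical sketch assuming the official runwayml Python SDK follows Runway's asynchronous task pattern; the model identifier, endpoint, and parameter names shown here are illustrative assumptions, not details confirmed by the announcement.

```python
# Illustrative sketch only: the SDK surface, model id, and parameter names are
# assumptions modelled on Runway's task-based API, not taken from the article.
import time

from runwayml import RunwayML  # pip install runwayml

client = RunwayML()  # expects a RUNWAYML_API_SECRET environment variable

# Submit an Aleph-style edit: a source clip plus a natural-language instruction.
task = client.video_to_video.create(
    model="gen4_aleph",                      # assumed model identifier
    video_uri="https://example.com/summer_scene.mp4",  # placeholder input clip
    prompt_text="add snow to the scene and shift the lighting to dusk",
    ratio="1280:720",
)

# Generation is asynchronous: poll the task until it resolves.
while True:
    status = client.tasks.retrieve(task.id)
    if status.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

if status.status == "SUCCEEDED":
    print("Edited video:", status.output)    # typically a list of output URLs
else:
    print("Task failed:", status.failure)
```

In practice the workflow mirrors the in-app experience: supply existing footage, describe the change in plain language, and retrieve the re-rendered clip once the task completes.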
The model’s advanced capabilities extend to empowering sophisticated visual effects (VFX) pipelines. Users can alter environments, shift the time of day from dawn to dusk, or change seasons within a video with ease. Demonstrations have shown Aleph transforming mundane objects, such as packing boxes, into ice blocks or hay bales, all while maintaining ultra-realistic textures and interactions within the scene. Crucially, Aleph doesn’t merely overlay changes; it intelligently integrates them, considering how new elements or environmental shifts would naturally affect the entire composition and subjects within the video.
Industry observers are hailing Aleph as a potential ‘game changer,’ suggesting it could render traditional, time-consuming video editing timelines obsolete for many tasks. Nico, a representative from Runway ML, highlighted in a recent update that Aleph brings ‘a lot of novel features that haven’t been possible like this until this point,’ emphasizing its capacity to fundamentally change how creatives interact with video technology. The company expressed excitement about rolling out Aleph to all users, anticipating its transformative impact on creative workflows.
Furthermore, Aleph supports iterative scene building, even integrating with mobile devices to allow users to capture real-world elements with their camera and incorporate them into their AI-generated scenes. This accessibility underscores Runway ML’s commitment to democratizing advanced AI tools for a broader audience of artists, filmmakers, and content creators.