
Liquid AI Unveils LFM2: Open-Source Small Foundation Models Redefining Edge AI Performance

TLDR: Liquid AI, an MIT spin-off, has released its next-generation Liquid Foundation Models (LFM2) as open-source on Hugging Face. These small foundation models are touted as the world’s fastest and best-performing for edge devices, offering significant advancements in speed, energy efficiency, and quality for on-device AI applications.

CAMBRIDGE, Mass. – Liquid AI, a pioneering company spun out of MIT, announced on July 10, 2025, the public release of its groundbreaking next-generation Liquid Foundation Models (LFM2). These small foundation models are now available open-source on Hugging Face, marking a significant stride in making high-performance AI accessible for edge computing and on-device deployment.

The LFM2 series is engineered to set new industry benchmarks for speed, energy efficiency, and overall quality within the edge model class. Unlike conventional transformer-based architectures, LFM2 models are built upon a ‘first-principles approach,’ utilizing structured, adaptive operators. This innovative design facilitates more efficient training, faster inference, and superior generalization capabilities, particularly in scenarios involving long contexts or resource-constrained environments.

According to Liquid AI, the LFM2 models deliver a remarkable 2x faster decode and prefill performance compared to models like Qwen3 on CPU. Furthermore, they demonstrate significantly better performance across various size classes in critical areas such as instruction-following and function calling, which are essential for building reliable AI agents. The company also highlights a 300% improvement in training efficiency over previous Liquid Foundation Model iterations, positioning LFM2 as a highly cost-effective solution for developing capable, general-purpose AI systems.

“At Liquid, we build best-in-class foundation models with quality, latency, and memory efficiency in mind,” stated Ramin Hasani, co-founder and CEO of Liquid AI. “The LFM2 series of models is designed, developed, and optimized for on-device deployment on any processor, truly unlocking the applications of generative and agentic AI on the edge.”

The initial release includes LFM2-350M (roughly 350 million parameters), LFM2-700M (roughly 700 million parameters), and LFM2-1.2B (roughly 1.2 billion parameters). These models are not only available for download on Hugging Face but can also be tested through the Liquid Playground. Liquid AI plans to integrate these models into its Edge AI platform and an iOS-native consumer application in the coming days.
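Since the checkpoints are published on Hugging Face, they can in principle be loaded with the standard `transformers` text-generation pipeline. The sketch below is illustrative only: the repository id `LiquidAI/LFM2-350M` and the helper name are assumptions based on the announcement, so check the Liquid AI organization page on Hugging Face for the exact model names.

```python
def make_lfm2_generator(model_id: str = "LiquidAI/LFM2-350M"):
    """Return a Hugging Face text-generation pipeline for an LFM2 checkpoint.

    Assumes the `transformers` library is installed; calling this will
    download the model weights on first use. The repo id is an assumption
    based on the announced model names.
    """
    # Lazy import so this module can be inspected without transformers installed.
    from transformers import pipeline

    return pipeline("text-generation", model=model_id)


if __name__ == "__main__":
    # Hypothetical usage: generate a short continuation on-device.
    generator = make_lfm2_generator()
    result = generator("Edge AI matters because", max_new_tokens=32)
    print(result[0]["generated_text"])
```

Because LFM2 targets CPU-bound edge deployment, no GPU-specific arguments are assumed here; the default pipeline settings run on CPU.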


Liquid AI emphasizes that LFM2 is an ideal choice for local and edge use cases, catering to a growing market where enterprises are shifting from cloud-based large language models (LLMs) to more cost-efficient, fast, private, and on-premise intelligence solutions. The models are released under an open license based on Apache 2.0, allowing free use for academic and research purposes. Commercial use is permitted for smaller companies (under $10 million in revenue), while larger enterprises are required to obtain a commercial license directly from Liquid AI.
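The revenue-based licensing terms above can be expressed as a simple check. The helper below is purely illustrative of the announced tiers, not an official tool, and the exact threshold semantics (e.g., how revenue is measured) are an assumption.

```python
def lfm2_commercial_license_required(annual_revenue_usd: float) -> bool:
    """Illustrative check of the announced LFM2 licensing tiers.

    Per the announcement: companies under $10 million in revenue may use
    LFM2 commercially under the open license, while larger enterprises
    must obtain a commercial license from Liquid AI. The threshold
    comparison used here (>= at exactly $10M) is an assumption.
    """
    return annual_revenue_usd >= 10_000_000


# Example: a startup with $2M in revenue falls under the open license terms,
# while a $50M enterprise would need a commercial license.
print(lfm2_commercial_license_required(2_000_000))   # False
print(lfm2_commercial_license_required(50_000_000))  # True
```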

Ananya Rao (https://blogs.edgentiq.com)
Ananya Rao is a tech journalist with a passion for dissecting the fast-moving world of Generative AI. With a background in computer science and a sharp editorial eye, she connects the dots between policy, innovation, and business. Ananya excels in real-time reporting and specializes in uncovering how startups and enterprises in India are navigating the GenAI boom. She brings urgency and clarity to every breaking news piece she writes. You can reach out to her at: [email protected]
