TLDR: The rapid expansion of generative AI is raising concerns about its environmental footprint, particularly regarding energy consumption. New approaches, such as chiplet-based architectures and model distillation for efficient inference, are emerging to make AI more sustainable and accessible. Companies like d-Matrix are at the forefront of developing solutions that reduce the energy, cost, and time required for AI operations, paving the way for broader AI adoption.
The burgeoning field of generative AI, while offering transformative potential, is facing increasing scrutiny over its sustainability. Experts are highlighting the significant energy consumption associated with training and deploying large AI models, raising questions about the long-term environmental viability of the current AI boom. This challenge is leading to a focus on ‘inference’ – the process of using a trained AI model to make predictions or generate outputs – as a key area for efficiency improvements.
One of the primary concerns is the sheer scale of current AI models, which often require immense computational resources for training, making them costly and energy-intensive. This not only limits accessibility for many companies but also contributes to a substantial carbon footprint. For instance, as of January 2025, the training of Mistral Large 2 plus its first 18 months of usage had generated an estimated 20.4 kilotons of CO₂ equivalent and consumed 281,000 cubic meters of water. Even the marginal impact of a single 400-token response from an AI assistant like Le Chat is measurable: 1.14 gCO₂e and 45 mL of water.
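To put those figures in perspective, a back-of-the-envelope calculation, using only the numbers quoted above, shows how many individual chat responses it would take to match the reported training-and-usage footprint:

```python
# Illustrative arithmetic only, based on the figures reported above.
training_footprint_kg = 20.4e3 * 1000   # 20.4 ktCO2e (training + 18 months of usage) -> kg CO2e
per_response_kg = 1.14e-3               # 1.14 gCO2e per 400-token Le Chat response -> kg CO2e

# How many 400-token responses carry the same footprint as training plus 18 months of usage?
equivalent_responses = training_footprint_kg / per_response_kg
print(f"{equivalent_responses:,.0f} responses")   # roughly 17.9 billion
```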
In response to these challenges, companies like d-Matrix are developing innovative solutions. Sid Sheth, CEO and co-founder of d-Matrix, emphasizes the need for a more sustainable approach to AI. Their solution involves a chiplet-based compute architecture specifically designed for AI inference. This architecture aims to run generative AI faster, cheaper, and with significantly less energy. The core idea is to move away from solely relying on massive models and instead focus on distilling larger models into smaller, yet still highly capable, versions. This allows for efficient inference without a substantial loss in performance.
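As a rough illustration of what model distillation looks like in practice (a minimal sketch, not d-Matrix's actual pipeline; the function and parameter names are placeholders), a smaller "student" model can be trained to match the softened output distribution of a larger "teacher" while still fitting the original labels:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target loss (match the teacher's distribution) with the
    ordinary hard-label loss. T softens the distributions; alpha balances the two."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * (T * T)  # standard temperature scaling so gradient magnitudes stay comparable
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Training-loop sketch: the frozen teacher provides targets, only the student updates.
# for batch, labels in loader:
#     with torch.no_grad():
#         teacher_logits = teacher(batch)
#     student_logits = student(batch)
#     loss = distillation_loss(student_logits, teacher_logits, labels)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```

The distilled student is then the model actually deployed for inference, which is where the energy and cost savings show up.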
Sheth highlights that the future of computing in the ‘age of inference’ will be heterogeneous, meaning no single computing solution, including GPUs, will be optimal for all tasks. Instead, specialized solutions will emerge to optimize for cost, latency, and throughput. d-Matrix’s approach, for example, focuses on ‘low latency batched inference,’ which is particularly relevant for enterprise applications that don’t require the massive simultaneous user loads seen in consumer applications like ChatGPT.
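At a very high level, "batched inference" can be sketched as follows (a simplified illustration under assumed names, not d-Matrix's implementation): a serving loop groups requests that arrive within a short window and runs them through the model together, trading a few milliseconds of per-request latency for much better hardware utilization.

```python
import queue
import time

MAX_BATCH = 8        # illustrative batch size
MAX_WAIT_S = 0.01    # wait at most 10 ms to fill a batch, keeping latency low

request_queue = queue.Queue()   # producers put (request_id, prompt) tuples here

def serving_loop(run_model, deliver):
    """Group requests into small batches and run one forward pass per batch.
    `run_model`: assumed callable mapping a list of prompts to a list of outputs.
    `deliver`:   assumed callable returning a result to the original caller."""
    while True:
        batch = [request_queue.get()]              # block until a request arrives
        deadline = time.monotonic() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(request_queue.get(timeout=remaining))
            except queue.Empty:
                break
        ids, prompts = zip(*batch)
        outputs = run_model(list(prompts))         # single batched forward pass
        for req_id, out in zip(ids, outputs):
            deliver(req_id, out)
```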
The goal is to make AI more accessible and affordable, allowing a wider range of enterprises and individuals to benefit from its capabilities. By making inference more efficient, the industry can overcome the current barriers of high cost and energy consumption, enabling AI to truly proliferate and become a ubiquitous tool. This shift towards efficient inference is seen as crucial for the widespread adoption and long-term sustainability of artificial intelligence.
Beyond hardware, research is also advancing in making AI more robust to real-world data challenges. Google DeepMind researchers, for example, introduced the LSM-2 (Large Sensor Model 2) framework with Adaptive and Inherited Masking (AIM) to enable direct learning from incomplete wearable data. This innovation allows AI models to dynamically adjust to missing data during inference, producing robust representations even with partial or systematic data gaps. This is vital for applications like health monitoring, where sensor data is often incomplete due to various factors.
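The general idea of handling missing data with explicit masks can be sketched as follows (a toy illustration, not the published AIM implementation): the model receives a mask marking which sensor readings were actually recorded, and pooling is restricted to observed timesteps so that gaps never dilute the learned representation.

```python
import numpy as np

def masked_embedding(readings, observed_mask, encode):
    """Toy illustration of masking out missing sensor data before pooling.
    `readings`: (T, C) array of sensor values, with gaps filled by zeros.
    `observed_mask`: (T,) boolean array, True where a timestep was recorded.
    `encode`: assumed per-timestep feature extractor mapping (T, C) -> (T, D)."""
    features = encode(readings)                      # (T, D) per-timestep features
    mask = observed_mask[:, None].astype(features.dtype)
    # Average only over observed timesteps, so missing data is ignored, not imputed.
    denom = np.maximum(mask.sum(axis=0), 1.0)
    return (features * mask).sum(axis=0) / denom     # (D,) representation

# Example: 100 timesteps of 3-channel data with a 30-step gap (e.g., device off-wrist).
readings = np.random.randn(100, 3)
observed = np.ones(100, dtype=bool)
observed[40:70] = False                              # systematic gap in the recording
embedding = masked_embedding(readings, observed, encode=lambda x: x)  # identity encoder
```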


