TLDR: This research paper presents the first systematic study on applying post-training quantization (PTQ) to diffusion large language models (dLLMs). It identifies activation outliers as a key challenge for low-bit quantization in dLLMs. The study evaluates various PTQ methods, finding that 4-bit is optimal for weight-only quantization and 8-bit for weight-activation. It also highlights that GPTQ and rotation-based methods (like DuQuant) are most effective, and that math/code tasks are more sensitive to quantization, while instruction-tuned dLLMs show greater robustness.
Large Language Models (LLMs) have transformed how we interact with technology, excelling in tasks from writing to coding. Recently, a new type of LLM, called Diffusion Large Language Models (dLLMs), has emerged as a promising alternative to the more common auto-regressive LLMs. Instead of producing text one token at a time from left to right, dLLMs use full (bidirectional) attention and iterative denoising to generate text, which offers finer control over the output. However, their immense size and high computational demands make it challenging to deploy them on everyday devices like smartphones or smart home gadgets.
This is where a technique called Post-Training Quantization (PTQ) comes into play. PTQ compresses a trained neural network by reducing the precision of its numerical representations (for example, storing weights as 4- or 8-bit integers instead of 16-bit floating-point values), making the model smaller and faster without retraining it. While PTQ has been widely adopted for traditional LLMs, its application to dLLMs has remained largely unexplored until now.
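To make that concrete, here is a minimal sketch (in NumPy, not any specific method from the paper) of the simplest form of PTQ: round-to-nearest quantization of a weight tensor with a single per-tensor scale. The numbers and tensor sizes are illustrative only.

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """Round-to-nearest symmetric quantization with one per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1                   # e.g. 7 for 4-bit, 127 for 8-bit
    scale = np.abs(w).max() / qmax                 # map the largest weight onto qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                               # dequantize so we can measure the error

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=4096).astype(np.float32)
for bits in (8, 4):
    err = np.abs(w - quantize_symmetric(w, bits)).mean()
    print(f"{bits}-bit mean abs error: {err:.6f}")
```

Real PTQ methods such as GPTQ and AWQ refine this basic recipe (per-channel scales, error compensation, activation-aware scaling), but the core idea of mapping floats onto a small integer grid is the same.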
A groundbreaking study, titled “Quantization Meets dLLMs: A Systematic Study of Post-training Quantization for Diffusion LLMs”, presents the first comprehensive investigation into quantizing these diffusion-based language models. The researchers, including Haokun Lin, Haobo Xu, and their colleagues, identified a key challenge: the presence of “activation outliers” in dLLMs. These are unusually large activation values that can dominate the model’s dynamic range, making it difficult to maintain precision when reducing the bit-width (the number of bits used to represent data).
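A toy example (with made-up numbers, not data from the paper) shows why such outliers hurt: with a single per-tensor scale, one extreme activation stretches the quantization grid so much that every other value gets rounded coarsely.

```python
import numpy as np

def int8_error(x: np.ndarray) -> float:
    """Mean absolute error of round-to-nearest int8 quantization with a per-tensor scale."""
    scale = np.abs(x).max() / 127.0
    return float(np.abs(x - np.round(x / scale) * scale).mean())

rng = np.random.default_rng(0)
acts = rng.normal(0.0, 1.0, size=4096)        # well-behaved activations
print("error without outlier:", round(int8_error(acts), 4))

acts_out = acts.copy()
acts_out[0] = 100.0                           # one outlier stretches the whole dynamic range
print("error with outlier:   ", round(int8_error(acts_out), 4))
```

The single outlier makes the quantization step tens of times larger, so the error on all the ordinary activations grows accordingly; this is exactly the low-bit precision problem the study attributes to dLLM activations.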
The study systematically evaluated state-of-the-art PTQ methods across various dLLM types, task categories, and different levels of quantization (bit-widths). Their analysis provides crucial insights into how dLLMs behave when compressed.
Key Findings from the Study:
Bit-Width Preferences: For methods that only quantize the model’s weights (weight-only quantization), 4-bit precision was found to be the most effective, offering a good balance between compression and performance. However, when both weights and activations are quantized (weight-activation quantization), 8-bit precision is recommended as it results in almost no loss of performance. Dropping to 4-bit for weight-activation quantization proved to be significantly more challenging, often leading to substantial performance drops.
Optimal Quantization Methods: The research highlighted which PTQ methods performed best. For weight-only quantization, GPTQ outperformed AWQ across most tasks. For weight-activation quantization, rotation-based methods such as DuQuant and QuaRot showed clear advantages over SmoothQuant, with DuQuant generally the most effective choice. These rotation-based methods are better at handling the activation outliers found in dLLMs (a numerical sketch of the rotation idea follows the findings below).
Task Sensitivity: While most PTQ methods performed well on general knowledge tasks like question answering, the models showed noticeable degradation on more complex tasks such as mathematical reasoning and code generation. This suggests that these demanding tasks require higher precision, and even small errors introduced by quantization can accumulate and significantly impact accuracy.
Model Type Robustness: An interesting observation was that instruction-tuned dLLMs (models specifically trained to follow instructions) exhibited greater resilience to quantization compared to their base model counterparts. This indicates that fine-tuning might make models more robust to the effects of compression.
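The sketch below illustrates the intuition behind the rotation-based methods mentioned above, using a plain random orthogonal matrix as a stand-in for the structured Hadamard or learned block rotations that QuaRot and DuQuant actually use: rotating the activations (and folding the inverse rotation into the weights) spreads the outlier energy across many channels, so the quantization grid no longer has to cover a single extreme value.

```python
import numpy as np

rng = np.random.default_rng(0)

def int8_error(x: np.ndarray) -> float:
    """Mean absolute error of round-to-nearest int8 quantization with a per-tensor scale."""
    scale = np.abs(x).max() / 127.0
    return float(np.abs(x - np.round(x / scale) * scale).mean())

# An activation vector with a few large outlier channels, as reported for dLLMs.
x = rng.normal(0.0, 1.0, size=1024)
x[[3, 97, 511]] = [80.0, -65.0, 50.0]

# A random orthogonal rotation (QR of a Gaussian matrix). QuaRot and DuQuant use
# structured rotations (Hadamard / learned block rotations); this is only an illustration.
Q, _ = np.linalg.qr(rng.normal(size=(1024, 1024)))

print("int8 error, raw activations:    ", round(int8_error(x), 4))
print("int8 error, rotated activations:", round(int8_error(Q @ x), 4))

# The rotation is exactly undone by folding Q.T into the weights, so the layer
# output W @ x is preserved up to floating-point noise.
W = rng.normal(size=(16, 1024))
print("max output drift:", np.abs(W @ x - (W @ Q.T) @ (Q @ x)).max())
```

Because the rotated activations no longer contain a handful of dominating values, the per-tensor scale shrinks and the quantization error drops, while the layer's output stays mathematically unchanged.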
The findings from this study lay a crucial foundation for future research into making dLLMs more efficient and deployable on resource-constrained devices. By characterizing the challenge posed by activation outliers and identifying the most effective quantization strategies, the authors clear the path toward more practical and widespread applications of these powerful language models.


