TLDR: MEGS 2 is a new framework that significantly reduces the memory footprint of 3D Gaussian Splatting (3DGS) for real-time rendering on various devices, including mobile. It achieves this by replacing memory-intensive Spherical Harmonics with lightweight Spherical Gaussians for color representation and introducing a unified soft pruning method that simultaneously optimizes both the number of 3D Gaussians and the parameters per Gaussian. This results in substantial VRAM compression while maintaining high rendering quality, making 3DGS more accessible for edge devices.
3D Gaussian Splatting (3DGS) has rapidly become a leading technique for creating realistic 3D scenes from multiple images, offering impressive visual quality and real-time performance. However, its widespread adoption, especially on devices with limited resources like smartphones and laptops, has been hindered by its significant memory demands. This challenge is precisely what a new research paper, titled “MEGS 2: MEMORY-EFFICIENT GAUSSIAN SPLATTING VIA SPHERICAL GAUSSIANS AND UNIFIED PRUNING,” aims to solve.
Authored by Jiarui Chen, Yikeng Chen, Yingshuang Zou, Ye Huang, Peng Wang, Yuan Liu, Yujing Sun, and Wenping Wang, MEGS 2 introduces a novel framework designed to make high-quality 3DGS rendering accessible on a broader range of devices, from powerful desktops to mobile platforms.
The Memory Bottleneck in 3DGS
While many existing methods focus on compressing the storage size of 3DGS data, the critical issue for real-time performance on edge devices is rendering memory (VRAM). Current compression techniques often require fully decoding the compressed data before rendering, which can still lead to a large memory footprint, sometimes even larger than uncompressed 3DGS. Other methods that prune (remove) unnecessary 3D Gaussians help, but they often reach a limit where further reduction compromises visual quality.
The MEGS 2 team observed that the total memory consumption for 3DGS rendering is tied to two main factors: the overall number of 3D Gaussian primitives and the amount of data (parameters) stored for each primitive. To truly address the memory bottleneck, both of these factors need to be optimized simultaneously.
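The arithmetic behind this observation is easy to sketch. The estimate below assumes the standard vanilla 3DGS layout (3 position + 3 scale + 4 rotation + 1 opacity + 48 degree-3 Spherical Harmonics coefficients per Gaussian, stored as float32); the function name and scene size are illustrative, not from the paper.

```python
# Back-of-the-envelope VRAM estimate for vanilla 3DGS (float32 everywhere).
# Per-Gaussian parameters: 3 position + 3 scale + 4 rotation + 1 opacity
# + 48 SH color coefficients (degree 3: 3 channels x 16 basis functions) = 59.
FLOATS_PER_GAUSSIAN = 3 + 3 + 4 + 1 + 3 * 16  # 59
BYTES_PER_FLOAT = 4

def static_vram_mb(num_gaussians: int,
                   floats_per_gaussian: int = FLOATS_PER_GAUSSIAN) -> float:
    """Memory needed just to hold the scene's Gaussian parameters, in MB."""
    return num_gaussians * floats_per_gaussian * BYTES_PER_FLOAT / 2**20

# A large outdoor scene can easily reach millions of Gaussians:
print(f"{static_vram_mb(3_000_000):.0f} MB")  # ~675 MB of parameters alone
```

Shrinking either factor alone hits diminishing returns, which is why MEGS 2 targets both the primitive count and the per-primitive parameter count at once.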
Introducing Spherical Gaussians for Color Representation
One of MEGS 2’s key innovations is replacing the memory-intensive Spherical Harmonics (SH) with lightweight, arbitrarily-oriented Spherical Gaussians (SG) for representing the view-dependent color of each 3D Gaussian. Spherical Harmonics, while effective for low-frequency lighting, require many coefficients to capture localized, high-frequency details like sharp reflections, making them difficult to compress efficiently.
In contrast, Spherical Gaussians are more compact and excel at modeling view-dependent signals with fewer parameters. Their complexity can be flexibly controlled by adjusting the number of “lobes” – essentially, directional components that define how light reflects. This inherent locality and sparsity make SGs a much more suitable choice for memory-efficient compression without sacrificing visual quality, especially for highlights.
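A minimal sketch of how an SG color model evaluates, assuming the common formulation where each lobe contributes `exp(lambda * (dot(v, mu) - 1))` times an RGB amplitude on top of a diffuse base color (parameter names here are illustrative, not the paper's):

```python
import numpy as np

def sg_color(view_dir, diffuse, lobe_axes, lobe_sharpness, lobe_amplitudes):
    """View-dependent color = diffuse base + K Spherical Gaussian lobes.

    view_dir:        (3,)    unit view direction
    diffuse:         (3,)    base RGB color
    lobe_axes:       (K, 3)  unit lobe directions (mu)
    lobe_sharpness:  (K,)    concentrations (lambda); larger = sharper highlight
    lobe_amplitudes: (K, 3)  per-lobe RGB amplitudes
    """
    # Each lobe peaks when the view direction aligns with its axis and
    # falls off exponentially away from it -- a localized, sparse signal.
    cos_angle = lobe_axes @ view_dir                      # (K,)
    weights = np.exp(lobe_sharpness * (cos_angle - 1.0))  # (K,), in (0, 1]
    return diffuse + weights @ lobe_amplitudes            # (3,)
```

Because each lobe is only a direction, a sharpness, and an amplitude, a handful of lobes can stand in for the dozens of SH coefficients needed to capture a sharp highlight.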
A Unified Soft Pruning Framework
Building on the efficiency of Spherical Gaussians, MEGS 2 proposes a novel unified soft pruning framework. Traditionally, reducing the number of 3D Gaussians and reducing the parameters per Gaussian were treated as separate problems. MEGS 2 combines these into a single, memory-constrained optimization problem.
This framework dynamically prunes redundant lobes for each primitive (reducing parameters per primitive) and also prunes entire Gaussian primitives with near-zero opacity (reducing the total primitive count). By optimizing both factors together under a single memory budget, MEGS 2 reaches a better quality-memory trade-off than sequential pruning methods, which tend to settle on sub-optimal solutions.
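The coupling can be illustrated with a simple sparsity-style regularizer added to the rendering loss. This is an illustrative stand-in, not the paper's exact memory-constrained formulation: an L1 penalty on opacities pushes whole primitives toward removal, while an L1 penalty on lobe amplitudes pushes individual lobes toward removal, and both terms share one objective.

```python
import numpy as np

def memory_regularizer(opacities, lobe_amplitudes, w_prim=1.0, w_lobe=1.0):
    """Soft-pruning penalty coupling both memory factors (illustrative).

    opacities:       (N,)      per-primitive opacities in [0, 1]
    lobe_amplitudes: (N, K, 3) per-lobe SG color amplitudes

    Driving an opacity to zero removes a whole primitive (fewer
    primitives); driving a lobe's amplitude to zero removes that lobe
    (fewer parameters per primitive). Optimizing both in one objective
    is the "unified" part of the framework.
    """
    primitive_term = np.abs(opacities).mean()
    lobe_term = np.abs(lobe_amplitudes).sum(axis=-1).mean()
    return w_prim * primitive_term + w_lobe * lobe_term

# In training this would be added to the photometric loss, e.g.:
# total_loss = render_loss + lambda_mem * memory_regularizer(opacities, amps)
```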
After the optimization, a post-processing step removes the primitives and lobes that have been effectively minimized. To ensure rendering quality is maintained, a clever color compensation strategy is introduced, which recovers the energy of the removed lobes by adjusting the diffuse color of the parent primitive.
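One energy-preserving way to do such compensation is to fold each removed lobe's sphere-averaged contribution into the diffuse color, using the closed-form SG mean `(1 - exp(-2*lambda)) / (2*lambda)`. The sketch below assumes that choice and an amplitude-norm threshold for pruning; the paper's exact strategy and thresholds may differ.

```python
import numpy as np

def prune_lobes_with_compensation(diffuse, axes, sharpness, amplitudes,
                                  amp_threshold=1e-3):
    """Remove near-zero SG lobes and fold their mean energy into diffuse.

    The average of exp(lam * (cos t - 1)) over the sphere is
    (1 - exp(-2 * lam)) / (2 * lam), so adding amplitude * that average
    to the diffuse color preserves the lobe's mean contribution over
    all view directions. (Illustrative; not necessarily the paper's
    exact compensation rule.)
    """
    keep = np.linalg.norm(amplitudes, axis=-1) > amp_threshold
    removed = ~keep
    lam = sharpness[removed]
    avg = (1.0 - np.exp(-2.0 * lam)) / (2.0 * lam)            # (R,)
    diffuse = diffuse + (avg[:, None] * amplitudes[removed]).sum(axis=0)
    return diffuse, axes[keep], sharpness[keep], amplitudes[keep]
```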
Impressive Results and Future Implications
The experimental results of MEGS 2 are compelling. Compared to vanilla 3DGS, MEGS 2 achieves an astounding 8x reduction in static VRAM (memory needed to load the scene) and nearly a 6x reduction in rendering VRAM (peak memory during rendering) across various datasets. Even against GaussianSpa, a state-of-the-art lightweight 3DGS method, MEGS 2 still manages a 2x static VRAM compression and a 40% rendering VRAM reduction, all while maintaining or even improving rendering quality.
The framework demonstrates interactive frame rates on a wide range of devices, from desktop GPUs to mobile chipsets like the MediaTek Dimensity 9400+ and Qualcomm Snapdragon 865, where traditional 3DGS often struggles or fails to run. This significant leap in memory efficiency opens up new possibilities for real-world applications of 3DGS on edge devices, such as mobile 3D scanning, virtual try-on experiences, and real-time rendering in video games.
MEGS 2 sets a new standard for 3DGS compression by prioritizing rendering VRAM efficiency, a crucial step towards enabling high-quality neural rendering on the edge. For more technical details, refer to the full research paper.