TLDR: Huawei Technologies has officially launched its CloudMatrix 384 AI computing system, also known as Atlas 900 A3 SuperPoD, at the World Artificial Intelligence Conference (WAIC) 2025. This new system is designed to directly compete with Nvidia’s most advanced AI offerings, boasting superior performance metrics in computing, memory capacity, and bandwidth.
Shanghai, China – Huawei Technologies has made a significant stride in the artificial intelligence landscape with the public debut of its CloudMatrix 384 AI computing system at the World Artificial Intelligence Conference (WAIC) 2025. The system, officially named Atlas 900 A3 SuperPoD, is positioned as a direct competitor to Nvidia's leading AI products, particularly the GB200 NVL72 rack-scale system.
Industry experts note that the CloudMatrix 384 system rivals Nvidia’s most advanced offerings, signaling Huawei’s aggressive push to capture a larger share of China’s burgeoning AI sector. The unveiling at WAIC, a prominent three-day event showcasing the latest AI innovations, drew considerable attention to Huawei’s booth.
Key specifications revealed by Huawei highlight the CloudMatrix 384's capabilities. The system is reported to deliver 300 PFLOPs of dense BF16 compute, which Huawei says is double the performance of Nvidia's GB200 NVL72. The company also claims a 3.6x increase in memory capacity and 2.1x greater bandwidth compared to its Nvidia counterpart.
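The system-level figure can be translated into a rough per-chip number with simple arithmetic. This is a back-of-envelope sketch assuming the headline compute is spread evenly across the pod's 384 NPUs; it is not an official per-chip specification:

```python
# Back-of-envelope estimate: per-NPU dense BF16 throughput, assuming the
# 300 PFLOPs system figure (per Huawei) divides evenly across 384 Ascend NPUs.
total_pflops = 300   # system-level dense BF16 compute, per Huawei
num_npus = 384       # Ascend NPUs in the Atlas 900 A3 SuperPoD

per_npu_tflops = total_pflops * 1000 / num_npus
print(f"~{per_npu_tflops:.0f} TFLOPs dense BF16 per NPU")  # ~781 TFLOPs
```

This illustrates the design trade-off industry analysts have noted: the pod reaches its aggregate figure through sheer chip count and interconnect scale rather than per-chip leadership.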
The system is built on the super-node Ascend platform, using high-speed bus interconnects to provide low-latency links between its 384 Ascend NPUs. Huawei emphasizes that the CloudMatrix 384 addresses common coordination problems between compute, storage, and other critical resources within a cluster, yielding a more organized and stable setup than traditional AI clusters. According to the company, this system-level engineering also reduces failure rates during large-scale training.
Huawei highlights three core advantages of its Ascend AI chip-based CloudMatrix 384: ultra-large bandwidth, ultra-low latency, and ultra-strong performance. These features are intended to give enterprises enhanced AI training capabilities and stable inference performance, ensuring long-term reliability.
The introduction of CloudMatrix 384 underscores Huawei’s commitment to advancing its AI technology amidst a competitive global market and ongoing efforts to establish self-sufficiency in critical technology sectors.