TLDR: Researchers at KAIST, in collaboration with HyperExcel, have developed a groundbreaking Neural Processing Unit (NPU) technology that enhances the inference performance of generative AI models, including ChatGPT, by more than 60% while significantly reducing power consumption. This innovation addresses critical memory bottlenecks and offers a compelling alternative to traditional GPUs for AI infrastructure.
SEOUL, South Korea – The Korea Advanced Institute of Science and Technology (KAIST) has announced a significant breakthrough in artificial intelligence semiconductor technology, unveiling a new Neural Processing Unit (NPU) that promises to revolutionize the efficiency of generative AI services like ChatGPT. Developed by a research team led by Professor Park Jong-se of the School of Computing, in collaboration with HyperExcel, a startup founded by Professor Kim Joo-young of the School of Electrical Engineering, the core technology has demonstrated an average performance improvement of over 60% for generative AI model inference.
Beyond the remarkable speed increase, the newly developed NPU also boasts a substantial reduction in power consumption, operating at just 44% of the energy required by the latest Graphics Processing Units (GPUs) for comparable tasks. This dual benefit of enhanced performance and reduced energy footprint positions the KAIST innovation as a potential game-changer in the rapidly evolving AI landscape.
The research team achieved these results by addressing two key challenges in AI inference: minimizing accuracy loss through lightweight processing techniques applied during the inference phase, and resolving the persistent memory bottlenecks that plague large-scale AI operations. Whereas GPUs frequently require multiple units to meet demanding memory bandwidth and capacity requirements, the new NPU technology enables equivalent AI infrastructure to be built with fewer units, offering a more streamlined and cost-effective solution.
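The announcement does not specify which lightweight techniques the team used, but low-bit quantization is one common way to shrink the memory footprint of inference while limiting accuracy loss. The sketch below is purely illustrative, not the KAIST/HyperExcel method: it quantizes a float tensor (a stand-in for a slice of a model's KV cache) to 4-bit integers with per-group scales, cutting storage roughly 4x versus 16-bit floats once the values are bit-packed. All function names here are hypothetical.

```python
import numpy as np

def quantize_int4(x, group_size=64):
    """Quantize a 1-D float array to 4-bit integers with one scale per group.

    Stored here as int8 for clarity; a real kernel would pack two 4-bit
    values per byte, giving ~4x memory reduction over FP16 plus scales.
    """
    x = x.reshape(-1, group_size)
    scale = np.abs(x).max(axis=1, keepdims=True) / 7.0  # int4 range is -8..7
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    """Reconstruct approximate float values from the quantized representation."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
kv = rng.standard_normal(4096).astype(np.float32)  # stand-in for KV-cache data

q, scale = quantize_int4(kv)
approx = dequantize_int4(q, scale)
error = np.abs(kv - approx).max()  # per-element rounding error stays small
```

The per-group scale is the design choice doing the work: scaling each small block independently keeps the rounding error proportional to that block's own magnitude, which is how low-bit schemes hold accuracy close to the full-precision baseline.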
Professor Park Jong-se elaborated on the technical prowess behind the innovation, stating, “By combining lightweight techniques that reduce memory requirements while maintaining inference accuracy with optimized hardware design, we have implemented an NPU that improves performance by an average of over 60% compared to the latest GPUs.” He further emphasized the broad implications of their work, adding, “It is expected to play a key role not only in data centers but also in the AX (AI Transformation) environment represented by ‘Agentic AI’, which is an active executable AI.”
The findings were first presented at the prestigious 2025 International Symposium on Computer Architecture (ISCA), held in Tokyo, Japan, on June 21. The development comes at a time when NPUs, specialized semiconductors designed for AI computations, are increasingly viewed as a viable alternative to GPUs in a market currently dominated by NVIDIA. Domestic South Korean companies such as Rebellions and Furiosa AI are actively pursuing technology localization in this critical sector, further highlighting the strategic importance of KAIST's latest contribution.