TLDR: Veteran GPU architect Raja Koduri has launched Oxmiq Labs, a new venture that secured $20 million in funding to develop and license RISC-V-based GPU intellectual property for the AI market. The company aims to challenge the industry’s reliance on closed ecosystems by offering customizable hardware blueprints and a software compatibility layer, OXPython, to run CUDA applications on non-NVIDIA hardware. This initiative signals a strategic shift towards a more open, flexible, and heterogeneous future for AI compute infrastructure.
Raja Koduri, the veteran GPU architect with tenures at Apple, AMD, and Intel, has launched a new venture, Oxmiq Labs, focused on developing and licensing RISC-V-based GPU intellectual property. While on the surface this is a new hardware play, for the AI/ML professionals building the future of intelligence, the launch of Oxmiq Labs is the clearest signal yet that the AI hardware market is fundamentally shifting. Backed by $20 million in seed funding from investors like MediaTek, this isn’t about one company’s success; it’s a strategic inflection point compelling every AI architect, data scientist, and ML engineer to re-evaluate their long-term reliance on closed compute ecosystems.
For Architects: This is About De-Risking Your Entire Compute Strategy
For years, the unspoken risk in every AI roadmap has been the deep, systemic dependency on a single vendor’s architecture. Oxmiq’s core strategy directly confronts this vendor lock-in. By adopting an asset-light, IP-licensing model akin to Arm’s, the company is not building chips to sell; it is licensing blueprints that others can use to create their own custom silicon. This model, centered on the open-standard RISC-V instruction set, enables a multi-vendor ecosystem in which organizations can diversify suppliers and reduce the strategic risk of being beholden to one company’s pricing, supply chain, and roadmap. For the AI architect, this isn’t just a new component choice; it’s a new paradigm for designing resilient, future-proofed infrastructure.
For Developers & Engineers: A Pragmatic Bridge Away from the Walled Garden
The single greatest barrier to migrating AI workloads has always been software. Oxmiq’s most crucial innovation may not be its hardware IP, but its software-first approach. The company is developing OXPython, a compatibility layer designed to allow Python-based CUDA applications to run on non-NVIDIA hardware without modification or recompilation. This is a direct, pragmatic solution to the CUDA moat. Initially launching on Tenstorrent’s AI accelerators, this hardware-agnostic software stack signals a commitment to developer portability over hardware exclusivity. For ML, NLP, and Deep Learning engineers, this means the skills and codebases you have built for years could soon be portable across a new, competitive landscape of hardware, drastically reducing the friction and cost of exploring more efficient or specialized accelerators.
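Oxmiq has not published OXPython’s API, so the sketch below is purely illustrative: it shows the kind of standard, CUDA-targeting PyTorch code that a compatibility layer of this sort would need to run unmodified, with the CUDA calls redirected to a non-NVIDIA back end underneath.

import torch

# Illustrative only: ordinary PyTorch code written against the CUDA device API.
# The premise of a layer like OXPython is that code in this shape runs
# unchanged, with the underlying CUDA calls intercepted and mapped onto
# other accelerators.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 256).to(device)
batch = torch.randn(32, 1024, device=device)

with torch.no_grad():
    output = model(batch)

print(output.shape, output.device)

The significance is less the snippet itself than what it implies: if this code truly needs no edits or recompilation to run elsewhere, the switching cost that protects the incumbent ecosystem drops sharply.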
Dissecting the Technology: What OxCore and RISC-V Truly Mean for AI Workloads
At the heart of the hardware offering is OxCore, a modular GPU IP core built on RISC-V that integrates scalar, vector, and tensor compute engines. The choice of RISC-V is critical. As an open ISA, it allows for domain-specific customization that is impossible with proprietary architectures. This means licensees can build highly specialized chips—from low-power edge devices to massive data center SoCs—that are optimized for specific multimodal or AI workloads, rather than relying on a one-size-fits-all GPU. Combined with the OxQuilt chiplet platform, which allows for a mix-and-match approach to compute, memory, and interconnects, this creates a flexible foundation for building the next generation of AI accelerators. This moves the industry away from monolithic chip design and towards a more agile, bespoke future.
A Forward-Looking Takeaway: Prepare for a Heterogeneous Future
The emergence of Oxmiq Labs is more than just another competitor entering the ring; it is a catalyst for a market-wide strategic reassessment. The key takeaway for every AI/ML professional is that the era of unquestioned allegiance to a single hardware stack is ending. The confluence of a legendary architect’s vision, a capital-efficient IP licensing model, and the strategic imperative of an open, CUDA-compatible software layer is a powerful force for change. The immediate action is not to abandon current infrastructure but to begin planning for a heterogeneous compute future. Start evaluating emerging hardware solutions and, more importantly, begin architecting your software and MLOps pipelines with portability in mind. The winners in the next decade of AI will be those who build for flexibility, not those who remain locked into the perceived safety of a single, proprietary ecosystem.
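What “architecting with portability in mind” can look like in practice is mostly discipline, not new tooling. The following minimal sketch, using a hypothetical resolve_device helper and an assumed ACCELERATOR environment variable, shows one common pattern: keep the backend choice in configuration so model code never hard-codes a vendor.

import os
from typing import Optional

import torch

def resolve_device(preferred: Optional[str] = None) -> torch.device:
    """Pick a compute device from config/env rather than hard-coding 'cuda'.

    Hypothetical helper for illustration: the point is that backend selection
    lives in one place, so a new accelerator target can be swapped in via
    configuration instead of code changes.
    """
    requested = preferred or os.environ.get("ACCELERATOR", "auto")
    if requested != "auto":
        return torch.device(requested)
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

# Model and pipeline code stay backend-agnostic; only the resolver knows
# anything about the hardware it is running on.
device = resolve_device()
model = torch.nn.Linear(512, 10).to(device)

Teams that adopt this kind of indirection now will be in a far better position to trial new accelerators, whether built on Oxmiq IP or anything else, without rewriting their training and serving code.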