Post-Script: The Great RISC-V Secession
New Evidence Shows the Paradigm Shift Is Already Underway
In my previous article, “The Export Control Trap”, I argued that US export controls were inadvertently funding China’s next computing paradigm. Within days of publication, new market data emerged that doesn’t just validate this thesis; it suggests the transformation is happening faster and more broadly than even pessimistic projections anticipated. That warrants a post-script.
The Inflection is Here Already
Following up on several data points from the last article, it became clear that the inflection point I anticipated in the near future has in fact already arrived. Market data released in January 2026 should alarm anyone invested in continued Western AI dominance, and it demands attention to the trajectory, impact, and interplay of market forces: not only model evolution and reasoning capabilities, but also the underlying technology enablers. What I note is:
First, the economic divergence is quantified.
Last year, DeepSeek-V3’s documented training run was estimated to have cost around $5.5 million, versus $50-200 million for Western frontier models: a 10-40x efficiency advantage. This is based on DeepSeek’s own published figures, but even if the real number were two or three times higher, it would not change the challenge to the economic model held by Western AI frontier labs. On its own DeepSeek’s achievement is noteworthy, but it also underscores the thesis that China is not replicating; it is forging its own strategy around the constraints. The efficiency gain comes not from better hardware but from architectural innovation forced by those constraints1. Their 671-billion-parameter model activates only 37 billion parameters per token through Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture, achieving comparable performance with a fraction of the compute2.
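The sparse-activation principle behind those numbers can be sketched in a few lines. This is a toy top-k mixture-of-experts gate in pure Python, not DeepSeek’s actual implementation: the expert count, gate values, and scalar “experts” are invented for illustration. The point is simply that only the k highest-scoring experts run per token, so active compute is a small fraction of total parameters.

```python
import math

def topk_moe(x, expert_weights, gate_logits, k=2):
    """Toy top-k mixture-of-experts forward pass.

    Only the k highest-scoring experts run for a given token, so the
    active parameter count is a small fraction of the total -- the same
    principle behind activating ~37B of 671B parameters.
    """
    # Softmax over the gate logits to get routing probabilities.
    m = max(gate_logits)
    exps = [math.exp(g - m) for g in gate_logits]
    z = sum(exps)
    probs = [e / z for e in exps]

    # Select the top-k experts; the rest are skipped entirely.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]

    # Renormalize the selected gates and mix only those experts' outputs.
    norm = sum(probs[i] for i in top)
    out = 0.0
    for i in top:
        out += (probs[i] / norm) * (expert_weights[i] * x)  # expert i = scalar "layer"
    return out, top

# 8 experts, only 2 active per token: 2/8 of expert compute is spent.
out, active = topk_moe(x=1.0,
                       expert_weights=[0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0],
                       gate_logits=[0.1, 2.0, 0.3, 1.5, 0.2, 0.0, 0.4, 0.1],
                       k=2)
```

With eight experts and k=2, three quarters of the expert parameters are never touched for this token; scale the same ratio up and the 671B/37B split follows the same logic.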
Media analysts contrasted this with widely cited estimates of tens to hundreds of millions of dollars in compute expenditure for leading Western frontier models, though no audited breakdown exists for models like GPT-5.2 or Sonnet 4.5, so these figures remain stated costs, not confirmed costs3.
Inference economics tell another eyebrow-raising story: DeepSeek’s claimed cost structure delivers operations at $0.10 per million tokens while Western models remain at $1.00-$5.00, a 10x to 50x operational efficiency gap4. Those costs will shrink, and over time the gap between Western and Chinese models may close, but the advantage conferred by a different architectural paradigm may persist until new technological breakthroughs swing the balance again.
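As a back-of-envelope check on what that gap means at scale (the 1-trillion-tokens-per-month workload is my own illustrative assumption, not a figure from any provider):

```python
# Per-million-token prices from the claims above.
deepseek_cost = 0.10           # claimed $ per million tokens
western_range = (1.00, 5.00)   # $ per million tokens

gap_low = western_range[0] / deepseek_cost    # 10x
gap_high = western_range[1] / deepseek_cost   # 50x

# Assumed workload: 1 trillion tokens/month, i.e. 12M "million-token" units/year.
tokens_per_year_millions = 1_000_000 * 12

deepseek_annual = deepseek_cost * tokens_per_year_millions       # ~$1.2M/year
western_annual = western_range[1] * tokens_per_year_millions     # ~$60M/year
```

At that hypothetical volume the claimed price difference is the difference between a rounding error and a board-level line item, which is why stated inference costs attract so much scrutiny.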
Crucially, these are not forecasts or speculative models. They are measurements of a transformation already in motion.
Second, the alternative ecosystem has achieved critical mass.
A growing share of new AI accelerator startups in the APAC region now adopt RISC-V as their primary instruction set architecture, often explicitly to ensure sanction-resilient operations. This isn’t just hedging; it’s driving a wholesale ecosystem pivot. Market forecasts project RISC-V growth from $2.49 billion in 2025 to $10.77 billion by 2030, with 2026 orderbooks already tracking 35% growth to $3.34 billion5 6. That market share likely came as a shock to many analysts, and as the position evolves through 2026 and beyond, it suggests that in the markets RISC-V has entered so far, chiefly standardized MCUs for IoT and automotive use cases, these numbers represent a noteworthy foothold for this open-standard instruction set architecture (ISA). The trajectory suggests we’ve crossed the threshold from experimental alternative to industry standard in formation.
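The forecast and the 2026 orderbook figure are internally consistent, which is worth checking: $2.49B growing to $10.77B over five years implies a compound annual growth rate of roughly 34%, right in line with the ~35% growth already booked for 2026.

```python
# Sanity-check: does $2.49B (2025) -> $10.77B (2030) square with ~35% growth in 2026?
start, end, years = 2.49, 10.77, 5

cagr = (end / start) ** (1 / years) - 1   # compound annual growth rate, ~34%/year

implied_2026 = start * (1 + cagr)         # ~$3.34B, matching the orderbook figure
growth_2026 = 3.34 / 2.49 - 1             # ~34.1% from the reported 2026 number
```

In other words, the 2030 projection is not an outlier tacked onto flat near-term data; the near-term orderbook already sits on the projected curve.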
Third, the incumbent has validated the challenger’s path.
NVIDIA has ported its CUDA platform to support the RISC-V instruction set architecture on the CPU side, enabling RISC-V cores to orchestrate workloads in CUDA contexts7. When the dominant player validates the challenger’s architecture through direct technical integration, the paradigm shift effectively becomes official policy rather than speculation. NVIDIA’s shipment of over one billion RISC-V cores in 2024 alone, primarily for system management and secure boot tasks, demonstrates this isn’t symbolic support but operational deployment at scale8. Critics may say that RISC-V doesn’t materially change the playing field for 2, 3, and 5nm chips, but that is only true in the short term. Over a longer horizon, RISC-V showcases a diverging architectural approach to the AI compute problem, and although not a direct Kuhnian paradigm shift today, it marks the “development-by-accumulation” that challenges “normal science” and establishes a path toward the shift in paradigm itself. Export controls on advanced AI chips have incentivized Chinese ecosystem actors to pursue greater domestic autonomy in both hardware and software AI stacks, and DeepSeek’s deployment strategy and partnerships with domestic toolchains illustrate how this dynamic influences product positioning9.
Secession Documented: Hardware & Software Sovereignty at Scale
RISC-V has moved from academic curiosity and edge-case challenger to institutionalized national strategy. China has integrated it into its 14th Five-Year Plan as an “insurance policy” against restricted access to ARM and x86 architectures10. The significance extends beyond market projections because RISC-V represents what China now codifies as a “geopolitically neutral” instruction set architecture, a defensive posture with profound offensive implications.
Unlike x86, which maintains a rigid, frozen instruction set controlled by Intel and AMD, RISC-V allows Chinese firms to add custom AI instructions directly into silicon. This enables chips that are highly optimized for specific AI workloads, such as LLM inference, compared to general-purpose architectures. As an open-source standard governed by a foundation based in neutral Switzerland, RISC-V is technically unsanctionable. Washington cannot turn off the architecture the way it can restrict ARM licenses or ASML lithography exports.
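The extensibility point is concrete at the bit level. The RISC-V specification reserves major opcodes (the custom-0 and custom-1 spaces) for vendor-defined instructions, so a firm can add, say, an AI dot-product instruction without anyone’s license. The sketch below encodes a hypothetical “ai.dotprod” R-type instruction into a 32-bit word; the mnemonic and the funct3/funct7 values are invented for illustration, but the field layout and the custom-0 opcode are genuinely part of the spec.

```python
# RISC-V reserves the custom-0 major opcode for vendor extensions.
CUSTOM_0 = 0b0001011

def encode_rtype(opcode, rd, funct3, rs1, rs2, funct7):
    """Encode a 32-bit RISC-V R-type instruction word:
    funct7[31:25] rs2[24:20] rs1[19:15] funct3[14:12] rd[11:7] opcode[6:0]."""
    assert 0 <= rd < 32 and 0 <= rs1 < 32 and 0 <= rs2 < 32
    return ((funct7 << 25) | (rs2 << 20) | (rs1 << 15) |
            (funct3 << 12) | (rd << 7) | opcode)

# Hypothetical "ai.dotprod a0, a1, a2" in the custom-0 space,
# with vendor-chosen funct3=0b000 and funct7=0b0000001.
word = encode_rtype(CUSTOM_0, rd=10, funct3=0b000, rs1=11, rs2=12, funct7=0b0000001)
```

A standard RISC-V core simply traps on this word; a vendor’s silicon decodes it natively. That is what “adding custom AI instructions directly into silicon” means in practice, and no external licensor is in the loop.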
Hardware sovereignty has transitioned from development to operational deployment. Alibaba’s XuanTie C930 server processor, launched in March 2025, is now shipping at volume to domestic datacentres11 12. This 64-bit, RVA23-compliant processor doesn’t merely imitate ARM; it eliminates the licensing dependencies Washington currently leverages for technological containment. Companies like RIVAI and Nuclei System are leveraging RISC-V’s modular vector extensions to build hardware optimized for post-transformer architectures, particularly those emphasizing sparse computation and efficient inference. As a result, these firms are not deeply locked into NVIDIA-centric design assumptions, in part because sanctions limited early dependence on that ecosystem.
The strategic positioning extends to supply chain control. By the end of 2026, China is projected to control 45% of global capacity for 28nm-and-above mature node production. [footnote: Projections that China will hit 39% by 2027 are based on earlier market research, but when factoring in under-construction capacity that comes online throughout 2026, the 45% figure is frequently used by multiple sources to describe China’s share of new global capacity additions.13 14 15] This isn’t just market share; it means control over the foundational chips that power automobiles, industrial automation systems, IoT devices, power grid infrastructure, medical equipment, and defence systems. While Western policy obsesses over 3nm cutting-edge processes, China is systematically dominating production for the nodes that run the physical world’s existing infrastructure.
Software stack maturation is accelerating in parallel with hardware deployment. Moore Threads’ MUSA (Moore Threads Unified System Architecture) platform ships with the Musify toolkit designed to lower barriers for porting CUDA-originated code to domestic GPU products16. While precise performance relative to native CUDA workloads varies by application and ecosystem maturity, the existence of automated translation tools means the barrier to switching isn’t “rewrite everything from scratch” but “port and optimize iteratively.” This mirrors the Linux playbook: initial compatibility through translation layers, then progressive native optimization as the ecosystem matures.
Huawei’s CANN (Compute Architecture for Neural Networks) provides a comprehensive layered software stack for Ascend accelerators, positioned as a complete AI compute platform with integration into popular frameworks [footnote: https://en.wikipedia.org/wiki/MindSpore]. While not yet standardized as a universal solution across all domestic hardware vendors, CANN represents a key pillar of China’s AI software ecosystem. The lack of complete standardization isn’t weakness; rather, it’s the natural stage of early ecosystem development, comparable to Unix fragmentation before POSIX standardization emerged from competitive pressure.
The broader AI software ecosystem also includes open and vendor-agnostic systems like Triton and JAX, which reduce dependency on handwritten CUDA code across the entire industry. These frameworks provide hardware abstraction that benefits China without requiring Chinese development leadership. Every framework that abstracts away hardware specifics makes NVIDIA’s CUDA advantage more fragile, and China benefits from Western companies’ own efforts to reduce vendor lock-in.
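The abstraction argument reduces to a simple pattern: write the model once against abstract operations, and let a backend table supply the kernels. This toy sketch is my own illustration of that pattern, not Triton’s or JAX’s actual machinery, and the commented “domestic_gpu” backend names are hypothetical placeholders, not real vendor APIs.

```python
# Reference implementations of two abstract ops, in plain Python.
def matvec_python(mat, vec):
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

def relu_python(xs):
    return [max(0.0, x) for x in xs]

BACKENDS = {
    "reference": {"matvec": matvec_python, "relu": relu_python},
    # A vendor backend would register its own kernels under the same op names:
    # "domestic_gpu": {"matvec": musa_matvec, "relu": musa_relu},  # hypothetical
}

def tiny_layer(x, weights, backend="reference"):
    """Model code never names the hardware -- only abstract ops."""
    ops = BACKENDS[backend]   # swapping hardware = swapping the table
    return ops["relu"](ops["matvec"](weights, x))

y = tiny_layer([1.0, -2.0], [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
# -> [1.0, 0.0, 0.0]
```

Frameworks like Triton and JAX do this at industrial scale with compilers rather than dispatch tables, but the strategic consequence is the same: once model code stops naming CUDA directly, any vendor who implements the op set becomes a drop-in target.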
And just to be clear: this isn’t capability development in laboratory settings. It’s operational deployment at scale, with shipping products, established supply chains, and documented performance metrics.
Counterpoint: Why the Scaling Paradigm May Still Prevail
Before concluding, a critical counterargument must be engaged. The scaling paradigm that built the current AI era is not defenceless, and its adaptive capacity remains formidable.
A fair critique of this analysis is that it may overstate the durability of efficiency-driven architectural divergence while understating the adaptive capacity of the dominant scaling paradigm. Western AI leaders are not blind to efficiency constraints, nor are they locked into a single architectural trajectory by inertia alone. The same organizations investing in massive compute infrastructure are also leading advances in sparsity, quantization, distillation, and inference optimization. If efficiency proves decisive, there is no structural barrier preventing Western ecosystems from incorporating those innovations rapidly, especially given their access to capital, manufacturing, and global talent.
Moreover, frontier performance remains tightly coupled to scale in domains that matter strategically, including multimodal reasoning, scientific discovery, and generalization under distribution shift. For these tasks, access to leading-edge fabrication, extreme bandwidth memory, and large-scale training clusters may continue to confer decisive advantages that efficiency alone cannot fully substitute. If future breakthroughs remain compute-hungry, the scaling paradigm may not be transitional but foundational, and the current wave of efficiency gains may represent optimization rather than displacement.
Finally, the Chinese ecosystem’s embrace of alternative architectures carries its own risks. Fragmentation across instruction sets, toolchains, and software stacks can slow innovation, raise integration costs, and limit the transferability of breakthroughs. Open standards like RISC-V reduce dependency but do not automatically guarantee coherence or performance leadership. History offers examples where open, modular ecosystems struggled to outperform tightly integrated incumbents over long periods.
From this perspective, the current divergence could reflect temporary adaptation to constraint rather than the emergence of a superior paradigm. Only time will tell. If constraints ease or if scale-dependent breakthroughs dominate future progress, Western architectural investments could remain not only relevant but decisive.
Who Is Actually Trapped
Interestingly, we are witnessing the birth of two parallel, increasingly incompatible innovation ecosystems. The West continues optimizing within an established paradigm, protected by export controls and sustained by massive capital investment in known architectures. China, denied access to that paradigm, is being forced to build the alternative.
The question my previous article raised remains unanswered: who is actually trapped by the Export Control regime? The RISC-V data suggests we’re watching the answer materialize in real time.
We are not merely building a fortress around our current advantages. We are pouring concrete, in the form of trillion-dollar investments and rigid policy, into fortifying a paradigm whose foundations are already shifting. China is being forced to build the future in what appears to be the “left field” of efficiency and architectural autonomy.
The trap isn’t for them. It’s for us. We are the ones being locked into a trillion-dollar ditch of our own making.
The Great RISC-V Secession isn’t just about chips. It’s about who gets to define what “advanced” means in the next era of computing. And right now, the player being forced to redefine it has more incentive, more capital committed to alternatives, and far less to lose than the incumbent defending yesterday’s paradigm.
If you want more, here’s the first article on this topic.
1. https://www.reuters.com/technology/artificial-intelligence/what-is-deepseek-why-is-it-disrupting-ai-sector-2025-01-27/
2. https://arxiv.org/pdf/2412.19437
3. https://www.lemonde.fr/en/economy/article/2025/01/28/chinese-start-up-deepseek-disrupts-the-artificial-intelligence-sector_6737513_19.html
4. https://www.reuters.com/technology/chinas-deepseek-claims-theoretical-cost-profit-ratio-545-per-day-2025-03-01/
5. https://www.globenewswire.com/news-release/2026/01/27/3226141/28124/en/Reduced-Instruction-Set-Computer-V-Risc-V-Market-Report-2026-10-77-Bn-Opportunities-Trends-Competitive-Landscape-Strategies-and-Forecasts-2020-2025-2025-2030F-2035F.html
6. https://riscv.org/blog/risc-v-set-to-announce-25-market-penetration-open-standard-isa-is-ahead-of-schedule-securing-fast-growing-silicon-footprint
7. https://www.tomshardware.com/pc-components/gpus/nvidias-cuda-platform-now-supports-risc-v-support-brings-open-source-instruction-set-to-ai-platforms-joining-x86-and-arm
8. While specific core counts for any single product such as the Rubin platform are not confirmed in open primary sources, NVIDIA’s broader use of embedded RISC-V for system management and secure boot tasks has been noted in industry discussion and in RISC-V International’s reports on shipments of RISC-V cores by NVIDIA. (Direct product disclosures specific to Rubin remain sparse in public documentation.)
9. https://apnews.com/article/00c594310b22afbf150559d08b43d3a5
10. RISC-V and China’s 14th Five-Year Plan: China’s 14th Five-Year Plan (2021–2025) and its associated industrial policy guidance place strong emphasis on semiconductor self-reliance and the reduction of dependence on foreign proprietary technologies. Within this framework, RISC-V has emerged as a strategically promoted instruction set architecture, widely discussed in official-adjacent policy interpretations and industry guidance as a pathway to reduce long-term reliance on ARM and x86. Chinese financial and policy media, including Sina Corporation Finance (https://finance.sina.com.cn/tech/roll/2025-04-18/doc-inetmqpy6125990.shtml), have explicitly linked RISC-V development to Five-Year Plan objectives, citing ministry-level support, industry alliance formation, and ecosystem investment. This policy direction has been reinforced by international reporting from Reuters (https://www.reuters.com/technology/china-publish-policy-boost-risc-v-chip-use-nationwide-sources-2025-03-04) noting that Chinese authorities are preparing nationwide guidance to promote adoption of open-source RISC-V architectures as part of a broader effort to mitigate technology export risks and strengthen domestic control over core computing platforms.
11. https://www.tomshardware.com/pc-components/cpus/alibaba-launches-risc-v-based-xuantie-c930-server-cpu-ai-hpc-chip-ships-this-month-more-designs-to-follow
12. https://www.techerati.com/news-hub/alibaba-unveils-server-grade-high-performance-risc-v-chip/#:~:text=According%20to%20Alibaba%2C%20the%20C930,for%20cloud%20and%20server%20deployment
13. https://www.scmp.com/tech/article/3336933/china-remain-worlds-biggest-buyer-chipmaking-equipment-through-2027-semi
14. https://www.trendforce.com/news/2026/01/12/news-chinas-domestic-chip-equipment-adoption-beats-2025-target-at-35-led-by-naura-amec/#:~:text=Meanwhile%2C%20ACM%20Research’s%20single%2Dwafer,90%25%2C%20as%20Anue%20highlights
15. https://www.tomshardware.com/tech-industry/chinas-mature-chips-to-make-up-28-percent-of-world-production-creating-oversupply-western-companies-express-concern-for-their-survival
16. https://www.tomshardware.com/pc-components/gpus/chinas-moore-threads-polishes-homegrown-cuda-alternative-musa-supports-porting-cuda-code-using-musify-toolkit