No H200 Chips Sold to China Two Months After Trump Approved Exports, and Beijing May Already Have Its Own Answer
Zero Sales, Zero Urgency: Why China Hasn’t Bought a Single Nvidia H200 — And May Not Need To
February 24, 2026
Two months after the Trump administration reversed course and permitted Nvidia to export its H200 AI chips to China, not a single chip has changed hands.
That striking fact emerged Monday during a hearing of the House Foreign Affairs Committee, where David Peters, Assistant Secretary for Export Enforcement at the U.S. Department of Commerce, confirmed that H200 sales to Chinese customers remain at zero.
When asked by Democratic Representative Sydney Kamlager-Dove how many H200 chips had been approved for sale to China, Peters responded plainly: “My understanding is that none so far.”
The testimony arrives at a sensitive moment for Nvidia, whose earnings report is due Wednesday — a report investors are watching closely for any signs of the company’s return to what was once one of its most lucrative markets.
A Policy Reversal With Strings Attached
On December 8, 2025, President Trump announced that Nvidia would be allowed to sell H200 chips to “approved customers” in China — a significant departure from years of tightening export restrictions. But the policy comes laden with conditions: the U.S. government collects a 25% cut of sales revenue, and every prospective customer must individually apply for and receive government approval before any transaction can be completed.
The cumbersome approval process, combined with the substantial revenue-sharing requirement, has so far produced no completed sales. Analysts note that Chinese buyers face both a financial disincentive — the 25% fee makes H200s considerably more expensive — and significant regulatory uncertainty, as policy could tighten again before any purchase is delivered.
Nvidia CEO Jensen Huang himself acknowledged the uncertainty ahead of the earnings report, saying he was unsure whether Chinese companies would even end up buying H200 chips.
What Is the H200, and Why Does It Matter?
Released by Nvidia in November 2023, the H200 is an upgrade of the H100 chip — the workhorse behind many of the world’s most advanced large language models. The H200 offers substantially higher memory bandwidth at 4.8 terabytes per second, making it especially powerful for AI inference workloads. It sits a generation below Nvidia’s latest Blackwell architecture, which remains fully off-limits for Chinese buyers.
For years, the H200 and H100 were unavailable to Chinese companies due to U.S. export controls targeting advanced computing chips. Nvidia responded by designing the China-specific H20 — a stripped-down version with deliberately capped capabilities — but Washington banned that chip earlier this year as well, further restricting China’s access to Nvidia’s product line.
China’s Domestic Alternative: Impressive Progress, Real Limits
The more strategically significant question behind the zero-sale figure is whether China even needs the H200 anymore.
The answer is nuanced. China has made remarkable progress in building a domestic AI chip ecosystem, with Huawei’s Ascend 910C emerging as the most capable homegrown alternative. The chip uses a dual-chiplet architecture built on SMIC’s 7nm-class process and delivers approximately 3.2 terabytes per second of memory bandwidth. Huawei has also developed the CloudMatrix 384 — a server rack that clusters 384 Ascend 910C chips together using high-bandwidth optical interconnect — and positions it as a system-level competitor to Nvidia’s far more advanced GB200 platform.
In some specific inference workloads and at scale, the CloudMatrix 384 has shown competitive performance with H200-class clusters. In a leaked Baidu internal benchmark, a cluster of Huawei 910B GPUs matched an equivalent H100 cluster on Llama-2-70B training, and for inference tasks, the 910B actually outperformed the H200 on tokens-per-watt once sequence lengths exceeded 4,000 tokens.
However, the picture is less flattering at the individual chip level. The Ascend 910C delivers a total processing performance score of 12,032, compared with the H200’s 15,840, and has memory bandwidth of 3.2 TB/s versus the H200’s 4.8 TB/s. A Council on Foreign Relations analysis estimates that the Ascend 910C delivers roughly 60% of the inference performance of Nvidia’s H100 under comparable conditions — and the H200 substantially improves upon the H100.
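For readers who want the raw ratios behind that gap, here is a quick back-of-the-envelope sketch using only the figures quoted above; actual performance depends heavily on workload, software stack, and cluster scale.

```python
# Spec figures as cited in the article (TPP = total processing
# performance score; memory bandwidth in TB/s). Ratios are a rough
# single-chip comparison, not a benchmark.

h200 = {"tpp": 15840, "mem_bw_tbps": 4.8}
ascend_910c = {"tpp": 12032, "mem_bw_tbps": 3.2}

tpp_ratio = ascend_910c["tpp"] / h200["tpp"]
bw_ratio = ascend_910c["mem_bw_tbps"] / h200["mem_bw_tbps"]

print(f"Ascend 910C vs H200, TPP score:        {tpp_ratio:.0%}")
print(f"Ascend 910C vs H200, memory bandwidth: {bw_ratio:.0%}")
```

On these numbers the 910C lands at roughly three-quarters of the H200's processing score and two-thirds of its memory bandwidth, which is consistent with the CFR estimate that it trails even the older H100 on inference.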
Software reliability is another persistent challenge. Huawei’s chips run on its CANN framework, which has improved significantly but still lacks the maturity and stability of Nvidia’s CUDA ecosystem, which has been refined over more than a decade. At large cluster scale, Chinese AI labs have reported reliability issues that add operational complexity and cost.
According to Huawei’s own three-year roadmap, the company does not plan to field an H200-class chip until the fourth quarter of 2027 at the earliest: the next-generation Ascend 950PR is due in the first quarter of 2026, the 950DT later that year, and the more competitive Ascend 960, projected to roughly match the H200 in raw compute, in the fourth quarter of 2027. Even before then, severe manufacturing bottlenecks at SMIC mean Huawei’s production volumes are expected to amount to only 1–2% of U.S. chip production capacity in 2026.
Strategic Calculations on Both Sides
The zero-sales figure likely reflects procedural friction and strategic caution more than pure self-sufficiency. Chinese tech giants including ByteDance, Alibaba, and Baidu have been investing heavily in domestic Huawei hardware, both out of necessity and as a hedge against future policy swings in Washington. Reports indicate that Huawei is preparing to scale Ascend 910C production to approximately 600,000 units in the coming year.
White House officials have reportedly reviewed Huawei’s CloudMatrix 384 system and concluded that the decision to allow H200 exports was partly motivated by a desire to ensure that China remains locked into Nvidia’s CUDA software ecosystem — keeping American AI infrastructure dominant even as Chinese hardware catches up. White House spokesman Kush Desai stated the administration is “committed to ensuring the dominance of the American tech stack — without compromising on national security.”
The broader picture is one of accelerating strategic decoupling. China is becoming measurably less dependent on Nvidia than it was three years ago, and its domestic chip investment has intensified dramatically under export pressure. But a meaningful performance gap remains — particularly for frontier AI model training, which continues to rely overwhelmingly on American chips.
Whether the H200 ultimately finds buyers in China will depend as much on Washington’s regulatory patience as on Beijing’s technological confidence. For now, the number stands at zero.
Reporting based on congressional testimony, Nvidia earnings guidance, and analysis from the Institute for Progress, Council on Foreign Relations, and Tom’s Hardware.
