
Global AI Hardware Race Strains Supply Chains Ahead of Nvidia's Earnings

The worldwide scramble to build AI infrastructure is exposing critical bottlenecks deep in the semiconductor supply chain, from burn-in testing equipment to high-bandwidth memory and data centre interconnects. With Nvidia set to report earnings on February 25, signals from key suppliers reveal an ecosystem under intense pressure — strong demand colliding with lead times that span months or years. The race is no longer just about chips: it is about who controls the infrastructure behind them.

ViaNews Editorial Team

February 18, 2026


Across continents — from silicon fabs in Taiwan and South Korea to data centre campuses in the United States, Europe, and the Gulf — the global buildout of artificial intelligence infrastructure is exposing a web of supply chain dependencies that no single country or company fully controls. As Nvidia prepares to report quarterly earnings on February 25, a cluster of supplier disclosures is offering an unusually granular view of where the bottlenecks lie.

California-based Aehr Test Systems, whose equipment is used to stress-test high-power semiconductors before deployment, issued second-half FY2026 bookings guidance of $60M to $80M — an order-of-magnitude jump from just $6.2M in the preceding quarter. The driver is almost entirely AI-related demand for wafer-level and packaged-part burn-in, the process by which accelerator chips are subjected to extreme heat and voltage to weed out early failures before they reach production servers. CEO Gayn Erickson confirmed that the company's lead customer for its Sonoma platform — an AI ASIC manufacturer — has submitted a "very large forecast," with shipments beginning in Q1 FY2027 (from May 2026).

The Sonoma system now supports configurations handling up to 2,000 watts per device — a figure that reflects the thermal intensity of modern AI accelerators, which increasingly rival small industrial machines in their power consumption. Aehr says it can ship more than 20 units per month, but the consumables at the heart of wafer-level testing carry an eight-week lead time, and new high-bandwidth film products require over a year to develop. These timelines are structural, not logistical — they cannot be shortened simply by throwing money at the problem.
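
As a rough illustration of why those lead times, rather than assembly capacity, govern the pace of deployment, the back-of-envelope sketch below (in Python, using a hypothetical order date alongside the lead times cited above) shows how the longest-lead item sets the earliest possible in-service date, no matter how many systems can be built in a month.

```python
from datetime import date, timedelta

# Back-of-envelope sketch only. The order date is hypothetical; the lead times
# are the figures cited above. The point: the longest-lead item, not monthly
# assembly capacity, determines the earliest possible in-service date.

order_date = date(2026, 5, 1)   # hypothetical purchase-order date
consumable_lead_weeks = 8       # cited lead time for wafer-level test consumables
new_film_lead_weeks = 52        # cited development time for new film products (over a year)

def earliest_in_service(order: date, needs_new_film: bool) -> date:
    """Earliest date a fully provisioned system could enter service."""
    lead_weeks = new_film_lead_weeks if needs_new_film else consumable_lead_weeks
    return order + timedelta(weeks=lead_weeks)

print(earliest_in_service(order_date, needs_new_film=False))  # 2026-06-26
print(earliest_in_service(order_date, needs_new_film=True))   # 2027-04-30
```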

That reality has global consequences. Nations and blocs investing heavily in sovereign AI capacity — including the European Union's AI gigafactory initiative, Saudi Arabia's HUMAIN programme, Japan's government-backed semiconductor revival, and India's expanding chip ambitions — are all ultimately dependent on the same constrained layer of specialised test and packaging equipment, much of it produced by a handful of companies concentrated in the United States, the Netherlands, and Japan.

Interconnect infrastructure faces similar pressure. Credo Technology, which supplies high-speed active electrical cable and optical interconnect solutions for data centre switching fabrics, guided third-quarter revenue to $335–345M, reflecting sustained capital expenditure from hyperscale operators expanding AI training clusters globally. As GPU counts per cluster climb into the tens of thousands, the bandwidth of the network fabric connecting them becomes the binding constraint — and Credo's order trajectory suggests that constraint is translating directly into revenue.
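
To see why the fabric becomes the binding constraint, a simple sketch helps: aggregate interconnect capacity grows with the product of GPU count, links per GPU, and line rate, so every step up in cluster size translates directly into cabling and optics demand. The link count and 800 Gb/s line rate below are illustrative assumptions, not figures from Credo or any operator.

```python
# Rough illustration of how fabric bandwidth requirements scale with cluster
# size. The links-per-GPU count and per-link line rate are assumptions chosen
# for illustration, not vendor or operator figures.

def fabric_bandwidth_tbps(num_gpus: int, links_per_gpu: int = 8,
                          gbps_per_link: int = 800) -> float:
    """Aggregate access-layer bandwidth the switching fabric must carry, in Tb/s."""
    return num_gpus * links_per_gpu * gbps_per_link / 1_000

for cluster in (1_024, 16_384, 65_536):
    print(f"{cluster:>6} GPUs -> {fabric_bandwidth_tbps(cluster):>10,.0f} Tb/s of aggregate fabric capacity")
```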

On the memory front, HBM3e — the high-bandwidth memory integrated into Nvidia's Hopper and Blackwell architectures — remains in tight supply. South Korean manufacturers SK Hynix and Samsung, along with US-based Micron, are the world's only producers of HBM at scale, giving them outsized influence over the pace at which next-generation AI systems can be deployed. Supply tightness has been a recurring theme for several quarters, and there is little indication of near-term relief as demand from hyperscalers in North America, Asia, and the Middle East continues to outpace production ramp-up.

A subtler dimension of the supply chain story involves intellectual property. Groq, a US startup building inference accelerators designed to compete with Nvidia on cost and latency, has entered a licensing arrangement with Nvidia for AI chip intellectual property — a signal that even the most ambitious challengers must navigate an IP landscape increasingly dominated by Nvidia's foundational patents. For international chipmakers in China, Europe, and elsewhere attempting to build domestic alternatives, this consolidation of IP adds another layer of complexity to an already difficult path toward technological independence.

Taken together, the picture that emerges is of an AI hardware ecosystem that is scaling rapidly but unevenly — with demand running well ahead of the infrastructure required to sustain it. The coming Nvidia earnings report will set the tone, but the deeper story is playing out across supply chains that span a dozen countries and run on timelines measured not in quarters, but in years.


Sources:
1. Nasdaq, "Aehr Test (AEHR) Q2 2026 Earnings Call Transcript" (January 16, 2026)
2. Yahoo Finance, "Credo Technology Group Holding Ltd Reports Second Quarter of Fiscal Year 2026 Financial Results" (December 1, 2025)
3. Yahoo Finance, "Stock market today: Dow, S&P 500, Nasdaq end higher in volatile trading day as Apple jumps" (February 17, 2026)