Lumentum Holdings' fiscal third-quarter 2026 results offered one of the clearest signals yet that a powerful new cycle in artificial intelligence (AI) infrastructure is underway, and that the bottleneck is no longer computing power but the networks that connect it.
A sweeping new agreement between Anthropic and Google Cloud is throwing into sharp relief just how concentrated — and how enormous — the artificial intelligence boom has become.
GlobalWafers said on May 4 that its first-quarter performance reflected a transitional period, as short-term cost pressures and capacity expansion weighed on margins even as demand tied to artificial intelligence (AI) and high-performance computing began to strengthen.
Flex shares rose 13% in after-hours trading on May 5 after the electronics manufacturing services (EMS) provider forecast fiscal 2027 results above Wall Street expectations and announced plans to spin off its Cloud and Power Infrastructure segment into a separate publicly traded company.
AMD's fiscal first-quarter 2026 earnings call was not just a victory lap for another data center beat. It was a strategic argument from management: AI infrastructure is no longer only an accelerator story. It is becoming a full compute-platform story, where CPUs, GPUs, memory, software, and rack-scale systems all have to move together.
Google's effort to expand its tensor processing units (TPUs) beyond its own cloud is meeting resistance from some of the AI infrastructure companies best positioned to distribute alternative chips, with executives from Nebius, Lambda, and CoreWeave saying they do not plan to adopt TPUs anytime soon, according to The Information.


