AMD's fiscal first-quarter 2026 earnings call was not just a victory lap for another data center beat. It was a strategic argument from management: AI infrastructure is no longer only an accelerator story. It is becoming a full compute-platform story, where CPUs, GPUs, memory, software, and rack-scale systems all have to move together.
Google's effort to expand its tensor processing units (TPUs) beyond its own cloud is meeting resistance from some of the AI infrastructure companies best positioned to distribute alternative chips: executives from Nebius, Lambda, and CoreWeave told The Information they have no near-term plans to adopt TPUs.
