Google unveiled a new industry-wide initiative to redefine data center architecture for the artificial intelligence era, announcing the creation of an "agile, fungible data center" workstream under the Open Compute Project (OCP).
The effort, introduced by Partha Ranganathan, vice president and engineering fellow at Google, seeks to build modular, interoperable data centers capable of adapting to the explosive growth and volatility of AI workloads.
Credit: Digitimes
Ranganathan described the moment as the "beginning of an intelligence revolution," saying the diversity and speed of technological progress—spanning seven generations of Tensor Processing Units (TPUs), multiple GPU platforms, and varied data center models—demand a fundamental shift in system design. "The world has changed dramatically," he said. "We need to build data centers that can change just as fast."
AI growth surpasses all previous computing eras
Ranganathan shared internal metrics highlighting the scale of AI's growth over the past 12 to 18 months. AI accelerator usage within Google has increased 15-fold, machine-learning storage 37-fold, and the total number of AI tokens processed 50-fold, with Google systems now handling a quadrillion tokens per month across its global services.
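To put the headline figure in perspective, a quick back-of-envelope conversion (a sketch using the article's "quadrillion tokens per month" figure; the per-second rate is our own arithmetic, not a Google-stated number):

```python
# Back-of-envelope sketch: convert "a quadrillion tokens per month"
# into a sustained per-second processing rate.
TOKENS_PER_MONTH = 1e15              # one quadrillion tokens (article's figure)
SECONDS_PER_MONTH = 30 * 24 * 3600   # ~2.59 million seconds in a 30-day month

tokens_per_second = TOKENS_PER_MONTH / SECONDS_PER_MONTH
print(f"{tokens_per_second:.2e} tokens/second")  # on the order of 4e8, i.e. ~400 million/s
```

In other words, the stated monthly volume implies a sustained rate of several hundred million tokens every second, around the clock.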
He likened OCP contributors to "the rocket builders," enabling AI exploration. Google's infrastructure, he said, now supports AI transformation across its entire ecosystem—from consumer products (with Gemini AI integrated into all 15 major Google apps) to enterprise services such as sales and cybersecurity, and scientific breakthroughs including AlphaFold, which has reshaped protein-structure prediction and drug discovery.
The AI hypercomputer integrates hardware and systems innovation
Google's AI strategy is built around what Ranganathan called the AI hypercomputer—a cross-stack, co-designed architecture spanning silicon, systems, networking, and cooling. The approach integrates custom TPUs with advances in power delivery, optical networking, and liquid cooling, enabling 10–100× improvements in cost and power efficiency over the past decade. Google already operates multiple megawatts of liquid-cooled infrastructure, he said, underscoring the scale of its operational deployment.
To extend this integrated approach to the broader industry, Ranganathan announced a new OCP workstream backed by major partners. The group aims to formalize standards around modularity, interoperability, and common interfaces across compute, storage, networking, security, and sustainability domains.
Standardization targets power, cooling, and sustainability
Agile power delivery: Building on last year's discussions about rising rack density, OCP members have coalesced around 400-volt architectures and disaggregated power delivery under the Mount Diablo project, incorporating technologies such as solid-state transformers. New work is defining standards for microgrids and battery energy storage systems to help data centers mitigate the synchronous power "spikiness" typical of AI training workloads—and, eventually, enable them to return power to the grid.
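The role of battery storage here can be illustrated with a toy model (our own illustrative sketch, not part of any OCP specification; all numbers are made up): the batteries absorb energy when an AI training cluster's draw dips and supply it when the draw spikes, so the grid sees a flat average load.

```python
# Toy sketch of a battery energy storage system (BESS) smoothing the
# synchronous power spikes of an AI training cluster. Illustrative only.
spiky_load_mw = [10, 40, 10, 40, 10, 40]  # alternating low/high phases of a training job
grid_draw_mw = sum(spiky_load_mw) / len(spiky_load_mw)  # grid supplies the flat average

battery_mwh = 0.0
for step, load in enumerate(spiky_load_mw):
    # Battery discharges when load exceeds the grid draw and recharges
    # when load falls below it (assume 1-hour steps for simplicity).
    battery_mwh -= (load - grid_draw_mw)
    print(f"step {step}: load={load} MW, grid={grid_draw_mw:.0f} MW, "
          f"battery state={battery_mwh:+.0f} MWh")
```

Because the spikes and dips cancel over a cycle, the battery's state of charge returns to its starting point while the grid never sees the swings, which is the behavior the proposed standards aim to enable at data center scale.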
Fungible cooling: Google's Project Issue, its advanced liquid-cooling system contributed to OCP earlier in 2025, is now seeing widespread adoption among vendors. The community is working to standardize chilled-water temperatures and layout parameters for colocation facilities, enabling more flexible and interchangeable cooling deployments.
Sustainability: Google has introduced a new methodology to measure the electricity, carbon, and water footprint of AI workloads. Ranganathan said an average Gemini 2.0 inference consumes less than five drops of water, with energy use and carbon emissions equivalent to roughly nine seconds of television viewing, underscoring the progress toward efficiency.
Security: Google continues to expand the Caliptra security framework, now offering post-quantum protection. The latest Caliptra 2.1 adds open-source cryptographic key management, while OCP Safe has become the default platform for secure auditing.
The next moonshot uses AI to design systems
Ranganathan closed by calling on the OCP community to pursue what he called the "AI-for-AI" challenge—using machine intelligence to design next-generation systems.
He cited Google's AlphaChip project, which applies AI to chip floorplanning and has already improved power, performance, and area (PPA) metrics while cutting design time.
"AI-assisted system design—from silicon to software to manufacturing—is the next moonshot," Ranganathan said. "It's how we'll achieve the next orders of magnitude in efficiency and capability."
Article edited by Jerry Chen