How is India's AI governance framework reshaping where AI runs?

Prasanth Aby Thomas, DIGITIMES, Bangalore

Credit: AFP

India's tightening AI governance rules are no longer just shaping compliance strategies. They are beginning to influence where artificial intelligence workloads are deployed, pushing sensitive applications toward India-hosted and on-premise infrastructure.

Rather than acting as a standalone policy initiative, India's AI governance framework is increasingly influencing infrastructure decisions, particularly in sectors that handle large volumes of personal or regulated data. Vendors supplying AI servers and high-performance computing systems say enterprises are now factoring compliance, auditability, and data locality into their AI procurement strategies.

Swastik Chakraborty, vice president of technology at Netweb Technologies, said the enforcement of India's Digital Personal Data Protection (DPDP) Act is playing a key role in this shift. "It is not just for high-risk workloads like credit, health, or law enforcement," he said. "Any platform that shares or uses Indian personal data is now under an obligation to comply with reporting and protection requirements."

Netweb is a manufacturing partner for Nvidia, producing AI and high-performance computing systems in India.

According to Chakraborty, data localization requirements are also expanding beyond banking and financial services into other industry verticals.

Regulators themselves are adopting India-hosted cloud environments to ensure supervisory access to sensitive data. "Regulators like the Reserve Bank of India (India's central bank) are themselves moving toward Indian cloud or sovereign cloud environments," he said.

From policy to infrastructure

While much of the public discussion around AI governance focuses on data, Netweb argues that regulation is increasingly shaping decisions around models, algorithms, and infrastructure.

Chakraborty said enterprises and government agencies are seeking to localize not only datasets but also AI models trained on Indian data and the algorithms used to process them.

This shift has direct implications for hardware demand. Enterprises deploying larger and more accurate AI models require newer generations of CPUs and GPUs, along with significantly larger memory footprints. "Bigger models require bigger memory so that the model and data can be loaded and trained or fine-tuned," Chakraborty said.
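The scaling Chakraborty describes can be illustrated with simple arithmetic: the memory needed just to hold a model's weights is roughly parameter count times bytes per parameter. A minimal sketch (the 70-billion-parameter figure is an illustrative example, not one cited in the article):

```python
def weight_memory_gb(params_billion: float, bytes_per_param: int) -> float:
    """Approximate memory (GB) needed just to hold model weights."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 70B-parameter model stored in FP16 (2 bytes per parameter)
# needs ~140 GB for the weights alone
print(weight_memory_gb(70, 2))  # 140.0
```

Training or fine-tuning multiplies this further, since gradients and optimizer state must also fit in memory alongside the weights, which is why larger models push procurement toward higher-capacity GPU configurations.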

As a result, system procurement is moving toward higher-density configurations that prioritize efficiency per dollar and per watt. Chakraborty noted a gradual transition from air-cooled systems to liquid-cooled designs to improve GPU utilization and reduce operating costs.

What 'sovereign AI' means at the hardware level

Netweb defines sovereign AI primarily as an infrastructure problem rather than a software or cloud abstraction. "Sovereign AI starts from the hardware," Chakraborty said, pointing to the need for trust and security mechanisms at the most basic level of a server.

This includes securing and verifying the boot process across CPUs, GPUs, network interface cards, and baseboard management controllers, as well as validating firmware integrity. Before a server is allowed to join a cloud environment, Chakraborty said, it must pass attestation checks to ensure it meets governance requirements.
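The attestation gate described above amounts to comparing measured firmware digests against a known-good manifest before a server is admitted. A minimal sketch of that idea, with hypothetical component names and firmware images (this is not Netweb's actual mechanism):

```python
import hashlib

def digest(blob: bytes) -> str:
    """SHA-256 digest of a firmware image."""
    return hashlib.sha256(blob).hexdigest()

# Hypothetical known-good firmware images, measured at provisioning time
KNOWN_GOOD = {
    "cpu_microcode": digest(b"cpu-fw-v1.2"),
    "gpu_vbios": digest(b"gpu-fw-v3.0"),
    "nic_firmware": digest(b"nic-fw-v2.1"),
    "bmc_firmware": digest(b"bmc-fw-v4.5"),
}

def attest(measured: dict) -> bool:
    """Admit the server only if every component's measured firmware
    matches its known-good digest."""
    return all(
        digest(measured.get(name, b"")) == good
        for name, good in KNOWN_GOOD.items()
    )

# A server with unmodified firmware passes; a tampered BMC fails
clean = {"cpu_microcode": b"cpu-fw-v1.2", "gpu_vbios": b"gpu-fw-v3.0",
         "nic_firmware": b"nic-fw-v2.1", "bmc_firmware": b"bmc-fw-v4.5"}
tampered = dict(clean, bmc_firmware=b"bmc-fw-evil")
print(attest(clean), attest(tampered))  # True False
```

In production this comparison is typically rooted in hardware (for example, a TPM-backed measured boot chain) rather than a software dictionary, but the admit-or-reject logic is the same.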

Confidential computing is another key element, extending protection beyond data at rest and in transit to data during processing. "If someone changes even a few bits of a model during inference, the model can start behaving erratically," he said, noting that such attacks could undermine months of training effort and significant GPU investment.

Observability and auditability also feature prominently. Chakraborty said governance requires high-fidelity logging at both the hardware and application levels to support forensic analysis and regulatory oversight. "You can only control what you can observe," he said.
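A governance-grade audit trail typically records who did what to which resource, in a form where retroactive edits are detectable. A common approach is hash-chaining log entries; a minimal sketch (field names are hypothetical, not a specific regulator's schema):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry records the hash of its
    predecessor, so any retroactive edit breaks the chain."""
    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64

    def record(self, actor: str, action: str, resource: str) -> None:
        entry = {"ts": time.time(), "actor": actor, "action": action,
                 "resource": resource, "prev": self.prev_hash}
        self.prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(e, sort_keys=True).encode()
            ).hexdigest()
        return True

log = AuditLog()
log.record("model-svc", "inference", "loan-scoring-v2")
log.record("admin-7", "export", "training-dataset-q3")
print(log.verify())  # True
log.entries[0]["actor"] = "someone-else"  # tampering
print(log.verify())  # False
```

Hardware-level telemetry (GPU utilization, firmware events) and application-level records would feed the same kind of tamper-evident store to support the forensic analysis Chakraborty mentions.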

Edge AI and citizen-scale deployments

India's AI ambitions are not limited to centralized data centers. Chakraborty said governance requirements are extending to edge deployments, particularly as AI is increasingly used for citizen-scale services in healthcare, financial services, and civic infrastructure.

Edge nodes, he said, must mirror many of the trust and security features found in data centers, including secure hardware modules and remote attestation. This is especially important in India, where connectivity between edge locations and central systems can be intermittent. "The edge node needs to do time-critical inference locally, with only alerts or summaries sent back," he said.

Such deployments are driving demand for compact, secure AI systems capable of operating independently while remaining compliant with governance rules.

GPUs and supply considerations

The shift toward sovereign AI infrastructure comes at a time when GPUs and AI accelerators remain globally constrained. Netweb positions its design partnerships and local manufacturing as a way to mitigate supply risks.

Netweb works as a design and manufacturing partner for Nvidia on systems built around Nvidia's Grace CPU and GH200 Grace Hopper platforms, and has secured a domestic contract worth about INR17.34 billion (approx. US$210 million) to build Blackwell-powered AI servers for sovereign AI infrastructure projects in India.

Beyond hardware assembly, Netweb argues that managing large GPU clusters requires integrated software capabilities for provisioning, monitoring, and utilization tracking. Chakraborty said such tools are increasingly important as enterprises scale deployments from experimentation to production.

Looking ahead

Netweb expects AI infrastructure demand to grow across regulated sectors such as banking, healthcare, and government, with edge AI playing an increasingly important role. Use cases range from fraud detection and digital identity verification to medical screening and civic services.

While India's AI governance framework is still evolving, infrastructure vendors say its impact on hardware procurement is already visible. The challenge ahead will be balancing compliance, cost, and scalability as AI deployments move from pilots to nationwide systems.

Article edited by Jack Wu