Hybrid models, local servers, and Netweb's role in India's AI ecosystem

Prasanth Aby Thomas, DIGITIMES, Bangalore

Swastik Chakraborty, VP, Technology, Netweb Technologies. Credit: Netweb

India's ambition to become a trillion-dollar digital economy is creating demand for AI infrastructure that can support both rapid experimentation and cost efficiency.

Swastik Chakraborty, Vice President of Technology at Netweb Technologies, said the company is working to provide an indigenous hardware and software ecosystem to meet this requirement.

"India is going to become a one-trillion-dollar digital economy," Chakraborty said. He pointed to the country's demographic dividend and large-scale digital public infrastructure as unique strengths. "Adoption of AI, as well as using AI to solve some of the perennial problems of India, using India's own datasets, creates a very, very unique opportunity."

Indigenous hardware and software stack

Netweb manufactures servers at its Faridabad facility, including systems for Nvidia's forthcoming superchip, which integrates CPU and GPU components. The company is also preparing servers for Nvidia's upcoming B200 and B300 GPUs, as well as AMD-based GPUs.

Chakraborty emphasized that the value lies not only in the hardware but in ensuring full platform utilization. "No one purchases a GPU server for the sake of purchasing, but they would like to leverage the platform benefits to the full extent so that they can help solve their business problems," he said.

To address this, Netweb has developed Skylus.ai, a composable GPU scheduling and resource management software. "People can actually deploy the required amount of GPU resources to connect to their workloads and then get the job done. And once the job is done, the resources come back to the pool," Chakraborty explained.
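Skylus.ai's internals are not public, but the composable pattern Chakraborty describes — workloads borrow GPUs from a shared pool and return them on completion — can be illustrated with a minimal sketch (all names here are hypothetical, not Netweb's API):

```python
from contextlib import contextmanager
import threading

class GPUPool:
    """Toy composable GPU pool: jobs borrow GPUs, which return when the job ends."""
    def __init__(self, gpu_ids):
        self._free = list(gpu_ids)
        self._lock = threading.Lock()

    @contextmanager
    def allocate(self, count):
        # Claim `count` GPUs from the free list, or fail fast if unavailable
        with self._lock:
            if count > len(self._free):
                raise RuntimeError("not enough free GPUs")
            granted = [self._free.pop() for _ in range(count)]
        try:
            yield granted  # the workload runs against these GPUs
        finally:
            # Once the job is done, the resources come back to the pool
            with self._lock:
                self._free.extend(granted)

pool = GPUPool(["gpu0", "gpu1", "gpu2", "gpu3"])
with pool.allocate(2) as gpus:
    pass  # run the workload on `gpus`
# after the with-block, both GPUs are back in the pool
```

The context-manager shape matters: the `finally` clause guarantees GPUs return to the pool even if the workload fails, which is what keeps a shared cluster's utilization high.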

Cost, hybrid models, and democratization

Cost remains a barrier for many organizations, but Chakraborty argued that new deployment models are making AI more accessible. "GPU cost happens to be one of the biggest deterrents for enterprises to think twice, to invest and then get a business outcome out of that investment," he said.

He added that AI inference workloads no longer always require expensive GPUs. "Even low-power CPUs, or high-power CPUs, can run some of the large language models, at least as far as inferencing is concerned," he said.

Hybrid and cloud-bursting models are increasingly being used to manage training and inference workloads. "When it comes to a lot of data to be churned, especially for the training of foundation models, sometimes on-prem cloud may be a better alternative. But hybrid is the motion," Chakraborty noted.

Support for research institutions

Netweb is also focusing on the education and research sector, where budgets are limited. The company has introduced "research-as-a-service" solutions and integrated research information systems. These are designed to unify data ingestion, warehousing, governance, and publication, supporting entire project and publication workflows.

"We have actually created a pipeline which can be instantiated by the educational institution to facilitate the need so that they can create that entire workflow and derive the deliverables they are looking for," Chakraborty said.

He added that collaboration features are central to this vision, drawing parallels with global platforms. "As you may be familiar with arXiv: before a paper is published, it is made available for multiple users to go through and put their comments and suggestions. We can also have a mechanism to create that kind of collaboration platform, maybe within the educational organization," he said.

Building a local AI ecosystem

Chakraborty said Netweb's strategy is to combine indigenous hardware with indigenous software, supporting startups, enterprises, and government institutions.

The company also collaborates with research organizations, which he said are key to adopting AI for local problem-solving.

"AI centers of excellence are primarily the innovation-as-a-service solution," Chakraborty said. "It's about how quickly we can shorten the path from imagining something, ideating something, and then ultimately putting it into actual execution and deployment."

Article edited by Jack Wu