NEWS TAGGED LANGUAGE
Monday 30 March 2026
Samsung invests in AI chip design startup to speed silicon and cut power use
Samsung Electronics has backed AI chip design startup Normal Computing in a US$50 million funding round, expanding its push into AI-driven electronic design automation and next-generation...
Sunday 29 March 2026
Innodisk says AI success depends on software-hardware integration, signaling shifts for edge and industry deployments
Innodisk told attendees at the 2026 AI EXPO that effective AI deployment requires more than raw computing power; it depends on tight integration between software and hardware, and...
Sunday 29 March 2026
Adata invests US$3 million in KonstTech to boost AI computing infrastructure
Amid the rapid development of generative artificial intelligence (GenAI) and large language models (LLMs), global demand for high-performance computing (HPC) continues to rise. Memory...
Friday 27 March 2026
In-depth: Google TurboQuant cuts LLM memory 6x, resets AI inference cost curve
Google has introduced TurboQuant, a compression algorithm that reduces large language model (LLM) memory usage by at least 6x while boosting...
Thursday 26 March 2026
MediaTek advances Taiwanese language AI amid linguistic and security challenges
MediaTek highlighted at AI EXPO Taiwan 2026 how language-specific challenges complicate the deployment of AI globally, as Taiwanese tonal variety, mixed writing systems, and local...
Tuesday 24 March 2026
Why Switzerland chose a bottom-up approach over big tech grandstanding
During the 2026 World Economic Forum (WEF), Nvidia CEO Jensen Huang made an impassioned case for digital sovereignty. "Build your own AI, take advantage of your fundamental natural...
Monday 16 March 2026
Analysis: Alibaba Qwen talent exit highlights ByteDance's push in multimodal AI
A core researcher behind Alibaba's Qwen large language model has left the company and is reportedly joining ByteDance's AI research unit Seed, Chinese media reported. The move underscores...
Monday 16 March 2026
AWS and Cerebras collaborate on faster AI inference for Amazon Bedrock
Amazon Web Services (AWS) and AI chip startup Cerebras Systems said they are working together to bring a high-speed AI inference architecture to Amazon Bedrock, a managed service for...
Thursday 12 March 2026
Taiwan IC designers in rack-level AI delivery
Monday 9 March 2026
Micron: LPDRAM server demand to outpace market; Taiwan key production base
The rapid expansion of generative artificial intelligence (AI) and large language models (LLMs) is driving a new phase of transformation in data center memory architecture, according...
Friday 6 March 2026
Nvidia's LPU push could reshape inference economics as OpenAI signals major buy
Nvidia plans to shift the AI compute battleground from training to inference by integrating language processing unit technology and offering multiple inference chips, with OpenAI agreeing...
Friday 6 March 2026
Alibaba faces questions over Qwen continuity after sudden departures and structural shift
Alibaba Group Holding Ltd.'s core team behind its Qwen large language model faced renewed turbulence after the abrupt resignation of its original technical lead prompted an emergency...
Friday 6 March 2026
Alibaba’s Qwen loses its architect, stirring questions about China’s AI drive
Alibaba's large language model ambitions have been jolted by an unexpected leadership departure. In the early hours of March 4, the head of Alibaba Group's Qwen artificial intelligence...
Tuesday 3 March 2026
AI data centers redraw the power map: Driving 800V DC and solid-state transformers into the next battleground
As demand for computing power from AI large language models (LLMs) continues to surge, power density in data centers is rising in tandem, bringing the conversion losses of traditional...
Wednesday 25 February 2026
Next in line to challenge Nvidia: Taalas hardwires Llama into silicon, claims 17,000 tokens per second
Toronto-based AI chip startup Taalas says it can hardwire a large language model directly into silicon to accelerate inference beyond what conventional GPUs can deliver. Founded in...