Thursday 13 June 2024
Fibocom debuts 5G RedCap CPE solution integrated with newly launched RedCap module FG332 and Wi-Fi 7/6 technologies at Computex 2024
Fibocom (Stock code: 300638), a global leading provider of IoT (Internet of Things) wireless solutions and wireless communication modules, today announces the expansion of its RedCap module portfolio by launching FG332 during Computex 2024 and introduces a cutting-edge 5G RedCap CPE solution integrated with the newly launched FG332 and the latest Wi-Fi 7/Wi-Fi 6 technologies.
LATEST STORIES
Friday 9 May 2025
Is your AI system built to last—or bound to fail? Only DEEPX has the answer
DEEPX ensures unmatched AI reliability with lower power, lower heat, and a total cost of ownership lower than even "free" chips. For Lokwon Kim, founder and CEO of DEEPX, this isn't just an ambition—it's a foundational requirement for the AI era. A veteran chip engineer who once led advanced silicon development at Apple, Broadcom, and Cisco, Kim sees the coming decade as a defining moment to push the boundaries of technology and shape the future of AI. While others play pricing games, Kim is focused on building what the next era demands: AI systems that are truly reliable.

"This white paper," Kim says, holding up a recently published technology report, "isn't about bragging rights. It's about proving that what we're building actually solves the real-world challenges faced by factories, cities, and robots—right now."

Credit: DEEPX

A new class of reliability for AI systems

While GPGPUs continue to dominate cloud-based AI training, Kim argues that the true era of AI begins not in server racks, but in the everyday devices people actually use. From smart cameras and robots to kiosks and industrial sensors, AI must be embedded where life happens—close to the user, and always on. And because these devices operate in power-constrained, fanless, and sometimes battery-driven environments, low power isn't a preference—it's a hard requirement. Cloud-bound GPUs are too big, too hot, and too power-hungry to meet this reality. On-device AI demands silicon that is lean, efficient, and reliable enough to run continuously—without overheating, without delay, and without failure.

"You can't afford to lose a single frame in a smart camera, miss a barcode in a warehouse, or stall a robot on an assembly line," Kim explains. "These moments define success or failure."

GPGPU-based systems and many competing NPUs fail this test. With high power draw, significant heat generation, the need for active cooling, and cloud latency issues, they are fundamentally ill-suited for the always-on, low-power edge. In contrast, DEEPX's DX-M1 runs under 3W, stays below 80°C with no fan, and delivers GPU-class inference accuracy with zero latency dependency. Under identical test conditions, the DX-M1 outperformed competing NPUs by up to 84%, while maintaining operating temperatures 38.9°C lower and a die size 4.3× smaller.

This is made possible by rejecting the brute-force SRAM-heavy approach and instead using a lean on-chip SRAM + LPDDR5 DRAM architecture that enables:
• Higher manufacturing yield
• Lower field failure rates
• Elimination of PCIe bottlenecks
• 100+ FPS inference even on small embedded boards

DEEPX also developed its own quantization pipeline, IQ8™, which holds accuracy loss under 1% across 170+ models. "We've proven you can dramatically reduce power and memory without sacrificing output quality," Kim says.
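The article does not detail how IQ8™ works internally, so the following is not DEEPX's pipeline; it is a minimal sketch of the general idea behind 8-bit weight quantization that such a tool automates, assuming a symmetric per-tensor scheme (the function names and scheme are illustrative assumptions).

```python
# Illustrative 8-bit symmetric weight quantization. This is NOT DEEPX's IQ8
# pipeline; the per-tensor scheme and names are assumptions for explanation only.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 using a single per-tensor scale."""
    scale = max(float(np.max(np.abs(weights))) / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for accuracy checking."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096).astype(np.float32)      # stand-in weight tensor
q, s = quantize_int8(w)
rel_err = np.mean(np.abs(w - dequantize(q, s))) / np.mean(np.abs(w))
print(f"storage: {q.nbytes} bytes (int8) vs {w.nbytes} bytes (float32)")
print(f"mean relative reconstruction error: {rel_err:.3%}")
```

A production pipeline adds calibration data, per-channel scales, and accuracy regression across model zoos; the point of the sketch is only that 8-bit storage cuts weight memory by 4× relative to float32 while keeping reconstruction error small.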
Credit: DEEPX

Real customers. Real deployments. Real impact.

Kim uses a powerful metaphor to describe the company's strategic position: "If cloud AI is a deep ocean ruled by GPGPU-powered ships, then on-device AI is the shallow sea—close to land, full of opportunities, and hard to navigate with heavy hardware." GPGPU vendors, he argues, are structurally unsuited to play in this space. Their business model and product architecture are simply too heavy to pivot to low-power, high-flexibility edge scenarios. "They're like battleships," Kim says. "We're speedboats—faster, more agile, and able to handle 50 design changes while they do one."

DEEPX isn't building in a vacuum. The DX-M1 is already being validated by major companies such as Hyundai Robotics Lab, POSCO DX, and LG Uplus, which rejected GPGPU-based designs due to energy, cost, and cooling concerns. These companies found that even "free" chips resulted in a higher total cost of ownership (TCO) than the DX-M1 once electricity bills, cooling systems, and field failure risks were added in. According to Kim, "Some of our collaborations realized that switching to DX-M1 saves up to 94% in power and cooling costs over five years. And that savings scales exponentially when you deploy millions of devices."
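Kim's 94% figure is not broken down in the article. As a rough illustration of how such a five-year power-and-cooling comparison might be set up, here is a back-of-the-envelope sketch; the electricity price, cooling overhead, 50 W edge-GPU figure, and fleet size are all assumed placeholders, not DEEPX or customer data.

```python
# Back-of-envelope 5-year power + cooling TCO comparison.
# Every constant below is an assumed placeholder, not vendor data.
HOURS_5Y = 24 * 365 * 5        # always-on operation over five years
PRICE_PER_KWH = 0.15           # assumed electricity price, USD/kWh
COOLING_OVERHEAD = 0.4         # assumed extra energy spent on cooling (fraction of IT load)
FLEET_SIZE = 1_000_000         # assumed number of deployed devices

def five_year_cost_usd(watts: float) -> float:
    """Electricity + cooling cost for one always-on device over five years."""
    energy_kwh = watts / 1000 * HOURS_5Y * (1 + COOLING_OVERHEAD)
    return energy_kwh * PRICE_PER_KWH

npu = five_year_cost_usd(3.0)    # DX-M1 is described as running under 3 W
gpu = five_year_cost_usd(50.0)   # assumed draw for an edge GPU module

print(f"per device over 5 years: NPU ${npu:.2f} vs GPU ${gpu:.2f}")
print(f"power/cooling saving: {1 - npu / gpu:.0%}")            # 94% with these inputs
print(f"fleet-wide saving: ${(gpu - npu) * FLEET_SIZE:,.0f}")
```

With these placeholder inputs the saving tracks the wattage ratio, and the absolute gap grows linearly with the size of the deployed fleet.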
"Our real focus is building proof—reliable products, real deployments, actual revenue. A unicorn company is one that earns its valuation through execution—especially in semiconductors, where expectations are unforgiving. The bar is high, and we intend to meet it."That milestone, Kim asserts, is no longer far away. In other words, DEEPX isn't rushing for headlines—it's building for history. DEEPX isn't just designing chips—it's designing trust.In an AI-powered world where milliseconds can mean millions, reliability is everything. As AI moves from cloud to edge, from theory to infrastructure, the companies that will define the next decade aren't those chasing faster clocks—but those building systems that never fail."We're not here to ride a trend," Kim concludes. "We're here to solve the hardest problems—the ones that actually matter."Credit: DEEPXWhen Reliability Matters Most—Industry Leaders Choose DEEPXVisit DEEPX at booth 4F, L0409 from May 20-23 at Taipei Nangang Exhibition Center to witness firsthand how we're setting new standards for reliable on-device AI.For more information, you can follow DEEPX on social media or visit their official website.
Tuesday 6 May 2025
Best and brightest AI minds celebrated at Taiwan's inaugural Best AI Awards
To cultivate a stronger foundation of artificial intelligence (AI) talent and encourage greater investment in AI research and development, Taiwan's Ministry of Economic Affairs (MOEA) hosted the inaugural 'Best AI Award Competition,' culminating in the finals held on May 3, 2025, at the Taipei International Trade Center's Exhibition Hall 1.

This year's competition attracted participation from 1,253 teams spanning 36 countries. From this pool, 233 teams advanced to the finals, where they competed for gold, silver, and bronze awards, as well as honorable mentions, across the 'AI Application' and 'IC Design' categories. These categories were further divided into groups: Public Corporations, SMEs and Startups, Students, and International Teams. The Gold Award was bestowed upon eight winners: HiTRUSTpay, EYS3D Microelectronics, Daya Yoo, Jmem Tek, National Central University, National Taiwan University, Touch Lab (Philippines), and Arba Lab (UK).

According to an MOEA press release, the 'Best AI Awards' aspires to be Taiwan's equivalent of the "Oscars for AI", embodying diversity, global reach, and a forward-looking vision. Its core objectives are to ignite the creativity of the next generation, foster stronger ties between academia and industry, and nurture a deeper pool of AI-savvy talent and innovative enterprises. Ultimately, the competition aims to drive the industrialization of AI and the adoption of AI across industries, thereby solidifying Taiwan's position in the AI landscape. The competition offers substantial prizes, with student group winners vying for up to NT$300,000 and winners in the enterprise open, startup and SME, and international groups competing for up to NT$1 million.

Credit: MOEA

At the awards ceremony, Deputy Minister Ho Chin-tsang emphasized AI's accelerating transformation of the global industrial structure, impacting sectors from manufacturing and healthcare to finance and everyday services. He stressed the 'Best AI Awards' competition platform's crucial role in forging stronger links between talent development, technological applications, and industry demands. In response to the widespread anticipation for AI technologies, the MOEA seeks to foster a practical, application-oriented approach, encouraging innovative concepts to address real-world industry challenges. This strategy aims to continuously cultivate new talent and generate cutting-edge solutions within Taiwan's evolving AI ecosystem. Deputy Minister Ho also highlighted that the competition entries exemplify the fusion of AI technology with tangible needs and creative execution, showcasing the immense potential of translating innovative ideas into viable solutions. Looking ahead, the MOEA will sustain its commitment to facilitating the adoption of innovative solutions and maximizing their value through strategic policy initiatives and industry partnerships.

Credit: MOEA

Director-General Kuo Chao-chung of the MOEA's Department of Industrial Technology underscored the impressive international engagement of the inaugural Best AI Awards, which attracted both enthusiastic domestic participation from enterprises and academic institutions and the involvement of 353 international professionals from 36 countries, including India, the Philippines, the United States, and the United Kingdom. This global participation establishes the competition as a vital platform for the international exchange of AI innovation.
To expedite the industry's capacity building in AI talent and applications, the MOEA will not only continue to host the Best AI Awards but also leverage the pilot production capabilities of schools and research institutions to support businesses in design, new product development, and prototyping. Furthermore, the MOEA will collaborate with agencies such as the Small and Medium Enterprise Administration and the Industrial Development Administration to facilitate industry-wide AI transformation and develop practical AI expertise.

Chiu Chui-hui, Director-General of the Industrial Development Administration, cited a report by the Artificial Intelligence Technology Foundation, which identifies the 'shortage of relevant technical talent' as Taiwan's primary obstacle to AI advancement. The Global Artificial Intelligence Index corroborates this, revealing that while Taiwan excels in infrastructure (ranking 4th), it faces challenges in talent (38th), research (27th), and commercialization (39th). Consequently, the Best AI Awards are designed to expedite the real-world application of AI and the cultivation of skilled professionals. Chiu emphasized the competition's significance as Taiwan's premier AI competition, characterized by its scale, prestigious awards, and high standards (with a highly selective 7.4% award rate). He expressed his hope for collaborative efforts across all sectors to broaden the adoption of AI and harness its power to drive industrial innovation.

The 'Best AI Awards' entries spanned a diverse range of application areas, including information and communication technology (18.4%), manufacturing (16.2%), healthcare (15.9%), wholesale and retail (10.2%), education (8.6%), and finance (7.8%). This diversity not only underscores the application of cutting-edge technologies but also highlights the immense potential of AI to be successfully integrated into various industries.

The gold medal winners, their domains, and the problems they set out to solve are summarized as follows:

• AI Application, Student group: National Central University, "Realization of Highly Flexible Production Lines" (manufacturing automation). Traditional production lines are inflexible and struggle to adapt to rapidly changing and varied production needs.
• AI Application, SME and Startup group: Data Yoo, "FarmiSpace" (farming). Enhance the efficiency and management of agricultural production and optimize agricultural resource allocation.
• AI Application, Public Company group: HiTRUSTpay, "Veri-id equipment ID verification and real-time AI anti-scam/fraud service" (cybersecurity, fintech). Solve device identity verification issues and provide real-time prevention of financial fraud, thereby enhancing transaction security.
• AI Application, International group: TouchLab (Philippines), "AI driven TOUCH System – Digitizing Touch" (digitalization, human-machine interaction). Digitize touch, enabling machines to understand and simulate tactile sensations, thereby enhancing the precision and scope of human-computer interaction.
• IC Design, Student group: National Taiwan University, "Ternary-weight Transformer model software-hardware synergetic design and neural network accelerator IC design and implementation" (IC design, AI hardware acceleration). Design higher-performance, lower-power AI chips to accelerate the computation of Transformer models and improve the efficiency of AI applications.
• IC Design, SME and Startup group: Jmem Tek, "ArgusNPU – PQC security edge AI inference system" (cybersecurity, edge AI, AI IC design). Provide AI inference capabilities with post-quantum security on edge devices to protect sensitive data and enable high-performance AI applications.
• IC Design, Public Company group: eYs3D Microelectronics, "AI Edge Computing Car Parking Management System" (smart city, transportation management). Use edge AI to improve parking management efficiency, optimize the allocation of parking resources, and alleviate traffic congestion.
• IC Design, International group: ArbaLab (UK), "ArbaEdge" (AI edge computing). Realize high-performance AI computing on edge devices to reduce reliance on cloud computing, enhance response speed, and improve privacy protection.

Source: MOEA

The winning entries of the 'Best AI Awards' showcased the dynamic development and diverse applications of AI technology, with particularly strong innovation evident in sectors such as healthcare and manufacturing, as well as in the burgeoning fields of AIoT and edge AI integration.

The Ministry of Economic Affairs pointed out that, looking ahead, it will continue to collaborate with industry, academia, and research institutions, hoping that the "Best AI Awards" can become an important annual platform to promote Taiwan's AI technology development, talent cultivation, and innovative applications, and to continuously discover more promising new talents. In addition, matchmaking events will be held concurrently during the "COMPUTEX" exhibition every May, with plans to invite more than 20 domestic and foreign venture capitalists and buyers to participate, fostering in-depth exchanges and cooperation between participating teams and the industry, and further expanding the commercialization opportunities for AI innovative applications.

Through this comprehensive strategy, the Ministry of Economic Affairs aims to expedite the creation of groundbreaking AI applications and the cultivation of interdisciplinary AI expertise, ultimately steering Taiwan toward its ambitious vision of becoming a globally recognized "AI Island". For the latest updates, follow the official LinkedIn page of the Best AI Awards.
Wednesday 5 June 2024
Allxon unveils industry-first OOB Cloud Serial Console for NVIDIA Jetson at COMPUTEX 2024
Allxon today announced the Allxon OOB Cloud Serial Console, marking a milestone as the industry's first offering tailored for the NVIDIA Jetson family. Allxon OOB Cloud Serial Console enables direct device-level troubleshooting and remote access via the debug UART. Allxon will host a live demonstration at COMPUTEX 2024, showcasing this technology alongside AVerMedia.

Partnering with AVerMedia, which unveiled its Standard Carrier Board D115W for the NVIDIA Jetson Orin NX and NVIDIA Jetson Orin Nano modules, Allxon is showcasing its power-related features for swift disaster recovery.

Allxon OOB Cloud Serial Console leverages the serial console port of NVIDIA Jetson devices through the hardware interface for seamless remote access and troubleshooting. This solution provides unprecedented convenience and top-tier security: unlike traditional SSH methods, it eliminates the need for cumbersome server setups and fixed ports, significantly boosting security and operational flexibility.

Allxon OOB Cloud Serial Console has received worldwide acclaim during its early access (EA) period, attracting partners including telecommunications giants, smart security innovators, independent hardware vendors, and leading-edge AI software developers. Allxon is thrilled to announce that general availability (GA) is set for the third quarter of 2024.

Allxon will showcase the OOB Cloud Serial Console at COMPUTEX 2024 at booth #L1309a (4F, Hall 1, Taipei Nangang Exhibition Center) from June 4 to 7. Learn more about Allxon's platform at https://www.allxon.com/solutions/swiftdr.
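For context on what the debug UART gives you, the sketch below shows plain local serial-console access with the pyserial package; the port name and baud rate are assumptions for a typical Jetson setup. Allxon's cloud service replaces exactly this kind of direct cabling and local tooling with remote, browser-based access.

```python
# Minimal local read loop on a Jetson-style debug UART using pyserial.
# The port name and 115200 baud are assumptions for a typical setup;
# this illustrates plain UART access, not Allxon's cloud console API.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as console:
    console.write(b"\n")                       # nudge the console prompt
    for _ in range(20):                        # capture a short burst of output
        line = console.readline().decode(errors="replace").rstrip()
        if line:
            print(line)
```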
Tuesday 4 June 2024
Chenbro's latest AI and cloud server chassis solutions make stunning debut at Computex 2024
Chenbro (TWSE: 8210), a pioneer in the design and manufacturing of own-brand rackmount systems, is participating in COMPUTEX Taipei from June 4 to 7. Focusing on the theme of AI, Chenbro is showcasing its latest NVIDIA MGX chassis products and OCP DC-MHS cloud server chassis solutions, seizing more AI business opportunities.

This year, Chenbro is flexing its muscles with its OTS (Off-The-Shelf), ODM/JDM, and OEM Plus service models. In addition to highlighting OTS server chassis solutions for AI, Cloud, Storage, and Edge applications, Chenbro is exhibiting JDM/OEM products co-created with customers, showcasing its strong R&D design capabilities and manufacturing prowess and realizing win-win partnerships.

Aiming for Next-Generation Server Development with Unreleased Enclosure Solutions

Eric Hui, President of Chenbro, highlighted the role of NVIDIA MGX in bringing accelerated computing into any data center with modular server designs. These designs offer multiple form factors, including 1U, 2U, and 4U, enabling diverse configurations of GPUs, CPUs, and DPUs to fulfill various computing requirements. As an NVIDIA partner, Chenbro is showcasing NVIDIA MGX server chassis in 2U and 4U form factors to address enterprise-level AI application needs, and is exhibiting 1U and 2U compute trays to support customers deploying the GB200 NVL72 and NVL36.

Chenbro is also introducing a new generation of cloud server chassis solutions compliant with the OCP DC-MHS standard, and collaborating with Intel on server architecture. Chenbro's DC-MHS enclosure solutions offer Full Width (FLW) and Density Optimized (DNO) specifications in 1U and 2U form factors, supporting E3.S and E1.S storage devices, to meet product development demand for next-generation high-performance servers.

Also at Chenbro's booth is a unique data center display featuring a blend of virtual and physical cabinets, showcasing its Tri-Load high-density storage server chassis solution, which has won both the MUSE and TITAN design awards. Known for its exceptional heat dissipation and load-bearing mechanisms, the Tri-Load series ensures easy maintenance and stability in data center operations. Chenbro is also showcasing Edge AI solutions with short-depth server chassis capable of accommodating GPU deployment, enabling AI computing at the edge.

Collaborating for a Win-Win Partnership

Corona Chen, CEO of Chenbro, underscored the company's commitment to tracking the product roadmaps of tech giants such as NVIDIA, Intel, AMD, and Ampere. By leveraging modular design, Chenbro ensures maximum compatibility and can offer a wide range of server chassis solutions, adhering to the slogan "Whatever's inside, Chenbro outside." Through diverse business service models, Chenbro is actively collaborating with global customers to seize opportunities in the AI and cloud server industry.

This year, along with TechTalk sessions that share innovative product and industry insights, Chenbro is also showcasing joint product demonstrations with motherboard partners such as Gigabyte, MSI, ASRock, Tyan, and Compal, as well as storage device partners like Toshiba, Seagate, and Kingston. In addition, server products created through JDM/OEM collaboration with Hyve Solutions, Wiwynn, Pegatron, MSI, ASRock, and ADLINK are on display. Lastly, Chenbro will hold a VIP night co-hosted with JPC and FSP, showcasing Chenbro's collaborative achievements with customers and partners in win-win partnerships.

Amidst the wave of green exhibitions, Chenbro is further showcasing its commitment to sustainability through participation in COMPUTEX ESG GO and the Sustainable Design Award competition. Chenbro applies the 3R principle (Reuse, Reduce, and Recycle) not only in product design but also in booth design, paving the way for a low-carbon, sustainable future.
Tuesday 4 June 2024
VIA Labs launches PortSense: AI-ready features for enhanced USB hub management
At Computex 2024 in Taipei, Taiwan, VIA Labs announced PortSense, a suite of manageability and intelligent connectivity features for USB hubs that sets a new standard for docking station functionality in business and professional environments.

PortSense is an exclusive VIA Labs hardware and software solution embedded in the latest revisions of VIA Labs hub products, and it enables supported products to retrieve USB descriptor information from connected devices, even without a host system. USB descriptors contain vital details about the connected USB devices, such as the device class and capabilities, the product name and manufacturer information, the serial number if present, and much more.

Typically, a host system uses this information to identify, configure, and interact with connected devices. However, with PortSense, a managed docking station can collect usage data, perform tasks like pre-configuration and device inventory, and assist in implementing corporate policies.

PortSense is available in the VIA Labs VL817 USB 5Gbps Hub and VL822 USB 10Gbps Hub. The VL832 USB4 Endpoint Device has an integrated USB 10Gbps hub and supports PortSense.

While PortSense can function in autonomous mode, its true potential is unlocked when integrated into a connected platform where AI could be applied for analytics and policy control. Hubs with PortSense can communicate with an external controller using a standard I²C interface to share collected USB descriptor information and offer a range of manageability controls, such as enabling or disabling ports, changing connection speeds, resetting devices, and toggling USB battery charging. These controls can be applied on a per-port basis, including the upstream port, providing granular control over each connection. PortSense can be used to collect detailed user activity and enable analysis of usage patterns over time, making it perfect for hot-desking setups or as part of a comprehensive office management solution.

With PortSense, VIA Labs is enhancing the modern workspace with more intelligent and connected solutions. The advanced capabilities of PortSense extend beyond basic data collection and control, providing valuable insights that help businesses manage their USB peripherals more effectively. For instance, IT administrators can use PortSense to maintain an inventory of connected devices, ensuring that only authorized devices are in use. In addition, the AI-Ready features of PortSense support enforcing security policies, such as maintaining an allowlist or blocklist of devices or limiting the use of specific USB device classes in sensitive areas.

By acting as one layer of a comprehensive security strategy, PortSense helps reduce the risk of data breaches and unauthorized data transfers. This approach enhances operational efficiency and security, helping companies maintain a reliable and secure technology infrastructure. VIA Labs has recently released a white paper with more technical details about PortSense, which can be found here: https://www.via-labs.com/pressroom_show.php?id=98

VIA Labs PortSense: An exclusive suite of manageability and intelligent connectivity features for USB Hubs
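The white paper above carries the real protocol details. Purely as an illustration of the pattern PortSense enables, an external controller polling a hub over a standard I²C bus, here is a hypothetical sketch using the smbus2 package; the device address, register numbers, and payload layout are invented for the example and are not VIA Labs' actual interface.

```python
# Hypothetical external controller polling a PortSense-capable hub over I2C.
# The address, registers, and payload layout are invented for illustration;
# see VIA Labs' white paper for the real PortSense protocol.
from smbus2 import SMBus

HUB_I2C_ADDR = 0x2C        # assumed 7-bit hub address
REG_PORT_STATUS = 0x10     # hypothetical "downstream port status" register
REG_PORT_DISABLE = 0x20    # hypothetical "disable port" command register

with SMBus(1) as bus:
    # Read one hypothetical status byte per downstream port.
    status = bus.read_i2c_block_data(HUB_I2C_ADDR, REG_PORT_STATUS, 4)
    for port, flags in enumerate(status, start=1):
        print(f"port {port}: {'device present' if flags & 0x01 else 'empty'}")

    # Apply a policy decision, e.g. disable port 3 (again, invented semantics).
    bus.write_byte_data(HUB_I2C_ADDR, REG_PORT_DISABLE, 0x03)
```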
Tuesday 4 June 2024
Skymizer launches ET2 IP solution, creating more possibilities for LLM with hardware and software platform
After announcing its foray into the LLM (Large Language Model) IP market, Skymizer recently unveiled a series of hardware and software solutions centered on LLM IP. These offerings are designed to bring more imagination to the LLM application service market. Skymizer's IP solution series is codenamed EdgeThought, with the first market-ready solution named ET2, which can efficiently serve the LLMs required by all current edge devices, including the recently released Llama 3 with a parameter scale of up to 8 billion.

Before introducing the IP solution, Skymizer's primary offerings were in the compiler space, bridging the gap between chips and software. This background has given the company extensive experience in system-wide hardware and software integration and optimization. As the demand for LLMs rises, Skymizer, leveraging its solid market foundation, has entered the IP market to cater to diverse vertical application needs.

Launch of integrated hardware-software platform: ET2 features edge computing, LLM, and AI inference

According to Skymizer Executive Vice President William Wei, ET2 encompasses three key elements: edge computing, LLM, and AI inference. Besides accommodating the various LLMs on the market, ET2 can flexibly expand computing resources to meet client needs. If the parameter scale of the LLM to be processed is too large, expansion can deliver the required computing power, naturally increasing memory capacity and power consumption. In addition to the IP solutions, Skymizer also launched the SkyGenie SDK (Software Development Kit). The SDK addresses various categories of LLMs, including general, domain-specific, and private models, supporting a range of industry applications. This enables software developers to create corresponding applications based on different LLM types, optimizing overall system performance. During COMPUTEX 2024, Skymizer will further demonstrate applications built on ET2 and other hardware-software solutions, such as Autonomous Mobile Robots (AMRs) for smart factories, drive-thru ordering assistants, and smart automotive scenarios.

Wei emphasized that Skymizer's comprehensive hardware-software system development experience allows the broad tech industry ecosystem to benefit from Skymizer's complete platform solutions. He revealed that while edge AI GPUs on the market currently deliver about 20 tokens per second, Skymizer's ET2 tests show around 32 tokens per second, with implementation costs at just 1/100 of edge AI GPUs. This high cost-performance ratio makes ET2 an ideal choice for cost-sensitive end applications.

First chip with the ET2 solution to debut at CES 2025

In the semiconductor domain, Wei candidly shared that Skymizer is open-minded and actively collaborating with domestic and international design services and IP firms. He also revealed that ET2 is highly scalable, from small IoT MCUs to high-performance edge servers. When paired with higher-bandwidth memory interfaces, it can function as a server-level inference engine for multi-user, multi-batch processing. The first chip adopting ET2 is expected to debut at CES 2025, positioning ET2 as a game-changer for edge device LLM inference.

Conclusion

Skymizer's launch of its first IP solution, ET2, marks a new milestone for the company.
With rich hardware-software integration experience, Skymizer not only provides high-performance, cost-effective edge computing solutions but also equips developers with powerful tools through the SkyGenie SDK to meet diverse LLM application needs. These innovations enhance the performance of different vertical market applications and create new possibilities for smart factories, autonomous mobile robots, smart automotive, and more. Skymizer's comprehensive platform solutions are poised to be a significant driving force in the future LLM application market.

Skymizer not only provides LLM silicon intellectual property solutions but also offers the SkyGenie SDK, supporting various types of LLMs. This makes it easier and faster for AI application developers to create applications, enabling hardware chip partners to achieve higher integration and better meet market demands.
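Folding Wei's numbers into a simple cost-per-throughput comparison shows why the ratio matters; only the tokens-per-second figures and the 1/100 cost ratio come from the article, while the absolute GPU cost below is an assumed placeholder.

```python
# Cost-per-throughput comparison built from the figures quoted by Skymizer.
# The absolute GPU cost is an assumed placeholder; only the tokens/s numbers
# and the 1/100 cost ratio come from the article.
gpu_tps, et2_tps = 20.0, 32.0        # tokens per second (quoted)
gpu_cost = 1_000.0                    # assumed edge-GPU implementation cost, USD
et2_cost = gpu_cost / 100             # "1/100 of Edge AI GPUs"

gpu_usd_per_tps = gpu_cost / gpu_tps
et2_usd_per_tps = et2_cost / et2_tps
print(f"GPU: ${gpu_usd_per_tps:.2f} per token/s of throughput")
print(f"ET2: ${et2_usd_per_tps:.2f} per token/s of throughput")
print(f"cost-performance ratio: {gpu_usd_per_tps / et2_usd_per_tps:.0f}x in ET2's favor")
```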
Tuesday 4 June 2024
AI application transformation drives significant increase in storage technology demand, says Wallace Kou, President and CEO of Silicon Motion
The rapid development of AI has had a massive impact on global industries. The emergence of innovative technologies such as generative AI and machine learning has not only enhanced the performance of devices like smartphones, PCs, data centers, automotive systems, and industrial applications but has also spurred the demand for high-performance storage solutions in the market. Whether used for AI training or inference, the most crucial aspect is data. Swiftly acquiring the necessary data, securely storing the data generated through AI training or inference, and properly protecting it is a vital issue within the entire AI ecosystem. In response to this trend, Silicon Motion, the world's largest NAND flash controller supplier, has a comprehensive product layout and market strategy in place to help customers seize this immense business opportunity.

Recently, storage technology has gained significant market attention. Wallace Kou, President and CEO of Silicon Motion, identified five major driving factors. Firstly, the rapid development of AI technology, as mentioned earlier, has led to a surge in storage demand. Secondly, the expansion of AI application scenarios necessitates storage systems that provide high-speed data access, large storage capacity, and a stable and reliable operating environment. Thirdly, with the widespread adoption of AI applications, users place greater emphasis on protecting sensitive data, requiring storage systems with more sophisticated data encryption and access control mechanisms to ensure data security. Fourthly, storage technology continues to advance, with innovations such as QLC NAND, the NVMe protocol, and PCIe Gen5 providing new solutions to AI storage challenges. Finally, the continuous increase in storage performance and capacity demands in areas such as data centers and cloud services, driven by applications like AI and big data, further expands the market space for high-performance and large-capacity enterprise storage solutions.

Rise of AI Edge Devices Demands Progressive Functionality in Storage Components

Wallace Kou further elaborated on the current market trends and changes in demand for storage components. He mentioned that current AI applications still heavily rely on the high computational capabilities of data centers and cloud servers to process large language models, leading to a significant surge in demand for High Bandwidth Memory (HBM). However, with considerations for cost reduction, system latency reduction, and enhanced privacy protection, AI applications have started transitioning from cloud platforms to personal computers and AI smartphones, where smaller language models run on edge devices.

In this context, edge devices must improve performance, optimize human-machine interfaces, ensure seamless operation, and prioritize storage device performance, reliability, and data security. Moreover, to accommodate the vast amount of data generated by AI applications, edge devices need to expand their capacity while also managing cost increases, underscoring the importance of QLC NAND. It is anticipated that QLC applications will expand from the current PC market to mobile phones and enterprise applications.

Another emerging trend is performance optimization. He highlighted the profound impact of data placement technology on AI performance, noting that major NAND manufacturers have recently introduced Zoned UFS, which groups similar types of data within the same storage block.
For enterprise applications, ZNS and FDP technologies have adopted similar concepts, significantly enhancing read performance and extending device lifespans.

To meet the aforementioned demands, Wallace Kou believes that a comprehensive approach is needed, combining product strategy, design services, and organizational transformation to satisfy customer needs. On the product front, comprehensive customer support is essential, along with actively adopting new technologies to address market demands. Regarding design services, different application areas have varying requirements, and customers have a high demand for customized designs. Storage component suppliers must integrate advanced technology with robust firmware design capabilities to provide intelligent, efficient, and reliable storage solutions across various sectors. Lastly, enterprise organizations need to segment based on different customer applications, enhancing service quality and efficiency, and focusing on delivering specialized domain products and customized services.

Silicon Motion Unleashes Three Strategies to Meet Diverse Customer Needs

Responding to the aforementioned demands, Silicon Motion has implemented a comprehensive strategy. Wallace Kou highlighted that all product lines of the company support QLC, catering to customers' requirements for high-capacity storage applications while achieving cost reduction and rapid shipping objectives. Beyond product lines, the strategic level also encompasses technical support in storage and firmware. For instance, in terms of edge devices, Silicon Motion's PCIe Gen5 SSD controller and UFS 4.0 controller can fulfill the needs of AI PCs and AI smartphones with ultra-high performance and low power consumption. In enterprise storage, the MonTitan enterprise SSD development platform embraces key enterprise storage technologies like ZNS and FDP, optimizing data placement within systems. Moreover, Silicon Motion integrates its PerformaShape and NANDCommand technologies to guarantee maximum bandwidth performance and reliability, meeting the rigorous requirements of AI applications for high performance and reliability.

Regarding design services, Silicon Motion maintains long-term collaborations with major smartphone manufacturers, PC OEMs, hyperscalers, car makers, industrial product manufacturers, and other partners to deeply grasp customer pain points and requirements. This enables them to design products that align better with real-world application scenarios. Wallace Kou noted that firmware design capability stands as one of Silicon Motion's significant competitive advantages in the NAND flash controller field. Over time, they have merged advanced technology with robust firmware design capabilities to deliver intelligent, efficient, and dependable storage solutions for various sectors.
Silicon Motion's firmware solutions not only boast highly optimized and stable features but also enable customized services, tailoring optimal solutions based on customer application scenarios, performance requirements, and specific needs.

Fine-tuning Organizational Structure to Enhance Firmware Customization Service Advantages

As part of its organizational restructuring, Silicon Motion has recently established two major business groups, Client & Automotive Storage (CAS) and Enterprise Storage & Display Interface Solution (ESDI), to assist customers across different industries in overcoming challenges in the AI era. The CAS group focuses on providing solutions such as client SSD controllers, mobile controllers, Ferri embedded storage, and expandable storage controllers. Its applications primarily cover areas like PCs, smartphones, automotive, gaming consoles, and industrial applications. The ESDI business group, on the other hand, concentrates on enterprise SSD controllers and display interface products, targeting markets such as data centers, servers, workstations, and enterprise applications, as well as USB display and embedded GPU applications. Wallace Kou stated that a specialized organizational division of labor helps Silicon Motion efficiently meet the diverse needs of different customers, accelerate innovation, and enhance overall competitiveness. Whether in consumer, enterprise, automotive, or industrial markets, Silicon Motion can provide more professional and tailored solutions closer to customer demands.

In the mobile sector, Silicon Motion has established strategic partnerships with several mobile phone manufacturers and mobile chip manufacturers to jointly develop high-performance, low-power, high-capacity storage solutions. Through firmware design capabilities and customized services, they meet the specific requirements of phone manufacturers, optimizing storage performance and power consumption for AI smartphone development. In the data center domain, Silicon Motion has partnered with several major enterprise storage providers and hyperscalers in North America and China, securing commercial cooperation and providing enterprise storage solutions that meet performance and reliability demands.

Looking ahead, Wallace Kou believes that with the continuous evolution of technologies like 5G, IoT, electric vehicles, IoV, and generative AI, the demand for storage in the market will continue to expand. Silicon Motion will increase its R&D investment, leverage its competitive advantages in firmware design and customized services, closely collaborate with partners in different fields, and provide more efficient, reliable, and secure storage solutions to help customers seize vast opportunities in the AI era.
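Zoned UFS, ZNS, and FDP all build on the placement idea Kou describes earlier in this piece: steering data of similar type or lifetime into the same block or zone so a whole zone can be erased together. The toy model below illustrates only that grouping idea; it is not Silicon Motion firmware, and the zone size and stream tags are invented for the example.

```python
# Toy model of stream-based data placement (the idea behind Zoned UFS/ZNS/FDP):
# writes tagged with the same stream (data type / expected lifetime) land in the
# same zone, so a full zone can later be reset as a unit instead of forcing
# garbage collection across mixed data. Zone size and tags are invented.
from collections import defaultdict

class ZonedDevice:
    def __init__(self, zone_size_blocks: int = 4):
        self.zone_size = zone_size_blocks
        self.open_zones = defaultdict(list)   # stream tag -> blocks in the open zone
        self.sealed_zones = []                 # (tag, blocks) pairs, erasable as units

    def append(self, stream: str, block: bytes) -> None:
        zone = self.open_zones[stream]
        zone.append(block)
        if len(zone) == self.zone_size:        # zone full: seal it
            self.sealed_zones.append((stream, list(zone)))
            zone.clear()

dev = ZonedDevice()
for i in range(8):
    dev.append("journal", b"j%d" % i)          # short-lived metadata stream
    dev.append("model-weights", b"w%d" % i)    # long-lived bulk stream
print([(tag, len(blocks)) for tag, blocks in dev.sealed_zones])
```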
Tuesday 4 June 2024
Nordic Semiconductor demonstrates integrated short-range, Wi-Fi and cellular wireless communication innovations at Computex 2024
Smart location tags, asset trackers, connected health sensors, smart home appliances, advanced vehicles, and smart electric vehicle systems all show how deeply Internet of Things (IoT) devices have penetrated nearly every aspect of our lives. Wireless communication technologies, coupled with powerful cloud computing and Artificial Intelligence (AI), play a significant role in driving this advance.

Nordic Semiconductor, a leading global provider of wireless IoT solutions, is presenting an exclusive product and technology showcase during Computex Taipei 2024. By demonstrating product lines spanning short-, medium-, and long-range wireless communication technologies, Nordic helps customers develop modern devices and notable applications while continuing to improve its low-power, multi-protocol wireless portfolio.

In this interview, Richard Chen, regional sales manager of Nordic Semiconductor, notes that product development teams constantly struggle to balance connectivity, security, power saving, and other advanced functions against consumers' endless demand for intelligent, sophisticated, instant-response IoT devices. The hardest of these trade-offs is achieving longer battery life and lower energy consumption.

To help customers solve these problems, Nordic is showing a series of wireless communication products and application demonstrations at the Taipei W Hotel in Xinyi District from June 4 to 7, 2024. The main product highlights are covered below; key product information and the major demos are also listed at https://www.nordicsemi.com/Events/2024/Computex-Taipei for further reference.

Short-range smart home applications for Matter and Bluetooth LE Audio + Auracast features

Nordic is set to showcase a range of smart-home applications based on the Matter standard and Bluetooth wireless technology. The demos highlight Matter-over-Thread and Matter-over-Wi-Fi use cases, built around Nordic development kits and devices including the nRF54H20 DK, nRF54L15 DK, and nRF7002 EK.

Developed by the Connectivity Standards Alliance (CSA), Matter is a smart-home connectivity standard that ensures connected devices interoperate reliably. Industry leaders such as Apple, Amazon, and Google promote the Matter standard and support it across their IoT ecosystems. The Matter 1.3 standard, released on May 8, 2024, brings support for more appliance types, such as kitchen devices.

Supporting more device types across ecosystems requires larger software stacks. One strength of Nordic's solutions is their enlarged flash memory, which lets customers accommodate big software stacks and update firmware easily over the air. Another advantage is cyber security: the nRF9160 low-power system-in-package (SiP) and the nRF5340 system-on-chip (SoC) have achieved Platform Security Architecture (PSA) Certified Level 2, and the latest models, the nRF54H20 and nRF54L15 SoCs, have passed PSA Certified Level 3. These certifications assure Nordic customers of a secure platform for building IoT products in which all devices on the network communicate securely and privately.

Meanwhile, demos of Nordic's flagship Bluetooth Low Energy SoCs are another focus.
Nordic's award-winning Bluetooth LE solutions will be on display at the booth, built around the nRF5340 Audio DK development kit, which delivers low-energy, high-quality wireless audio as well as the broadcasting capabilities of Bluetooth LE Audio with Auracast.

Bluetooth LE Audio is the next generation of wireless audio streaming technology, and the stand will also host a dedicated Bluetooth LE Audio and Auracast demonstration. Bluetooth LE Audio targets better sound quality at lower power consumption and reduces audio-to-video delay for a more immersive experience. Auracast uses broadcasting to share an audio stream with multiple listeners at once. Beyond hearing aid applications, Auracast is also aimed at scenarios such as guided tours in museums and art exhibitions, opening up broad possibilities for broadcast audio in the future.

Low-power Wi-Fi 6 and cellular wireless communication applications

Nordic builds out its mid-range portfolio with low-power Wi-Fi 6 technology, opening new applications for high-speed data communication at edge nodes. Taking edge computing as an example, Chen described how IoT sensors at the edge transfer sensing data wirelessly to cloud servers, where Nordic's Thingy:53 IoT machine learning platform is used to train a model in the cloud. The model can then be deployed straight back onto the SoC at the edge for inferencing. The entire process is quick and requires no coding knowledge.

There are further examples in the real world. Audio data or abnormal vibrations from industrial fans or motors in a factory can be captured for AI training in the cloud. After the AI models are downloaded to the edge SoC, the system can identify and predict possible motor failures and malfunctions, ensuring that the right maintenance strategy is implemented at the right time.

For long-distance wireless applications, Nordic focuses on cellular LPWAN technologies that connect devices over 4G/LTE and 5G standards. The product series ranges from the nRF91 SiP product lines to newly released low-power cellular SoCs supporting the LTE-M and NB-IoT protocols. The combination offers low power consumption, cost-effectiveness, and size reduction thanks to the integrated multi-protocol cellular modem and transceiver. Giving battery-driven devices better prospects, Nordic's cellular LPWAN solutions represent a leap forward in connecting remote meter-reading networks for water, gas, and electricity.

In the Nordic suite, guests will see the convergence of GNSS, cell-based, and SSID-based Wi-Fi positioning in the demo stations. Using nRF Cloud Location Services alongside DECT NR+ nodes, this showcase demonstrates precise and accurate asset tracking. A further demonstration station runs predictive maintenance ML models built with Edge Impulse's software on the Thingy:53 IoT machine learning prototyping platform, allowing guests to try the prediction functions for themselves.

In the Human Interface Device demos, Nordic uses a Bluetooth LE gaming mouse to emulate high-speed automatic packet transmission to a gaming PC. The mouse delivers up to 8,000 data reports per second, ensuring high speed and minimal delay in intense, fast-paced gaming scenarios.
This delivers a level of user experience not seen before.

As a pioneer of Bluetooth LE technology, Nordic has evolved into a full-spectrum wireless IoT company offering solutions based on leading industry standards including LTE-M, NB-IoT, DECT NR+, Wi-Fi 6, Matter, and Thread, says Chen. Nordic's diversified offering at Computex 2024 shows the company expanding beyond its significant market share in Bluetooth Low Energy into cellular IoT. Nordic hopes to work with Taiwan's electronics supply chains and ecosystem partners to embrace IoT business opportunities and spur growth in increasingly fast-paced markets.

Capable of supporting Bluetooth 5.4 and future Bluetooth specifications, plus LE Audio, Bluetooth mesh, Thread, Matter, and more, the nRF54H20 will be the foundation for a new wave of revolutionary IoT end-products. Its combination of advanced features will enable complex end-products that have previously been unfeasible.

Cellular IoT (CIoT) shows Nordic's goal to streamline cellular product development and support the entire product lifecycle. This is why the company has integrated all the different parts of its cellular offering into a complete solution – a fully Nordic-owned and controlled offering that includes hardware, software, tools, cloud services and world-class support.

Nordic is also actively involved as one of the main contributors to the development of Matter. Matter aims to make it easy for developers to create a secure and reliable solution: if you want your products to be interoperable with the major smart home ecosystems, Matter is the way to go.
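As a rough illustration of the train-in-the-cloud, infer-at-the-edge loop Chen describes for motor monitoring, the sketch below stands in a trivial RMS threshold for a real learned model; the vibration data is simulated, and none of this reflects Nordic's or Edge Impulse's actual tooling.

```python
# Simplified predictive-maintenance flow: derive a "model" (here just a threshold)
# from healthy vibration data in the cloud, then run inference on the edge device.
# The data is simulated and the threshold model is a stand-in for a trained one.
import numpy as np

def train_threshold(healthy_windows: np.ndarray) -> float:
    """Cloud step: learn an anomaly threshold from healthy vibration windows."""
    rms = np.sqrt(np.mean(healthy_windows ** 2, axis=1))
    return float(rms.mean() + 3 * rms.std())

def edge_inference(window: np.ndarray, threshold: float) -> bool:
    """Edge step: flag a window whose RMS exceeds the learned threshold."""
    return float(np.sqrt(np.mean(window ** 2))) > threshold

healthy = np.random.normal(0.0, 1.0, size=(200, 256))   # simulated healthy motor
threshold = train_threshold(healthy)
fault = np.random.normal(0.0, 2.5, size=256)             # simulated bearing fault
print("maintenance alert:", edge_inference(fault, threshold))
```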
Tuesday 4 June 2024
Kioxia Taiwan showcasing advanced memory technologies and ecosystem applications at Computex 2024
COMPUTEX Taipei 2024 will be held from June 4 to June 7 at the Taipei Nangang Exhibition Center, Halls 1 and 2. KIOXIA Taiwan will showcase its full range of memory products and technologies under the theme "Memory Maker – Making Flash Memory Solutions for Every Application" at booth X0001 on Floor 3 of Hall 2, in the Semiconductors & Hospitality Suites area.

In addition to exhibiting KIOXIA's memory products, the booth will feature joint displays with multiple ecosystem partners to highlight the critical role KIOXIA plays in the related ecosystem. This exhibition underscores KIOXIA's indispensable position in the current technological landscape and the significant benefits its products and technologies provide.

Business Memory Solution

In the business memory solution area, KIOXIA will showcase cutting-edge flash memory technologies, including the 8th generation BiCS FLASH and CMOS directly Bonded to Array (CBA) architecture, alongside memory products for various applications.

Business Solid State Drive Solution

The business solid-state drive (SSD) solution area will feature a range of current and new products. The highlight will be the data-center-grade CD8P Series SSD, which uses the PCIe 5.0 interface and comes in both 2.5-inch and E3.S form factors. Optimized for the performance, latency, power consumption, and thermal requirements of data center environments, where power and cooling efficiency are critical, the CD8P Series provides the predictability and consistency needed for a seamless user experience.

In addition, several leading ecosystem partners will showcase server systems compatible with KIOXIA products, giving visitors a comprehensive understanding of the vital role KIOXIA's business SSDs play in today's large-scale computing environments. As one of the leading manufacturers of memory technology for business SSDs, KIOXIA is focused on enhancing performance while offering stable and reliable memory solutions.

Personal Memory Solutions

In the personal memory solution area, KIOXIA will exhibit a variety of SSDs, memory cards, and USB flash drives. A key highlight will be the debut of the EXCERIA with Heatsink SSD, scheduled for release in the second half of 2024. This PCIe 4.0 SSD offers a sequential read speed of up to 6,200 MB/s and is designed for PC systems with M.2 interfaces and the PlayStation 5, providing a new option for gamers and general users. In addition, KIOXIA will showcase the newly launched EXCERIA PLUS G2 microSD card, offering up to 2TB of storage, 100 MB/s read speed, and support for the UHS-I and U3 (V30) speed classes, giving users a microSD option with larger capacity and faster reads.

Highlights of KIOXIA Taiwan's Booth

KIOXIA Taiwan invites industry leaders and professionals to visit the booth, experience its advanced memory technologies, and see KIOXIA's pivotal role in the ecosystem through product displays and joint displays with ecosystem partners, and to join KIOXIA and Make Memories at COMPUTEX Taipei 2024.

Credit: Company
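For a feel for the quoted interface speeds, here is a quick back-of-envelope comparison of full-capacity read times. The 2 TB capacity used for the SSD case is an assumed example, since the article quotes a speed but not a capacity for that drive.

```python
# Back-of-envelope read times at the quoted transfer speeds.
# The 2 TB SSD capacity is an assumed example; the article quotes speeds only.
def transfer_time_seconds(capacity_gb: float, speed_mb_per_s: float) -> float:
    return capacity_gb * 1000 / speed_mb_per_s

ssd_seconds = transfer_time_seconds(2000, 6200)   # EXCERIA with Heatsink: 6,200 MB/s sequential read
sd_seconds = transfer_time_seconds(2000, 100)     # EXCERIA PLUS G2 microSD: 100 MB/s read

print(f"2 TB sequential read over PCIe 4.0 SSD: ~{ssd_seconds / 60:.1f} minutes")
print(f"2 TB read from the microSD card:       ~{sd_seconds / 3600:.1f} hours")
```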
Tuesday 4 June 2024
JMicron teams up with KaiKuTek for a sensational debut at Computex 2024 - Innovation ignites technology evolution
JMicron Technology Corp. teamed up with its wholly owned subsidiary, KaiKuTek Inc., to exhibit a variety of high-speed interface bridge controller solutions at the Sea Hall of the Taipei Marriott Hotel in Nangang during COMPUTEX 2024. They also showcased various application solutions integrating artificial intelligence and millimeter-wave radar technology, including air gesture control and object sensing.

JMicron Technology Corp., known for its high-speed interface bridge controller chips, unveiled its latest generation of products. These chips represent a significant leap forward in transmission speed, stability, and efficiency, garnering attention from industry insiders and attendees alike. The company also expanded the application fields of external storage to various smartphone platforms, simplifying cross-platform data exchange and positioning itself as an industry focal point for future growth.

Tony Lin, VP of the Marketing & Sales Center at JMicron Technology Corp., emphasized the company's commitment to extensive collaboration on next-generation bridge controller chips with its primary customer base. This initiative aims to lead trends in high-speed interfaces and storage applications, creating new market opportunities in collaboration with customers.

KaiKuTek Inc. showcased groundbreaking products, leveraging its leading position in 60GHz millimeter-wave radar technology. The company focuses on radar sensing, AI/ML technology, antenna design, and gesture recognition, enhancing various smart applications. New products include TWS earbuds and an intelligent eye massager with air gesture control for intuitive user experiences. In addition, a smart fan solution incorporates air gesture control and precise positioning through millimeter-wave radar, achieving automatic fan direction adjustment.

Contactless products are increasingly integrated into daily life, and applications developed using millimeter-wave radar sensing technology and artificial intelligence are projected to grow significantly. The collaboration between JMicron Technology Corp. and KaiKuTek Inc. injects vitality into the industry, exploring broader horizons for technological innovation and enhancing user experiences.

Mike Wang, General Manager of KaiKuTek Inc., anticipates applying accurate tracking and positioning and air gesture recognition technology in various fields, including industrial, automotive, personal consumer electronics, and IoT. Through system-level AI radar sensing solutions, KaiKuTek Inc. aims to realize non-contact, long-distance object detection and operation, revolutionizing human-machine interaction interfaces.

The success of COMPUTEX 2024 underscores a bright future for JMicron Technology Corp. and KaiKuTek Inc. Together, they will continue innovating, driving technological progress, and shaping the industry landscape.