Wednesday 6 June 2018
AMI partners with KingTiger to push computer reliability and performance
As developments in artificial intelligence (AI) and cloud services drive rapid industry growth and bring a wave of products and services built around convenience and user needs, demand for computing power and system stability keeps rising. In response, developers are delivering more powerful operating systems, more diverse software functions and new-generation processors. Among these advancements, memory that supports high-speed data transfer plays a critical role. Dynamic random access memory (DRAM) with large capacities and high-speed transfer interfaces is pushing data throughput to unprecedented rates; a common PC nowadays comes with 8GB of DRAM.

System performance and stability build on memory reliability. However, as memory process technologies continue to shrink and new software applications keep evolving, memory errors of all kinds become more likely, resulting in system malfunctions or failures that can hurt user experience and brand reputation. BIOS vendors have a far-reaching influence on PC system stability and performance. In an exclusive interview with Digitimes, PaiLin Huang, general manager of AMI Taiwan, and Bosco Lai, CEO and executive president of KingTiger, talked about their collaboration to bring users a total solution for DRAM error detection and correction.

DRAM test equipment is a core business of KingTiger, which serves major DRAM providers and users around the globe and has accumulated extensive knowledge of memory testing procedures. With the growing popularity of mobile devices, memory products have transitioned from a modular form factor to being soldered directly onto compact motherboards, so application-specific testing has become part of KingTiger's main services.

According to Lai, as DRAM process technologies shrink and capacities grow ever higher, DRAM manufacturers have difficulty testing their products comprehensively. For example, DRAM designs today have a signal-to-noise ratio of less than one, and manufacturers can hardly cope with problems like early wear-out, variable retention time (VRT) or frequently recurring errors with in-plant testing alone. Some DRAM errors only occur when the memory is used on certain platforms under certain application scenarios.

KingTiger's core expertise is its patented logic/system dual-mode testing technology. Aware that in-plant testing cannot cover the requirements of the slew of systems on the market, KingTiger has leveraged its thirty years of experience in memory checking and correction to introduce a patented intelligent memory surveillance (iMS) software solution that works like the human immune system. It conducts memory testing, scanning and checking while the processor is idle; when a section of memory is found to be defective, it is marked as unusable so the memory system can continue to function normally. This complete approach to memory error detection, correction, diagnosis and handling includes comprehensive memory management functions such as inspection, failure isolation and warning. Because it enhances memory reliability and performance without occupying system resources, KingTiger calls it the memory system's silent guardian.
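KingTiger has not published iMS's internals, so the following Python sketch is only a rough illustration of the idle-time scan-and-isolate pattern described above: it walks simulated memory pages in small batches, writes and verifies classic test patterns, and retires any page that fails so it is never used again. All class names, the page size and the patterns are illustrative assumptions, not KingTiger's implementation.

```python
# Illustrative sketch only: iMS itself is proprietary firmware/software.
# This simulation walks "pages" of a byte buffer, applies simple test
# patterns, and retires any page that fails verification.

PAGE_SIZE = 4096
TEST_PATTERNS = (0x00, 0xFF, 0xAA, 0x55)  # all-zeros, all-ones, alternating bits

class PageScrubber:
    def __init__(self, num_pages: int):
        self.memory = bytearray(num_pages * PAGE_SIZE)  # stand-in for physical DRAM
        self.retired = set()                            # pages marked unusable

    def _test_page(self, page: int) -> bool:
        start = page * PAGE_SIZE
        for pattern in TEST_PATTERNS:
            self.memory[start:start + PAGE_SIZE] = bytes([pattern]) * PAGE_SIZE
            if any(b != pattern for b in self.memory[start:start + PAGE_SIZE]):
                return False                            # stuck or flipped bits detected
        return True

    def scrub_when_idle(self, pages_per_slice: int = 8):
        """Scan a small batch of pages per idle slice so the scan never
        competes with foreground work for CPU time."""
        total = len(self.memory) // PAGE_SIZE
        page = 0
        while page < total:
            for p in range(page, min(page + pages_per_slice, total)):
                if p not in self.retired and not self._test_page(p):
                    self.retired.add(p)                 # isolate the failing page
                    print(f"page {p} retired: failed pattern test")
            page += pages_per_slice
            yield                                       # hand control back until the CPU is idle again

scrubber = PageScrubber(num_pages=64)
for _ in scrubber.scrub_when_idle():
    pass  # in a real agent this loop would only advance while the system is idle
```

In a real deployment the "retire" step would correspond to isolating the physical address range from the operating system's allocator, which is the behavior the article attributes to iMS.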
KingTiger iMS has in fact racked up some impressive success stories in recent years, mainly in server systems, because servers impose much more stringent requirements on performance and stability than PCs. For example, Inspur adopted KingTiger's iMS solution in its Tiansuo M13 in 2017 and was able to reduce system instability resulting from memory errors by 95%. The significant improvement earned Inspur servers positive market reviews and gave KingTiger iMS a stage on which to excel.

AMI BIOS combines with iMS Lite for added strength

PC manufacturers and leading brands cannot afford to overlook memory risks. The iMS solution's track record enabled KingTiger to enter into collaboration with AMI and, with AMI BIOS, expand further into the motherboard, computer assembly, brand PC and white-box markets. Commenting on the partnership, Huang indicates the two companies share the common objective of boosting computer stability and making benchmark progress. AMI chose to work with KingTiger early in the development of its major BIOS products in order to shorten customers' R&D cycles. This lets users of AMI BIOS experience iMS Lite's memory error detection and correction capabilities and adds value to AMI's line of BIOS products.

Users who wish to upgrade to the full version of iMS can choose iMS-enabled DIMMs or, in the future, purchase KingTiger solutions online. The full version provides around-the-clock uninterrupted operation and adds memory failure warning. Based on KingTiger's smart algorithms, the solution calculates parameters that accurately indicate imminent memory failure and initiates preventive action, significantly enhancing system stability and delivering a high-quality computer system with self-correcting memory and premium user experiences.

At Computex 2018, AMI plans to showcase its full spectrum of BIOS products and KingTiger iMS Lite at booth L1332 on the fourth floor of the Nangang Exhibition Center. AMI customers, partners and Computex visitors are all welcome to the AMI booth for a firsthand experience of AMI innovations.

Note: KingTiger and AMI have successfully integrated MemTest86, one of the industry's most common memory testing standards, into AMI BIOS. Memory errors detected by MemTest86 in iMS-enabled systems can be corrected directly by iMS.

Bosco Lai (left), CEO and executive president, KingTiger, and PaiLin Huang (right), general manager, AMI Taiwan
Wednesday 6 June 2018
Biostar iMiner series offers turnkey solution for crypto mining
Biostar has introduced a turnkey solution for mining at home with the iMiner A578X8D, iMiner A564X12P and iMiner A578X6. The iMiner series comes as a single-unit black-box machine that supports ethOS, Windows 10 and Linux and is fully equipped with a Biostar TB250-BTC series motherboard, CPU, GPU, memory and power supply. It allows ultra-flexible mining as it supports GPU-mineable cryptocurrencies such as Ethereum, Monero, Bitcoin Gold and Zcash.

All iMiner systems are based on the Intel 3930 CPU and Intel B250 chipset for maximum mining power. The Biostar iMiner A578X8D, with an ETH hashrate of 220 MH/s (+/-5%), uses the popular Biostar TB250-BTC D+ with 8 x AMD RX570 8G graphics cards and a high-performance 1600W single-rail 12V power supply (optional) with dual-ball-bearing fans for durable around-the-clock operation. The Biostar iMiner A564X12P, with an ETH hashrate of 148 MH/s (+/-5%), uses the Biostar TB250-BTC PRO with 12 x AMD RX560 4G graphics cards on a 1300W power supply. The Biostar iMiner A578X6, with an ETH hashrate of 165 MH/s (+/-5%), also uses the Biostar TB250-BTC PRO with a modest 6 x AMD RX570 8G cards (expandable to up to 12 GPUs) and a 1300W power supply.

All Biostar iMiner models come with unique software. The BIOS Working/Error/None states detect GPU state on the POST screen, so miners can fix a card before entering the OS. The BIOS reports the state of each PCI-E slot according to its position: Working means the GPU is operating normally, Error means the data is incomplete, and None means there is no signal from the GPU. Mining Doctor is Biostar's exclusive application for checking the current state of each GPU, including usage, core clock speed, memory clock speed, fan speed and temperature. In addition, if an iMiner enters an error state, it immediately sends an email notification, making it easy to monitor and manage a scaled mining farm remotely (a conceptual sketch of this monitor-and-alert pattern appears at the end of this article).

The Biostar iMiner series uses proven mining hardware and offers an easy-to-use solution for both professional and home miners, with three models to choose from. The Biostar iMiner A578X6 is the entry point, with a hashrate of 165 MH/s (+/-5%) and room for additional GPU expansion. Next is the Biostar iMiner A564X12P with a hashrate of 148 MH/s (+/-5%). For ultimate performance, go with the Biostar iMiner A578X8D with a hashrate of 220 MH/s (+/-5%). The bundled software makes for an enjoyable and easy mining experience.

Biostar exhibits at Taipei Nangang Exhibition Center, Hall 1, L1217a at Computex 2018.
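Biostar has not published Mining Doctor's internals; the Python sketch below only illustrates the generic poll-and-alert pattern the article describes: read each GPU's reported state and send an email the moment one reports an error. The get_gpu_states() helper, the mail relay and the addresses are hypothetical placeholders, not Biostar's software.

```python
# Illustrative monitor-and-alert sketch (not Biostar's Mining Doctor).
import smtplib
import time
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"   # hypothetical mail relay
ALERT_TO = "miner@example.com"   # hypothetical recipient

def get_gpu_states():
    """Hypothetical stub: a real monitor would query the driver or a
    vendor tool for per-GPU usage, clocks, fan speed and temperature."""
    return [{"slot": 0, "state": "Working", "temp_c": 61},
            {"slot": 3, "state": "Error", "temp_c": 0}]

def send_alert(gpu):
    msg = EmailMessage()
    msg["Subject"] = f"GPU {gpu['slot']} reported {gpu['state']}"
    msg["From"] = "rig@example.com"
    msg["To"] = ALERT_TO
    msg.set_content(f"GPU state dump: {gpu}")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)   # a real deployment would add auth/TLS

def watch(poll_seconds=60):
    while True:
        for gpu in get_gpu_states():
            if gpu["state"] != "Working":
                send_alert(gpu)  # mirrors the Error/None states shown at POST
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```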
Wednesday 6 June 2018
Colorful Technology and Chaintech showcase the latest technologies and applications of computer boards and cards
China's No. 1 gaming hardware brand Colorful Technology joins forces with Chaintech to exhibit its high-end iGame brand at booth M1112 at Computex 2018.

iGame is Colorful's only graphics card brand with high-end positioning. Over a decade of development since its debut in 2008, the iGame series has expanded to include motherboards, storage and memory devices entering the market this year and has secured the largest share of the branded graphics card market. Gearing iGame toward the gaming segment, Colorful forayed into international markets in 2017 and is enjoying rapid growth in shipments to Southeast Asia and Korea; iGame SSDs in particular are gaining widespread popularity in Japan. With 22 years of continuous effort and extensive experience in the DIY computer market, Colorful has stayed the top vendor in China's branded graphics card market for 15 years in a row.

Colorful participates in a variety of gaming events to engage gamers. Hot League of Legends (LOL) e-sports teams including Snake and RNG have chosen iGame for training and competition. Colorful designs iGame with gamers in mind and crafts motherboards and graphics cards specifically for e-sports teams. Furthermore, Colorful has been participating in or sponsoring e-sports tournaments including LOL, Overwatch, PlayerUnknown's Battlegrounds and Dota 2 at home and abroad, and also organizes its own Colorful Games Union (CGU) competitions. By taking part in global e-sports events, Colorful looks to raise iGame brand awareness.

Celebrating iGame's 10th anniversary, Colorful will unveil a whole new logo at Computex 2018 in addition to the latest-generation graphics cards, motherboards and memory devices featuring the iGame theme design. It will also jointly present two new gaming computers with Chaintech and Intel on the afternoon of June 6.

At Computex 2018, Colorful will showcase the iGame GTX1080Ti Vulcan X OC, the world's first graphics card with an LCD display, in addition to the iGame GTX1080 Vulcan X OC, iGame GTX1070Ti Vulcan X Top, iGame GTX1070 Vulcan X OC and iGame GTX1060 Vulcan X OC 6G of the Vulcan series. The high-end iGame GTX1080Ti Kudan will also be on exhibit along with the Neptune and Customization series. The 10th-anniversary retro edition and new products featuring the iGame design concept will be further highlights of this year's exhibition.

Colorful's iGame GTX1080Ti Vulcan X OC is powered by the Pascal GP102 GPU, featuring a one-key overclock to 1620MHz, a boost clock of 1733MHz and 11GB of GDDR5X memory on a 352-bit bus. It is equipped with a high-performance power supply for the GPU core and memory, the SWORIZER cooler, 1.68 million-color RGB lighting, and a PCB protected with high-strength alloy. Its most prominent feature is the iGame Status Monitor built into the cooler, which shows the core frequency, core temperature, fan speed and memory usage during operation. Core usage and load level are displayed as a load bar so the user can stay aware of the graphics card's current operating condition.

Also at Computex 2018 will be the iGame Z370 Vulcan X and a limited-edition motherboard crafted by the iGame team specifically for RNG. Both are ATX Intel Z370 motherboards with LGA 1151 sockets designed to fully utilize the power of 8th-generation Coffee Lake processors, and they feature RAM overclocking to 3200MHz as well as three PCI-E and three PCI-E x1 slots.
Targeting the high-end gaming segment, these motherboards are refined with the exclusive iGame pure-power inductance (IPP) and silver plating technology (SPT), which delivers twice the stability and oxidation resistance of copper plating, USB ports for gaming gear, GamerVoice with a gold-plated audio interface delivering 8-channel sound, and Killer E2500 network cards for enriched gaming experiences.

Aside from consumer-oriented gaming products, Colorful will also present motherboards and graphics cards for industrial control applications. Moreover, Colorful will feature an exhibit themed on cryptocurrency mining products, including mining motherboards, graphics cards and machines that are in massive shortage in China.

Colorful will continue working on technology innovation, channel expansion, brand publicity and international market development. As Colorful places great importance on its new product announcements and exhibitions at Computex 2018, Shan Wan, chairman of Colorful, will personally attend to exchange views with industry players. Colorful extends a warm welcome to visitors at booth M1112.

Colorful's flagship iGame series
Wednesday 6 June 2018
VATek Gen-3 modulator chips bring advanced features to new tier of DTV headends
Vision Advance Technology (VATek) has introduced its Gen-3 platform to the market, the result of years of dedicated development. The Gen-3 portfolio includes two Super Enmoder chips (B3+ and B3) and a brand-new Super Modulator (A3). The new platform is designed to exceed the expectations of modulator makers by offering technologies and features that previously could only be implemented with FPGAs, and it will be the first non-FPGA chip to support DVB-T2 (base 1.3.1) modulation.

The Gen-3 Super Enmoder supports up to seven different digital TV standards and features an AVC + MPEG-2 dual-format encoder. By combining key modulation capabilities with advances in media processing performance, the Gen-3 Super Enmoders are designed to transform headend products into versatile broadcast devices that can deliver high-quality TV programs to digital TVs across different standards, helping headend manufacturers expand into markets worldwide.

The Gen-3 Super Modulator (A3) has a new architecture engineered to deliver significant improvements over the Gen-1 modulator. A newly added stream engine works as a transport stream regulator that can implement PID filtering, regulate the transport stream and insert customized PSI/SI information into the media stream (a software illustration of PID filtering appears at the end of this article). The A3 can support any video encoder without a MUX unit. The A3 modulator is also equipped with VATek's latest modulation engine, bringing more DTV standards, including ISDB-T and DVB-T2, to the chip and allowing manufacturers to design UHD (4K) DVB-T2 headend products with ease.

VATek will also release a new firmware platform for all series of its products. The new platform is engineered for efficiency and design flexibility: the update allows every VATek product to share the same development tools and control logic. Several feature and function upgrades will appear in the new firmware as well. The most significant improvement is the control method, with a register-like control mechanism replacing the current gateway control system. R2 control logic will also be available in the new firmware platform, allowing every VATek modulator and Enmoder chip to control the R2 RF chip automatically. Developers no longer need to build an R2 driver yet can still access the RF chip, and users can even conduct IQ balance calibration with a few mouse clicks. VATek has also made a major upgrade to extend the TS API; with the new auto data repeat feature, the chip can automatically mux repeated PSI/SI data into the stream, reducing system loading dramatically. VATek will provide a paid PSI/SI design service for customers who use the new platform; they can purchase an authorization key to enable the PSI/SI function.

The VATek Gen-3 series is available this June and is expected to appear in consumer devices in the fourth quarter of 2018. Visit the VATek website at www.vatek.com.tw for details.

VATek Gen-3 product comparison:
- Media interfaces: Super Modulator A3: TS serial/parallel, USB; Super Enmoder B3+: BT656/1120, I2S; Super Enmoder B3: BT656/1120, I2S
- FHD H.264 / MPEG-2 encoder: A3: No; B3+: Yes; B3: Yes
- DVB-T2: A3: Yes; B3+: Yes; B3: No
- DVB-T, ATSC, J83A/B, DTMB, ISDB-T: A3: Yes; B3+: Yes; B3: Yes
Source: VATek

The VATek Gen-3 series is designed to support next-generation broadcast technology; its new DVB-T2 and AVC video engine brings exceptional performance and quality to the industry.
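The A3 performs PID filtering in hardware; as a purely software illustration of what PID filtering means (not VATek's implementation), the sketch below walks 188-byte MPEG transport stream packets, reads the 13-bit PID from the packet header and keeps only packets whose PID is on a whitelist. File names and PID values in the usage example are assumptions.

```python
# Software model of MPEG-TS PID filtering (illustrative only).
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def filter_pids(ts_bytes: bytes, allowed_pids: set) -> bytes:
    """Keep only transport stream packets whose PID is in allowed_pids."""
    out = bytearray()
    for offset in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        packet = ts_bytes[offset:offset + TS_PACKET_SIZE]
        if packet[0] != SYNC_BYTE:
            continue  # drop packets that have lost sync
        pid = ((packet[1] & 0x1F) << 8) | packet[2]  # 13-bit PID field
        if pid in allowed_pids:
            out += packet
    return bytes(out)

# Example usage (hypothetical capture file and video PID):
# stream = open("capture.ts", "rb").read()
# filtered = filter_pids(stream, {0x0000, 0x0100})  # keep PAT and one video PID
```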
Wednesday 6 June 2018
Gigabyte to demonstrate integrated AI/Data Science Cloud at Computex 2018
Gigabyte, an industry leader in server systems and motherboards, has collaborated with local cloud and storage platform providers to showcase an integrated "AI/Data Science Cloud" at Computex 2018, demonstrating how customers can build a private cloud to own and protect their data while connecting with public cloud services, and incorporate built-in AI (artificial intelligence) capabilities to use big data for real-time deep learning and inference processing (AIoT).

Gigabyte featured hardware

Gigabyte's Network & Communication Business Unit exhibit area at Computex will showcase its six main product lines and their target applications: H-Series density-optimized servers for cloud computing; G-Series high performance computing servers for AI; S-Series storage servers for big data; W-Series workstations for content creation and software development; R-Series general purpose rack servers for enterprise IT; and Racklution-OP Open Rack standard products for hyperscale data centers.

Featured products by series:
- H-Series (density-optimized servers for hyper-converged data centers / cloud service providers): H281-PE0, a 2U 4-node server with dual-socket Intel Xeon Scalable CPUs and 4 expansion slots per node, with an optional liquid cooling system
- G-Series (high performance GPU servers for AI and deep learning): G481-S80, a 4U dual-socket Intel Xeon Scalable CPU server with 8 x SXM2 NVIDIA V100 / P100 GPU modules and an optional liquid cooling system
- S-Series (scale-out storage for onsite and offsite software-defined storage): S451-3R0, a 4U dual-socket Intel Xeon Scalable CPU server with 36 x 3.5" SSD / HDD drives and an E-ATX form factor motherboard for configuration flexibility
- R-Series (general purpose rack servers for enterprise IT): R281-Z92, a 2U dual-socket AMD EPYC 7000 series CPU server with 24 x 2.5" ultra-fast NVMe U.2 storage drives and a top benchmark score on SPEC.org
- W-Series (office-environment workstations for media content creation or software development): W281-T91, a dual-socket Cavium ThunderX2 ARM CPU workstation tower
- Racklution-OP (Open Rack standard compliant servers for hyperscale data centers): DO60-MR0, a 12OU 21" Open Rack standard compliant mini-rack with compute and storage nodes

AI/Data Science Cloud demonstration

Gigabyte will not only showcase its hardware during Computex but also demonstrate practical applications of its products by building a private cloud in collaboration with its partners InfinitiesSoft and Bigtera. It calls the demonstration an "AI/Data Science Cloud": an AI and big data analysis enabled hybrid cloud, built on the InfinitiesSoft CloudFusion cloud management platform and Bigtera VirtualStor scale-out storage, running onsite at Gigabyte's Computex booth on its H-Series, G-Series and S-Series server products. It provides remote cloud services in categories such as compute, storage, big data analytics, deep learning and AI training, and management functions. The platform has a high-availability architecture that avoids any single point of failure and can be quickly scaled out at remote locations.

Infinities CloudFusion

Gigabyte's "AI/Data Science Cloud" has been built with InfinitiesSoft CloudFusion, a comprehensive cloud management solution that can support and integrate over 30 different private and public clouds in a single platform.
For this demonstration, a hybrid cloud has been built with OpenStack and Bigtera VirtualStor storage working seamlessly under the hood of CloudFusion, running on Gigabyte's H281-PE0 and H261-N80 hyper-converged servers and connected with public cloud services. HPC application containers have also been set up and integrated into this cloud with Kubernetes, running on Gigabyte's G481-S80 GPU servers to provide remote AI training and deep learning capabilities (a sketch of how such a GPU container is scheduled appears at the end of this article). Since CloudFusion supports a highly elastic, open API interface for developers, many additional public or private clouds can be connected and integrated into the platform to keep it future-proof.

Bigtera VirtualStor Scaler

Gigabyte's "AI/Data Science Cloud" includes a scale-out storage cluster created with Bigtera's VirtualStor Scaler storage platform and running on Gigabyte's S-Series storage servers. VirtualStor Scaler provides customers with a cost-effective x86 scale-out storage solution that allows them to pay as they grow. Its scale-out architecture provides the flexibility to specify the storage type (NAS, SAN, object storage), performance (IOPS and throughput) and efficiency, all while delivering resilient and secure capacity. VirtualStor Scaler's unique advantages include multi-tenant storage capabilities that provide different "virtual storage" for different tenants, making storage management flexible, as well as functionality to consolidate legacy devices and seamlessly migrate old data to a new storage system without downtime.

Gigabyte to demonstrate integrated "AI/Data Science Cloud"
Gigabyte's Computex booth is at Taipei World Trade Center Hall 1, D0002.
For more information on Gigabyte server products, please visit: http://b2b.gigabyte.com
For more information on InfinitiesSoft CloudFusion, please visit: http://www.infinitiessoft.com
For more information on Bigtera VirtualStor Scaler, please visit: http://www.bigtera.com
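Gigabyte has not published the demo's exact configuration; the sketch below is a generic illustration of how a GPU-backed training container can be scheduled on Kubernetes, the approach the demonstration describes. It uses the official Kubernetes Python client; the container image, namespace, pod name and training script are assumptions.

```python
# Illustrative sketch: scheduling a GPU-backed training container with
# Kubernetes, as in the CloudFusion/Kubernetes setup described above.
# Requires the official client: pip install kubernetes
from kubernetes import client, config

def launch_training_pod():
    config.load_kube_config()                      # use local kubeconfig credentials
    container = client.V1Container(
        name="trainer",
        image="tensorflow/tensorflow:latest-gpu",  # hypothetical training image
        command=["python", "/workspace/train.py"], # hypothetical training script
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"}         # ask the scheduler for one GPU
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="ai-training-demo"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    launch_training_pod()
```

The key design point is the GPU resource limit: the cluster's device plugin exposes GPUs as a schedulable resource, so the orchestrator places the container on a node that actually has a free accelerator.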
Wednesday 6 June 2018
V-Color unveils upgraded PRISM RGB memory and PCIe M.2 RGB SSD VPM800
V-Color unveils its latest DDR4 Prism RGB memory and PCIe SSD VPM800 at Computex 2018. Carrying on the V-Color style, the new products feature not only a total performance upgrade but also dazzling designs and colors, bringing gamers never-before-seen visual enjoyment and top-notch performance.

Combining a new heat sink design with improved light bars to present fascinating lighting effects that mimic flowing water, DDR4 Prism RGB is a must-have for PC modders and gamers. It comes in several models, including the entry-level DDR4-2666 8GB CL16 1.2V module available in red and gray, and two high-end DDR4-3000/3200 8GB models for heavy gamers and overclockers. DDR4-3000 8GB CL15 1.35V features silver heat sinks, while DDR4-3200 8GB CL16 1.35V, the highest spec in the Prism RGB series, is equipped with heat sinks made of special metallic materials with excellent heat transfer properties to enhance heat dissipation. Built with Samsung B-die chips, the DDR4-3200 8GB module unleashes full overclocking potential for gamers who keep pushing the limit.

DDR4 Prism RGB modules are built on 10-layer PCBs. They support Intel XMP 2.0 one-key overclocking, enabling an instant boost to blazing performance. RGB lighting is controlled through Gigabyte and ASRock motherboard software and works in sync with other RGB devices to create dazzling, ever-changing color effects that users can adjust in a snap.

The new RGB PCIe SSD VPM800 series is another highlight of V-Color's offerings. VPM800 SSDs use Silicon Motion's SM2263XT controller and support the PCIe 3.0 x4 interface and NVMe 1.3 protocol. The VPM800 480GB, for example, delivers sequential read performance of up to 2,000MB/s and sequential write performance of up to 1,600MB/s, with maximum random reads and writes rated at 250K IOPS and 200K IOPS respectively. VPM800 uses Toshiba's new 3D NAND flash chips to significantly boost storage density. On the outside, V-Color incorporates its patented color IC in the PCIe SSD, coupled with RGB lighting, to instantly transform the cold, hard feel of an SSD into an eye-catching RGB work of art that works in concert with the lighting effects of Prism RGB. Together the two make a new gaming package with compelling cost advantages that gamers simply cannot miss this year.

V-Color chairman Tomson Ho sees RGB as iconic and trendy in the gaming market, which is why V-Color is launching the Prism RGB series, combining cool looks with powerful performance, at full blast. "The Prism RGB series uses metal heat sinks with optimal heat dissipation. Its weight also exceeds other products in the same category by more than 10g. V-Color aims to bring unrivaled experiences to gamers, no expense spared," said Ho with confidence.

With respect to the RGB synchronization issue gamers are concerned about, Ho indicated V-Color is working on a patented solution that combines software and hardware and synchronizes peripheral devices. The solution is scheduled for release by year-end 2018. Users will be able to control RGB synchronization simply through a smartphone app, with no need for complicated BIOS settings.

Amid a market keen to push overclocking, Ho holds a different view. "V-Color focuses on products that have a larger user base and strives to build up market presence in this segment. V-Color enables gamers to overclock their systems by themselves for better cost-performance ratios, rather than telling them where the limits are right from the beginning," said Ho.

Furthermore, V-Color's unique iMS technology has been around for a while.
Ho commented, "iMS is now available with a simple push of the F4 key: the motherboard automatically engages in memory error checking and correction without complicated BIOS operations, so motherboard makers no longer have to teach users how to do this. V-Color is working with Gigabyte and ASRock on iMS, which allows software and hardware to combine forces, demonstrating V-Color's outstanding R&D strength." In other words, V-Color looks to collaborate with system vendors and enhance service quality and efficiency to further boost the system vendors' brand image. The partnership projects are 98% finished, with final releases planned for June 2018.

V-Color's Computex 2018 booth is at Taipei Nangang Exhibition Center, Hall 1, J0818.
Wednesday 6 June 2018
Clientron introduces multi-display F620 based on AMD R series CPU
Clientron Corp, a world-leading provider of thin client, POS and embedded systems, introduces the multi-display thin client F620, based on the third generation of AMD's R-Series embedded platform (Merlin Falcon), to provide powerful, flexible, secure and easily managed virtual work environments. A full set of I/O designed specifically for high-end customer demands makes the F620 an ideal virtual desktop solution for banking, healthcare, government and manufacturing applications.

Featuring low power consumption compliant with Energy Star 6.1, the F620 uses AMD's latest high-performance RX-216GD 1.6GHz dual-core SoC with a fully integrated GPU for the most demanding graphics workloads, providing powerful multimedia capability and an enhanced user experience. It supports three ultra-high-resolution 4K monitors via two DisplayPorts and one HDMI port, with an option for a fourth display for flexible expansion across various applications.

The F620 is configurable with two DDR4-1600MHz sockets for up to 16GB of memory and a SATA III mSATA interface for storage, and provides rich I/O including one COM port, two USB 3.0 ports, six USB 2.0 ports, one GbE LAN port, optional fiber-optic LAN, audio ports, one HDMI port and two DisplayPorts. Expansion is easy with I/O options including an additional DP port, an OmniKey smart card reader and an M.2 interface for a WLAN module. With a TPM (Trusted Platform Module) 2.0 design on board, the F620 provides security and privacy benefits. The F620 thin client supports 64-bit Windows 10 IoT Enterprise and Linux operating systems, offers long product lifecycles, and is optimized for all the major remote networking protocols including VMware Horizon View, Citrix XenDesktop and XenApp, and Microsoft RemoteFX.

With its fanless design, secure management, low power consumption and excellent virtual desktop experience, the F620 is an optimum choice for business environments requiring maximum flexibility for running rich graphical applications.

F620 thin client key features:
- Designed with AMD Merlin Falcon dual-core embedded processor
- Ultra-compact and fanless design
- Supports high-speed interfaces: DDR4, USB 3.0, SATA III, 4K display
- Native 3-display support (optional fourth display)
- Supports Kensington lock
- Optional smart card reader
- Optional WiFi module

The F620 thin client is available now. For more details on the F620, please visit www.clientron.com

Clientron introduces the multi-display thin client F620
Tuesday 5 June 2018
The new era of GPU computing has arrived
NVIDIA's GPU Technology Conference (GTC) Taiwan attracted more than 2,200 technologists, developers, researchers, government officials and media last week in Taipei. GTC Taiwan is the second of seven AI conferences NVIDIA will hold in key tech centers globally this year. GTC is the industry's premier AI and deep learning event, providing an opportunity for developer and research communities to share and learn about new GPU solutions and supercomputers and have direct access to experts from NVIDIA and other leading organizations. The first GTC of 2018, in Silicon Valley in March, hosted more than 8,000 visitors. GTC events are showcases for the latest breakthroughs in AI use cases, ranging from healthcare and big data to high performance computing and virtual reality, along with many more advanced solutions leveraging NVIDIA technologies.

GTC 2018 in San Jose debuted the NVIDIA DGX-2 AI supercomputing system, a piece of technology that AI geek dreams are made of. The powerful DGX-2 is an enterprise-grade cloud server that combines high performance computing and artificial intelligence requirements in one system. It combines 16 fully interconnected NVIDIA Tesla V100 Tensor Core GPUs for 10X the deep learning performance of its predecessor, the DGX-1, released in 2017. With half a terabyte of HBM2 memory and 12 NVIDIA NVSwitch interconnects, the DGX-2 became the first single server capable of delivering 2 petaFLOPS of computational capability for AI systems. It is powered by the NVIDIA DGX software stack and a scalable architecture built on NVSwitch technology.

In this interview, Marc Hamilton, NVIDIA's vice president of solutions architecture and engineering, talks about GTC and the development of Taiwan's technology ecosystem. He and his engineering team work with customers and partners to deliver solutions powered by NVIDIA artificial intelligence and deep learning, professional visualization, and high performance computing technologies. From many visits to ecosystem partners and developers, Hamilton is very familiar with the pace of AI development in Taiwan.

AI is dealing with HPC-class scaling problems

AI technologies elevate the enterprise by transforming the way we work, increasing collaboration and ushering in a new era of AI-powered innovation. AI solutions are rapidly moving beyond hype and into reality, and are primed to become one of the most consequential technological segments. Enterprises need to deploy AI solutions rapidly in response to business imperatives, and the DGX-2 delivers a ready-to-go server solution that offers a path to scaling up AI performance.

The DGX-2 is designed for both AI and HPC workloads. It simplifies scaling up AI with flexible switching technology for building massive deep learning compute clusters, combined with virtualization features that enable improved user and workload isolation in shared infrastructure environments. With this accelerated deployment model and an open architecture for easy scaling, development teams and data scientists can spend more time driving insights and less time building infrastructure.

For example, running HPC applications for weather forecasting means dealing with a massive scale of computation nodes. Forecasts are created using a model of the Earth's systems by computing changes based on fluid flow, physics and other parameters. The precision and accuracy of a forecast depend on the fidelity of the model and the algorithms, and especially on how many data points are represented.
Computing a weather forecast requires scheduling a complex ensemble of pre-processing jobs, solver jobs and post-processing jobs. Since there is no use for yesterday's forecast, the prediction must be delivered on time, every time. The prediction application is executed on a server node and receives reports from monitoring programs distributed over the compute nodes. Typically, these are large distributed-memory clusters made up of thousands of nodes and hundreds of thousands of cores. Many HPC applications work best when data fits in GPU memory. The computations are built on interactions between points on the grid that represents the space being simulated, with the calculated variables stepped forward in time. It turns out that with today's HPC technology, moving data in and out of the GPU takes more time than the computations themselves. To be effective, systems for weather forecasting and climate modeling require high memory bandwidth and a fast interconnect across the system.

NVSwitch maximizes data throughput between GPUs leveraging NVLink

Memory is one of the biggest challenges in deep neural networks (DNNs) today. Memory in DNNs is required to store input data, weight parameters and activations as an input propagates through the network. Developers are struggling with the limited bandwidth of the DRAM devices that AI systems must use to store the huge volumes of weights and activations in DNNs.

Having long relied on PCI Express, NVIDIA found when it launched its Pascal architecture with the Tesla P100 GPU in 2016 that one consequence of Pascal's increased server focus was that interconnect bandwidth and latency became an issue: the data throughput requirements of NVIDIA's GPU platform began outpacing what PCIe could provide. As a result, for its compute-focused GPUs, NVIDIA introduced a new interconnect called NVLink.

With six NVLink links per GPU, these links can be teamed together for greater bandwidth between individual GPUs, or provide lower-bandwidth but still direct connections to a greater number of GPUs. In practice this limited the size of a single NVLink cluster to eight GPUs in what NVIDIA calls a Hybrid Mesh Cube configuration, and even then it is a NUMA setup where not every GPU can see every other GPU. Utilizing more than eight GPUs required multiple systems connected via InfiniBand, losing some of the shared-memory and latency benefits of NVLink and closely connected GPUs.

In a DGX-2 system there are 16 Volta GPUs in one server, so NVIDIA introduced NVSwitch, which is designed to enable much larger GPU clusters by routing GPUs through one or more switches. A single NVSwitch has 18 full-bandwidth ports, three times as many as a single Tesla V100 GPU, with all of the NVSwitch ports fully connected through an internal crossbar.

The goal with NVSwitch is to increase the number of GPUs that can be in a cluster: the switch easily allows a 16-GPU configuration with 12 NVSwitch interconnects (216 ports) in the system to maximize the bandwidth available between the GPUs. NVSwitch enables GPU-to-GPU communications at 300GB per second, double the capacity of the DGX-1 (and the HGX reference architecture it is based on).
This advancement will drive hyper-connection between GPUs to handle bigger, more demanding AI projects for data scientists. NVIDIA wants to take NVLink lane limits out of the equation entirely, as using multiple switches should make it possible to build almost any kind of GPU topology in theory.

Deep learning frameworks such as TensorFlow do not need to understand the underlying NVLink topology in a server thanks to NCCL (the NVIDIA Collective Communications Library), which is used by TensorFlow and all leading deep learning frameworks. NVIDIA's AI software stack is fully optimized and updated to support developers using the DGX-2 and other DGX systems. This includes new versions of NVIDIA CUDA, TensorRT, NCCL and cuDNN, and a new Isaac software developer kit for robotics.

Hamilton highlighted the release of TensorRT 4.0, a new version of NVIDIA's optimizing inference accelerator, which integrates with the TensorFlow 1.7 framework. TensorFlow remains one of the more popular deep learning frameworks today, and NVIDIA engineers, who know their GPUs well, built TensorRT 4.0 to accelerate deep learning inference across a broad range of applications through optimizations and high-performance runtimes for GPU-based platforms. Hamilton noted that many TensorFlow users will gain the highest inference performance possible along with a near-transparent workflow: the integration provides a simple API that applies powerful FP16 and INT8 optimizations when compiling TensorFlow code with TensorRT, and it speeds up TensorFlow inference by 8x for low-latency runs of the ResNet-50 benchmark.

In edge computing, TensorRT can be deployed on NVIDIA DRIVE autonomous vehicles and NVIDIA Jetson embedded platforms. Deep neural networks built on any framework can be trained on NVIDIA DGX systems in the data center and then deployed to all types of edge devices. With TensorRT software, developers can focus on developing advanced deep learning-powered applications rather than spending time fine-tuning inference performance for deployment.
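As an illustration of the TensorRT-TensorFlow integration Hamilton describes, the sketch below uses the TensorFlow 1.7-era contrib interface to convert a frozen graph for FP16 inference. The graph file name, output node name and parameter values are assumptions, and this API has since been reorganized in later TensorFlow releases.

```python
# Sketch of the TensorRT integration in TensorFlow 1.7 (contrib API of that era);
# paths, node names and sizes below are illustrative assumptions.
import tensorflow as tf
from tensorflow.contrib import tensorrt as trt

def optimize_frozen_graph(frozen_graph_path="resnet50_frozen.pb",
                          output_node="logits"):
    # Load the frozen TensorFlow graph to be optimized.
    with tf.gfile.GFile(frozen_graph_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Ask TensorRT to replace supported subgraphs with optimized FP16 engines.
    trt_graph = trt.create_inference_graph(
        input_graph_def=graph_def,
        outputs=[output_node],
        max_batch_size=8,
        max_workspace_size_bytes=1 << 30,  # 1 GB of workspace for TensorRT
        precision_mode="FP16")             # "INT8" requires an extra calibration step
    return trt_graph

# The returned GraphDef can then be imported with tf.import_graph_def() and
# executed in a normal tf.Session for low-latency inference.
```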
HGX-2 server platform as a reference design for cloud data centers

The DGX-2 server is expected to ship to customers in Q3 2018. Meanwhile, bringing together the solution expertise of Taiwan's ecosystem partners and global server manufacturers, NVIDIA announced the HGX-2 cloud server platform with Taiwan's leading server makers at GTC Taipei. The NVIDIA DGX-2 server is the first system built using the HGX-2 reference design.

The server industry has been one of the few industries that have remained strong for Taiwan ODMs, and increased opportunities in the AI field will help Taiwan system makers. NVIDIA engineering teams work closely with Taiwan ODMs to help minimize the development time from design win to production deployment. The HGX-2 is designed to meet the needs of the growing number of applications that seek to leverage both HPC and AI use cases, and server brands and ODMs are designing HGX-2-based systems to build a wide range of qualified GPU-accelerated systems for hyperscale data centers.

The HGX-2 server reference design consists of two baseboards, each equipped with eight NVIDIA Tesla V100 32GB GPUs; the 16 GPUs are fully connected through NVSwitch interconnect technology. With the HGX-2 serving as a building block, server manufacturers will be able to build full server platforms that can be customized to meet the requirements of different data centers.

NVIDIA AI collaboration in Taiwan

Hamilton says the areas of AI collaboration in Taiwan include hands-on training of 3,000 developers on leading applications of deep learning and high-level internship opportunities for Taiwanese post-doctoral students to work with NVIDIA engineering teams. The first AI hospital in Taiwan, sponsored by the LEAP program supported by the Ministry of Science and Technology (MOST), is making it possible for doctors to detect disease earlier and understand it better through advanced breakthroughs in AI.

Another case Hamilton highlighted is AI helping semiconductor foundries identify wafer defects, a solution focused on using AI to sharpen the domestic semiconductor industry's competitive position. The wafer defect detection system uses physics-based instruments to examine wafer images with an NVIDIA GPU-based optical neural network. The same idea has been adapted for the printed circuit board (PCB) industry to make visual inspection of PCBs more accurate and give production line managers a significant edge in discovering and resolving product issues.

NVIDIA HGX-2 cloud server platform
Tuesday 5 June 2018
VIA Labs announces immediate availability of USB-IF certified Power Delivery 3.0 Silicon
With the finalization of the USB PD 3.0 Programmable Power Supply (PPS) protocol last year and mobile phones widely adopting the USB-C interface, the unification of standards for mobile devices will not only drive fast-charging opportunities but also create new use cases and mobile peripheral requirements. In response to the trend, VIA Labs Inc has taken the lead in launching certified USB PD 3.0 solutions, aiming to help customers enter the market quickly with complete reference designs.

"In order to fulfill the increasing needs for thinner and lighter phones, faster charging and full-screen displays, new mobile phones will not only adopt the USB-C interface but will also eliminate the traditional 3.5mm audio jack," said David Hsu, associate VP of product marketing, VIA Labs Inc. "The trend will gradually spread from high-end phones down to mid-range phones, which will definitely drive opportunities for new peripherals such as USB-C headsets, dongles and docking stations."

A mobile phone with only one USB-C port and no audio jack creates use cases that differ from current consumer habits; the most common is the need to charge and listen to music at the same time. New types of peripherals are therefore required to provide a more convenient user experience. According to Hsu, "For a long time, peripherals have played the role of 'extending possibilities,' and VIA Labs is devoted to providing more use cases to further extend USB-C's diverse functions."

With the concept of "Extending More Possibilities," VIA Labs is committed to developing silicon solutions for various application scenarios, so that mobile phone makers can use them to create higher value and differentiated features for their peripheral devices, meeting consumers' practical application requirements and creating a better user experience.

Take the VIA Labs VL104 DisplayPort Alternate Mode controller as an example: it integrates one upstream USB-C port and two downstream USB-C ports to form one combination package, the App6 reference design. The two downstream ports support both charging and headphones, enabling intelligent interchangeable functions, a design that truly meets and even optimizes the user experience. In addition, the VL104 integrates a buck-boost converter on a single chip to dramatically reduce PCB size, making it possible to design more varied, compact and smaller peripheral devices.

For USB-C power adapter applications, VIA Labs has introduced the highly integrated VP302 chip, optimized for next-generation USB-C wall chargers and power adapters that support real-time, finely adjustable voltage and current output. The new Programmable Power Supply (PPS) capability is the foundation for several new fast-charging methods that promise not only more rapid charging but also lower device temperatures compared with legacy methods. The VIA Labs VP302 has obtained USB-IF PD 3.0 certification.
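VIA Labs has not published VP302 firmware details; as a rough illustration of what "finely adjustable output" means under PPS, the sketch below models a device checking a requested operating point against an advertised programmable power range and snapping it to PPS's nominal 20mV voltage and 50mA current step sizes. The class, function and example values are illustrative assumptions, not the VP302 API.

```python
# Illustrative model of a USB PD 3.0 PPS request check: the charger
# advertises a programmable range, and the device asks for an exact
# voltage/current inside it. Names and values are assumptions.
from dataclasses import dataclass

V_STEP_MV = 20   # PPS adjusts output voltage in nominal 20 mV steps
I_STEP_MA = 50   # and operating current in nominal 50 mA steps

@dataclass
class PpsRange:
    min_voltage_mv: int
    max_voltage_mv: int
    max_current_ma: int

def build_request(rng: PpsRange, req_voltage_mv: int, req_current_ma: int):
    """Return the (voltage, current) operating point a device may request,
    snapped to PPS step sizes, or raise if it falls outside the range."""
    if not (rng.min_voltage_mv <= req_voltage_mv <= rng.max_voltage_mv):
        raise ValueError("requested voltage outside the advertised PPS range")
    if req_current_ma > rng.max_current_ma:
        raise ValueError("requested current exceeds the advertised limit")
    return (req_voltage_mv // V_STEP_MV * V_STEP_MV,
            req_current_ma // I_STEP_MA * I_STEP_MA)

# Example: a hypothetical 3.3-11V / 3A programmable range, asking for 9.02V at 2.25A.
print(build_request(PpsRange(3300, 11000, 3000), 9020, 2250))  # -> (9020, 2250)
```

The fine granularity is what lets a phone continuously nudge the adapter's output to track its battery charging curve, which is the basis of the fast-charging methods and lower device temperatures mentioned above.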
Before the fast-charging specification for mobile phones was unified, each mobile phone manufacturer had its own fast-charging specification. Though convenient in the short term, in the longer term this can damage mobile phones due to adapter mismatch. With the finalization of the USB PD 3.0 PPS protocol, USB-C fast charging and manufacturers' own technologies will coexist in the near future, and USB-C fast charging is expected to eventually hit the mainstream as consumers get used to using any USB-C power adapter with different brands of mobile phones.

Delivering 'combination-package solutions' based on application scenarios and consumer needs

To confront the fiercely competitive USB-C silicon market, VIA Labs chose to target dongle and hub applications that were not yet prevalent at the beginning. With a strategy of emphasizing improvements to user experience and providing complete turnkey solutions, VIA Labs has successfully helped peripheral makers move into mass production quickly, and its products have been adopted by Huawei, Nintendo and other internationally renowned manufacturers.

"We aim to provide combination 'packages' for various application scenarios. With the diverse functions of USB-C, we have developed seven reference designs for the App1-App7 use cases, covering everything from a simple video dongle, to combinations of data, video, audio and charging, to a multi-function dongle with charging through any port. The term App1-App7 has now become common usage in the industry," said Hsu. "For example, the VL104 can be used to support the App6/App7 use cases. Just like ordering a meal at McDonald's, our customers can easily and quickly get the design they want simply by telling us the number (App1-App7)."

As the development of USB-C takes off, creating value will be a key to success in the market. With the penetration rate of USB-C in mobile phones gradually increasing, phone makers will try to provide better standard peripherals to create differentiated features. Once consumers' habits are established, the demand for third-party peripherals will also rise.

To address this need, VIA Labs will strengthen cooperation with phone makers to further explore the mobile peripheral market by conceiving possible use cases and providing them with diverse design options. For example, by fully leveraging the advantages of the USB-C interface, docking stations can be used as an extension of a mobile phone's voice assistant, enabling functions beyond listening to music and charging. This, again, demonstrates the concept of "Extending More Possibilities." Hsu is also optimistic that more new application scenarios and requirements for USB-C will emerge, and the business outlook for the market is very promising in 2019.

VIA Labs' VL104 supports App6: interchangeable charge-through audio dongle
VIA Labs' VL104 supports App7: multi-function dongle + any-port charge through
Tuesday 5 June 2018
CWT combines forces with eTreego to optimize charging piles' efficiency amid rapidly growing electric vehicle market
Growing awareness of environmental protection and energy conservation has prompted governments worldwide to enforce energy efficiency policies. With the development of electric vehicles (EV) as one of their priorities, many countries are now planning timelines between 2020 and 2040 to ban sales of new fossil-fuel cars. Worldwide efforts to promote EV adoption have created immense opportunities, attracting many Taiwan-based high-tech players to the market. Channel Well Technology (CWT), with years of experience in the power supply business, is also making active efforts in the R&D of EV charging pile equipment. Wei-Ting Ou, who heads CWT's EV charging pile department, points out that most people tend to think Taiwan-based manufacturers focus on developing technologies for EV body structures; however, the capabilities they have accumulated in the charging pile industry are in fact another rising star.

CWT collaborates with the startup eTreego by providing interior power supply systems to be integrated into its charging piles. Most charging stations currently on the market can only support a few cars at once and may have difficulty satisfying demand in high-traffic areas. eTreego's charging pile, on the other hand, offers a chance to reduce the contract capacity of the charging station through non-uniform charging technology. eTreego's piles, aside from their critical control modules, rely heavily on robust internal power supply technologies, and CWT's long-term devotion to power supplies makes it the perfect match for this need.

Overall growth of the EV industry has fallen short of expectations in the past for the following reasons. First, due to competition among leading international automakers, the Taiwan market had not unified EV charging standards during the industry's early stage, which held back domestic manufacturers and resulted in longer R&D cycles. Furthermore, consumers generally have the impression that EVs run out of power quickly, and long charging times and insufficient charging stations are additional factors that hinder EV growth.

However, the EV market is picking up speed and explosive growth can be expected, especially driven by government policies, according to Ou. This is also the main reason leading companies are scrambling for a share of the market. Taiwan's EV standards and regulations are close to completion. EV battery range has greatly improved from around 100km to 300km or more. DC charging stations can charge an electric car's battery to 80% in less than 40 minutes, while AC charging, which helps prolong battery life, generally requires 4 to 12 hours, so EV owners are advised to charge their vehicles at home overnight. In brief, the convenience and feasibility of EV charging have improved significantly.

The growing EV infrastructure market is creating rising opportunities, now and in the years to come, for those who are well prepared. To capture these opportunities, CWT has taken the initiative not only to pursue its own power supply research but also to engage in joint development with eTreego in an attempt to gain a strong foothold in the EV charging equipment market.

Wei-Ting Ou, head of CWT's EV charging pile department, expects flourishing opportunities as government policies promote EV adoption worldwide.