Monday 30 May 2016
NVIDIA Pascal taking us one step closer to a VR world
One of the most exciting things about graphics chip design house NVIDIA is that the company has vision; it is always taking us one step closer to the future. While best known for its gaming GPUs, NVIDIA has also used that vision to stake out leadership positions in areas such as high performance computing (HPC), desktop virtualization and artificial intelligence. And now, with the launch of its new Pascal architecture, the company is taking the market one step closer to realizing the vast potential of virtual reality (VR).

To be clear, the amazing performance of the recently launched GeForce GTX 1080 has traditional gamers excited and lining up to get hold of the fastest gaming GPU in the world. Based on NVIDIA's next-generation Pascal architecture, the GTX 1080 features 2560 CUDA cores running at speeds over 1600MHz, delivering a 70% performance increase over the Maxwell-based GeForce GTX 980.

The GTX 1080 also delivers 3x the power efficiency of the Maxwell architecture, allowing for unheard-of clock speeds of over 1700MHz while consuming only 180 watts of power. Pascal-based GPUs also provide 8GB of GDDR5X memory on a 256-bit memory interface running at 10Gb/sec, with this expanded bandwidth driving a 1.7x improvement in effective memory bandwidth over standard GDDR5. NVIDIA has achieved all this through a design that takes advantage of a leading-edge 16nm FinFET manufacturing process that packs 7.2 billion transistors into one chip, delivering a dramatic increase in performance and efficiency.

Virtual reality enhancements

Amazingly, the new GeForce GTX 1080 can double the virtual reality performance of the Maxwell-based GeForce GTX TITAN X. One driver of this new level of performance is pure horsepower. For example, VR headsets render games and applications at a resolution equivalent to 3024x1680, and need to do so at a sustained 90 frames per second (FPS).
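The requirement above translates into a concrete pixel throughput and per-frame time budget. A quick back-of-the-envelope sketch using only the figures quoted in the article (the 1080p60 comparison is added here for scale):

```python
# Sustained VR rendering workload implied by the headset figures above.
width, height, fps = 3024, 1680, 90

pixels_per_second = width * height * fps   # pixels the GPU must shade each second
frame_budget_ms = 1000 / fps               # time available to render each frame

conventional = 1920 * 1080 * 60            # a 1080p60 monitor, for comparison

print(f"{pixels_per_second / 1e6:.0f} Mpix/s sustained")                     # ~457 Mpix/s
print(f"{frame_budget_ms:.1f} ms per frame")                                 # ~11.1 ms
print(f"{pixels_per_second / conventional:.1f}x the pixel rate of 1080p60")  # ~3.7x
```

That 11ms budget, with no frame ever allowed to slip, is what makes VR so much more demanding than conventional gaming.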
Failure to maintain a constant 90 FPS results in stuttering and hitching that ruin the experience.

But NVIDIA understands VR performance is not only about horsepower. The company has long been keenly interested in the capabilities of VR and has been working in the field for about a dozen years. Up to this point, NVIDIA has mostly worked in professional and vertical segments, developing high-end immersive environments that could cost millions of dollars. Bringing VR to the mainstream has been a goal of the company over the past several years, especially since a little startup called Oculus came to its GTC conference in 2013. In line with industry developments, NVIDIA started considering VR rendering as part of its previous-generation Maxwell architecture. And now with Pascal, NVIDIA has tailored the architecture to enable a whole new level of presence in VR.

For example, one major feature of the Pascal architecture implemented to optimize the VR experience is the Simultaneous Multi-Projection (SMP) engine, which fundamentally changes the way a GPU renders to a display. Since the early days of 3D rendering, the graphics pipeline has been designed around a simple assumption: that the render target is a single, flat display screen. With the SMP engine, the GPU can generate multiple projections of a single geometry stream, mapping a single image onto up to sixteen different projections from the same viewpoint. This capability enables the GeForce GTX 1080 to accurately match the curved projection required for VR displays, the multiple projection angles required for surround display setups, and other emerging display use cases such as augmented reality (AR).

This technology is especially suited to VR display systems, which put a lens between the viewer and the screen, requiring a new type of projection that differs from the standard flat planar projection traditional GPUs support. With Pascal, this can now be done efficiently.
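A rough way to picture what SMP does is as a replication of submitted geometry across a set of configured projections. The sketch below is purely conceptual (the names are illustrative, not the actual NVIDIA API; the real engine performs this replication in fixed-function hardware after the geometry pipeline):

```python
# Conceptual sketch of simultaneous multi-projection: geometry is submitted
# once and replicated across every configured projection. With two eyes this
# allows up to 16 x 2 = 32 copies per primitive, as described in the article.

MAX_PROJECTIONS_PER_EYE = 16

def replicate(primitives, projections, eyes=2):
    """One copy of each primitive per (eye, projection) pair."""
    assert len(projections) <= MAX_PROJECTIONS_PER_EYE
    return [(eye, proj, prim)
            for eye in range(eyes)
            for proj in projections
            for prim in primitives]

# Four tilted planes approximating a curved, lens-matched projection:
copies = replicate(["tri0", "tri1"], ["view_a", "view_b", "view_c", "view_d"])
print(len(copies))  # 16: 2 primitives x 2 eyes x 4 projections
```

The point of doing this late in the pipeline is that the expensive upstream work (vertex and tessellation shading) happens only once per primitive, not once per projection.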
Traditional GPUs also claim to support these types of displays, but they can only do so with significant inefficiencies: requiring multiple rendering passes, rendering with overdraw and then warping the image to match the display, or both.

Pascal can also render geometry for the left and right eyes simultaneously in a single pass, meaning an application can instruct the GPU to replicate geometry up to 32 times (16 projections x 2 projection centers) without additional application overhead. In addition, all the processing is hardware-accelerated, and the stream of data never leaves the chip. Since the multi-projection expansion happens after the geometry pipeline, the application saves all the work that would otherwise need to be performed in upstream shader stages. The savings are particularly important in geometry-heavy scenarios, such as tessellation.

Asynchronous timewarp

Another requirement of some virtual reality applications is the ability to handle multiple independent workloads that together contribute to the final rendered image. For example, timewarp is a feature implemented in the Oculus SDK that renders an image and then performs a postprocess on that rendered image to adjust it for changes in head motion that occurred during rendering. To lower latency, a technique called asynchronous timewarp is used to regenerate the final frame based on head position just before display scanout, interrupting the rendering of the next frame to do so. The challenge for the GPU is that an asynchronous timewarp operation must be completed before scanout starts, or a frame will be dropped.
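The scheduling constraint can be sketched numerically. The model and the numbers below are illustrative assumptions, not measured values from any SDK or driver:

```python
# Sketch of the asynchronous-timewarp deadline: the driver interrupts the
# next frame's rendering shortly before scanout, and the warp only lands if
# preemption latency plus the warp pass fit in the remaining slack.

REFRESH_HZ = 90
FRAME_BUDGET_MS = 1000 / REFRESH_HZ   # ~11.1 ms between scanouts

def timewarp_lands(preempt_ms, warp_ms, slack_ms):
    """True if the GPU can preempt rendering and finish the warp before scanout."""
    return preempt_ms + warp_ms <= slack_ms

# Fine-grained (pixel-level) preemption responds quickly, leaving time to warp:
print(timewarp_lands(preempt_ms=0.2, warp_ms=1.5, slack_ms=2.0))   # True
# Coarse preemption that must drain whole draw calls can miss the deadline:
print(timewarp_lands(preempt_ms=5.0, warp_ms=1.5, slack_ms=2.0))   # False
```

This is why preemption granularity, not just raw speed, determines whether the warped frame makes it out on time.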
In this scenario, the GPU needs to support very fast, low-latency preemption to move the less critical workload off the GPU so that the more critical workload can run as soon as possible. Unfortunately, a single rendering command from a game engine can contain hundreds of draw calls, with each draw call covering many pixels that need to be shaded and rendered. If preemption is implemented at a high level in the graphics pipeline, the GPU would have to complete all of this work before switching tasks, resulting in a potentially very long delay.

To address this issue, Pascal is the first GPU architecture to implement pixel-level preemption: the graphics units keep track of their intermediate progress on rendering work, so that when preemption is requested, they can stop where they are and switch quickly, down to the granularity of a single pixel.

VRWorks

On top of the changes in the GPU architecture, NVIDIA has introduced a new VRWorks SDK to support Pascal. VRWorks is NVIDIA's suite of APIs, libraries, and engines that allows application and headset developers to get the best experience out of NVIDIA GPUs. In the past, NVIDIA focused its VRWorks development on areas such as reducing latency, improving graphics performance and creating plug-and-play compatibility for headsets, along with improving overall graphics quality and multi-projection capabilities. With Pascal, however, the company has introduced new features to improve the user's overall sense of presence in virtual reality.

The first is a feature called VRWorks Audio, which calculates sound propagation. Traditional audio is positional, using volume differences to tell you where a sound source is within an environment. But this doesn't really reflect the real world, where sound spreads in many directions, not just directly toward a listener, meaning we also experience sound in the form of reverberation.
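The contribution of indirect paths can be illustrated with a toy calculation: each traced path's length fixes its arrival delay, and the spread of those delays is what we hear as reverberation. This is a simplified geometric illustration, not the OptiX-based VRWorks implementation:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def path_delay_ms(points):
    """Arrival delay of a sound path (source -> bounces -> listener), in ms."""
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    return 1000.0 * length / SPEED_OF_SOUND

source, listener = (0.0, 0.0), (10.0, 0.0)
bounce = (5.0, 5.0)   # a hypothetical reflection point on a nearby wall

direct = path_delay_ms([source, listener])
echo = path_delay_ms([source, bounce, listener])
print(f"direct {direct:.1f} ms, reflected {echo:.1f} ms")  # the gap is the reverb
```

A ray-traced audio engine does this for many thousands of paths per frame, with each surface's material also attenuating and filtering the energy it reflects.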
But this indirect sound also depends on the size, shape, and material properties of the surrounding area. NVIDIA VRWorks Audio uses ray tracing, a technique from computer graphics image generation, to trace the path of audio propagation through a virtual scene. VRWorks Audio simulates the propagation of acoustic energy through the surrounding environment, using NVIDIA's OptiX ray tracing engine to calculate the movement of sound around a virtual environment.

NVIDIA has also announced PhysX for VRWorks, which enables realistic collision detection. When a user reaches out with a touch controller and grabs an object, the object responds realistically using PhysX. This interaction allows the game engine to provide an accurate response, both visually and in haptic feedback. All objects in the environment, whether water or fire, behave realistically and as the user would expect, so when users step in and put on the headset, they are fully immersed in the VR world.

One of the demos NVIDIA has prepared to show developers how they can take full advantage of Pascal, and how much fun and how immersive it can be, is called VR Funhouse. Players can bounce between ten different mini-games, tackling carnival-inspired challenges such as tossing basketballs, popping balloons, and shooting at targets fired from a cartoon cannon. VR Funhouse is the first full VR experience that NVIDIA is publishing itself; it will be distributed through Steam as open source, so developers can get a jump start on creating similarly compelling experiences. Visitors at Computex Taipei will also be able to tap into the power of Pascal by experiencing NVIDIA VR Funhouse.

NVIDIA's role in the VR market

It is fair to say that NVIDIA delivers one of the core enabling technologies for the VR ecosystem, with its processors powering the headsets while also enhancing the video, and now audio, for the applications.
NVIDIA GPUs are also used for 360-degree stitching and for most of the 360-degree videos being developed, and people are starting to use GPUs for advanced input and tracking as well.

As the market moves forward, NVIDIA believes VR is going to be increasingly important as a computing platform that changes the way we enjoy entertainment, interact with friends and family, and get business done. The company points out that the development of VR will be somewhat analogous to that of smartphones, which over the past 10 years have brought about a revolution in video content creation and consumption, delivered new types of mobile gaming, introduced new ways to interact with friends through applications like Facebook and Snapchat, and changed how we do business with applications like Uber.

Moreover, VR applications are expected to develop holistically. Countless verticals will use the technology to market their products (such as through virtual home tours), educate the public (such as virtually traveling up Mount Everest) or train their staff (such as at hospitals). On the other side, the gaming community will develop the applications and hardware that make VR more widespread among consumers.

And NVIDIA will be right in the middle every step of the way, or perhaps one step ahead. Visit NVIDIA at Computex to learn more.
Monday 30 May 2016
Conexant Introduces New Family of USB-C Audio CODECs, Redefines Possibilities for Audio Accessories
This week at CES Asia 2016, audio and imaging innovation leader Conexant Systems, Inc. will be highlighting its AudioSmart™ product line of highly efficient, powerful silicon and software solutions that enhance the audio experience. Today, the company announced the CX20985 and CX20899, two new USB DSP audio CODECs that support USB Type-C (USB-C) connectors in smartphone headsets and docking stations. Both the CX20985 and CX20899 are highly integrated, single-chip solutions that are fully compliant with the USB-C standard, as well as with enterprise requirements for Skype for Business.

One of the earliest entrants into the USB headset space, Conexant is the leading provider of gaming and office headset technology, having shipped over 40 million units to market leaders. According to Saleel Awsare, Conexant senior vice president and general manager, this track record of innovation makes Conexant uniquely qualified to identify and implement the next big audio accessory trend. "We saw the introduction of the USB-C connection as a way to bring a new class of features to audio accessories and radically enhance the user experience," Awsare noted. "While there are other vendors that offer USB headset technology, we have the audio expertise to deliver what they cannot: single-chip, comprehensive solutions that include software and are truly plug-and-play. We're taking our innovative technology that is enabling advancements such as voice control and contextual awareness in smart home devices, TVs, smartphones, and more, and we're bringing all of its benefits to audio accessories."

The adoption of USB-C has been the fastest in the history of USB standards, and, with many smartphone, laptop and tablet manufacturers already implementing USB-C connectivity in their products, it is well on its way to becoming the universal connection of choice.
Much smaller than a standard USB-A connector, USB-C also beats previously released USB connectors in terms of power, bandwidth and data speeds. Featuring a reversible plug orientation, USB-C eliminates the need for multiple connectors by consolidating everything into one cable.

Featuring a stereo 24-bit DAC and ADC for music and voice communication applications, the CX20985 supports sampling rates of up to 48kHz. It minimizes bill of materials (BOM) costs by eliminating the need for an external crystal, and integrates a capless headphone driver that produces a full-range frequency response and eliminates AC coupling capacitors. The CX20985 family is available in a 50-pin QFN and a tiny 46-pin WLCSP package that requires minimal PCB area, making it ideal for USB headset and docking station designs.

The CX20899 integrates Conexant's award-winning digital signal processor (DSP) for an enhanced audio and voice experience. DSP algorithms include acoustic echo cancellation (AEC), noise reduction (NR), programmable EQ, dynamic range compression (DRC), microphone AGC, volume control and microphone boost. A true-ground capless headphone driver delivers high-quality, power-efficient playback and contributes to minimal BOM costs. The CX20899 also features a single universal jack that supports headsets, headphones, external microphones, and line-in devices.

Both the CX20985 and CX20899 feature a built-in four-conductor headset jack that supports headphone/headset auto-detection, as well as auto-switching between OMTP and CTIA-style headsets without the need for external components.

Continuing to lead the way forward for audio accessories, Conexant plans additional enhancements to its AudioSmart family of USB-C compliant audio CODECs. Innovative features such as active noise cancellation (ANC) will be introduced in the near future for hearables, including health monitoring headsets and other applications.

Available in 60-pin QFN packages, the CX20899 is currently in mass production.
Now sampling, the CX20985 will be available in a 50-pin QFN package, with mass production set to begin in July 2016.

About Conexant

Conexant Systems, Inc., an audio and imaging innovation leader, combines its significant IP portfolio in DSP, analog and mixed-signal technology with embedded software to deliver highly innovative silicon and software solutions that enrich and expand audio and imaging capabilities. Both enterprise and consumer markets are addressed by Conexant's AudioSmart™ and ImagingSmart™ solutions. Products with the company's technology built in include PCs, tablet computers, TVs, headsets, printers, video monitors, game consoles and a variety of other devices. Founded in 1999, Conexant is a privately held fabless semiconductor company headquartered in Irvine, Calif., with offices and design centers worldwide.
Monday 30 May 2016
Introduction of LYNwave smart antenna solution
Today, demand for multimedia streaming and IoT connectivity is increasing rapidly, so wireless throughput and quality of service are extremely important to the user experience. Achieving high-quality wireless performance for multiple clients in an interference-prone environment requires both better radio frequency control technology and a better method of managing wireless beacon frame steering.

iFast technology combines multiple antenna modules, each providing specific radiation patterns, with advanced management and control blocks integrated at the firmware level. The iFast algorithm is based on analysis of a large body of field-test data measured in real environments, covering client signal strength, radio conditions, packet status and more. This intelligent method chooses the most suitable antenna pattern, from both a signal and an interference standpoint, to increase the signal-to-noise ratio (SNR) for better wireless performance. Its continual learning from the real environment makes it well suited to complex, rapidly changing conditions.

In the iFast hardware architecture, each antenna unit is composed of an omnidirectional antenna element and several directional elements, and the radiation pattern can be changed by controlling the reflectors. When the antenna is set to omnidirectional mode, as shown in Fig. 1, it transmits and receives equally well in all directions, similar to an external dipole antenna. When set to directional mode, as shown in Fig. 2, it provides high directional antenna gain and a high front-to-back ratio, delivering better signal quality and coverage while avoiding interference at the same time.

iFast technology provides a unique, automatically adjusting smart antenna capability that can greatly improve signal strength and coverage while increasing throughput and wireless capacity.
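The selection step at the heart of this approach can be sketched minimally, assuming per-pattern SNR estimates have already been gathered from the field measurements described above (pattern names and values here are illustrative, not LYNwave's actual algorithm):

```python
def choose_pattern(snr_db_by_pattern):
    """Pick the radiation pattern with the highest estimated SNR (in dB)."""
    return max(snr_db_by_pattern, key=snr_db_by_pattern.get)

# Hypothetical per-pattern measurements toward one client:
measurements = {
    "omni": 18.0,           # dipole-like baseline coverage
    "directional_n": 26.5,  # reflector steering gain toward the client
    "directional_e": 21.0,  # wrong direction: less gain, more interference
}
print(choose_pattern(measurements))  # directional_n
```

The real system must also weigh interference toward other clients and keep re-learning as conditions change, but the core trade is this argmax over candidate patterns.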
iFast offers benefits to 802.11a/b/g/n and 11ac Wave 2 MIMO devices, and enhances MIMO diversity gain and the possibility of frequency reuse.

LYNwave is demonstrating iFast technology at booth A0307 in World Trade Center Hall 1 at Computex Taipei 2016. For more information about iFast, you are welcome to visit us.

Fig. 1: Omnidirectional antenna pattern
Fig. 2: Combined directional antenna pattern
Friday 27 May 2016
Industry's First Universal Stylus Demonstration Planned for COMPUTEX 2016
The Universal Stylus Initiative (USI) announced today the industry's first demonstration of multiple USI styluses interfacing with a USI-enabled Windows PC, following the recently completed USI 1.0 Device and Stylus Specification. The demonstration is planned to coincide with COMPUTEX Taipei 2016 on Monday, May 30, 2016, 1500-1700 (CST) at the Taipei International Convention Center (TICC), 1 Hsin-Yi Road, Section 5, Taipei, Taiwan, in Room 103. Registration is free and open to the public.

"Since USI's founding a little over a year ago, the organization has doubled to over 30 members and published the USI 1.0 global interoperable stylus specification," said Peter Mueller, chairman, USI. "Our next milestone is a live multiple-stylus demonstration at COMPUTEX Taipei. The COMPUTEX demonstration is the first time multiple active styluses will be used with a touch-enabled device using the USI 1.0 specification."

Register for the live demonstration: https://www.surveymonkey.com/r/press_may16

The Universal Stylus Initiative (USI) defines industry-wide standards for interoperable communication between an active stylus and touch-enabled devices such as phones, tablets, and computing and entertainment platforms, enabling manufacturers to design products to a single standard rather than the variety of proprietary approaches now in use. USI 1.0 styluses will be able to communicate with different touch sensors and touch controller integrated circuits running on different operating systems, a new capability for the industry. USI 1.0 also fosters a consistent user experience while increasing the availability and consumer appeal of the active stylus, providing industry-wide interoperability along with functions and features not supported by current styluses.
Wednesday 25 May 2016
UMC Holds 2016 Japan Technology Forum
United Microelectronics Corporation (NYSE: UMC; TWSE: 2303) ("UMC"), a leading global semiconductor foundry, today held its 2016 Japan Technology Forum at the Tokyo International Forum. UMC's forum theme this year focuses on the foundry's "Innovation by Collaboration" business model, which leverages strategic partnerships to realize accelerated, mutual success in areas such as R&D, IP, market development and bringing customer products to timely volume production. The event also serves as a platform for UMC and its ecosystem partners to showcase their strengths in process technology, manufacturing, EDA, IP, testing, packaging and market applications to support Japanese IDM and fabless companies. UMC's CEO, Po Wen Yen, presented the opening remarks, while Ryo Ogura, president & CEO of New Japan Radio (NJR), delivered the guest keynote speech.

CEO Yen said, "Japan's high-tech segments are experiencing a new wave of growth in new applications such as automotive ICs, IoT, AR/VR, UAVs, medical devices and robotics. These diverse vertical markets require strong partnerships and comprehensive technologies in order to realize customized, application-specific solutions."

Mr. Yen continued, "UMC's success in delivering rapid results for our foundry customers is a product of our ability to closely collaborate with customers and supply chain partners. We collaborate with customers to create customized technologies that provide essential product differentiation in the marketplace, while also offering specialized IP and application platforms to streamline customer engagement with UMC. We look forward to bringing these competitive advantages to our Japan-based customers."

In addition to highlighting UMC's partnership model at the technology forum, the foundry will also showcase its competitive technology offerings, such as 14nm FinFET, volume-production 28nm high-k/metal gate, RFSOI, MEMS, 2.5D/3DIC, BCD, and automotive IC Grade 1 and Grade 0 manufacturing capabilities.
UMC operates a sales office in Tokyo and is a joint venture partner in Mie Fujitsu Semiconductor (MIFS), a 300mm foundry company in Mie prefecture, Japan. The company is also in the equipment move-in phase for its new 12" United Semi joint-venture fab located in Xiamen, China, which is scheduled for production in late 2016.

About UMC

UMC (NYSE: UMC, TWSE: 2303) is a leading global semiconductor foundry that provides advanced IC production for applications spanning every major sector of the electronics industry. UMC's robust foundry solutions enable chip designers to leverage the company's sophisticated technology and manufacturing, which include 28nm gate-last high-k/metal gate technology, ultra-low-power platform processes specifically engineered for Internet of Things (IoT) applications and the highest-rated AEC-Q100 Grade-0 automotive industry manufacturing capabilities. UMC's 10 wafer fabs are located throughout Asia and are able to produce over 500,000 wafers per month. The company employs over 17,000 people worldwide, with offices in Taiwan, mainland China, Europe, Japan, Korea, Singapore, and the United States. UMC can be found on the web at http://www.umc.com/.

Note from UMC Concerning Forward-Looking Statements

Some of the statements in the foregoing announcement are forward-looking within the meaning of U.S. federal securities laws, including statements about future outsourcing, wafer capacity, technologies, business relationships and market conditions. Investors are cautioned that actual events and results could differ materially from these statements as a result of a variety of factors, including conditions in the overall semiconductor market and economy; acceptance of and demand for products from UMC; and technological and development risks. Further information regarding these and other risks is included in UMC's filings with the U.S.
Securities and Exchange Commission, including its registration statements and reports on Forms F-1, F-3, F-6, 20-F and 6-K, in each case as amended. UMC does not undertake any obligation to update any forward-looking statement as a result of new information, future events or otherwise, except as required under applicable law.
Tuesday 24 May 2016
AVerMedia Mini-PCIe Frame Grabber Solutions on NVIDIA Tegra K1 Platform
AVerMedia today announced Mini-PCIe frame grabber solutions for the NVIDIA Tegra K1 platform.

DarkCrystal HD Capture Mini-PCIe C353 is a proven Mini Card frame grabber for various industrial applications. Features of the C353 include its small form factor and support for HDMI and VGA video capture up to 1080p30, with acceptance of video input at 1080p60. The C353 also comes in an extended-temperature version, the C353W, which can operate in the temperature range from -40 to +85 degrees C. Both the C353 and C353W are well suited to robotics, UAVs (i.e. drones), medical imaging, UGVs, AOI, and other video-enabled equipment for automation, AI and deep learning. For the detailed functional specifications of the C353 and C353W, please refer to the following links:

http://www.avermedia.com/professional/product/c353/overview
http://www.avermedia.com/professional/product/c353w/overview

As a leading expert in frame grabber solutions worldwide, AVerMedia now offers a Linux driver, OpenCV integration, and other support for the C353/C353W on the NVIDIA Tegra K1 platform, listed below:

- C353/C353W Linux driver pre-compiled for Linux4Tegra R21.4 (https://developer.nvidia.com/linux-tegra-r214)
- V4L2 API support
- GStreamer pipeline examples
- Reference code for integrating C353/C353W video capture with GPU/CUDA-optimized OpenCV on Tegra K1

The benefit of using the AVerMedia C353/C353W on the NVIDIA Tegra K1 platform is that it enables application developers to acquire video feeds from many other kinds of cameras and video devices through HDMI and VGA interfaces. This frees developers from a constraint of the NVIDIA Tegra K1 platform, which currently can only acquire video feeds through MIPI-CSI or USB interfaces.

Following this C353/C353W support on the NVIDIA Tegra K1 platform, AVerMedia is also working on more frame grabber solutions for the Tegra K1 and X1 platforms.
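Since the driver exposes the grabber through V4L2 and ships GStreamer pipeline examples, capture can be wired into OpenCV via a pipeline string along these lines. The device path, caps and resolution below are assumptions for this sketch, not AVerMedia's shipped examples:

```python
def v4l2_pipeline(device="/dev/video0", width=1920, height=1080, fps=30):
    """Compose a GStreamer pipeline string for a V4L2 frame grabber."""
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={width},height={height},framerate={fps}/1 ! "
        "videoconvert ! appsink"   # appsink hands frames to the application
    )

print(v4l2_pipeline())
```

In an OpenCV build with GStreamer support, this string can be passed as `cv2.VideoCapture(v4l2_pipeline(), cv2.CAP_GSTREAMER)`, after which `cap.read()` yields frames for further GPU/CUDA-optimized processing.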
Should you need early engagement with our work, please send us an email at Liwen.Liu@avermedia.com.

About AVerMedia Technologies

AVerMedia is the leader in digital video and audio convergence technology. Aside from its full line of TV tuners and gaming recorder products, AVerMedia provides frame grabbers, streaming encoders and video systems for consumer and corporate markets. As a leader in innovative manufacturing and environmentally friendly products, AVerMedia is also highly involved in community and social responsibilities. AVerMedia also partners with ODMs to develop AVerMedia's technologies for integrated applications.
Tuesday 24 May 2016
QSAN introduces XCubeSAN XS5200 series and XCubeDAS XD5300 series at Computex 2016
QSAN Technology, Inc. today announced the launch of its brand-new XCubeSAN XS5200 series and XCubeDAS XD5300 series at the upcoming Computex 2016 (booth J1223, Nangang Exhibition Center). XCubeSAN is QSAN's next-generation SAN storage platform, featuring the latest Intel Xeon D-1500 processors and the QLogic 2600 series Gen 5 (16Gb) Fibre Channel quad-port controller. XCubeDAS is QSAN's next-generation DAS (direct-attached storage) expansion enclosure product line; it fully adopts native 12G SAS 3.0 technology and can serve as an expansion enclosure for XCubeSAN while also providing storage capacity directly to servers.

"Both XCubeSAN and XCubeDAS are QSAN's commitment and answer to its customers: fully featured, enterprise-level SAN and DAS storage systems for SMB businesses. The XCubeSAN series is designed and optimized for I/O, scalability and reliability to accelerate enterprise mission-critical applications and enhance IT efficiency and agility," said Gordon Hsu, director of product management.

QSAN worked with Intel and QLogic to build its next-generation storage platform to the highest quality. Thanks to the integration of the best and latest silicon technologies from Intel and QLogic, the XCubeSAN XS5200 series can deliver up to 12,000 MB/s read and 8,000 MB/s write throughput and over 1,500,000 IOPS to accelerate enterprise I/O-critical applications, media and entertainment, large-scale surveillance, high performance computing and virtualized datacenters.

"Data center customers are seeking new levels of density, integration and intelligence in their storage systems," said Andrea Nelson, marketing director of storage in Intel's Data Center Group.
"The Intel Xeon processor D-1500 product family allows storage providers like QSAN to provide their customers with these important features and deliver greater efficiency and return on their investment.""QLogic Gen 5 Fibre Channel technology addresses the growing bandwidth requirements of today's highly virtualized data centers," said Vikram Karvat, vice president of products, marketing and planning, QLogic. "QLogic's innovative multi-port isolation architecture provides maximum IOPS and low latency to support business-critical applications, scaling performance and delivering the ultimate in reliability."Both XCubeSAN and XCubeDAS products come with a complete range of form factors, including 4U 24bay, 3U 16bay, 2U 12bay, and 2.5" high density 2U 26bay. Latest Intel Xeon Broadwell-DE CPU, DDR4 ECC memory, native 12Gb SAS 3.0, and dual host card design make XCubeSAN future-proof and can deliver astounding performance to meet the demands of all kinds of enterprise applications. XCubeSAN is also optimized for SSD drives to support auto tiering and SSD caching to further enhance performance, increase efficiency with a lower total cost of ownership. Every new feature on XS5200 series is future-focused, our performance and data security has been pushed to the limit to ensure we have a truly enterprise storage system. XS5200 series will feature SED drive support, iSCSI force field protection for mutant DDoS attack, and offer super capacitor module and M.2 flash module for immediate memory protection.Product models are offered in base units plus a variety of host cards to choose from to allow maximum flexibility and scalability to meet limited IT budgets and fit in all kinds of modern IT deployments. Please visit our website or contact our sales representatives for more information.XS5226-D
Monday 23 May 2016
Schurter has expanded its presence in Europe by creating a new subsidiary in Poland, Schurter Electronics Sp. z o.o.
SCHURTER has expanded its presence in Europe by creating a new subsidiary in Poland, SCHURTER Electronics Sp. z o.o., based in Warsaw. SCHURTER plans to strengthen its local presence with the newly opened Polish sales agency, which will offer a local design-in centre to support electronics design work in Poland.

SCHURTER's decision to expand into Poland was driven by the country's increasing demand for electronic components. Poland has announced ambitious economic targets and is paying special attention to supporting development in this sector, including electronics used in automotive, medical devices, renewable energy and industrial applications in general. These are exactly the application areas where SCHURTER offers its extensive expertise.

According to Mariusz Duczek, managing director of SCHURTER Electronics Sp. z o.o. in Warsaw, "Poland is experiencing its golden age again, and its electronics industry is changing now. The expansion to Poland, a country that has decided to invest heavily in developing its electronics industry, means excellent opportunities for SCHURTER. At the same time, I believe that the entry of a company with global experience and a huge range of electronic components will contribute to the development of the local industry. With this latest move, SCHURTER clearly indicates its commitment to becoming a leading industrial partner in the development of the Polish electronics industry."

The new Polish company reflects SCHURTER's collaboration policy and builds on industrial partnerships signed with other Polish companies. In fact, SCHURTER has been present in Poland for several years, represented through the well-known distributors Semicon and TME.

SCHURTER CEO Ralph Müller added: "The establishment of this new company will raise the partnerships SCHURTER has already established in Poland to a new level. Thereby we also support the intensive efforts of the Polish government to build a strong electronics industry.
At the same time we are strengthening our competitiveness by being able to involve local partners in our supply chain. The opening supports the fulfillment of our long-term strategic objectives and thus fits well with our expansion plans."The new sales company in Poland arises after the acquisition of the dutch Danielson Group in 2015, a manufacturer of touch - input systems , in a number of other acquisitions and start-ups worldwide.
Monday 23 May 2016
Winmate, Inc. to showcase latest industrial solutions for Industry 4.0 at Computex 2016
Winmate, Inc. will showcase its latest industrial solutions for the Internet of Things (IoT) and Industry 4.0 at Computex 2016, which takes place May 31st - June 1st in Taipei, Taiwan. The company will demonstrate solutions for industries operating in some of the most challenging environments at Booth #M1235, located to the left of visitor entrance N.

Winmate, Inc. aims to provide enterprise-ready solutions for vertical markets, focusing on HazLock, marine, warehouse and industrial automation, and healthcare. This year Winmate is ready to present a new series of Human Machine Interfaces (HMIs) for industrial and building automation, a new line of flat ECDIS Marine Panel PCs and Displays, Vehicle Mount Computers, a full IP67 Stainless Panel PC for ATEX Zone 2, and a 4K Medical Display.

"We're looking forward to demonstrating our latest solutions for Industry 4.0 because it's a win-win situation for solution providers and manufacturers," said Allan Lin, vice-president of Winmate, Inc. "Attendees will see a variety of applications that address their increasing demands for process automation and efficiency with a very compelling total cost of ownership."

Visit Winmate, Inc. at Booth #M1235 at the Computex 2016 exhibition, and its team will be more than happy to introduce the solutions in more detail.

About Winmate, Inc.

For more than 18 years, Winmate, Inc. has been a global leader in developing advanced rugged, mobile technologies for industries operating in some of the most challenging environments. From research and development to manufacturing and in-house testing, Winmate, Inc. manages the entire product development process to ensure you have access to the most robust, current, safe and rugged mobile technologies available.
Friday 20 May 2016
WebRTC Promotes Cross-Platform Video Messaging, Inspiring Creative IIoT Uses
Peer-to-peer (P2P) based Web Real-Time Communication (WebRTC) is an open standard created by the World Wide Web Consortium (W3C) to support HTML5 voice and video communication. Besides traditional P2P voice and video communication, WebRTC has diverse potential applications and can be used for video conferencing and IoT-related applications such as remote diagnostics and security surveillance.

To accelerate the standardization of IoT device connectivity, Intel, Microsoft, Cisco and other organizations established the Open Connectivity Foundation (OCF) in February 2016. Several large organizations within the OCF share a promising outlook for WebRTC growth and have already begun incorporating some of its specifications into the OCF standard, demonstrating WebRTC's development potential. WebRTC is not only an indispensable element of the IoT, but also leads the way for the development of real-time cross-platform video messaging applications.

A key strength of WebRTC is that it requires no additional software or plug-ins: only a web browser is needed to stream video and audio data and share information. This overcomes the technical barriers imposed by hardware platforms and operating systems (OS) and reduces development complexity. Furthermore, with support for HTML5 and codecs such as VP8, VP9 and H.264, WebRTC allows developers to easily build real-time P2P applications for different platforms with reduced coding effort.

P2P Communication Made Easy with Web Browsers

Alex Perng, General Manager of NEXCOM's IoT Business Unit, believes that more than 80 percent of internet data consists of unstructured data, and that the voice and video data contained within will increase at a staggering rate in the future. With WebRTC standardizing video and audio transmission, development towards WebRTC is inevitable.
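One detail worth noting is that while WebRTC standardizes the media path, it deliberately leaves signaling — the exchange of SDP offers, answers and ICE candidates — to the application, which is why any transport reachable from a browser will do. The sketch below models such a relay in memory; all names (`SignalingRoom`, `Signal`, `Envelope`) are hypothetical illustrations, not part of the WebRTC API.

```typescript
// Hypothetical in-memory signaling relay. WebRTC itself does not define
// signaling; applications typically relay these messages over WebSocket
// or HTTP before the P2P media path is established.
type Signal =
  | { kind: "offer"; sdp: string }
  | { kind: "answer"; sdp: string }
  | { kind: "ice-candidate"; candidate: string };

interface Envelope {
  from: string;
  signal: Signal;
}

class SignalingRoom {
  private inboxes = new Map<string, Envelope[]>();

  join(peerId: string): void {
    if (!this.inboxes.has(peerId)) this.inboxes.set(peerId, []);
  }

  // Queue a signal for the recipient; a real relay would push it
  // immediately over an open connection.
  send(from: string, to: string, signal: Signal): void {
    const inbox = this.inboxes.get(to);
    if (!inbox) throw new Error(`unknown peer: ${to}`);
    inbox.push({ from, signal });
  }

  // Drain everything queued for a peer.
  receive(peerId: string): Envelope[] {
    const queued = this.inboxes.get(peerId) ?? [];
    this.inboxes.set(peerId, []);
    return queued;
  }
}

// Classic offer/answer handshake, relayed between two peers:
const room = new SignalingRoom();
room.join("alice");
room.join("bob");
room.send("alice", "bob", { kind: "offer", sdp: "v=0 ..." });
const [offer] = room.receive("bob");
room.send("bob", "alice", { kind: "answer", sdp: "v=0 ..." });
```

Once the answer reaches the caller, each browser hands the SDP to its own peer connection object and media then flows directly between the peers; the relay drops out of the path.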
In addition, as most current voice and video streaming applications are built on non-standard frameworks, various dividing barriers exist, creating obstacles that cloud the all-things-connected vision idealized by Industry 4.0 and the Industrial IoT (IIoT) and further exemplifying the great potential of WebRTC.

With a positive outlook on WebRTC, NEXCOM has collaborated with Intel over the past two years to develop ToGazer, the first client/server-based real-time video conferencing software, which incorporates the WebRTC P2P communication model and expands it into a multipoint communication and collaboration platform. ToGazer supports voice and video communication, presentation uploads, desktop and file sharing, session recording and various other enterprise conferencing features.

ToGazer achieves cross-platform video conferencing in five ways. First, it utilizes the cross-platform nature of WebRTC, which allows users to join a conference on any device as long as a web browser is available. Second, it modifies the P2P architecture into a client/server model to support multipoint conferencing. Third, the platform uses the server to schedule conferences, provide privacy and record sessions. Fourth, ToGazer is optimized for Intel's platform to deliver the best possible quality. Lastly, ToGazer leverages an open source architecture, which greatly lowers costs.

"Video conferencing represents a milestone for NEXCOM in the WebRTC application development space, but conferencing is not the sole purpose," says Perng. "ToGazer originally focused more on video conferencing features. However, since its public introduction, many users have been creative in using it to support applications such as call centers, remote education services and online radio broadcasts. 
Take a call center application as an example: to provide online call center support, operators used to have to install expensive VoIP handsets and adjust the network to accommodate video and audio data, which is complicated and difficult to maintain compared to using WebRTC-based communication with just a PC and a microphone."

AR/VR Integrated WebGL Gives Birth to Innovative Industrial Applications

Perng emphasizes, "The WebRTC-based ToGazer video conferencing application is only the beginning. In the future, a great opportunity exists for ToGazer to earn a significant place in the industrial sector."

Although current industrial applications rarely involve video and audio transmission, browser-based IoT applications are far from a minority; even a simple ARM-based terminal device is capable of running a web browser. In addition to video and audio, WebRTC can transfer plain text and data, provide cross-hardware and cross-OS support, and run independently in a browser. These characteristics all match the needs of the IIoT, suggesting a bright future for WebRTC to flourish within the next two years.

Furthermore, most communication protocols in industrial environments lack support for video and audio transmission. As Industry 4.0 develops, machine-to-machine communication and communication between devices and MES/ERP systems will grow, increasing the demand for real-time voice and video communication. In that event, businesses can simply add WebRTC protocol support to their industrial protocols to fill the communication gap, skipping the need to modify the existing infrastructure.

Another point worth mentioning is that some businesses are already integrating WebGL technology into WebRTC to provide 3D image transmission, bringing virtual reality (VR) to browsers.
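The gap-filling idea of carrying machine data over WebRTC can be sketched as a gateway serializing readings as JSON for a data channel (which carries strings as well as binary), leaving the existing fieldbus untouched. The `MachineReading` shape and its field names below are illustrative assumptions, not part of any industrial standard or of NEXCOM's products.

```typescript
// Illustrative telemetry envelope for a WebRTC data channel; field names
// are hypothetical, not part of any industrial protocol.
interface MachineReading {
  machineId: string;
  metric: string;    // e.g. "spindle_rpm"
  value: number;
  timestamp: number; // Unix epoch milliseconds
}

// A data channel's send() accepts strings, so JSON is a natural fit.
function encodeReading(r: MachineReading): string {
  return JSON.stringify(r);
}

// Validate on receipt before trusting the payload.
function decodeReading(text: string): MachineReading {
  const r = JSON.parse(text) as MachineReading;
  if (
    typeof r.machineId !== "string" ||
    typeof r.metric !== "string" ||
    typeof r.value !== "number" ||
    typeof r.timestamp !== "number"
  ) {
    throw new Error("malformed reading");
  }
  return r;
}

// Round trip, as the message would cross the data channel:
const wire = encodeReading({
  machineId: "press-07",
  metric: "spindle_rpm",
  value: 1420,
  timestamp: 1464000000000,
});
const reading = decodeReading(wire);
```

Because the payload is plain JSON, the same messages could be bridged to an MES/ERP system without the receiving side knowing anything about the plant-floor protocol underneath.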
By combining this with augmented reality (AR) technology, it may become possible to steer micro robots through hazardous working environments, helping factory operators collect operational data from remote devices and unlocking countless industrial application possibilities.

Figure: WebRTC is a real-time P2P communication technology. Besides traditional voice and video communication, WebRTC has diverse potential applications and can be used for video conferencing and IoT-related applications such as remote diagnostics and security surveillance.

About NEXCOM

Founded in 1992, NEXCOM integrates its capabilities and operates six global businesses: IoT Automation Solutions, Intelligent Digital Security, Internet of Things, Interactive Signage Platform, Mobile Computing Solutions, and Network and Communication Solutions. NEXCOM serves its customers worldwide through its subsidiaries in five major industrial countries. Under the IoT megatrend, NEXCOM expands its offerings with solutions in emerging applications including IoT, robots, connected cars, Industry 4.0 and industrial security.

http://www.nexcom.com/