As a company that provides the architecture at the heart of modern computing, Arm aims to maintain its leading role in the emerging machine learning (ML) market by leveraging its existing advantages to drive the widespread application of ML on various edge devices.
At Computex 2019, Jem Davies, Arm Fellow, VP, and GM, Machine Learning Group, explained Arm's views and strategy on the development of the ML market, emphasizing that Arm is the only vendor in the market with a broad portfolio spanning CPU, GPU, and NPU, as well as strong ecosystem support. By adopting a total compute approach, Arm can provide the best integrated solutions to address today's challenges and unlock a wide range of ML applications.
ML is going everywhere
"There is no doubt that ML is going everywhere," Davies said. Arm estimates that there are 4 billion smartphones in the world, and 85% of them run ML only on the CPU or CPU+GPU. Looking at the most common use cases, ML is already running on the CPU, from Google Translate and bokeh focus effects on Instagram to speech recognition and 3D Secure login.
In addition, Davies stressed, "Some ML algorithms are used in areas that we never thought of before. For example, in voice recognition, traditionally we needed to perform audio processing, such as noise cancellation, microphone separation, and beamforming, and then run the ML algorithm. But now, you can throw the raw microphone data directly at the ML algorithm and it will clean up the noise itself. Or ML can tell you what's wrong when you are running, using air pressure sensors in your shoes."
"More and more we have seen that ML is used in a very disruptive way, and the trend surprises lots of people. With the huge amount of available data at the edges and in the cloud, we expect an explosion of creativity in the future, and Arm will strive to unleash the possibilities with our wide range of ML-optimized solutions."
ML is a software problem
From Arm's perspective, ML is fundamentally a software problem. "ML starts with the CPU, and every device that runs ML has a CPU, which runs the code or hands it to the GPU or NPU. That's why we enhance our Cortex-A and Cortex-M cores to run ML more efficiently, and also introduce dedicated ML processors to address the requirements for higher performance and power efficiency."
Davies said it is Arm's unique advantage to put the CPU/GPU/NPU designers and the software architects who optimize code for those processors in the same room, so that Arm can run RTL simulation of the code for all three processors. "Arm is capable of enabling hardware and software together seamlessly."
By taking a holistic view on this, Davies believes that Arm can provide the most flexible and integrated ML solutions to meet customers' different requirements. With a common hardware architecture, Arm aims to strengthen its software and ecosystem support to help accelerate ML deployment and overcome the fragmentation challenges the industry faces today.
"We have one of the biggest computing ecosystems, but for ML, it's quite different from the existing one, since there are lots of new players coming into this area," said Kathleen Kallot, Director, Machine Learning Ecosystem, ML Group. "It's important for us to engage with key partners. For example, we are partnering with Google to develop TensorFlow Lite Micro for embedded devices."
"In addition, as ML algorithm players are key to driving innovation, we also need to engage directly with them to make sure they can get the best performance out of our IP. We can expect lots of things coming this year and next, and the ecosystem is building very fast."
Arm's ML processor
As ML emerges, domain-specific computing has become a buzzword in the market. While Arm is a leading general-purpose computing technology provider, Davies stressed that Arm has been in the domain-specific game for 15 years, saying, "Of course, we provide CPUs, and as display functions became important, we developed GPUs and made them successful in the market. And now, as the market evolves, we are also moving into the ML processor segment."
In fact, it was Jem Davies who made the Mali GPU a success story at Arm, and now he wants to repeat that history in the ML game. "Some tasks that require higher efficiency might need domain-specific processors to run their specific workloads. For us, the ML processor is simply the domain-specific processor for neural network computing, handling matrix arithmetic and convolution."
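To make the quote concrete: the workload an NPU accelerates boils down to large numbers of multiply-accumulate operations. A minimal, illustrative sketch (plain Python, not Arm code) of a 2D convolution shows the nested multiply-accumulate pattern that dominates neural-network inference:

```python
def conv2d(image, kernel):
    """Naive 2D convolution with valid padding: the multiply-accumulate
    loop nest that NPUs are built to accelerate."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    # One multiply-accumulate per kernel element
                    acc += image[i + u][j + v] * kernel[u][v]
            out[i][j] = acc
    return out

# 3x3 averaging kernel over a 4x4 image -> 2x2 output,
# each output element being the average of a 3x3 neighbourhood
image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
kernel = [[1 / 9] * 3 for _ in range(3)]
print(conv2d(image, kernel))
```

Real networks run millions of such multiply-accumulates per inference, which is why a processor dedicated to this pattern can be far more power-efficient than a general-purpose core.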
"From a system perspective, we will not push people to use ML processors. Depending on different use cases, customers need to figure out what the best way is to run the ML code, and sometimes, maybe the CPU is enough."
As part of Arm's complete portfolio, the ML processor boasts an industry-leading power efficiency of 5 TOPs/W and outstanding performance of up to 4 TOPs. With multicore scalability, it can scale up to eight NPUs and 32 TOPs in a single cluster, or 64 NPUs in a mesh configuration. "Data compression technology is important for ML processor development. With our advantage in GPU and video, we will bring that into ML," Davies stressed.
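The quoted figures compose in a straightforward way; the back-of-the-envelope sketch below shows how. Note that the power number it derives is an inference from the published efficiency figure, not an Arm-published specification:

```python
# Figures quoted in the article
TOPS_PER_NPU = 4        # peak performance per NPU
NPUS_PER_CLUSTER = 8    # maximum NPUs in a single cluster
TOPS_PER_WATT = 5       # quoted power efficiency

# 8 NPUs x 4 TOPs each = the 32 TOPs per cluster the article cites
cluster_tops = TOPS_PER_NPU * NPUS_PER_CLUSTER

# Rough upper bound on cluster power if peak efficiency were sustained
# (an illustrative estimate only, not a published figure)
cluster_watts = cluster_tops / TOPS_PER_WATT

print(cluster_tops, cluster_watts)
```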
Taking a holistic approach to prevail in the ML market
With so many existing and new players jumping into the NPU market, Davies is confident that Arm will still prevail in this area, saying, "Different from other companies, we consider ML a software problem, not a hardware one. That's why we invest lots of resources in making Arm NN good and easy to use for developers, since making software run efficiently on Arm cores is our job."
Though Arm was not the first to move into the NPU market, he said this gives Arm a "second-mover advantage": it can learn from those who claim to be first and provide something better.
As the ML market is highly fragmented with many different software and hardware architectures, Davies said, "The ecosystem wants one solution and tolerates only two or three; there is no way to have two hundred solutions in the market. So we will see lots of players go out of business. In particular, most of the start-ups are focusing on developing ML hardware, and they don't have enough resources to write software and build the ecosystem."
On the other hand, Davies has seen amazing innovation in the ML algorithm space: "These new players are very important to our ecosystem, and we will do our best to leverage their expertise to enable more possibilities."
He summarized that, to truly unlock the next generation of ML use cases, the building blocks need to be optimized and built together from the ground up, and Arm can provide such a total compute solution, spanning hardware, software, and ecosystem, to fulfill the market's needs.
Jem Davies, Arm Fellow, VP, and GM, Machine Learning Group
Arm "Total Compute" provides the best integrated solutions to enable huge possibilities of ML applications
DIGITIMES' editorial team was not involved in the creation or production of this content.