Best AI safety is ensuring good guys innovate much faster: former Foxconn and MIH CTO William Wei

Judy Lin, DIGITIMES Asia, Taipei

Although the world's first Artificial Intelligence (AI) Safety Agreement was signed in the UK last week, many top scientists with deep knowledge and expertise in AI voiced disagreement and concern over what they see as overblown fear. Among them is William Wei, former Foxconn and MIH chief technology officer and now executive VP and CMO of Taiwan-based AI startup Skymizer.

"In the event of AI regulation is imposed, guess whose innovation will be harnessed, the good guys or the bad guys?" Wei asked. "Regulation is a double-edged sword, even though the regulators have good intentions to prevent risks, the more likely scenario is that only the good guys listen to them and stop innovating, while the bad guys grab the chance and continue to progress with their purposes."

Generative AI's rapid development reflects the dilemma of innovation, Wei said. "Open source is the most powerful force in technology innovation today. Governments and big companies are now worried that they cannot control it, so they want to slow down the progress of AI innovation. If they really need to regulate AI, they should put the regulations on the application and deployment side, not on fundamental R&D."

Overhyped fears do more harm than good

The timing is also in question. For many AI experts, the AI revolution, which they regard as a fourth industrial revolution that will change the way people live and work, is only just beginning. Wei is not the only one arguing against regulating AI at such an early stage.

Andrew Ng, founder and CEO of LandingAI, said in a post on X that he believes overhyped fears about AI leading to human extinction are causing real harm: young students are discouraged from entering AI because they don't want to contribute to human extinction, and bad regulation, such as requiring licenses for large models, would crush open source and stifle innovation. An open letter signed by the tech community calls for a spectrum of approaches, from open source to open science, and for coordinated efforts by scientists, tech leaders, and governments to work on responsible AI.

Although Elon Musk, CEO of Tesla, has called AI the "biggest threat to humanity," his new AI company xAI, ironically, continues to train its Grok large language model.

Experts generally agree that coordinated efforts are needed to prevent risks, but they insist AI technology must not be demonized or hijacked by political or special-interest agendas.

According to the Financial Times (FT), several key AI players, including OpenAI, Google DeepMind, Anthropic, Amazon, Mistral, Microsoft, and Meta, signed the landmark, though not legally binding, document that concluded the two-day summit.

Representatives from 28 countries, including the US, China, and the EU, participated in the AI Safety Summit to tackle the risks of frontier AI models. In addition to the United Kingdom and the United States, the governments that signed included Australia, Canada, the European Union, France, Germany, Italy, Japan, Singapore, and South Korea. China, although present at the summit, was not a signatory.

Regulating who and what?

Wei started his career in 1993 at Steve Jobs's NeXT Computer, whose OPENSTEP operating system and SDK, built on open-source technologies, became the basis of today's iOS and macOS SDKs. He then spent a decade at Apple Inc. after NeXT was acquired in 1997. A firm believer in and active participant in open platforms, he witnessed the key role open-source platforms played in the transitions from the PC era to the Internet era, and then to the mobile era.

"If regulation is necessary, then human beings should be the subject of such regulation, not AI," said Wei. "Today's AI is only math; it doesn't even care if you unplug it. Only human beings would have the motivation to use AI to destroy the enemies or competitors in order to secure their own survival."

The whole process of deliberating such regulations must be open and transparent, Wei said; otherwise, it will be a disaster.

So much happens on the dark web each day that governments have no control over, said Wei, who laments that the latest AI-Armageddon hype merely does special interests a favor and has no power over countries or dark forces that never play by the rule of law.

"Frontier AI is now data-driven and developed on the neural network, but it has no purpose or intention to do harm to humanity unless trained by humans for that purpose," said Wei. "So instead of AI, it is the people that should be regulated; But isn't that what our current law has been doing all along?"

Generative AI is software 3.0, an unstoppable megatrend, but future innovation must be driven by open source and open platforms, said Wei. "Open source is so powerful that all the big companies trying to build walled gardens simply cannot keep up with its speed of innovation."

AI arms race

Right before the AI Safety Summit, US President Joe Biden issued an executive order to evaluate and mitigate the risks of artificial intelligence (AI) systems, aiming to ensure safety, security, and trust while promoting an innovative, competitive AI ecosystem that supports workers and protects consumers. The US National Telecommunications and Information Administration (NTIA) will prepare a report assessing the risks and benefits when model weights, the arrays of numbers that form the linchpin of AI models, are published online or "open-sourced."

"Open-source materials, and the communities that create them, can drive innovation. But these model weights may also pose risks if they fall into the wrong hands," the press release of the US Department of Commerce sounded very much prone to do something on open source.

The Executive Order requires "companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster."

The US government, specifically, is asking for information regarding "any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations."
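For a sense of scale, a common rule of thumb puts a model's total training compute at roughly 6 x parameters x training tokens floating-point operations. The minimal sketch below applies that heuristic to two hypothetical runs; the model sizes and token counts are illustrative assumptions, not figures from the order or any disclosed training run:

```python
# Rough check of a training run against the executive order's 1e26 FLOP
# reporting threshold, using the common ~6 * params * tokens estimate.
# The model sizes and token counts below are illustrative assumptions.

REPORTING_THRESHOLD_FLOP = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6ND rule of thumb."""
    return 6.0 * params * tokens

hypothetical_runs = {
    "70B-parameter model on 2T tokens": training_flops(70e9, 2e12),
    "1T-parameter model on 15T tokens": training_flops(1e12, 15e12),
}

for name, flop in hypothetical_runs.items():
    status = "reportable" if flop >= REPORTING_THRESHOLD_FLOP else "below threshold"
    print(f"{name}: {flop:.1e} FLOP -> {status}")
```

Under this estimate, even a trillion-parameter model trained on 15 trillion tokens lands at about 9 x 10^25 FLOP, just under the reporting line, suggesting the threshold is aimed at runs larger than anything publicly documented today.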

Information is also required on "any computing cluster that has a set of machines physically co-located in a single data center, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second [FLOPS] for training AI."

However, according to Datacenter Dynamics, it is not clear what precision of FLOPS the government expects companies to benchmark against: FP64, FP32, FP16, or even FP8 (the lower the precision, the higher the FLOPS figure). AI companies usually report their performance at 16-bit precision, and this is the most likely metric the White House is using.
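To illustrate how much the unspecified precision matters, the sketch below estimates how many accelerators a cluster would need to cross the order's 10^20 FLOPS line at different precisions. The per-GPU throughputs are approximate dense figures for an Nvidia H100-class part and are assumptions for illustration only:

```python
# How benchmark precision changes a cluster's headline FLOPS, and how many
# GPUs it takes to cross the order's 1e20 FLOPS reporting line.
# Per-GPU throughputs are approximate dense H100-class figures (assumptions).

CLUSTER_THRESHOLD_FLOPS = 1e20  # theoretical maximum operations per second

per_gpu_flops = {
    "FP64": 34e12,                   # ~34 TFLOPS
    "FP16 (tensor cores)": 990e12,   # ~990 TFLOPS
    "FP8 (tensor cores)": 1979e12,   # ~1,979 TFLOPS
}

for precision, flops in per_gpu_flops.items():
    gpus_needed = CLUSTER_THRESHOLD_FLOPS / flops
    print(f"{precision}: ~{gpus_needed:,.0f} GPUs to reach 1e20 FLOPS")
```

Measured at FP64, the line implies a cluster of roughly three million such GPUs; at FP16 it is closer to a hundred thousand, which is why the choice of benchmark precision changes who has to report.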

Besides imposing the order on US companies and organizations, US Secretary of Commerce Gina Raimondo used her speech at the AI Safety Summit to call for coordinated action among allies: "It will require the commitment of every country to prevent misuse and ensure that dangerous AI technologies do not fall into the wrong hands. We cannot be daunted by that. Instead, we must be called to action, together."

While AI is now politicized, some geopolitical experts have long anticipated an "AI war." Former US Secretary of State Henry Kissinger has warned of a potential AI arms race between the US and China. The US has already tightened exports of semiconductor hardware, such as the Nvidia, AMD, and Intel GPUs used to develop generative AI and large language models, to countries it sees as threats. Will there be new restrictions on the software side to impede open-source innovation in China?

"Well, we can only hope that the new regulations do not slow down the innovations of the good guys who could have been the people to save humanity," shrugged Wei. "Since we have no control over the bad guys, the best thing we can do is to help the good guys innovate faster than the other side."