MOST announces AI R&D guidelines

Bryan Chuang, Taipei; Adam Hwang, DIGITIMES Asia

The Ministry of Science and Technology (MOST) has announced AI Technology R&D Guidelines in a bid to create a reliable AI R&D environment in line with international trends and to provide direction for Taiwan's AI researchers.

Considering AI's disruptive innovations in many areas, such as biomedicine, autonomous vehicles and education, as well as its potential negative impact on economic, social and political life, many countries and organizations have established ethics standards for AI R&D, such as the EU's Ethics Guidelines for Trustworthy AI, the OECD's Principles on Artificial Intelligence and the IEEE's Ethically Aligned Design, Version II, MOST noted.

MOST said its guidelines are based on three core values: (1) human-centered: an AI-based society should respect human dignity, rights and freedom, and AI applications should promote human welfare and raise living standards; (2) sustainable development: AI R&D should seek a balance among economic growth, social progress and environmental protection to achieve co-existence and common prosperity among human beings, society and the environment; (3) diversity and inclusion: AI R&D should create an AI-based society embracing diverse values and backgrounds through interdisciplinary dialog mechanisms.

There are eight guidelines derived from the three core values:

(1) Common good and well-being: AI should serve the common good and well-being of human beings, society and the environment.

(2) Fairness and non-discrimination: R&D of AI hardware, software, algorithms and related decision making should respect human dignity and rights to avoid risks of prejudice and discrimination.

(3) Autonomy and control: As AI is applied to assist human decision making, developers of AI hardware, software and algorithms should ensure that humans retain complete and effective autonomy and control over such technologies.

(4) Safety: R&D staff should ensure the stable and safe operation of AI hardware, software and algorithms, including risk control and monitoring, to build a reliable AI environment.

(5) Privacy and data governance: Effective data governance is crucial to protecting privacy; AI R&D staff should therefore collect, process and use personal data in compliance with relevant regulations.

(6) Transparency and traceability: In fairness to parties affected by AI decision making, information on the development and application of AI hardware, software and algorithms, such as modules, mechanisms, parameters and computing, should be disclosed at least to the extent needed for a general understanding of how AI decisions are made. In addition, the data, data labeling and algorithms used in AI decision making should be appropriately recorded and stored so that they remain traceable by affected parties for relief and clarification.

(7) Explainability: AI decision making should be presented in a way that is explainable to users of AI hardware, software and algorithms as well as to parties affected by the decisions.

(8) Accountability and communication: For AI hardware, software and algorithms, mechanisms should be established for explaining AI decision-making processes and consequences, as well as for accountability, communication and feedback.