Lessons from South Korea as Taiwan prepares AI basic law with clearly defined terms

Daniel Chiang, Seoul; Rodney Chan, DIGITIMES Asia

South Korea is set to implement the Basic Act on the Development of Artificial Intelligence and the Establishment of the Foundation for Trustworthiness (the AI Basic Act) in January 2026. However, since the bill's passage, numerous controversies have emerged, particularly over how various concepts and regulated entities are defined. Industry observers emphasize that the legislation must focus on promoting industrial development without imposing excessive obligations and responsibilities on businesses.

First, it is essential to clearly understand the core functions of the AI Basic Act.

The head of the AI legal system division at the Korea Legislation Research Institute (KLRI) explained that because AI technology is advancing so rapidly, the bill covers many areas that are difficult to define precisely, which makes South Korea's AI Basic Act appear overly broad.

The KLRI researcher noted that it is critical to clarify the level of regulation appropriate for the AI Basic Act as a basic law: which parts should be delegated to more specific laws for supervision, and how the basic law should coordinate with other legal frameworks. Given the act's extensive scope, its formulation must consider how it connects with other laws and with the legal system as a whole.

Therefore, regulated subjects should be clearly defined, while some ambiguity can remain in the scope of obligations and responsibilities to preserve a minimum of operational flexibility. Packing excessively detailed controls into subordinate laws or guidelines could lead to overregulation.

A professor from Hanyang University pointed out that South Korea's AI Basic Act establishes three main regulatory criteria: high-impact AI based on application domains, high-performance AI based on computational volume, and generative AI based on functional characteristics.

Possible issues with the AI Basic Act

Each criterion alone is not especially stringent, but when two or all three apply simultaneously, the result is overlapping controls and an excessive burden on businesses.

Notably, South Korea classifies AI-regulated parties simply into "developers" and "users." This simplistic categorization places all obligations on AI businesses (both developers and users) without considering differences in their roles and capabilities.

For example, for high-impact AI systems that significantly affect human life, safety, or constitutionally protected fundamental rights, both developers and users must first assess whether the AI qualifies as high-impact. They are required to conduct prior impact assessments, fulfill transparency obligations (such as informing users in advance that services operate based on AI), and ensure safety and reliability throughout the service lifecycle.

Violations of the transparency obligations can draw fines of up to KRW30 million (approx. US$21,657), and compliance can be enforced through data submission requirements, onsite inspections, corrective actions, and suspension orders.

The professor believes the core philosophy of the AI Basic Act is balancing development and regulation by prioritizing trust in AI to secure national competitiveness. However, the basic act, as it currently stands, may struggle to achieve this balance.

A lawyer similarly highlighted that the responsibility for fulfilling the transparency obligation of "prior notice"—whether it lies with the developers or users—remains unclear. Implementing the law as it is now could trigger contractual disputes.

Additionally, the definition of "user" is ambiguous. Taking medical diagnostic systems as an example, stakeholders include developers who create CT and MRI analysis software, manufacturers integrating these functions into medical devices, hospitals and physicians using the equipment for diagnosis, and patients who are directly affected. Yet, no clear delineation exists as to who qualifies as the "user."

Overall, South Korea's AI Basic Act still faces significant controversy over key aspects such as defining regulated entities, allocating responsibilities, and determining regulatory scope.

Looking ahead, Taiwan must strive to establish a clear and predictable regulatory framework when drafting related AI laws to avoid vague provisions that cause disputes or stifle industrial innovation.

Article edited by Jack Wu