Industry calls for delay as EU's AI crackdown begins

Ollie Chang, Taipei; Emily Kuo, DIGITIMES Asia

Credit: AFP

The EU has released the Code of Practice ("Code") for General-Purpose AI Models (GPAI), a preparatory measure for the AI Act set to take effect on August 2, 2025. The Code aims to guide companies in complying with the upcoming regulations. However, the timeline is tight, and the industry is calling for a delay in enforcement, claiming that the preparation time is insufficient.

Originally scheduled for release in May, the Code was delayed but finalized in July. The effort was led by the European Commission (EC) and developed in collaboration with AI labs, tech companies, academia, and digital rights organizations. The Code focuses on transparency, copyright protection, and enhanced safety and safeguards for advanced models. Models such as OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini all fall under the GPAI classification.

Henna Virkkunen, executive vice president for digital policy at the EC, stated that the Code ensures that AI innovation remains safe and transparent. The EU is currently conducting risk assessments to determine whether the Act can take effect as scheduled in August. Companies are encouraged to sign the Code voluntarily, but violators of the AI Act may face fines of up to 7% of their global annual revenue. The number of companies willing to sign has not yet been disclosed.

The EU will require companies to disclose extensive information about their models, including training processes, data volume and sources, energy usage, computational resources, distribution methods, and licensing. The most contentious requirement among companies involves disclosing the sources of training, testing, and validation data, with an explicit ban on the use of pirated content.

Companies must also specify where their data comes from, whether via web crawlers, publicly available datasets, non-public datasets obtained from third parties, user data, privately sourced datasets, synthetic data, or other channels. They must further indicate whether inappropriate data sources, such as involuntarily shared intimate images or child abuse content, have been identified and removed.

Developers must respect creators' requests to exclude copyrighted content and must implement mechanisms to address AI-generated copyright violations. For high-capability models, companies are required to monitor deployment continuously and allow independent external evaluators access to model systems for audit purposes.

The AI Act defines four risk levels: unacceptable, high, limited, and minimal. The GPAI regulations will roll out in stages, with rules for high-risk applications such as facial and biometric recognition taking effect in August 2026.

Forty-five European companies, including ASML, Mistral AI, SAP, Philips, Airbus, and Mercedes-Benz, signed an open letter urging the EU to delay enforcement of GPAI and high-risk AI regulations by two years to give the industry more time to prepare.

According to Euronews, companies that sign the Code will not be expected to fully comply by August 2. The EU is also considering delaying certain obligations for high-risk systems. For GPAI models that were launched in Europe before the AI Act takes effect, companies will be granted a 36-month transition period to bring their products into compliance.

Article edited by Jack Wu