US firms preparing for the EU AI Act will have an advantage

The European Parliament, the Council of the European Union and the European Commission reached consensus on 8 December on the final contours of what will be the world's first comprehensive regulation of artificial intelligence: the EU AI Act (AIA).

Although the final text has not been published, it appears that the regulation will affect a wide variety of organizations, including US-based ones with little or no presence in the EU.

Combined with existing EU regulatory laws such as the General Data Protection Regulation, the AIA is poised to have a significant impact on US businesses across all sectors and industries.

Risks

A risk-based approach is one of the key features of the AIA. Under the proposed framework, some AI systems, such as those posing an unacceptable risk to fundamental rights, would be banned outright within the EU, while others would be categorized as posing limited or high risk.

There are several open questions and takeaways for businesses in the US. Even once the AIA becomes binding law, it will be some time before it takes effect.

The final text of the AIA is likely to be published in the EU Official Journal in early 2024. The regulation will likely become fully applicable two years after its entry into force, although some provisions may take effect earlier. That means the law may not be fully in effect until 2025 or 2026, an eternity in the AI world.

Systems identified as presenting limited risk will be subject to certain transparency obligations, while those deemed high risk will face more stringent obligations, such as carrying out risk assessments, adopting certain governance structures, and maintaining a certain level of cybersecurity.

Examples of high-risk systems include certain medical devices, systems used in recruitment and human resources, and systems that manage critical infrastructure such as water, gas, and electricity. The proposed framework also addresses the use of general-purpose AI and foundation models.

For high-impact, general-purpose AI, organizations will face additional obligations, such as conducting model evaluations, assessing systemic risks, performing adversarial testing, reporting serious incidents, ensuring cybersecurity, and reporting on energy efficiency.

AI Evolution

As AI continues to develop at breakneck speed, the AIA framework may become outdated and out of step with AI’s trajectory.

We also do not know to what extent the AIA will reach US businesses that have no operations in the EU. Initial drafts indicate that the law will apply to providers who place AI systems on the market or put systems into service within the EU, regardless of whether those providers are established in the EU.

Initial drafts also indicate that the AIA will apply to providers and users of AI systems outside the EU when the output produced by the system is used in the EU.

The latter standard could affect a wide range of companies whose AI-generated output reaches users located within the EU, a potentially broader reach than even the GDPR.

A provider is defined in previous drafts as any natural or legal person who develops an AI system, or has one developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.

Requirements

US businesses, many of which use general-purpose AI to create products and services, must also closely monitor the AIA's requirements regarding the use of general-purpose AI. The AIA will likely require US firms to carefully manage risk with respect to general-purpose AI, including providing, in some cases, technical documentation and detailed summaries of the content used to train such systems.

This can be difficult for many organizations, as there is little visibility into how current general-purpose AI systems are trained or developed. In addition, the AIA may require larger general-purpose AI models that pose a systemic risk to undergo additional testing, including model evaluations and adversarial testing.

The AIA may ultimately determine systemic risk based in part on the amount of compute used to train a model, measured in floating-point operations (FLOPs). This could mean that models larger than GPT-3.5 could be seen as posing a systemic risk, potentially affecting businesses that rely on such technology as the backbone of their products and services.
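For a rough sense of how a compute threshold of this kind works, the sketch below estimates training compute using the widely cited heuristic of about six floating-point operations per parameter per training token. The 10^25 FLOP threshold reflects figures reported around the December political agreement; because the final text is unpublished, treat the threshold, the heuristic, and all the numbers here as illustrative assumptions rather than legal guidance.

    # Back-of-the-envelope training-compute estimate, for illustration only.
    # Assumes the common heuristic C ~ 6 * N * D, where N = parameter count
    # and D = training tokens. The 1e25 FLOP threshold is an assumption based
    # on figures reported at the time of the political agreement, not final law.

    ASSUMED_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimate_training_flops(num_parameters: float, training_tokens: float) -> float:
        # Rough total training compute via the 6 * N * D heuristic.
        return 6.0 * num_parameters * training_tokens

    def above_assumed_threshold(num_parameters: float, training_tokens: float) -> bool:
        # Screen a model against the assumed compute threshold.
        flops = estimate_training_flops(num_parameters, training_tokens)
        return flops >= ASSUMED_SYSTEMIC_RISK_THRESHOLD_FLOPS

    # Hypothetical example: a 175-billion-parameter model trained on 2 trillion tokens.
    flops = estimate_training_flops(175e9, 2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")          # ~2.10e+24
    print("Above assumed threshold:", above_assumed_threshold(175e9, 2e12))  # False

On these assumed numbers, such a model would fall below the threshold, while a modestly larger model or training run would not, which is why businesses building on frontier models should watch closely where the final line is drawn.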

While the final scope and impact of the AIA on US businesses remain open questions, businesses can prepare now by developing, implementing, and maintaining an AI governance framework.

This will help ensure responsible development and deployment of AI in the enterprise while minimizing risk. It will also help future-proof organizations against potential compliance issues.

AI governance varies between businesses, but it typically involves creating an AI registry of current and potential future use cases; standing up a cross-functional AI governance committee; adopting robust policies, risk tolerances, standard operating procedures, and transparency mechanisms; and promoting a culture of responsible AI use. A minimal sketch of the registry component follows.
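As one concrete illustration, the sketch below tracks AI use cases alongside an internal risk tier and a flag for reliance on general-purpose AI. The field names and tiers are hypothetical design choices for illustration, not categories prescribed by the AIA.

    # Minimal sketch of an internal AI use-case registry, one common building
    # block of an AI governance framework. All field names and risk tiers are
    # hypothetical choices, not terminology prescribed by the AIA.
    from dataclasses import dataclass, field

    @dataclass
    class AIUseCase:
        name: str                        # e.g., "resume screening assistant"
        owner: str                       # accountable business unit or person
        risk_tier: str                   # internal tiers, e.g., "minimal", "limited", "high"
        uses_general_purpose_ai: bool = False

    @dataclass
    class AIRegistry:
        entries: list = field(default_factory=list)

        def register(self, use_case: AIUseCase) -> None:
            self.entries.append(use_case)

        def high_risk(self) -> list:
            # Surface entries that may warrant risk assessments and stricter controls.
            return [e for e in self.entries if e.risk_tier == "high"]

    registry = AIRegistry()
    registry.register(AIUseCase("resume screening assistant", "HR", "high", True))
    registry.register(AIUseCase("marketing copy drafts", "Marketing", "limited", True))
    print([e.name for e in registry.high_risk()])  # ['resume screening assistant']

Even a simple inventory like this gives a governance committee a single place to see which use cases may fall into the AIA's high-risk category and which rely on general-purpose AI.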

Organizations that get it right can gain market share and meet the growing needs of customers, partners, and regulators, who increasingly expect AI to be developed and deployed only in a responsible, safe, and ethical way.

This article does not necessarily reflect the views of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Peter Stockburger is a managing partner in Dentons’ San Diego office, a member of the Venture Technology and Emerging Growth Group and co-head of the Autonomous Vehicles practice.
