The EU's first AI Regulation (Regulation (EU) 2024/1689 on artificial intelligence) entered into force on 1 August 2024. The Act aims to strike a balance between reducing AI-related risks and promoting the use of AI, whilst ensuring the EU remains a global leader in AI innovation and investment. Businesses have up to three years to comply fully with the legislation, with most key obligations applying within 24 months of entry into force.
What it does
The Act focuses heavily on protecting people's health, safety and fundamental rights from the risks associated with AI. It categorises AI systems into four risk categories. This risk-based approach reflects a key objective of the EU: to remain competitive by regulating AI proportionately. It is designed to ensure that AI is applied safely and that the obligations imposed by the legislation do not act as a deterrent to the safe application of AI.
Who is affected?
The Act applies to providers, distributors and deployers of AI systems used in a professional capacity in the EU, as well as to third-country providers of AI whose system outputs are used in the EU.
What Risk Categories are Involved?
Unacceptable Risk
AI systems falling within the Unacceptable-Risk category are prohibited. The category covers systems that pose a threat to the safety, livelihoods or fundamental rights of citizens. Examples given in the Act include social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), and systems that manipulate human behaviour to circumvent users' free will.
High Risk
The majority of the Act focuses on high-risk AI systems, which face strict regulation. These include AI used in critical infrastructure, education and vocational training, employment and worker management, access to essential public and private services, law enforcement, migration and border control, and the administration of justice.
Limited Risk
This category is smaller than the other three and its obligations are lighter, focusing on transparency. Developers and providers must ensure that end-users know they are interacting with AI. For example, users should be made aware that they are dealing with AI when engaging with a chatbot or viewing a deepfake.
Minimal Risk
Systems which do not fall under any of the other three categories are unregulated. This category includes the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters.
Obligations of High Risk AI providers
High-risk AI systems are subject to strict requirements before they may be placed on the market, including risk management, data governance, technical documentation and record-keeping, transparency towards deployers, human oversight, and accuracy, robustness and cybersecurity standards, together with a conformity assessment.
Enforcement and Penalties
The European AI Office was established by the European Commission to oversee the AI Act's enforcement at EU level. At national level, EU Member States must designate their own competent authorities to enforce the rules in their countries by 2 August 2025. Companies found not to be in compliance with the Act could face fines of up to €35 million or 7% of global annual turnover, whichever is higher.
Conclusion
This legislation will be welcomed by a wide variety of businesses seeking to capture the benefits of AI. The balance the Act strikes between risk mitigation and promotion of AI systems is designed to give developers and users legal certainty while encouraging market uptake of AI. Companies intending to use AI in the EU should familiarise themselves with the Act, understand the risk categories, and comply with all applicable obligations to avoid hefty penalties and successfully leverage AI technologies.