The EU AI Act establishes a new legal framework designed to promote artificial intelligence innovation while safeguarding fundamental rights and safety.
AI systems are classified into four levels of risk: unacceptable (prohibited), high risk, limited risk, and minimal or no risk. U.S. enterprises must comply if they provide AI-related services in the EU, incorporate AI into goods sold by EU-based companies, or handle EU citizens’ data.
High-risk AI systems, such as those used in employment, education, and healthcare, must meet stricter requirements, including using high-quality data, adopting risk management to address vulnerabilities, guaranteeing human oversight, and satisfying robust standards for accuracy, resilience, and cybersecurity. Non-compliance can result in hefty fines.
The Act is being implemented in phases, with full application planned by August 2027. U.S. companies should prepare now by creating AI governance frameworks, conducting risk analyses, reviewing codes of practice, and setting up monitoring and testing systems.
The EU AI Act aims to ensure the responsible development and use of AI technology by balancing the protection of fundamental rights and public safety with the encouragement of artificial intelligence innovation. Established by the European Union, this legal framework affects U.S. companies in several respects.
How Are U.S. Businesses Affected By the EU AI Act?
Even without a physical presence in the EU, U.S. companies that operate on or provide AI-related services to the EU market must follow the EU AI Act. This includes companies that process data on EU citizens or incorporate AI technologies into goods sold by EU-based corporations.
The AI Act applies to AI users (deployers) acting in a professional capacity, although its main focus is on AI providers (developers). Providers of high-risk AI systems, such as those used in law enforcement, employment, and critical infrastructure, as well as providers of general-purpose AI (GPAI) models, which have broad uses across many sectors, carry additional responsibilities.
The Act takes a risk-based approach, grouping artificial intelligence systems by their potential impact on security, safety, and fundamental rights. It also prohibits outright unacceptable-risk AI practices such as real-time remote biometric identification systems and social scoring.
What To Know About Risks and Compliance With the EU AI Act
Under this risk-based approach, the EU AI Act divides artificial intelligence systems into four tiers that determine providers’ and deployers’ responsibilities:
AI systems deemed to pose an unacceptable risk are banned outright, including social scoring systems and manipulative or deceptive techniques.
High-risk artificial intelligence systems include the use of AI in employment, education, healthcare, and other critical industries.
These systems must meet strict compliance requirements, including:

- Training models on high-quality data
- Implementing robust risk management processes
- Maintaining thorough technical documentation
- Ensuring meaningful human oversight
- Meeting strict criteria for accuracy, resilience, and cybersecurity
Limited-risk AI systems, such as chatbots, carry specific transparency obligations, like disclosing to users that they are interacting with an AI.
Most artificial intelligence applications fall into the minimal or no risk category and pose little or no threat to fundamental rights or safety.
This risk-based approach helps the European Commission and Member States concentrate their regulatory efforts on the AI systems most likely to cause harm, promoting both innovation and confidence in AI applications that benefit society.
By meeting these legal requirements, businesses can ensure they are building and deploying trustworthy AI that upholds fundamental rights, data protection, and safety standards.
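To make the four tiers concrete, here is a minimal Python sketch that summarizes the taxonomy described above. The tier treatments and example use cases come from this article; the data structure and names are only an illustration, not anything defined by the Act itself.

```python
# Illustrative summary of the EU AI Act's four risk tiers, as described above.
# The structure and naming are our own; the Act defines the tiers in prose.
RISK_TIERS = {
    "unacceptable": {
        "treatment": "prohibited outright",
        "examples": ["social scoring", "manipulative or deceptive techniques"],
    },
    "high": {
        "treatment": "strict requirements (data quality, risk management, oversight)",
        "examples": ["employment", "education", "healthcare"],
    },
    "limited": {
        "treatment": "transparency obligations",
        "examples": ["chatbots that must disclose they are AI"],
    },
    "minimal_or_none": {
        "treatment": "no specific obligations",
        "examples": ["most everyday AI applications"],
    },
}

for tier, info in RISK_TIERS.items():
    print(f"{tier}: {info['treatment']} (e.g., {', '.join(info['examples'])})")
```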
What Are the Penalties for Non-Compliance With the EU AI Act?
Emphasizing the need to follow its AI regulations, the EU AI Act establishes a tiered system of sanctions for non-compliance. Fines vary according to the severity and type of infringement and can be imposed on distributors, importers, manufacturers, and AI providers.
Most Serious Violations
Deploying prohibited AI practices, such as social scoring or manipulative techniques, may result in fines of up to €35 million or 7% of the company’s worldwide annual revenue, whichever is higher.
High-Risk AI Systems Non-Compliance
Failing to satisfy the specific requirements for high-risk artificial intelligence systems, including those pertaining to data quality, risk management, and human oversight, may result in fines of up to €15 million or 3% of worldwide annual revenue, whichever is higher.
False or Misleading Information
Providing authorities with incorrect, incomplete, or misleading information can result in fines of up to €7.5 million or 1.5% of worldwide annual revenue, whichever is higher.
Factors including the nature and scope of the infringement, whether it was intentional, and any efforts made to mitigate harm determine the precise fine amount. These penalties highlight the EU’s commitment to upholding rigorous standards for the development and use of artificial intelligence, safeguarding public safety, data privacy, and fundamental rights.
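To see how the "whichever is higher" caps above play out in practice, here is a minimal Python sketch. The three tier amounts come directly from the figures cited in this section; the function name and structure are our own illustration, not part of the Act.

```python
def max_fine_eur(tier: str, worldwide_annual_revenue_eur: float) -> float:
    """Illustrative upper bound on an EU AI Act fine for a violation tier.

    The cap is the larger of a fixed amount and a percentage of worldwide
    annual revenue, per the figures cited above.
    """
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),     # up to EUR 35M or 7%
        "high_risk_noncompliance": (15_000_000, 0.03),  # up to EUR 15M or 3%
        "false_information": (7_500_000, 0.015),        # up to EUR 7.5M or 1.5%
    }
    fixed_cap, revenue_share = tiers[tier]
    return max(fixed_cap, revenue_share * worldwide_annual_revenue_eur)

# Example: at EUR 2B in worldwide annual revenue, a prohibited-practice
# violation is capped at EUR 140M (7% of 2B), since that exceeds EUR 35M.
print(max_fine_eur("prohibited_practices", 2_000_000_000))  # 140000000.0
```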
What Is the Timeline for the EU AI Act?
The European Union’s Artificial Intelligence Act (EU AI Act) was published in the Official Journal of the EU on July 12, 2024, setting forth the following key milestone dates:
- August 1, 2024: The AI Act will enter into force.
- February 2, 2025: General provisions (Chapter I) and prohibitions on unacceptable-risk AI (Chapter II) will apply.
- August 2, 2025: Provisions on notifying authorities (Chapter III, Section 4), general-purpose AI models (Chapter V), governance (Chapter VII), penalties (Chapter XII), and confidentiality (Article 78) will apply, with the exception of fines for GPAI providers (Article 101).
- August 2, 2026: The remainder of the AI Act will apply, with the exception of the classification rules for high-risk AI systems in Article 6.
- August 2, 2027: The Article 6 classification rules for high-risk AI systems, and the corresponding obligations in the Regulation, will apply.
While implementation of the EU AI Act will continue rolling out through 2027, U.S. startups and established companies should start preparing now by:
- Developing AI governance frameworks
- Conducting conformity assessments of their AI systems
- Reviewing data quality and data management practices for their AI systems
- Establishing processes for ongoing monitoring and testing
What Are the Exclusions From the EU AI Act?
Understanding the exclusions from the EU AI Act clarifies where the law does not apply and which uses of AI are exempt from its requirements and penalties.
Scientific Research and Development
The EU AI Act is meant to support innovation and respect scientific freedom without hindering research and development. AI systems and models developed solely for scientific research and development are therefore excluded from its scope.
Biometric Verification
AI systems used solely for biometric verification, meaning systems that confirm a person’s identity to access a service, unlock a device, or enter secure premises, are excluded.
Military, Defense, and National Security
AI systems used for military, defense, or national security purposes are excluded, regardless of whether a public authority or a private entity carries out these activities.