Security Operations, Public sector, Artificial Intelligence

The potential negative effects of AI have prompted governments to address the technology's rapid rise and evolution. The latest and most comprehensive response comes from the EU, whose AI Act has entered into force to ensure AI systems align with the morals and ethics of the member states and to tackle potential areas of exploitation and misuse.

By establishing risk-based oversight across the member states, the EU hopes to create a trustworthy environment that balances innovation with ethics and establishes Europe as a center for human-centered AI technology.

This means organizations will be held accountable for using and deploying AI responsibly. As the EU AI Act contains a broad range of regulations, we’ve outlined its key components to help these organizations understand what it is and how they might be affected.

What is the Artificial Intelligence Act of the European Union?

The EU AI Act, also known as the Artificial Intelligence Act of the European Union, is the world’s first comprehensive AI law. Broadly, the AI Act categorizes AI systems based on the level of risk posed to the public and establishes responsibilities for companies creating and deploying the technology. It addresses both AI systems and general-purpose AI systems.

The law is a part of the EU’s digital strategy under its digital transformation priority. It will be enforced and overseen by the European AI Office, which was established within the European Commission as the center of AI expertise and will become the foundation for a single governance framework.

What organizations will be impacted by the EU AI Act?

The EU AI Act establishes obligations for providers, deployers, importers, distributors and product manufacturers of AI systems with a link to the EU market. This includes providers with AI products on the EU market, deployers of AI located in the EU, and providers and deployers in other countries if the output of the AI is being used in the EU.  

For the purposes of this act, an AI system is defined as, “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

What are the act’s different risk levels for AI?

The EU AI Act has established rules for each level of risk.

Unacceptable risk

These AI systems are considered a threat to the public and are prohibited under the AI Act. They include:

  • AI that deploys subliminal, purposefully manipulative or deceptive techniques that impair individuals' ability to make informed decisions, causing them to make a decision they would not have otherwise made, or one that is reasonably likely to cause significant harm to the decision maker or anyone else.
  • AI that exploits vulnerable populations due to age, disability or socioeconomic status with the objective or effect of distorting the behavior of a person in a manner that will cause harm to that individual or others. 
  • AI systems for social scoring. In other words, evaluating or classifying individuals or groups based on social behavior or personal traits, causing detrimental or unfavorable treatment of those people.
  • AI systems that create or expand facial recognition through the untargeted scraping of facial images from the internet or CCTV footage. 
  • AI-enabled real-time remote biometric identification systems in publicly accessible places (except in narrowly defined cases such as counterterrorism).

High-risk AI systems

High-risk AI systems have the potential to significantly impact people and, while not outright banned, will be subject to additional requirements under the AI Act. They include AI systems used in:

  • Critical infrastructure: Safety components in management and operation of critical digital infrastructure, road traffic, or for water, gas, heating or electricity supply
  • Education and vocational training: Admissions for educational institutions, evaluating learning outcomes, assessing appropriate access to education, and monitoring prohibited behaviors of students during tests 
  • Employment, worker management and access to self-employment: Recruitment and the placement of job advertisements, sorting applications, or evaluating candidates
  • Private and essential public services: Evaluating eligibility or creditworthiness for access to essential services

Limited risk AI systems

These are AI systems subject to lighter transparency requirements, such as chatbots and deepfakes, where developers and deployers must ensure end users are aware they are interacting with AI.

Minimal risk AI systems

Minimal risk AI systems are unregulated and include the majority of AI applications currently available on the EU market (e.g., video games and spam filters).

How does the AI Act handle general-purpose AI (GPAI)?

General-purpose AI, or GPAI, is an “AI system based on a general purpose AI model that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.” 

These high-impact types of AI models are specifically addressed in the EU AI Act as broad-use AI systems with a wide range of potential applications like large language models (LLMs), foundation models or multimodal models. These AI models create challenges since the designers cannot always predict what they’ll be used for or where they’ll be used in the world.

That makes them difficult to regulate, but creators are subject to specific codes of practice within the AI Act, including transparent documentation of training and testing, reporting serious incidents, and assessing and mitigating possible risks. 

Notably, unless they present a systemic risk, GPAI models released under free and open licenses only need to comply with copyright law and publish a summary of their training data.

When does the EU AI Act go into effect? 

The AI Act is currently in effect, but will gradually be implemented in phases over the next several years. Generally, organizations will have 24 months from August 2024—when the legislation entered into force—to comply with the majority of provisions, with a few exceptions:

  • The ban of AI systems posing unacceptable risks will apply six months after the entry into force
  • Codes of practice will apply nine months after entry into force
  • Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

EU AI Act Timeline

  • March 2024: The Artificial Intelligence Act is adopted by the European Parliament
  • May 2024: The AI Act is approved by the Council of the European Union
  • July 2024: The EU AI Act is published in the EU official journal
  • August 2024: The AI Act enters into force across all 27 EU member states
  • February 2025: Rules on subject matter, scope, definitions, AI literacy and prohibition come into effect
  • August 2025: Rules on notifications, GPAI models, certain enforcement issues and penalties come into effect
  • August 2026: The general grace period for high-risk AI systems ends and the bulk of the operative provisions come into effect
  • August 2027: Rules on high-risk AI systems come into effect
  • August 2030: Grace period for high-risk AI systems intended for use by public authorities ends

What are the penalties for non-compliance? 

The penalty for non-compliance is a fine of up to 35 million euros or 7 percent of worldwide annual turnover—whichever is higher. There are lesser penalties: 

  • 15 million euros: Or 3 percent of worldwide annual turnover for non-compliance with specific provisions of the act 
  • 7.5 million euros: Or 1 percent of worldwide annual turnover for supplying incorrect, incomplete or misleading information to notified bodies or national competent authorities 

Startups are subject to the same tiers of fines, but capped at whichever of the two amounts is lower.

The EU AI Act seeks to protect the safety of the EU without hindering the innovation and growth created by the technology. This comprehensive legal framework will create a standard across the member states aligned with the morals and ethics of the EU, boosting trust in AI while protecting the public from unethical uses.

Dataminr’s AI Platform

Dataminr’s AI platform is at the forefront of innovation in predictive, generative and regenerative AI. Get a first-hand look to see how it helps organizations like yours strengthen organizational resilience in an increasingly unpredictable world.

Learn More
December 9, 2024