EU AI Act summarized: Everything you need to know

Artificial intelligence (AI) is rapidly changing our world – from automated systems in industry to intelligent applications in everyday life. To ensure that AI remains safe, reliable and transparent, the EU AI Act now comes into play. The EU’s new AI regulation sets clear guidelines and standards to protect consumers, promote innovation and strengthen the single market. In this article you will find a concise summary of the EU AI Act, its key deadlines, the companies affected and the impact on the European market.

Authors

Tobias Reuter

Principal

EU AI Act Summary - The most important facts in brief:

The first provisions of the EU AI Act apply from February 2, 2025; further requirements then come into force in stages until December 31, 2030. The individual dates are detailed in the "Deadlines" section.

The AI Act affects all providers, importers, distributors and users of AI systems that are placed on the EU market or put into operation in the EU, regardless of whether they are based in the EU or not.

In addition to various requirements aimed at promoting transparency and creating a uniform basis at EU level, requirements for high-risk systems and prohibited practices have also been defined. These include, for example, manipulative or deceptive techniques, social scoring or the processing of personal and sensitive data to categorize or identify users.

The law is intended to protect consumers by ensuring that AI systems are used safely and ethically. Small and medium-sized enterprises (SMEs) and start-ups benefit from special incentives to promote innovation and facilitate market access.

The EU AI Act sets these deadlines

The EU AI Act was published on July 12, 2024 and defines different deadlines for the start of application of the corresponding regulations.

July 12, 2024

Publication in the Official Journal of the EU, setting the dates for the entry into force of the regulation.

February 2, 2025

Start of application of the prohibitions on certain AI practices and of the AI competence requirements.

May 2, 2025

Codes of practice for general-purpose AI (GPAI) are to be published by the Commission.

August 2, 2025

Entry into force of the provisions for GPAI providers and operators.

August 2, 2026

Start of application of all other provisions of the EU AI Act (exception: Art. 6 para. 1).

August 2, 2027

End of the transition period for GPAI models that were put into operation before August 2, 2025. The classification rules for high-risk systems (Art. 6 para. 1) enter into force.
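
For readers who track these compliance dates programmatically, the phased timeline above can be expressed as a simple lookup. This is an illustrative sketch using the dates and labels from the list above, not legal advice:

```python
from datetime import date

# Milestones of the EU AI Act as listed above (illustrative, not legal advice)
MILESTONES = {
    date(2024, 7, 12): "Publication in the Official Journal of the EU",
    date(2025, 2, 2): "Prohibitions on certain AI practices and AI competence requirements apply",
    date(2025, 5, 2): "Commission publishes codes of practice for GPAI",
    date(2025, 8, 2): "Provisions for GPAI providers and operators enter into force",
    date(2026, 8, 2): "All other provisions apply (except Art. 6 para. 1)",
    date(2027, 8, 2): "Transition period for pre-existing GPAI models ends; "
                      "high-risk classification rules (Art. 6 para. 1) apply",
}

def provisions_in_force(as_of: date) -> list[str]:
    """Return all milestones whose start date is on or before `as_of`."""
    return [label for start, label in sorted(MILESTONES.items()) if start <= as_of]

# Example: which milestones have been reached by March 1, 2025?
print(provisions_in_force(date(2025, 3, 1)))
```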

These companies are affected by the EU AI Act

The EU AI Act distinguishes between different categories of actors, each of which is subject to different obligations. Most of the obligations apply to providers that intend to place high-risk AI systems on the EU market or whose high-risk AI systems produce output that is used in the EU.

Affected are:

Providers:
  • Offer AI systems on the EU market.
  • Must provide technical documentation and risk management.
  • Bear primary responsibility for the safety and transparency of AI systems.

Operators (users):
  • Use AI systems for business purposes.
  • Must ensure that the systems are used correctly and ensure transparency.

Distributors:
  • Distribute AI systems within the EU.
  • Are responsible for ensuring compliance with the regulations, including verification of conformity.

Manufacturers:
  • Produce AI components or complete systems.
  • Carry additional duties, especially for high-risk AI systems.

Non-EU providers:
  • Any organization outside the EU that wants to deploy AI systems in the EU.
  • Must comply with the same regulations as providers based in the EU.

Special case of GPAI: The EU AI Act distinguishes between traditional AI systems and general-purpose AI (GPAI). The LLMs of major providers such as OpenAI, Meta, Google and Microsoft, for example, fall under GPAI. GPAI providers are responsible in particular for providing technical documentation and instructions for use, protecting copyright and ensuring transparency regarding the training data of their models. Model evaluations, adversarial testing and cybersecurity measures are also required for GPAIs.

Impact & measures - How the EU AI Act will change the internal market

Uniform rules for placing AI systems on the market and putting them into service in the EU.

Some AI practices are expressly prohibited. These include:

  • Use of manipulative and deceptive techniques that significantly influence the user’s behavior
  • Exploiting a person’s vulnerability (e.g. age, disability, social or economic situation) to negatively influence behavior
  • Predictions or inferences based on personality traits of the user that lead to discrimination or negatively influence behavior.
  • Assessment of the risk of the user committing a criminal offense if the assessment is based solely on the creation of a profile of a natural person or the assessment of personality traits.
  • Creation or expansion of facial recognition databases through untargeted scraping of images from the internet or video surveillance footage.
  • Inferring the emotions of natural persons in the workplace and in educational institutions (except for medical or safety-related reasons).
  • Categorization of natural persons on the basis of their biometric data in order to draw conclusions about demographic characteristics such as their ethnicity, political opinion, trade union membership, religion or sexual orientation.
  • Real-time identification of people in public spaces based on biometric data

Clear requirements for high-risk AI systems and obligations for their operators.

Requirements to ensure transparency for less risky systems.

Requirements for the market launch of general-purpose AI (GPAI) models.

Market surveillance, governance and enforcement measures.

Support for innovation with a special focus on SMEs and start-ups.

Companies (operators) must ensure that those affected (e.g. employees) can use AI systems competently.

Risk-based classification of AI systems

The risk categories of the EU AI Act are a central part of the regulation, as they determine which requirements and regulations are applied to different AI systems. The EU AI Act divides AI systems into four risk categories:

Unacceptable risk

Systems that pose a clear threat to the security, fundamental rights or values of the EU.

  • Subliminal or manipulative techniques that influence the behavior of users.
  • Social scoring (e.g. rating people based on their behavior or personal characteristics).
  • Real-time biometrics in public spaces (e.g. facial recognition), with narrow exceptions.

Such systems are completely prohibited.

High risk

Systems that can have a significant impact on security or fundamental rights.

Examples:

  • AI systems in critical areas such as healthcare, transportation, education and criminal justice.
  • AI systems for migration and border control.

Requirements:

  • Implementation of a risk management system
  • Ensuring data quality
  • Transparency and human oversight
  • Systems must be tested and certified before being placed on the market.

Limited risk

Systems with a potential impact on people, but limited risk.

Chatbots or AI systems that can influence people directly (e.g. through interactions).

Transparency obligations, such as:

  • Users must be informed that they are interacting with an AI system.
  • Disclosure of certain functionalities.

Minimal risk

Systems that pose no or only a very low risk.

  • No specific requirements or regulations.
  • Providers can apply voluntary codes of conduct.
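
To make the tiering concrete, the four categories and their headline obligations can be sketched as a simple mapping. The enum values and obligation strings below are illustrative paraphrases of the overview above, not terms taken from the regulation's text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict requirements before market launch
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific requirements

# Headline obligations per tier, paraphrased from the article
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Completely prohibited"],
    RiskTier.HIGH: [
        "Risk management system",
        "Data quality assurance",
        "Transparency and human oversight",
        "Testing and certification before market launch",
    ],
    RiskTier.LIMITED: [
        "Inform users they are interacting with an AI system",
        "Disclose certain functionalities",
    ],
    RiskTier.MINIMAL: ["Voluntary codes of conduct only"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]
```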

Sanctions and penalties

Like any regulation, the EU AI Act provides for sanctions against companies that violate its provisions. The following list shows the penalties for the corresponding violations:

  • Violation of prohibited AI practices: a fine of up to EUR 35 million or 7% of the company's worldwide annual turnover (whichever is higher).
  • Breach of other obligations: a fine of up to EUR 15 million or 3% of the company's worldwide annual turnover (whichever is higher).
  • Providing incorrect or misleading information to authorities: a fine of up to EUR 7.5 million or 1% of the company's worldwide annual turnover (whichever is higher).
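
The "whichever is higher" rule means the applicable maximum is the larger of the fixed amount and the turnover-based amount; for SMEs and start-ups the Act instead caps the fine at the lower of the two. A minimal sketch of that arithmetic, using the figures from the list above:

```python
def max_fine_eur(fixed_cap_eur: float, turnover_pct: float,
                 annual_turnover_eur: float, sme: bool = False) -> float:
    """Maximum fine under the EU AI Act's cap rule: the higher of the fixed
    cap and the percentage of worldwide annual turnover; for SMEs and
    start-ups, the lower of the two applies instead."""
    turnover_based = annual_turnover_eur * turnover_pct / 100
    if sme:
        return min(fixed_cap_eur, turnover_based)
    return max(fixed_cap_eur, turnover_based)

# Prohibited-practice violation: up to EUR 35 million or 7% of turnover.
# For a company with EUR 1 bn turnover, 7% (EUR 70 m) exceeds the fixed cap.
print(max_fine_eur(35_000_000, 7, 1_000_000_000))       # 70000000.0
print(max_fine_eur(35_000_000, 7, 100_000_000, sme=True))  # 7000000.0
```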

Special regulations

In addition to the defined maximums, mitigating or aggravating circumstances, such as the severity or duration of the infringement or the potential harm to those affected, are taken into account when setting fines.
For SMEs and start-ups, fines are capped at the lower of the two amounts (fixed sum or percentage).

The winners of the EU AI Act

Start-ups and SMEs focusing on ethical AI applications will particularly benefit from the support measures provided for in the EU AI Act, which promote innovation and development. These measures and regulations not only create incentives for innovation, but also lay the foundation for a uniform legal framework in Europe that strengthens the internal market.

Consumer protection is at the top of the list: Consumers and users benefit from the new transparency obligations, which promote trust in AI systems and enable safe and traceable use.

The transparency obligations also contribute to fairer competition by ensuring that all providers have to meet the same requirements. In the long term, this could also open up new market opportunities for SMEs, especially in an environment that prioritizes ethical and human-centric AI.

Summary: More security and trust through the EU AI Act

The EU AI Act is a milestone in the regulation of AI systems and places a clear focus on safety, transparency and ethics. With harmonized regulations and targeted support measures, it not only creates trust among consumers, but also supports the competitiveness of companies, especially SMEs and start-ups. For companies, the Act offers an opportunity to establish themselves sustainably on the market with compliant and innovative solutions.

The EU AI Act is therefore changing the rules of the game. To help you remain competitive, our experts will be happy to support you in understanding the new regulations and using AI safely, efficiently and compliantly in your company.

Your contact person

Tobias Reuter

Principal and expert for AI scaling
