On 13 March 2024, the European Parliament adopted the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts (hereinafter: the Artificial Intelligence Act or the EU AI Act). The objectives of the Artificial Intelligence Act include making AI systems placed on the market and used in the EU safer, protecting fundamental rights, EU values and legal certainty, promoting innovation in AI and supporting the single market. The text of the Artificial Intelligence Act categorises the AI systems to be regulated according to the risks they pose when placed on the market or otherwise used. Accordingly, the Regulation distinguishes between prohibited AI practices, high-risk AI systems and low-risk AI systems.

Prohibited AI Practices according to the EU AI Act

Among the prohibited AI practices, the artificial intelligence legislation regulates operations that pose an unacceptably high risk to the fundamental rights of natural persons. The Artificial Intelligence Act lists the following among prohibited AI practices:

  1. the use of harmful subliminal (non-detectable) techniques that substantially distort behaviour;
  2. behaviour-distorting, harmful techniques that exploit the vulnerabilities of a person (e.g. disability, social or economic status);
  3. social scoring systems that classify natural persons with adverse consequences;
  4. systems that create or enhance facial recognition databases by non-targeted retrieval of facial images from the Internet or closed-circuit television;
  5. the use of AI systems that infer the emotions of natural persons in workplaces and educational establishments (except where the use of the AI system is intended for medical or safety reasons);
  6. biometric categorisation systems that infer sensitive characteristics (e.g. race, political opinions, religious beliefs);
  7. the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (there may be exceptions to this point in certain justified cases, e.g. to locate missing children, to prevent a specific, substantial and imminent threat to the life or physical safety of natural persons, or to prevent a terrorist attack). [1]

High risk AI systems according to the Artificial Intelligence Act

The Artificial Intelligence Act sets forth an itemised list of areas where the use of AI may be high risk. These areas include the following:

  1. Biometric identification and categorisation of natural persons;
  2. Management and operation of critical infrastructure (i.e. AI systems used as safety components in road traffic and in the supply of water, gas, heating and electricity);
  3. Education and vocational training (AI systems that determine access or assignment of natural persons to education and vocational training institutions; student assessment);
  4. Employment, employee management and access to self-employment;
  5. Access to and use of essential private and public services and benefits (assessment of eligibility for public assistance benefits, creditworthiness assessment, systems for the dispatch of firefighters and medical first responders);
  6. Law enforcement (risk assessment, polygraph, deepfake detection etc.);
  7. Migration, asylum and border control (risk assessment, polygraphs, travel document verification, visa applications etc.);
  8. Administration of justice and democratic processes (application of the law to specific facts). [2]

In addition, an AI system is considered high risk where it is „intended to be used as a safety component of a product covered by the Union harmonisation legislation listed in Annex I, or the AI system is itself such a product", and it „is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I". [3]

Requirements for high-risk AI systems and obligations for providers, users and other parties

If an AI system is classified as a high-risk system, it must comply with the requirements set out in the Regulation in order to operate lawfully under the Artificial Intelligence Act. Requirements established for high-risk AI systems include the implementation of a risk management system; the establishment of appropriate data governance and data management practices; the maintenance of technical documentation and records; transparency criteria; the guarantee of human oversight of the AI system by a natural person; and the assurance of accurate and robust operation and a satisfactory level of cybersecurity. [4]

Providers of high-risk AI systems must also undergo a conformity assessment procedure prior to placing their systems on the market or putting them into service; operate a quality management system; keep records and logs; fulfil their duty to provide information; take prompt corrective action in the event of unlawful operation; and cooperate with the competent authorities as appropriate. The legislation on artificial intelligence also establishes additional obligations for product manufacturers, authorised representatives, importers and users. [5]

Low-risk AI systems

The criterion for low-risk AI systems is simply that users must be made aware that they are interacting with an AI-driven system. In addition, in order to ensure safe operation, service providers may voluntarily adopt a code of conduct that is common in the industry in which they operate. [6]

Penalties

In the event of any infringement, the AI legislation provides for the imposition of administrative fines set forth in the EU AI Act.

The highest category of fines applies where the offender engages in the prohibited AI practices referred to in Article 5. In this case, the offender is subject to „an administrative fine of up to EUR 35 000 000 or, if the offender is a company, an administrative fine of up to 7% of its total worldwide annual turnover in the preceding financial year, whichever is the higher".

Violations of other provisions of the EU AI Act are penalised more leniently: they are punishable by „an administrative fine of up to EUR 15 000 000 or, if the offender is a company, an administrative fine of up to 3% of its total worldwide annual turnover in the preceding financial year, whichever is higher".

A third category of fines is imposed where „incorrect, incomplete or misleading information is provided in response to a request from notified bodies or competent national authorities". In this case, the offender is liable to an administrative fine of up to EUR 7 500 000 or, where the offender is a company, a fine of up to 1% of its total worldwide annual turnover in the preceding financial year, whichever is the higher. [7]
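The three fine tiers above all follow the same rule for companies: the cap is the higher of a fixed amount and a percentage of worldwide annual turnover. This can be sketched as a small calculation (the function and tier names are illustrative, not taken from the Act):

```python
# Illustrative sketch of the EU AI Act fine tiers for companies (Article 99).
# Tier names and the function below are our own labels, not terms of the Act.
# Each tier: (fixed cap in EUR, share of total worldwide annual turnover).
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # Article 5 violations
    "other_violation": (15_000_000, 0.03),       # other obligations
    "misleading_information": (7_500_000, 0.01), # incorrect info to authorities
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Maximum administrative fine for a company: the higher of the
    fixed cap and the turnover-based cap for the given tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Example: a company with EUR 1 billion turnover engaging in a prohibited
# practice faces up to max(EUR 35M, 7% of EUR 1B) = EUR 70 million.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

Note that for smaller companies the fixed amount dominates (7% of EUR 100 million is only EUR 7 million, so the EUR 35 million cap applies), while for large companies the turnover-based cap takes over.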

Effective date

The Artificial Intelligence Act will enter into force on the twentieth day following its publication in the Official Journal of the European Union (expected entry into force: 1 August 2024), and will generally apply 24 months after the date of entry into force (2 August 2026). The exceptions are Chapters I and II, certain provisions of Chapter III, Chapter V, Chapter VII, Chapter XII and Article 6(1) of the Regulation, which will become applicable in phases, within 6, 12 and 36 months of the entry into force of the Act. [8]

Proposal

In order to promote compliance, the Commission will issue guidance within 18 months of entry into force; however, given that the Regulation becomes applicable after 24 months, waiting for that guidance may be risky. If you believe that your activities fall within the scope of the EU AI Act, we recommend contacting a consultant or professional as soon as possible to prepare and implement compliant practices. [9]

2024.07.19.

Do you have a question about data protection or the position of Data Protection Officer? Contact me!

Dr. Miklós Péter – GDPR lawyer dmp@dmp.hu / +36306485521

Read our other articles as well!

DPO qualification
Data breach notification GDPR

[1] AI Act – Chapter II Prohibited Artificial Intelligence Practices Article 5
[2] AI Act – Chapter III, Section 1, Article 6(2) (Classification rules for high-risk AI systems); Annex III, High-risk AI systems referred to in Article 6(2)
[3] AI Act – Chapter 3, Section 1, Article 6.
[4] AI Act – Chapter 3 Section 2 Requirements for high-risk AI systems
[5] AI Act – Chapter III, Section 3, Obligations of providers and deployers of high-risk AI systems and other parties
[6] https://www.linkedin.com/posts/victoriabeckman_victoria-beckman-eu-ai-act-chart-iapp-ugcPost-7174144488536182785-qoJS?utm_source=share&utm_medium=member_ios
[7] AI Act – Title X, Confidentiality and Penalties, Article 99
[8] AI Act – Article 113 Entry into force and application
[9] https://www.linkedin.com/posts/axel-anderl-86365a1_ai-act-walk-trough-dorda-ugcPost-7174386121542430720-lKVD?utm_source=share&utm_medium=member_ios

This website is maintained by Dr. Miklós Péter Ákos, attorney at law registered in the Budapest Bar Association (registered office: 1028 Budapest, Piszke utca 14., tax number: 42982117-2-41, BAR ID number: 36079442) in accordance with the laws and internal regulations applicable to lawyers, which, together with information on client rights, is accessible at www.magyarugyvedikamara.hu. The blog posts and articles on the website do not constitute specific legal advice, an offer or a solicitation. It is intended to inform the website visitors about the areas of expertise of Dr. Miklós Péter Ákos attorney at law. The website has been prepared in accordance with the Hungarian Bar Association (MÜK) Presidium's Resolution No. 2/2001 (IX.3.) on the "Content of the website of the Hungarian Bar Association" and with the provisions of Chapter 10 of the MÜK's Rules of Procedure No. 6/2018 (26.III.). Legal notice​
