MEPs want fundamental rights assessments, obligations for high-risk users

The European Parliament’s co-rapporteurs circulated new compromise amendments to the Artificial Intelligence (AI) Act proposing how to carry out fundamental rights impact assessments and setting out other obligations for users of high-risk systems.

The new compromise was circulated on Monday (January 9) to be discussed at a technical meeting on Wednesday. It is one of the last batches to complete the first review of the AI Act, a landmark legislative proposal to regulate Artificial Intelligence based on its potential to cause harm.

Fundamental rights impact assessment

The co-rapporteurs want to include a requirement for all users of high-risk AI systems, both public bodies and private entities, to carry out a fundamental rights impact assessment, listing several minimum elements the assessment should include.

Notably, AI users would have to consider the intended purpose, the geographic and temporal scope of use, the categories of individuals and groups affected, specific risks for marginalized groups, and the foreseeable environmental impact, for instance, in terms of energy consumption.

Other elements include compliance with EU and national legislation and fundamental rights law, the potential negative impact on EU values, and, for public authorities, any considerations on democracy, the rule of law and the allocation of public funding.

The leading MEPs want users to draft a detailed plan on how any direct or indirect negative impact on fundamental rights will be mitigated. Absent such a plan, they would have to inform the AI provider and the relevant national authority without delay.

“In the course of the impact assessment, the user shall notify relevant national competent authorities and relevant stakeholders and involve representatives of the persons or groups of persons who are reasonably foreseeable to be affected by the high-risk AI system,” the compromise reads.

Examples of the representatives that would provide input for the impact assessment include equality bodies, consumer protection agencies, social partners and data protection authorities. Users would have to allow them six weeks to provide such input, and if the user is a public body, it would have to publish the result of the impact assessment as part of its registration in the EU register.

Obligations for users of high-risk systems

The co-rapporteurs made some significant additions to the obligations for users of AI systems considered high-risk, for instance, ensuring that appropriate robustness and cybersecurity measures are in place and regularly updated.

Moreover, “to the extent the user exercises control over the high-risk AI system,” users would have to assess the risks related to the potential adverse effects of use and the respective mitigation measures.

If users become aware that using the high-risk system according to the instructions entails a risk to health, safety or fundamental rights, they would have to immediately inform the AI provider or distributor and the competent national authority.

The compromise specifies that users would have to ensure human oversight in all the instances required by the AI regulation and that the people in charge have the necessary competencies, training and resources for adequate supervision.

High-risk AI users would also have to maintain the logs automatically generated by the system to ensure compliance with the AI Act, audit any foreseeable malfunctioning or incident, and monitor the systems throughout their lifecycle.

Before a high-risk AI system is deployed in a workplace, users would have to consult worker representatives, inform the affected employees, and obtain their consent.

In addition, the users would have to inform the individuals affected by the high-risk system, notably concerning the type of AI being used, its intended purpose and the type of decision it makes.

A paragraph has also been added to address generative AI, the increasingly popular category of models like ChatGPT that can generate content based on human input. Users of such systems would have to disclose that the text was AI-generated or manipulated unless the content underwent human review and its publisher is liable for it or holds editorial responsibility.

AI providers would have to cooperate closely with users to enable compliance with these obligations. In turn, users would have to cooperate with the national authorities on any action related to high-risk systems.

Obligations for distributors, importers and users

Distributors, importers, users, and any other third party would be considered providers of a high-risk system, with the corresponding obligations, under specific circumstances that the leading MEPs have heavily revised.

That would be the case, for instance, if they modify the intended purpose or make any substantial modification that turns an AI system into a high-risk application.

Another condition is if the high-risk system was put into service under their name or trademark, unless a contractual arrangement allocates the obligations differently.

When these third parties become a new AI provider, the original provider would have to cooperate closely with them to enable compliance with the obligations set out in the regulation.

[Edited by Alice Taylor]
