EU, US step up AI cooperation amid policy crunchtime

Washington and Brussels are stepping up their formal cooperation on Artificial Intelligence (AI) research at a crucial time for EU regulatory efforts on the emerging technology.

The European Commission and the US administration signed an “administrative agreement on Artificial Intelligence for the Public Good” at a virtual ceremony on Friday evening (January 27).

The agreement was signed in the context of the EU-US Trade and Technology Council (TTC), launched in 2021 as a permanent platform for transatlantic cooperation across several priority areas, from supply chain security to emerging technologies.

The last high-level meeting of the TTC was held in the US in December, where Artificial Intelligence was presented as one of the areas in which cooperation is most advanced.

In particular, the two blocs endorsed a joint roadmap for reaching a common approach on critical aspects of this emerging technology, such as metrics to measure trustworthiness and risk management methods.

“Based on common values and interests, EU and US researchers will join forces to develop societal applications of AI and will work with other international partners for a truly global impact,” Internal Market Commissioner Thierry Breton said in a statement.

Research collaboration

Building on the AI roadmap, the US and EU executive branches are stepping up their collaboration to identify and develop AI research that has the potential to address global and societal challenges like climate change and natural disasters.

Five priority areas have been identified: extreme weather and climate forecasting, emergency response management, health and medicine improvements, electric grid optimization, and agriculture optimization. Until now, this type of collaboration had been narrower and limited to more specific topics.

While the two partners will build joint models, they will not share the training data sets with each other.

Large data sets often contain personal data that is difficult to untangle from the rest. There is currently no legal framework for transferring personal data across the Atlantic, since the EU Court of Justice found the US surveillance regime disproportionate in its Schrems II ruling.

“The US data stays in the US and European data stays there, but we can build a model that talks to the European and the US data because the more data and the more diverse data, the better the model,” a senior US official told Reuters.
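The article does not describe the technical design of these joint models, but one common way to train a shared model while each party's data stays in place is federated learning, in which only model parameters are exchanged. The sketch below is purely illustrative, assuming hypothetical EU and US datasets and a simple federated-averaging loop; it is not the actual EU-US system.

```python
# Minimal federated-averaging sketch (illustrative assumption, not the
# actual EU-US arrangement, whose technical design is not described in
# the article). Each party trains locally on its own data and shares
# only model weights, never the underlying records.
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y, weights, lr=0.01, epochs=50):
    """Plain gradient descent on a local dataset; returns updated weights."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(weight_list, sizes):
    """Aggregate local weights, weighted by each party's dataset size."""
    return np.average(np.stack(weight_list), axis=0, weights=np.asarray(sizes, float))

# Hypothetical datasets that never leave their respective jurisdictions.
X_eu, y_eu = rng.normal(size=(200, 3)), rng.normal(size=200)
X_us, y_us = rng.normal(size=(300, 3)), rng.normal(size=300)

global_w = np.zeros(3)
for _ in range(10):
    w_eu = local_train(X_eu, y_eu, global_w)   # trained inside the EU
    w_us = local_train(X_us, y_us, global_w)   # trained inside the US
    global_w = federated_average([w_eu, w_us], [len(y_eu), len(y_us)])

print("Shared model weights:", global_w)
```

In such a setup, the shared model benefits from both datasets, consistent with the official's point that more and more diverse data improves the model, while the raw data itself never crosses the Atlantic.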

The Commission stressed that, as part of the agreement, the two partners would share the findings and resources with other international partners that share their values but lack the capacity to address these issues.

As both Washington and Brussels note that the agreement builds upon the Declaration for the Future of the Internet, the Declaration’s signatories are likely candidates to benefit from the outcome of this research.

Risk Management Framework

While the administrative agreement marks a step forward, for now largely symbolic, in EU-US collaboration on AI, Washington seems determined to put some of its standards on the map at a time when the EU is finalizing the world’s first rulebook on Artificial Intelligence.

Last Thursday, the day before the announcement, the US Department of Commerce’s National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework, which sets out guidelines for AI developers on mapping, measuring, and managing risks.

This voluntary framework, developed in consultation with private companies and public administration bodies, is representative of the American non-binding approach to new technologies, which, where regulated at all, are often addressed at the state level in relation to specific sectors such as healthcare.

By contrast, the EU is currently advancing work on the AI Act, horizontal legislation to regulate all AI use cases based on their level of risk, notably including a list of high-risk areas such as health, employment and law enforcement.

The AI Act is expected to be highly influential and possibly set international standards on several regulatory aspects via the so-called Brussels effect. As most of the world’s leading companies in the field are American, it is not surprising that the US administration has been trying to shape it.

In October, EURACTIV revealed that Washington was pushing for the high-risk categorization to be based on a more individualized risk assessment. Importantly, the US administration argued that compliance with NIST’s standards should be considered an alternative way to comply with the self-assessment mandated in the EU’s AI draft law.

The publication of this Framework comes at a critical time for the AI Act, as EU lawmakers are on their way to finalizing their position before starting interinstitutional negotiations with the European Commission and the member states.

“The AI Risk Management Framework can help companies and other organizations in any sector and any size to jump-start or enhance their AI risk management approaches,” said NIST Director Laurie Locascio in a statement.

[Edited by Nathalie Weatherald]