With the finalization of the EU Code of Practice approaching, there have been discussions on whether the AI Act provides a legal basis for the Code's measures on public transparency of AI risk management documents.
We argue in this article that there is a clear case for public transparency of risk management documentation, both within the AI Act itself and, more broadly, in the Union's founding Treaties, such as Article 169 of the Treaty on the Functioning of the EU, which highlights a “right to information”. We therefore believe that the final version of the Code of Practice should retain the measures on public transparency of risk management documentation that appeared in the three previous drafts.
First, Article 1 of the AI Act makes clear that “The purpose of this Regulation is to promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter”. Transparency is a necessary condition for trustworthy AI: it allows affected persons to make informed decisions and to benefit from their right to an effective remedy, as ruled by the Court of Justice of the European Union. This is also why public transparency is a key principle of the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on Artificial Intelligence (AI HLEG) and of the European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)) (2021/C 404/04), both of which are clearly referenced in the AI Act.
Second, public transparency is an essential component of AI literacy. Article 95(2) of the EU AI Act explicitly states that codes shall include elements such as “(a) applicable elements provided for in Union ethical guidelines for trustworthy AI”, which clearly include public transparency, and “(c) promoting AI literacy”. In this regard, Article 3(56) clarifies that “‘AI literacy’ means skills, knowledge and understanding that allow (...) affected persons, taking into account their respective rights and obligations in the context of this Regulation (...) to gain awareness about the opportunities and risks of AI and possible harm it can cause”. Moreover, to avoid any misunderstanding or misinterpretation, recital 20 of the AI Act specifies that AI literacy should equip affected persons with “the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them. In the context of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and correct enforcement.”
Third, Article 56(2) states that “The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in Articles 53 and 55”. This makes clear that the Code of Practice may include measures that go beyond Articles 53 and 55, where warranted. Additionally, Article 56(4) states that “The AI Office and the Board shall aim to ensure that the codes of practice clearly set out their specific objectives and contain commitments or measures [...] to ensure the achievement of those objectives, and that they take due account of the needs and interests of all interested parties, including affected persons, at Union level.” When a company deploys an AI model that, if mismanaged, would materially increase the risk to affected persons' physical integrity, taking due account of their needs and interests includes providing them with all relevant information about the level of risk they are exposed to and the measures the company has taken to reduce it to a level as low as reasonably practicable.
As a result, the right to public transparency of the Safety and Security Frameworks (SSFs) and Model Reports that justify sufficient mitigation of significant externalities affecting EU citizens has a basis in the Act's text. This basis is reinforced by the logic of proportionality. Applied to this case, the proportionality principle highlights the benefits of public transparency and shows that the obligations imposed on model providers remain balanced and not unduly burdensome:
1) Appropriateness - Public disclosure of the SSFs and Model Reports effectively improves the identification, assessment and management of one or more of the systemic risks that the AI Act seeks to address.
SSFs and Model Reports provide the necessary evidence of safety with regard to systemic risks. These risks, by definition, are characterised by their significant likelihood of exposing third parties, including EU citizens, to severe, unconsented risks, including to their physical integrity. As discussed above, a citizen exposed to a negative externality of significant magnitude has a right to know the magnitude of that risk and what measures were taken to minimise it. Knowledge of such exposure also enables society to mitigate residual risks by identifying flaws in risk assessment and management processes as early as possible, developing mitigations, and preparing and deploying defensive infrastructure when necessary. As such, public transparency regarding the measures that organisations are taking to limit the likelihood and severity of threats to EU citizens is highly appropriate.
2) Necessity - There is no less restrictive yet equally effective means to achieve the intended legal objective. In other words, there is no viable alternative that imposes a lower economic, operational, or privacy burden while still effectively managing systemic risks.
Public transparency cannot be replaced as a mechanism for informing citizens of the unconsented third-party risk they are exposed to as a result of companies' AI deployment activities and of the measures those companies have put in place to mitigate it. The most minimal implementation that would make this information available to those who want it is disclosure upon request, as opposed to disclosure by default. Because EU citizens would remain free to share publicly any concerns they may have, the consequences for the company would be similar to those of public transparency by default.
3) No Manifest Imbalance between the Costs and Benefits of the Measure - The benefits associated with public disclosure clearly outweigh the costs.
Given that companies are already assembling SSFs and Model Reports sufficient to demonstrate their compliance, the difference in cost between sharing these with the AI Office and making them publicly available, perhaps merely upon request, is minor. Public disclosure does not require producing any additional information and therefore does not add a significant burden. On the other hand, the benefits are extremely high: EU citizens become aware of the unconsented third-party risk that a given company's AI deployment exposes them to, can assess whether the mitigations implemented by that company are proportionate to the magnitude of the expected harm, and can react accordingly.
Signatories
Organisations
Center for AI & Digital Humanism
The Future Society
SaferAI
European Writers' Council
Centre pour la Sécurité de l'IA (CesIA)
Pour Demain
Individual Signatories
Prof. Margot E. Kaminski, Professor of Law, Colorado Law School
Dr. Giulia Gentile, University of Essex
Dr. Karine Caunes, Research Associate, Lyon 3 University, Editor-in-Chief, European Law Journal
Dr. Marta Bieńkiewicz
Dr. Nada Madkour
Caroline Friedman Levyn, Center for AI and Digital Policy
Colm O'Shea, COMAC Capital, Ireland
Evan Murphy, Director, AI Governance & Safety Canada; Non-Resident Research Fellow, AI Security Initiative, Center for Long-Term Cybersecurity, UC Berkeley
Deepika Raman, Center for Long-Term Cybersecurity, UC Berkeley
Monique Munarini, University of Pisa
Manuel Rico Rego, Asociación Colegial Escritoras y Escritores
Krystal Jackson, Center for Long-Term Cybersecurity, UC Berkeley