A pressing question is whether, and if so how, novel societal harms caused by AI systems are adequately addressed within the contours of existing legal categories and legislation. This article focuses on the risks and harms concerning the rule of law. It highlights three issues by discussing relevant provisions of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework AI Convention), concluded on September 5, 2024, in light of unfolding case law and business practices.
An added value of the Framework AI Convention lies in protecting not only human rights but also democratic processes and the rule of law in the context of AI. Whereas Article 4 of the Framework AI Convention concerns human rights, Article 5(1) focuses on state parties’ obligation to adopt measures to ensure that AI systems are not used to undermine the integrity of democratic processes and the rule of law. The Convention purports to cover risks and harms of such a large scale and systemic nature that they erode democratic processes and the rule of law. More specifically, the letter of Article 5 refers to many elements intrinsic to the rule of law: the integrity and effectiveness of democratic institutions and processes, the separation of powers, judicial independence and access to justice. The explanatory report brings further clarity, adding political pluralism, information integrity, the principles of accountability, legality and non-arbitrariness, and the risk of influencing court rulings.
Certain algorithmic harms are not necessarily captured and effectively addressed by existing areas of law, including human rights law. Recent EU legislation, namely the Digital Services Act and the AI Act, protects different (legal) categories (e.g., civic discourse, electoral processes) which may function as “proxies” for the rule of law and help, at least in part, to flesh out its elements.
Legality and non-arbitrariness when deploying AI systems
Courts have already pronounced on the pressing need for robust legislative frameworks and strong safeguards. The European Court of Human Rights in Glukhin v Russia and the English Court of Appeal in Ed Bridges v South Wales Police found the specific use of (live and/or automated) facial recognition technologies not only to violate human rights but also to be premised upon fundamentally deficient legal frameworks. Strong safeguards against arbitrariness encompass states’ positive duties to take all reasonable measures to ensure that an AI system does not have an inbuilt bias (Ed Bridges) or to proactively confirm the validity of algorithmic assessments (Ewert case, Supreme Court of Canada).
Ramifications of algorithmic opacity for judicial oversight and accountability
Algorithmic opacity inhibits access to justice and one’s right to an effective remedy. Crucially, courts’ ability to exercise judicial oversight, a fundamental rule of law requirement, is also at stake. AI systems that are developed privately and used in the public sector are shielded from scrutiny by restrictions imposed by contracts and/or intellectual property rights. The Loomis judgment exemplifies these concerns. The Supreme Court of Wisconsin dismissed too lightly the claim that the proprietary nature of a tool assessing individuals’ risk of recidivism prevented defendants from challenging the accuracy of those assessments. The court overestimated judges’ ability to appreciate the limits of the scientific validity of such risk assessments. Questions of judicial oversight became more pronounced in the SyRI case, where the District Court of The Hague openly acknowledged that the absence of pertinent information about an AI system deployed to predict social welfare fraud prevented it from answering a series of legal questions concerning the system’s nature, legality and necessity.
Individuals’ ability to freely form opinions and democratic processes
Article 5(2) of the Framework AI Convention concretely links an individual’s ability to freely form opinions with democratic processes. Notwithstanding that Article 7 covers states’ obligation to respect individual autonomy, the rationale of Article 5(2) is twofold: first, it pinpoints the formation of political opinions as an aspect of individual autonomy; and, second, it stresses the implications of manipulating and altering political opinions on a large scale. The novel harms here stem from the use of recommender or advertisement systems based on targeting and manipulative techniques optimised to appeal to human vulnerabilities. Such techniques include disseminating or amplifying misleading or deceptive content (e.g., AI-generated content or AI-enabled manipulation of authentic content) or facilitating disinformation campaigns.
To give an example, the effectiveness of the measures that X and TikTok take to mitigate disinformation campaigns that manipulate voters is under scrutiny. Romania’s recently annulled presidential election, and the way these two platforms recommend pro-AfD content to non-partisan users ahead of Germany’s parliamentary elections, testify to the risks to electoral processes. Actual or foreseeable negative effects on electoral processes are also among the systemic risks that providers of very large online platforms and search engines must identify, analyse and assess (Article 34(1)(c) of the EU DSA).
However, ‘democratic processes’ under Article 5(2) of the Framework AI Convention is a broader concept than ‘electoral processes’. Democratic processes bring to the foreground the overall environment in which political views are formed. Civic discourse (a concept incorporated in Article 34(1)(c) of the EU DSA), political pluralism, and the political content to which individuals are exposed are crucial elements. An apt example is how AI systems on social media platforms curate the visibility of political content. The European Commission is presently investigating Meta’s updated policy of demoting political content in the recommender systems of Instagram and Facebook, with evidence suggesting the exclusion of specific topics (e.g., reproductive rights) from civic discourse.
Interestingly, certain AI systems which have an appreciable bearing on individuals’ ability to freely form opinions may be altogether incompatible with international human rights law and the rule of law. On this front, Article 16(4) of the Framework AI Convention is criticised for leaving states too much discretion in assessing the need to ban AI systems. Article 5 of the EU AI Act, on the other hand, prohibits AI systems that manipulate and exploit human vulnerabilities, even though we are a long way from being able to ascertain when its strict qualifiers are fulfilled.
Mando Rachovitsa is an Associate Professor of Human Rights Law at the University of Nottingham.