Council of Europe Framework Convention on Artificial Intelligence
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law is the first of its kind.
The First International Treaty on Artificial Intelligence
The Council of Europe (CoE) Framework Convention on Artificial Intelligence and Human
Rights, Democracy and the Rule of Law (the Convention on AI) is the world’s first international treaty on artificial intelligence (AI). The Convention sets baseline requirements that future AI legislation in the jurisdictions of its signatories must meet. To date, only the European Union (EU) has enacted a comprehensive AI law, the EU AI Act.
Readers familiar with the EU AI Act will recognize many of the Convention’s principles and provisions. The EU AI Act goes further than the Convention in specificity and detail, but the Convention illuminates the likely future baseline regulatory burden for most private businesses in jurisdictions around the world.
Council of Europe
The CoE is an international organization that promotes human rights, democracy, and the rule of law. Based in Strasbourg, France, the CoE should not be confused with the institutions of the EU. The CoE welcomes the participation of many states outside Europe, including the United States (US), and its chief outputs are multilateral treaties and conventions protecting fundamental human rights. In March 2022, the CoE expelled Russia from membership following Russia’s full-scale invasion of Ukraine.
Negotiations
Negotiations began in 2019 with the establishment of the Ad hoc Committee on Artificial Intelligence (CAHAI), which was later succeeded by the standing Committee on Artificial Intelligence (CAI). Finalized in May 2024, the treaty opened for signature in September 2024. The United States signed the Convention on AI on 5 September 2024.
Previous Agreements on AI Governance
Although the Convention on AI is indeed the world’s first legally binding treaty on AI and how AI systems should be governed, it is not the first multilateral agreement on AI. The Organisation for Economic Co-operation and Development (OECD) established its Principles for Trustworthy AI in 2019 and revised them in May 2024; they align closely with the Convention’s standards. Additionally, the Bletchley Declaration, adopted at the UK’s AI Safety Summit in November 2023, set voluntary commitments for AI safety cooperation.
A follow-up summit in Seoul in May 2024 produced the Seoul Declaration, which advanced AI safety through commitments to establish AI safety institutes and to foster collaborative research.
The G7 nations also launched the Hiroshima AI Process, which produced two documents in October 2023: guiding principles for advanced AI systems (which align closely with the OECD Principles) and a voluntary code of conduct encouraging organizations to undertake risk assessments, implement transparency measures, and adopt robust governance, privacy, and security practices.
Taken together, these legally non-binding agreements reflect a sustained, multiyear multinational effort to build consensus on AI risks and governance. The Convention on AI complements that groundwork, turning the shared principles of the OECD Principles, the Bletchley and Seoul Declarations, and the G7’s Hiroshima AI Process into binding treaty commitments.
A Commitment to Minimum AI Requirements in National Legislation
By signing this treaty, the parties commit themselves to drafting national legislation that conforms to the Convention’s minimum requirements. Such legislation is expected in the next several years as countries adopt laws governing AI.
Chapter III of the Convention on AI identifies various principles related to activities within the lifecycle of artificial intelligence systems. These include the following:
Human dignity and individual autonomy,
Transparency and oversight,
Accountability and responsibility,
Equality and non-discrimination,
Respect for privacy and personal data protection,
Reliability, and
Safe innovation.
The Convention’s definitions and principles generally align with those set forth by the OECD and other national standards bodies, with the addition of “safe innovation.” Safe innovation is a concept supported by Article 57 of the EU AI Act, which requires each Member State to establish a regulatory sandbox by 2026. The general intent is to provide an environment that fosters innovation and facilitates the development, training, testing, and validation of innovative AI systems for a limited time before they are placed on the market or put into service. Within the sandbox, national regulators provide guidance, supervision, and support to AI system designers and developers, focusing on identifying risks (in particular to fundamental rights, health, and safety) and on testing mitigation measures and evaluating their effectiveness.
Important Exemptions
As with the EU AI Act, the Convention on AI does not cover the use of AI systems for national security and defense purposes. Several nations are developing lethal autonomous weapons systems (LAWS), notably for the wars in Ukraine and the Middle East, and AI is poised to make these systems more lethal and effective. The UN Secretary-General has called LAWS politically unacceptable and morally repugnant, and he has called for their prohibition. But nations are loath to limit their options when it comes to technological innovation for national defense and security, and novel applications of AI systems in the defense space are no different.
AI Governance Requirements
The Convention on AI outlines various governance requirements for national AI legislation, specifically covering the areas of remedies, procedural safeguards, and the assessment and mitigation of risks and adverse impacts.
Remedies
Article 14 requires signatories to provide appropriate remedies in national law if and when violations of human rights occur during the lifecycle of artificial intelligence systems.
Specifically, people subject to the application of AI systems must have access to documentation about how those systems operate. If decisions are made about people through the use of AI systems, those people must be suitably informed and must have the ability to contest such decisions before a competent authority.
Procedural Safeguards
Signatories to the Convention on AI must ensure that, when an AI system significantly affects human rights, effective safeguards and legal protections are available for the people who are impacted. Additionally, laws should require developers and deployers of AI systems to inform individuals when they are interacting with an AI system instead of a human.
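To make the disclosure duty concrete, here is a minimal, hypothetical sketch in Python of how a deployer might prepend an AI-interaction notice to automated responses. The function name and notice wording are illustrative assumptions; the Convention requires that individuals be informed but does not prescribe any particular text or mechanism.

```python
def with_ai_disclosure(response: str) -> str:
    """Prefix an automated response with an AI-interaction notice.

    Hypothetical sketch: the notice text below is an assumption,
    not wording mandated by the Convention on AI.
    """
    notice = "Note: you are interacting with an automated AI system, not a human."
    return f"{notice}\n\n{response}"


# Example: a chatbot reply carries the disclosure before its content.
print(with_ai_disclosure("Your request has been received and is being processed."))
```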
Assessment and mitigation of risks and adverse impacts
Signatories must establish risk and impact management frameworks to ensure that AI systems are safe. Such risk assessments should include the following, at a minimum (a brief illustrative sketch follows the list):
The context and intended use of artificial intelligence systems,
The severity and probability of potential impacts,
The perspectives of relevant stakeholders, in particular persons whose rights may be impacted,
Continuous monitoring for risks and adverse impacts,
Documentation of risks, actual and potential impacts, and the risk management approach, and
Testing of artificial intelligence systems, where appropriate, before making them available for first use and when they are significantly modified.
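As a rough illustration of how these minimum elements might map onto an internal compliance artifact, here is a hypothetical Python sketch of a risk-assessment record. All class, field, and method names are assumptions made for this example; the Convention does not prescribe any schema, severity scale, or scoring method.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    """Coarse severity scale; the Convention does not define one."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskAssessment:
    """Hypothetical record covering the Convention's minimum elements."""
    system_name: str
    context_and_intended_use: str              # context and intended use
    severity: Severity                         # severity of potential impacts
    probability: float                         # estimated probability, 0.0 to 1.0
    stakeholder_perspectives: list[str] = field(default_factory=list)
    monitoring_plan: str = ""                  # continuous monitoring for risks
    documented_impacts: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
    tested_before_first_use: bool = False      # testing before first use
    retest_on_significant_change: bool = True  # retest after significant changes

    def needs_review_for_restriction(self) -> bool:
        """Flag risks that may warrant limiting or banning a use
        (an illustrative threshold, not a rule from the Convention)."""
        return self.severity is Severity.HIGH and self.probability >= 0.5


# Example: a high-severity but lower-probability assessment.
assessment = RiskAssessment(
    system_name="resume-screening-model",
    context_and_intended_use="Ranking job applications for human review",
    severity=Severity.HIGH,
    probability=0.3,
    stakeholder_perspectives=["Applicants whose rights may be impacted"],
)
print(assessment.needs_review_for_restriction())  # False
```

A structured record like this also supports the documentation and monitoring elements directly: the record itself is the documentation, and the monitoring and retesting fields prompt periodic review.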
Signatory laws should identify measures that address and mitigate the adverse impacts of AI systems. Finally, signatory laws should address the potential need to limit or ban certain uses of AI systems where such uses harm human rights, the functioning of democracy, or the rule of law.
The principles and requirements in the Convention on AI will shape many national AI laws around the world.
Think Globally Now
The Convention on AI marks a pivotal step toward globally consistent AI governance, setting baseline standards to protect human rights, democracy, and the rule of law in an era of rapid technological change. Although previous agreements laid important non-binding groundwork, this treaty formalizes a commitment to ensuring that AI systems operate transparently, safely, and responsibly across borders.
The Convention on AI provides a blueprint for national AI legislation that balances innovation with ethical responsibility. As countries draft and refine their future AI laws, businesses should heed the Convention’s principles and requirements, adopting strategies, policies, and practices that position them for compliance anywhere in the world.
Contact Us
If you want more information about the impact of data protection, privacy, and artificial intelligence on your business, please reach out for a free consultation. 1GDPA assists organizations that need professional advice on securing and leveraging their data in a responsible and legally compliant manner. We will be happy to help you create, update, and mature your data protection, privacy, and AI governance, risk, and compliance programs.