The Council of Europe’s draft AI Treaty: balancing national security, innovation and human rights?

Commentary
Christopher Lamont
18 March 2024

The Council of Europe’s draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law aims to become a “first-of-a-kind treaty,” according to the announcement that marked the finalization of the convention’s draft text. This effort to conclude a multilateral treaty on AI aspires to set global standards for artificial intelligence that are consistent with human rights, democracy, and the rule of law.

The finalized text, which has yet to be made public, will now be put to the Committee of Ministers for eventual adoption and opening for signature. The announcement comes on the heels of concern that the effort would collapse following the U.S. appeal to exempt private companies from the scope of the new regulatory instrument. While these developments were largely overshadowed last week by the European Union’s AI Act, another recent groundbreaking move to regulate AI, the Council of Europe’s AI Convention, unlike the EU AI Act, is an international instrument that would be open to ratification by Council of Europe member states and non-member states alike.

Before moving forward, it is important to underline that the Framework Convention on Artificial Intelligence remains the product of a consensus among the 46 member states of the Council of Europe, with input from influential non-voting observer states, such as Canada, Japan, and the United States. This consensus appears to have been achieved on some of the most divisive questions in the global governance of AI, including where to strike the balance on contentious points such as the prospective regulatory scope of a new treaty, with particular regard to national security and defense, and, crucially, on the question of obligations for private companies.

With respect to human rights treaties, we often find an empirical pattern: the more rigid and less flexible a treaty’s provisions, the fewer states commit to ratification. This implies a tradeoff between a treaty’s binding depth and its breadth in terms of potential membership. At first glance, the Council of Europe’s draft framework convention appears to be no exception.

Indeed, the need to keep Canada, Japan, and the United States on board with the Council of Europe’s AI convention led to a protracted back-and-forth between these observer states on the one hand and the European Commission on the other, with the latter insisting that private companies not be excluded from the world’s first treaty on artificial intelligence. To be sure, Brussels would like to see as much alignment as possible between the EU AI Act and the Council of Europe’s AI Treaty.

However, according to initial reports, the compromise agreed within the Council of Europe will grant states broad leeway to ‘pick and choose’ whether to apply the treaty’s provisions to the private sector. Moreover, when it comes to national security, a broad carve-out exempts activities that protect the national security interests of a signatory state from the convention’s provisions.

For civil society organizations closely watching the Council of Europe process, this double blow of failing to address private companies while also granting states a broad national security exemption would provide, according to a widely endorsed open letter, “little meaningful protection to individuals who are increasingly subject to powerful AI systems prone to bias, human manipulation, and the destabilisation of democratic institutions.”

In addition to these concerns, there is the added problem of the domestic political climate in the United States, which, alongside Canada, Israel, Japan, and the United Kingdom, sought to limit the convention’s scope to public bodies. While the U.S. certainly was not alone in advancing this position, it is difficult to envision a path for the U.S. Senate to ratify any major new human rights treaty commitments. The real risk, then, is that although the Council of Europe opted for a draft convention that could attract the broadest possible number of signatories by addressing U.S. concerns about binding obligations for private companies, this has done little to alleviate doubts about Washington’s ability to ratify the convention.

On the other hand, the difficulties encountered during this negotiation process shed light on only a relatively small schism among broadly like-minded democracies over the future of AI governance. The observer states that had input into these negotiations are not the only global players seeking to set standards for AI regulation; as a community of states that largely share core democratic values, the Council of Europe, the European Union, and their partners have a strong stake in establishing norms of AI governance that can address the effects of datafication and algorithmic decision-making in a manner consistent with human rights.

What do these developments mean for the global governance of artificial intelligence? The compromises necessary to secure what human rights advocates will see as a watered-down instrument will certainly temper ambitions for future attempts to secure an international consensus on the governance of AI, particularly as it relates to human rights. At the same time, the fast-changing AI governance landscape will require constant revisiting and recalibrating of past compromises, while also looking towards new and emerging governance frameworks. In the near term, this means addressing interoperability questions between the various national regulatory approaches being advanced, but also not abandoning efforts to address the human rights, democracy, and rule of law challenges posed by artificial intelligence.