UN Summit of the Future: A Critical Moment for Global AI Governance

Commentary

Representatives and leaders from national and local governments, civil society, the private sector, and other segments of society will convene at the UN Summit of the Future, held on September 22-23, 2024 in New York, to adopt the Pact for the Future. An ‘action-oriented outcome’ document that includes an equally ambitious Global Digital Compact and a Declaration on Future Generations as annexes, the Pact for the Future is expected to be endorsed by UN member states at the conclusion of the Summit.

Building upon UN Secretary-General António Guterres’ 2021 Our Common Agenda report, and expanded upon in various UN General Assembly decisions and resolutions, the Pact for the Future acknowledges the inadequacy of our existing global governance framework. The governance institutions of the present have proven unable to deliver on the ambitious goals that the UN and its member states have set out. Chief among these goals are unlocking the opportunities and mitigating the risks that emerging technologies pose for development, security, and human rights, and making the changes needed to enable concerted and decisive action on shared challenges.

In recent years, Artificial Intelligence (AI) has seen immense improvements in capability and reach across nearly every aspect of human activity. AI can be used to greatly improve the world or to cause it irreparable harm. Leading experts from across the world expect these capabilities to continue to improve, with some expressing serious concern about the systemic risks they pose to humanity and the planet. As these advances progress, the urgency of developing effective global frameworks for AI governance becomes increasingly clear.

It is essential, therefore, that as leaders, academics, and activists from around the world meet in New York, one of their main priorities be the adoption of a comprehensive global compact for the governance of AI systems: one that establishes clear international standards, prioritizes safety, ethics, and transparency, and ensures equitable access to AI technologies across all regions.

There are positive signs in this regard. The US- and China-sponsored UN General Assembly resolutions calling for ‘safe, secure, and trustworthy’ AI passed with overwhelming support earlier this year, underscoring global backing for such standards, and dozens of regional, national, and local AI regulations have already been signed into law.

However, in light of well-documented and deeply concerning cases of algorithmic and systemic bias, human rights violations, and the weaponization of AI, Summit participants should negotiate and agree on the creation of an international regulatory body to oversee the development and deployment of AI systems. Such a body would ensure compliance with ethical guidelines; address risks such as bias, privacy violations, and weaponization; and prevent the monopolization of AI power by a few countries and technology companies. It must also promote transparency, ensuring in particular that the developers of AI systems can be held accountable.

A coordinated effort to develop AI safety measures, including research into robust AI systems that are controllable, interpretable, and aligned with human values, is necessary to mitigate the risks posed by increasingly autonomous technologies. The so-called ‘AI alignment problem’ refers to the real possibility of a mismatch between human values and machine goals. As some have pointed out, this means that even well-intentioned regulatory efforts may be subject to similar value-misalignment vulnerabilities. Any regulatory attempt must therefore proceed alongside rigorous scientific research to prevent socially and institutionally undesirable outcomes.

One of the main drivers behind the Common Agenda, and indeed of the Summit, is the realization that UN member states have fallen woefully short of the benchmarks outlined in the UN Sustainable Development Goals (SDGs). While AI has the potential to help close this gap, AI that is weaponized, controlled by a handful of actors, or not properly aligned with human values risks not only halting progress on these goals but reversing the gains that have been made.

AI may increase inequality not only between the Global North and the Global South but also within countries. Numerous studies have shown that the adoption of emerging technologies can disrupt both economies and political stability. While in the Global North such disruption may be offset by efficiency gains and relatively stable political institutions, the picture across the developing world is more complicated, with possible increases in social and economic inequality as well as in government repression and instability.

It is important, therefore, that advanced economies commit at the Summit to sharing the benefits of new and emerging technologies with developing countries, fostering collaboration, capacity building, and knowledge transfer to reduce global disparities in technological advancement. In this way, AI could be harnessed to advance the SDGs. The aspirations expressed in UN General Assembly resolutions must translate into concrete action plans that provide the necessary infrastructure, resources, and expertise, ensuring that developing nations are not left on the wrong side of the digital divide.

Such efforts should prioritize sustainable development, empowering these countries to harness technological advances for economic growth, social progress, and addressing critical challenges such as public health and climate change. This prioritization requires an inclusive dialogue among governments, civil society, and private industry to ensure that the voices of diverse communities, including marginalized and underrepresented populations, are central in shaping the future of AI governance.
