Ignoring the hype about Elon Musk’s attendance, this week’s AI safety summit is serious business. It marks the first significant diplomatic intervention on AI, and is therefore likely to frame how the world’s governments respond to this technology. There have been some welcome but tentative government interventions so far. The US introduced a voluntary code for AI companies developing models, which just this week was formalised as an executive order, while a more sophisticated legislative framework is currently working its way through the European parliament. Yet we are still a long way from any binding international agreement limiting the scale or scope of this technology.
The problem, though, and one that is symptomatic of this government, is who has (and who hasn’t) been invited, and therefore what the governments of the world are responding to. The summit’s main audience, the people Sunak is convening, are those he hopes will be able to influence the future of AI. Quite rightly this includes China, which has begun developing its own national models in response to ChatGPT. Beyond that, Rishi’s audience includes AI companies who he hopes will commit to some basic mitigations on accountability and governance, policymakers who need to think about regulation and how to monitor advancements, and the scientific community, which needs to identify shared priorities for research.
While these are all important groups to convene, there is a notable gap in the summit’s invite list. Sunak’s summit aims to look at how frontier AI models present risks that are not only existential but also societal, yet wider civil society groups and organisations that advocate for citizens are conspicuously absent.
Even those civil society groups in attendance that do advocate for the rights of citizens (and for many of the invitees this is not their main focus) are dwarfed by the financial power and reach of the businesses joining them. Of the 40 corporations present at the summit, Amazon Web Services, Google, Microsoft, Tencent and Nvidia alone have a combined market cap of $6.55 trillion. In the face of this level of corporate power, the relatively tiny civil society organisations (the Alan Turing Institute, for example, one of the UK’s leading institutes for data science and AI and an attendee, has an annual income of £51 million) may be able to make their voices heard, but will they be listened to?
They should be, because when thinking about AI safety we can’t just think about the Terminator-style existential scenario; we also need to consider the risks of mass unemployment and the social problems that arise from any industrial transition. Moreover, large, complex, even existential challenges such as climate collapse or technological change don’t just require action at the international level, but also sustained action at the local, immediate level. Too often we assume that simply convening global leaders or scientists will be enough, when more often than not our international governance systems fall short of producing any concrete action on these complex problems for the people they will most severely affect.
This is important because so much of how AI develops will be shaped by how organisations and institutions in society actually use it. The future isn’t created solely by inventors at the frontier; it’s built and realised by the institutions of the present. Just as we have grand summits like Sunak’s, we also have individual organisations across the country asking the same questions and having the same conversations about frameworks and policy. How should AI be used in my organisation? Why would we use it? How transparent should we be about its use?
In this broader context, civil society and the wider community of people using this technology seem to be largely missing from the conversation.
Indeed, a coalition of over 100 civil society groups, including the TUC, issued a letter this week calling the AI summit “a missed opportunity”, as the “communities and workers most affected by AI have been marginalised by the Summit”. This is a recurring problem in discussions of AI: as Rootcause’s analysis of around 1,000 traditional media articles found, corporate actors are by far the most frequently quoted by journalists covering AI, with no civil society voices among the top 25 most-quoted people or organisations.
The absence of these voices means some key economic and political questions do not seem to be on the table. Who should own new models? How do we ensure individual companies don’t establish a monopoly on this technology? How do we balance regulation with the availability of open-source models? Arguably many of these models are built on the collective output of our civilisation, so how is the value of this technology to be distributed equitably? How are decisions about AI governance being made collectively and democratically? These are critical issues for Labour, and with the Tories ignoring them, there is a chance to lead.
Already these issues have begun to be explored more deeply by a few progressive groups. Labour for the Long Term is calling for the creation of a Brit GPT for public services, while the Public AI network is campaigning for “public capacity in AI for the public good”. Similarly, the Collective Intelligence Project has begun experiments, run with Taiwan’s minister of digital affairs Audrey Tang, in how deliberative democracy can be used to govern AI. It is on these deeper questions of ownership and democracy that the government’s summit seems sadly lacking.
Over the past decade, in the face of other technological developments such as social media, politicians have been far too slow to understand and engage seriously with questions of safety, governance and regulation at all levels of society, from the lack of worker protections in gig work to harmful content on social media platforms. This oversight is now at risk of repeating itself. If Sunak’s summit has been convened to frame our collective international response to AI, it is a grave error to limit that opportunity to those who have the most to gain from AI and to exclude those with the most to lose.