AI Governance for Latam: Mapping the Most Relevant Global and Regional Forums (Part 1)

The forums and spaces where AI governance is being discussed are so numerous and diverse that, at times, they can feel overwhelming to those of us closely following the various activities and agendas where AI is discussed with the aim of agreeing on principles, norms, or technical standards, and on the human rights that apply to its design, deployment, operation, and uses.

However, not all forums and spaces have direct or immediate relevance to Latin America. Of the myriad forums discussing this topic, only a handful involve the states of our region, and those become relevant because of the commitments these states would assume and because citizens and civil society could participate in them and scrutinize them.

Here we focus on these spaces, in particular on what happens in the G20, ECLAC, BRICS+, the OAS, and the most recent Intergovernmental Council for AI. We know, however, that there are scenarios that can indirectly influence our states, such as the Council of Europe and its regulation of AI. But what about the secret spaces of bilateral AI regulation that we are losing sight of, and that can also indirectly influence the discussion at the regional and global levels?

AI Governance: Everything Everywhere All at Once?

As we mentioned, there are other regulatory processes deliberately escaping public scrutiny and consensus: bilateral meetings between Big Tech and two specific countries, the United States and China, which, unlike the countries of Latin America and much of the world, not only consume AI systems but also produce them. The agreements that matter for the future of AI regulation and governance are happening there.

It is in these closed, non-transparent spaces, which therefore evade social scrutiny, that regulatory agreements emerge that should matter to us, perhaps as much as or more than the discussions on regulation and governance taking place in the better-known regional and global forums, of which we list only a few below.

For example, thanks to the press, we know that the U.S. government and Big Tech engage in dialogues in which the latter suggests how it should be regulated. This discursive strategy might make one believe the companies are 'asking' for regulation under the narrative of 'existential risks,' which suggests urgency and concern while disregarding the current, real harms associated with their products. It is a narrative that, fundamentally, seeks to instrumentalize decision-makers so that dominant actors in the digital ecosystem can impose their own agenda, one that, to begin with, removes transparency toward the public from the equation.

The relationship between Big Tech and China is even more opaque. According to a recent Financial Times (FT) article on secret diplomacy between representatives of companies like OpenAI, Anthropic, and Cohere and representatives of the Chinese government, the companies declined to comment on what was discussed in these meetings, which, per the article, included aspects of the regulation of their products; the conversations are nevertheless expected to continue, in order to keep addressing the challenges of aligning AI systems with social codes and norms. From which country or countries? The article does not specify, but even there the same narrative about 'existential risks' is maintained.

These discussions, then, turn their backs on the global and regional forums on AI governance that convene multiple parties and that would allow other interests and rights at stake to be made visible and articulated, with greater or lesser obstacles in any case. But perhaps not all discussion forums are equally relevant, or deserve the same level of attention, if the goal, given the unease generated by their diversity and heterogeneity, is to prioritize and focus on those that are most critical because of their lack of openness and transparency.

Faced with these scenarios of opaque, secret interaction, in which the future of a technology with far-reaching impacts on present and future societies is discussed, what strategies should be employed to make agreements between Big Tech and certain states transparent, and how can such discussions be opened up to social participation? What counterweights can be imagined so that other values at stake are considered in bilateral decisions about the future of a critical technology?

For now, it is worth recognizing that forums and spaces open to dialogue, however criticizable their openness and participation may be in practice, are numerous, while the few opaque environments closed to social participation exert a significant counterweight in the discussion on AI governance.