Published on 24 January 2024

AI provokes geopolitical issues that affect us all

Of course, AI is about technology, but it is also a major issue for governments, businesses, and international and regional organizations. As it develops, it raises economic, political, and cultural questions and is leading to various dependencies, power struggles, rivalries, risks, and opportunities. All players need to defend their interests and play their cards right. Alice Pannier, a researcher who heads the Geopolitics of Technology Program at the French Institute of International Relations (Ifri), explains more.


Alice Pannier,
Researcher, Head of the Geopolitics of Technology Program
French Institute of International Relations (Ifri)


What does AI’s geopolitical map look like today?

“At a major governmental level, we can see two main aspirations: firstly, competitiveness and economic power, and secondly, greater security guarantees.”

Alice Pannier, French Institute of International Relations (Ifri)

Alice Pannier: Key players include individual governments or, in the case of the European Union, regional institutions, but also private companies, and even individuals. Their own dynamics may be parallel or associated without necessarily going in the same direction. And the landscape also varies depending on the scale of the analysis you’re looking at.

If we look at the major governmental level, such as the United States or China, at institutions such as the European Union, and, within the EU, at its member states, we can see two main aspirations. The first is to increase competitiveness and economic power, since these technologies drive growth, economic development, and modernization. The second is to obtain greater security guarantees, since these systems are central to a nation’s critical infrastructure, its military, and its domestic security, including the police and criminal justice systems.

This leads to the need for a government to reduce its dependency on foreign organizations for certain technologies, including AI. We’re talking about a nation’s desire for technological sovereignty, technological independence, and technological leadership.

What about the role of companies?

A.P.: Companies contribute to technology development, sometimes down to the level of individual developers, and they are very densely interrelated. These companies operate on a global scale: the widespread use of American technological solutions in Europe is a familiar example. Moreover, the digital economy is more or less globalized, even if some countries, China among them, restrict the use of foreign technologies.

This strong interconnection also runs through open-source solutions, which constitute real ecosystems. Indeed, software developed in the open, including for AI, is supported by developers from companies of every origin, including Chinese firms such as Huawei. These ecosystems are themselves subject to restrictions on certain technological equipment, for example in the United States or Europe.

These two levels, state and business, operate according to quite different logics: the national interest on the one hand and private interest on the other, and the two often pull in opposite directions.


In what way is artificial intelligence central to these power struggles?

“AI’s geopolitical map must take into account who owns the computing power, who owns the data, and who conducts the AI research.”

Alice Pannier, French Institute of International Relations (Ifri)

A.P.: When we talk about artificial intelligence, it’s important to understand its various building blocks. There are algorithms, which are spoken about a lot in the context of regulation, particularly by the European Union. There’s also the data, computing power, and cloud systems in which this data is stored, and which will be used to train the models. Finally, there’s a whole internet network infrastructure that underpins it all.

To establish AI’s geopolitical map, attention must be paid to the balance of power: who owns the computing power, who owns the data, and who conducts the AI research.

The result is a complex picture: realism, power struggles, a great deal of interconnection, and at the same time a national desire to take back control of cyberspace within each country’s borders. Every government, whether Chinese, American, Russian, or European, wants to be sure that the rules that apply within its territory are in line with its values.

Does each of these major states have a strategic vision?

A.P.: Faced with the dual challenges of economic competitiveness and national security, each state is trying to find its own balance, since the security imperative inevitably impacts civil liberties.

In terms of regulatory ambitions, the European Union stands out for its so-called horizontal approach, i.e. dealing with AI as a whole, which is also a risk-based approach. The aim is to restrict uses deemed too risky, which may conflict with individual rights and freedoms already enshrined in law. To this end, the EU may ban certain types of artificial intelligence, or at least certify or control a set of systems that are deemed high-risk, for example in education or employment, to avoid discrimination.

In the United States, the approach is more vertical, choosing for the time being to delegate the control of roadmaps to each federal department or agency so that AI is used according to rules that, for example, protect the consumer or combat discrimination. Nevertheless, the executive order issued by President Biden last October places as much emphasis on developing American competitiveness and the ability to attract talent as on regulation.

The Chinese are also regulating AI, more specifically generative AI. As in Europe, there is a certain transparency requirement concerning algorithms, with obligations to share information on model training, data sources, and so on. But there is a marked difference in the values and principles that underpin this control: the aim is not to protect individual rights and freedoms but to uphold socialist values. Generative AIs must not produce content that could undermine national security, and in China national security is a broad notion.

Are some states adopting alternative positions?

A.P.: The United Kingdom and Israel are known to have a more proactive approach to innovation, but with relatively weak regulations underpinning it. And it is true that the UK is one of the European countries that is innovating the most in AI and digital tech. For the time being, I don’t believe that any analysis has been conducted to understand the impact of this stance since Brexit. In the absence of safeguards, abuses can occur.

How does the balance of power manifest itself between public organizations?

“The United States wants to win the tech race against China. The Chinese want to develop their own national technologies and alternatives to American tech.”

Alice Pannier, French Institute of International Relations (Ifri)

A.P.: The United States wants to win the tech race against China, even though, strictly speaking, its measures were initially justified by national security. The aim is to encourage innovation at home and to curb China’s technological development through restrictions on exports and technology transfers to China, but also by prohibiting American players from investing in certain Chinese high-tech sectors. This applies to a range of fields, from supercomputers and semiconductors to artificial intelligence in the broadest sense. The United States is also trying to convince its partners in Europe, Asia, and the Global South not to adopt Chinese technologies and to opt for restrictive measures similar to its own.

The Chinese want to develop their own national technologies and alternatives to American tech in order to reduce their dependency on the United States. The balance of power also leads them to try to maintain links with other regions of the world, including the European Union, so as to limit the effects of the American measures.

The European Union is in an uncomfortable position. It has strong AI assets, and the sector is developing at a good pace, especially in France in recent years. Nevertheless, the EU is very dependent on American technologies and Chinese equipment. Thus, the Sino-American tension is creating a lot of concern in Europe.

The EU aims to maintain a good trade relationship with China so that it can cooperate on major issues such as climate change and the ecological transition. But at the same time, it is quietly reducing its interactions with and dependencies on China in areas such as AI. Chinese OEMs, for example, are banned in the EU, and cooperation in AI research and telecoms is being scaled back or called into question.

How does the European Union clash with the United States?

“The biggest issue is in terms of dependence on the large American platforms.”

Alice Pannier, French Institute of International Relations (Ifri)

A.P.: The biggest issue is in terms of dependence on the large American platforms, which are the main service providers for Europeans, particularly in cloud, AI, and automation.

To deal with the challenge, you either need to find an alternative or ensure these tech giants behave in an appropriate way with regard to European interests and law.

European sovereignty is central to this dual agenda. GDPR and the new regulations on AI are intended to apply to the large American systems deployed in the European Union. The size of the European market gives these rules extraterritorial reach and influence, although it should be noted that the lobbying from American tech giants during the preparation of the new AI regulation was extremely intense.

How great is the influence of private companies on AI?

A.P.: If we look at it from France’s point of view, there have been concrete changes in the dialog and cooperation between the state and critical infrastructure service providers. It is now the major foreign players, above all American ones, who occupy these strategic positions.

However, these private companies have their own interests, are subject to US law, and in some cases have extraterritorial influence. Because they operate across many links in the value chain, from office automation tools to the cloud and network infrastructures, there is a multifaceted dependence over which the French government has no control. In the end, we cannot impose our own rights or laws on these service providers, even in France; we find ourselves in an almost inverted relationship of dependence.

The challenge that underpins the development of a French and European AI ecosystem, as in other digital sectors, is to be able to have greater trust in service providers. Jean-Noël Barrot, the French government’s Minister Delegate for Digital Affairs, has often spoken of the importance of cultural relations. This makes perfect sense in terms of AI, because all the systems that are and will be created by French or European companies will carry our culture and our values, via the data on which they are trained, or even in their algorithms. The issue isn’t just about our economy or national security, but it also has cultural and civilizational dimensions. We can debate that further, but in any case it has already been raised by the minister.

How are the major American tech giants addressing these cultural concerns?

A.P.: I suspect the answer from a large American company would be that its GenAI systems are trained on open data accessible on the Internet, and are therefore ultimately a reflection of what is on the Internet rather than the product of a particular political or cultural vision: a rather agnostic and technicist stance. I don’t think our expectations are really being met.