The geopolitics of artificial intelligence

Anastasia Kapetas | 24 December 2020

As artificial intelligence technologies become more powerful and more deeply integrated into human systems, countries around the world are struggling to understand the benefits they might bring and the risks they might pose to national security, prosperity and political stability.

These efforts are still very much a work in progress. Australia is developing a whole-of-government AI action plan, led by the Department of Industry, Science, Energy and Resources. The department released a discussion paper this year and closed its call for submissions in November.

In line with the department’s brief, the paper concentrates on the economic potential of AI, while acknowledging the need for a human-centred, ‘responsible’ AI regime. That reflects a push internationally to conceptualise AI in terms of digital human security and human rights.

But AI technologies also have serious implications for national security and geopolitics, which need to be thoroughly explored in any discussion of what an AI framework for Australia might look like.

In any discussion of AI, it's important to note that definitions of the technology are not settled and its applications are vast. But most definitions circle around the idea of machine learning: the ability of a digital system not just to automate a function, but to learn from interactions with its environment and optimise its performance accordingly.
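To make that distinction concrete, here's a minimal sketch in Python. The spam-filter scenario, names and numbers are invented purely for illustration; nothing here describes any specific system.

```python
# A deliberately tiny illustration of the distinction above: a fixed rule
# merely automates a decision, while a learning rule adjusts itself from
# feedback. The spam-filter scenario and all numbers are invented.

def automated_rule(message_length: int) -> bool:
    """A hand-written rule: automates a decision but never changes."""
    return message_length > 100


class LearningRule:
    """A learned rule: nudges its own threshold after each piece of feedback."""

    def __init__(self, threshold: float = 100.0, step: float = 5.0):
        self.threshold = threshold
        self.step = step

    def predict(self, message_length: int) -> bool:
        return message_length > self.threshold

    def update(self, message_length: int, was_spam: bool) -> None:
        # Move the threshold toward whatever would have classified this
        # example correctly -- 'optimising a function' from experience.
        if self.predict(message_length) and not was_spam:
            self.threshold += self.step   # too aggressive: raise the bar
        elif not self.predict(message_length) and was_spam:
            self.threshold -= self.step   # too lenient: lower the bar


rule = LearningRule()
for length, was_spam in [(120, False), (80, True), (90, True)]:
    rule.update(length, was_spam)
print(rule.threshold)  # no longer 100: the rule has adapted to its 'environment'
```

The fixed rule behaves identically forever; the learning rule's behaviour is a product of the data it has seen. That data dependence is why control over datasets looms so large in the geopolitics discussed below.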

The AI systems we need to think about in national security terms include surveillance and profiling tools, the persuasive AI that pervades digital social networks, predictive algorithms and autonomous systems. It's also important to consider control of the entire AI supply chain: from the human-generated datasets that AI technologies learn from, through research and development and technology transfers, to the effects of AI systems on societies.

But the geopolitical picture of AI is now a contested tangle of state rivalry, multinational monopoly growth and public anxiety.

That AI is deeply embedded in the discourse of geopolitical competition is well established. The belief that AI will be the key to military, economic and ideological dominance has found voice in a proliferation of grand AI mission statements by the US, China, Russia and other players.

Whether an AI advantage will deliver pre-eminent power to any one nation is arguable. National control over AI technology remains elusive in a world of globalised R&D collaboration, global supply chains and transnational digital technology companies.

But the perception, at least, has driven intense national economic competition over the establishment of global AI-powered monopolies in almost every sector—energy, infrastructure, health, online gaming, telecommunications, news, social media and entertainment—and the enormous data-harvesting power that goes with them.

Governments are also racing to develop military AI technologies such as lethal autonomous weapons and swarming systems, as well as the AI-enhanced surveillance, communications and data-exploitation capabilities they hope will give their forces a decisional edge on the battlefield.

At the same time, countries are trying to unwind the globalisation of AI technology in order to control R&D collaboration and technology transfers. Individual nations and alliance systems are beginning to champion their own versions of AI norms and technology bases.

In the process, the huge datasets held by governments, corporations and various data brokers have become a strategic asset. They are coveted as the raw fuel needed to train machine-learning algorithms.

Governments have been actively exploring ways in which these datasets can be weaponised: to create cyber weapons targeting critical infrastructure, to influence another country's information systems, to build better profiles of its elites for influence targeting, and to form a clearer picture of the internal dynamics of its political system.

As governments experiment with these uses, how datasets are collected and where they are housed is becoming a national security issue. The decisions by the US and others to ban Huawei and to force the divestiture of TikTok can be seen at least partly in this context.

But as the competition for the AI edge heats up, the initial excitement and uncritical embrace of this technology has darkened to a mood of profound unease.

Democratic governments are being forced to grapple with the fact that the AI algorithms running social media platforms are built to maximise user engagement and to nudge behaviour in ways that can be sold to advertisers. And these learning algorithms have supercharged the possibilities for what some analysts have termed 'sharp power': the manipulation of public sentiment through computational propaganda, disinformation and conspiracism by foreign actors and their domestic proxies.

Deep fakes, synthetic media created with the help of machine learning, can be entertaining, but they are becoming another tool in the burgeoning disinformation arsenal. AI-generated disinformation was reportedly used by China to interfere in Taiwan's presidential election in January, and by partisan operatives to discredit Democratic candidate Joe Biden's son Hunter ahead of the US election.

The past year has at times seemed like a laboratory for demonstrating the malignant effects of AI-driven communications platforms on politics. In the case of the US, the corrosive effect on credible governance and institutional legitimacy has threatened democratic norms, the government's ability to mount a credible pandemic response, and its reputational power abroad.

Further, the increasing AI-enabled convergence of the physical and digital worlds is constantly creating new infrastructure vulnerabilities. The development of 5G 'smart cities', the mass automation of public infrastructure via sensors and learning algorithms, will open up even more avenues for surveillance, data weaponisation and criminal cyber activity, and will give foreign adversaries further means to reach into societies at a granular level. The recent discovery of a massive cyber-espionage campaign against US government systems, enabled through the compromised software of a US government contractor, is a reminder of what's possible here.

All of this has governments and publics around the world signalling alarm, if not outright panic, about the destructive power of AI platforms. As the year comes to a close, the US has launched anti-trust actions against Google, which will almost certainly survive into a Biden administration. The EU has opened an investigation into anti-competitive conduct by Amazon. This alarm is no longer confined to democratic countries. China is drawing up new anti-trust measures squarely aimed at its own AI behemoths, Alibaba, Tencent and Baidu.

This fight between states and global platforms will be a defining feature of the next decade, as will the fight for public trust in AI technologies. China's pioneering deployment of state-directed, AI-enhanced surveillance illustrates a chilling totalitarian vision: intimate control of individual citizens through their dependence on integrated digital smart systems.

As citizens feel more and more powerless against the growing use of AI, they will increasingly push both governments and platforms to be more ambitious in designing technologies and enforceable regulatory regimes that are centred on the public interest.

By engaging transparently with the high risks to security, democracy and social cohesion inherent in many AI applications, Australia has the opportunity to develop innovative policy that could help set standards internationally.

Anastasia Kapetas is national security editor at The Strategist. 

This article was originally published on The Strategist.
Views in this article are the author's own and do not necessarily reflect CGS policy.

