Short circuit: What will artificial intelligence mean for diplomacy?
Over the past decade and a half, the rise of new technologies, social media platforms in particular, has transformed the field of diplomacy. Diplomats have new tools at their disposal, and social media now puts global audiences within instant reach of foreign policymakers. These platforms foster an exchange of ideas between individuals and civil society at home and abroad, and they continue to drive an ongoing evolution in how countries develop and leverage their “soft power”.
While social media platforms are easy to use and complement the work of diplomats, they have also become a double-edged sword. When social media was in its infancy, very few experts predicted that these same tools would be used by authoritarian states to launch misinformation campaigns, spy on citizens, harvest personal data, or interfere in the democratic elections of other states. Neither did many anticipate that non-state actors would use these technologies to plan attacks, incite violence online, broadcast propaganda, and brainwash others to engage in violent extremism.
Diplomats are now in a constant cycle of needing to update their ways of working to deal with the evolving challenges that stem from emerging technologies. One such frontier technology, Artificial Intelligence (AI), is fast starting to reshape our world. It is imperative that diplomats understand the key concepts of AI, how AI will impact diplomacy and international relations, and how it might be deployed for malicious purposes.
Why is this important? Because the AI evolution is happening now. It will continue to reshape most, if not all, industries and professions, including diplomacy. Diplomats need to see AI as a new addition to the wider toolkit that states use to influence other states and non-state actors. AI will permit countries to assert more power in the digital space and influence digital actors.
Diplomats receive regular training on language, culture, negotiation skills, religion, and international law, to name just a few. Going forward, they will need to have a conceptual and practical understanding of AI. Machine learning, algorithms, automation, bots, deepfakes and machine-driven communications tools (MADCOMS) all need to become part of the diplomat’s lexicon.
In the Atlantic Council report “The MADCOM Future”, Matt Chessen details how MADCOMS, “[t]he integration of AI systems into machine-driven communications tools for use in computational propaganda,” are developing at breakneck speed.
MADCOMS have the potential to produce highly personalized propaganda that will enhance various actors’ ability to influence people by tailoring persuasive, distracting, or intimidating messaging. Computational propaganda includes the use of automation, algorithms, and big-data analytics to manipulate public life by spreading disinformation online, producing automated amplification with bots and fake accounts.
This type of AI can extrapolate trends and large-scale patterns of behavior, which can be used to influence opinions, choices and decisions of individuals and the wider society being targeted. It is expected to lead to dynamic content generation, psychometric profiling, and automated video and audio manipulation tools. Massive amounts of online data can be processed to identify people based on their personality, political preferences, religious affiliation, demographic data, and other personal interests.
Falsifying Reality: Deepfakes
Deepfakes, which are media (video, audio, and images) altered by AI to falsify reality, pose a unique challenge. Their potential impact should not be underestimated. They can be used to exploit or sabotage individual identities, undercut rational decision-making, distort policy debates, manipulate elections, erode trust in institutions, exacerbate social cleavages, generate civil unrest, and disrupt bilateral relations between countries. Imagine a non-consensual, computer-generated version of an elected official’s face, such as Barack Obama’s, built from a series of pictures and mapped onto the expressions of another person in a video.
This technology can make anyone appear to say or do something that they never said or did, e.g. speaking in derogatory tones towards an ethnic or religious group. These AI-enabled methods that allow the creation of deepfakes are becoming more and more sophisticated, easily accessible, and relatively low-cost to produce. They also have the potential to become all-the-more threatening if used by computational propagandists for political manipulation.
Nefarious actors continue to innovate with AI to manipulate public opinion and sow division. The ongoing diplomatic spat between Canada and Saudi Arabia is a case in point. When the Canadian Embassy in Riyadh issued a statement in Arabic on Twitter calling for the Saudi government to release women’s rights activists in the Kingdom, the Canadian government observed that AI-powered bots were quickly deployed to foment societal divisions and encourage “separatist sentiments in Quebec”, threatening the country’s political stability.
AI models will increasingly be misused to generate fake news and spread malicious disinformation. Sophisticated algorithms are being developed to complement and eventually overtake what actual people are doing. In their article “The Coming Automation of Propaganda” for War on the Rocks, Frank Adkins and Shawn Hibbard warned that “recent advances in artificial intelligence (AI) may soon enable the automation of much of this work, massively amplifying the disruptive potential of online influence operations.”
Former Chief of the Russian General Staff Yuri Baluyevsky said a few years ago that a victory in information warfare “can be much more important than victory in a classical military conflict, because it is bloodless, yet the impact is overwhelming and can paralyze all of the enemy state’s power structures.” Russia has a long history of exacerbating divides on fractious social issues by targeting susceptible minority groups. It would be naïve for diplomats to believe AI will not be weaponized by Russia, other authoritarian states, and malicious non-state actors.
We are headed toward a future where machine-driven communications, enabled by AI tools, will dominate the online information environment. Soon, it may be impossible for people to tell whether they are interacting with a human or a robot online.
Authoritarian states, where journalists and civil society organizations have little to no freedom to hold governments accountable for the unethical use of AI, face few constraints in how they use these emerging technologies globally and against democracies. At the same time, AI gives authoritarian states a technological edge in an expanding digital world. At this year’s World Economic Forum in Davos, Switzerland, George Soros singled out China’s use of AI against its citizens and open societies as a “mortal threat”.
Democratic governments are starting to act. Global Affairs Canada has established the Centre for International Digital Policy, which monitors and responds to the misuse of AI. The Canadian Foreign Service Institute has begun training Canadian diplomats on the impact of AI cluster technologies on diplomacy. The Ministry of Foreign Affairs of the Netherlands has created the annual “Digital Diplomacy Camp”, an event that brings together tech experts and civil society leaders under one roof to share ideas and best practices with Dutch diplomats.
While this is a positive start, diplomats must catch up before it is too late. Democratic and open societies need to empower their diplomats and provide them with additional resources and training on all aspects of AI. Given that the internet is international, it is imperative that countries see the emergence of AI as a global issue, not just a technical one.
Kyle Matthews is Executive Director of the Montreal Institute for Genocide and Human Rights Studies at Concordia University. He is also a fellow at the Canadian Global Affairs Institute and a member of the Global Diplomacy Lab.
This article was first published in the Soft Power 30 report, a joint effort from Portland Communications, Facebook and the Center on Public Diplomacy at the University of Southern California. To see the report, please click https://softpower30.com/wp-content/uploads/2019/10/The-Soft-Power-30-Report-2019-1.pdf