In 1982, Time Magazine’s cover depicted a man sitting opposite a computer screen. The headline read “The Computer Moves In”. Time’s cover captured a pivotal civilizational moment, as the introduction of the personal computer would lead to the creation of a new society: a digital society complete with its own logics, norms, values and laws of thermodynamics, a society centered on constant connectivity and unlimited access to information. Last week, Time Magazine’s cover depicted a chat with the AI system ChatGPT. Though the headline was missing, the association with the 1982 cover was evident: we are now at the moment when “AI Moves In”.
AI, or artificial intelligence, is in no way a new technology. Machines have for some time had the ability to perceive, synthesize and infer information. Moreover, AIs have already become household fixtures. Google search recommendations, social media feeds, Siri and Alexa are all examples of AIs used daily by billions of people. What is unique, even revolutionary, about ChatGPT is its open and easy-to-use interface. Until now, humans have mostly interacted with the outputs of AI, be it when viewing tailored social media content or scrolling through Google results pages. Those who did interact directly with AI systems were professionals, or had professional knowledge and the ability to write code and program AI systems.
Now, however, every internet user can use an AI system.
The main challenge is that, much like algorithms, ChatGPT is a black box. We know little about its sources of information or the data it uses to answer users’ prompts. It is possible that ChatGPT draws its information from biased sources, from sites rife with inaccuracies or from databases whose information is outdated. There have even been reports of ChatGPT inventing information. One academic asked ChatGPT to offer an academic definition of the term ‘nation branding’. While ChatGPT’s answer was accurate, it listed several academic references that do not exist.
In recent weeks, as ChatGPT has moved into the house, academics and policymakers have warned that open AI systems may have a detrimental impact on society. Students could use ChatGPT to write essays, applicants could use it to take their exams, while legislators might use it to formulate laws and regulations introduced in parliaments. While these challenges are noteworthy, they do not include the main challenge that ChatGPT will pose to diplomats.
Thus far, discussions of how ChatGPT may impact diplomacy have focused on its potential application in traditional diplomatic domains. For instance, ChatGPT could be used to automate consular services. Similarly, diplomats may one day use ChatGPT to prepare for negotiations. A diplomat could ask ChatGPT for a summary of Russian statements on the future of Donbas ahead of negotiations to end the destructive war in Ukraine. A press attaché could ask ChatGPT to analyze how their country is depicted in local newspapers. Yet alongside these potential benefits lies an important challenge: as ChatGPT moves into the house, users will increasingly rely on this AI system to learn about the world around them.
This is a formidable challenge given the possible biases and inaccuracies in information generated by ChatGPT. Taken to the extreme, one can imagine a scenario in which users employ ChatGPT much as they use Twitter and Facebook: to learn about the events, states and actors shaping their world in near-real time. Yet inaccuracies in ChatGPT would create a false, alternate reality in which these users exist and operate: an alternate reality in which Syria is flourishing, in which Russia never fought Ukraine, or even one devoid of the Trump presidency. Due to Trump’s divisive politics, ChatGPT refrains from offering complex answers to questions pertaining to him. The same is true of other newsworthy individuals, such as Twitter’s new owner Elon Musk.
The greater the gap between reality and ChatGPT’s alternate reality, the more people will struggle to make sense of the world around them. News reports of events and actors that conflict with ChatGPT’s results will create a growing sense of uncertainty and estrangement from the world. This has already happened as a result of disinformation and misinformation spread on social media sites.
For diplomats, this gap is a serious issue, as feelings of uncertainty and estrangement often result in political polarization. When people can no longer make sense of the world, they yearn for the world of yesteryear, for a world that makes sense. This breeds an affinity for reactionary politicians who promise a return to a simpler time and to a world that does make sense. Indeed, Donald Trump’s promise to Make America Great Again was actually a promise to make the world coherent again. In place of the fluidity that marks present-day reality, Trump offered the old world of dichotomies: of “good guys” and “bad guys”, of “patriots” and “traitors”, of “men” and “women”.
The past years have shown that reactionary politicians undermine diplomacy in numerous ways. They regularly denounce globalization as a societal ill brought about by an evil cabal of multilateral policymakers. Reactionary politicians also label multilateral institutions as outdated (NATO), corrupt (WHO) or ineffective (UN). These politicians embrace a narrow national prism through which the world is viewed, a prism that scorns global solutions to global challenges. Moreover, they use their office to undermine trust in governments and their agents, including diplomats and policymakers. Finally, reactionary politicians create the illusion of a global financial elite bent on erasing national cultures and heritages, which is then used as an excuse to abandon multilateral institutions such as UNESCO.
Thus, ChatGPT may further undermine trust in diplomats and their institutions while decreasing diplomats’ ability to formulate shared, global responses to shared challenges. This is a serious predicament in a globalized world in which the actions of one actor send out local, regional and worldwide ripple effects.
As ChatGPT moves into the house, diplomats must acknowledge the challenges it poses and move to mitigate its potentially negative impact. One way to do so would be to open ChatGPT’s black box: to regulate open AI systems and ensure that each answer includes the sources of information and the databases used to generate it. ChatGPT’s results should also recommend additional sources of information and clearly label instances in which generated information may be biased, inaccurate or outdated. These are but a few of the ways in which diplomats could mind the gap between reality and ChatGPT’s reality.