In his last recorded interview, in 1996, astronomer Carl Sagan stated, “We’ve arranged a society based on science and technology in which nobody understands anything about science and technology. And this combustible mixture of ignorance and power, sooner or later, is going to blow up in our faces”. The ignorance and power to which Sagan referred are perhaps best exemplified by the growing importance of algorithms.
The digital society is based on algorithms. Indeed, algorithms play an important part in daily life: they allocate social services, determine employee recruitment, approve bank loans, trade on stock markets, judge visa and travel requests, shape whom we meet romantically, guide our travels abroad, help us exercise, determine our access to news, guide our search for knowledge and manage our health records. Importantly, algorithms also shape our use of social media, and they are the basis of social media’s financial model. All social media companies use algorithms to gather data on users and to sell advertisers targeted access to those users.
Given the centrality of algorithms to daily life, and to political and social processes, one would expect all of us to be algorithmic experts with a deep understanding of what algorithms are, how they function, what data they collect, what deductions they can make and how they shape our online and offline experiences. But as Sagan warned, most of us remain ignorant. People know that algorithms tailor their online activities, determining whom they hear from and what content they are likely to see. And yet, algorithms remain black boxes. Few Twitter, Facebook or TikTok users know what data is collected, how that data is used to generate knowledge, and how that knowledge then dictates their online activities.
Moreover, users do not know what processes and reasoning guide these algorithms. Social media algorithms are extremely powerful, weighing thousands of variables across millions of data points. Crucially, users are ignorant of the predictive power of algorithms. Can Facebook predict our political affiliation, sexual orientation, gender identity, financial affluence and love interests, even if we have never stated these online? And if so, how does Facebook perform this complex task? What variables are used to predict our personality traits? And what, if any, of this information is passed on to advertisers, the true users of social media?
The black-boxed nature of algorithms is compounded by the phenomenon of algorithmic bias. Generally, the term ‘algorithmic bias’ is used to denote that algorithms are not neutral and do not make neutral decisions. Rather, algorithms are biased, and this bias can impact daily life. For instance, biases in search engines can limit access to information. Imagine that Google had a ‘Western’ bias, ranking pages written in Western countries higher than those written in Southeast Asia. Algorithmic bias also refers to the fact that biases in algorithms can replicate offline inequalities. Amazon discovered a gender bias in its recruitment algorithm that favoured male applicants, while an American algorithm used to inform jail sentences was found to be biased against Black defendants. Biases have also been found in social media algorithms, where access to job adverts was racially biased and favoured men over women.
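The mechanism behind cases like Amazon’s is simple to illustrate: a system that learns from historically biased decisions will reproduce that bias in its own rules. The sketch below is a deliberately minimal toy, not a depiction of any real company’s system; the applicant data, group labels and “learning” rule are all invented for illustration.

```python
# Toy sketch of algorithmic bias: a naive "learner" trained on biased
# historical hiring decisions ends up demanding more from one group.
# All data and rules here are hypothetical.

# Historical decisions: past recruiters hired group "A" at modest scores,
# but hired group "B" only at a very high score.
history = [
    {"group": "A", "score": 80, "hired": True},
    {"group": "A", "score": 60, "hired": True},
    {"group": "B", "score": 80, "hired": False},
    {"group": "B", "score": 90, "hired": True},
]

def learned_threshold(records, group):
    """A naive 'model': the lowest score at which this group was ever hired.
    It mimics how learning from biased outcomes encodes the bias as a rule."""
    hired_scores = [r["score"] for r in records
                    if r["group"] == group and r["hired"]]
    return min(hired_scores)

# The learned rule now sets a higher bar for group B applicants,
# even though group membership says nothing about job performance.
print(learned_threshold(history, "A"))  # 60
print(learned_threshold(history, "B"))  # 90
```

Nothing in the code mentions discrimination explicitly; the unequal thresholds emerge purely from the historical record it was given, which is exactly why such bias is hard to spot from outside the black box.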
If we want to avert the combustible mixture of ignorance and technology, then we must open the black box of social media algorithms. This has become a societal imperative for three reasons. First, recent reports in the American press suggest that under Elon Musk’s leadership, Twitter has once again become rife with distorted information, false information and conspiracy theories. These all undermine trust in governments, leading to crises in democracies. They also drive political polarisation and create a violent town square in which reactionary politicians flourish. Second, studies suggest that social media harms users, and especially the mental health of young users. Third, the Russia-Ukraine War has once again proven that social media algorithms can be hacked by states and used strategically to psychologically harm civilians.
The need to open the black boxes of social media algorithms has been recognized as a diplomatic imperative and falls under the remit of digital diplomacy activities. The EU, NATO and the UN have all called on states to regulate social media companies and curb their power. To date, this has proven difficult. On the one hand, regulating social media would require an international consensus, and it is hard to imagine that the EU, Turkey, China and Russia could agree on how to reform social media. On the other hand, regulating social media depends heavily on the US government, as most social media companies are based in the US. Given that America’s business is business, it is hard to imagine that Congress would dare meddle in the financial model of tech giants.
All this changed with President Biden’s recent State of the Union Address. While speaking to both houses of Congress, the President said, “We must finally hold social media companies accountable for experimenting on our children for profits”. This statement not only garnered bipartisan support but received a bipartisan standing ovation. Diplomats should seize this momentum and work with their American counterparts to turn Biden’s vision into reality. For not only is the White House calling for such action, but social media companies are also vulnerable. Facebook’s mounting losses and Twitter’s negative depiction in the American press may mean that social media companies will be unable to prevent regulation.
While an international accord on social media reform may be impossible, American regulation of social media is a more realistic goal. Through traditional diplomatic means, ranging from bilateral meetings with US diplomats to lobbying Congress, diplomats hoping to reform social media, or at least to mitigate its societal and political impact, must seize the day and press the American government to begin opening algorithmic black boxes.