The 2024 Global Risks Report by the World Economic Forum identifies misinformation and disinformation as the most severe threat facing the world over the short term[1]. Behind this lies a recognition of the destabilizing potential of disinformation as malicious actors continue to refine their skills and technological developments offer ever more opportunities for more advanced and penetrating operations[2].
It is by now “the new normal”. All liberal democratic states have long understood that a digitalized media ecosystem, with its overwhelming number of platforms and highly anarchic structure, poses a particular challenge. This challenge has to be mitigated through a variety of approaches: some focus on the malicious actors, some on the technology and the platforms, and still others on the intrinsic cognitive resilience of the target populations. There is no silver bullet, and we should all proceed from the shared understanding that this challenge will never go away. In fact, everything suggests that the threat from disinformation is ever-increasing, and we need to remain alert to changes in any of these elements.
Most readers of these lines were probably equally shocked and fascinated by OpenAI’s ChatGPT as it gained widespread circulation in 2023. One commentator introduced it with the words, “Picture an AI [Artificial Intelligence] that truly speaks your language — and not just your words and syntax. Imagine an AI that understands context, nuance, and even humor”[3]. This was followed, in early 2024, by OpenAI’s Sora, an AI video generator whose mind-blowing features led one technology expert to proclaim that “generative video has gone from zero to Sora in just 18 months”[4].
While these are early-stage breakthroughs, and users have been quick to point to inaccuracies and glitches, from a disinformation perspective it is easy to see just how powerful and disturbing this is and will be. Even now, these tools can be used to create texts and videos of high quality and complexity, all at the push of a button. Future models will undoubtedly be vastly superior to what is available now. With this in hand, malicious actors can produce targeted and more convincing texts, free of some of the errors in syntax and grammar often seen today (for instance inaccuracies stemming from the absence of the definite article “the” in the Russian language), as well as targeted and hyper-realistic videos which, at least to ordinary viewers, will be essentially indistinguishable from authentic footage. Malicious actors may, for instance, create countless versions of the same video, adapting it to appease or antagonize different demographics (by ethnicity, gender, political preference etc.). It can all be done at negligible cost.
Relevant technologies will obviously continue to develop, and malicious actors will be quick to make use of them and to bring them onto all available platforms. It will be very hard, if not impossible, to restrict the access of malicious actors to these technologies, particularly if the malicious actor is a state. Technologies used for the creation of disinformation may often be employed in reverse, that is, to detect disinformation, which may then be flagged alongside the accounts and platforms on which it appears. However, even if fully automated, the monitoring and verification processes involve a delay, allowing disinformation to slip through. It is an illusion to think that we will be able to detect everything, to warn about it or have it removed, and to protect consumers from it. Members of the target population will be exposed to ever more sophisticated pieces of disinformation designed to shape their political preferences, undermine their trust in public institutions, radicalize them against other groups within society, and so on.
Cognitive resilience therefore remains of the essence. It includes a firmly held belief in the core political norms found within liberal democratic states – these norms should not be easily questioned – and an ability to reflect critically upon the overwhelming flows of information. Fortunately, NATO member states are generally well prepared for the expected wave of ever more sophisticated disinformation. However, there is no room for complacency. It is important to continue to share best practices, to learn both individually and collectively, and to face this threat together.
[1] World Economic Forum, The Global Risks Report 2024 (2024); available at https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf.
[2] I define disinformation as information known to be untrue or even deliberately fabricated to achieve certain effects. It is intentionally false. If this information is subsequently spread by someone who is unaware of its false nature, it is reduced to misinformation.
[3] Bernard Marr, “A Short History of ChatGPT: How We Got To Where We Are Today”, Forbes (19 May 2023).
[4] Will Douglas Heaven, “OpenAI teases an amazing new generative video model called Sora”, MIT Technology Review (15 February 2024).
Flemming Splidsboel Hansen
PhD, Senior Researcher
Danish Institute for International Studies
Denmark
