In the spring of 2023, while teaching media literacy to Finnish students as part of YLE's Uutisluokka project, I showed them a viral image of the Pope in a Balenciaga coat, one of the first AI-generated photos to go viral and fool global audiences. At the time, spotting the telltale signs of synthetic imagery was still easy. Yet it was already clear that the kids in that classroom, as well as the next generation of journalists, would face a verification challenge far beyond what traditional tools could handle.
Two years later, that prediction has materialized. Artificial intelligence is reshaping the practices of both open-source intelligence (OSINT) and journalism, as well as the reality we consume online. Early in 2025, I spotted an eye-opening conversation on Facebook. Someone had shared a video of a whale being cleaned by divers with the caption "There's still goodness in the world". Except this goodness was artificial. I have to admit that although it was clear to my eye that the video was fake, it was still quite realistic. The only thing giving it away as AI was that the debris falling off the whale seemed to disappear into nothing instead of drifting down the animal's back. What I found exceptionally alarming in this example was that people seemed to want to believe it was true, and fighting that sort of belief is difficult.

Since then, we have seen video after video of newsworthy events that are completely or partially artificial: videos of Israel bombing Syria or of the Texas floods. Now, every time I open any given social media app, I'm bombarded with AI-generated videos and pictures. It's so common and normal that even the White House publishes AI content regularly and universities across the globe battle AI-generated papers. In Finland, AI is so commonplace that a well-established photography company thought it acceptable to edit children's faces in school photos to the point where freckles disappeared. The parents, of course, didn't think it was acceptable, because they spotted the AI.
The amount and scale of AI content might be alarming, but what has kept me sleeping well at night is that the human eye has been quite good at spotting it. Even if we can't put a finger on exactly what the problem is, something seems a little off when watching AI-generated visual content. A former colleague of mine shared an AI-generated deepfake video of himself, and it was so good that, in his words, only his family and friends spotted that something wasn't quite right. Still, spotting AI-generated visuals is getting harder by the day, and the time and effort it takes keep increasing.
What I find most problematic is that, especially on social media, we rarely take the time to really look at a picture or a video. It's easy to be critical when someone asks you: is this AI? But most of us are not hardwired to be critical all the time. Critical thinking requires energy, and our brains do almost anything to conserve it. And the general public receives almost no information, let alone training, in how to recognise AI-generated content. We still believe what we see, even though our reality online is being altered faster than ever before and in ways we can't quite yet fathom.
To most of us, AI-generated content is just harmless entertainment. But it can be, is and will be used to shape our worldviews. AI is making it easier and faster to generate fake news, and simultaneously it's getting harder to tell what's AI and what's real. We might be light years away from the singularity, or even from general AI, but we are approaching the moment when it becomes impossible to believe what we see. According to Derek Bowler, the Head of Eurovision Social Newswire at the European Broadcasting Union, AI-generated content could be visually undetectable as early as 2026. This means that by the time you are reading this article, we might already have passed that point. And when that happens, using AI to generate audiovisual propaganda, disinformation and content for criminal purposes, or even for information warfare, becomes tempting to say the least. That is when we may be seeing fake news like never before without even knowing it.
According to Mr. Bowler, there are three main reasons why people currently make and share AI-generated videos of news events. Firstly, they simply want to be part of a conversation. This can lead to harmful misinformation, but it's not done on purpose or maliciously. Secondly, in social media, engagement is currency. Some of these AI videos get loads of attention, and especially on X and TikTok, people make money this way. The third reason is to purposefully generate false information to mislead, or even to cause harm or disruption. AI is now also used in OSINT, the method we use to fight fake news, investigate events and determine facts based on content published online, mainly in social media. Fact-checkers in the Nordic countries are integrating AI tools into their workflows to support tasks such as monitoring, data analysis and translation, or simply to speed up the work through automation. However, AI is not the best way to detect AI. According to Mr. Bowler, every time an AI detection tool is updated or a new one is created, the generation technology has already taken a leap forward, so the content the tool is supposed to flag stays one step ahead of the detection.
This is where traditional OSINT methods and tools become helpful. Of course, AI can't be detected with a reverse image search, but techniques like geolocation and satellite imagery can help in some cases. The most helpful tool is the person, or preferably the people, doing the research. Human judgement, the ability to doubt and contextual interpretation are often the best, and sometimes the only, weapons against disinformation and misinformation, even when it comes to AI-generated content. Looking for context and on-site reports, and comparing information from reliable sources, should help create the necessary doubt. Combined with knowledge of how and when misleading content is created, and of what signs to look for both within and outside the content, this gives us professionals equipped to assess and handle complex situations. The problem is that many newsrooms have put their efforts into building and using AI tools to produce and scale journalism instead of educating and training their people to do research.
“AI is scaring newsrooms. In general, there’s a lot of newsrooms, particularly in public service media, who are getting up to speed with AI as a tool to use for output and for workflows. The biggest problem is that many newsrooms have largely ignored the field of verification and they’re waiting for a tool to come along that tells you everything you need to know. That tool will never exist. From that perspective, it’s leaving newsrooms in a position where they may not be able to actually deal with the levels of misinformation and disinformation that’s out there”, Derek Bowler says.
Alongside the increasing quality, another of my concerns is the increasing amount of AI content. Over the years we have seen state-backed entities create vast amounts of false information and content, especially around conflicts like the one in Ukraine. With the help of AI, that amount can and probably will skyrocket. Especially in state-controlled or highly polarised media environments, this can lead to false information dominating or overtaking a whole society, with the false narrative so overpowering that there's no room or possibility for fact-checking. This is concerning from a European perspective, because effective collaboration requires us to have a shared reality with our allies. Upholding our own democracies, or EU-level decision-making on complex and emotional matters, requires discussion based on facts, without alternative facts taking over. Scientific research, institutional journalism and traditional media outlets have long given the citizens and decision-makers of democracies a shared factual basis for reality. But we no longer get our news only from traditional sources that base their stories on research. In 2025, 20 % of people in the UK, 12 % in Denmark and 34 % in the US said social media was their main source of news. And as I already pointed out, you can't really use social media without being exposed to AI-generated content.
Misleading or false AI content is not the only issue we are facing when it comes to news. According to the World Economic Forum's Global Risks Report 2025, the use of AI chatbots as a news source or as a search engine is emerging, with 7 % of people getting news this way on a weekly basis; among people under 25, the share rises to 15 %. At the same time, according to the Reuters Digital News Report 2025, more than half the public across the markets the report covers say they are concerned about what is real and what is fake when it comes to online news. According to the Reuters survey, most people check the validity of content through an outlet they consider trustworthy. One might assume that a trustworthy outlet means institutional journalism, government sources or research, and for some it does, but many consider search engines like Google to be just that, and some use LLMs like ChatGPT as if they were search engines. 13 % said they don't know how to verify content at all.
In a digital world controlled by algorithms that are fuelled by AI, upholding democracy becomes a challenge. The most effective algorithms already manipulate our worldviews and have eroded our ability to make informed decisions; we have seen this happen on a large scale during some elections. Social media can quickly suck a person into a realm of dis- and misinformation without them even noticing: our emotions are easily manipulated, and we believe what we want to believe if we are not vigilant. Combined with our tendency to accept simple explanations and latch on to the narrative that is repeated to us over and over again, we are vulnerable to the massive amounts of AI-generated information the algorithms push at us. I was involved in debunking fake news and fact-checking during the COVID-19 pandemic, and I saw first-hand what the disruption of our realities and facts can mean for governments, societies, communities and individuals. For our democracies to work effectively, the majority needs to be able to separate truth from fiction.
Moving forward, we need to educate our decision-makers, journalists and the public in media literacy in the age of AI. We failed to do that when social media took over, and based on my experience, we can't afford to fail again. Disinformation, misinformation, propaganda and hybrid warfare affect us all. As we witness the disruption of information and power, and increasing polarisation on a global scale, all of us need basic knowledge of how to verify content.
Public trust has always been the basis of the news business, but I'd say it is becoming even more important, so newsrooms should not take implementing AI lightly. The use of AI in newsrooms should be well justified and as transparent as possible. My question from day one has been: how can we write, enhance and illustrate news with AI and still claim that AI-generated content made by content creators rather than journalists is a bad thing? As Derek Bowler said, trust should not be placed in tools alone. And last but not least, the democracies of the Baltic region and of Europe need to work together, because we face the same security challenges. AI-fuelled algorithms that amplify AI-generated content based on AI-generated information are a security threat. We need to be prepared for AI-generated content when the next elections come, no matter where they are held. We haven't been prepared before, and now the challenges we face are greater than ever and will continue to grow.
Sarita Blomqvist
Ecosystem Facilitator
M.A. Journalism
Dimecc Oy
Finland
sarita.blomqvist@gmail.com

List of sources
Outsourcing, Augmenting, or Complicating: The Dynamics of AI in Fact-Checking Practices in the Nordics: https://journals.sagepub.com/doi/10.1177/27523543241288846
Reuters Digital News Report 2025: https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2025
World Economic Forum Global Risks Report 2025: https://www.weforum.org/publications/global-risks-report-2025/
Interview with Derek Bowler, Head of Eurovision Social Newswire at the European Broadcasting Union, 2025