
Russia Manipulates Artificial Intelligence: A New Front in the Kremlin’s Information War

The Kremlin is no longer content with flooding the internet with fake news. Russia’s disinformation factories have begun to “teach” artificial intelligence their preferred narratives. According to a report by EUvsDisinfo, millions of texts created by pro-Russian websites are finding their way into the data that large language models such as ChatGPT learn from, shaping the answers these systems generate.

In the digital age, disinformation is no longer confined to social media. It has evolved into a fully fledged instrument of information warfare, an area in which Russia has long had vast experience. But the rise of artificial intelligence (AI) has opened up entirely new possibilities for the Kremlin to influence how both people and machines perceive reality. This is the focus of a recent EUvsDisinfo report titled “Large language models: the new battlefield of Russian information warfare”.

Since the Cold War, Russian foreign information manipulation and interference (FIMI) campaigns have remained largely unchanged. Their goal has been to undermine trust in democratic institutions and sow chaos. Now, Moscow appears to have gone a step further: it is not only targeting audiences but deliberately “training” AI systems to reproduce Russian narratives.

“LLM Grooming” – The Kremlin’s New Technique

Experts refer to this process as “LLM grooming” (from “large language models”). It involves flooding the internet with millions of low-quality or manipulated articles, which AI systems such as ChatGPT then absorb, whether as training data or as web sources consulted while generating responses. As a result, false Russian narratives begin to “leak” into content that users perceive as neutral and objective.
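How repetition at scale can tilt a model is easiest to see in miniature. The Python sketch below is a deliberately toy illustration, not a depiction of how commercial chatbots are actually trained: it counts word-to-word transitions in a tiny corpus and shows that flooding that corpus with copies of a single planted sentence changes the continuation the model considers most likely. All sentences and counts are invented for the demonstration.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count word-to-next-word transitions across all sentences."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def most_likely_next(model, word):
    """Return the most frequent continuation seen in training, if any."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Small "organic" corpus: the planted claim appears nowhere in it.
organic = [
    "the platform remains available in ukraine",
    "the platform gained new users this year",
]

# The same corpus after a content farm floods it with one fabricated
# sentence repeated many times, mimicking mass republication.
poisoned = organic + ["the platform was banned in ukraine"] * 50

print(most_likely_next(train_bigram(organic), "was"))    # None
print(most_likely_next(train_bigram(poisoned), "was"))   # banned
```

Real language models are vastly more complex, but the lever is the same one the report describes: whoever contributes enough of the text a model learns from can tilt its statistics.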

In 2024, the French government agency Viginum uncovered an operation known as “Portal Kombat”, also referred to as the “Pravda Network”: an extensive web of multilingual websites publishing manipulated material sourced from Russian state media and pro-Kremlin influencers.

Its targets include not only Ukraine, Poland and Germany, but also France, the United States, the United Kingdom and several African nations. The sheer volume of these publications means that artificial intelligence naturally begins to include them in its responses.

Investigators from NewsGuard’s Reality Check team found that the “Pravda Network” had spread, among other falsehoods, the claim that Ukrainian President Volodymyr Zelenskyy had banned Donald Trump’s Truth Social platform.

Out of ten AI chatbots tested, six repeated this false claim, which analysts say is evidence that their models had been “fed” content from Russian-linked sources. According to NewsGuard’s audits, the share of false or manipulative content in AI-generated responses rose from 18% in 2024 to 35% in 2025.

Propaganda That Teaches Machines

The phenomenon of LLM grooming suggests that disinformation is no longer just a human problem; it has become a systemic one. The Kremlin is deliberately introducing false data into the global information ecosystem in order to “convince” algorithms, over time, that the Russian narrative represents the truth.

As Halyna Padalko from the Digital Policy Hub observes, Russia is not merely spreading lies; it is normalising them, gradually embedding them into supposedly neutral discourse.

Even credible platforms such as Wikipedia have, at times, unknowingly cited sources connected to the Pravda Network, demonstrating how deeply such disinformation can penetrate spaces that were previously considered relatively resistant to manipulation.
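A platform in that position can screen outbound citations against lists of domains attributed to such networks. The sketch below is illustrative only: the “.example” hostnames are placeholders rather than real network addresses, and actual blocklists maintained by researchers are far larger.

```python
from urllib.parse import urlparse

# Hypothetical blocklist; real trackers catalogue hundreds of
# mirror domains attributed to the network.
FLAGGED_DOMAINS = {"news-pravda.example", "pravda-fr.example"}

def flag_citations(urls):
    """Return the citations whose host matches a flagged domain."""
    flagged = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in FLAGGED_DOMAINS):
            flagged.append(url)
    return flagged

citations = [
    "https://example.org/research-report",
    "https://fr.news-pravda.example/article-123",
]
print(flag_citations(citations))  # only the second URL is flagged
```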

A New Threat to the Credibility of Information

Artificial intelligence is increasingly replacing search engines, journalists and experts. Many users prefer to ask ChatGPT rather than analyse sources themselves. Influencing the content that goes into language models is therefore becoming one of the most dangerous propaganda tools of the 21st century.

Researchers at Clemson University warn that disinformation campaigns such as “Storm-1516” continue the work of the notorious Russian Internet Research Agency, the group that interfered in the 2016 US presidential election. Today, similar mechanisms are being used to “poison” artificial intelligence systems so that they generate responses aligned with Moscow’s interests.

The scale and automation of these operations make them particularly difficult to detect. Unlike traditional “troll farms”, this effort is not about posting individual messages but about shaping entire language models over time.
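What still gives such networks away is mass republication of near-identical text across many domains. The sketch below shows the core idea in miniature, comparing articles by the overlap of their word shingles; production systems use scalable variants such as MinHash and locality-sensitive hashing, which this example does not implement.

```python
def shingles(text, k=3):
    """Fingerprint an article as the set of its k-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Set overlap from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

article_a = "the president has banned the platform across the country"
article_b = "the president has banned the platform across the entire country"
unrelated = "local elections were held on sunday without major incidents"

print(jaccard(shingles(article_a), shingles(article_b)))  # high: near-duplicate
print(jaccard(shingles(article_a), shingles(unrelated)))  # 0.0
```

Two sites that keep scoring as near-duplicates of each other, article after article, are far more likely to be parts of one coordinated network than independent outlets.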

Who Controls the Truth?  

“Whoever becomes the leader in artificial intelligence will rule the world,” Vladimir Putin said in 2017, as cited by EUvsDisinfo.

It is now clear that these words were more than mere bravado. Today, Russia is waging a battle for information dominance, though analysts point out that it relies largely on foreign technologies to do so.

Its ultimate goal is to shape not only what people think, but how the machines that inform them think as well.

Source: “Large language models: the new battlefield of Russian information warfare”, EUvsDisinfo.
