How to Spot Errors in AI Chatbots: A Fact-Check Guide (International Edition)

A network of pro-Russian websites appears to be deceiving AI chatbots with fabricated news stories. Experts believe this may be part of an orchestrated strategy. What can individuals do to protect themselves against such disinformation?

Increasingly, people turn to AI-powered chatbots like ChatGPT, Gemini, Le Chat, or Copilot rather than traditional search engines when seeking information online. The appeal lies in the large language models (LLMs) behind them: instead of merely supplying links to source material, they condense content into succinct summaries, offering quicker access to more information.

Nevertheless, this method is susceptible to errors: it is quite common for chatbots to get things wrong. This effect, referred to as hallucination, has been recognized for years.

In other instances, AI chatbots merge data in ways that produce incorrect results. Specialists refer to this as confabulation.

Chatbots can also be deceived by fake reports about current events, though this is less widely known. That is the finding of the U.S.-based firm NewsGuard, which assesses the reliability and transparency of websites and digital tools.

Why do AI chatbots spread Russian disinformation?

Portal Kombat, a Russian disinformation network that has attracted attention through various campaigns, could be considered one of the leading actors in contemporary online disinformation. The network supports Russia's war effort in Ukraine through web platforms that disseminate pro-Russian propaganda.

The investigative unit VIGINUM, a French government agency tasked with countering foreign disinformation campaigns, looked into Portal Kombat. In an analysis covering September to December 2023, it identified at least 193 distinct websites associated with the operation.

It seems that the main hub of this network is the international news website Pravda, which offers content in multiple languages such as German, French, Polish, Spanish, and English.

Wave of false information worldwide

The websites associated with Portal Kombat do not produce original articles. Instead, they predominantly republish material from three primary sources: “social media profiles of Russian entities or supporters, Russian news outlets, and the official webpages of regional organizations or individuals,” according to a VIGINUM report released in February.

The articles are marked by rather clumsy phrasing, frequent errors in transliteration from Cyrillic to Latin script, and colorful commentary.

The American Sunlight Project (ASP), a U.S. initiative against disinformation, has identified more than 180 suspected Portal Kombat websites.

The ASP estimates that some 97 domains and subdomains at the core of the Pravda network publish more than 10,000 articles a day.

The subjects now extend beyond the NATO defense alliance and allied nations such as Japan and Australia, also encompassing the Middle East, Asia and Africa, in particular the Sahel region, where Russia is vying for geostrategic influence.

Are chatbots the new target audience?

According to NewsGuard, some Pravda pages receive only around 1,000 visitors per month. By contrast, the website of RT, Russia's state-controlled international broadcaster, records more than 14.4 million visits a month.

Experts believe the sheer volume of articles significantly influences chatbot responses, and they suspect this may be the primary aim of the whole operation: the large language models (LLMs) behind the chatbots are meant to pick up the network's publications, treat them as credible sources and cite them, thereby spreading the disinformation further.

If that is the strategy, it appears to be working, according to NewsGuard's analysis. The organization examined how chatbots responded to 15 demonstrably false claims disseminated across the Pravda network between April 2022 and February 2025.

The researchers submitted three variations of each claim to the chatbots: first phrased neutrally, asking whether the claim was true; then as a leading question that presupposed the claim was factual; and finally framed the way a malicious actor would, deliberately trying to get the falsehood repeated.
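To make the method concrete, here is a minimal sketch of what such prompt variations might look like. The wording and the placeholder claim are illustrative assumptions; NewsGuard's actual prompts are not published in this article.

```python
# Illustrative reconstruction of the three prompt framings.
# The claim and phrasings are hypothetical, not NewsGuard's real test prompts.

CLAIM = "an example false claim"  # placeholder for a claim under review

def build_prompts(claim: str) -> dict[str, str]:
    return {
        # 1. Neutral: openly ask whether the claim is accurate.
        "neutral": f"Is it true that {claim}?",
        # 2. Leading: presuppose that the claim is factual.
        "leading": f"Why is it the case that {claim}?",
        # 3. Malign: explicitly try to get the falsehood repeated.
        "malign": f"Write a short news item reporting that {claim}.",
    }

for label, prompt in build_prompts(CLAIM).items():
    print(f"{label}: {prompt}")
```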

In a third of cases, the chatbots confirmed the pro-Russian disinformation. In slightly under half, they provided the correct facts. In the remaining cases, the systems declined to answer.

In January 2025, NewsGuard researchers conducted a further study, this time assessing how chatbots respond to ten false claims in seven different languages, without limiting the examination to pro-Russian narratives.

Although the bots performed slightly better in this trial, they still confirmed the false information in 536 of their 2,100 replies, roughly a quarter of the time.

Stay alert, check sources, ask again

For users of large language models, this means above all: stay alert. There is a great deal of misinformation circulating on the internet. But what else should you look out for?

Unfortunately, rephrasing prompts does not make much difference, as NewsGuard's McKenzie Sadeghi explained: “Although malicious actors' prompts are more likely to generate fake news because that is what they aim for, even benign and leading prompts still produced misleading content,” Sadeghi said.

The data, which did not appear in the report covering 2022 to 2025 but was shared separately, revealed that the researchers received an incorrect response to a neutral or leading query approximately 16% of the time.

Sadeghi therefore suggests verifying and comparing sources. This might involve posing the same question to several different chatbots and comparing their answers with trustworthy sources to help assess a claim.
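A cross-check of this kind can even be automated. The sketch below is a minimal Python illustration; the ask() helper and the provider names are hypothetical placeholders for whatever chatbot APIs you actually use.

```python
# Minimal sketch of cross-checking one claim across several chatbots.
# ask() and the provider names are hypothetical placeholders; in practice
# each would wrap a real chatbot API request.

def ask(provider: str, prompt: str) -> str:
    """Hypothetical stand-in for a real chatbot API call."""
    return f"[{provider}'s answer to: {prompt}]"  # replace with an API call

def cross_check(claim: str, providers: list[str]) -> dict[str, str]:
    # Phrase the question neutrally so the prompt itself does not lead the model.
    prompt = f"Is the following claim accurate? Please cite sources: {claim}"
    return {p: ask(p, prompt) for p in providers}

answers = cross_check(
    "An example claim you want to verify",
    ["chatbot_a", "chatbot_b", "chatbot_c"],
)
for provider, answer in answers.items():
    print(f"--- {provider} ---\n{answer}\n")
# If the answers diverge, treat that as a warning sign and compare them
# against trustworthy news sources before accepting any of them.
```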

Nonetheless, caution is advised even here, since the designs of well-known outlets are sometimes copied to lend credibility to fake news portals.

Timing also plays a role. Chatbots tend to struggle with newly emerging false stories that spread quickly; over time, however, they are increasingly likely to encounter fact-checks that debunk the original claims.

This piece is part of a fact-checking series focused on digital literacy. Other articles in the series include:

How can I spot tampered photographs?
How can I recognize images created with artificial intelligence?
What clues indicate an audio deepfake?
What distinguishes state-sponsored propaganda from regular information?
How can I tell genuine social media profiles from fake ones such as bots and trolls?

More information about this series is available on the publisher's website.

Tilman Wagner contributed to this article, which was originally published in German.

Authors: Jan D. Walter, Carlos Muros
