The reliability of artificial intelligence is more in question than ever. Over the past year, study results have pointed to a worrying rise in the false information these systems spread. Their inability to discern credible sources raises critical issues for society: unable to filter effectively, these tools risk propagating misleading narratives. The misinformation generated by chatbots raises serious questions about their real impact on public opinion. This article examines the erosion of that reliability and its far-reaching consequences.
Growing misinformation from generative AIs
A report by NewsGuard reveals that the rate of false information disseminated by artificial intelligence tools has nearly doubled in a year. Chatbots such as ChatGPT, Gemini, and Mistral perform poorly at distinguishing truthful information from fake news. This alarming finding underscores how much they struggle to identify credible sources in the current information ecosystem.
A revealing study
NewsGuard audited the ten leading AI tools in August 2025 and found that they fail nearly twice as often as a year earlier at distinguishing facts from false narratives. Despite significant model updates, their reliability is declining, particularly on sensitive current-affairs topics such as conflicts and elections. The failure is exacerbated by the integration of web search: an avenue to information that turns into a trap.
Comparison of AI models’ performance
NewsGuard's monthly barometer reveals notable disparities in reliability across AI models. Claude and Gemini, for example, repeat false information in 10% and 16.67% of cases respectively. In contrast, Perplexity, which once excelled, now errs in 46.67% of cases. Users have noticed this decline, raising serious concerns about the tool's ability to provide accurate information.
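For readers tracking the arithmetic: rates like these are consistent with a fixed-size audit, where each model's score is simply its failures divided by the number of prompts. The minimal sketch below assumes a 30-prompt audit, an assumption chosen because 5/30 ≈ 16.67% and 14/30 ≈ 46.67%; the failure counts are back-calculated for illustration, not taken from the report.

```python
# Illustrative only: the 30-prompt audit size and per-model failure
# counts are assumptions back-calculated from the percentages above,
# not figures taken from the NewsGuard report.
AUDIT_SIZE = 30
failures = {"Claude": 3, "Gemini": 5, "Perplexity": 14}

for model, n_failed in failures.items():
    rate = 100 * n_failed / AUDIT_SIZE
    print(f"{model}: {n_failed}/{AUDIT_SIZE} failed prompts -> {rate:.2f}%")
# Claude: 3/30 failed prompts -> 10.00%
# Gemini: 5/30 failed prompts -> 16.67%
# Perplexity: 14/30 failed prompts -> 46.67%
```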
The impacts of internet access
Before gaining internet access, chatbots often declined to answer current-affairs questions, with a non-response rate of 31%. That rate has since fallen to 0%, and their rate of debunking erroneous narratives has risen from 51% to 65%. Since every response is either a debunk, a repetition of the false claim, or a refusal to answer, the share of responses repeating falsehoods has mechanically climbed from 18% to 35%. Web access thus comes at a cost: faulty source selection means answers are sometimes drawn from dubious outlets or outright disinformation campaigns.
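Because each audited response is classified into exactly one of those three buckets, the rates must sum to 100%; the short sketch below (bucket names are illustrative) shows how the 18% and 35% repetition rates follow directly from the figures cited above.

```python
# Each audited response falls into exactly one bucket, so the three
# rates sum to 100%. Figures are those cited above; bucket names are
# illustrative.
before = {"debunk": 51, "repeat_false_claim": 18, "no_answer": 31}
after = {"debunk": 65, "repeat_false_claim": 35, "no_answer": 0}

for label, dist in (("before web search", before), ("after web search", after)):
    assert sum(dist.values()) == 100
    print(f"{label}: {dist}")
```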
Exploitation by malicious actors
Malicious actors exploit this weakness by flooding the web with misleading content. NewsGuard reports that AI models are regularly taken in by sites created abroad that pose as local media. Chatbots then repeat these fabricated narratives, filling the informational void around breaking news with biased or false information.
The need for deep reflection
Jensen Huang, CEO of Nvidia, has stressed the importance of reaching a stage where the answers provided by AIs inspire trust. Yet nearly a year after his statement, the results suggest progress has gone the other way, as misinformation appears to be amplifying. The evolution of these AI models demands critical scrutiny to ensure the accuracy of the information they disseminate.
Frequently asked questions about misinformation and AI errors
What are the main causes of misinformation relayed by AIs?
AIs struggle to distinguish reliable sources from dubious ones, which leads them to spread false information. Their ability to identify verifiable facts has deteriorated, leaving them vulnerable to misinformation campaigns conducted online.
How do updates to AI models affect their reliability?
Regular updates are meant to improve model performance, yet paradoxically they have not strengthened the models' ability to detect false information, raising concerns about the trajectory of their reliability.
Is it common for chatbots to relay inaccurate information?
Yes. A recent study showed that generative AIs relay erroneous information approximately 35% of the time, up from 18% a year earlier, a near-doubling that indicates a significant degradation of their reliability.
Which AIs are most likely to relay false information?
Tools such as ChatGPT and Mistral AI show some of the highest rates of false-information repetition. By contrast, models such as Claude and Gemini fare better at detecting inaccuracies.
How can users spot reliable information provided by an AI?
Users should systematically verify the sources behind AI-provided information, cross-checking them against established media outlets and reliable reporting, to avoid falling into the trap of false information.
Can AIs still debunk misleading narratives?
Yes. Despite the rise in errors, some AIs have improved at refuting false information, with the debunk rate climbing from 51% to 65% thanks to the integration of online search.
What are the implications of AI-generated misinformation for society?
The spread of false information by AIs can sow confusion among the public and sway important decisions, particularly around events such as elections or international crises.
How can businesses protect themselves against AI-generated misinformation?
Businesses should train their employees to assess the reliability of information sources and establish protocols for the use of generative AIs, so as to avoid inadvertently relaying incorrect information.