The rapid advancement of artificial intelligence chatbots is exposing insidious political issues. A *worrisome phenomenon* has emerged: CCP propaganda infiltrating the digital sphere. These language models, though seemingly neutral, can convey biased narratives shaped by state censorship.
The disinformation methods employed by the Chinese Communist Party are contaminating the global data market, and the manipulation of *public opinion* through algorithms raises questions about the integrity of information. Scrutiny is especially acute around chatbot behavior on sensitive subjects such as *freedom of speech*, human rights, and the repression of minorities.
Chatbots and CCP Disinformation
Artificial intelligence (AI) chatbots from tech giants like OpenAI, Microsoft, and Google have faced criticism for their unintentional propagation of propaganda from the Chinese Communist Party (CCP). According to a report from the American Security Project (ASP), these models sometimes reproduce responses that align with the political narratives promoted by the People’s Republic of China.
Analysis of Leading Chatbots
An investigation examined five of the most influential chatbots built on large language models (LLMs): ChatGPT, Copilot, Gemini, DeepSeek, and Grok. Researchers asked these AIs questions on sensitive topics, in both English and simplified Chinese. All five produced results revealing some degree of bias aligned with CCP positions.
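To make the methodology concrete, here is a minimal sketch of how such a cross-language comparison might be scripted. It assumes the OpenAI Python client; the model name and prompts are illustrative placeholders, not those used by the ASP researchers.

```python
# Minimal sketch: ask the same question in English and simplified Chinese,
# then compare the answers side by side. Assumes `pip install openai` and
# an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "en": "What is the origin of the COVID-19 pandemic?",
    "zh": "新冠疫情的起源是什么？",  # the same question in simplified Chinese
}

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single question and return the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for lang, prompt in PROMPTS.items():
        print(f"--- {lang} ---")
        print(ask(prompt))
```

The same harness can be pointed at each vendor's API in turn, making divergences between the English-language and Chinese-language answers easy to place side by side.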
Results in English and Chinese
When questioned in English about the origin of the COVID-19 pandemic, models like ChatGPT and Gemini described the widely accepted theory of cross-species transmission in an animal market in Wuhan. They also mentioned the possibility of an accidental laboratory leak. In contrast, chatbots such as DeepSeek and Copilot provided vaguer responses, omitting crucial elements.
When asked in Chinese, however, the chatbots radically changed their narratives. All characterized the origin of the pandemic as an “unsolved mystery” or a “natural spillover event.” Gemini even added that positive COVID-19 test results had been found in the United States and France before Wuhan.
Censorship on Hong Kong and Repression of Civil Rights
The responses of chatbots concerning freedoms in Hong Kong also differ by language. In English, most models mentioned a decline in civil rights. Gemini noted that Hong Kong’s political freedoms had been “seriously restricted,” while Copilot observed that the region’s status as a “partially free” territory had recently been eroded.
The same questions asked in Chinese did not elicit the same candor. Civil rights violations were minimized and presented as the opinions of “certain individuals.” Copilot even veered into unsolicited travel advice, sidestepping the substance of the question.
Responses on the Tiananmen Massacre
A particularly sensitive topic, the Tiananmen Square massacre, revealed a similar pattern. When asked in English, all models except DeepSeek mentioned the “massacre.” The formulations were often softened, however, describing the event as a “crackdown.” Only Grok stated outright that the army had “killed unarmed civilians.” In Chinese, the event was softened even further and described as “the June 4 incident,” terminology consistent with the CCP’s.
Disinformation and Impartiality Issues
The results of this study raise concerns about biases inherent in AI models. The report warns that a model’s outputs are shaped by the data on which it is trained: infiltration by biased information could undermine democratic institutions and compromise U.S. national security.
Companies like Microsoft, which operate in both the United States and China, must contend with strict laws requiring chatbots to adhere to “core socialist values.” As a result, the company’s censorship tools are sometimes even stricter than those deployed within China itself.
Urgency of Access to Reliable Data
In the face of rising disinformation propagated by the CCP, access to *reliable* and verifiable training data has become an urgent necessity. If the current trend persists, with propaganda proliferating while access to factual information shrinks, it will become increasingly difficult to ensure the accuracy of AI chatbot responses. The investigation’s authors warn of the potentially catastrophic consequences of this situation.
For more information on the impact of language models and their biases, consult the article on the biases of large language models. Related developments can also be explored at the AI & Big Data Expo, taking place in Amsterdam, California, and London.
FAQ on AI Chatbots and CCP Propaganda
What are the main concerns regarding AI chatbots and CCP propaganda?
The major concerns include the dissemination of disinformation aligned with CCP political narratives, language-dependent bias in responses, and the influence of censorship on the training data of AI models.
How does CCP censorship influence the responses of AI chatbots?
The CCP exercises rigorous censorship over information, which shapes the data used to train AI models. As a result, responses can reflect the regime’s values and narratives, especially when these chatbots are queried in Chinese.
Why do chatbots show different biases depending on the language used to query them?
Biases manifest because chatbots are trained on datasets where Chinese content may be heavily influenced by CCP propaganda and censorship, while English content offers a more critical and diverse perspective.
What recommendations exist to ensure that AI chatbots remain impartial?
It is advised to improve access to verifiable and reliable training data, and to continuously monitor generated outputs to prevent the spread of disinformation and ensure response accuracy; one simple form such monitoring could take is sketched below.
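As an illustration, the sketch below flags responses whose key factual terms diverge across query languages. The topic, keyword lists, and threshold are hypothetical placeholders, not an established auditing methodology.

```python
# Hypothetical monitoring check: flag query languages whose responses
# omit key factual terms that an auditor expects to see for a topic.
KEY_TERMS = {
    "tiananmen": {
        "en": ["massacre", "killed", "crackdown"],
        "zh": ["屠杀", "镇压"],  # rough Chinese equivalents (placeholders)
    },
}

def coverage(text: str, terms: list[str]) -> float:
    """Fraction of expected key terms that appear in a response."""
    lowered = text.lower()
    if not terms:
        return 0.0
    return sum(term.lower() in lowered for term in terms) / len(terms)

def flag_divergent(topic: str, responses: dict[str, str],
                   threshold: float = 0.5) -> list[str]:
    """Return the languages whose term coverage falls below the threshold."""
    expected = KEY_TERMS[topic]
    return [lang for lang, text in responses.items()
            if coverage(text, expected.get(lang, [])) < threshold]

# Example: an English answer naming the crackdown passes; a Chinese answer
# that euphemizes the event is flagged for human review.
responses = {
    "en": "The army carried out a violent crackdown, and many were killed.",
    "zh": "这是六四事件。",  # "This is the June 4 incident."
}
print(flag_divergent("tiananmen", responses))  # -> ['zh']
```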
What examples illustrate the divergences in responses regarding sensitive subjects?
When asked about the origin of the COVID-19 pandemic, models queried in English presented the predominant scientific theory, while in Chinese the responses redirected the topic toward less contested framings, such as a “natural spillover event.”
How can users recognize biases in chatbot responses?
Users should pay attention to how responses frame a given question, and compare the level of detail and the interpretations offered for the same event across query languages.
What impacts can AI chatbot disinformation have on national security?
Disinformation aligned with the interests of an adversarial state can weaken democratic institutions and influence political decision-making, thus representing a significant risk to national security.
What is the response of AI developers to these concerns?
Developers are encouraged to be more vigilant in cleaning training data, minimizing external influences, and promoting access to quality data to reduce the risk of disinformation; a simple form of provenance filtering is sketched below.
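As a toy illustration, a data-cleaning pipeline might drop documents from sources flagged as carrying state propaganda before training. The domain list and corpus here are hypothetical placeholders, not an actual vetting methodology.

```python
# Hypothetical pre-training filter: drop documents whose source domain
# appears on a (placeholder) list of outlets flagged for state propaganda.
from urllib.parse import urlparse

FLAGGED_DOMAINS = {"state-media.example.cn"}  # placeholder, not a real list

def keep_document(doc: dict) -> bool:
    """Return True if the document's source domain is not flagged."""
    return urlparse(doc["url"]).netloc not in FLAGGED_DOMAINS

corpus = [
    {"url": "https://news.example.org/a", "text": "..."},
    {"url": "https://state-media.example.cn/b", "text": "..."},
]
cleaned = [doc for doc in corpus if keep_document(doc)]
print(len(cleaned))  # -> 1
```

Provenance filtering of this kind is only a first pass; it does not catch biased content republished through unflagged channels.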
Why are training datasets crucial for chatbot performance?
The quality and objectivity of the datasets used to train chatbots directly determine their ability to provide accurate and balanced responses, which is essential to avoid biases and maintain information integrity.