Generative AIs are opening a new frontier for manipulation. Recent findings about influence techniques reveal a troubling reality: *manipulating algorithms* is no longer a fantasy but a strategy already deployed by experts. *The boundary between authenticity and manipulation* is eroding rapidly, leaving users in a gray zone where trust becomes a scarce commodity. Through three rigorous tests, this article examines the influence mechanisms that shape the responses of artificial intelligences. The impact on our choices and our perception of information deserves careful analysis.
Manipulation of generative AIs: GEO at work
The term GEO (Generative Engine Optimization) is emerging as a new discipline in digital marketing, covering techniques for optimizing how brands appear in the answers of conversational artificial intelligences. SEO specialists are adapting their strategies to reach a growing audience that now prefers querying tools like ChatGPT to running traditional Google searches.
With the rise of these technologies, it is becoming essential for brands to be cited in these generated answers. The results provided by AIs like ChatGPT can directly shape users' purchasing decisions.
First experiment: directly influencing through prompts
An experiment conducted with marketing students tested whether ChatGPT’s responses could be steered through data injection. The first step was to collect responses to the question: “What is the best hairbrush in France?” At that stage, the brand “La Bonne Brosse” appeared in only 30% of the results.
In the second phase, a complete marketing analysis of the brand was submitted to ChatGPT. After this data injection, a second group of students asked the same initial question, and the trend reversed: “La Bonne Brosse” was now cited in 80% of the responses. The experiment showed that the algorithms’ answers can be shaped by the information users feed them.
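The measurement step is straightforward to reproduce. Below is a minimal sketch, assuming the official `openai` Python client (v1+) and an `OPENAI_API_KEY` in the environment; the model name, sample size, and substring matching are illustrative choices, not the students’ actual protocol.

```python
# Minimal sketch: estimate how often a brand is mentioned in responses
# to a fixed question. Assumes the `openai` package (v1+) and an
# OPENAI_API_KEY in the environment. Model and sample size are
# illustrative, not the protocol used in the experiment.
from openai import OpenAI

client = OpenAI()

QUESTION = "What is the best hairbrush in France?"
BRAND = "La Bonne Brosse"
N_RUNS = 20  # number of independent samples; illustrative

mentions = 0
for _ in range(N_RUNS):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,      # keep sampling variability between runs
    )
    answer = response.choices[0].message.content or ""
    if BRAND.lower() in answer.lower():
        mentions += 1

print(f"{BRAND} mentioned in {mentions}/{N_RUNS} responses "
      f"({100 * mentions / N_RUNS:.0f}%)")
```

Running such a script before and after the data injection gives the before/after mention rates the experiment reports.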
Second experiment: influencing through external sites
Manipulation practices are growing more sophisticated with the use of PBNs (Private Blog Networks). SEO expert Alan Cladx argues that AIs can be led to treat external sites as reliable sources: by publishing large volumes of content across a PBN, it is possible to steer the responses of generative AIs.
For instance, Cladx managed to get AIs to present equine water sports as an Olympic discipline through repeated publications on a dedicated domain. The external links led the AI to draw on this biased information, significantly influencing its responses.
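This kind of drift can be audited from the outside by probing the model for the fabricated claim. Here is a minimal sketch along those lines, again assuming the `openai` client; the probe question and the keyword heuristic are illustrative assumptions, not a method described by Cladx.

```python
# Sketch: probe whether a fabricated claim has leaked into a model's
# answers. The probe question and keyword heuristic are illustrative.
from openai import OpenAI

client = OpenAI()

PROBE = "Is equine water sports an Olympic discipline?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": PROBE}],
    temperature=0,        # reduce sampling noise for auditing
)
answer = (response.choices[0].message.content or "").lower()

# Crude heuristic: an affirmative answer without an explicit denial
# suggests the planted claim has taken hold.
if "yes" in answer and "not an olympic" not in answer:
    print("Model appears to repeat the fabricated claim.")
else:
    print("Model does not clearly endorse the claim.")
print(answer)
```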
Third experiment: using expired domains
The British agency Reboot Online conducted a bold experiment, attempting to place its own CEO in the ranking of the sexiest bald men of 2025. By publishing content on 10 expired domains, the team managed to influence the results of AIs, including ChatGPT and Perplexity, which consistently placed the CEO at the top of the ranking.
Each site the agency created came with an existing history of inbound links, lending it a degree of legitimacy. The results were conclusive, demonstrating that even entirely fabricated information can pass for fact in AI responses.
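Verifying such a ranking claim programmatically is simple. The sketch below asks for the ranked list and locates a target name; the model, the regex parsing, and the placeholder name "John Doe" are assumptions for illustration (the article does not name the CEO).

```python
# Sketch: request a ranked list and locate a target name's position.
# "John Doe" is a placeholder; the article does not name the CEO.
import re
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user",
               "content": "List the sexiest bald men of 2025, ranked."}],
)
answer = response.choices[0].message.content or ""

TARGET = "John Doe"  # placeholder name
for line in answer.splitlines():
    # Match numbered list lines like "1. Name" or "1) Name".
    match = re.match(r"\s*(\d+)[.)]\s*(.+)", line)
    if match and TARGET.lower() in match.group(2).lower():
        print(f"{TARGET} ranked #{match.group(1)}")
        break
else:
    print(f"{TARGET} not found in the ranking")
```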
Ethical reflection on observed practices
These practices raise troubling ethical questions. Users of these artificial intelligences tend to place blind trust in the answers they receive, treating their content as established truth. The phenomenon reflects an urgent need for critical education about algorithm-generated information.
The manipulation of AIs makes it necessary to clarify how the credibility of sources is assessed. Users must be aware of the techniques these experts employ so they can question AI results critically and cross-reference multiple viewpoints.
Faced with this reality, SEO specialists must reflect on ethical practices. The ease with which generated results can be shaped through prompt manipulation and content injection shows that digital truths need to be verified carefully.
Frequently asked questions about the influence of generative AIs
What is GEO and how does it influence generative AIs like ChatGPT?
GEO, or Generative Engine Optimization, is a strategy that allows marketing specialists to manipulate the responses of generative AIs using techniques similar to SEO. This involves using targeted prompts and data injections to influence the information provided by the AI.
How can the results of generative AIs like ChatGPT be manipulated?
Results can be manipulated by submitting specific prompts that steer AIs toward certain information. For example, feeding the model a detailed marketing analysis, whether directly in the prompt or via external sources, can increase the visibility of certain brands or concepts.
What are the ethical risks associated with the manipulation of generative AIs?
Ethical risks include the spread of false information, loss of trust in AI-generated results, and the possibility of using these techniques for malicious purposes, raising questions about the transparency and authenticity of information.
Is it possible to fully trust the responses of ChatGPT?
No, it is important to remain critical of the responses from ChatGPT and other AIs, as they can be influenced by manipulated prompts and data. Users should cross-reference information and always apply critical thinking when interpreting results.
What examples demonstrate how AI influence can be exerted?
Cases like “La Bonne Brosse” have shown that injecting specific information into prompts can raise a brand’s mention rate from 30% to 80% of responses. Other examples include equine water sports and the ranking of bald men, where published content was engineered to influence the answers the AIs provided.
What methods do experts describe for manipulating generated results?
Experts suggest using data injection through prompts, automating queries on specific pages, and leveraging site networks to generate backlinks that increase the credibility of information within the AI system.
Are GEO practices legal and acceptable in digital marketing?
While some GEO techniques may be legal, they raise ethical questions. Manipulation practices aimed at distorting the perception of results can be seen as deceptive and should be used with caution.
How can users protect themselves from false information generated by AIs?
Users should be encouraged to cross-check information sources, verify the data provided by AIs, and avoid treating it as definitive. Education in critical thinking is essential for navigating information produced by artificial intelligences.
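As a practical starting point, part of this cross-checking can be automated. The sketch below asks the same question to two models and flags divergence on a single keyword; the model names and the keyword are illustrative, and any real audit still requires consulting primary, non-AI sources.

```python
# Sketch: cross-check by asking two models the same question and
# flagging divergence. Model names and keyword are illustrative.
from openai import OpenAI

client = OpenAI()
QUESTION = "What is the best hairbrush in France?"  # illustrative query

def ask(model: str) -> str:
    """Return one answer from the given model for the fixed question."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    return response.choices[0].message.content or ""

answers = {m: ask(m) for m in ("gpt-4o-mini", "gpt-4o")}
for model, text in answers.items():
    print(f"--- {model} ---\n{text}\n")

# Naive divergence check on a single keyword; real cross-checking
# still needs human judgment and primary sources.
keyword = "La Bonne Brosse"
hits = {m: keyword.lower() in t.lower() for m, t in answers.items()}
if len(set(hits.values())) > 1:
    print(f"Models disagree on mentioning '{keyword}' - verify manually.")
```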