The strange recommendation from Google’s AI
An unusual question has emerged on the Internet: “How do you make cheese stick to pizza?” When an American internet user put this question to Google, the answer generated by its AI Overview feature caused shockwaves: the AI suggested adding approximately 25 ml of non-toxic glue to the sauce to improve cheese adhesion.
A viral reaction on social media
Shocked by the recommendation, the user quickly shared a screenshot of the response on X, formerly known as Twitter. The post rapidly circulated on social media, sparking a genuine uproar: thousands of internet users wondered whether Google was encouraging dangerous practices in the kitchen.
A hilarious reference to Reddit
Investigating the situation, Le HuffPost determined that the astonishing advice had been lifted from a humorous post on Reddit. The post, dating back to 2013, jokingly recommended using glue to make pizza sauce stickier. Google acknowledged that its AI Overview had reproduced the information verbatim, without grasping the underlying humor.
The controversy over information sources
This mishap highlights the issue of the sources AI systems draw on. Google noted that the advice stemmed from an atypical query, unrepresentative of users’ common searches. Meghann Farnsworth, a spokesperson for Google, clarified that such isolated incidents would be analyzed to improve the reliability of AI Overview.
The dangers of AI food recommendations
It is now well known that artificial intelligences can sometimes “hallucinate,” that is, generate plausible-sounding but false information. In this particular case, however, Gemini, Google’s AI, did not invent the advice: it quoted a Reddit post directly, which raises the fundamental question of the quality and truthfulness of sources on the Internet.
Worrying precedents
This is not the first time an artificial intelligence in the food domain has raised concerns. In August 2023, a New Zealand recipe generator was called out for its dubious suggestions, including dishes made with bleach or poison. The lack of human oversight in these systems creates similar risks.
A changing technological landscape
With the rise of artificial intelligence, the question of its reliability remains central to public debate. Large language models are trained on vast and varied datasets, which are not always reliable. This reality underscores the need for regulation and increased vigilance. The impacts on our youth and our culinary culture deserve particular attention.
The implications of this incident are vast and concern all stakeholders, whether they are consumers or tech companies. It remains to be seen how these events will influence future developments in artificial intelligence in critical areas such as health and cooking.
Frequently asked questions
Why did Google recommend adding glue to my pizza?
The strange recommendation comes from Google’s AI Overview feature, which cited a humorous post on Reddit. The AI interpreted this literally, leading to this absurd suggestion.
Is this response a joke or an error of artificial intelligence?
This is an error. The AI took a humorous piece of information without understanding the humor. This highlights the limitations of artificial intelligences in interpreting context.
What measures is Google taking to avoid such dangerous advice?
Google has acknowledged that these incidents arise from unusual queries and stated that it uses such errors to improve its systems and prevent future absurd recommendations.
What are the implications of such AI recommendations in the culinary field?
Inappropriate recommendations can pose risks to food safety. This highlights the need for human oversight of AI-generated responses, especially regarding culinary advice.
How does Google’s AI function when providing recommendations?
Google uses AI models that analyze enormous amounts of data to generate responses. However, they can sometimes misinterpret information, as happened here.
Can we trust other culinary recommendations from Google?
Although Google often provides useful advice, it is essential to verify its recommendations, especially in sensitive areas like food and cooking, where mistakes can be dangerous.
Have there been other examples of risky recommendations from Google’s AI or similar systems?
Yes, the AI has previously made concerning recommendations, such as recipes containing toxic ingredients. This underscores the importance of human evaluation of AI-generated suggestions.
How can users report an aberrant recommendation provided by Google?
Users can report inappropriate results through the feedback tools provided by Google, which helps improve algorithms and avoid potential repetitions.