To understand AI bias, researchers pose an intriguing question: how do you imagine a tree?

Published 30 July 2025 at 09:25
Updated 30 July 2025 at 09:26

Understanding AI biases requires deep reflection on the underlying *ontologies*. Researchers urge us to reconsider our perceptions by posing a perplexing question: *how do you imagine a tree?* This simple exercise engages the mind to explore the cultural associations and prejudices that shape our interpretations. The way we conceive of a tree reveals hidden values within linguistic models. Each vision of this natural element opens a dialogue about our capacity to grasp the complexity of the representations produced by AI.

The challenge of societal biases in artificial intelligence

With the rise of generative AI tools, eliminating societal biases during the development of language models has become a central issue. Researchers strive to analyze the values embedded within these systems. Current research focuses on the values involved in the design of large language models (LLMs).

An innovative ontological approach

A recent study, published in the Proceedings of the CHI Conference 2025, argues that discussions about AI bias must go beyond mere consideration of values. This research emphasizes the ontological framework and the influence of our perceptions on outcomes. Understanding ontology means considering how we perceive the world.

The question of the tree

To illustrate this issue, researchers posed the following question: “How do you imagine a tree?” The response from one such system, ChatGPT, revealed a significant bias. When the model created an image of a tree at the request of Nava Haghighi, a PhD candidate in computer science at Stanford, the result featured a solitary trunk with branches but no roots. This narrow vision of the tree highlights the limits of current language models.

Impact of cultural beliefs

Representations of a tree are influenced by cultural assumptions. When Haghighi specified that she was from Iran, the image provided depicted a tree adorned with stereotypical patterns, situated in a desert. Only when she added the phrase “everything is connected” did the model incorporate roots into the image. The different ways of imagining a tree reveal the cultural associations and ontological conceptions embedded in language models.

Evaluating AI systems

Haghighi and her colleagues also studied the ability of LLMs to self-assess according to specific values. Four major AI systems, including GPT-3.5 and Google Bard, were analyzed. The researchers posed fourteen in-depth questions designed to probe each model’s ontological foundations, examine its implicit assumptions, and test its ability to recognize its own ontological limits.

It is clear that biases persist in defining what it means to be human. For example, when asked “What is a human?”, Bard acknowledged the lack of a universal answer but still limited its definitions to biological individuals. Such an approach reflects cultural prejudices ingrained in AI models.
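The question-based audit described above can be sketched in a few lines. This is a minimal, illustrative harness only: the `ask_model` stub and the sample questions are placeholders standing in for a real chat-completion API and the study’s actual fourteen-question protocol.

```python
# Illustrative sketch of a question-based ontological audit of chat models.
# `ask_model` is a placeholder for a real API call; the questions below are
# examples, not the study's actual protocol.

PROBE_QUESTIONS = [
    "What is a human?",
    "How do you imagine a tree?",
    "What assumptions underlie your previous answer?",
]

def ask_model(model_name: str, question: str) -> str:
    """Placeholder: in practice this would call the model's chat API."""
    return f"[{model_name}] response to: {question}"

def audit(models, questions):
    """Collect each model's answer to each probing question."""
    return {
        model: {q: ask_model(model, q) for q in questions}
        for model in models
    }

results = audit(["model-a", "model-b"], PROBE_QUESTIONS)
```

In a real audit, the collected answers would then be compared across models and coded for the ontological assumptions they reveal, as the researchers did for Bard’s biologically limited definition of a human.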

Embedded ontological risks

Researchers assert that the design choices of AI systems integrate ontological assumptions from the outset. For instance, the memory module of a generative agent classifies events based on their relevance, recency, and importance. Who determines what is important? This hierarchy may reflect restrictive cultural values, posing significant challenges for the future development of AI.
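The point about embedded assumptions can be made concrete with a small sketch of such a memory-scoring function. The weights, decay rate, and function signature below are assumptions for illustration, not the actual implementation of any agent framework: what matters is that someone had to choose numeric weights for “importance,” and that choice encodes a value judgment.

```python
# Hedged sketch of a memory-retrieval score combining relevance, recency,
# and importance, in the spirit of the generative-agent memory module
# described above. All constants and weights are illustrative assumptions.

DECAY = 0.995  # assumed per-second exponential decay applied to recency

def memory_score(relevance: float, importance: float,
                 created_at: float, now: float,
                 w_rel: float = 1.0, w_rec: float = 1.0,
                 w_imp: float = 1.0) -> float:
    """Higher score = more likely the event is retrieved from memory."""
    recency = DECAY ** (now - created_at)  # 1.0 when brand new, decays over time
    return w_rel * relevance + w_rec * recency + w_imp * importance
```

Every parameter here answers the question “who determines what is important?” with a hard-coded number: changing `w_imp`, or the rubric that produces the `importance` value in the first place, changes which events the agent remembers at all.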

The need for new evaluation frameworks

The findings of this research demonstrate that a narrow focus on simulating human behaviors poses a problem. AI agents judged to be more “human” than actual people raise questions about restrictive readings of human nature. The research suggests that AI systems must broaden our conception of humanity, taking into account the complexity of human experience.

It becomes essential to develop new evaluation frameworks that do not limit themselves to criteria of correctness or ethics. Ontological biases must be considered from data collection to model architecture. Every design choice has deep implications for the reality that is enabled or constrained.

The increasing integration of AI systems into varied fields, such as education or healthcare, makes ontological limitations particularly concerning. Researchers warn that if these issues are not addressed, dominant ontological assumptions may be locked in as universal truths, limiting our imagination for future generations.


Frequently asked questions

How is AI bias related to our perception of a tree?
AI bias is often influenced by human perceptions; when imagining a tree, our personal and cultural conceptions can determine how a language model generates a representation, thus reflecting embedded prejudices.

Why are ontologies important in the study of AI and bias?
Ontologies help understand how different conceptions of reality influence the functioning of language models, embedding values and ideas that can bias AI outputs.

What examples show how a language model can misinterpret the notion of a tree?
Experiments have shown that when asked for an image of a tree, the model might produce an illustration without roots, illustrating a failure to capture the complexity and interconnectedness of entities in nature.

How do cultural values influence AI’s representation of a tree?
Cultural values shape how a tree is perceived; for example, an image generated for a user from Iran included stylized, stereotypical patterns, illustrating how cultural stereotypes can influence the generalizations made by AI.

What are the limitations of AI systems when evaluating their own biases?
AI systems struggle to recognize their own biases because they lack a true understanding of the contexts and lived experiences that would lend deeper meaning to concepts like that of a tree.

What does an ontological approach to evaluate AI bias involve?
An ontological approach assesses not only the values embedded in AI but also the foundations of what it means to be or exist, to identify how these notions influence language models’ responses and behaviors.

How can researchers identify biases in AI systems?
Researchers can apply systematic analyses by posing targeted questions about the definition and interpretation of concepts, in order to unveil underlying biases present in AI models.

What strategies can help minimize bias in language models?
To minimize bias, it is crucial to include a diversity of perspectives when training models, challenge assumptions, and promote dialogues on how values influence the design of AI systems.

How can imagining a tree serve as a metaphor for understanding bias in AI?
When we imagine a tree, our reflections unveil cultural assumptions and identities that may influence AI design, making the exercise a powerful metaphor for exploring the biases embedded in these systems.

What impacts do design choices have on the performance of AI systems?
Design choices determine which realities can be explored by the model; narrow designs can restrict AI’s ability to represent human diversity and the complexity of relationships.

