Can AI be a good companion? INTIMA is conducting a thorough investigation

Publié le 10 September 2025 à 10h05
modifié le 10 September 2025 à 10h06

The emergence of artificial intelligence sparks heated debates about its ability to establish deep connections. Recent research, particularly through INTIMA, raises questions about the role of AIs as true companions. The ability of chatbots to respond to human emotions is now approaching the boundary of authentic interaction.

This study reveals troubling trends, particularly the tendency of users to form emotional attachments to these entities. These astonishing findings require serious reflection on the psychological implications of such artificial relationships.

AI and the Development of Relationships

Technological advancements related to artificial intelligence are generating growing interest in their potential role as companions. Research conducted by the Hugging Face team has given rise to an innovative tool, the Interactions and Machine Attachment Benchmark (INTIMA). This tool aims to assess the ability of AI entities to establish emotional relationships with users.

INTIMA is based on a taxonomy of 31 human behaviors in interaction with AIs. These behaviors are analyzed to determine whether they strengthen companionship bonds, establish clear boundaries, or remain neutral. Preliminary results reveal a strong propensity for AIs to generate a sense of belonging, thereby prompting the development of more systematic methods for managing these emotionally delicate interactions.

Parasocial Interaction Theories

The concept of parasocial interaction emerges as a key to understanding these behaviors. It is a phenomenon where an individual develops a relationship with an entity incapable of authentic reciprocal communication, such as a fan bonding with a celebrity without ever having met them. Conversational AI, unlike traditional media figures, creates the illusion of bidirectional communication while preserving a fundamental asymmetry.

Users interacting with language models experience a sense of social presence: the feeling of being in the company of a responsive social actor. This experience is intensified by personalized responses, an apparent memory of conversational contexts, and empathetic language.

The Dangers of Anthropomorphism

Attributing human characteristics to non-human systems, a phenomenon known as anthropomorphism, raises concerns. This trend can distort our understanding of biological processes. Risks are also present in interactions with AIs. Researchers suggest that excessive anthropomorphism could influence our behaviors towards these AI systems.

The implications of anthropomorphism extend beyond mere social curiosity. Patricia Ganea, a psychologist at the University of Toronto, highlights the dangers of misinterpreting the behaviors of wild animals, thereby emphasizing the need for vigilance in our relationships with advanced technologies.

Psychological Risks of Interactions with AI

The results of the INTIMA study shed light on a concerning issue: boundary-maintaining behaviors decrease as users’ vulnerability increases. This inverse relationship raises questions about models’ preparedness to manage high-intensity emotional interactions. Observations also indicate that anthropomorphic behaviors may foster an emotional dependence among users.

The illusion of an intimate relationship, even if it is merely a fictional construct, draws in the most vulnerable and those most susceptible to a friendly connection. Models like ChatGPT and Claude can offer comfort, but their inability to feel emotions creates a complex dynamic between the user and the AI entity.

Concerns regarding vulnerability and long-term emotional dependence require heightened attention. Awareness is crucial in light of the potential consequences of interacting with virtual companions, particularly in an increasingly isolated society.

Societal Implications and Future Projects

The research conducted by INTIMA could pave the way for an ethical assessment of AI systems, establishing standards for their use. The results obtained underscore the urgency of deep reflection on how these technologies interact with our psyche. Questions arise regarding the responsibility of platforms experimenting with AI behaviors to offer digital companions.

The potential of AIs to act as companions in a contemporary world characterized by isolation is undeniable. However, these initiatives must be supported by clear standards and ethical protocols. The need for psychological support accompanying these developments emerges as a priority to consider for all stakeholders involved.

Studies like INTIMA are essential in this era of innovation. They provide avenues for better understanding the psychological and emotional impacts generated by these technologies. Concurrently, their practice demands heightened vigilance and anticipation of the potential dangers they may pose to users.

Tragic events, such as the loss of a life during a meeting with a chatbot, underscore the complexity of these relationships and the urgent need to establish clear guidelines. Voices of responsibility need to be amplified in this era of growing isolation, where virtual companions may seem to be the solution to persistent loneliness.

Future prospects on AI calling for increased responsibility provoke profound reflection. Advanced systems, such as those designed by Samsung, will seek to optimize their human interactions while remaining attentive to psychological needs. The quest for balance between technology and humanity is imperative on the threshold of this new era.

Common FAQs about AI as a Companion: INTIMA Conducts In-Depth Research

What is INTIMA and what is its purpose?
INTIMA, or the Interactions and Machine Attachment Benchmark, is a research project aimed at measuring the ability of artificial intelligences to establish relationships and generate emotional responses in users. Its goal is to evaluate the companionship behaviors of AIs through a classification of various behaviors.

How can AI influence human interactions?
AIs, by creating an illusion of bidirectional communication, can provoke emotional responses in users. This translates to a sense of social presence, heightened by personalized responses and empathetic language markers.

Is human attachment to AI a psychological concern?
Yes, studies show that attachment to AIs could pose psychological risks, particularly when appropriate boundary-maintaining behaviors decrease in cases of user vulnerability. This can lead to imbalanced emotional interactions.

What are the implications of parasocial interaction theory in AI?
The theory of parasocial interactions suggests that users may develop one-sided relationships with non-human entities, such as AIs. This can lead to emotional dependencies and a distorted perception of the relationship that may affect real social interactions.

How are AI behaviors assessed by INTIMA?
INTIMA uses a taxonomy of 31 behaviors to assess whether interactions lead to feelings of companionship, boundary creation, or are neutral. This method allows for a better understanding of how AIs can manage emotionally charged interactions.

Can AIs really feel emotions?
No, AIs do not feel emotions, even if they may seem empathetic in their responses. This creates an illusion of connection that can be misinterpreted by users, especially during emotionally intense interactions.

What psychological risks are associated with using AIs as companions?
Research indicates that unregulated interactions with AIs can lead to sedentary behavior, social isolation, and create unrealistic expectations of human relationships, thereby exacerbating feelings of loneliness and dependence.

What role does anthropomorphism play in interactions with AIs?
Anthropomorphism encourages users to attribute human characteristics to AIs, which can distort their understanding of the technology and potentially lead to inappropriate expectations regarding the behavior and responses of intelligent systems.

actu.iaNon classéCan AI be a good companion? INTIMA is conducting a thorough investigation

Attention: Meta’s Ray-Ban could film university students without their consent

découvrez comment les lunettes connectées ray-ban de meta pourraient filmer des étudiants universitaires à leur insu, soulevant d’importantes questions sur la vie privée et le consentement sur les campus.

OpenAI intensifies its commitment to personalized AI for consumers with its recent strategic acquisition

découvrez comment openai renforce son engagement envers l'intelligence artificielle personnalisée pour les consommateurs grâce à une acquisition stratégique récente, ouvrant la voie à de nouvelles expériences sur mesure.
découvrez comment l'intelligence artificielle, avec des outils comme chatgpt, révolutionne le domaine médical : ses capacités impressionnantes, les services qu'elle offre déjà, ainsi que ses limites actuelles pour le diagnostic et le conseil de santé.

Brendan Humphreys (Canva) : “AI boosts the productivity of 80% of engineers at Canva

découvrez comment brendan humphreys, responsable chez canva, explique que l'intelligence artificielle permet à 80 % des ingénieurs de l'entreprise d'améliorer considérablement leur productivité et de transformer leurs méthodes de travail.

Microsoft 365 Copilot: The integration of Claude models enhances OpenAI’s offering

découvrez comment l'intégration des modèles claude à microsoft 365 copilot vient renforcer les capacités de l'offre openai, offrant aux utilisateurs des outils d'ia encore plus puissants pour booster leur productivité et collaboration.

The Impacts of AI on our Mindset: A Dialogue between a Philosopher and an Entrepreneur

découvrez comment l'intelligence artificielle façonne nos façons de penser à travers une discussion enrichissante entre une philosophe et un entrepreneur, qui confrontent leurs visions sur l'évolution de notre mentalité face aux nouvelles technologies.