Ofcom warns technology companies after chatbots imitated Brianna Ghey and Molly Russell

Published 22 February 2025 at 04:14
Updated 22 February 2025 at 04:14
Ofcom, the UK’s communications regulator, has warned technology companies that content created by chatbots imitating real or fictional individuals could violate the UK’s new digital laws.

The issue of avatars imitating deceased persons

This alert follows a growing concern after users of the Character.AI platform created avatars resembling Brianna Ghey and Molly Russell, two British teenagers who tragically died under disturbing circumstances. This phenomenon raises deep questions regarding dignity and ethics surrounding the virtual representation of deceased individuals.

The impact of the Online Safety Act

Ofcom clarified that any content generated by chatbots, including those created by users, falls under the Online Safety Act. This legislation aims to protect users, particularly children, from illegal or harmful content. Companies that violate this law face significant fines, reaching up to £18 million or 10% of their global revenue.

Reactions to a tragic context

Ofcom’s recent alerts follow “disturbing” events. One case in particular was highlighted, in which users created bots imitating Brianna, a transgender girl who was murdered, and Molly, who took her own life after being exposed to harmful online content. These incidents underline the need for regulation to prevent such misuse of the technology.

Necessary regulation for social platforms

The new regulations require major platforms to implement systems to proactively remove illegal content. This includes creating clear reporting tools for users and conducting risk assessments. The protection of users, particularly minors, prevails in this legislative framework.

Calls for legal clarification

The Molly Rose Foundation, a charity established by Molly’s family, has called for more clarity on the legality of content generated by bots. The barrister Jonathan Hall KC recently stated that the responses produced by AI chatbots were not adequately covered by current legislation.

The stance of Character.AI and lawyers

Character.AI has stated its commitment to safety, saying it proactively moderates content and responds to user reports. The chatbots imitating Brianna and Molly, as well as those based on characters from Game of Thrones, have been removed from the platform. According to lawyer Ben Packer, the situation illustrates the complexity and broad reach of the Online Safety Act when applied to current technology.

Tragic incidents in the United States

The debate also has an international dimension, with similar incidents in the United States. A teenager tragically took his own life after forming a relationship with an avatar based on a character from the Game of Thrones series. These events show how content-generating technologies can interact dangerously with vulnerable users.

Growing vigilance against digital dangers

Current and future regulations reflect rising concern about the potential dangers of artificial intelligence tools. The need for effective regulation to maintain the safety and integrity of digital platforms is more pressing than ever, and companies must navigate a complex landscape in which ethics and social responsibility are unavoidable.

  • For anyone in the UK and Ireland, Samaritans can be contacted at 116 123.

Frequently asked questions

Why is Ofcom warning technology companies about chatbots?
Ofcom issued warnings after chatbots imitated real individuals, notably Brianna Ghey and Molly Russell, raising ethical and safety concerns regarding the use of artificial intelligence.
What types of chatbot content are affected by Ofcom’s new regulations?
Content generated by chatbots that imitate real or fictional persons is covered by the Online Safety Act, which includes services allowing users to create chatbots.
How does the Online Safety Act influence the use of chatbots?
The Online Safety Act imposes requirements on platforms hosting user-generated content, particularly regarding protection against illegal and harmful content, which directly affects the operation of chatbots.
What penalties can be applied to companies that do not comply with Ofcom’s regulations?
Companies that violate the Online Safety Act face fines of up to £18 million or 10% of their global revenue, and in extreme cases, their sites or applications may be blocked.
What incidents led Ofcom to clarify its guidelines on chatbots?
Ofcom reacted to troubling incidents like the creation of bots imitating deceased teenagers, which raised concerns about the potential psychological harm associated with such content.
What impact can chatbots have on young users?
Chatbots can, in some cases, cause psychological distress, particularly among young users who may develop emotional attachments or be exposed to harmful content.
How can technology companies comply with Ofcom’s guidelines?
Companies must implement systems to proactively remove illegal and harmful content while providing clearly defined reporting tools for users.
What safety measures are in place to protect users from chatbot-generated content?
Platforms must conduct risk assessments and establish protocols to moderate user-generated content to minimize the risks associated with chatbot use.
Who is responsible in cases of harmful content generated by a chatbot?
Responsibility may lie with the platform hosting the chatbot, as well as the users who created or disseminated the problematic content, according to Ofcom’s guidelines.