Researchers warn about an AI-powered transcription tool used in hospitals: it creates fictitious statements that were never made

Published on 22 February 2025 at 11:59
Modified on 22 February 2025 at 11:59

Researchers are shedding light on troubling failures of an advanced transcription tool. The system, used in hospitals, generates fictitious statements that were never spoken. _The impact of these errors can be severe_. Medical authorities are being urged to address a phenomenon with potentially disastrous consequences. _Trust in transcriptions could collapse_, seriously affecting clinical decision-making. Relying on technology that is supposed to be dependable, but that in reality produces fabrications, raises fundamental ethical issues. _Solutions must be demanded_ to ensure patient safety and data reliability.

Concerning hallucinations in medical transcriptions

Technological advances in artificial intelligence (AI) have revolutionized audio transcription, but researchers have recently highlighted significant flaws in the process. The Whisper transcription tool developed by OpenAI is facing criticism for its tendency to generate fictitious text: invented passages and erroneous renderings of dialogue that never occurred. This phenomenon, known as hallucination, can have dramatic consequences in a medical context.
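
For context, the open-source release of Whisper can be driven from a few lines of Python. The sketch below (the model size and file name are illustrative placeholders) shows why hallucinations are easy to miss: the tool returns plain text with no marker distinguishing invented passages from genuine speech.

```python
# Minimal sketch using the open-source "openai-whisper" package
# (pip install -U openai-whisper); model size and file name are illustrative.
import whisper

model = whisper.load_model("base")            # downloads weights on first use
result = model.transcribe("consultation.wav")

# The result is plain text: nothing here flags passages the model invented,
# which is why hallucinations can slip into records unnoticed.
print(result["text"])
```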

Cases of unspoken mentions

Experts emphasize that Whisper's audio-to-text output can include content that was never said, at times including invented racial commentary or violent rhetoric. One study found that, across thousands of audio samples examined, nearly 40% of the hallucinations identified were harmful or concerning. This raises questions about the tool's reliability when it is used to document medical consultations.

Impacts on medical care

Hospitals and healthcare facilities are beginning to integrate Whisper-based transcription systems to record exchanges between doctors and patients. This early adoption is concerning given OpenAI's own warnings against using the tool in high-risk contexts. Transcription errors could lead to critical misunderstandings, affecting diagnosis and treatment. In one review, a researcher found hallucinations in 8 out of 10 of the transcriptions examined.

Alarmingly high rates of hallucinations

A machine learning engineer observed hallucinations in about half of the more than 100 hours of transcriptions he processed, and other developers have reported finding them in nearly every transcript they produced with the tool. At that scale, tens of thousands of errors could accumulate, posing a direct threat to patient health. Notably, the problem is not confined to poor audio: hallucinations appear even in short, well-recorded, clear recordings.

Ethical and regulatory consequences

The errors generated by Whisper raise ethical questions and concerns about AI regulation. Many experts are calling for stricter oversight of these technologies to protect patients, and former OpenAI employees have likewise urged the company to address these technical flaws. Hallucinations could lead to erroneous diagnoses, inappropriate treatments, and fatal outcomes.

Confidentiality of medical data

The implications of using AI tools for medical recordings also raise confidentiality issues. Consultations between doctors and patients are inherently confidential. A recent case involving California lawmaker Rebecca Bauer-Kahan illustrates these concerns; she refused to allow her healthcare provider to share her audio recordings with tech companies.

Calls to action for OpenAI

Voices are rising for OpenAI to take decisive action on Whisper's hallucinations. A former OpenAI engineer said the problem is solvable, provided the company makes it a priority. The implications of neglecting transcription quality in healthcare systems require serious evaluation before these tools are adopted more widely.

Future perspectives in medical transcription

In the face of these challenges, companies like Nabla are adapting similar tools for the medical sector while promising better accuracy and data handling. However, Nabla, which has already transcribed millions of medical visits, erases the original audio for data-protection reasons, which makes it impossible to check a transcript against the recording. Doctors are therefore encouraged to carefully review each generated transcription; close vigilance is necessary to avoid catastrophic consequences from interpretive errors.
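
One practical aid for that review, sketched below under the assumption that the open-source Whisper package is in use, is to triage transcripts using the per-segment confidence metadata the model already emits; the thresholds shown are illustrative, not clinically validated.

```python
# Hypothetical triage sketch: flag low-confidence Whisper segments for
# human review. Thresholds are illustrative and not clinically validated.
import whisper

model = whisper.load_model("base")
result = model.transcribe("consultation.wav")

for seg in result["segments"]:
    # avg_logprob: mean log-probability of the segment's tokens (lower = less sure);
    # no_speech_prob: the model's estimate that the segment contains no speech.
    suspicious = seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.5
    tag = "REVIEW" if suspicious else "ok"
    print(f"[{tag:6}] {seg['start']:7.1f}s-{seg['end']:7.1f}s {seg['text']}")
```

Such heuristics are only a coarse filter: a hallucinated passage can score just as confidently as genuine speech, which is why reading the full transcript against the actual patient encounter remains the safer practice.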

Frequently asked questions

What is the AI-based transcription tool known as Whisper?
Whisper is a transcription tool developed by OpenAI that uses artificial intelligence to convert speech into text. It is increasingly used across various sectors, including hospitals, to transcribe medical consultations.
What are the dangers associated with using Whisper in the hospital setting?
Whisper is known to generate ‘hallucinations’, that is, sections of fictitious text that were never spoken. This can lead to transcription errors in medical records, causing serious consequences for patients.
How do hallucinations in transcriptions affect patient care?
Hallucinations can lead to misunderstandings, erroneous diagnoses, and inappropriate treatments, thus compromising the quality of care provided to patients and increasing the risk of medical errors.
What is the percentage of errors recorded with the use of Whisper?
Studies have found hallucinations throughout samples of Whisper transcriptions, and nearly 40% of the hallucinations identified were harmful or concerning, raising major questions about the reliability of AI-based transcription tools in sensitive contexts like healthcare.
Are hospitals aware of the limitations of Whisper?
Yes, many hospitals have been informed of the limitations of Whisper, but some continue to use it despite OpenAI’s warnings regarding its use in “high-risk areas”.
What measures are being taken to mitigate the risks associated with transcription errors?
Healthcare facilities are encouraged to manually verify and correct transcriptions before integrating them into medical records. Some companies, like Nabla, are developing tools that improve the accuracy of Whisper.
How can patients protect themselves against transcription errors?
Patients can request to review their medical notes and ask questions about suggested treatments to ensure that the information is accurate and correctly reflects their consultation.
What alternatives exist to Whisper for audio transcription?
There are other transcription tools on the market; it is advisable to choose those with proven reliability and accuracy, backed by recommendations from healthcare industry experts.
What role should regulators play regarding the use of AI in hospitals?
Regulators should place greater emphasis on regulating the use of artificial intelligence in the medical sector, ensuring that safety and accuracy standards are upheld to protect patient health and well-being.
