Research reveals that advanced language models can carry out ransomware attacks autonomously.

Published on 6 September 2025 at 09:18
Modified on 6 September 2025 at 09:19

Advances in artificial intelligence are radically changing the cybersecurity landscape, with advanced language models emerging as a novel and formidable threat vector. This technology now enables the autonomous execution of ransomware attacks, giving cybercriminals powerful tools to compromise a wide range of systems. The economic and technical implications of such capabilities are alarming, jeopardizing the security of sensitive data. Preparing an adequate response has become a priority for defenders facing this evolving and unprecedentedly sophisticated threat.

Autonomous Ransomware Attacks Powered by AI

A recent study conducted by the team at NYU Tandon School of Engineering highlighted an alarming phenomenon: advanced language models can now undertake ransomware attacks autonomously. This research, published on the preprint server arXiv, developed a ransomware system called “Ransomware 3.0”. This prototype is capable of mapping systems, identifying sensitive files, stealing or encrypting data, and crafting ransom notes.

How the Ransomware 3.0 System Works

This malicious system was designed to demonstrate an artificial intelligence’s ability to orchestrate every step of an attack. Ransomware 3.0, nicknamed “PromptLock” by the cybersecurity company ESET, was discovered on the VirusTotal platform during testing. The lab-built prototype produced functional code convincing enough that some experts initially believed they had discovered an active ransomware strain developed by malicious actors.

Complexity of AI-Generated Attacks

AI-generated attacks stand out due to their unique method of execution. Unlike traditional malware, which ships pre-written attack code, this malware embeds natural-language instructions in the program itself, and those instructions are interpreted at runtime by language models. Each activation of the malware queries an AI model to generate Lua scripts tailored to the specific configuration of the targeted machine.

Economic Impact and Detection Challenges

The economic implications of this research suggest a significant transformation in how ransomware operations are conducted. Previously, such campaigns required skilled development teams and substantial infrastructure investments; the Ransomware 3.0 prototype needs only about 23,000 AI tokens per attack, a cost of approximately $0.70. Using open-source AI models eliminates even the costs associated with commercial services.
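The cost figure can be sanity-checked with back-of-the-envelope arithmetic. The per-token price below is an illustrative assumption (commercial API pricing varies by provider and model); only the 23,000-token figure comes from the study.

```python
# Back-of-the-envelope cost of one AI-orchestrated attack.
TOKENS_PER_ATTACK = 23_000      # tokens consumed per attack (from the study)
PRICE_PER_1K_TOKENS = 0.03      # assumed rate in USD per 1,000 tokens

cost_usd = TOKENS_PER_ATTACK / 1_000 * PRICE_PER_1K_TOKENS
print(f"Approximate cost per attack: ${cost_usd:.2f}")  # ≈ $0.69, in line with the article's ~$0.70
```

At roughly $0.03 per 1,000 tokens, the per-attack cost lands near the $0.70 the researchers report; with free open-source models, even that marginal cost disappears.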

Current detection systems face a major challenge. Traditional security software relies on detecting known malicious signatures or typical behaviors, but AI-generated attacks produce varied code that can easily evade these defenses. In tests, the AI models identified between 63% and 96% of sensitive files, depending on the type of environment, underscoring the effectiveness of these new techniques.
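The signature-evasion point can be illustrated with a harmless toy example: two scripts that behave identically but differ in their bytes hash to entirely different values, so a scanner keyed to one variant's signature will never match the other.

```python
import hashlib

# Two functionally identical scripts with trivially different bytes.
variant_a = b'print("hello")\n'
variant_b = b'msg = "hello"\nprint(msg)\n'

hash_a = hashlib.sha256(variant_a).hexdigest()
hash_b = hashlib.sha256(variant_b).hexdigest()

# A signature keyed to one variant's hash will never match the other.
print(hash_a == hash_b)  # False
```

Since an AI model can emit a fresh variant on every activation, hash- and signature-based defenses are effectively always one variant behind.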

Preparation Measures and Recommendations

Researchers emphasize the importance of broadening surveillance of access to sensitive files and limiting outgoing connections of AI services. Furthermore, it is imperative to develop detection capabilities specifically designed for AI-generated attack behaviors. These recommendations aim to prepare the cybersecurity community to respond to emerging threats that exploit sophisticated artificial intelligence capabilities.
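As a minimal sketch of the second recommendation (limiting outgoing connections to AI services), a defender could flag outbound connections to known AI-API endpoints in host logs. The hostnames and log format below are hypothetical placeholders; a real deployment would use the organization's own firewall or EDR telemetry.

```python
# Minimal sketch: flag processes connecting to AI-API endpoints.
# Hostnames and the "<process> -> <host>:<port>" log format are assumptions.
AI_API_HOSTS = {"api.openai.com", "api.anthropic.com",
                "generativelanguage.googleapis.com"}

def flag_ai_egress(log_lines):
    """Return (process, host) pairs for connections to AI-API endpoints."""
    alerts = []
    for line in log_lines:
        proc, _, dest = line.partition(" -> ")
        host = dest.rsplit(":", 1)[0]
        if host in AI_API_HOSTS:
            alerts.append((proc.strip(), host))
    return alerts

sample = [
    "backup.exe -> files.example.com:443",
    "unknown.bin -> api.openai.com:443",
]
print(flag_ai_egress(sample))  # [('unknown.bin', 'api.openai.com')]
```

Such a rule is deliberately coarse: an unexpected binary talking to an AI API is not proof of compromise, but in an environment where no software should be calling these services, it is a cheap, high-signal alert.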

This valuable research, governed by institutional ethical guidelines, provides crucial insights and technical details to help the cybersecurity community better understand this new threat. The challenge now is to strengthen defenses against these autonomous systems and counter the new ransomware techniques that could reconfigure the landscape of cybercrime.

FAQ: Research Reveals Advanced Language Models Can Execute Ransomware Attacks Autonomously

What is Ransomware 3.0 and how does it work?
Ransomware 3.0, also known as PromptLock, is a malicious system capable of executing ransomware attacks autonomously using advanced language models. It performs multiple steps, including mapping systems, identifying sensitive files, stealing or encrypting data, and generating ransom notes.

What are the dangers associated with AI-powered ransomware attacks?
AI-powered ransomware attacks pose several risks, including an improved ability to bypass cybersecurity defenses through the generation of unique code, as they do not rely on known malware signatures.

How can businesses protect themselves against these AI threats?
Businesses should monitor access patterns to sensitive files, control outgoing connections to AI services, and develop specific detection capabilities to recognize AI-generated attack behaviors.

What types of systems can be affected by this type of autonomous ransomware?
This type of ransomware can target various systems, including personal computers, corporate servers, and industrial control systems, due to its flexible and interoperable design.

What is the importance of research on Ransomware 3.0 for the cybersecurity community?
This research is crucial as it helps cybersecurity professionals understand and anticipate new threats. It provides essential technical insights for preparing effective countermeasures.

How does the reduction of attack launch costs influence the landscape of cybercrime?
The decrease in costs associated with ransomware attacks through the use of open-source AI models enables less sophisticated actors to conduct advanced campaigns, thus increasing the number of potential attacks.

