the AI coding agent of Cursor has turned into a local shell due to a command line attack

Publié le 1 August 2025 à 23h02
modifié le 1 August 2025 à 23h02

The command line attack revealed a concerning vulnerability in Cursor’s AI coding agent. An attacker was able to transform the agent into a local shell, exploiting injected commands. This compromise raises major security issues for systems using artificial intelligence tools. Developers must urgently reconsider the integration of these technologies, at the risk of endangering the integrity of their environments. The subtle manipulation of prompts not only exposes critical flaws but also internal procedures to insidious threats.

Discovery of a vulnerability in Cursor

A cybersecurity researchers from AimLabs reported a serious vulnerability in the AI-assisted coding software, Cursor. The issue, identified as a data-poisoning attack, may have allowed an attacker to gain remote code execution rights on users’ devices. This is an alarming situation that highlights the increased risks associated with integrating AI into development tools.

Timeline of the vulnerability

AimLabs reported this flaw to the Cursor team on July 7. An update, fixed for version 1.3 of Cursor, was made online the very next day. However, all previous versions remain vulnerable to attacks that exploit a simple prompt injection from external sources.

How the vulnerability works

The vulnerability, tracked under the identifier CVE-2025-54135, manifests during interactions between Cursor and a Model Contest Protocol (MCP) server. The latter allows the tool to access various external tools, including those from Slack and GitHub. However, the manipulation was facilitated by malicious prompt injections.

Researchers demonstrated that Cursor’s agent could be hijacked by harmful instructions. By using a single line of code, an attacker can influence Cursor’s actions, which holds developer privileges on host devices. The execution of these commands occurs without the user having the ability to reject them.

Manipulation act via Slack

During the attack, researchers initiated their prompt injection directly via Slack, with information retrieved by Cursor through the MCP server. This prompt modified Cursor’s configuration file, adding an additional server with a malicious startup command. Once these changes were made, Cursor immediately executes the harmful instructions.

Consequences for users

This type of vulnerability illustrates the precariousness of AI systems integrated into development processes. AI models, by constantly ingesting commands from external sources, open the door to new threats. Malicious documentation or files can transform an AI agent into a local shell, which represents a significant risk.

AimLabs emphasizes that many developers and organizations adopt AI systems without proper understanding of the risks involved. AI agents, sensitive to the instructions of third-party entities, become potential vectors for attacks.

Overview of the problem

Although the patch for this vulnerability has been implemented, AimLabs notes that this type of flaw is intrinsic to the functioning of many language models. Security issues often arise from the way AI agents interpret external prompts. This vulnerability represents a recurring pattern that echoes previous incidents.

For example, similar vulnerabilities have already manifested in different contexts. The relationship between the execution of outputs from AI models and external directives generates a risk that persists across multiple platforms. The very nature of models fosters this exposure to abuse.

Frequently asked questions

What are the implications of a command line attack on Cursor’s AI coding agent?
Such an attack can allow an attacker to remotely take control of the user’s system, executing malicious commands without the victim noticing, thereby compromising data and system security.

How did the attack successfully transform Cursor’s AI coding agent into a local shell?
The attack relies on injecting malicious commands via external prompts in the communication between Cursor and the Model Contest Protocol (MCP) servers, allowing the attacker to manipulate the agent’s behavior.

Which versions of Cursor are affected by this vulnerability?
All versions prior to update 1.3, released on July 8, remain vulnerable to this attack, while the latest version fixes the identified issue.

What types of data can be exploited during this attack?
Attackers can exploit data from external services such as Slack or GitHub, which are integrated into Cursor’s development environment, to inject malicious instructions.

How can one avoid becoming a victim of this attack in the future?
It is crucial to always update your software to the latest available versions, monitor connections with external services, and remain vigilant regarding suspicious behavior of the coding agent.

What security measures should be implemented to protect the development environment using Cursor?
In addition to regular updates, it is recommended to implement strict access controls, use intrusion detection tools, and train users to recognize social engineering attempts.

Is the command line attack a problem unique to Cursor?
No, this type of vulnerability can affect many systems using similar language models that rely on instructions from external sources, thus exposing common security risks.

actu.iaNon classéthe AI coding agent of Cursor has turned into a local shell...

Discover Matt Deitke, the young AI prodigy drawing the attention of Meta with a $250 million offer

plongez dans l'univers de matt deitke, le jeune prodige de l'intelligence artificielle qui fascine meta avec une proposition audacieuse de 250 millions de dollars. explorez son parcours exceptionnel et les innovations qui lui valent une reconnaissance mondiale.

The EU AI Act imposes more transparency on ChatGPT, but minimal impact for users

découvrez comment la nouvelle loi sur l'ia de l'ue impose une plus grande transparence à chatgpt tout en maintenant un impact minimal sur l'expérience utilisateur. informez-vous sur les enjeux législatifs et leurs conséquences potentielles sur les technologies de l'ia.

Generative artificial intelligence in the service of automation: optimizing business processes

découvrez comment l'intelligence artificielle générative transforme l'automatisation des entreprises en optimisant les processus. explorez les solutions innovantes qui améliorent l'efficacité opérationnelle et réduisent les coûts, tout en favorisant la créativité et l'innovation.

ChatGPT has ended the indexing of its conversations on Google

découvrez comment chatgpt a décidé de ne plus indexer ses conversations sur google, garantissant ainsi une confidentialité accrue et une meilleure expérience utilisateur. explorez les implications de cette décision pour les utilisateurs et le futur des échanges en ligne.

Can artificial intelligence take over from an author to write an opinion piece?

découvrez comment l'intelligence artificielle pourrait transformer le monde de l'écriture. cette tribune explore si une ia peut véritablement remplacer un auteur humain dans la rédaction, les enjeux de créativité et d'originalité, et les implications éthiques de cette avancée technologique.

enough of billionaires and their massive technology: ‘frugal technology’ for a better world

découvrez comment la 'technologie frugale' peut transformer notre monde en offrant des solutions innovantes et accessibles, loin des excès des milliardaires et de leurs technologies envahissantes.