The coding tool powered by Alibaba’s AI raises security concerns in the West

Published on 6 August 2025 at 09:27
Updated on 6 August 2025 at 09:27

Alibaba’s AI-powered coding tool is raising major security concerns in the West. The potential repercussions for global technology systems already worry many experts. The innovation, while promising, could turn into a Trojan horse in the digital landscape. Its implications for data security and developer autonomy demand close attention, and the tension between technological advancement and vigilance is intensifying.

Qwen3-Coder from Alibaba: An innovative yet controversial coding tool

Alibaba has introduced a new AI-powered coding tool, Qwen3-Coder. This advanced coding agent, part of the Qwen3 family, stands out for its ability to handle complex software development tasks. The open model contains 480 billion parameters, of which 35 billion are activated at a time through an approach called Mixture of Experts.

Qwen3-Coder handles a context window of up to 256,000 tokens, which can potentially be extended to one million through extrapolation techniques. Its performance is already being compared to that of leading tools from OpenAI and Anthropic. While these features impress, concerns are growing about the security of the technology systems that adopt it.
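To make the capability concrete, here is a minimal sketch of how a developer might query an open-weight Qwen3-Coder checkpoint locally with the Hugging Face transformers library. The model identifier and generation settings are assumptions for illustration, not an official integration path, and the full 480-billion-parameter MoE model requires multi-GPU server hardware.

```python
# Minimal sketch: querying an open-weight Qwen3-Coder checkpoint locally.
# The model identifier below is an assumption; check the model card for the
# exact name and hardware requirements (smaller Qwen coder checkpoints exist
# for machines that cannot host the 480B Mixture-of-Experts model).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-480B-A35B-Instruct"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user",
     "content": "Write a Python function that validates an email address."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Running the weights locally at least keeps prompts on-premises; as the rest of the article argues, the harder questions concern what leaves the developer’s machine and who reviews what the model writes.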

Security risks associated with widespread adoption

Cybersecurity experts are voicing concerns about the use of Qwen3-Coder by Western developers. Jurgita Lapienyė, editor-in-chief at Cybernews, warns that the tool could transform the way software is designed while posing invisible threats. By adopting such models, the industry risks “sleepwalking” into a future where vulnerable code underpins critical systems.

Recent analyses of AI adoption in large US companies show that 327 firms in the S&P 500 are integrating artificial intelligence tools, and around 1,000 vulnerabilities associated with these technologies have already been identified. The concern is that a new AI model could introduce subtle weaknesses of its own, worsening an already fragile security picture.

A Trojan horse in open code?

The question arises: is Qwen3-Coder a convenience or a potential danger? Lapienyė invokes the notion of a “Trojan horse” in the context of open-source software: the emphasis on technical performance may mask fundamental security problems, and Alibaba’s AI model could well be exploited maliciously in strategic environments.

The implications of using such a model are all the more serious given China’s national intelligence legislation, under which companies like Alibaba are required to cooperate with government requests. This raises major concerns about data protection and the security of the generated code.

Data exposure: an open door to malicious access

Adopting Qwen3-Coder presents a serious risk of data exposure. Each interaction with the tool could reveal sensitive information such as proprietary algorithms and security logic. Although the model is open source, opacity persists around the underlying infrastructure and how requests are tracked.

This lack of transparency about where the data travels complicates developers’ work. Crucial details can escape the designers’ attention, paving the way for unsuspected security issues.
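One practical mitigation, whichever coding assistant a team uses, is to scrub obvious secrets from prompts before they leave the developer’s machine. The sketch below is only illustrative: the regular expressions are far from exhaustive, and the `send_to_assistant` function is a hypothetical placeholder rather than part of any Qwen3-Coder API.

```python
import re

# Minimal sketch: redact likely credentials before a prompt leaves the machine.
# The patterns are illustrative; a real pipeline would use a dedicated secret
# scanner and an allow-list of files that may be shared with an assistant.
SECRET_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<REDACTED_AWS_KEY>"),
    (re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"),
     "<REDACTED_PRIVATE_KEY>"),
]

def redact(prompt: str) -> str:
    """Replace likely credentials in a prompt with placeholder tokens."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

def send_to_assistant(prompt: str) -> str:
    # Hypothetical client call (local model, hosted endpoint, IDE plugin);
    # only the redacted prompt should ever be sent.
    raise NotImplementedError

if __name__ == "__main__":
    raw = "Fix this config: api_key = sk-live-123456 and region us-east-1"
    print(redact(raw))  # Fix this config: api_key=<REDACTED> and region us-east-1
```

Redaction of this kind addresses only the most visible leaks; it does nothing about the architectural details a model can infer from the code it is asked to complete.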

Autonomy of AI models: towards unbridled power?

Alibaba also promotes autonomous AI models capable of performing tasks without direct human intervention. While this may improve efficiency, it also opens the door to abuse: an agent that can modify entire codebases without supervision represents a significant, even explosive, risk.

This autonomy raises defensive concerns as well. An agent that understands a company’s systems could be used to craft highly targeted attacks, and the same innovations could, in the wrong hands, help cybercriminals infiltrate critical systems far more quickly.

Insufficient regulation and gaps in oversight

Current regulation does not appear adequate for assessing tools like Qwen3-Coder. While the U.S. government has focused on privacy concerns around applications like TikTok, little effort has gone into scrutinizing AI models developed abroad.

Agencies like the Committee on Foreign Investment in the United States (CFIUS) review acquisitions, but no similar procedure exists for AI models. Biden’s executive order on AI focuses on domestic models, overlooking concerns about imported tools integrated into sensitive environments.

The next challenge to address

To limit exposure to these risks, organizations working with sensitive systems should reconsider integrating Qwen3-Coder, and the same caution applies to other AI models developed abroad. The question arises: if one would not welcome an intruder into one’s room, why give an AI model access to one’s source code?

Security tools must evolve. Static analysis software does not always detect the complex backdoors that AI systems can create. The industry needs to develop new tools specifically dedicated to identifying and reviewing AI-generated code.
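As a rough illustration of what such review tooling might look like, the sketch below walks the abstract syntax tree of an AI-generated Python snippet and flags constructs that warrant human scrutiny. The rule lists are hypothetical starting points, not a substitute for a real static analyzer or a security review.

```python
import ast

# Minimal sketch: flag constructs in AI-generated Python that deserve review.
# The rules are illustrative; real tooling would track data flow, inspect
# dependencies, and compare findings against the project's security policy.
SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__"}
SUSPICIOUS_MODULES = {"subprocess", "socket", "ctypes", "pickle"}

def review(generated_code: str) -> list[str]:
    """Return human-readable findings for a block of generated code."""
    findings = []
    tree = ast.parse(generated_code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name.split(".")[0] for alias in node.names]
            if isinstance(node, ast.ImportFrom) and node.module:
                names.append(node.module.split(".")[0])
            flagged = SUSPICIOUS_MODULES.intersection(names)
            if flagged:
                findings.append(
                    f"line {node.lineno}: import of {', '.join(sorted(flagged))}"
                )
    return findings

if __name__ == "__main__":
    snippet = "import subprocess\nsubprocess.run(['curl', 'http://example.com'])"
    for finding in review(snippet):
        print(finding)  # line 1: import of subprocess
```

A check like this catches only the most obvious patterns; the article’s point is that deliberately camouflaged, AI-generated backdoors are exactly what today’s static tools tend to miss.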

Industry stakeholders, from developers to regulators, must understand that code-generating AIs are not neutral. These systems hold significant power both as practical tools and potential threats.

Frequently Asked Questions about Alibaba’s AI-powered coding tool and its security concerns

What are the main security concerns related to the use of Qwen3-Coder?
The major concerns relate to the possibility that this tool introduces vulnerabilities into the code, which could be exploited by attackers. Additionally, there is concern about the exposure of sensitive data when using this type of AI, especially due to China’s national security laws.

How could Qwen3-Coder change computer systems without permission?
Qwen3-Coder is designed to operate autonomously, meaning it can write and correct code without direct supervision. This poses the risk of unauthorized changes being made to critical system code, potentially for malicious purposes.

What types of data can be exposed when using this AI model?
Interactions with Qwen3-Coder can reveal sensitive information such as proprietary algorithms, security logic, and infrastructure designs, which represents a risk to national security when the tool is used in sensitive sectors.

Why is it dangerous to use AI tools developed under China’s national security laws?
Companies like Alibaba are required to cooperate with the Chinese government on national security issues. Therefore, AI models like Qwen3-Coder could be manipulated to integrate vulnerabilities that are difficult to detect and understand, representing an additional risk for systems built with this code.

What precautions should developers take before integrating Qwen3-Coder into their workflows?
Developers should suspend the integration of Qwen3-Coder into sensitive systems until a thorough risk assessment has been performed. They must evaluate the tool’s provenance and its data management and security practices.

How does the tech community view the performance of Qwen3-Coder compared to other Western models?
Although Alibaba touts Qwen3-Coder’s strong performance, some experts believe this performance may overshadow the security concerns surrounding it. The emphasis on productivity may divert attention from the real risks associated with its use.

Do current regulations address the risks posed by Qwen3-Coder?
No, existing regulations do not adequately address tools like Qwen3-Coder. Currently, there is little public oversight over AI models developed abroad that could pose threats to national security.

What impact could Qwen3-Coder have on technology supply chains?
The use of Qwen3-Coder may render technology supply chains vulnerable by integrating flaws into the code that could be exploited by attackers, thereby compromising the security of critical systems.
