Eighteen countries, including the United States, the United Kingdom, Germany, Italy, Australia, and Singapore, have just concluded an agreement aimed at making artificial intelligence (AI) systems secure by design. It is a global first: international guidance dedicated to developing AI systems that are secure from their inception and to protecting these technologies against malicious actors.
An international agreement to strengthen the cybersecurity of artificial intelligence
According to Jen Easterly, Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), countries must recognize that AI development requires a security-first approach. It is the first time a consensus has been reached that, in the field of AI, security should take precedence over functionality, speed to market, or cost reduction.
The founding texts of a new era of cybersecurity
The UK National Cyber Security Centre (NCSC) regards these initial texts as the foundation of future regulation. Their ambition is to help developers make informed cybersecurity decisions at every stage of development, so that security becomes a prerequisite for any artificial intelligence system and sits at the center of the work on every component throughout the process. The launch of the initiative brings together around a hundred key partners from industry, government, and international agencies, including Microsoft, the Alan Turing Institute, and the cybersecurity agencies of the UK, the US, Canada, and Germany.
A collective commitment to ensure secure AI development
On Sunday, November 26, the United States, the United Kingdom, France, and fifteen other countries signed this agreement to strengthen the cybersecurity of artificial intelligence. It is the first detailed international agreement aimed at protecting AI from malicious actors and at encouraging companies to build AI systems that are secure from the design stage onward. To support it, CISA and the NCSC have released new guidelines on secure AI system development, intended to guide developers' cybersecurity choices throughout the development lifecycle.
Recommendations to ensure security in the AI field
These guidelines, signed by agencies from 18 countries, aim to raise the level of cybersecurity in artificial intelligence and thus help ensure that it is designed, developed, and deployed safely. Jen Easterly emphasizes that their publication marks “a key milestone in our collective commitment – from governments around the world – to ensure secure development and deployment of AI capabilities from their inception.” The agreement is not binding: it consists mainly of general recommendations, such as monitoring AI systems for abuse, protecting data against tampering, and vetting software suppliers.
An unprecedented international declaration to anticipate AI risks
The agreement also addresses the question of how to prevent AI technology from being hijacked by hackers, recommending, for example, that models be released only after appropriate security testing. It follows the Bletchley Declaration, signed earlier in November by 28 countries, including the United States, the United Kingdom, EU member states, and China, which aims to anticipate the potential risks posed by artificial intelligence. The Bletchley Declaration represents the first international declaration of its kind on AI, with all signatories agreeing that the technology could pose potentially catastrophic risks to humanity.
As artificial intelligence takes an ever more important place in society, this international agreement and the measures announced represent a significant step toward ensuring the secure and responsible development of a crucial technology. Cooperation among countries and stakeholders will have to continue if the risks inherent in the growing use of AI across so many fields are to be effectively contained.