The creation of a new national institute to monitor and anticipate risks related to artificial intelligence represents a major step for France. This initiative addresses crucial societal challenges, combining technological innovation and regulatory vigilance.
The National Institute for the Evaluation and Security of Artificial Intelligence (Inesia) is tasked with assessing the technology's potential impacts on society.
Experts agree that the *anticipation of risks* related to AI is essential. The institute's remit goes beyond mere monitoring: it also aims to *bring together stakeholders* in the sector around an ethical and responsible framework.
The creation of the National Institute for the Evaluation and Security of Artificial Intelligence
France announced on January 31, 2025, the creation of a new public agency dedicated to the monitoring of artificial intelligence (AI). This institute, named the National Institute for the Evaluation and Security of Artificial Intelligence (Inesia), has the primary mission of anticipating the risks inherent in this emerging technology.
A global context favorable to AI
This announcement comes just days before the global summit “for action on artificial intelligence,” which will take place on February 10 and 11 in Paris. The French government considers it urgent to act in a field where security and ethical issues are becoming increasingly pressing.
The objectives of Inesia
Inesia will be tasked with scientifically studying the effects of AI, particularly regarding security. The government specifies that the institute will not be a regulator per se, but an organization that will unite a national ecosystem of stakeholders around the issues of evaluation and security of AI technologies.
Inter-institutional collaboration
The institute brings together four existing administrations: the National Agency for the Security of Information Systems (Anssi), the National Institute for Research in Computer Science and Automation (Inria), the National Laboratory for Metrology and Testing (LNE), and the Digital Regulation Expertise Hub (PEReN). This synergy aims to strengthen France’s ability to anticipate future technological challenges.
A model based on international collaboration
The French government aligns itself with the Seoul Declaration, adopted in May 2024, which aims to establish norms for safe, innovative, and inclusive AI and is supported by the European Union and several countries at the forefront of new technologies.
Tools for assessing the impact of AI
A tool, already online, allows users to assess the environmental impact of a query to AI systems such as ChatGPT or Gemini. This initiative reflects the institute's commitment to providing a refined and contextualized analysis of new digital tools.
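The kind of estimate such a tool produces can be sketched with simple arithmetic. The sketch below is illustrative only: the per-query energy figure and the grid carbon intensity are hypothetical assumptions, not values published by Inesia or by any specific tool.

```python
# Hypothetical back-of-the-envelope estimate of the carbon footprint of
# AI queries. Both constants below are illustrative assumptions.

PER_QUERY_WH = 3.0         # assumed energy per LLM query, in watt-hours
GRID_G_CO2_PER_KWH = 50.0  # assumed grid carbon intensity, in gCO2/kWh


def query_footprint_g_co2(n_queries: int) -> float:
    """Estimated grams of CO2 emitted by n_queries under the assumptions above."""
    kwh = n_queries * PER_QUERY_WH / 1000.0
    return kwh * GRID_G_CO2_PER_KWH


if __name__ == "__main__":
    # 1,000 queries at 3 Wh each on a 50 gCO2/kWh grid -> 150 g of CO2.
    print(query_footprint_g_co2(1000))
```

Real tools refine both inputs (model size, hardware, data-center location, time of day), but the structure of the calculation — energy per query times carbon intensity of the electricity — stays the same.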
A commitment to responsible innovation
The government encourages support for innovation in the field of AI while emphasizing the necessity to anticipate the associated risks. Such an approach aims to ensure that technological developments occur within a framework of trust and security.
Implications on the ground
This initiative could also influence the dynamics of regulation in France. By anticipating the challenges posed by AI, Inesia could play a crucial role in protecting citizens and the environment while facilitating the emergence of responsible AI.
Consistency with European and international discussions
Discussions around AI regulation in Europe are ubiquitous and resonate with the creation of Inesia. The challenges of cybersecurity, ethics, and data protection continue to fuel debates on a global scale.
Experts emphasize the need to develop defense strategies tailored to the threats generated by AI, in order to preserve the security of critical infrastructure.
A promising future for AI in France
With the creation of Inesia, France positions itself on a strategic ground integrated into international thinking. Its actions will allow for a better understanding of the risks and opportunities presented by artificial intelligence.
Frequently Asked Questions about the National Institute for the Evaluation and Security of Artificial Intelligence
What is the main objective of the National Institute for the Evaluation and Security of Artificial Intelligence (Inesia)?
The main mission of Inesia is to anticipate and assess the risks associated with artificial intelligence to ensure a responsible and secure use of this technology.
When was Inesia created and what is its official launch date?
Inesia was created on January 31, 2025, in a context where the importance of regulation and security of artificial intelligence is increasingly recognized.
Who will supervise Inesia and which institutions participate?
Inesia will be led by the Secretariat-General for Defence and National Security (SGDSN) with the assistance of the General Directorate for Enterprises (DGE), and will bring together actors from several existing administrations, such as Anssi and Inria.
What types of risks is Inesia tasked with anticipating?
The institute is tasked with identifying potential risks associated with the use of artificial intelligence, particularly regarding security, ethics, and environmental impact.
Will Inesia act as a regulator of artificial intelligence?
No, Inesia will not have regulatory powers. Its role will be more to scientifically study the effects of AI and provide recommendations without implementing binding rules.
How will Inesia collaborate with other countries on AI-related issues?
The institute's work is aligned with the Seoul Declaration for safe AI, which involves international collaboration with countries that are pioneers in digital technologies.
What is the expected impact of Inesia on innovation in artificial intelligence in France?
Inesia aims to support innovation by creating a collaborative ecosystem among different stakeholders in the field while anticipating risks to ensure responsible development of AI.
What tools or initiatives will Inesia implement to assess the environmental impact of AI?
Inesia will rely on dedicated tools to measure the environmental impact of queries on AI platforms such as ChatGPT, in order to ensure transparency about the environmental effects of these technologies.
How will the public be able to access information produced by Inesia?
The public will be able to access various reports and studies published by Inesia, aiming to inform about the developments and evaluations of risks related to artificial intelligence.
How does this institute represent a change for France in the field of AI?
Inesia marks a strong commitment by France to take preventive measures on artificial intelligence, combining stricter governance and a security-focused approach while continuing to promote innovation.