A radical change begins with the European AI Act. The regulation imposes increased transparency obligations on artificial intelligence systems such as ChatGPT. Companies must adapt to tighter requirements, yet the impact on users remains minimal: reporting and security obligations will reshape the AI landscape without disrupting users' daily experience. And while the technology keeps advancing, providers may run into administrative complexity, as expectations of higher-quality AI systems collide with the opacity of regulatory processes.
The EU AI Act
Starting on August 2, 2025, the regulation known as the AI Act imposes obligations on general-purpose artificial intelligence systems such as ChatGPT, DALL-E, and Google Gemini. Adopted by the European Union in March 2024, the text aims to establish a legal framework ensuring that AI systems are reliable, secure, and innovation-friendly.
New Transparency Imperatives
Chatbot providers will need to meet transparency requirements, including maintaining technical documentation that must be made available to authorities on request. A brief summary of the model and how it works must also be published. While this transparency obligation is a step forward, the information actually shared will remain limited.
Implications for Copyright
AI models will also need to comply with copyright rules: training data may not be used where rights holders have objected to its use. This is a notable shift, as many of these systems were initially trained in disregard of copyright.
Providers will have to declare their data sources, specifying whether they come from public datasets, private databases, or synthetic data. Publishing a summary list of sources will be mandatory, but the required level of detail remains low, leaving many ambiguities.
Sanctions Regime
Non-compliance will be sanctioned: offenders risk fines of up to 15 million euros or 3% of their global annual revenue. The effective application of these penalties could, however, be softened by a one-year grace period currently under discussion.
Marginal Impact for Users
Users will see little direct benefit from these rules. The regulation is aimed primarily at businesses, leaving users with only minimal adjustments to their daily experience. Changes to interfaces, such as labeling AI-generated content, will not be visible before August 2026.
The main risk is that the AI Act becomes a mere bureaucratic compliance exercise. Noticeable improvements in the quality of AI systems may take years to materialize, and expectations of concrete gains remain modest.
Consequences for Creators of Works
Rights holders who object to the use of their works for training AI will see little change in their situation. The information made available will remain insufficient to reshape the relationship between rights holders and AI system providers. The regulation does, however, provide for a contact point to facilitate exchanges, even if this guarantees no useful response.
In Conclusion
The repercussions of the AI Act reveal a delicate balance between innovation and user protection. The framework it establishes could quickly become a regulatory box-ticking exercise with no tangible effect on the general public. Whether AI systems actually become more trustworthy now depends on rigorous enforcement and on market players' willingness to go beyond minimal compliance.
Frequently Asked Questions about the EU AI Act and its Impact on ChatGPT
What are the main obligations imposed by the AI Act for ChatGPT?
As of August 2, 2025, ChatGPT's provider must comply with transparency obligations, such as maintaining technical documentation and publishing a summary of how the model works, while respecting copyright.
Will the AI Act change how users interact with ChatGPT?
For users, the changes will be minimal. It is expected that indications about AI-generated content will only appear starting in August 2026.
What information will be publicly available due to the AI Act?
Public information will include summaries of how the models operate, but these will not provide crucial details about their training or their data.
How does the EU AI Act affect copyright related to the use of ChatGPT?
Providers of models such as ChatGPT will need to respect copyright, which includes declaring the data sources used for training; this does not, however, ensure full transparency for rights holders.
Will there be consequences for ChatGPT in case of non-compliance with the AI Act?
Yes. Financial penalties of up to 15 million euros or 3% of the provider's global revenue may be imposed, but the enforcement of these sanctions may take time.
Will users of ChatGPT benefit from increased security due to the AI Act?
Security requirements will be strengthened for models presenting systemic risk, but this does not directly translate into greater security for ordinary users.
Why won’t users see immediate changes in their experience with ChatGPT?
Concrete changes to the interface and features will be introduced gradually after the AI Act takes effect, meaning the impact on the user experience will not be felt before August 2026.
What is considered ‘systemic risks’ for AI models like ChatGPT?
"Systemic risk" designates models whose capabilities could lead to significant societal impacts, such as biased or harmful decisions; these models are subject to stricter safety and risk-mitigation requirements.