On October 26, 2023, OpenAI announced the creation of a new unit tasked with monitoring its artificial intelligence models. Named Preparedness, this team will be responsible for anticipating the catastrophic risks that could arise from the malicious use of AI. By assessing the safety and robustness of generative AI applications, OpenAI hopes to limit the potential pitfalls of its tools.
Rumors about the development of GPT-5: towards general artificial intelligence?
Despite the current attention on the GPT-4 model, a leak on Twitter suggests that OpenAI is secretly developing GPT-5. Reportedly scheduled for completion in December of this year, this new model would aim at Artificial General Intelligence (AGI): a system capable of matching the human brain across all cognitive tasks. The rumor rekindles the debate on the ethics and stakes surrounding future developments in AI.
Anthropic and Google: the competitors who refuse to be left behind
Meanwhile, the American startup Anthropic is continuing its efforts to catch up to OpenAI and its generative technology ChatGPT. The young company has recently managed to double its funding thanks to the support of Google, which has committed to investing $2 billion in the project. The operation takes place in two phases: $500 million in immediate investment, followed by $1.5 billion in the coming months in the form of convertible debt. This maneuver aims to strengthen Anthropic's competitive position and counter OpenAI's lead.
Detecting fraud: a tool to recognize texts generated by ChatGPT
Faced with the ethical and technical challenges raised by its models, OpenAI also aims to develop solutions for detecting potential fraud involving ChatGPT, particularly in education. Verifying work produced with text generators becomes extremely difficult when there is no reliable way to tell whether an AI is behind the content.
To address this issue, OpenAI has presented a new tool designed to identify texts written by ChatGPT, so that risks of fraud and other malicious uses can be anticipated and limited. These initiatives nevertheless highlight the importance of considering, in advance, the ethical and social consequences of developing artificial intelligence technologies.
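OpenAI has not disclosed how its detection tool works, so as a purely illustrative toy (not OpenAI's method), here is a minimal sketch of one statistical signal sometimes discussed for spotting machine-generated prose: "burstiness", the variation in sentence length. Human writing tends to mix short and long sentences, while generated text is often more uniform. The function name and the threshold are assumptions for illustration only.

```python
import re
import statistics


def burstiness(text: str) -> float:
    """Return the coefficient of variation of sentence lengths (in words).

    A toy heuristic: lower values mean more uniform sentences, which some
    informal analyses associate with machine-generated text. Not a reliable
    detector on its own.
    """
    # Naively split on sentence-ending punctuation.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths) / statistics.mean(lengths)


# Perfectly uniform sentences score 0.0; varied ones score higher.
uniform = "One two three four. One two three four. One two three four."
varied = "Hi. This is a much longer sentence with many words indeed. Ok."
print(burstiness(uniform))  # 0.0
print(burstiness(varied) > burstiness(uniform))  # True
```

Real classifiers rely on far richer signals (e.g., model-based likelihood of the text), which is why simple heuristics like this one produce many false positives and are unsuitable for high-stakes decisions such as accusing a student of fraud.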
The challenges of OpenAI: between technological innovation and social responsibility
Beyond the technical prowess of the models developed by OpenAI, it is essential to pay close attention to the ethical issues that accompany this progress. Research on artificial intelligence raises crucial questions about data protection, algorithmic bias, decision-making autonomy, and the oversight and control of AI applications.
From this perspective, initiatives such as the creation of the Preparedness team or the development of anti-fraud tools allow for more effective oversight of technological developments and their applications. However, collaboration among public, private, and civil-society actors must be strengthened in order to establish a harmonized regulatory framework at the international level, capable of addressing the future challenges of artificial intelligence.