The American start-up Anthropic, which seeks to develop safer and more transparent artificial intelligence, recently raised $2 billion from Google. This massive injection of funds into the rapidly growing company highlights the technology giants' increasing desire to advance the field of artificial intelligence while minimizing its inherent risks. The key question remains whether these investments, and the precautions taken by companies like Anthropic to control the dangers associated with AI, will be enough to keep us safe in the face of exponential progress at the cutting edge of AI.
The emergence of ChatGPT and its consequences for the future of AI
Recent advances in conversational systems such as ChatGPT, at heart a predictive text generator, have produced both enthusiasm and concern. Capable of writing speeches, planning vacations, and holding conversations as well as, or even better than, humans, the technology is celebrated for its versatility and power. At the same time, however, questions arise about the potentially destructive capabilities of cutting-edge AI and about how we will manage to master these tools.
Sounding the alarm
Major voices, from the British government to the large AI companies themselves, are already sounding the alarm about the unknown dangers of cutting-edge AI. These concerns relate in particular to the possibility of malicious actors hijacking artificial intelligence systems for harmful purposes, from misinformation to mass manipulation. Advances in this field are inevitable, but it is imperative to ensure that they do not pose a threat to humanity.
Anthropic’s approach: caution and transparency
In this regard, Anthropic aims not only to refine the capabilities of AI systems but also to ensure their safety through a robust research and development framework. One of its priorities is to create AI systems that can give clear, accurate, and understandable explanations for their actions, so that humans retain greater decision-making authority and judgment errors based solely on machine predictions are kept in check. The substantial funding received from Google should allow Anthropic to make rapid progress in its research and to build stronger control measures into its models. However, this massive investment also raises questions about the responsibility large technology companies bear in developing artificial intelligence, and about the dangers that could arise from it.
A fierce race for AI supremacy
It is undeniable that a race has begun among the technology giants, all eager to bring ever more advanced and capable AI systems to market. Such intense competition can encourage reckless risk-taking, or the neglect of certain precautions in favor of development speed. It is therefore essential that not only Anthropic but also competitors such as OpenAI and other industry players remain vigilant about the potential dangers of their work.
Collective responsibility in the face of the rise of artificial intelligence
Ultimately, it is the responsibility of all stakeholders, including start-ups, governments, technology companies, and researchers, to carefully examine the implications of their actions in creating cutting-edge AI. Although investments such as Google's in Anthropic are beneficial for research and, we hope, for managing the risks associated with artificial intelligence, the effort cannot stop there. Everyone must take their share of responsibility and ensure that advances in AI truly serve the interests of humanity while limiting collateral damage. Future developments will need to be accompanied by ongoing dialogue and an exchange of ideas among key stakeholders. The success of artificial intelligence will depend largely on our collective ability to cooperate, to innovate, and to place ethics at the heart of our decisions for the future.