The British government's failure to register its AI systems raises sharp questions about transparency and accountability. Experts are divided over the absence of oversight in a sector where these technologies already influence millions of lives. Authorities must act quickly to establish a clear framework before the situation escalates into a full-blown crisis. The consequences of such an omission touch crucial areas such as public safety, individual rights, and resource management, making registration essential to an informed adoption of artificial intelligence in public services.
Absence of registration of AI systems
No Whitehall department has registered its use of artificial intelligence (AI) systems since the government announced that doing so would become mandatory. This has raised concerns, with some warning that the public sector is operating blindly as it deploys algorithmic technologies affecting millions of lives.
Current use of AI
AI already informs government decisions on benefit payments, immigration enforcement, and many other critical issues. Public agencies have awarded contracts for AI-related services, including a facial recognition contract worth up to £20 million, rekindling fears of mass biometric surveillance.
Limited registration of algorithmic systems
To date, only nine algorithmic systems have been submitted to a public register, while many AI programs used in the welfare system, within the Home Office, or by police forces remain undeclared. The gap is all the more concerning because the government stated in February that use of the register would become a requirement for all departments.
Risks associated with unregulated adoption
Experts warn of the dangers of uncritical AI adoption. Recent examples show that computer systems can fail to deliver the expected results, most notoriously the Post Office's Horizon software. The specter of poor execution also looms over AI tools currently in trial phases.
Call for transparency
Peter Kyle, the Secretary of State for Science and Technology, acknowledged that the public sector has not taken transparency around the government's use of algorithms seriously enough. If the government uses these technologies, he said, the public has a right to know how they are applied.
Concerns raised by rights advocacy groups
Big Brother Watch, a privacy rights advocacy group, said that awarding the facial recognition contract for police use, despite parliamentarians' warnings about the lack of regulation, underscores the government's opacity around AI. Algorithmic technologies deployed in secret can compromise citizens' data rights.
Consequences of a lack of transparency
The Ada Lovelace Institute recently warned that while AI systems can streamline administration, they can also erode public trust and diminish their own benefits if the predictions they generate are discriminatory or ineffective.
Status of the national register
Since the end of 2022, only three algorithms have been recorded on the national register: a system used by the Cabinet Office to identify digital records, an AI-powered camera that analyzes pedestrian crossings in Cambridge, and a system that analyzes patient feedback on NHS services.
Recently awarded AI contracts
According to Tussell, 164 contracts mentioning AI have been signed with public agencies since February. Technology companies such as Microsoft and Meta are vigorously promoting their AI systems across government, and Google Cloud has funded a report suggesting that wider use of generative AI could save the public sector up to £38 billion by 2030.
Examples of government use of AI
Department for Work and Pensions
The Department for Work and Pensions (DWP) uses generative AI to analyze more than 20,000 documents a day, summarizing relevant correspondence so that crucial information can be shared with decision-makers.
Home Office
The Home Office uses an AI-powered system in immigration casework. Perceived by some as a "robot assistant," it influences decisions on returning individuals to their country of origin. Although the government describes it as rule-based, the system raises concerns about whether a human remains accountable for each decision.
Police use
Police forces use facial recognition software to identify suspects, raising concerns that British streets are being turned into zones of pervasive technological surveillance.
Collaboration with NHS
NHS England has awarded a £330 million contract to Palantir to build a new data platform. The partnership has raised concerns about patient privacy, although Palantir insists that control of the data remains with its clients.
Development of chatbots
An AI-powered chatbot is being trialed to help users navigate the gov.uk website. The project, developed by the Government Digital Service, uses OpenAI's ChatGPT technology.
Frequently asked questions about government registration of AI
Why hasn’t the British government registered the use of AI on the mandatory register?
The government announced that registration would be mandatory, but so far no department has registered any AI system, raising concerns about transparency and oversight of the algorithms used in the public sector.
What are the consequences of the lack of registration of AI systems?
Without registration, the public faces opaque governance: decisions affecting millions of citizens are made without oversight or understanding, leaving society vulnerable to bias and error in how these technologies are used.
What AI systems are currently used by the British government without registration?
Despite the many contracts awarded for AI services, only nine AI systems have been registered, leaving programs in areas such as welfare and immigration undocumented.
Which parts of the British government use AI technologies without transparency?
Several departments, including the Cabinet Office, the Home Office, and the Department for Work and Pensions, use AI systems for critical tasks such as immigration decisions and fraud checks without making these uses public.
How does transparency in the use of AI affect public trust?
Transparency is essential if the public is to feel secure about how its data is used. Without clear information, trust erodes, and with it the public's willingness to accept AI in public services.
What measures does the government plan to take to ensure the registration of AI?
The government has stated that transparency and registration of AI systems will become mandatory, but specific details of how these measures will be implemented remain vague and unrealized.
What is the experts’ opinion on the current state of AI in the public sector?
Experts stress that the lack of adequate registration poses a significant risk of unfair decisions and eroded public trust, and they are calling for greater transparency in the use of these technologies.
Are there concrete examples of problems caused by a lack of regulations on AI?
Yes. Incidents such as the Post Office's Horizon software errors illustrate the dangers of unregulated deployment, with severe consequences for the individuals affected.
What actions can the public take regarding the government’s use of AI?
The public can hold elected representatives accountable and promote legislation that requires the registration and transparency of AI systems used in the public sector.