Authors: Ellie Jones
Created: 2024-09-12
Last updated: 2024-09-18
Accountability in AI decision-making
With a new government come new beginnings. This is particularly true for AI regulation, with the new government ideally positioned to remedy the previous administration’s omissions regarding regulating public-sector AI usage. The Data Protection and Digital Information Bill has died a death, presenting the Labour government with a legislative gap to fill – and an opportunity.
While methods of regulating private sector use of AI have entered the public and political consciousness, the government has given comparatively little attention to regulating the use of AI by public authorities. The new government has the opportunity to take a proactive approach to legislation that recognises the need to consider the impact of AI and automation on its decision-making.
The previous government’s attitude to AI can be characterised as ‘light touch’, in line with the perspective that it would be unwise to hamper developers with overly onerous regulation at the expense of progress (see further Public Law Project response to the AI white paper consultation, PLP, June 2023). This approach evidently gave little consideration to the operation of automated systems from a public law perspective. A one-size-fits-all approach to AI and automation was never feasible as the basis for an all-encompassing regulatory framework, and a tailor-made approach to public sector use of AI and automation is urgently required.
A public law-inclusive legislative framework
The use of AI and automation for public sector decision-making creates specific risks for government accountability and transparency. It is paramount that the use of automated tools does not hamper the government’s accountability for the lawfulness of the decisions made. Where there is little information provided about when and how AI is used, it is difficult to identify and prevent its unlawful use.
The challenges posed by limited transparency are exemplified by the Department for Work and Pensions’ (DWP’s) universal credit advances model. The tool is a risk model that flags applications for advance payment of universal credit that it identifies as having a higher likelihood of resulting in a fraudulent claim. The factors that result in a claim being flagged are unknown. Despite concerns raised by the Public Accounts Committee over the fairness of the model (The Department for Work & Pensions annual report and accounts 2022–23. Fourth report of session 2023–24, HC 290, 6 December 2023, para 6, pages 7–8), the DWP published only a very limited amount of detail from its fairness impact assessment (‘DWP’s annual report leaves many questions about AI and automation unanswered’, PLP, 19 August 2024).
With no transparency requirements in place, it is increasingly difficult to hold decision-makers to account for unlawful actions where automated tools have informed a decision in some way.
A new path forward
Where the previous government’s AI regulation white paper (A pro-innovation approach to AI regulation, Office for Artificial Intelligence/Department for Science, Innovation and Technology, CP 815, 29 March 2023; updated 3 August 2023) neglected to implement robust accountability mechanisms for public authorities using AI, the new government can take a different path. The King’s speech 2024, although focusing on the private sector and ‘those working to develop the most powerful artificial intelligence models’, did not close the door on remedying the previous administration’s neglect.
It may be tempting for the government to press full steam ahead with streamlining administrative decision-making using automation; for a government seeking to cut public spending and increase the efficiency of public bodies, automating processes using AI could seem an attractive proposition. In pursuing this goal, however, Keir Starmer’s government also has an opportunity to legislate for AI regulation with public law principles in mind. PLP has outlined the risks of failing to address this legislative gap urgently (Mia Leslie, ‘The UK government could be sleepwalking into an AI disaster’, PublicTechnology.net, 28 September 2023).
One such step could be to honour the previous government’s commitment to publish details of automated tools used by the government’s 16 biggest departments to the Algorithmic Transparency Recording Standard (ATRS). While the ATRS was a positive step taken by the last government, the current government has let the 31 July deadline (HC Written Question UIN 24976, 7 May 2024; answered 14 May 2024) pass with no updates to the database; as of August, the only update is that details will be published ‘shortly’ (Jon Ungoed-Thomas and Yusra Abdulahi, ‘Warnings AI tools used by government on UK public are “racist and biased”’, Observer, 25 August 2024).
The ATRS, moreover, could be placed on a statutory footing and contain mandatory reporting requirements.
The government can look to Canada’s Directive on Automated Decision-Making (DADM) for an example of a public law-oriented approach to AI and automation. The DADM not only monitors and tracks the use of AI in the public sector, but also requires the publication of algorithmic impact assessments identifying the impact of each automated tool.
When automation permeates government, it affects how decisions are made at every level, from access to welfare and social care to legal protections such as asylum. The new government can ensure that principles of fairness, legality and transparency remain paramount in implementing and utilising these new technologies.