Authors: Luca Montag
Created: 2023-09-08
Last updated: 2023-09-25
Automated decision-making: risks to fairness
Automated tools are increasingly deployed by public bodies. Their appeal is self-evident: they can make quick work of laborious and repetitive tasks in a more cost- and labour-efficient way than manual processes. These unique benefits, however, come with equally unique risks: their ability to make decisions at scale means that harms intrinsic to their design or use are scaled as well.
Rules-based algorithms are innately rigid: discretion is impossible by design. Machine learning algorithms are notoriously opaque and susceptible to bias, and are known sometimes to attribute statistical relevance to facts or datapoints that no rational person would consider relevant.1
Conceivably, then, the use of an automated tool could breach various public law duties. From a procedural fairness perspective, anyone seeking to challenge breaches of these duties needs to be both aware of the role automation played and able to meaningfully understand the implications of a particular system’s design. The issues in this area can be illustrated by a closer look at Public Law Project’s (PLP’s) attempts to understand and challenge the Home Office’s use of an automated tool to triage marriage/civil partnership referrals and decide which should be investigated as potential shams (the Triage Tool).
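To make the contrast concrete, the sketch below (in Python, using entirely invented criteria and thresholds) shows the structural point: a rules-based check applies the same fixed conditions to every case, with no room for discretion, whereas a machine learning model derives its decision boundary from whatever patterns happen to exist in its training data.

```python
# Illustrative only: the criteria, names and thresholds here are invented.
# The point is structural: a rules-based check admits no discretion.

def rules_based_flag(case: dict) -> bool:
    """Apply the same fixed conditions to every case."""
    return case["criterion_a"] > 10 and case["criterion_b"]

# A case scoring 11 on criterion A is flagged; a case scoring 10 is not,
# however compelling its wider circumstances.
print(rules_based_flag({"criterion_a": 11, "criterion_b": True}))   # True
print(rules_based_flag({"criterion_a": 10, "criterion_b": True}))   # False
```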
The Triage Tool
PLP became aware of the Triage Tool’s existence in 2016 through a report from the Independent Chief Inspector of Borders and Immigration (ICIBI).2 The Triage Tool is engaged where a couple gives notice to the registrar to marry and at least one member of the couple is not exempt under the statutory scheme.3 The algorithm assigns couples a score that categorises them either as ‘GREEN’ (no further action taken) or ‘RED’ (flagging the couple for further consideration by a caseworker). Information from an equality impact assessment (EIA) disclosed in December 2020 seemed to suggest that the Triage Tool flagged some nationalities – Albanians, Bulgarians, Romanians and Greeks – more than others.
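The published information describes only the two outcome categories, not how the score is produced. Purely as an illustration, a triage step of this kind can be thought of as a score compared against a cut-off, as in the sketch below (the threshold value and the example scores are invented).

```python
# Illustrative only: the real scoring model and threshold are not public.

RED_THRESHOLD = 0.5  # hypothetical cut-off


def triage(risk_score: float) -> str:
    """Map a risk score to the two published outcome categories."""
    return "RED" if risk_score >= RED_THRESHOLD else "GREEN"


print(triage(0.72))  # RED: flagged for further consideration by a caseworker
print(triage(0.18))  # GREEN: no further action taken
```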
It took years to build a working understanding of the Triage Tool. From 2020 to 2022, PLP issued and escalated several Freedom of Information (FoI) requests to understand the tool’s operation and associated risks. The EIA revealed that the tool relied on eight criteria, most of which were redacted. The two that were disclosed were the age difference between the couple and shared travel arrangements. Disclosure of the other criteria was consistently refused. The EIA also seemed to indicate that manual review was not conducted for every ‘RED’ referral prior to investigation.
PLP requested an internal review of its February 2021 FoI request; the response arrived nearly eight months later. It revealed that the Triage Tool was a machine learning algorithm, specifically a random forest classifier, and that bias in its historic training dataset was an ongoing concern. The final FoI request was not answered until October 2022.
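The FoI response names the model type but discloses neither its features nor its training data. The sketch below, using scikit-learn and wholly invented data, illustrates why bias in a historic training dataset is a recognised concern for a random forest classifier: the model reproduces whatever patterns – legitimate or not – are present in the labels it is trained on.

```python
# Sketch of the model type named in the FoI response (a random forest
# classifier). The features, data and group labels are invented; the point
# is that the model learns from historic labels, so any skew in past
# flagging decisions is carried forward into new predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical features: [age difference in years, shared travel (0/1),
# nationality group (coded 0-4)].
X = np.column_stack([
    rng.integers(0, 40, n),
    rng.integers(0, 2, n),
    rng.integers(0, 5, n),
])

# Hypothetical historic labels: group 3 was flagged disproportionately in
# the past, so that skew is baked into the training data...
y = ((X[:, 2] == 3) | (rng.random(n) < 0.05)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# ...and the trained forest reproduces it when scoring otherwise identical
# new referrals.
print(model.predict_proba([[5, 1, 3]])[0, 1])  # high flagging probability
print(model.predict_proba([[5, 1, 1]])[0, 1])  # low flagging probability
```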
In early 2023, PLP engaged the Home Office in pre-action correspondence. This revealed that the tool now used six criteria instead of eight; it remains unclear whether the previously identified criteria are still in use. The correspondence also revealed that manual review had been mandatory in deciding whether to investigate every ‘RED’ referral since ‘at least the end of 2021’.
Obstacles to accountability
The EIA and the responses to PLP’s FoI requests illustrate three key challenges when seeking to investigate or challenge the use of automated tools by public bodies. First, the subjects of automation-assisted decisions are often simply given the tail-end of the process – such as a letter notifying them that their marriage is being investigated or that a sanction is being applied to their benefits – leaving them without any knowledge that automation played a role in the decision.
Second, even when an individual is aware that automation is involved, PLP’s investigation into the Triage Tool demonstrates that obtaining information about the automated tool can take years.
Third, the investigation illustrates how evidence about automated tools has a short shelf-life. PLP was not told when or why the criteria were changed: information about an automated tool is accurate only until the next software update. Moreover, hard-won information about manual review was rendered stale by an internal procedural change. Paired with the length of time required to build a workable understanding of such tools and their deployment, this makes the environment for effective accountability particularly unforgiving.
The use of automation in public administration therefore presents something of a moving target. The question we are left with is: what does procedural fairness look like in the digital age? Public law practitioners and scholars will increasingly have to grapple with the finer points this question raises as automated decision-making becomes a standard tool in the governmental toolkit.
 
1     Mind the gap: how to fill the equality and AI accountability gap in an automated world, Institute for the Future of Work, October 2020, pages 12–18.
2     The implementation of the 2014 ‘hostile environment’ provisions for tackling sham marriage: August to September 2016, ICIBI, December 2016, page 17, para 7.5.
3     Immigration Act 2014 s49.