The Techno-Judiciary: Sentencing in the Age of Artificial Intelligence

By Harry Grindle

In the era of artificial intelligence (AI), algorithmic technologies are everywhere. AI’s emergence into the field of law has been rapid, and there is no doubt that machine learning algorithms have begun to influence sentencing decisions in England and Wales. Computer-assisted sentencing is already in the courtroom, offering informational aids to assist judges at sentencing.1 For example, across the UK, algorithms are frequently used to help assess an offender’s risk of reoffending. The Offender Assessment System (OASys) and the Offender Group Reconviction Scale (OGRS) are used for risk assessment and reoffending prediction, combining risk-assessment algorithms with human assessment to influence sentencing.2 AI is even used as an aid for writing legal opinions. In official guidance for Judicial Office Holders from the Courts and Tribunals Judiciary, released in December 2023,3 potential uses for AI included:

  • Summarising large bodies of text.
  • Writing presentations (e.g., providing suggestions for topics to cover).
  • Performing administrative tasks, such as composing emails and memoranda.
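Actuarial tools of the kind mentioned above are, broadly speaking, statistical models that map a small number of static factors to a predicted probability of reconviction. The sketch below is a purely illustrative toy in that spirit; the logistic form, the chosen factors, and every weight are assumptions for exposition, not the real OGRS formula or its coefficients.

```python
import math

# Toy actuarial risk score in the general style of tools such as OGRS.
# The factors and weights below are INVENTED for illustration; they are
# not the real OGRS model.

def reconviction_probability(age, previous_convictions, intercept=-2.0,
                             w_age=-0.03, w_prev=0.25):
    """Illustrative two-year reconviction probability from static factors."""
    score = intercept + w_age * age + w_prev * previous_convictions
    return 1 / (1 + math.exp(-score))  # logistic link: score -> probability

# Under these invented weights, a younger offender with more prior
# convictions receives a higher predicted risk:
print(reconviction_probability(age=22, previous_convictions=6))  # ≈ 0.24
print(reconviction_probability(age=45, previous_convictions=1))  # ≈ 0.04
```

The point of the sketch is that such tools are not inscrutable in principle: the output is a fixed arithmetic function of a handful of inputs, which is precisely why transparency about factors and weights is a reasonable demand.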

Yet, as AI programmes increasingly come to the forefront of sentencing-process discussions, we must consider how the pursuit of efficiency may impinge on perceived judicial fairness.

 

1. AI’s Potential Benefits for Sentencing Practices

i) The Possibility of Neutrality and Consistency

With AI models utilising data-driven algorithms, an exciting theoretical possibility emerges: computers could make neutral and objective decisions, immune to human subconscious bias.4 Seemingly untainted by human error and subconscious bias, new legal technologies can draw upon vast databases to propose sentence recommendations or predict the likelihood of criminal recidivism. The techno-judge prides itself on consistency and impartial decision-making. The promise of objective data analysis is certainly an appealing one.

ii) Gains in Courtroom Efficiency

AI natural language processing programmes can sift through large volumes of legal text, helping to speed up courtroom procedures. AI-generated efficiency gains therefore have the potential to ameliorate the criminal justice system’s substantial congestion.5 For courts already faced with overwhelming caseloads, efficient legal tech makes a theoretically attractive addition to the courtroom’s toolbox. With backlog data showing over 66,500 outstanding cases in the Crown Court and 347,820 outstanding cases in the Magistrates’ Court in 2023,6 7 it is perhaps no surprise that judges in England and Wales have been given approval by the Judicial Office to use AI to boost productivity.

 

2. A Double-Edged Sword

i) Issues with Training Algorithms

However, such optimistic imaginings do not quite match practical reality. Algorithmic technologies that supplement legal decision-makers rely on data, and such systems are only as sound as the data they consume. If a dataset contains existing biases, AI systems may reinforce those disparities, allowing issues such as the disproportionately lengthy sentencing of minorities to continue under the guise of machine neutrality.8 Biased datasets may lead to discriminatory algorithmic predictions. It may therefore come as no surprise that in the United States, “judges relying on artificial intelligence tools to determine criminal sentences handed down markedly less jail time in tens of thousands of cases, but also appeared to discriminate against Black offenders despite the algorithms’ promised objectivity”.9 As AI’s centrality in UK sentencing procedures continues to grow, this American example should serve as a warning of the dangers associated with problematic datasets.
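The data-bias mechanism described above can be made concrete with a small synthetic simulation: if one group’s reoffending is more likely to be *recorded* (say, through heavier policing), a model that simply learns rates from historical labels will score that group as higher risk even when true behaviour is identical across groups. Everything below (the groups, rates, and detection fractions) is invented purely for illustration.

```python
import random

# Hypothetical illustration: a naive "risk" model trained on biased
# historical records reproduces the bias. All data here is synthetic.

def train_group_rates(records):
    """Learn P(labelled-reoffended | group) from (group, label) records."""
    counts, positives = {}, {}
    for group, label in records:
        counts[group] = counts.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(label)
    return {g: positives[g] / counts[g] for g in counts}

# Synthetic history: both groups reoffend at the SAME true rate, but
# group B is over-policed, so more of its reoffending gets recorded.
true_rate = 0.30
detection = {"A": 0.5, "B": 0.9}  # fraction of reoffending that is detected

random.seed(0)
history = []
for group in ("A", "B"):
    for _ in range(10_000):
        reoffended = random.random() < true_rate
        labelled = reoffended and (random.random() < detection[group])
        history.append((group, labelled))

model = train_group_rates(history)
# The learned "risk" for B exceeds that for A purely because of the
# detection disparity, not because of any difference in behaviour.
print(model)
```

The simulation shows label bias only; real systems face further distortions (feature bias, feedback loops), but the core point stands: the model faithfully encodes whatever disparity the records contain.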

ii) Troubles with Transparency

At the heart of this AI conundrum lies the opaqueness of modern-day algorithms. Many sentencing and risk-related AI programmes are protected as proprietary information, meaning life-changing outputs are determined by unknown inputs. Even when comparatively accessible algorithms are evaluated, tech-savvy legal professionals struggle with the complexity of their inner workings. Judges themselves may not fully understand the specific mechanisms of the AI system in their courtroom, yet still use it as an assistive feature in their decision-making.

Today’s AI systems are frequently characterised by a troubling lack of transparency, raising difficult questions surrounding legal accountability and appellate procedures. In the UK, public bodies and police forces are not required to disclose information regarding their use of AI, an area of concern in the eyes of the House of Lords Justice and Home Affairs Committee.10 Black-box systems may lead to unsatisfying outcomes for defendants, where the factors considered by the AI system are not fully understood. Given the importance of judicial reason-giving to the public’s confidence in courtroom fairness, AI’s potential shortcomings are once more illuminated.

iii) Concerns Surrounding the Dehumanisation of the Courtroom

Concerns about transparency are coupled with worries about the possible consequences of dehumanising the legal process. Judges, as human decision-makers, exercise judicial discretion, allowing an empathetic understanding of a defendant’s individual circumstances to inform the sentence they ultimately pass down. Mechanistic algorithms may fail to grasp the idiosyncrasies of each case, raising ethical questions about the key differences between machine-driven and human-driven decision processes. Currently used sentencing algorithms are fallible, as AI systems struggle to make value judgements in the manner a human would.11 For those facing the prospect of time in prison, AI’s role in judicial processes can be unsettling, as a technological error can have catastrophic ramifications.12

 

3. Going Forward: The Future of AI in Sentencing

i) The Need for Thorough Regulation

Though it is uncertain whether AI will one day replace human judges at sentencing, AI will likely become enmeshed within the criminal justice system as an important judicial aid. Effective regulation must therefore be implemented to ensure that AI enhances efficiency and promotes consistency, rather than merely compounding the present problems within the justice system. This developing techno-tool for justice appears to be a new double-edged sword. A quest for AI-led courtroom efficiency and sentencing consistency should not come at the expense of the fundamental fairness and due-process principles the criminal justice system was built upon. Transparency requirements should ensure that judges can understand the specific factors considered by AI algorithms and the weight assigned to each variable within the model, whilst also lifting the veil of AI for those who find themselves at any stage of the criminal justice system. Meanwhile, rigorous data-quality standards must protect AI models from replicating unjust sentencing patterns.

With few cases of AI precedent to draw on, we must proactively regulate rather than retrospectively legislate. Stringent regulations must be placed on how algorithms are designed and developed, even if AI becomes sufficiently advanced to rival human decision-making.14 If AI is here to stay, clear-cut mandates must look to create processes that are not only transparent but also consistent with contemporary legal norms. Regular review and ongoing oversight are critical for this ever-evolving technology, and a flexible framework must be ready to deal with the emergent challenges AI is likely to create.

ii) Expert Opinion

Speaking at the Oxford Union on 20th February 2024, Baron Neuberger of Abbotsbury – the former President of the Supreme Court of the United Kingdom – reflected on AI’s complexity and unknowability: “with quantum computing round the corner … it is difficult to predict what AI is going to do”. Though AI will play a central role in the development of many fields in the 21st century, it is in the world of sentencing and criminal justice that some of the most pertinent matters have already arisen.

 

References

  1. Ryberg, J. and Roberts, J.V. eds., 2022. Sentencing and artificial intelligence. Oxford University Press.
  2. Fair Trials, 2021. Automating injustice: The use of artificial intelligence & automated decision-making systems in criminal justice in Europe.
  3. Courts and Tribunals Judiciary, 2023. AI Judicial Guidance.
  4. Upile, S., 2023. The use of AI can disproportionately negatively affect marginalised groups in our society. Is there any benefit to using AI in our legal system?, Doughty Street.
  5. Digital Watch, 2024. AI is making its way into the courtroom in England and Wales, Digital Watch.
  6. Victims’ Commissioner, 2023. Victims’ Commissioner statement on record Crown Court backlog. Victims’ Commissioner.
  7. Sturge, G., 2023. Court statistics for England and Wales. Parliament.uk.
  8. Buranyi, S., 2017. Rise of the racist robots – how AI is learning all our worst impulses, The Guardian.
  9. Brannon, K., 2024. AI sentencing cut jail time for low-risk offenders, but study finds racial bias persisted, Tulane University News.
  10. Brader, C., 2022. AI technology and the Justice System: Lords Committee Report, Houses of Lords Library.
  11. McKendrick, J. and Thurai, A., 2022. AI isn’t ready to make unsupervised decisions, Harvard Business Review.
  12. Ryberg, J. and Roberts, J.V. eds., 2022. Sentencing and artificial intelligence. Oxford University Press.
  13. Brader, C., 2022. AI technology and the Justice System: Lords Committee Report, Houses of Lords Library.
  14. Taylor, I., 2023. Justice by Algorithm: The Limits of AI in Criminal Sentencing, Criminal Justice Ethics, 42(3), pp.193–213.