Restitutionary Liability for Algorithmic Decision-Making

Authors

  • Dr Elise Bant

Abstract

As algorithmic systems increasingly influence economic and administrative decision-making, novel questions arise regarding restitutionary liability for benefits generated through automated processes. This article explores whether unjust enrichment doctrine can coherently address enrichment arising from algorithmic error, bias, or unintended system behaviour. It examines the conceptual challenges posed by attribution, causation, and voluntariness in circumstances where enrichment occurs without direct human intention. The article analyses whether existing principles governing enrichment “at the expense of” the claimant can be adapted to automated contexts or whether such cases expose structural limits within restitutionary reasoning. Drawing on comparative case law and interdisciplinary scholarship on artificial intelligence and legal responsibility, the article argues that unjust enrichment can provide a corrective response in limited circumstances, particularly where identifiable parties benefit from algorithmic malfunction. However, it cautions against an expansive application that would convert restitution into a general mechanism for allocating technological risk. The article concludes by proposing criteria for restitutionary liability that balance corrective justice, commercial certainty, and the realities of algorithmic governance.

Published

30-09-2024

Section

Articles