24 July 2025
Facial recognition and risk prediction algorithms (ASTs) are increasingly used across social domains. Unfortunately, they can lead to ‘algorithmic discrimination’. ‘Algorithmic fairness’, the dominant concept for addressing such discrimination, is inadequate: unlike theories of structural injustice, it neglects the social structures in which algorithms are embedded, such as institutions, and thereby blinds us to important roots of discrimination, including unequal power relations. Building on three case studies, this project therefore develops a new normative framework for assessing ASTs from the perspective of structural injustice.