Just not Fair: Towards a normative framework for AI and algorithmic surveillance technologies (ASTs) from the perspective of structural injustice
Marjolein Lanzing

Algorithmic surveillance technologies (ASTs), such as facial recognition and risk-prediction algorithms, are increasingly used across social domains. Unfortunately, they can lead to 'algorithmic discrimination', and 'algorithmic fairness' is an inadequate concept for addressing it: unlike theories of structural injustice, it neglects the social structures in which algorithms are embedded, such as institutions. This blinds us to important roots of discrimination, such as unequal power relations. Building on three case studies, this project therefore develops a new normative framework for assessing ASTs from the perspective of structural injustice.