How do data-based algorithmic systems affect people who are particularly vulnerable, and what forms of injustice can arise as a result? From a legal philosophy perspective, this dissertation examines how government AI applications in areas such as education, social welfare, asylum, and law enforcement can produce disadvantages for individuals. The focus is on bias typologies, the limits of quantitative fairness tests, and the concept of "susceptibility to algorithmic disadvantage." Using theoretical tools and case studies, such as the Austrian AMS forecasting system, the work shows how algorithmic systems can reinforce systematic inequalities across different contexts.
This dissertation project is part of the Interdisciplinary Legal Studies program at the Faculty of Law of the University of Vienna. It offers a legal-philosophical examination of data-based algorithmic systems applied in government action toward individuals, particularly individuals in sensitive life situations: in education, in social welfare contexts, in asylum matters, and in criminal prosecution.
The technical scope of the dissertation covers both more traditional AI systems, such as classification systems, and generative AI. One focus is the risk of discrimination arising from data bias. The dissertation examines bias in depth and develops a three-part typology of bias types, which also addresses the question of whether an existing bias can, in principle, be repaired or mitigated (this is not always the case). This typology has already been published as a peer-reviewed journal article.
In addition, the pitfalls of quantitative bias testing are addressed: all the tensions and ambivalences that have long been studied in connection with data and quantification are also relevant to bias testing. This analysis was published as a peer-reviewed paper at FAccT 2023. With regard to the relationship between the state and the individual, the dissertation draws on philosophical relational egalitarianism to formulate, in justice-theoretic terms, the conditions under which individuals become particularly vulnerable to the disadvantageous effects of algorithmic systems: susceptibility to algorithmic disadvantage. This concept, too, has been published in a peer-reviewed paper, at FAccT 2024.
Finally, the theoretical and methodological tools developed in the course of the dissertation are applied to several case studies. One case study deals with an algorithmic forecasting system used by the Austrian Public Employment Service (AMS); it has also been published as a peer-reviewed paper.
Through the lens of susceptibility to algorithmic disadvantage, a common thread of injustice is traced across a wide variety of algorithmic systems, deployed in heterogeneous areas of application and in widely dispersed geographical locations.
Paola Lopez
Research Associate
- Building/room: Am Fallturm 1, TAB, Room 3.88
- Phone: +49 (0)421 218-56573
- E-mail: lopez@uni-bremen.de
