Algorithmic Fairness, Institutional Logics, and Social Choice

Robin Burke¹, Amy Voida¹, Nicholas Mattei², Nasim Sonboli¹, and Farzad Eskandanian³
¹University of Colorado Boulder
²Tulane University
³DePaul University

Abstract

Fairness, in machine learning research, is often conceived as an exercise in constrained optimization based on a predefined fairness metric [2]. While many cases of problematic systems appear in the popular literature, e.g., [3], only a small number of studies of deployed systems exist, e.g., [1]. We argue that this abstract model of algorithmic fairness is a poor match for the real world, in which applications are likely to be embedded within a larger context involving multiple classes of stakeholders as well as multiple social and technical systems. We can therefore expect multiple, competing claims about fairness from different stakeholders, especially in applications oriented towards social good. We propose computational social choice as a promising framework for integrating multiple perspectives on system outcomes in fairness-aware systems, and provide an example in the application of personalized recommendation for a non-profit. Our ongoing work in this area comprises both studies of user-aware fairness in recommendation [5] and the evaluation of social choice mechanisms [4].

[1] Alex Beutel, Jilin Chen, Tulsee Doshi, Hai Qian, Allison Woodruff, Christine Luu, Pierre
Kreitmann, Jonathan Bischof, and Ed H. Chi. 2019. Putting fairness principles into practice:
Challenges, metrics, and improvements. In Proceedings of the 2019 AAAI/ACM Conference
on AI, Ethics, and Society. 453–459.
[2] Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in
recidivism prediction instruments. Big Data 5, 2 (2017), 153–163.
[3] Cathy O’Neil. 2016. Weapons of math destruction: How big data increases inequality and
threatens democracy. Broadway Books.
[4] Nasim Sonboli, Robin Burke, Nicholas Mattei, Farzad Eskandanian, and Tian Gao. 2020.
“And the Winner Is…”: Dynamic Lotteries for Multi-group Fairness-Aware Recommendation.
arXiv:2009.02590 [cs.IR]
[5] Nasim Sonboli, Farzad Eskandanian, Robin Burke, Weiwen Liu, and Bamshad Mobasher.
2020. Opportunistic Multi-aspect Fairness through Personalized Re-ranking. In Proceedings
of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP),
Tsvi Kuflik, Ilaria Torre, Robin Burke, and Cristina Gena (Eds.). ACM, 239–247.

Presentation