On Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making

Jakob Schoeffer1,*, Maria De-Arteaga2, Niklas Kuehl1

1 Karlsruhe Institute of Technology, Germany

2 University of Texas at Austin, USA

* Presenting author

Explanations are often framed as an essential pathway towards improving fairness in human-AI decision-making. Empirical evidence on explanations’ ability to enhance distributive fairness is, however, inconclusive [1]. Prior work has found that humans’ perceptions of an AI system are influenced by the features the system considers in its decision-making process [2,3,4]. For instance, if explanations highlight the importance of sensitive features (e.g., gender or race), humans are likely to perceive the system as unfair. However, researchers have challenged the assumption that an AI’s “unawareness” of sensitive information generally leads to fairer outcomes [5,6,7]. Moreover, the relationship between humans’ perceptions and their ability to override wrong AI recommendations and adhere to correct ones (i.e., to appropriately rely on the AI) is not well understood.

In our work, we examine the interplay of explanations, perceptions, and appropriate reliance on AI recommendations, and we argue that claims about explanations’ ability to improve distributive fairness should, first and foremost, be evaluated against their ability to foster appropriate reliance. To empirically support our conceptual arguments, we conducted a user study on the task of occupation prediction from short bios. In the experiment, we assess differences in perceptions and reliance behavior when humans do and do not see explanations, and when these explanations indicate the use of sensitive vs. task-relevant features in predictions. Ultimately, we test for differences in perceptions and reliance behavior across conditions and draw implications for how the role of explanations in human-AI decision-making should be characterized.

Our findings show that explanations influence humans’ fairness perceptions, which, in turn, affect reliance on AI recommendations. However, we observe that low procedural fairness perceptions lead to more overrides of AI recommendations, regardless of whether those recommendations are correct or wrong, a phenomenon sometimes referred to as “algorithm aversion”. This (i) raises doubts about the usefulness of common explanation techniques for enhancing distributive fairness and, more generally, (ii) emphasizes that fairness perceptions must not be conflated with distributive fairness.

References

[1] Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., et al. (2021). What do we want from explainable artificial intelligence (XAI)? A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, 103473.

[2] Grgic-Hlaca, N., Redmiles, E. M., Gummadi, K. P., & Weller, A. (2018). Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. In Proceedings of the 2018 World Wide Web Conference (pp. 903-912). 

[3] Plane, A. C., Redmiles, E. M., Mazurek, M. L., & Tschantz, M. C. (2017). Exploring user perceptions of discrimination in online targeted advertising. In 26th USENIX Security Symposium (pp. 935-951). 

[4] Van Berkel, N., Goncalves, J., Hettiachchi, D., Wijenayake, S., Kelly, R. M., et al. (2019). Crowdsourcing perceptions of fair predictors for machine learning: A recidivism case study. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-21. 

[5] Corbett-Davies, S., & Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023. 

[6] Kleinberg, J., Ludwig, J., Mullainathan, S., & Rambachan, A. (2018). Algorithmic fairness. In AEA Papers and Proceedings (Vol. 108, pp. 22-27). 

[7] Nyarko, J., Goel, S., & Sommers, R. (2021). Breaking taboos in fair machine learning: An experimental study. In Equity and Access in Algorithms, Mechanisms, and Optimization (pp. 1-11).