Overcoming Algorithm Aversion through Process Control: People Will Use Imperfect Algorithms if They Can (Even Slightly) Customize Them

Lingwei Cheng and Alexandra Chouldechova

Carnegie Mellon University, USA

Understanding the effects of providing greater control to the intended users of algorithmic tools is central to advancing the responsible development and deployment of AI technologies in human-in-the-loop systems. While there is now an increasing emphasis on participatory design methods for AI development, algorithms mostly continue to be designed by third-party researchers and organizations that may not fully understand users' needs and values. This can lead to algorithm aversion, wherein human decision-makers are reluctant to use algorithms even when those algorithms outperform expert human judgment [1–3]. Studies have found that users are more willing to use algorithms when they have some control over the outcomes [4], and are more likely to perceive the algorithms as fair in those settings [6]. This ability to appeal or modify the outcome of a decision once it has been made is termed "outcome control" [5]. Outcome control can be contrasted with "process control", which entails control over the processes that lead to the algorithmic tool (e.g., data curation, the training procedure, etc.). The effect of process control on algorithm aversion is presently under-explored. We ask: Does process control mitigate algorithm aversion? Does providing both process control and outcome control mitigate algorithm aversion more than either form of control on its own? We conduct a replication study of outcome control [4], and test novel process control conditions on Amazon Mechanical Turk (MTurk) and Prolific by allowing users to customize which input factors or which model family (e.g., linear regression, decision trees) is used in the training process. Our results (mostly) confirm prior findings on the mitigating effects of outcome control. We find that process control in the form of choosing the training algorithm mitigates algorithm aversion, but changing inputs does not. Choosing the training algorithm also mitigates algorithm aversion to the same extent as outcome control does. Lastly, giving users both outcome and process control does not reduce algorithm aversion more than either form of control alone. Our study contributes to design considerations around mitigating algorithm aversion and reflects on the challenges of replication for crowdworker studies of human-AI interaction.
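To make the study conditions concrete, here is a minimal, illustrative sketch (in Python with scikit-learn) of how the two process-control manipulations and a bounded outcome-control adjustment could be operationalized. This is not the study's actual implementation: the feature names, model families, adjustment cap, and synthetic data are all assumptions introduced for illustration.

```python
# Illustrative sketch only (not the authors' study code). Process control:
# the participant picks the model family and/or the input factors used in
# training. Outcome control: the participant may shift the model's
# prediction, but only by a bounded amount.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Process control, variant 1 (assumed menu): choose the model family.
MODEL_FAMILIES = {
    "linear_regression": LinearRegression(),
    "decision_tree": DecisionTreeRegressor(max_depth=3),
}

# Process control, variant 2 (assumed feature set): choose the input factors.
FEATURES = ["gpa", "test_score", "essay_rating", "interview_rating"]

# Outcome control: assumed cap on how far a prediction can be adjusted.
ADJUSTMENT_CAP = 10.0

def train_customized_model(X, y, family_choice, feature_choice):
    """Train a model using the participant's process-control choices."""
    cols = [FEATURES.index(f) for f in feature_choice]
    model = MODEL_FAMILIES[family_choice]
    model.fit(X[:, cols], y)
    return model, cols

def adjusted_prediction(model, x_row, cols, user_adjustment):
    """Apply a participant's outcome-control adjustment, clipped to the cap."""
    raw = model.predict(x_row[cols].reshape(1, -1))[0]
    return raw + np.clip(user_adjustment, -ADJUSTMENT_CAP, ADJUSTMENT_CAP)

# Tiny synthetic demo.
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, len(FEATURES)))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 5, size=200)

model, cols = train_customized_model(
    X, y, family_choice="decision_tree", feature_choice=["gpa", "test_score"]
)
# A requested adjustment of +25 is clipped to the +10 cap.
print(adjusted_prediction(model, X[0], cols, user_adjustment=25.0))
```

The design point the sketch is meant to capture is that both forms of control are deliberately limited, echoing the "(even slightly)" framing: participants choose among a small menu of process options, and outcome adjustments are capped.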

References

[1] Robyn M. Dawes, David Faust, and Paul E. Meehl. 1989. Clinical versus actuarial judgment. Science 243, 4899 (1989), 1668–1674. https://doi.org/10.1126/science.2648573
[2] Robyn M. Dawes. 1979. The robust beauty of improper linear models in decision making. American Psychologist 34, 7 (1979), 571.
[3] Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144, 1 (2015), 220–239. https://doi.org/10.1037/xge0000033
[4] Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey. 2018. Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Management Science 64, 3 (2018), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
[5] Pauline Houlden, Stephen LaTour, Laurens Walker, and John Thibaut. 1978. Preference for modes of dispute resolution as a function of process and decision control. Journal of Experimental Social Psychology 14, 1 (1978), 13–30. https://doi.org/10.1016/0022-1031(78)90057-4
[6] Min Kyung Lee, Anuraag Jain, Hea Jin Cha, Shashank Ojha, and Daniel Kusbit. 2019. Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 182 (Nov. 2019), 26 pages. https://doi.org/10.1145/3359284