Glen Berman
Australian National University
This contribution will explore how the affordances of engineering tools used to develop algorithmic systems configure practitioners’ conception and enactment of their social responsibilities. The engineering tools discussed are cloud-based platforms that provide practitioners with access to scalable computing infrastructure, and services to facilitate the life cycle of algorithmic system design, development, management, and decommissioning. Affordance here refers to the links between technical features of engineering tools, the ways in which such tools are used by practitioners, and the social outcomes of practitioners’ use of these tools to develop algorithmic systems [1].
As algorithmic systems are increasingly deployed in high-stakes domains, the allocation of social responsibility for their performance and social outcomes has become an urgent concern for practitioners and policy makers [3, 4]. The day-to-day interfaces between practitioners and their engineering tools are critical sites at which social responsibility is negotiated and configured [2]. As such, attending to the affordances of the engineering tools used by practitioners can help extend our understanding of how social responsibility for algorithmic systems is currently constructed. I will present findings from a critical reading of a prominent platform used in the development of machine learning systems, a subcategory of algorithmic systems. These findings highlight tensions between discourse in policy settings regarding the allocation of social responsibility and the conception and enactment of that social responsibility by practitioners using the platform to develop machine learning systems.
References
[1] Jenny L. Davis and James B. Chouinard. 2016. Theorizing Affordances: From Request to Refuse. Bulletin of Science, Technology & Society 36, 4 (2016), 241–248. https://doi.org/10.1177/0270467617714944
[2] Rob Kitchin. 2017. Thinking critically about and researching algorithms. Information, Communication & Society 20, 1 (2017), 14–29. https://doi.org/10.1080/1369118X.2016.1154087
[3] Will Orr and Jenny L. Davis. 2020. Attributions of ethical responsibility by Artificial Intelligence practitioners. Information, Communication & Society 23, 5 (2020), 719–735. https://doi.org/10.1080/1369118X.2020.1713842
[4] Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes. 2020. Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). 33–44. https://doi.org/10.1145/3351095.3372873