Lorenn P. Ruster1, Katherine A. Daniell1,2
1 School of Cybernetics, Australian National University
2 Fenner School of Environment and Society, Australian National University
Ensuring that human-machine collaborations (HMCs) do not dehumanise is an emerging focus of debate in the HMC literature, particularly when it comes to the application of HMC in medical contexts (Formosa et al. 2022). Through a cybernetics lens, HMCs can be seen as purposive systems (Von Foerster, White, Peterson, & Russel 1968) shaped by human values. This interactional stance (Friedman & Hendry 2019) – where technology is shaped by humans and is concurrently shaping humans – is of particular relevance to the responsible design of machines that will be used by humans in collaborative ways. However, much of the human-machine collaboration literature focuses on how human-machine systems allocate resources in optimal ways (Hu & Chen 2017; Liu & Zhao 2021), structure human and autonomous teammates’ roles (Scholtz 2003), measure performance (Ma, Ijtsma, Feigh, & Pritchett 2022) and interact safely (Heinzmann & Zelinsky 1999; Ma & Wang 2022); relatively little attention is given to the earliest design phases, where values are explicitly or implicitly chosen and begin to be embedded in the design decisions that shape HMCs. This talk posits that a focus on dignity as a value guiding these early stages (and then revisited throughout the design and implementation process) could provide a fruitful avenue of exploration for the future of responsible HMCs. It shares an interdisciplinary review of what dignity can look and feel like, pondering its meaning from a cybernetic perspective that considers HMCs as systems comprising technological, human and environmental factors. In doing so, this talk highlights a plurality of meanings of the concept of dignity and their potential relevance to HMC, including human rights-based discourse on the meaning of dignity (Mattson & Clark 2011), concepts of environmental dignity (Manaster 1976) and non-Western perspectives on dignity as communal responsibility (Ikuenobe 2016). It also shares some preliminary learnings from intervention research with early-stage entrepreneurs, highlighting how a focus on dignity may influence the initial phases of the design of recommender algorithms. We hypothesise that learnings from the recommender algorithm design context may also apply to the field of HMC. In doing so, the talk hopes to provoke conversation around the use of dignity as a value for the future of collaborative machines. It will be of interest to those interested in how we might practically ensure that what it means to be human is preserved and enabled in human-machine collaborations.
References
Formosa, P, Rogers, W, Griep, Y, Bankins, S, & Richards, D, 2022, ‘Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts’, Computers in Human Behavior, vol. 133, p. 107296, doi: 10.1016/j.chb.2022.107296.
Friedman, B & Hendry, DG, 2019, Value Sensitive Design: Shaping Technology with Moral Imagination, MIT Press.
Heinzmann, J & Zelinsky, A, 1999, ‘A Safe-Control Paradigm for Human–Robot Interaction’, Journal of Intelligent and Robotic Systems, vol. 25, pp. 295–310, doi: 10.1023/A:1008135313919.
Hu, B & Chen, J, 2017, ‘Optimal Task Allocation for Human–Machine Collaborative Manufacturing Systems’, IEEE Robotics and Automation Letters, vol. 2, no. 4, pp. 1933–1940, doi: 10.1109/LRA.2017.2714981.
Ikuenobe, PA, 2016, ‘The Communal Basis for Moral Dignity: An African Perspective’, Philosophical Papers, vol. 45, no. 3, pp. 437–469, doi: 10.1080/05568641.2016.1245833.
Liu, J & Zhao, Y, 2021, ‘Role-oriented Task Allocation in Human-Machine Collaboration System’, in 2021 IEEE 4th International Conference on Information Systems and Computer Aided Education (ICISCAE), pp. 243–248, doi: 10.1109/ICISCAE52414.2021.9590721.
Ma, L & Wang, C, 2022, ‘Safety Issues in Human-Machine Collaboration and Possible Countermeasures’, in V. G. Duffy (ed.), Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Anthropometry, Human Behavior, and Communication, pp. 263–277, Springer International Publishing, Cham, doi: 10.1007/978-3-031-05890-5_21.
Ma, LM, Ijtsma, M, Feigh, KM, & Pritchett, AR, 2022, ‘Metrics for Human-Robot Team Design: A Teamwork Perspective on Evaluation of Human-Robot Teams’, ACM Transactions on Human-Robot Interaction, vol. 11, no. 3, pp. 30:1–30:36, doi: 10.1145/3522581.
Manaster, KA, 1976, ‘Law and the Dignity of Nature: Foundations of Environmental Law’, DePaul Law Review, vol. 26, no. 4, pp. 743–766.
Mattson, DJ & Clark, SG, 2011, ‘Human dignity in concept and practice’, Policy Sciences, vol. 44, no. 4, pp. 303–319.
Scholtz, J, 2003, ‘Theory and Evaluation of Human Robot Interactions’, in Proceedings of the 36th Hawaii International Conference on System Sciences, Hawaii, retrieved from https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.118&rep=rep1&type=pdf.
Von Foerster, H, White, J, Peterson, L, & Russel, J (eds.), 1968, Purposive Systems: Proceedings of the First Annual Symposium of the American Society for Cybernetics, Spartan Books, New York.