Involving stakeholders: The role of power in ELSA Lab Defence for military AI

Marlijn Heijnen1, Jurriaan van Diggelen1, Marc Steen1

1 TNO, Netherlands

Many ethical, legal, and societal concerns have been raised in relation to the development and deployment of AI systems in military contexts. High-level frameworks and guidelines for AI have been proposed, but these still need to be translated into practical requirements, e.g. for developing and implementing AI systems as parts of larger socio-technological systems; here we propose to focus on themes like Human-Machine Teaming. This is where diverse stakeholders can have a voice (Gasser and Almeida, 2017).

However, engaging stakeholders raises key questions about empowerment and the distribution of power. Who decides which stakeholders have a say? Who is suited to represent a stakeholder group (e.g., citizens in a mission area)? How are interests expressed, reported, and balanced? And how can these interests be translated into Human-Machine collaboration concepts?

In general, there are many possible variations of stakeholder involvement processes, and various overlapping methods for stakeholder involvement, such as human-centred design and Value Sensitive Design (Friedman and Hendry, 2019; Jacobs and Huldtgren, 2018). However, a more specific understanding is needed of the role of power when involving stakeholders.

Recently, the ELSA (ethical, legal, and societal aspects) Lab Defence was established: a transdisciplinary research program with eight partners from academia and civil society, and, in the future, also industry. The ambition of the ELSA Lab (Van Veenstra, Van Zoonen, and Helberger, 2021) is to enable diverse stakeholders to exert influence on the development and deployment of military AI systems in general, and on Human-Machine Teaming more specifically.

In the ELSA Lab Defence, we will explore the following questions: How can power be organised and stakeholders be empowered in an ELSA Lab for military AI? And how can this be organised in use cases that involve Human-Machine Teaming? For now, we explore these questions conceptually, drawing on, e.g., political science and management studies (Freeman and McVea, 2006; Blok, 2019). In later work, we also intend to conduct empirical research on practical cases of collaboration between partners of the ELSA Lab Defence, e.g. in meetings or workshops.

References

Freeman, R. Edward, and John McVea. 2006. "A Stakeholder Approach to Strategic Management." In The Blackwell Handbook of Strategic Management, edited by Michael A. Hitt, R. Edward Freeman, and Jeffrey S. Harrison. Oxford: Blackwell.

Friedman, Batya, and David G. Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA: MIT Press. 

Gasser, Urs, and Virgilio A. F. Almeida. 2017. "A Layered Model for AI Governance." IEEE Internet Computing 21 (6): 58-62.

Jacobs, Naomi, and Alina Huldtgren. 2018. "Why Value Sensitive Design Needs Ethical Commitments." Ethics and Information Technology. doi: 10.1007/s10676-018-9467-3.

Van Veenstra, Anne Fleur, Liesbet Van Zoonen, and Natali Helberger, eds. 2021. ELSA Labs for Human Centric Innovation in AI: Netherlands AI Coalition.