What does responsibility look like in military contexts – and how do you think about encoding it in autonomous military technologies with the capacity to harm? In today’s episode, we explore this topic from a legal perspective with the help of Lauren Sanders. Lauren is a senior research fellow at the University of Queensland with expertise in international criminal law, international humanitarian law, and domestic counter-terrorism law. She is also host and editor of the Law and the Future of War podcast.
Listen and subscribe on Apple Podcasts, iHeartRadio, PocketCasts, Spotify and more. Five-star ratings and positive reviews on Apple Podcasts help us get the word out, so if you enjoy this episode, please share it with others and consider leaving us a rating!
Guest: Lauren Sanders
Co-Hosts: Zena Assaad and Liz Williams
Producers: Zena Assaad, Liz Williams, and Martin Franklin (East Coast Studio)
Liz: Hi everyone, I’m Liz Williams.
Zena: And I’m Zena Assaad.
Welcome to episode two of our second season of Algorithmic Futures.
Liz: Join us as we talk to technology creators, regulators and dreamers from around the world to learn how complex technologies may shape our environment and societies in the years to come.
Liz: In today's episode, we're joined by Lauren Sanders. Lauren is a senior research fellow at the University of Queensland, researching international criminal law, international humanitarian law and domestic counter-terrorism law. Lauren is also the host and editor of the Law and the Future of War podcast, where she interviews experts in the fields of law, emerging and disruptive technology, military strategy and military affairs.
Zena: Lauren has done an extensive amount of work around autonomous military technologies and the legal and ethical implications of these technologies. In this episode we unpack some of the subjective terminology around autonomous technologies – things like responsible, ethical and trusted. We also explore how these subjective concepts can be captured in a legal framework.
Lauren shares some incredibly insightful perspectives on the use of advanced technologies in a military domain. She did not shy away from answering some prickly questions and was able to bring a refreshing level of pragmatism to the discussion. We really hope you enjoy this episode as much as we enjoyed interviewing Lauren.
Hi, Lauren. How are you going?
I’m well, Zena. How are you this morning?
I’m doing really well. We’re very excited to have you on the podcast today.
Thank you so much for having me. It's great to be here, talking to someone else who's podcasting in this area of AI and, really from my perspective, the law.
Well, yeah. We’ve both been working in this space for a while, and I know the last time we saw each other was when we attended a workshop around responsible AI. And we’re seeing this term, responsible, pop up everywhere, in both civil and defense perspectives. And I know that you do a lot of work in the defense space. So from your perspective, how does this term resonate differently in a defense perspective versus a civil perspective?
Well, I think the genesis of the concept, responsible AI, is interesting in itself. We’ve seen, over the last few years, the terms used to describe what we want from AI evolve. So, trusted. Transparent. There’s a whole heap of principled words that have been used to describe what AI can do or is desirable for us to do over the last few years. But certainly, the current zeitgeist, both in the defense context and in the civilian context, seems to be using the term responsible because it is a really useful umbrella principle that incorporates most of the other features or characteristics of what it is that is necessary to apply to AI to make it lawful, make it a technology that is acceptable for use by society generally.
When you were describing all of those things, Lauren, you mentioned a lot of other terms that sit under that umbrella, and those terms are also big terms. So you said things like lawful and trusted. I think for me, one of the things that I struggle with is that this concept of responsible is really hard to grasp. It's hard to capture. It's hard to describe. And it seems to be this umbrella term that all of these other things fall under, and they're also very hard to capture and very hard to describe. So how do you grapple with that, especially in a defense context where perhaps the lines are a little bit more blurred, or some of the considerations are a little bit different?
I think it’s really interesting to look at what is happening worldwide, in relation to these uses of principles and terms to describe AI. So there’s been a proliferation of principles in terms of how states want AI to be utilized from a civilian perspective, but also a number of states have released principles in relation to the military use of AI, which include a lot of these terms. So you see things like responsibility appearing in almost all of them. And in a number of them now, they’re being used as the primary descriptor.
There was a responsible AI in the military conference in the Netherlands in February 2023, which resulted in, I think, about 70 states issuing a call to action for AI in the military context to be responsible. Within each of the sets of principles that these states are releasing, they have these umbrella terms, and then specific principles underneath those. And each state, of course, has a slightly different definition for what they mean by all of those things.
I've undertaken an activity with a colleague of mine where we tried to compare and contrast all of these definitions, to see what the essence of each of those terms really was. And because they are so based in culture, in the use context, and in the requirements for the particular system, it's very difficult to actually come up with a specific definition. I think what we're going to see is state-based adoption of these particular terms. In the Australian context, Australia has issued a response to that call to action from the responsible AI in the military conference, and adopted that particular term. We are seeing some of the language around AI in the military using the term responsible. So DAIRNet, a collaborative research network, uses that term in some of its work.
But actually, we haven't seen the Australian Defence Force release any particular list of principles setting out what those specific terms are, what they actually mean. Instead, they have to be derived from practice and from the other principles and policy statements that have been issued. I expect that they probably will release a set of principles in the near future. And I would also expect to see responsible being one of those headline principles, if not the umbrella term, just because it does really reflect, I guess, the current zeitgeist as to how people are trying to grapple with what they want AI to be.
But realistically, from a legal perspective, responsibility is really used to talk about who is the person, or which is the state, that can be held to account for a particular action. So in a military context, state responsibility for the use of AI is a little different to what you would see in a civilian context, largely because the likelihood of AI being used to cause harm or damage to humans or objects is obviously a lot higher.
Given the multitude of ways different states, different actors can interpret responsibility, how do you approach conversations about this in your work? What are some of the considerations or approaches that you lean into, to tease out what responsibility means in a certain context, and how that might get translated into a legal perspective?
Well, I think it's slightly easier for the lawyers than it is for those dealing with general principles and ethics, because we have a set of rules that we can rely on already, and we can identify where the gaps in those rules are in relation to the use, design, development and deployment of these kinds of systems. So in the military context, if anything is being designed to be a weapon, means or method of warfare, then we know that the rules of international humanitarian law, or the laws of armed conflict, will apply to how that weapon, means or method of warfare will be utilized. So from that perspective, we can conduct effectively a gap analysis: what already limits how this capability can be utilized, and are there any gaps in the law that cause concern?
So at the moment, and it's been ongoing for five to six years now, there are discussions under the auspices of the Convention on Certain Conventional Weapons, where a Group of Governmental Experts has been appointed to talk about lethal autonomous weapon systems. They are effectively discussing and trying to identify whether there is a need to create an additional set of rules, or an additional ban in international law, to deal with this new kind of weapon system that incorporates AI to conduct its identification and strike of an object or a person in a military context, or whether the existing rules effectively cover the field.
So from that perspective, it's more applying existing rules, rather than necessarily having to grapple with the messy, non-definitional space of ethics. Though that's not to say that there isn't an interaction between the two fields.
That brings up that tricky word, ethics, and I know that you’ve done a lot of work looking at the ethical use of technology capabilities from a defense context. What’s the difference between ethical and responsible from a legal perspective?
Well, it's tricky because when we talk about a legal perspective, we're talking about an obligation that the law would apply in relation to someone or a state being held accountable, or there being a specific prohibition or permission in relation to the use of a kind of technology. So ethics and responsibility form part of the general framework as to how we get to the law, but they're not necessarily separate streams within the law.
Ethics itself, we find, typically filters or trickles down into legal obligations, permissions or prohibitions over a number of years. And if we think about weapons law treaties, you can see that, over a number of years, those kinds of weapons designed and used by states that have particularly indiscriminate or injurious effects have been banned as a consequence of the ethical or humanitarian concerns about those weapons, more so than there being a strict or existing legal prohibition, other than the rules that would preclude them being used in those ways in conflict anyway.
So what we tend to see is the idea of ethics trickling down into the law over time. There is, of course, a principle of international law, or international humanitarian law, I should say, called the Martens Clause. It dictates that if a particular practice, process, weapon or system doesn't have a specific prohibition or rule applicable to it, but it still needs regulating in some way, there's a reference to the idea of the dictates of public conscience. So if there is an overwhelming position that the public social conscience says this weapon shouldn't be used in this particular way, then the Martens Clause creates this generalized ban. It's been read very narrowly by states in the past to say, "Well, actually, this is usually something that's already banned by our existing rules," and that's part of the debate that we're seeing at the GGE on LAWS at the moment.
But there are always these different inputs and effects for how ethics is incorporated into law, particularly when we're talking about international humanitarian law, the laws of armed conflict, because they are centered around the concept of humanitarianism: reducing harm to the civilian population, whilst also enabling states' militaries to achieve their military effect, which inevitably means the destruction of humans and things. So there is a balancing act in relation to what is lawful in that context, and there is a shift in what is acceptable over time, which reflects general ethical concepts and principles, and then those principles are codified into law.
Lauren, you're talking a lot about existing laws and how they apply to emerging or AI-enabled technologies. One thing I'm interested in is that a lot of the concerns raised by emerging technologies in the military space are concerns that exist even with human beings in that particular position. So why is it that we're seeing this very strong response, I guess, to the ethical and legal considerations of these technologies, even though we've seen a lot of the same implications come out of what would be deemed regular warfare, which has happened for decades and decades? What is it about emerging technologies that's raising these concerns, which arguably have always existed, even in the absence of these technologies?
It's a really good point because, you're right, there is an entire body of law that directs how soldiers, sailors and aviators are required to operate in the context of armed conflict. So there is a human who can be pointed to, who can be held accountable for their action if they fail to follow those particular rules. Those rules require a lot of analysis and application of the context. And part of the argument against passing some of that decision-making from these humans to machines, particularly when we're talking about lethal autonomous weapons systems, is that machines don't have that same judgment, that same calculus, that same reticence to take another human life that a human being would.
So the main argument that we are seeing at the GGE to ban these weapon systems outright, to say that there should be an international treaty that bans the creation, use and distribution of lethal autonomous weapon systems, is that the use of these systems offends the principle of human dignity. So that's the adoption of an ethical argument to say, we should now create a legal prohibition on the use, design or development of these capabilities. Whereas a number of other states reflect on the fact that, yes, individual soldiers have obligations to apply these rules, but getting a machine to do certain parts of that on their behalf doesn't mean that there is an absence of human decision-making. It just means that a machine has undertaken a particular part of that decision-making, with specific instructions from a different human at a different point in the system.
So a lot of the response from states that aren't supportive of this ban really focuses on the idea that, in terms of human dignity, there is no difference between being killed by a machine that's been programmed three days prior to killing you and a munition that's been programmed to kill you 30 seconds before it strikes. So those kinds of arguments are really being tested against that underlying issue of, what does it actually mean to apply dignity in the context of armed conflict? It's obviously a horrific situation where we're talking about the loss of life of combatants, or collateral damage effects in relation to civilians, and then obviously the damage to property and objects on the other side as well. So those arguments seem to be really focused on the concerns about passing decision-making to, effectively, an inanimate object, as compared to some of the other restraints that you might see in decision-making processes when a human is directly making those decisions.
And of course, that flows on to that idea of responsibility or accountability, because when we're talking about things like war crimes, the way the system is designed at the moment is that you can point to a human who is responsible for their action, so that person can be held criminally liable and responsible for what they have done wrong. Whereas if you are pointing to a machine that's caused the harm, trying to unwind and find the point in the system design where the algorithm created a direction to do a certain thing, three months before it was deployed, makes that accountability process a little bit more difficult. And that's another challenge in assessing whether the existing legal system is satisfactory for this novel technology.
Thinking about the design aspects of an autonomous system and how that can potentially play a role in the assessment of who is responsible for a given decision, I'm curious about how uncertainty plays a role in that. I mean, a lot of these autonomous systems are based on statistics, and there is some aspect of their design that is inherently uncertain, particularly in contexts that they weren't necessarily trained for. And so I'm curious, how do you think about the uncertainty aspect of these kinds of systems from a legal perspective?
Well, I think it’s useful to note that there’s uncertainty in relation to the use of any kind of weapon system deployed in a situation of armed conflict, and even uncertainty in relation to the way in which humans are going to respond to their expected duties and obligations in situations of armed conflict. So uncertainty is not something new in relation to the requirement to field capabilities and complex systems of systems, which is how a lot of modern militaries view their technologies that are interacting between different spaces, domains and systems. But it means that there needs to be a really rigorous testing and evaluation process, and that testing and evaluation process must be specific to the context and use of that capability.
So you noted that, obviously, a lot of AI capabilities can be taken from what they were originally designed for and adopted for another purpose. And I think my favorite example of this, and you've probably all heard this one already, is the problem of trying to identify the right kind of pastry at a Japanese bakery. So this guy designed an AI scanner that could scan a tray of different croissants and tell whether each was a pistachio croissant or an almond croissant. Within the space of a year, that technology had been adapted to scan slides of human cell material, to identify if there were any cancer cells on a particular slide. And I guess, visually, you can sort of picture in your head the similarities between those uses, but obviously, there are wildly different consequences of getting that analysis and assessment wrong.
And you can apply that assessment to military technologies as well. Obviously, adopting and incorporating something that was used to make decisions based on basic statistical probability in a civilian context is going to have a very different outcome if it's going to be used for the purposes of securing a lethal effect. So if it's part of the targeting cycle, if, as I think some academics call it, it has a kind of co-belligerency, contributing to the prosecution of hostilities as an outcome of its use in the system, there's a requirement to make sure it's fit for purpose for that specific use.
But moreover, there's also a requirement to make sure that it's capable of complying with the specific legal framework that will apply. Because in situations of armed conflict, depending on the kind of armed conflict, there'll be different rules that apply. And if you're not using it in armed conflict but in a civilian setting, so if we're thinking of humanitarian assistance and disaster relief, or peacekeeping operations, there's going to be a separate set of legal regimes that apply as well, in relation to things like privacy, data protection, and biometric collection and protection.
So there is actually a legal obligation on many states, Australia being one of them because Australia is a party to Additional Protocol I to the Geneva Conventions, to conduct weapons reviews of weapons, means and methods of warfare, to ensure that they comply with the state's international legal obligations. What that obligation says is that the Australian Defence Force effectively does an assessment to say, "We are satisfied. We have tested and evaluated this capability, for this particular anticipated use," so the legal obligation is for its normal and anticipated use, "and it complies with the system of laws that is applicable in this particular case." So that gives some assurance that anytime anything is acquired by the ADF, acquired by the state, the state has discharged its obligations of state responsibility by fulfilling this particular legal obligation.
It gets tricky, though, with artificial intelligence and machine learning type capabilities, when we think about a capability that might adjust itself over time. So if it's been reviewed and certified fit for use in a particular context, and it's deployed in a slightly different one, there may be a requirement to do a new review of that particular system. Or it may be that the testing data and the training data need to be reset, based on the particular situation in which it is deployed. And I think another good example that helps me picture that difference is a relatively uncluttered maritime environment, where you've got big warships and a couple of sailboats, versus a really congested, complex urban environment, where you have a lot of civilians, and combatants who are seeking to hide themselves amongst the civilian population, so they're very difficult to distinguish from those individuals.
And as a consequence, the testing required and the levels of satisfaction in relation to how this AI system might identify combatants versus civilians are going to be different depending on how it's deployed. So it leads to some questions about how states might adjust these review and testing processes, but there is certainly a very rigorous and detailed legal obligation to conduct these kinds of tests on these systems, for that military use.
I'm wondering about the Defence Strategic Review, part of which is really about changing the culture of the military, particularly procurement, to look at the creation of minimum viable products that can get tools into the hands of warfighters faster. And I'm wondering what that kind of culture shift might do, or how that might interplay with the legal requirements of actually making sure that these technologies are fit for the use that they're being designed for.
Well, it doesn't change the requirement that there needs to be that sign-off before they can be used. So even if it is just a minimum viable product that's being tested first, what it might actually do, and this is, in my view, a positive, is shift engagement in that legal review process to the left, earlier in the design process. Because, in the same way as I'm sure you have discussed in relation to value sensitive design, incorporating principles of ethics early in the design process means that when you get to the product at the back end, it's got these things built in, hardwired. You don't have to unwind it to put them in again.
In the same way, when you are undertaking spiral development processes, if you are building in, I call it IHL by design, or LOAC by design because you've got to have a catchy phrase, if you're building these things in as you go along, it means by the time you get to the back end, or you make those adjustments, you've got the leverage, you've got the points of input in relation to acceptance or adjustment of legal risk, and so you can understand a little bit better how the law might apply to these capabilities.
So it means if you’ve got a minimum viable product, then you’ve got a starting point to then adjust and adapt that legal review as you identify new novel use cases and different legal frameworks you want to deploy these things in.
Zena: When I think about emerging technologies, I think a lot of the time when we have this narrative around them, we talk about them as though they have more independence and agency than they actually do. But the reality is, it's a tool, right? Even machine learning or AI-enabled capabilities – there's still a boundary around their capacity and their capability to adapt and to change.
And really, what makes these things lethal, and what makes them potentially unethical or unlawful, is the way that they're used — so the decisions that human beings are making when using these tools. So how do you separate that from a legal perspective? Because I know that the International Committee of the Red Cross released a statement, I think it was last year, saying that they believe lethal autonomous weapon systems should be banned outright. And that was something I personally didn't agree with, for these kinds of reasons — that it's not actually the weapon or the tool, it's the people using them. And without the existence of those weapons or those tools, those people still exist, and there are other means by which they could still inflict harm. So how do you separate those two things from a legal perspective, and bring to light the fact that, at the end of the day, it is a tool, and we have more agency over that tool than it has over us?
Well, that's a good question because it really reflects the distinction between what we term weapons law and the laws of armed conflict, so the rules of the conduct of hostilities, the rules that apply in situations of armed conflict. The separation of those two bodies of law is really what that ICRC request for a total ban was getting at. They were saying that this kind of weapon system, in their view, is never capable of complying with the rules if it is used in a situation of armed conflict, and so we should ban these weapon systems outright. We've seen a number of states say the same in relation to cluster munitions, for example, or anti-personnel mines, because they are always going to be indiscriminate. There is no way you can use these weapon systems and properly apply the rules that you're obliged to follow in armed conflict. You can't deploy them in a way that doesn't result in civilian harm or civilian casualties.
So the call for that outright ban reflects the idea that the weapon per se is not capable of complying with IHL. Whereas I think what you are saying, and I tend to agree with you, is that lethal autonomous weapon systems are capable of being used in accordance with the rules of IHL, but they have to have appropriate limits and boundaries put around them, to ensure that they comply with the rules that must be applied if they are going to be used in certain contexts. The difference is really well established at international law. On one hand, there's the body that says, "This is about disarmament and non-proliferation. There's no way you can ever take this weapon system and use it lawfully, so we don't want it to exist," which is what the ICRC is saying. They want a ban, and that's been their view in relation to LAWS for a number of years now. On the other hand, you see a number of states saying, "Actually, it's about how you put boundaries, limits and controls around this system, and test and ensure its compliance for its particular use case."
And so Australia has actually been quite active in its submissions to the GGE, and has talked about this idea of systems of controls. So it talks about the idea that in modern militaries, it's not just about one driver in a car being able to drive like a maniac. She's actually got a bunch of other people and systems around her that are preventing her from doing those things, in addition to having the road rules sitting above her. In terms of things like international criminal law, the Defence Force Discipline Act and the general criminal law of Australia, if an ADF member does something wrong, there are other systemic controls in place, things like operational orders, rules of engagement, and that testing and evaluation to check in the first instance that the car is safe for the road. I'm going to keep stretching the analogy, but there are a number of different points along the way that give instructions as to how these systems can be used.
And there may be cases, in situations of the use of lethal autonomous weapon systems, where there is a requirement to say, "Actually, in this context, we can't comply with the law. We can't use this weapon in this context." Think of that complex urban environment: if there's an armed conflict where the enemy is an organized armed group, and one of their key tactics is to not distinguish themselves from the civilian population, and the capability you are deploying is a lethal autonomous weapon system that tries to identify objects, or legitimate military targets in terms of people, by what they're wearing, obviously you can't use it in that context, because it's not going to be able to distinguish between the civilian population and the combatants or the organized armed group members.
So there will be times when it won't be appropriate to use that capability, but the same capability could be used in a maritime environment, because identifying a warship versus a yacht is very easy, or relatively easy, in terms of assuring that capability for that particular use case. So again, it's about those limits, as you say. I completely agree, it's about how you put those limits and bounds around it. But in the context of any military capability, and this isn't just in relation to LAWS but all of these novel technologies, there's a layering of these systems, and it's very rare that there's just one human in charge of making the decision. And I think some of that unease in relation to these new systems comes back to the fact that it's hard to pinpoint who is effectively pulling the trigger. It's easy to identify a soldier with a rifle pointing it at someone and pulling the trigger. There's a very direct link to that particular action.
But for a number of years now, there's been a very complicated systematization of the use of force when we are thinking about things like targeting cycles. Dropping a bomb from a plane takes about 150 people to get that plane in the air, with that bomb, with the right intelligence, the right decision-making, the right fuel support, the right kind of munition, and an understanding of where it's going to drop and what its collateral effects are going to be if and when it strikes something. So if we are thinking about coalition operations against ISIS, for example, it's been a really distributed decision-making process, with a commander at the top of that chain. And there is a principle of command responsibility at international criminal law that addresses that sort of diffusion of decision-making and responsibility.
But I think that some of the discomfort with these capabilities comes from the inability to really pinpoint who has made the decision, where and when, in relation to it being deployed in situations of armed conflict. It's maybe even more complex when we think about the way that AI-enabled or AI-enhanced capabilities operate, because the algorithm relies on its test data and has been designed months or years in advance of its particular use. The designers and developers make a hundred different decisions about how they're going to achieve the particular outcome of that predictive model, or whatever it is that they're trying to achieve. And each one of those decisions will have an impact on how the capability might operate. So the ability to actually assure the proper use or proper performance of that system is more complicated. And I think it's that ability to understand how we are meeting the required standards in deploying these systems that causes some of the discomfort in accepting that this capability is, in some cases, going to be more useful than relying on less advanced technologies.
Looking at one aspect of the decision-making chain that might have happened, that might lead to an incident with one of these technologies, I’m interested in the interaction between the machine aspects and the human aspects, in particular, the communication and the trust dynamics between those two. And I’m wondering how that is considered when thinking about the appropriate use of these systems, but also when thinking about how to identify the root cause of a harm.
I think that’s a really great question because it is difficult to test a system outside of its anticipated use, and if the anticipated use is so interconnected with the human involvement in the targeting cycle. So if we’re thinking about an AI system, for example, that’s a decision support system, a decision support tool, so it’s being used to analyze a bunch of data and to spit out an assessment as to whether it thinks something is a legitimate target or not, or if they think that something is where analysts have suggested it might be, there is a level of trust required for the person who’s making the decision to then authorize the next stage in the process, being the strike, to say, “I trust this analysis. I trust this tool.” And a lot of that trust comes from understanding how it works, understanding what the testing and evaluation process is, to know what standard it’s going to meet, and also practicing its use in exercise environments.
And so, militaries are very adept at undertaking exercises. They do it routinely. And in most cases, prior to deploying, Australia, as well as many others, requires that their personnel are certified and signed off as being capable and competent in the role that they’re going into. So before they go to do something, they’ve gone through mission rehearsal exercises. They’ve tested the equipment, so they have that level of familiarity with it. Of course, there’ll be circumstances where that’s not possible because of the exigencies of the situation. But in most cases, there’s a process where that testing, that mission rehearsal or that exercising, allows the group that’s going to be using it to work through their trust issues with the capabilities, as well as the team.
As it stands at the moment, if you think about how the targeting process works, there’ll be intelligence inputs where analysts will make assessments, a number of assessments, where they have a confidence measure. They’ll say that this is probable or likely or whatever the assessment is, to say that, “We think this thing is this, and here is our reasoning for it.” And the commander or the decision maker will take that information and say, “Well, I am obliged by law to make myself aware of all of this information, and then make myself aware of what the analysis is and accept or not accept that.” So there is actually a legal obligation for commanders under international criminal law and the laws of armed conflict to be reasonably satisfied about what they’re going to accept in those decision-making processes, before they take that information and use it for the purposes of making a decision to attack something.
It’s no different, effectively, if you delete the human analyst and insert a machine analyst. So in the same way that intelligence analysts are taking data, and fusing it, assessing it, analyzing it, there is a presumption that machines can do it possibly better when we’re talking about the vast amounts of data that are being collected in modern battle spaces.
Now, of course, there are biases that apply in the way that machines analyze data, but there are also biases in the way that humans analyze data. And I think there are some really telling examples in recent times about how those biases can play out. If you recall the strike in Afghanistan during the withdrawal of US troops in 2021, there was a civilian NGO water truck that was hit with a strike from a US drone, on the basis that the intelligence analysis was that it was an improvised explosive device. So things like the vehicle being weighed down at the back, because it was full of water so it was heavy, were read as a marker or indicator that it might actually be weighed down with explosives. So there were biases at play there by the analysts, who said, “We think this is an IED, a vehicle-borne IED,” versus what it actually was, because the information they had to analyze was really limited.
Because if you think about current intelligence surveillance reconnaissance capabilities, a lot of them are really looking at a city through a straw in the sky. And so if an individual has the capacity to bring in many more straws and have something assist them in analyzing that data, I think that’s going to lead to better decision outcomes. But in any use of force in a situation of armed conflict, there is always risk. And that risk is not going to go away, by introducing this kind of decision support tool. But there are ways that decision support tools can be used to try and mitigate the risks that come with relying on humans to analyze data at the moment.
You’ve just given us a nice example of a case where some kind of human bias played a role in decision-making. I’m wondering if you can unpack the term bias a little bit more and tell us about how that might arise, both on the human side but also on the design side, and how that might be considered from a legal perspective.
Bias from a legal perspective can sometimes be completely acceptable and lawful. The question from a legal perspective is not so much, is there bias in the system? It’s more, has the bias in the system produced an unacceptable or unlawful result? So in some cases, it’s okay to discriminate: in employment, for example, if you’re trying to address quotas. So there are limited contexts where bias can be acceptable.
And in the context of AI and in a military use concept, there are going to have to be some biases built into these systems, because the law requires us to presume certain things in cases of doubt. So again, Australia is a signatory to Additional Protocol I. The law says, and it’s recognized as customary international law, so the entire world is obliged to follow this obligation, that in cases of doubt, you have to presume that someone is a civilian, rather than a combatant or someone who’s taking part in hostilities. So there has to be a bias built into the system to say: if the machine can’t determine, to the standard that we tell it to determine, that this is a combatant, then we want it to defer to civilian status. So in that case, bias is going to be okay.
In other contexts, we know that inbuilt biases in these systems, whether they come from the dataset that we input or from the way in which the algorithm is programmed to analyze the data and make its predictions, can produce unlawful biases. So the result of relying on that information will result in an unlawful effect. That’s really where the law cares. There are lots of examples, obviously, in relation to AI: cases where the AI, or the results of the AI’s calculations, have unlawfully discriminated against people of color, or against women versus men, because of the data that went in, the way that data was then analyzed, and what else was coming in from the system. In those cases, those outcomes are unlawful. That’s problematic. But in some cases, there might be a need to program bias into the system as well.
The way that you describe the legal treatment of bias, it sounds very black and white from my perspective. And I’ve always perceived military contexts and military operations as a little bit more gray. I don’t think it’s always black and white. And that could be argued for any context, honestly. There probably isn’t a context where things are just simply black and white. But specifically in a military context, there’s a lot of gray.
So how do you capture that from a legal perspective? If we try and apply this very rigid legal approach to military weapons, I don’t think it’s actually reflective of the true context and the true nature in which these weapons are being used and applied. So how do you try and implement or capture that kind of grayness that comes with warfare?
So I think it’s important to note that, when we are doing things like weapons reviews, we’re reviewing a weapon for its normal or anticipated use, and we’re testing it in particular use cases and scenarios. We’re testing it on a number of edge case scenarios. But armed conflict is messy. It is unpredictable. The way that people will identify and augment or adjust practices is also unpredictable. So there are limitations put on the system as a whole to try and ensure that everyone stays within the rails. There are going to be a number of matters, incidents and uses around the edges that are still potentially lawful, but are questionable in terms of outcome.
So in those situations, and that reflects current military operations as well, there are additional guardrails that are put in place if the conduct of the military forces or their capabilities starts to edge towards what we don’t want to see happening, whether that’s for reasons of law, policy or strategy. I mean, everyone talks about the idea of wanton destruction by these kinds of capabilities.
But of course, it’s not in the interests of the strategic approach to armed conflict, to level the ground that you are trying to liberate in the first instance, because that comes with a death and destruction bill that counters the entire purpose of the military operation in the first place. So there are a number of balancing considerations, in addition to law, that will assist in keeping those capabilities operating in the way that is desirable in terms of that armed force and the broader sociopolitical structure that it sits within. Because of course, it’s not just the law that directs how the armed forces can operate. There is a very strong and close link to the political leash as to what is authorized and what is not authorized in particular operational context and uses.
So in relation to the idea of the use of weapons in warfare being gray, I think the way that the review process was negotiated and agreed on as an international rule under Additional Protocol I reflects that. If you look at the discussion and the discourse of the states in negotiating that particular article, there were concerns about overreach and about limiting what states can do in situations of armed conflict, because of course states don’t want to be hamstrung in their ability to defend themselves or achieve their military outcomes. So they don’t want to be too limited in what they can do. But then on the other hand, there is this humanitarian balancing requirement, to try and make sure that harm is limited where there is a resort to armed violence, which is of course the last resort under the international legal system, and technically illegal to start in the first instance. So there shouldn’t be wars, but we know there are, because people break that rule.
The consequence of that is that we want to have a set of rules in place to try and limit the harm that happens when we are already in this last resort situation. And those rules are contextual. A lot of the rules that are applied relate to assessments by the humans that are making the decisions at the time, which is again where this tension comes in: how much of that decision-making can be given to a machine to do on behalf of the human, and what is the oversight mechanism to make sure that the human still controls what the machine is doing? Because of course, the machine doesn’t have agency, as we’ve noted already. There is a legal obligation that the human is in control of what the machine is doing, and particularly on the battlefield, that’s a relevant consideration.
So with all of those things in mind, there is built into the system a recognition that armed conflict is gray and messy, with that outcome of trying to end the conflict as quickly as possible, but with that balancing requirement of humanitarianism. So most of the rules that apply, in relation to both the review and the use of these weapon systems, have to balance that idea of: this is a thing that’s designed to kill people or break things, but we want to make sure it does so in the most humane way possible. So there’s an inherent tension in that in the first instance.
Can we go back to the concept of dignity, which you raised earlier? I’m wondering what that looks like in an armed conflict context, and how you think about that when considering technology and its role in that kind of conflict.
Well, I’ve recently undertaken, with a number of my colleagues, an assessment of all of the submissions that states have made at the Group of Governmental Experts over the last seven years, talking about the Martens Clause, and talking about that idea of what are the dictates of public conscience that might say that this kind of weapon system is a breach of dignity.
To be honest, I don’t think there is a very good articulation by states or non-government organizations of what they think that means. It feels like it’s a gut-feel reservation in relation to this idea of killer robots. And of course, the campaign against killer robots has been quite active in the GGE, in relation to their concerns about lethal autonomous weapon systems. But I haven’t been convinced by the arguments, because they haven’t really clearly articulated what is dignified about dying in a situation of armed conflict, whether it’s conducted by someone directly killing you or done at a distance. So I think death in situations of armed conflict is undignified. The intent of the body of law, of the law of armed conflict, is to reduce suffering for combatants where possible, and to minimize harm to civilians and civilian objects to the extent possible as well. And I think there is a tension in relation to also thinking about dignity as a concept versus humanity. And I think there are distinctions between those things.
Now, of course, there are specific obligations to ensure the dignity of people who are captured, for example. So the treatment of prisoners of war needs to be dignified. Those kinds of obligations exist in the body of the laws of armed conflict. But the idea that a machine killing you is somehow undignified compared to a human stabbing you directly, for example, I personally don’t grasp that difference. I think the difference may be in relation to the idea that you’ve been reduced to an algorithmic prediction, and that your life has been taken as a consequence of that algorithmic prediction. But I don’t see that as particularly different from your life being taken simply because you’re a member of an armed force. So I think that that inherent indignity in relation to the right to life in conflict is there, regardless of the means or method by which your life is taken.
I agree with that entirely. One of the things I always struggle with is when people use the word humane or humanity when talking about warfare or lethal autonomous weapon systems. So one of the common arguments is that, it’s inhumane to be killed by a lethal autonomous weapon system. To take a life, the decision should always be made by a human. And I’ve always argued that it’s a complete oxymoron. It’s inhumane to kill someone, period. But warfare is a really difficult thing.
And so for me, I’ve always really struggled with people using that argument because I’ve always been, very similar to you, in that I think it’s undignified either way. I do think we need to take care in warfare for sure. But we can’t just go around doing anything we want. There should be some care taken because the result is the loss of lives. But I agree with you entirely that this concept of dignified or humane, in the concept of warfare, I just don’t think it sits very well.
I will say, humanity is actually one of the underpinning principles of the laws of armed conflict. So I think, again, this comes back to the particular use of words in particular contexts. What is responsibility? What is humanity? The laws of armed conflict are also sometimes called international humanitarian law. I prefer the laws of armed conflict because I think it’s important for people who are using these weapon systems that the name is reflective of the fact that we are using these rules to regulate the means and methods of warfare. But international humanitarian law, or the laws of armed conflict, has, as one of its underpinning principles, the concept of humanity. And it is really that search, or that reach, to try and enhance where possible that level of humanity for those who are going to suffer, and to reduce the consequences of suffering in the situation of armed conflict where possible.
And I think there’s probably a dual use of that term when we’re talking about particular weapons being inhumane: on the one hand, in terms of the ability to use them in an indiscriminate way, a way that offends the principle of distinction in armed conflict, or a way that causes superfluous suffering beyond the bounds of what is deemed acceptable in situations of armed conflict; versus, on the other hand, just generally something that is going to cause harm. Because yes, the situation of armed conflict necessarily requires there to be harm to achieve the military outcome, to end that conflict.
The purpose of warfare really is to impose harm on another party. So I would say that suffering is inherent within warfare. Now, I do a lot of work around safety and risk in both civil and military contexts. And one of the things I always talk about is the different risk thresholds between the two. So what’s interesting for me, hearing you talk about this, is the different thresholds or benchmarks of acceptability for things like harm and suffering. You’re talking about how a lot of these legal parameters are trying to reduce superfluous suffering, or to reduce suffering in general. So how do you grapple with that when, really, arguably, the entire purpose of warfare is harm and suffering?
Another principle of the laws of armed conflict is the prohibition on unnecessary suffering. So a concept that underpins this body of law is that the means and methods of warfare available to states to achieve their political ends, or the outcome of whatever the reason they’re fighting the war is, are not unlimited. So they can’t use torture. They can’t use unnecessarily harmful or injurious effects. So there are particular bans on things like non-detectable fragments.
So you can’t create weapons that are designed to explode bits of plastic to cause humans harm that you can’t then take out in surgery because you can’t find them from x-rays. You can’t use blinding lasers because that was deemed to be unnecessarily harmful. You can kill a person, but you can’t blind them using a laser. So there are these tensions in the rules that exist, but that’s because there is still that overarching need in the context of this body of law, which is again, a body of law of last resort in my mind, in a very pragmatic body of rules to try and achieve that end, in the quickest possible way, with the least impact on civilians.
So a lot of people talk about this in particular when they’re thinking about these new technologies and the precision weapons in particular about this idea of the precision paradox. So there is an undertaking by states to reduce the amount of harm that they have in relation to civilian populations. So if you have a more precise weapon, it’s going to strike the object that you want it to strike and cause less harm to surrounding objects. So it’ll cause less collateral damage, less incidental effects to the civilian population, civilian objects. So that’s great on the one hand.
But then on the other hand, some people argue, well, actually what that allows you to do is to go places and use those weapons where you ordinarily wouldn’t have, because it was too close to the civilian population and what you had previously would’ve caused too much harm. So by having these more precise weapon systems, you’re actually drawing the fight into heavily populated urban areas. So it’s all a balance. And this is where some of these new technologies and weapon systems have second and third order effects in relation to how they might impact the civilian population and that overall risk analysis of: is it causing unnecessary suffering? Is it causing disproportionate harm in the use of these systems? So it is all a balancing act, but I think it really does reflect the idea that this body of law, as it exists, is, as I’ve said, unfortunately, a body of law of last resort, because technically speaking, we shouldn’t have wars. They’ve been banned by Article 2(4) of the United Nations Charter. We can defend ourselves, but if we’ve done that, someone’s already broken the law to get there.
So Ukraine is a perfect example. But once you are in the conflict, this body of rules, the laws of armed conflict, applies. And so there’s this balancing concept: wanting to achieve the most humanitarian end to the conflict in the swiftest possible way, versus the inevitability that there will be death, destruction, and the breaking of a lot of things, just because of the existence of the war itself.
So this has been a fascinating discussion. I’ve learned so much. I’m wondering if there’s anything that we didn’t cover that you might want to share? Anything that you think our listeners might want to hear about.
It might be useful to note that there is a lot of work being undertaken, as you’ve noted, in the civilian context and also in the military-specific context, about how to put guardrails around the use of these technologies and how to make sure that they are properly reviewed. And there is a lot of academic debate going on about whether there are any gaps in the law in relation to these new systems that require new prohibitions, bans or permissions to be put in place, so that they can be used in accordance with those general principles of the laws of armed conflict that we’ve already spoken about. So that work is ongoing, and I think there’s a large amount of focus, internationally and domestically, on those kinds of regulatory and legal requirements. And, Zena, you’re doing work from the safety perspective, which obviously dovetails into these broader international legal compliance requirements as well.
So I think, despite what is a bit of a glib discussion about conflict, we can probably take heart from the idea that these capabilities are being designed with a mind to these considerations. And there is a lot of thought going into how we need to control these things, in the same way that there’s a lot of thought going into whether or not we should just ban things like generative AI altogether. So it’s part of the broader conversation, but I certainly think that it’s producing a lot of really good thought, and there are good international debates going on about this at the moment. So it’s not something that we’re sleepwalking into, in comparison to some other technologies that have taken us by surprise, but it certainly does require continued attention.
Thank you so much for joining us today. It’s been an excellent conversation, and I really look forward to being able to share this with our listeners.
Thank you so much for having me, and I look forward to having you on my podcast in the near future.