Podcasts Season 3

S03E01: Relics, reciprocity, and risk: A tale of two co-hosts

It is the launch of season 3 of this podcast, and we thought it was high time for a positionality statement – er, episode. Why not align it with our debut on YouTube? Listen in for an episode featuring our co-hosts, Liz Williams and Zena Assaad, exploring everything from relics and reactions to reciprocity, risk, and the complexities involved in creating and regulating AI systems in the real world.

Listen and subscribe on Apple Podcasts, iHeartRadio, PocketCasts, Spotify and more. Five-star ratings and positive reviews on Apple Podcasts help us get the word out, so if you enjoy this episode, please share it with others and consider leaving us a rating!

Credits:

Co-hosts: Zena Assaad and Liz Williams

Producers: Robbie Slape, Zena Assaad, Liz Williams, and Martin Franklin (East Coast Studio)

Thanks to the Australian National Centre for the Public Awareness of Science for letting us use their podcast studio for recording.

Transcript:

Liz  

Hello, Listeners. This is season three of the Algorithmic Futures Podcast, where we talk to technology creators, regulators and dreamers from around the world about how complex technologies are shaping our world. My name is Liz Williams,

Zena  

And my name is Zena Assaad and today we’re joining you from the Australian National Centre for the Public Awareness of Science here at the Australian National University, and we are currently sitting in their podcast studio. 

Liz  

So if you’re just joining us for the first time, welcome. For those of you who are longtime listeners, we have an exciting announcement: we have branched out to YouTube. So if you want to watch any of our episodes this season, head to our website, algorithmicfutures.org, for the YouTube link and all of the details. You will find some lovely, you know, first awkward moments in this first one here, because we are figuring everything out for the first time. So it should be kind of fun, including the inconsistent background. This is also a bit of a different kind of episode, just to mark the start of a new season. The guests today are us, because we thought it’d be fun to explore the perspectives we’re bringing to this podcast. Our history has really shaped who our guests are, and the kinds of questions that we share with them. And so we thought it might be a good idea to help you understand where we’re coming from, to kick this season off.

Zena  

Yup. And I think with that, it’s probably best to dive straight into you. And the reason I’m diving straight into you is because you actually have a huge change happening this year. You got a new job.

Liz  

That’s true. So, well, I’ve been at the ANU forever. And I’m still at the ANU – well, at least for the foreseeable future. But yes, I’ve just started a role as Associate Professor at the School of Engineering, which I’m really excited about. I get to basically bring, like, my past and my more recent past together for a new role looking at nuclear systems. So I guess I should talk about what that past is, maybe a little bit.

Zena 

I think so, and especially the difference between your past and your recent past. I think a lot of people think that academics have the same research area across decades of their career. And that might be the case for some academics, but it’s definitely not for you, or for myself. We’ve had, I might say, more diverse research areas. Yeah. So maybe talk about how yours has changed over time and why it’s changed.

Liz  

Sure, yeah. So I guess I’ll start with the fact that I’m a nuclear physicist by training, which, you know, if you’ve been listening to this podcast for a while, you already know – I’ve mentioned that several times. I studied nuclear physics because I stumbled into this lab called the Wright Nuclear Structure Lab back when I was an undergrad at Yale. And I walked in and they had an accelerator, they had all of this amazing equipment. And they were also nice, and let me do things. And I can’t overstate how important that is for bringing somebody into an environment where, you know, women, for instance, often were not at that time. My group, I think, was more than half women. And they also then–

Zena  

Wait, more than half the group was women. 

Liz  

Yes. Oh, okay. It wasn’t always like that. But when I joined, that was very much the case. And actually, the person that suggested that I go talk to people was my teaching assistant at the time, a woman called Deseree Meyer. And she was like, yes, they’ll let you do interesting things. They’re a great place to work. And that’s how I ended up in nuclear physics. So that was the start of it, 

Zena  

You ended up in nuclear physics because people were nice to you?

Liz  

It sounds pretty lame. 

Zena  

No, it does, it really does make or break the decisions that you make when you’re younger. So, a funny story. I wanted to be an accountant when I was in high school, and I took accounting classes in years 11 and 12, which are basically your last years before you go into uni. And in year 11, I had this wonderful teacher – and funnily enough, I don’t remember her name – but she was incredible. And I did so well in her class. And I was like, I want to be an investigative accountant. Like, that was my career goal. And then in year 12, my last year, I took the year 12 version of the course. And it was a different teacher. And I remember his name, and we did not get along. And he was not a great teacher. And I ended up going into aerospace engineering instead.

Liz  

As you do.

Zena  

It’s not — It’s like people being nice to you does actually shape your decisions. 

Liz  

Yeah, definitely. Oh, it’s kind of incredible. But I think it’s reflected– I mean, to be honest, I did grow up reading Scientific American and watching Star Trek when I was a kid, like, you know, I was always going to be like, you know, a nerdy kid. 

Zena  

It was in your genes. 

Liz  

Yeah. But the fact is, in nuclear physics, especially experimental nuclear physics, you have to work together as a team. And so part of this is them being nice. Everybody worked together – like, we were up until three in the morning trying to fix things all the time. And, you know, you had to do quite a lot of work to get any of these experiments working. And I think that’s part of why that environment was so focused on making sure that I felt like part of the team, that I felt like I was valued. So anyway, that’s how I ended up in nuclear physics. And that’s also why I was drawn to these smaller accelerator facilities, which is where I spent most of my nuclear physics career. I spent a short stint working in applied nuclear science, but mostly I was at accelerator facilities. And that’s really because I got to do everything. There are pluses and minuses to that. But also, because I was working–

Zena  

Liz is great, we’ll just give her all of the work.

Liz  

But partly because we were working in these small teams. And so I did that for a really long time. And one of the things that I was doing while I was working in nuclear physics was building all of these pieces of equipment, these codes, in order to make these experiments work. We had international users that would come and would need to use our data analysis code; we had a bunch of different groups that would need to collect data using the same pieces of equipment. And I ended up being involved in designing those things. And one of the things that I realized when I was doing that work was that the choices I made with regards to how I was designing these things, or even how much information I provided to the users, how many error messages I put in my code – that would actually shape the science that people were doing.

Zena  

Yeah. 

Liz  

And so I realized that this whole science pipeline was very dependent on how we were making use of technology, how we were designing it, you know, the choices that we were making about all of these key aspects that – once you get them implemented, they’re day-to-day, right, and you forget that they exist. But if you haven’t done something right in that whole pipeline, you can create all of these opportunities for difficulties.

Zena  

And I think part of that is the communication of those decisions as well. Right? Like, you’re not just making a decision.

Liz  

Oh, yeah, definitely. So, you know, again, error messages: like, when has whatever situation you’re looking at gone beyond the bounds of something I’ve designed?

Zena  

Yeah, right. 

Liz  

You could say nothing when that happens. Or you can say something. And usually, if you say nothing, nobody will look into it, or realize that’s happening, right? If you say something, you know, you pick it up. I mean, that’s not totally the case – like, sometimes you will notice artifacts in the data, and you’ll try and uncover what’s going on. But in other cases, the effect is subtle, and you don’t realize it unless, you know, there is some reason to believe that something’s going wrong. And so that’s a really long-winded way of saying that I started to get into looking at, you know, how we create these algorithmic systems that we use so often these days, in particular looking at artificial intelligence, which, you know, sounds like it has nothing to do with nuclear physics. But actually, the data pipeline that I’m just talking about has very similar aspects in nuclear physics and in machine learning; this is why a lot of physicists end up working in data science.
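To make that concrete, here is a minimal, hypothetical sketch of the difference Liz is describing: a pipeline step that quietly clamps out-of-range data versus one that warns the user. The function names, bounds, and clamping behaviour are invented for illustration; they are not the lab’s actual analysis code.

import warnings

DESIGN_RANGE = (0.0, 4096.0)  # hypothetical range the analysis was designed for

def calibrate_silent(value):
    """Clamp out-of-range data without saying anything, so users never learn the bound was hit."""
    lo, hi = DESIGN_RANGE
    return min(max(value, lo), hi)

def calibrate_with_warning(value):
    """Same behaviour, but flag data that falls outside the design range."""
    lo, hi = DESIGN_RANGE
    if not lo <= value <= hi:
        warnings.warn(f"value {value} is outside the calibrated range {DESIGN_RANGE}; result may be unreliable")
    return min(max(value, lo), hi)

# A user analysing slightly out-of-range data gets the same number either way,
# but only the second version tells them their result rests on an assumption.
print(calibrate_silent(5000.0), calibrate_with_warning(5000.0))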

Zena  

Yeah. 

Liz  

And so we ended up doing research on AI-enabled systems, particularly in safety-critical contexts, and looking at, you know, how the way we design those systems, particularly those designed to go out at scale, shapes the potential impacts they have. And so that’s what I’ve been doing most recently. And in this new role in the School of Engineering, I get to actually bring back my nuclear expertise. I’ve been getting out the old textbooks and preparing to–

Zena  

Is that a positive experience?

Liz  

It’s– it’s nice; I’m remembering more than I thought I would. But also, like, you know, there’s something about bringing parts of yourself together in a new way that is very satisfying.

Zena  

You know, as cliché as it is to say, when I joined the School of Engineering, I joined the aerospace cluster. In the beginning, I was teaching on a course that I really hadn’t touched since my undergrad. And so obviously I had to go back and do a deep dive and make sure that I was well up to speed on it all. And maybe it’s experience that contributes to this, but it’s amazing how differently you view things when you’re not a stressed-out student who’s like, Dear God, please don’t make me fail.

Liz  

It’s true. Now, I’m thinking about it from the student perspective. How miserable are they going to be this semester? No, I actually hope that they are not miserable. But like, 

Zena  

I’m sure they won’t be! I’ve been a part of your classes before. I know you develop really exciting educational content, which is a little bit less dry than the usual kind of, you know, let me just talk to you for two hours.

Liz  

I appreciate that. I’ll do my best. But I’m looking forward to spending the next year really looking at what, you know, this work in safety-critical systems evolves into in this new role at the School of Engineering. I think it’s an exciting time to be looking at all of this, because Australia is actually embarking on vastly growing its nuclear expertise in a very short amount of time because of AUKUS.

Zena  

Yes.

Liz  

That’s going to have impacts across the nuclear stewardship landscape. And so I think it’s actually a really interesting and important time to be thinking about, well, how can Australia uniquely contribute here? And, you know, where does my work fit into that?

Zena  

So it’s not really a change in your research? It’s kind of an evolution of your research. 

Liz  

Yes. 

Zena  

Which I think is the case for most academics. 

Liz  

Yeah. 

Zena  

I think sometimes it can feel like you’re doing little pockets of different things, like, Oh, I find this interesting, and I find that interesting. But I don’t actually think that’s the case. More often than not, it’s just this kind of really natural progression and evolution into what your research becomes.

Liz  

Well, it’s true. And I think also the research skills that you develop as an academic, whether you’re working in, in my case, nuclear physics, or doing qualitative research on, you know, AI-enabled systems, your abilities as a researcher do translate. Your ability to question what you’re doing, to question the assumptions that are underlying what you’re doing, your ability to dig into something in minute detail that you didn’t know about before – all of that will translate.

Zena  

It does. You know, I read – and I’m gonna forget it now, but I will send it to you – I read the most fascinating article the other day, and it was written a very long time ago, decades ago. The whole premise of the article was that academia should embrace idle curiosity. And it was talking about how, you know, there is the academic grind, and we’re constantly having to produce outputs. More often than not, we’re encouraged to produce things that are seemingly correct – even though negative results are still results, in that you can still learn something from them – we’re more encouraged to produce things that are like, this is how it should be, and this is the correct answer. And this whole article – it’s about 10 pages, it’s incredibly fantastic – talks about the benefit of idle curiosity, and it gives this long history of all of the things that have been discovered, or advanced, or improved because of these really serendipitous moments of curiosity that kind of led to something. And I think that’s kind of what I think sometimes when I go off on something that some people might think is tangential, like a research project I’m interested in – I’m like, no, no, it’s just idle curiosity.

Liz 

You know, I think one of the things that’s actually really tough about this is that there’s a lot of work that goes into creating value, in terms of creating a new idea that’s going to have impact, that is very much invisible work. You know, it might be that you have 10 null results – or 100 – that then feed into that idea that actually is going to change the world, to put it in the most cliché way possible. You might not be able to publish all of those null results, but they are actually contributing to the value that you eventually contribute. And so, you know, I do think there’s something there about the hidden work that we do not really foster in the way that we currently assess academics. But that kind of invisible work, you know, it’s always undervalued, and you see it everywhere. I mean, going back to, you know, data and nuclear physics: I remember the stories in this lab that I did my PhD in. We had this, like, giant blue ball. It was a relic – it was huge, they couldn’t get it out of the experimental hall. And you know, we’d like–

Zena  

What was it made of?

Liz  

I have no idea – steel? It was huge. It was massive. And, like, you know, there was nothing that we could do with it, except, like, decorate it for St. Patrick’s Day, which actually happened once–

Zena  

Oh, I love that. But it’s blue and not green.

Liz  

I know. But it had a hat that was green that we kind of stuck on the side of the ball, or I should say one of my colleagues did. That was like one of the few photos that I took. 

Zena  

St. Patrick’s Day was the only occasion? Not Christmas, not – I don’t know, you celebrate Halloween in the US? No, no, just St. Patrick’s Day.

Liz  

Exactly. We were a little superstitious in the lab. But anyway, it was this relic. And back in the early days of the lab, they used to put up this photographic paper and do nuclear reactions in the center. And the little dots that you’d see on the photographic paper – that was the data that they would then process. And apparently they had, like, you know, a team of women who would sit and look at these little dots on these photo sheets, and that would then become the data that would lead to the discoveries that would then, you know, allow them to figure out some basics of nuclear structure. I’m pretty sure that these women did not get their names on a paper; I’m pretty sure that most people do not know they exist. But, like, there are stories of this everywhere, right? Like, you know, you search on Flickr Commons for ‘NASA women computers’, or even ‘NASA computers’, and you’ll see images of women come up, right?

Zena  

Yeah, I think the film Hidden Figures really popularized those women. But I know that the Harvard College Observatory also had about 80 women who worked as human computers, and they were analyzing astronomical data. And it’s the same thing there. Lots and lots of work – it’s so time- and labor-intensive to do that work, and this is before we had computers as we know them today. Absolutely no recognition, no acknowledgement of the work that they did in any publications. And funnily enough, the first woman who actually worked at the Harvard College Observatory was the maid of the professor who headed up the observatory. And the reason he hired her is because his wife was like, “Oh, she’s really smart. I reckon she could do this work for you.” And he was like, “Okay.” And she was cheap labor because she was a woman. And so he hired her. And it turned out to be so successful, because she was incredibly intelligent. And that led into him continuing to hire more women, because again, they were cheap labor. And so he ended up with about 80 women working at the Harvard College Observatory as human computers. And none of them got any recognition. And when I say none of them got any recognition, I’m talking about academic publications and academic conferences and outputs – they did not get recognition in those forums at that time.

Liz  

So I’m remembering a recent article that I read by Eryk Salvaggio. This was something for Tech Policy Press on LAION-5B – I’ll get to what that is in a second – Stable Diffusion 1.5, and the original sin of generative AI. And his article was basically looking into what is actually in the data that we are using to train these AI models. So it’s probably worth thinking about the fact that, okay, you need tons and tons of data to train these models. So to train ChatGPT, or Stable Diffusion, which is, you know, a generative AI tool that you can use to make images – when you put in a prompt that’s art deco flowers, or whatever, you get an image that kind of looks rather like you might imagine an art deco flower art piece would look. They’re trained on masses of data, huge amounts of data.

Zena  

Publicly available information as well. 

Liz  

Yes. So in this case, LAION-5B is an open source dataset. And it’s used to train a lot of the computer vision type models that are built on this kind of data, because, you know, it’s there. These datasets are hugely expensive to produce. Often they’re produced by, like, scraping the web, just pulling stuff off the web. Sometimes there’s some human involvement in tagging the images, like seeing what’s in them, or in trying to do some kind of quality control. But because there are literally millions of pieces of data in here, the scale of the datasets alone is vast. And the work required to create a dataset like that is actually pretty undervalued. And so what he talked about in this article was this discovery – I think this was by, I have to look at my notes, David Thiel, from the Stanford Internet Observatory, whose work built on Abeba Birhane’s, I’m hoping I’m pronouncing her name right. They basically found child sex abuse material in this open source dataset that’s being used to train all these models. This isn’t the first time this kind of thing has happened. But it sort of spoke to me about the undervaluing of data and data work: like, what do you actually need to do this well, and what does it cost to produce it? And what is it actually valued at, in terms of what we pay the workers that do this work, you know? Generally, you find all kinds of terrible stories about the people who are doing this work being paid peanuts, and often coming across horrific images – or, in this case, nobody’s doing that work at all. And this kind of stuff ends up in these datasets that are being used to train these models that we then go and make these images with – images that, you know, unknowingly, we’re making on the backs of people who are very much paying the price for the way that we build these things.

Zena  

I think you’re absolutely right. And something I will say in defense of the technology industry: we see this outside of the technology industry as well. This is a result of business models more so than it is a result of the product that’s being produced. So from the perspective of this particular AI image generation tool, it’s open access, right? They want people to be able to use it freely, so they’re getting their revenue through some other way. They also, because they’re going to allow people to use it freely, need to produce it as cost-effectively as possible. And that’s where all of this work that goes into the data, this invisible work, really is the first thing on the chopping block. No one sees it. People don’t care about it until it becomes a problem, like it is today. But we see it in any other industry – like we see it in the fast fashion industry. Everybody wants a bargain. Everybody wants to go to Kmart and buy, like, a gorgeous $10 top and be like, Oh, it’s $10 from Kmart – never mind how it was produced, you know, produced in a sweatshop by a child not getting paid a living wage and working under terrible conditions. And that really is invisible work, because on the other side of the world we go into Kmart and are like, What a bargain, without any understanding or visibility of how that bargain got to your doorstep, basically.

Liz  

Oh, yeah, no, yeah. We’re dealing with like the consequences of human behavior, right? Like, this is like–

Zena  

Exactly. This is not just the– and the reason I defend the technology industry is because there’s often these narratives of like, you know, the technology industry is going to ruin the world and AI is going to bring the end of humanity; it’s like, no, humanity will bring the end to humanity really. 

Liz  

Well, I mean, this brings to mind some work that my student, Kathy Reid, has been leading. She has been looking at this for voice datasets and speech-to-text technology. So you’re thinking, like, you know, any of the tools that we’re using, or that YouTube is using, for instance, to transcribe this video – this is the kind of tool set that she’s been looking at. And she’s been looking at the data pipeline. And recently we put a paper out in the Australasian Language Technology Association workshop proceedings that looked at, you know, who is using these datasets? Why are they using them? What’s shaping the choices that they make with regards to what they’re doing? And so, you know, yes, if you’ve got money, you can create your own dataset to your specifications, or you can buy it. But there’s a category of people who don’t have that money. And so they’re using these open source datasets, or whatever datasets they can get a hold of, simply because that’s what they need to do to create the models that they need, for whatever reason. It’s actually one of the arguments behind, you know, the value of open source: you don’t have to be a big company like Google to get access to this data set and make a model that might be useful to you. And that gives access to people or communities that might not otherwise be able to create these kinds of models. And so there’s a tension there, right, of, you know–

Zena  

Equal competition. 

Liz  

Yeah, yeah. So, you know, there’s a reason that these open source models exist. And there are really good reasons that we should try and work to make sure that they’re still there. At the same time, because of how they’re produced, you know, these things can happen. But granted, I’m talking about millions – billions – of pieces of data here. Unless you’re making everything from scratch, it can happen anyway. So it’s–

Zena  

It’s true. And I think the focus shouldn’t be so much on the fact that the images exist. We all know they exist. And we all know they’re on the internet. And we all know that there are groups of people trying to – well, you can’t stop it, but trying to minimize it. I think it’s gone beyond the point of being able to be stopped, sadly enough.

Liz  

Yeah. 

Zena  

So there are a group of people who are trying to minimize it. But I think the reality is, specifically for this AI image generation tool where it kind of came up, I don’t think anybody who was designing that tool stopped and thought, “Oh, I wonder if child sex abuse images will come up in my data.” I don’t think that would have even crossed anyone’s mind.

Liz  

Well, actually, I think that they thought that might have been handled, but like, the thing is, they were using an automated tool to do that. And that automated tool had a warning that it doesn’t always work. Which is true for any of these tools. None of them are 100%.

Zena  

So they tried to embed it in the design process. 

Liz  

Yeah, yeah. So it’s just, you know, they did their best given the circumstances, but stuff still happens. And realistically, that stuff is reflective of the risk of scraping data off the internet. I mean, it’s all out there.
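As a rough illustration of why automated screening is never a complete answer, here is a hypothetical sketch of a scraped-data pipeline that filters items with an imperfect classifier. The classifier, keywords, and threshold are all invented for the example; real systems use statistical models, but the failure mode is the same: whatever the screen misses ends up in the training set.

from dataclasses import dataclass
from typing import Iterable, List, Tuple

@dataclass
class ScrapedItem:
    url: str
    caption: str

# Toy stand-in for an automated safety classifier. Real classifiers return a
# score and are never 100% accurate.
BLOCKLIST = ("harmful", "abusive")  # hypothetical keywords, for illustration only

def harm_score(item: ScrapedItem) -> float:
    return 1.0 if any(word in item.caption.lower() for word in BLOCKLIST) else 0.0

def screen_dataset(items: Iterable[ScrapedItem], threshold: float = 0.5) -> Tuple[List[ScrapedItem], List[ScrapedItem]]:
    """Split scraped items into (kept, flagged) using the imperfect screen."""
    kept, flagged = [], []
    for item in items:
        (flagged if harm_score(item) >= threshold else kept).append(item)
    # Anything the screen misses lands in `kept`; at web scale, even a tiny
    # miss rate means unwanted material slips into the training data.
    return kept, flagged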

Zena  

And so when you cheap out on your data processes. 

Liz  

Yeah, I think this is a good topic to segue into some of your work, which has been looking at risk and risk-based approaches. Well, I’ll let you tell it, actually – tell us a little bit about, you know, the work that you’ve been doing. Particularly, I know you recently had an article out about Australia’s approach to regulating AI. Can you tell me a little bit about what’s been going on there and how you address this kind of stuff in your research?

Zena  

Yeah, so my research predominantly looks at safety of emerging technologies, and that kind of started from a systems engineering perspective, because really my background is in aerospace engineering, but I worked predominantly as a systems engineer in that field. And so my research looked at safety from a systems engineering perspective. So I did things like develop safety frameworks, and I looked at assurance mechanisms for emerging technologies. This year, and really towards the end of last year as well, my research has kind of evolved. And it’s evolved to redefine what this concept of safety means. I kind of started this work when I started my Trusted Autonomous Systems Fellowship, which sadly has now come to an end. I’m so sad – it was an amazing two years. But that’s really when I started to look at the safety landscape a little bit differently. I looked at it from a perspective of: okay, I know it’s now going to include things that extend beyond physical hardware, including things like trust, including things like ethics. How do we include these really quite intangible things? And how do we build mechanisms to assure the safety of these different characteristics? What I didn’t do, though, was stop and take a step back and say: why are these now included in safety? And how has the safety landscape changed? I kind of did that a little bit loosely. You know, we spoke a little bit about the changing relationship between humans and machines, particularly in a human-machine teaming context – I do a lot of my research in human-machine teaming. But I never actually sat down to really unfold that argument.

Safety has a decades-long history, and I was building off of that decades-long history. So all of my work to date still stands – I still stand by all of my work. But what I’m doing now is actually digging into: okay, if I’m going to redefine safety, how am I redefining it? Why am I redefining it in this way? And how does it apply to emerging technologies? I focus very specifically on a human-machine teaming context. I’m very interested in that relationship between humans and machines. And when I talk about emerging technologies, obviously the really big one is AI. This is what everybody’s talking about.

Liz  

Yeah. But not all of our listeners will really understand what human-machine teaming is – I think it’s something that you employ in a very specific context. So can you tell us a little bit more: what does human-machine teaming mean? What kinds of systems are you often talking about when you’re doing research on human-machine teaming?

Zena  

So the difference, really, between human-machine teaming and a term I think most people would be familiar with, which is human-machine interaction, is that human-machine teaming is characterized by a reciprocal relationship between humans and machines. And you get this from more advanced capabilities in machines. So human-computer interaction is a little bit more of a one-sided relationship: much more hierarchical, you know, very much a command-and-control kind of relationship. Human-machine teaming is different. It’s more balanced; there is reciprocity between the human and the machine. And humans don’t necessarily sit at the top of a hierarchy – they play more of a teammate role. And the reason a lot of the work that I do leans a little bit into the defense space is because this idea of removing a hierarchy from a teaming environment, removing hierarchical structures from a particular operation, is incredibly fascinating from a defense perspective, which usually has a lot of legacy hierarchical approaches and hierarchical operations. So that’s been incredibly fascinating for me to explore.

Liz  

It’s also something that you see a lot – and, you know, NASA does a lot of work on this, for instance – because in these kinds of crazy environments that you’re designing technologies for, you end up having these situations where, like, the human is gonna have certain capacities that you’ll never get out of a machine, and the machine’s gonna have capacities that you will never get out of a human. And really, it’s the integration of those two in these really complicated environments that allows the team to achieve far more, at least potentially, than if they were all humans, or all machines, or if you had this kind of, you know, command-and-control type relationship.

Zena  

Yeah, it’s interesting. The Australian Army actually defines human-machine teaming in a similar vein: they describe it as achieving an output that neither the human nor the machine could have achieved independently. So they ascribe to a very similar description. And I agree with that – I agree that it’s the combination of human and machine capabilities to achieve a particular outcome. But the addition I make, which I think is a really important addition to defining human-machine teaming, is this element of reciprocity: this back and forth, this give and take between humans and machines. And this is not to anthropomorphize machines. I’m very careful about this, because, you know, I’ve been on so many rants with you about this. I hate the anthropomorphization of machines. I’m gonna be frank: I think it’s ridiculous. Machines are machines. Yes, the way we interact with them might change how we feel about them on a personal level, but that doesn’t change the fact that it is a machine. And it is, in most contexts, really quite a binary construct, right? Like, yes or no, it will or will not do something. So I’m not anthropomorphizing machines. What I’m saying is that the capability and the applications of machines have evolved to a point where it’s no longer ‘I give a command, and I get an output.’ ChatGPT is a great example of that: you give a prompt, you get a response in return. It’s very much a binary interaction, right? I give you something, this is what I get in return. But it kind of stops there. Human-machine teaming is a little bit different. When you have more advanced capabilities, there is still a boundary around the kinds of outputs that you will get; however, there is some level of uncertainty there. So for example, I know that if I’m in a human-machine teaming operation with an autonomous vehicle, and I’m trying to go from point A to point B, I know that my vehicle is most likely going to drive on this road. I also know that if there’s a particular obstacle on the way, the vehicle is going to avoid that obstacle. And really, all it can do is either slow down and brake, swerve left, or swerve right. There’s still uncertainty there, because I don’t know which of those three options it will choose in the moment, but there’s a boundary around that uncertainty, right? I know it’s gonna swerve left, swerve right, or brake – I just don’t know which of the three. So there’s definitely a level of uncertainty associated with advanced capabilities, but that uncertainty is bounded. I do not ascribe to the narrative that, you know, advanced capabilities or AI capabilities have a mind of their own. I don’t ascribe to that at all. I don’t think it’s accurate.
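One way to picture that bounded uncertainty is as a fixed set of actions the machine can choose between. Here is a minimal, hypothetical sketch; the action set and the decision logic are invented for illustration, not drawn from any real vehicle.

from enum import Enum

class Action(Enum):
    BRAKE = "brake"
    SWERVE_LEFT = "swerve left"
    SWERVE_RIGHT = "swerve right"

def avoid_obstacle(clear_left: bool, clear_right: bool) -> Action:
    """Hypothetical obstacle-avoidance policy.

    Which action comes back is uncertain from the human teammate's point of
    view, but the outcome is always one member of a known, bounded set.
    """
    if clear_left:
        return Action.SWERVE_LEFT
    if clear_right:
        return Action.SWERVE_RIGHT
    return Action.BRAKE

# The human can plan around the bounded set even without knowing which
# member of it the machine will pick in the moment.
print(avoid_obstacle(clear_left=False, clear_right=True))  # Action.SWERVE_RIGHT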

Liz  

Yeah. And for the situation you’re talking about there – you know, how your vehicle reacts to a particular obstacle in the road – the challenge with these kinds of systems is really making sure that the human is going to have sufficient trust in the machine’s capabilities that they’re not going to take over in instances where they shouldn’t be taking over, but they are going to know how to take over, and know to take over, in certain other instances. And that’s one of the things that’s very hard to get right if you are trying to optimize a human-machine relationship for a given purpose, right?

Zena  

Yeah, that’s absolutely right. And this is one of the things I say when people talk about machines replacing humans: I always make the argument that machines will augment human roles, and this is a fantastic example of that. So in this situation, the human is still actively involved; their roles and responsibilities, however, have shifted. And it’s no longer a case – or I don’t think it was ever a case, really – of being able to stand back and be like, Oh, well, the machine made a mistake. In a human-machine teaming operation, it’s like: well, what were your roles and responsibilities as a human? How were you trained? Were you trained to effectively respond to the machine not operating as it was intended to operate? So there’s that liability and that accountability that still sits with the human, which is where it should always sit – I don’t believe in it sitting with machines at all. But you’re absolutely right: the human roles have been augmented; they haven’t really been replaced.

Liz  

So speaking of risk and trust: I recently saw that the Australian Government has released a response to the consultation they had last year on safe and responsible AI. And I was interested, because I did see that they were looking at adopting some of the EU AI Act approaches of, you know, taking a risk-based approach to regulating AI. And so I was wondering if you had any thoughts on the response?

Zena 

I have lots of thoughts on the response. So they didn’t openly say that they were going to adopt the EU AI Act. However, they did explicitly say that they were going to use a risk-based approach, which is the same approach that was taken in the EU–

Liz 

Yes, that’s what I meant – we’re not, we’re not – yeah.

Zena

Anyway, so there are three, I guess, really prominent documents that have come out. So there’s the EU AI Act, there was Biden’s executive order on AI, which came out last year, and then we’ve had the Australian interim response to the consultation from last year. So Biden’s executive order last year was very high level. It spoke a lot about investigations that were going to happen, research that was going to happen, and the groups that were going to be formed to do that work. And then there were some sort of loose requirements around things like transparency and privacy, but very, very high level. And that’s not me being critical of it – I think that’s the reality. You know, AI applications span almost every single industry, and this idea that we’re going to have one overarching regulatory framework that will equally cover all of these industries is not realistic. So starting with something high level, and then maybe being able to – no, I’m not saying this is what they’re saying they’re going to do, this is just my two cents.

Liz  

This is how you would do it.

Zena 

This is – Biden, if you’re listening. So starting with something really high level, and then branching out into more specific regulations for specific industries based on their needs and their requirements – that, to me, is the most logical way to do it. So I’m not critical of it being high level, just noting that it is very high level. And then the EU AI Act came out last year, but it’s only just been officially approved this year. And in that one, they actually had more specificity than the US executive order. So they spoke about a risk-based approach to AI. There were things that were at a low risk, and anything at a low risk basically didn’t have any kind of regulations around it. Anything that was at what was deemed an unacceptable risk would be banned. And then everything in the middle had some kind of requirements around it – requirements around things like privacy and transparency, so things that we’ve been seeing in the media quite a lot around these narratives. I felt it was very in line with where rigorous research currently is. And then we had Australia’s one, which was released very recently, and it was the interim response. And they had a similar thing: they said that they were going to take a risk-based approach, but they didn’t demonstrate what that was, whereas the EU AI Act actually had a pyramid diagram that outlined the different risk levels. I really like the risk-based approach, and I like it because it encourages proportionality. So when I was doing my Trusted Autonomous Systems Fellowship, I developed a safety framework, and part of that safety framework involves a risk categorization matrix. I was categorizing human-machine teaming risk based on three categories – low, medium, and high; quite generic there. And the reason I was doing that is because the argument I was making was that not all machines that will be employed in a human-machine teaming capacity will have the same level of capabilities. They will have different levels of autonomy, and they will have different machine functions – so, what the machine is supposed to do. And so I developed a risk categorization model based on levels of autonomy and machine functions, and it ended up with these three risk categories. And then the risk category determined how you engaged with the risk assessment process afterwards. So I’m super supportive of a risk-based approach, because it encourages proportionality. One of the pieces of feedback I constantly got around the risk categories was: why three? Why three categories? And I think a lot of people would ask the same question of the EU – they had four categories, last I checked – why four categories? So they had very low, which is where you had no kind of requirements, then the very high, which was banned, and then there were two in the middle. I can’t speak for the EU AI Act, but I can say from my own research that I chose three categories because it made the most sense to me to have low, medium, and high. One of the really challenging things about developing anything is knowing where to stop and where to put the boundaries – when something is too little, and when something is too much. The reason I went with three categories is because I felt that I was able to clearly articulate levels of autonomy and machine functions across three different categories.
And I felt that those categories were robust enough, but also broad enough, to be able to include different things, because the devil is in the details. Maybe this machine is sort of capable of this, but not capable of that, and it can get really fuzzy, like, Oh, am I in category five, or am I in category 17? So for me, categorizing it into low, medium, and high was about trying to simplify while also trying to maintain some level of robustness. Now, I’m not saying that the way I did it is the only correct way to do it, but that was the way I did it. And I can see in the EU AI Act they kept it to four as well.
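To make the idea of a risk categorization matrix concrete, here is a minimal, hypothetical sketch of how levels of autonomy and machine function might map onto low, medium and high categories. The scales, scoring, and cut-offs below are invented for illustration; they are not taken from Zena’s framework or from the EU AI Act.

from enum import Enum

class RiskCategory(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

def categorise(autonomy_level: int, function_criticality: int) -> RiskCategory:
    """Map a human-machine teaming system onto a coarse risk category.

    autonomy_level: 1 (mostly human-controlled) to 3 (highly autonomous).
    function_criticality: 1 (benign machine function) to 3 (safety-critical).
    Both scales and the cut-offs below are assumptions made for this sketch.
    """
    score = autonomy_level * function_criticality
    if score <= 2:
        return RiskCategory.LOW
    if score <= 6:
        return RiskCategory.MEDIUM
    return RiskCategory.HIGH

print(categorise(autonomy_level=1, function_criticality=2))  # RiskCategory.LOW
print(categorise(autonomy_level=3, function_criticality=3))  # RiskCategory.HIGH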

Liz  

[Laughter] Yeah. 

Zena  

Because the more categories you get, the harder it is. Like, you’d think that if you had more categories, there’d be more options for people to fit into. It doesn’t work that way.

Liz  

Yeah, 

Zena  

Just no — people’d be like, Is it six? Or is it 24?

Liz  

Yeah, it increases complexity, and that makes it harder to comply. I was interested – you mentioned that the categories you chose then preceded a kind of risk assessment. And one of the things I noticed with, at least, the EU AI Act, the provisional document that they’ve put out, is that they’ve really put those into context. They’re categorizing based on the context of application, rather than on the specific technology itself – even though, if you’re looking at specific contexts, just achieving certain things implies a kind of capability, right? And I know that safety is an emergent property; it’s something where you actually have to look at the whole system in order to figure out, well, are you actually creating a safe system? Or are you creating a system that might look safe but actually isn’t, particularly for some people? Like, it is an emergent property. So I thought that was an interesting and useful approach. But that risk assessment process, where you are trying to figure out what is going to emerge when this system goes out into the wild for this particular purpose – can you talk a little bit about how you go about doing that, what the risk assessment process looks like, and how this risk-based approach is actualized?

Zena  

So I can talk about it from the perspective of my human-machine teaming research. And the reason I have to speak about it specifically from this context is because it actually shaped what the risk assessment looked like. So for human-machine teaming, like I was saying before, it’s very much a more balanced relationship between humans and machines, and there’s reciprocity there. Now, because of that, the human role is just as significant as the machine’s, in fact, in terms of contributions to safety. So it’s not just about how we ensure that the machine is safe and that the machine is trustworthy; it’s also how you ensure that the human roles and responsibilities are safe and trustworthy. And what changes across the three risk categories that I developed is that something with lower levels of autonomy will have greater levels of human intervention, and something with greater levels of autonomy will have lower levels of human intervention. And this changes the roles and responsibilities of the human. So the risk assessment process that came after it was like: okay, if you sit in category one, where you have low levels of autonomy and greater levels of human intervention, what your risks and your risk mitigations will look like might lean a little bit more into the human roles and responsibilities, because the human is playing a more critical role in that particular human-machine teaming operation. However, if you lean towards category three, which has greater levels of autonomy and lower levels of human intervention, there might be a shift in that dynamic. So you might have more of a focus on the machine roles and responsibilities – not no focus on the human, just perhaps a more critical focus on the machine roles and responsibilities, as opposed to the human roles and responsibilities. So really, what the risk categorization matrix does relative to the risk assessment is that it changes the lens, I guess – it changes the lens of how you approach the risk assessment process. The thing with risk assessments, as someone who’s worked as a systems engineer for years – it’s like, what’s the saying? A thread of yarn? A piece of string? What is it? Do you know what I’m talking about? It’s like, as long as the–

Liz  

I can’t remember the expression but you can keep on pulling forever. 

Zena  

Yeah, basically, that’s what it’s like with a risk assessment. Truly, it’s really hard to take a step back and be like, I think I’m going to stop here, because a risk assessment is a paper trail. It’s a paper trail that says that I thought about this thing going wrong, and this is what I’ve done to try and make sure it doesn’t go wrong. And then, in the event that it did go wrong, from a legal perspective you have evidence that says: we actually did think about this, this was the mitigation we put in place, but unfortunately it clearly wasn’t enough – you know, you’ve got some kind of defense for yourself. Now, the issue with these risk assessments is: where do you put the boundaries? It’s really, really hard to put a line in the sand and say, okay, I’m going to stop here, I think we’ve done enough – because, especially for safety-critical systems, like aircraft (that’s a huge one), a catastrophic failure means it’s going to fall out of the sky. Yeah, it’s pretty bad. And so most people want to do their due diligence. But it’s also unreasonable to expect that there will be no boundaries or no stopping point for these kinds of risk assessments. So that’s the other point of the risk categorization matrix: proportionality. It’s understanding that if I have a system with lower levels of autonomy and greater human intervention, what will my risks look like? Will I have greater levels of catastrophic risk? Probably not. But if I’m leaning to the other side, where I have machines that have much greater levels of autonomy and far less human intervention, maybe my risks are a bit more catastrophic. And so that’s the point of that risk categorization matrix – it’s really to put proportionality into perspective when you’re going through a risk assessment process.
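Continuing the hypothetical sketch from earlier, proportionality might look like the risk category setting both the focus and the depth of the assessment. The focus areas and review depths below are illustrative assumptions, not a published framework.

# Hypothetical mapping from risk category to assessment focus and depth.
ASSESSMENT_PLAN = {
    "low": "lean on human roles and responsibilities; standard hazard checklist",
    "medium": "balance human and machine responsibilities; structured assessment with documented mitigations",
    "high": "lean on machine roles and responsibilities; in-depth assessment, independent review, explicit stopping criteria",
}

def plan_assessment(risk_category: str) -> str:
    """Scale the focus and depth of the risk assessment to the risk category."""
    return ASSESSMENT_PLAN[risk_category]

print(plan_assessment("high"))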

Liz

Yeah. Well, it’s probably worth keeping in mind – I mean, this is something that comes out of looking at nuclear accidents, etc. – there’s usually this tension between, you know, yes, you want to manage safety, but you’re trying to achieve something. An organization is trying to achieve something, a community is trying to achieve something, a sector or industry is trying to achieve something. And so eventually you have to actually move beyond the risk assessments and do some work, right? You’re never going to get to the point where your risk assessment is so perfect that you’ve managed everything, and you’re also not going to get to a point where it’s, you know, economically feasible to do due diligence to the nth degree and actually put something out there.

I think the challenge, of course, is that when you’re talking about something that is inherently digital, the impacts of that can spread far and wide, sometimes very quickly. And so, you know, when you’re thinking about risks, particularly for AI systems, it can be very difficult to get a sense of, well, what are the actual risks? And how do you actually manage those risks?

Zena  

I think that’s right, and it’s bringing me back to the thing we were talking about earlier – curiosity, idle curiosity. And I think that if we were able to live in that space of idle curiosity for a long time, some of these things could potentially emerge. But I do fundamentally agree with you that there are certain things associated with risk and safety that you will never be able to predict or anticipate until something has actually been deployed into the wild. That’s just the reality of human nature. Human beings use things in completely different ways. Like, every time I overload the washing machine, my husband dies – he’s like, “You’re not supposed to put that much stuff in,” and I’m like, “It’s fine, it’s working!”

Liz  

There’s plenty of space left in there!

Zena  

I’ll turn the washing machine on and it goes [ee, ee] He just, he gets so mad at me.  

Liz 

So with that, I think we’ll wrap up this episode. Thank you so much for joining us today on the Algorithmic Futures Podcast. To learn more about our episodes or to subscribe, please head over to our website, algorithmicfutures.org. We’ve got all the links there, and you can also find us on most of your favorite podcast players. And if you enjoyed this episode, please show some love on LinkedIn, Instagram or Apple Podcasts – that really helps us get the word out. We’d also just like to take a second to thank the Australian National Centre for the Public Awareness of Science for generously letting us use their studio. And thank you so much for listening.