Podcasts Season 3

S03E04: Exploring AI standards, with Dr Kobi Leins

What are AI standards – and why should we care? Our guest today, Dr Kobi Leins, has first-hand experience as both a contributor to the development of international AI standards and a professional supporting safe AI in real-world industry contexts. We talk about what AI standards are for, and why the discussion and work feeding into standards – and into AI development and deployment more broadly – matters for us all. It’s the kind of tricky discussion that starts in industry and day-to-day applications of AI, and ends in military uses of AI.

If you care about AI ethics, safety, responsibility, all those words – then you need to listen to this conversation.

Credits

Guest: Dr Kobi Leins

Hosts: Zena Assaad and Liz Williams

Producers: Robbie Slape, Zena Assaad, Liz Williams, Martin Franklin (East Coast Studio)

Further reading

About toasters and outsourcing: https://kobileins.com/outreach/recommended/161-how-technology-loses-out-in-companies-countries-continents-and-what-to-do-about-it

Who Decides who Decides? https://www.carnegiecouncil.org/media/article/who-decides-artificial-intelligence

Are we Automating the Radicality and Banality of Evil? https://www.carnegiecouncil.org/media/article/automating-the-banality-and-radicality-of-evil

AI for Better or for Worse, or AI at all: https://circusbazaar.com/ai-for-better-or-for-worse-or-ai-at-all/

https://kobileins.com/outreach/recommended/178-atlas-of-ai-power-politics-and-the-planetary-costs-of-artificial-intelligence

https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai

https://www.carnegiecouncil.org/media/article/7-myths-of-using-the-term-human-on-the-loop

https://phys.org/news/2024-04-bias-algorithms.html

https://www.linkedin.com/pulse/terrible-costs-phone-based-childhood-the-atlantic-rg1ce

https://www.cambridge.org/au/universitypress/subjects/law/humanitarian-law/new-war-technologies-and-international-law-legal-limits-weaponising-nanomaterials

https://www.wired.com/story/deaths-of-effective-altruism

Academic freedom: https://link.springer.com/article/10.1007/s10734-023-01156-z

CDAO shapes new tools to inform Pentagon’s autonomous weapon reviews

Podcast transcript:

Liz 

Hi everyone. This is season 3 of the Algorithmic Futures podcast, where we interview technology creators, regulators, and dreamers from around the world about how complex technologies are shaping our world. I’m Liz Williams, and I’m a nuclear physicist.

Zena

And I’m Zena Assaad, and I’m an aerospace engineer.

Liz

And together, we have an interest in safety in emerging technology. 

Zena:

Ok, so Liz, tell me who we have today?

Liz: 

We have one of my favourite people, Kobi Leins. She is a reformed academic, a reformed lawyer, and she currently works in industry on AI governance.

Zena

Kobi’s amazing.

Liz

She has been actively involved in shaping AI standards, and that was the focus of the conversation we’re about to share. The one thing I would say is that when I was reviewing the transcript for this conversation, it actually kept me up that night, because we started with AI standards and went all the way into disarmament and how AI is being used in military technologies. And I really thought that this was the kind of conversation that we are creating an opportunity to have in a public forum.

Zena

Yeah, because it’s a really difficult conversation to have, isn’t it?

Liz

It is, it is, and I think it’s one that people don’t really think about, you know, when they’re looking at how AI is being discussed in the public eye. It’s not one that we’re thinking about when we’re, you know, typing prompts into ChatGPT to see what comes out of them.

Zena

Yes, and the discussions that are being had in that space aren’t always pragmatic discussions. Usually you see people on, like, the hard left or the hard right. But realistically, there’s a huge gray zone in the middle, and the gray zone isn’t always pleasant, but it’s not always completely awful, and I feel like Kobi was really good at dissecting that. She reminded me of the episode we recorded last season with Lauren Sanders. For our listeners, if you remember, Lauren Sanders is a lawyer who works at UQ, and she had a similar approach where she really worked with us to dissect that gray area and pull out a lot of the nuances in that discussion. That’s a really hard discussion to have.

Liz

It is, and it’s one where, you know, I don’t think there are that many places where those discussions are fostered, are put out for people to actually hear, to engage with, to respond to. So my hope is that the episode we’ve put together today with Kobi will help create an opportunity for people to hear this kind of conversation. And so, we will take you to the episode. But first I just want to thank the Australian National Centre for the Public Awareness of Science for letting us use their podcast studio for the recording.

***

Liz  

Thank you so much for joining us Kobi. I feel like we’ve been talking about making this episode for a very long time now. And I’m so grateful you were able to be here today.

Kobi  

Pleasure is all mine. 

Liz  

All right. So we have known each other for I don’t know how many years now. I remember that when we first met, we connected over some playdough and a shared interest in nuclear non-proliferation.

Kobi  

So I didn’t know who you were. My memory of you is that I had no idea who you were. I met a bunch of people at the same time. None of us had name tags. And I just remember, and I think I said this to you afterwards, I was like, that one, I need to know her. And then it turns out you’re a nuclear scientist. Of course you were. And, like, you’re amazing. I had literally no idea who was in that room. It was one of the weirdest experiences I’ve had. But yeah, I was so lucky to meet so many amazing people through that, including you.

Liz  

Yeah. Which is what brings us here today, not nuclear non-proliferation. But collectively we’ve been looking at artificial intelligence from a variety of different perspectives and doing a variety of different work in that particular field. And you have an incredible track record that is honestly very hard to describe. So I think what we’re going to do is start with your work on standards and artificial intelligence. You’re a technical expert with Standards Australia working on standards for AI. Can we start with an exploration of what standards are and what work they’re supposed to do or support? Can we sort of explore that a little bit for our listeners?

Kobi  

Sure. I think other than the standards people themselves, you might be the only person who’s ever drawn on standards as the interesting part of my work, because for most of my colleagues and people I engage with, their eyes glaze over at the mention of standards. And I understand, because they sound really boring, right? Until you kind of start telling stories like, do you use a toaster? I love toasters. I think one of your interactions with my children was–

Zena  

I feel like, this is a really random thing to love, Kobi.

Kobi  

Toasters? 

Zena  

Yeah. 

Kobi  

Oh, no, I’m not the only one. There’s a Dutch guy who does security who talks about outsourcing entirely in relation to a toaster. Like, a toaster is a whole thing. An object, any object: toasters, people can relate to. You want to be able to use that toaster without being electrocuted. You want your toast to reach a certain temperature, you want the power to flow through the cable to get through the toaster. Like, I don’t need a car, I can just use a toaster, right? It’s a simple example.

Kobi  

All of that is governed by standards. So everything around us all the time, the stamps and letters, so many things are governed by standards. And so many things went wrong before standards were used in other fields. A really good example is Nancy Leveson, who I love, who works on systems thinking and has written a new book. In that book, she gives the example that the largest number of deaths in New York before September 11 was actually on, do you guys know what it was? Boats. Wooden boats, because they were steam powered, they had fires, they were made of wood, people couldn’t swim. It was a shop of horrors. Basically, the boats caught fire and people drowned. And someone somewhere thought, we need to standardize this, we need to put rules in place. If you think of standards as just a set of rules around how to use something safely, that’s all standards are. So we’re in that space now with AI. Some of us have been talking about and thinking about this for years; others are coming to it more recently. Most excitingly, the new standard on AI management dropped before Christmas. I hope you’ve all got it and read it. It’s amazing. Not just because I was part of it with a team of international experts for many years, but because we just haven’t had a framework for these conversations. There’s been so much talk about ethics and general principles. But until you’ve got an operationalization in a commercial context, you haven’t really got very much.

Zena  

So then the thing with AI, though, Kobi: the example you were giving before was about boats, and that’s a very specific application, and it’s very siloed. But the thing with AI is that it’s rarely a specific application, and it’s rarely siloed. So can you walk me through how you develop standards for AI when it’s not application specific? And what are the challenges of that?

Kobi  

Yes, yes, I can. That’s an excellent question.

Zena  

Thank you.

Kobi  

So this is one of the things that keeps coming up. I remember years ago going to a talk, I think it was Mireille Hildebrandt, who was talking about how we need to just have a stamp of approval, right, the certification you get on a car or a toaster or whatever that says this is safe. That doesn’t work for AI. So this standard, actually this parent standard, is probably also worth framing: most standards have a parent standard, and they have what are referred to as sort of infant standards, the baby standards that sit underneath them. The parent standard is not about how to review the AI; it’s about what you need, the systems you need in place, to review the AI. So your question goes to the how-to, which is coming in one of the baby standards, and I’ll jump on that. But the management standard is really about, in a corporate structure, what are the things you need to think about to govern AI at all? The first place a lot of corporations are at is that they’ve drafted a policy or a framework or whatever they call it. And then that floats around in a void in their organization. And then they say, we need to review AI using this. But how? How, you ask? Well, the standard provides that framework. So people at the top need to make some kind of decision about what they will tolerate in their business. And ideally that flows down through a strategy. Then you also have ways to review that system, which goes to the more granular and difficult question that you’ve just raised, which is at the very heart of what makes AI fascinating and hard. But you’ve also got things like embedding it in the code of conduct, and having KPIs for execs that reflect this, so that if they do not follow the principles that are in that framework, there are some kind of consequences. So it’s really more about looking at the whole package. And the biggest change is going to be that the decisions that are going to be made through the other standards, the baby standards that are coming, need to be documented and auditable. So we’ll get to the heart of it. But if you’ve got a trail of who’s made what decision, where, and how they’ve made it, and that can be reviewed just in the same way that your financials get reviewed, that’s going to change the game. You can’t just say, we have nice principles on our website; we’re great. You actually have to show that process. And I think over time there’ll be practice that develops and tolerances that develop. And I’ve completely evaded your question.

Liz  

That’s okay. We have many other questions.

Zena  

I actually don’t think that you evaded the question, but I might circle back for a second. So you’re talking about a lot of processes and practices that, in my opinion, have existed in industries for a long time, and we’re now just kind of reshaping them to fit AI. So can you talk to me a little bit about the misconception that, because AI is all new and wonderful (it’s not new, but a lot of people think it is), everything we develop around it needs to be new and wonderful, when in actual fact there’s a decades-long body of knowledge that we’re building from? Can you talk about the opportunities that you’ve taken from the existing body of knowledge, and what the challenges have been in applying that to AI?

Kobi

You’re right — the review processes have existed for a long time. And one of the challenges has been in writing one of the baby standards, which hasn’t been published yet, and which I’ve been involved in. I have to say Australia has punched well above its weight. We’re a team, I’m not the only one, there are a lot of us. 42005 looks at AI impact assessments. And the biggest danger is, you’re exactly right, that there are already all these review processes. In companies that build physical products, and also software companies, most companies will have a cyber review process, a privacy review process, and so on; up to a dozen of these already exist. So the danger of saying you need an AI impact assessment is that you add another piece to the end of a process where people are done, it takes too long, they’re over it, they gamify it, it hasn’t been documented. These are all weak points at the moment. What the standard is going to recommend is that you consolidate your reviews, and there are a few reasons for that. One, which I just touched on, is that people game it; they’ll tell different experts different things depending on what they want to happen. The other is that a lot of the questions that are asked are the same. So the legal team and the cyber team and the privacy team might ask the same questions but hear the answers differently, and they often prompt each other and bounce off each other. But the most important reason is that AI isn’t like a set-and-forget review of other products; it’s something you’re going to need to review over a lifecycle. And the triggers for that lifecycle, both the definitions of AI itself to come into the funnel of review and the touch points for review, are probably the two most difficult and contentious things that I find, as well as the culture needed to actually get this stuff to happen in a workplace, which is a whole other box over there with a big lid that I’m gonna leave closed for a minute.

Liz  

It seems like, you know, the standards are meant to help support creating a process, a workflow, for enabling this kind of periodic review, this periodic understanding of: what are we trying to achieve? How are we making this thing safe? And how are we doing that across the board? But you mentioned culture, and I know if you’re mentioning Nancy Leveson and you’re talking about safety as an emergent property of a system, there are a large number of questions about how standards actually play a role there, particularly in terms of whether they’re a kind of leverage point to help a company adopt the kind of culture that would produce, quote unquote, safe AI outcomes. Tell me about that. Tell me about how standards fit into the broader picture of the organizational piece of all this.

Kobi  

So I’d say yes, and. Yes, you want to avoid the risks, you want to avoid the litigation. A lot of companies don’t even think about litigation; they just think about the news headlines, what’s going to appear in the paper that they did that’s going to be reputationally damaging. There hasn’t been enough litigation yet, but there will be, and companies will increasingly care about it. They do care about breaches, because we’ve had a number of those in Australia, so the privacy and data aspect is taking a little bit more of a prominent position. But what I’ve been thinking about too, to answer your question, is that the challenge you have is that if you’re only managing risk, and you know this, you’ve got a certain lens, right? You’re just looking for the things that can go wrong. But that doesn’t mean that you’re using the tools well or safely; it just means you’re trying to avoid the things that you see that can go wrong. So what this system, or what the standard, is trying to do, I think in an ideal world, if the standard’s applied properly, is getting companies to think about and inform themselves. And when I say companies, I mean particularly boards for the larger companies. Boards are really interested in, but don’t have the knowledge yet around, what the good uses of AI are. Like, what are the business problems that they have? First and foremost: stop with AI, put the AI aside. What business problems does your business have? And then your next question is, which of those could or would use AI? And is that a high-risk or low-risk proposition? Those three questions should be asked in that sequence, and that should frame and flow down into all of the AI that’s being used. Instead, what I’ve seen happen is people making public statements about how they’re using lots of AI. You see this everywhere; most are still only doing proofs of concept, or POCs. AI is really hard to scale. It’s really hard to use. And when we say AI, most people just think Gen AI; they don’t even know there are other kinds of AI. I mean, we do this stuff. But for the layperson, it’s just like Gen AI is AI. That’s all AI is. It’s just something I can prompt, and it’ll do it–

Zena

I see “AI solution” a lot on LinkedIn–

Kobi  

Oh, my goodness.

Zena  

Our company has an AI solution. I’m like “to what problem?”

Kobi  

So one of my favorite, favorite anecdotes, because I do this stuff, I’ve done this stuff for a few years now, right, I review these systems. In one of the roles that I have, someone came to me and they were like, “This is the AI and this is what we’ll fix.” And I said no, no, I want you to go away and tell me what your problem statement is for this business. And they came back, in true corporate fashion, with a beautiful deck of slides. And the first slide said “problem statement”, question mark. And the second slide said “Kobi won’t let us do”–like, my name. I was the problem.

Zena  

Kobi, you are the problem–the AI solution for Kobi.

Kobi  

No solutions for Kobi. Too late for that. But there’s this perception that if you don’t want to do what I want to do, you’re just standing in my way, instead of pulling back. And this is where it really needs to be an all-of-business approach where you’re strategic, right? To those of us who work in this area, this is really obvious. But people who don’t know as much about the systems, or who read the little paragraphs in the in-flight magazines and go, I need to use this, and my CEO has said I need to use this, aren’t necessarily using AI in the smartest ways, or the ways that are of most benefit to their company, right? And if you think about other technologies or other approaches, you would think about where this is a low-risk, high-impact tool. Why are we not doing that with AI? Because I think there’s so much hype, and we’re coming down off that hype. But until then we really need to be educating and working with companies to do this.

Circling back to what we were originally talking about: you’re going to need people to advise. There’s increasing talk of having chief AI officers. Joe Biden just announced that every government department needs to have a chief AI officer. There just aren’t enough people with the skill set to do this role. And he also said they need to have a certain level of expertise and knowledge. Well, who has that expertise and knowledge? You know, there are a lot of people stepping up, crypto bros who are now AI experts. How are you going to make sure you have the right people in there asking the right questions and listening, and actually being experts rather than just saying that they are? So?

Liz  

Yeah, 

Kobi  

Yes, it’s gonna be interesting to see how it’s implemented. 

Liz  

Circling back to the standards, I’m actually interested in the role of experts in developing that process. So I’m wondering if you can share a little bit about what it looks like to develop a standard for AI, and what some of the challenges are in doing that?

Kobi  

So I saw an international law class once, which started with a very offensive, inappropriate clip of a scene at the UN where they end up throwing chickens and chairs at each other. I don’t know if you know the video clip, but–

Zena

No, but I feel like I do want to watch it.

Kobi

It’s awesome. But I showed the students this; that’s what they think the UN is really like. And then I went to a video clip where everyone’s just sitting, and it’s really boring, it’s really slow, and nothing happens. And that’s kind of like standards. So I think part of the reason that there have been so few academics involved, and there have been some through Standards Australia, is that it’s really time consuming, first of all. Meetings happen across global time zones, so all through COVID there were meetings at two in the morning, six in the morning, 10 at night, you name it. Horrible. That face, Liz. I don’t function after 10 pm.

Zena  

Liz has had quite the week with times for meetings. 

Kobi  

But it was funny, because this came up as an opportunity, and I was sort of told there’s not really much purpose to doing this. And having been involved in treaty negotiation at the UN before, I knew very well how important this was and how slowly it would move. But then when it moves, it moves really quickly, right? So you’ve got 100-and-however-many states now participating, some more than others, but you also have a lot of people participating in their personal capacity who are often representing somewhere. So to have proper academic independence is also quite unusual. And those sorts of inputs… I have to say in the beginning I was quite cynical about a number of the large tech companies playing in this space, because they do. IBM, for one, has been involved in this space for a really long time, largely because of their historical involvement with some other fairly horrible happenings in the world. If people haven’t read IBM and the Holocaust, it’s an incredible read. But other companies are involved for other purposes. And you don’t know who’s with what company; they just appear as their national body representative. In some countries, you can’t get selected unless you’re anointed, basically, by your government. That’s not the case here; Standards Australia picks people based on expertise. It’s pretty laid back and pretty easy to get involved. But it’s really time consuming and tedious. And you’re often arguing over a few words at a time. So at the beginning of the standard, a lot of the conversation was around words and defining words, and the fact that we were all talking past each other. So a huge part of the work was actually creating, and I know we’ve done this in some of the teaching work as well, a common body of knowledge, a common understanding–

Zena  

Common terminology.

Kobi  

Common terminology is one of the biggest challenges. So you know, at the beginning the talk was, well, we need to remove all bias from any AI. It’s like, you can’t–

Liz  

What does bias mean?

Kobi  

Let’s talk about bias. And then ground truth was another one of my favourites. Because, you know, it’s got a technical definition, but you say to a lawyer, this is a ground truth, and they’re gonna be like, oh, yes, yes, that’s real. Like, everyone’s got a different word, and these words mean the same thing. So, to use that Princess Bride analogy: “I do not think this word means what you think it means.” And getting everyone in the room to be humble and curious, to ask questions, and to make sure that you do have that common knowledge was a huge part of the initial work. And then you start hunting in packs. You work together, right? People who are like-minded pull in certain directions, and you want certain things. So there is a piece in the standard that’s been published which Suelette Dreyfus, who’s amazing, did some work on in the background. It’s not called whistleblowing, but it’s basically the ability for someone to notify if something goes wrong. That, to me, was a really important piece, given our piece on Three Mile Island, Liz, and other work that we’ve done on catastrophes. You need the people who know what’s going on to be able to pull that lever, ring that bell, and not be punished for it.

Kobi  

So yeah, there’s a lot of work that goes on behind the scenes that isn’t seen, and there are a lot of friendships and allegiances and a lot of work that gets done together. For me, that’s one of the biggest joys. I went to the first actual meeting in real life in September, or no, October, sorry, in Vienna, and it was just like, these are people I’ve been working with for nearly four years, you know, at really weird hours of the morning, so there’s like a trauma bond there.

Liz  

I can understand that as a nuclear physicist, experimental nuclear physicist, who’s done lots of work at three in the morning with colleagues. It’s the best way to make friends.

Zena  

Trauma bonding is the best way to make friends. 

Kobi  

You see people at their worst and their best, right? It’s like, we haven’t had a coffee, you haven’t had a wine, like, you’re always off guard, everyone’s a little bit off. But for better or for worse, we’re still producing something that’s hopefully going to make a difference.

Liz  

Well, once the standard is released, how does it evolve? Like, how is it updated? Do the same people get in a room together and try and figure out what to do? Like, what happens?

Kobi  

The what-next piece I haven’t been part of yet, but they can be updated over time. They’ve only just been published, so I think for now it’s going to be a question of what works and what doesn’t, what is implemented and what’s not. And a side question to that, and I think it’s one that the Australian Government was looking at, is: should you use AI standards as some form of regulation? And my answer is a firm no. The standards are the best practice; regulation is the bottom of the barrel. So you want–

Zena

So can I ask a question there, Kobi? Since standards are best practice, and they’re not legally required like regulations are, what is it that motivates people to use standards? Like, what’s the point of all these people, you know, myself included, putting our time and energy into standards if they’re not legally required? What is encouraging people to use standards?

Kobi  

It’s a really good question. Law is a really blunt instrument, and I always describe any kind of change in any area as requiring a big toolbox. Standards are a really interesting one, because even if companies don’t sign up to the standards, even if they don’t necessarily feel like they need to comply, what they’re going to do is change best practice. So particularly in the third-party space, once third parties start getting asked, what are you doing with our data? How are you building your algorithms? What are you doing? I think it’s going to shine a light on some pretty poor practices, ask for a lot greater accountability, and also raise awareness within companies of what is and can and will be going wrong. So again, carrot and stick. There’s a liability element, there’s a risk element, but there’s also an opportunity element to this: if you get it safe and right, you can get return on investment with customers who genuinely care about what happens with their data. We know customers are more willing to share data with companies that they know and trust. A good example, or one of the better-practice ones in Australia, is Mecca; they’re a fantastic company. They don’t bother you unless you ask to be bothered; they leave you alone if you don’t. And part of that is also the transactional value of knowing when to leave your customers alone and when to approach them. So I think it’s going to be a number of factors: it’s going to be peer pressure, it’s going to be improved awareness, it’s going to be realizing what’s even going on under the hood. Because I think up until now, a lot of companies haven’t even asked those questions about what’s happening. A lot of companies don’t even know who they’ve got contracts with, let alone how they’re operating and what’s going on. So there’s gonna be a real uplift, particularly once the auditing starts, in terms of, you know, being accountable. So even if you’re not legally required to be compliant, it’s going to be seen as pretty poor if you’re not.

Liz  

Is there information provided in the standards about what assumptions this best-practice set of recommendations is based upon?

Kobi  

No, there are some references, but not really. There are experts with expertise, and you bring that into the room. It’s not like an academic paper where there’s, you know, a footnote for why we’ve proposed this.

Liz  

Yeah. I’m just curious, because one of the things that strikes me whenever I look at a set of standards is that they’re useful documents, but sometimes they are based on assumptions that are not necessarily applicable to a particular scenario, and that’s an understanding that can sometimes take a bit of work. And so is there a useful way to think about that? Like, if you’re thinking about using a standard for a given purpose, say you’re a company that genuinely wants to figure out how to do this well, how do they get a sense of whether they’ve gone beyond the bounds of what you and your colleagues have envisioned in this particular set of standards?

Kobi  

Yeah, it’s a really good question. I think there’s a gap at the moment, and there will be for quite some time, from the theoretical framework of a standard to how it’s implemented. And not many people are doing it yet. It’s a really good point that you don’t really have academic research sitting behind it. You have the expertise of experts, and some of it’s industry, right, pushing for a particular agenda. I think right now there’s a huge ask, or a huge need, a huge demand for the skills of people who are doing this work or have done this work, which is why I often find myself inaccessible, because people just want to know how to. So my goal in the next 12 months, I’ve started writing a book on sort of translating what that could look like. There’s not a singular way, and it will vary depending on the size of the company and the culture of the company, all those kinds of things. But, again, the standard doesn’t really go to culture, right? So if you’ve got a culture where it’s a tick-a-box exercise–

Zena  

Culture is such a hard thing to capture.

Kobi  

Yeah, if you’ve got a culture where people don’t call stuff out, they just get stuff done, like a Volkswagen dieselgate kind of situation, you could implement this standard perfectly and you still wouldn’t necessarily have the right thing happen. And that’s where I think the challenges are really going to be: around the cultural piece. So I sort of describe it as, you know, right now corporates work in these silos, and you kind of have to turn the corporates on their side and shake up what’s in the silos. And the reason I love AI is that it’s actually all about power. And everything has to change. We need more people in the room, we need more robust discussions, it’s incredibly challenging. So there is a gap, there’s definitely a gap. And I think a lot of people will want to just have consultants come in who will give them a maturity assessment framework and five documents to adopt, and then everything will be fine. Except none of it will be fine.

Zena  

Consulting reports always have like that final page of recommendations. And it’s like a, you know, these are the five steps you can take. I feel like it’s like a diet plan where they’re like, do these five things and lose 10 kilos in a month. I feel like it’s the same with consulting reports, they’re like, do these five things, and you’ll have ethical AI.

Kobi  

And it’s really problematic, because people do want a quick “how do I do this right?” And there are things that will take you down that path, but you’re still gonna have to lean into discomfort and gray areas. And you know, I still remember in one of my earlier engagements, I raised environmental impacts. And I will not ever talk about AI again without talking about environmental impacts, because right now we’re at a turning point environmentally, and we’re making some decisions that may be very hard to reverse. Specifically in relation to AI and data warehouses–if you haven’t seen those images of Arizona and what’s happening in South America–

Zena  

They’re huge. People have this misconception that because it’s digital, it’s automatically more environmentally friendly. And it’s not the case at all.

Kobi  

So, you know, I think that has to be on–and I rage-wrote a piece, which was modified and turned into another piece, about how AI ethics is a set menu, not a smorgasbord.

Zena  

I appreciate that. I’m just sad that somebody rewrote it. I want to read the original Kobi rage-written document. 

Kobi  

It was basically, you know, you don’t get to pick and choose. You come into the room and you need to consider modern slavery issues and colonization, you need to think about environmental impacts, you need to look behind what’s going on and look at what the actual cost is, not just to your company, and you need to link it back to ESG. I mean, we’re not there yet. We also have carbon accounting standards that have landed that are going to require AI to track carbon across supply chains; those two interests intersecting is a space I haven’t seen anyone looking at meaningfully yet.

Zena

So then, Kobi, you’re talking about a lot of things that to me sit under the umbrella of ethics. And you know, when people talk about ethical AI, they talk about all of these things. And you’ve worked across government and academia and industry and standards bodies. So what’s your experience with how the conception of ethical AI differs, or is it the same, between all of those different sectors?

Kobi

So firstly, I won’t talk about ethical AI anymore. I have ethics in my title, but it’s subsidiary. I’ve sort of been saying this for a long time: again, I wouldn’t talk about an ethical car or an ethical toaster, so why would I talk about ethical AI? What I talk about is a safe toaster or a safe car. And I won’t talk about responsible systems either, because they’re not responsible. We humans who make them are responsible. So the ethical piece really developed over the past decade because a lot of companies wanted to obfuscate away from any kind of regulation. This idea of having something be ethical, it’s a little bit more public now, but for a long time people didn’t really say this out loud, until there was a Nature article that reviewed hundreds of ethical frameworks and found that they made absolutely zero difference in any corporation, right. So in reality, what we want to do is have more thoughtful reactions to, and frameworks around, complex systems that can have enormous impacts on people both within the company and outside the company. And I think, you know, there’s an article I was reading this morning about how bias in systems can help us to reflect on our own bias. And I think that’s kind of what’s happening. People are saying, you know, these systems are only selecting men, and it’s like, well, in the real world, that’s what happens. Is that okay? So some of these issues are surfacing and becoming more obvious, but they’re no different to what we’re seeing now; they’re expedited. I’m not saying there’s not a risk with these systems, and I need to say this very carefully: there’s a speed and scale at which these systems operate that can cause incredible harm. But what we also need to do is look at ourselves as a society and say, “Is this okay generally?” Because there’s stuff that’s going on now, like the equal pay thing that we’ve just had in Australia. We just had a legal requirement for companies to publish, you know, the equivalence of salaries between men and women. And what was really interesting was that the agency, the government agency, I think it’s a government agency, that supported that had an article on how companies could game it to make their statistics look better. Right?

Liz  

Well, it’s the scale question, right? And this has always been the worry, from the early days of AI: you can build an AI model for some purpose and roll it out across the world, and very quickly, if you’re not thinking about that, you can, as you said, Kobi, cause a lot of damage. So the scale question is where it gets tricky and interesting, and also dangerous, speaking to your point of “is it safe?” It is hard, if you are in one environment, to provide a realistic assessment of safety when what you are going to roll out is going into an entirely different environment to where you exist. That’s one of the major challenges there, and I’m wondering how you think about that, Kobi. Like, if you’re going to, I don’t know, advise an international corporation on how to think about that, how to approach that problem, what do you say?

Kobi  

So the AI impact assessment best practice, this is 42005, recommends that all the impact assessments get rolled together, because of the speed at which they’re going to need to operate. So you have all your experts in a room reviewing. And those touch points are really important. So if you’re ingesting different data sets, if you’re using a system for a different purpose, if you’re changing the scope of the use of your system, all of those should require a new review. So you look at your old review, which is documented, you’ve got guardrails, or, you know, scope around that, whatever the change is is also documented, and the risk and the decision is made off that. Do we want to get into weapons?

Liz  

Yeah, let’s go for it. 

Kobi  

Because I think this has been on my mind, and especially with us, as you know, with our expertise, it would almost be remiss not to touch on it. So there have been reports of AI being used to target Palestinians. Six people were interviewed. True or not, put that to the side; let’s, for a moment, assume that it’s true. Those are data sets that would have been acquired largely through civilian contexts, right? So the surveillance that we have around us right now, here in Australia, could also be repurposed for the same thing. And surveillance is, again, a whole topic in and of itself. But it’s one thing to use a camera in one context; if you’re using it in the context of targeting people for killing them, whether or not a human’s on the loop, like, even the whole autonomous weapons debate really drives me nuts, because it’s distracting from all of the pieces, which are very similar to what needs to happen in a commercial review. You should be well across all of the pieces that have been used. And you’re actually legally required to be, under Article 36 of Additional Protocol I to the Geneva Conventions, for those who love weapons reviews. I might be the only person listening to this who does, but I think you guys have been across it as well. You need to have that kind of knowledge. And then in the battlefield, you also need to have someone advising on those systems as you’re in the battlefield. So all of these pieces we’ve just talked about in a commercial context come into play in a military context. And again, in the same way that ethics is a distraction, I think the whole autonomous weapons debate, is it an autonomous thing or not, you know, the killer robots conversation, has actually been a huge distraction from the real conversation that needs to be happening, which is: how are you documenting, building, and tracking compliance of these systems with international law and other requirements?

Zena 

There’s also an element of the problem existing even in the absence of the technology, right? Like with the conflict that you’re talking about, Kobi, even in the absence of AI, other international humanitarian laws have been breached. AI was not the cause of that problem; it’s just an additional tool that’s being used. And I think that’s important to think about, especially with the debate on lethal autonomous weapon systems. Like, I hear a lot of people talking about how they should be banned completely, and it’s like, but even if you banned them, war is still unpleasant.

Liz 

It is. I think the question is, what are you amplifying with the introduction of these tools? And what are the pieces that are missing that would allow you to make use of them in a way that is consistent with, you know, law, international law, standards, etc.? Like, you’re right, war, from the beginning, is never… there’s always a breach of something. But you know, there are still parameters you have to work within.

Zena  

Within international humanitarian laws, you mean.

Liz  

Yes, yeah. 

Zena  

So you’re saying that the technology still has to meet the existing rules, or maybe not even just the existing ones, but also the ones that need to emerge specifically for these technologies?

Liz  

What I’m saying is that the technology, in some sense, depending on the technologies in question, allows for the amplification of harm in a way that may not–

Kobi  

It’s speed and scale. It’s what we were just talking about, speed and scale, but in an armed conflict setting. Yeah. And I remember you making this point at a conference, the roundtable that we were at, where someone was talking about autonomous weapons. And I largely stayed out of this debate, because I worked in it for a while and then went into the commercial world to see how that stuff is really done, so I’ve been busy doing that. But just in the last couple of weeks it’s been brought back for me, because I remember your comment saying, you know, when fighter pilots fight, there’s an enormous amount of compute that goes into the trajectory of a missile, right? It doesn’t mean that it’s not autonomous because they press a button. Like this whole “is there a human in the loop” thing, which I’ve also rage-written an article about, and that one was not censored. It’s like, stop talking about the human in the loop in this context, because you can’t actually meaningfully engage with or interrogate these systems on the fly. It doesn’t work; you just can’t. So whether there’s a human pressing a button or it’s automatic, again, it’s just a distraction from questions of how you build, audit, and make sure that these systems are compliant. And to be really clear, I’m in the very unpopular position of being pro-disarmament. I’m seeing the world head down a direction where, you know, the amount that is being spent on weapons and the lack of conversations around peacebuilding concern me greatly. We’re only talking about war, which means that’s where we’re gonna go. Yeah, so when I’m talking about this, I’m not talking about it in terms of taking sides. I would really like there to be less war in the world, I’d like there to be less polarization, and none of those things are happening. And the technology’s facilitating a lot of where we’re headed.

Liz  

Well, it’s amplifying some of the fissures that are leading to conflict. I mean, we’ve gotten into social media before, and we probably won’t go down that track today. But you know, it’s a great example of how we are very good, or society seems to be very good, at building things that amplify the differences between us, that amplify, you know, things that we are somehow naturally attracted to, but that actually drive us apart.

Zena  

Yeah, but just to add a bit of context. So Kobi, the thing you were talking about, the comment I made on that roundtable: I don’t specifically remember what this person had said, but someone had said something about how, you know, there should always be a human in the loop, and that’s why lethal autonomous weapons should be banned. It was something along those lines. And the comment I made, I used the example of a fighter pilot, and I was talking about how if a fighter pilot is dropping a bomb on a particular location, once he presses that release button, he has no control, he or she, sorry, he or she has no control over what happens, right? If the coordinates are wrong, if the trajectory is wrong, if there are certain weather conditions that change the trajectory, that person has zero input or control over that bomb once it gets dropped. So this idea that lethal autonomous weapon systems have introduced a new level of absence of human control is a myth. And so this is my argument. And when I say I don’t think that banning lethal autonomous weapon systems is the answer, it’s because it goes back to that thing we were talking about: you game the system, there is always a loophole. From the original development of the catapult, that was when we originally had an autonomous weapon system; once that thing got let loose, no one had control over it. So my argument is that by focusing on this argument of banning lethal autonomous weapon systems, we completely sidestep the existing problems with the way that war is conducted. And the focus on lethal autonomous weapon systems as the creation of those problems, I think, is misleading. But I also agree with what you’re saying, about how perhaps lethal autonomous weapon systems have scaled the kind of misconduct and harm that perhaps we wouldn’t have seen without them.

Kobi  

I think there’s also a massive disconnect. We just had two completely separate conversations, 

Liz  

True. 

Zena  

Yeah. 

Kobi  

And they are actually the same thing. So if we’re figuring it out in the commercial sector… and Helen Durham, I loved Helen Durham’s angle on this. She’s like, women and IVF, technology and procreation, are regulated up the wazoo. Technology and death? Fine, off you go. We have disconnected a lot of these different conversations, and I don’t think it’s accidental. So if we’re more careful about building our commercial platforms than we are about building our weapon systems, something is deeply wrong. And I would argue that that is the case right now, because those conversations are going past each other. Everything I’ve just said in the standards world about commercial systems and AI system reviews, I don’t see any of that happening. All I see is “is it an autonomous weapon?”, which is a distraction, because it actually doesn’t matter. Like, imagine in a commercial context if we went, “Is that an autonomous system?” “If not, we can use …” It’s hilarious. It’s absurd. We’ve got a higher standard in a commercial context now than we have in a killing context.

Liz  

Is that because of the level of scrutiny? I mean, you know, I’m just thinking in terms of militaries having this veil of secrecy behind which they can, at least to an extent, operate. That’s not completely true, there is always scrutiny on some level, but–

Kobi  

Weapons still have to be compliant with international law. I mean, part of the reason I got the grant for my original PhD project was because I said, I want to do a new technology, and I looked at a bunch of them. And Defence approached me because they had purchased something that had nanomaterials in it that was entirely illegal. And they were like, “Oops, that was a lot of money. We don’t want to do that again. Can you do some research on this?” Right. So there still has to be some compliance. And I deeply believe that these conversations have been held separately for a very, very long time, and it’s quite deliberate. And I think it’s time to bring those conversations together, where the lessons learned in the civilian sector are imported into the military sector and refined and reformed for the purposes of an Article 36 review. You can’t just pretend to me that that’s not happening. If you’ve got a higher standard for marketing than you do for dropping bombs, we’ve got a big problem.

Liz  

Yeah. How do you trigger that kind of discussion? How do you start that kind of conversation, such that it’s going to actually have a meaningful impact?

Kobi  

There are so many levels to answering that question. I mean, one is that you need to get governments to get the right experts in the room, who actually know about the systems, to talk about them. You need to have political will. And I think the biggest issue for me, just as you were saying before, Liz, that we tend to follow these social media platforms, we tend to go in this certain direction: they all make a lot of money. Right? Weapons are a massive industry. We’re talking billions of dollars. If people haven’t seen the figures from the UN, every year we just spend more and more and more on weapons. It’s the same with media: clicks are worth money, so if you can enrage people, you can make money. So we have a sort of, you know, death of the nation state at the moment, where tech companies have more power to shape events. How do you motivate them to comply, or states to pull back and comply? It’s a massive geopolitical question.

Zena  

And I think there’s something to be said, in both a defence and a civil context, about the public facade that companies use versus what is actually happening behind the scenes. And I think that’s a huge part of the scale conversation. When we’re scaling technology and the use of technology, it’s being scaled in a particular way, but the narrative of how it’s being scaled often doesn’t align with the reality.

Kobi  

I think people just don’t understand. I have DuckDuckGo on my phone, and I show people occasionally in presentations, sort of, you know, how much of your data is going to how many places every time you sign up to a new app. Most people just don’t understand, feel disempowered, disconnected, don’t care, for a whole wide range of reasons. But there’s also this feeling of just not having control anymore, a feeling of, I can’t do anything about it, when in fact there are things that could be done. We imagine these worlds and we build these worlds, and they could be very, very different. I’d prefer one with fewer or no weapons in it.

Liz  

Yeah, I agree with that.

Zena  

I’d also prefer one with shorter terms and conditions.

Kobi  

Preach. Yes, yes–that you understand.

Zena  

Spotify could own my ovaries for all I know, I have no idea. I just clicked accept.

Kobi  

There was a wonderful art exhibition, I think it was one of the first ADMS projects, the Centre of Excellence that was set up, where someone just video recorded scrolling through the terms and conditions of the apps on an average person’s phone, and it was like five days’ worth just to go through them. Yeah. So yeah, that’s–

Zena  

They give you the short version now. And then they’re like, click here for the full terms and conditions. And I would love to see the percentage of people that actually click here. 

Kobi  

I think that’s the feature, right? It’s not the bug. They know people won’t, and they do their thing. And then you end up with these data brokerage systems that use and abuse this data in multiple ways, including in armed conflict. Again, these worlds are not disconnected; the same companies that are selling–

Zena  

Yeah, what you were talking about before, you know, the data was likely collected in a civil context, and it’s now being used in a military context.

Liz  

Yeah. Well, it’s corporations. Businesses want to make money. If they can make money from Defence and from, you know, I don’t know, selling a social media app, they’re going to do both of those things if they can.

Zena  

I think, you know, what’s interesting to me having this conversation is the interconnection between civil and military spaces in the digital world. And I don’t know that we’ve had this level of interconnection before. Maybe we have and I’ve just been blind to it. But I’m finding it really, really fascinating to see this connection between these two spheres, which would have been really quite independent for a very long time. And now we’re seeing them come together in a really different way.

Kobi  

Again, AI is not special in this way. The research project that I had was looking at nanomaterials, and it was the same thing: you end up with these questions around who the experts are, who understands the technology enough to even say what regulations apply, and at what point you review. Like, Article 36 weapons reviews are really interesting because, again, you’ve got a really small pool of experts, both in AI and also in nanotech and in other tech areas as well; quantum would be another equivalent. How do you have independent experts who can come in and actually critique independently? I mean, we’ve all worked in academia, and we know academia is not necessarily the place where you have the most free speech, or the ability to critique, and there’s a lot of defence money in academia. So how do you even, if all of this were connected in an ideal world, and this is the bit I’ve spent a lot of time wondering about, how would you actually implement finding people who have the independence and the expertise to review in a timely manner, to give that kind of feedback, who know international law, who know the tech? And there aren’t that many states that do Article 36 weapons reviews anyway. So you’d have to then have a funnel of anyone who purchases or uses or modifies. And that’s the other thing: it’s means or methods of warfare that are modified. Modifications, again, coming back to the AI impact assessment, those kinds of triggers that we’re talking about in the commercial sphere would actually require another review or a re-review as well. So learning from each other, I think, would be incredibly helpful. And I’d love to stop the conversation around autonomous weapons dead in its tracks and actually shift the direction into how do we join those dots and learn from each other and actually do it better in both spaces.

Zena  

Fun fact: the Australian Army actually does workshops around reviewing Article 36, and they’re making the outcomes of those workshops available to the general public.

Kobi

Are they doing AI in those? 

Zena  

Technology in general.

Kobi  

I would love to see those. That is a fun and interesting fact.

Zena  

They’re going to make it public. I don’t know that they’re going to make the reviews of specific technologies public. But there is a workshop reviewing Article 36 and its application to technology, and they’re going to make the outcomes of that workshop public.

Liz  

I’m wondering if we can ask you a little bit about the many hats that you’ve worn, or are still wearing, and how you felt about your autonomy of thought in each of those roles. So academia, industry, your work in standards: as somebody who is contributing to these conversations in each of those areas, what autonomy do you have in them? How do you feel about the role that people in any of these hats play in shaping these kinds of conversations?

Kobi

Yeah, it’s a really interesting one. I don’t think I really had much autonomy when I did the UN work, mostly because I didn’t feel like I had a voice. I’m slightly losing my voice today, so it feels like that again, for a different reason. Before I did my PhD, I really didn’t speak. I didn’t write. I had thoughts, but I wasn’t really confident enough to use them. And if there’s one thing I’m grateful to the hazing and broken process of a PhD for, it’s feeling confident enough in one thing to know what I know and know what I don’t know, and to feel confident to speak. So when I came to academia and started doing that research, there were clearly gaps, some of which we’ve touched on today and that are in my book. And I really wanted to speak about those and I couldn’t, for the reasons that I’ve touched on. Universities in Australia are deeply compromised. There is an incredible amount of both tech company money and defence money. As for independent research, I would always say I only do research where I don’t know what the outcome is when I start, and those kinds of areas of research are getting smaller and smaller. There was an article on it this week: globally, that’s the case too. It’s not just Australia, it’s worldwide. The independence of academic institutions is seriously challenged, which raises a whole lot of questions for democracy and for the peace question that we were just talking about as well. So when I moved into the corporate sector, I made a demand, which I thought was entirely unreasonable, and they accepted it, which was to only work a four-day week. So one day a week I have to think and write and engage in all these other activities. And I’ve got an affiliation with King’s College, so I operate on different planes at the same time. And I actually feel like I’ve got more freedom in the areas where I have expertise to speak than I ever did before. And I’m in this space commercially because I wanted to learn how to do the things, not just write about them academically; I wanted to actually implement them and see what it was like in a real-world context. I still haven’t taken defence money, other than the one-off grant that started my PhD. I still have maintained fairly independent thought, I think, and I value that extremely highly, because it’s very easy along the way; there have been a lot of decisions I could have made that would have taken me down a different path. And, for better or for worse, I am where I am. I’ve made decisions such that I’m in a position where I can say what I think is academically sound, and, again, I sit in this really unpopular position of being pro-peace and pro-disarmament. That’s not something people say much anymore. It’s kind of gone away. And I think we need to be having those conversations. So I feel very privileged to be able to be in this position now, to be able to say those things, and to critique tech companies and to critique platforms and systems. And, you know, to be in a room where I know enough to ask questions that have a meaningful impact for businesses is a really luxurious position to sit in.

Zena 

And to rage-write your thoughts that may or may not be published, censored and/or uncensored. I feel like our show notes just have to include links to the original rage-written pieces.

Kobi  

I think as I’ve gotten older too, it’s funny, because you read the stuff you wrote earlier and some of it’s spot on. I wrote a piece six or seven years ago called “AI for better or for worse, or AI at all,” and I reread it recently and thought, look at my profound thoughts. The same stuff still applies, right? Like, why are we using all this stuff we were just talking about? But then there are other pieces where I’m treading into more delicate territory. So there’s a piece that I co-authored with Wendell Wallach and Anja Kaspersen, and I don’t think I’ve ever spent as long writing a piece. I knew about Hannah Arendt’s banality of evil, but I’d never read about her radicality of evil. In effect, she was saying that radical evil is when we reduce people to numbers, when they have no meaning in life anymore. And when I look at how we’re talking about tech replacing all of the human skills... it was a really hard piece to get right, because I didn’t want to go straight to “tech is evil”; tech is not evil, I don’t hold that view. But if we go down a certain path of using that tech, and if you go back to Weizenbaum, and you go back to Mumford, and a bunch of other authors, Karel Capek wrote about this as well, there’s this sort of fundamental authoritarianism to tech if it’s not used very carefully. I’ve been thinking about that a lot more lately, just because of the direction the world is taking, and it has to be discussed in a nuanced way. Because again, in Australia, it’s like “are you pro or anti tech now.” We can hold complex ideas at the same time. This is a very complex system, and we need to be having more complex conversations like this one. 

Zena  

No, I agree with you entirely. I absolutely hate the narrative that AI is going to bring down all of humanity. I just hate it. I feel like most of this conversation has been the three of us taking turns saying, I hate this, I hate this. 

Kobi  

Rage podcast. 

Liz  

With that, we have a question we’re going to trial with you. It harkens back to the early days of the podcast. What hopes do you have for our algorithmic futures?

Kobi  

Well, I care very little for the future of the algorithms; I care about the futures of the humans. And I hope that the algorithms we’re choosing make better lives for more humans. I really want to see us not just thinking about what improves productivity in a business context, but also about what keeps the environment going, what builds peace, how we manage this onset of climate crisis that is not going away. And in some places algorithms will be relevant, and in many they will not. I hope we’re not so distracted by the algorithms that we miss what the actual problems are.

Zena  

Great answer. 

Liz  

It is a great answer. It’s just making me think that like, well, it would be nice if we thought about our wellbeing as a planet and a species and then maybe started there.

Kobi  

Yeah, yeah. Wellbeing as humans. Like, we haven’t even touched on, you know, tech and kids and teens and mental health and just generally–

Zena  

And pets as well, like automatic feeders and things like that. Yeah. Where’s Max, is he still there?

Kobi  

The thing I’ve been thinking about, I know we’re gonna wrap up, but like, if you could make the choice right now, the benefit– do the benefits outweigh the negatives?

Liz  

It’s a good question. I’ve wondered that. I’ve wondered that many, many times. Sometimes, especially when I’ve got 20 million things going on and my phone’s buzzing and my kids are on the iPad and everybody’s screaming, I wonder: is this really a helpful way to structure our life?

Kobi  

Have either of you seen Zone of Interest?

Liz and Zena

No

Kobi  

It’s horrific. I think everyone should see it. For those who haven’t seen it, it’s about the house that was next to Auschwitz, and it just shows the life of the family in that house. And it’s largely auditory, so you start off hearing the things and then over time you don’t hear them. And kind of to your point, imagine if you went back 50 years and then stepped into our lives now. I want the health care, I want the knowledge, I want the things, the good parts. But if they looked at our lives now, and I think sometimes older people do, they’d go, well, what are you winning? What is the benefit? As people’s work becomes more precarious, as economies fluctuate, as weapons production goes up. If you took a step back and then stepped into it, what would your view be? Zone of Interest is really about how we’ll be judged in the future, not about what’s happened in the past. So I strongly recommend it. I’ve really been thinking about this: are we going to get called out for having used the terms ethics and responsible a little bit too long, letting these systems be built in ways where we should have been calling stuff out earlier? Those of us in this field, how do we feel, and how do we hold ourselves to account, that we’ve actually acted responsibly and ethically, not the systems but us, in the face of immense commercial interests and pressure?

Liz  

And that, that in itself is a difficult conversation to have with yourself. I mean, particularly with regards to the idea of, well, what influence do I have? Sometimes it’s hard to know. But also sometimes, well–

Zena  

You can feel very small.

Liz  

Well, you feel small. But also, sometimes the conversations that you have with yourself are like, well, do I take this grant money? Or do I, you know, share my thoughts openly about this particular thing? Those are the kinds of conversations that we’re always having. We’re always making these decisions that we might, at some point, look back on and go, why? Why did I do that? Like that was–

Zena  

There’s probably a lot of reasons. You know, it’s our livelihood. A lot of people have families and mortgages. And, you know, there’s an element of pride in your career and wanting to progress, and certain grants help you get there. And it’s a lot. Human beings are complex, and as much as I’d love to just be able to say, “That was bad, and that was good,” it’s hardly ever the case, is it? 

Kobi 

I think you are having an impact. I think this podcast is really important, and so are other people’s voices that are critical and prompting thought. We might not have all the answers, but I actually would say the positive is that you’re speaking into the space, which is hard. And I think a lot of people don’t understand the tension that grant money creates in academia. They assume that academics are independent; people outside of academia don’t really understand the dynamics in academia anymore, so–

Zena

And you go from contract to contract, and you have to prove that you can bring in some dollars. Yeah, it’s hard. 

Kobi  

So knowing that you’re in this space speaking and interviewing and having a voice I think is really, really important. So thank you, and thanks for letting me be on your amazing podcast.

Zena  

Thank you for being here. We finally made it work.

Liz  

I know. And yeah, I’ve really enjoyed the conversation today. So thank you. Thank you for making the time, really appreciate it.

Kobi

Thank you for having me. 

**

Liz

Thank you for joining us today on the Algorithmic Futures podcast, and thank you again to the Australian National Centre for the Public Awareness of Science for allowing us to record this episode in their podcast studio. 

Zena

If you’d like to learn more about our podcast or listen to more episodes you can visit our website, algorithmicfutures.org.