In this episode, co-hosts Zena and Liz share some of their experiences on creating podcast episodes in support of the Social Responsibility of Algorithms workshop series and discuss the potential futures of the Algorithmic Futures podcast. Along the way, they have a wide-ranging discussion covering everything from how assumptions get embedded in technologies deployed at scale to what it’s like being a woman working in a male-dominated STEM field.
Listen and subscribe on Apple Podcasts, iHeartRadio, PocketCasts, Spotify and more. Five star ratings and positive reviews on Apple Podcasts help us get the word out, so if you enjoy this episode, please share it with others and consider leaving us a rating!
This episode was developed in support of the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.
Co-hosts: Zena Assaad and Liz Williams
Co-producers: Zena Assaad and Liz Williams
Episode 8 transcript
Liz: Hi everyone, I’m Liz Williams.
Zena: And I’m Zena Assaad.
And this is the Algorithmic Futures podcast.
Liz: Join us, as we talk to technology creators, regulators, and dreamers from around the world to learn how complex technologies may shape our environment and societies in the years to come.
Liz: In today’s episode, Zena and I look back at the episodes we made in support of the Social Responsibility of Algorithms workshop, and then we turn our conversation towards the next exciting phase in this podcast’s future. Along the way, we have a wide-ranging discussion covering everything from how assumptions get embedded in technologies deployed at scale to what it’s like being a woman working in a male-dominated STEM field. We had a lot of fun recording this one. I really hope you enjoy listening in.
Zena: Hi Liz.
Liz: Hi Zena. How are you?
Zena: I’m good. How are you going?
Liz: I’m doing alright. It’s been a good morning this morning. The weather doesn’t suck, which is good.
Zena: In Canberra? Really?
Liz: It’s a little cold. It was icy on my ride to work this morning, but the sky was beautifully blue, which I always appreciate about Canberra, even when it’s freezing.
Zena: I do remember having very blue skies in winter, but it was still excruciatingly cold, and I don’t appreciate that.
Liz: This is why you’re in Brisbane.
Zena: Exactly. Where it’s a beautiful 22 degrees and sunny.
Liz: I’m always envious. You always look like you’re wearing fewer layers than I am.
Zena: Yeah. One of the joys of having moved.
Liz: Yes. Cool.
Zena: So, Liz, the first Social Responsibility of Algorithms Workshop 2022 has now come to an end. And as most of our listeners know, this podcast came about because of this workshop series. So can you walk me through a little bit how the workshop series was developed, how it went over the course of this year, and what the connection to the podcast was?
Liz: Yeah, definitely, Zena. The Social Responsibility of Algorithms Workshop series started back in 2017 with our first guests on the podcast, so Alexis Tsoukiás of CNRS LAMSADE and Fred Roberts of DIMACS at Rutgers. And so, their idea with starting this workshop, as they said in that first podcast episode, was that they wanted to bring interdisciplinary groups together to explore this idea that algorithmic systems are being granted more and more responsibility. They’re being given more autonomy, and the design of that is actually really important to get right. And that is not just a problem for computer scientists or engineers. It is a problem for a vast array of people who look at these things as systems that sit within our environment, that connect with society.
So some of the examples that I have in mind are, for example, Facebook ads, the way that they are shown to people — that in itself is an algorithmic system that’s making decisions, and those decisions can sometimes have adverse impacts, particularly for already disadvantaged groups. So there are a lot of examples of these kinds of things. They really, really wanted to create a forum for exploring these things, addressing them, and creating opportunities for collaboration between people with very different disciplinary backgrounds, very different career paths.
And so that’s basically the tradition we aimed to carry forward with this current iteration of the workshop. It was a little bit more challenging this time. Usually, this is an in-person workshop, and that’s a lovely experience. I went to the one in 2019, and we had the workshop dinner with the lovely French food in Paris. We had lots of chances to talk with each other over coffee. And this time, we had to do it all virtually. And so, actually, that’s one of the reasons that we thought about doing the podcast is we knew that we were going to need to get a bit creative about engaging people in this conversation. And I personally love the podcast medium, and I thought that might be a really interesting way to bring in new audiences and share some of the work and the thinking that we’re doing, and potentially create an opportunity for enhanced collaboration for bringing in new ideas, all of that stuff.
Zena: I think the pandemic forced a lot of people to rethink how we facilitated things like conferences and like workshops. So can you explain a little bit some of the things that went well and some of the things that were a little bit more challenging with this new approach to facilitating the workshop series?
Liz: Yeah, definitely. So I guess I should step back and say because this was going to be a virtual-only conference, we had a few different challenges – the first of which is time zones. We had people we knew we wanted to allow to participate in the US, in Europe, and in Australia. And obviously, those time zones don’t work very well together, so we chose a compromise time. So for Australians, it was 8:00 PM to midnight. For those in the US, it was very early in the morning, I think at 6:00 or 7:00 in the morning to 11:00. And Europe, Europe had a nice lunchtime event. So that was one of the challenges: time zones. [Note to listeners: time zones refer to AEDT, EST, and CEST.]
We didn’t expect people to attend everything, and we were trying to arrange our programming so that when people needed to run a session, they were going to be doing that at a time that was at least semi-reasonable for them. So most of our case study sessions, as we called them, which were really interactive workshop sessions, small workshop sessions, were run by Australians. And so we had those first in the workshop. And then our keynotes and our discussion panels were in the second half of the workshop. The discussion panels were the least ideal. I think some of the Australians ended up participating at 11:00 at night or whatever time it happened to be, 10:30. Not ideal, but that’s just life when we’re dealing with a virtual conference.
And I think one of the things to keep in mind, one of the things that I definitely learned was that means that basically a lot of people are going to be relying on your content, on you to put your content out there later. They’re not going to necessarily be attending when the event is happening. And so we had smaller group sessions than we expected, but we had really good engagement, for example, for the podcast. And we are about to start putting the videos out for the conference, so all the keynotes will be released soon, as soon as we’ve got the subtitles all nice. [They are now all available here.]
One of the things that was surprising was that the small group sessions, what we called the case study sessions, we had a lot of interactive activities using various platforms like Miro and Padlet, which basically allow people to collaborate in an online environment. Those actually worked pretty well. We did get a lot of engagement in the breakout discussions. We did get a lot of engagement and actually interacting with the activities and contributing. And so, I consider that a success because we did actually get people to work together in a very challenging environment, and so we did manage to at least maintain some aspect of the workshop atmosphere of this kind of event, even though we were online.
Zena: And I wonder how much the podcast episodes contributed to the engagement because, to my understanding, the podcast episodes were really supposed to be a precursor to each of the case study sessions or each of the workshop sessions. So did you notice a direct correlation in engagement with people relative to the podcast episode that was related to that workshop session?
Liz: The honest answer is that there was a small listenership that was directly correlated with the workshop itself. What the podcast episodes did do, though, is they got the workshop planners, so the people planning each of the case study sessions, to work together and to put together something that they would release out into the world as a team. And so that very much set the scene for the workshop in terms of figuring out what it was going to focus on, figuring out what the themes were. In each case, the case study started with a recap of what the podcast had covered, a short one, so that we didn’t assume that people had listened to the podcast but could go and listen later on. And so, we did see a small bump in our downloads that was associated with that.
Zena: I think it’s really interesting that the benefits that you’re talking about with the podcasts are more so related to the facilitators of the workshop series rather than the listeners. And so I’m really interested in understanding because this workshop series was really an amalgam of different people from different disciplines, all trying to come together to create this cohesive piece, did you find that the podcast was a catalyst for that?
Liz: Yes, because it was a direct opportunity for us to work together – on a longer time scale, too. I mean, it wasn’t just about the workshop sessions. These episodes take a while to put together, and so it was an opportunity for all of us involved in producing each of the sessions to come together to explore the ideas that we had about algorithmic systems and social responsibility in a way that fostered, I think, some genuine collaboration, which, I think, was one of the great things about the podcast.
Zena: So you’ve spoken about some of the positives about the podcast. What do you think was one of the most challenging things about creating a podcast series simultaneously with a workshop series during the COVID pandemic?
Liz: I didn’t get a lot of sleep. Look. The podcast actually takes a lot of time to produce. I think it’s worth it, but I mean, as you know, you’ve been involved in producing these episodes, they take a lot of time, particularly with the format that we’ve chosen. Putting together a narrative and creating a story that is worth telling is not an easy task. It’s not something that you just sit down, record, and you do some minor edits, and you release it to the world. And that’s one of the ways we’ve been bringing in some richness: exploring the history of some of these systems, how these systems are created and connected, and how they influence society. These are the places we’ve been able to bring this in, but it takes a lot of time.
And any international workshop organization takes a huge amount of time. Add in the virtual component, and I actually think they’re much harder. There’s so much that can go wrong. I have to hand it to everyone who was on the organizing committee who was involved in putting this together, yourself included. Everyone put in a huge amount of effort to make this happen. And so, first of all, it was gratifying to get to work with all of these amazing people for such a specific project. Everybody I worked with on this was amazing, so that was awesome, but it was definitely a huge effort. I hope that answers your question.
Zena: It does. And from my experience, it was a huge effort. I think I walked into this knowing it was going to take quite a bit of my time, but I don’t think I really realized how much time producing a podcast actually took, so that was definitely eye-opening. And I can also say that working with everyone that we were able to work with and talking to all the interesting people that we were able to talk with was honestly one of the highlights for me. But before we go into the next steps of where the podcast is headed, I just wanted to touch a little bit more on the Social Responsibility of Algorithms Workshop series. Can you chat a little bit more about the series as a whole and where it’s headed from 2023 onwards?
Liz: I can, but before that, can I turn the tables on you for a minute?
Zena: Of course.
Liz: I’m really curious as to what you expected in terms of what putting together a podcast like this might have looked like at the beginning. And what are the key lessons that you learned in terms of – what would you do if we were to start this over?
Zena: So when you first approached me about this podcast series, I was really excited about it. And then I did some research into what it would take to develop a podcast because I wanted to know if I had not just the capacity but also the skills. We all have imposter syndrome, and I didn’t know if I was capable of producing this podcast series with you. And I did a bunch of research and I read all about how it takes a lot longer than just the recording session. And there’s a lot of things in the post editing that you have to do, and all of that stuff. And I thought that I had walked in pretty well prepared. But I think the thing that took me by surprise maybe, or that I wasn’t prepared for, was all the additional things that I wanted to do outside of what we had originally planned. And I think that’s something that came out with you as well.
So, for example, we would talk to some really interesting people, and then in the post editing, we’d be like, “Oh, this was super interesting. Oh, that was super interesting.” And so more work came out of the fact that we were really interested in what people had to say, and that’s something I think I didn’t really understand before walking into it. So it’s a lot more hours than you would expect walking into it, so that would’ve been one of the things I learned was to be a little bit more generous with the time that I put aside.
But one of the most interesting things for me was definitely all the people that we got to speak to. We were quite prepared for a lot of the recording sessions, so we had prepared questions for each of our guests to try and make the best use of the time. But the best parts for me were the tangential moments. So a lot of our guests would expand on the questions that we asked, and they’d go into these tangents. And you and I, being the people that we are, would ask even more questions that pushed them into tangents. And I think that ended up creating far more interesting podcast episodes, so that was definitely a highlight for me.
Liz: Awesome. Well, I want to ask you one more question about this before we turn back to the Social Responsibility of Algorithms and also then to the podcast’s future. You have played an instrumental role in creating the narrative for a number of our episodes, and I’m curious about the process. So how do you go about putting together the starting points of a narrative? How do you go about thinking through that, not just from your perspective but also from a potential listener’s perspective?
Zena: I think writing the narrative for each of the podcast episodes was something I didn’t expect I would do walking into the podcast. And it happened serendipitously, which I really loved. And if you remember, in the beginning, you and I were talking. And I said one of the things that I wanted to improve on a personal level was my creative writing skills, and that’s why I put my hand up to write the narratives for each of these episodes. And it started with … I would take these vigorous notes when we were recording with each of the different guests. And when I was taking notes, I would always highlight themes that I noticed as we were talking through them. So I remember in the first episode that we recorded, one of the themes was around the influences that Alexis and Fred had at early stages in their life, particularly from teachers and other educational people who were in their lives at the early high school stage. And that became a recurring theme. And so, I started noting all of these themes. So when I started writing the narrative for each of the episodes, I had the themes that I noticed on the side, and then weaved the narrative around those themes that I knew I wanted to highlight or that I wanted to bring forward. And I think one of the challenging parts about writing a narrative is getting worried that the things that I find interesting, other people won’t find interesting.
So I had these themes that I had picked out and these really interesting points that I would mark as we were recording with these guests. But there was always this thought in the back of my mind, “What if nobody else finds this interesting? What if this point isn’t as interesting as I actually think it is?” So I think having you as a sounding board and someone to review the narrative was a really good way to weave through that. I think having a second opinion or a second set of eyes that was like, “No, that is interesting.” Or, “No, that bit is actually not that interesting.” But that was definitely the most challenging part for me was what are other people going to find interesting, and are they going to find what I find interesting just as interesting.
Can you chat to us a little bit about where the workshop series is headed in the future?
Liz: Yeah, so that’s a really good question, and there are actually two answers to that. So interestingly, while SRA22, or Social Responsibility of Algorithms 2022, is the third in the Social Responsibility of Algorithms Workshop series, it’s the first in what we’re calling the Algorithmic Futures Policy Lab series. We got funding from the Erasmus+ Programme of the European Union to support three workshops, and Social Responsibility of Algorithms is the first of those three workshops. And the reason we did that was because we put together three workshops that are likely to build on the work that SRA22 was doing, and we’re likely to have similar audiences, basically.
So, in the near term, the Algorithmic Futures Policy Lab series will have a second workshop in December of this year in Paris. And this one’s going to be focused on human-machine collaboration. We’re going to be exploring some of the same themes, so looking at social responsibility, looking at environmental responsibility for some of these systems. We’re also bringing in a safety component, and so I’m really excited about that one. It’s going to be a hybrid event, so we will actually have people on the ground in Paris, and we’ll hopefully get to have those coffees and discussions in person.
We have one more workshop after that, so that’s Algorithmic Design for a Changing World. That will be in 2023. We don’t yet have the date set for that. For Social Responsibility of Algorithms, for that workshop series, I’ve just had a chat with Fred and Alexis about where they would like to take that. And at the moment, what we want to continue doing is creating opportunities for interdisciplinary collaborations to happen. And we want to do that in a way that we do have travel funds supporting kind of this meeting. We are at the stage of trying to seek funding for that, and we will be probably looking to continue focusing on basically places that often don’t show up a lot in this kind of discussion.
So we are talking about Australia. We’re talking about South America. We’re talking about Africa, and various other places that are not part of, I guess, the center of mass in terms of the research that happens to get done in this space. But we will basically have to see what happens with it. We do need to get funding for it, so that’s the next stage.
Zena: So you and I share a PhD student, Memunat Ibrahim, and Memunat is doing some research around exploring trust for autonomous ground vehicles. But the interesting part about her research is that she’s bringing in an African perspective, and that’s a perspective that’s currently missing from the existing literature. So can you chat a little bit more about the significance of the Social Responsibility of Algorithms Workshop series bringing in those missing voices and how you’re trying to create a space for that?
Liz: Sure. I can answer that based on the work that Memunat has been doing in the space. And there is some limitation to my answer, but I will say that for her research, what we find is that a lot of the literature on trust has been done in places, in developed countries, in places like the United States, in Europe, and various other places. These also happen to be the places that are designing these systems and deploying them broadly to the world. And so this, at least at first glance, doesn’t seem like a big deal. But when you start to dig into it, you realize there are all these assumptions about, first of all, what trust means in a given context, what that means in terms of, like in her example, how autonomous ground vehicles might need to act within a given environment.
And so, basically, what that means is that there are all these assumptions built into these systems that we are deploying broadly. And until we actually, first of all, have a way of acknowledging those assumptions, but also have a way of investigating whether and how those assumptions stand up in environments that are very different than where this research is being done, we actually don’t know what the possible implication of that is.
Now, in terms of the Social Responsibility of Algorithms Workshop, one of the things Fred and Alexis and I have been talking about is how there really aren’t that many case studies, in the fair machine learning space, for example, that are from places outside of the United States. This even extends to Europe, where the legislative framework, the policies that inform what data corporations can or cannot collect, are very different. And so you actually can’t do some of the same things that you can do with US-based data in Europe, for example, because you can’t collect the same kind of information. But you also can’t necessarily take the lessons learned from the United States and apply them directly to somewhere like Europe, for example, or somewhere like Australia, where the context is very different and where even things like the way people are categorized in a given study don’t apply. Right?
For example, in the United States, you might categorize people in terms of Caucasian, African American, Latino, etc. Those categories are not going to carry over directly to a place that’s very different. Right? And so, it’s actually really important to have broad perspectives on these things and to have research being done in a variety of contexts. So Australia might be one of those. But also beyond that, so Africa, South America, all of these places where these systems are likely to be deployed at some point, even if they aren’t now. And so that’s actually part of the reason that we want to keep holding this workshop to create opportunities for that kind of work to not only be carried out but also to develop audiences, to form collaborations between these places, to begin to build up that contextual understanding of what we need to think about when we’re designing these systems, when we’re designing policies that are meant to govern these systems. I hope that answers your question.
Zena: It did. And I just have one additional follow-up question. So with some of the stuff that you were talking about, about the cultural context for Social Responsibility of Algorithms, is that something that’s going to be explored in the next workshop series?
Liz: Yes. So for the Algorithmic Futures Policy Lab workshops, the next one, we’re definitely aiming to bring in voices from a broad range of places. And so, this is one of the reasons why we are bringing people together from a broad range of fields to even discuss what human-machine collaboration means but also to explore those safety, responsibility, and sustainability aspects of that. And so we’re really aiming for broad discussions from people from all over the world, exploring how we design these systems and also how we regulate for them, so that is a theme that we want to continue. I think it’s actually a really important part of the workshop series that we’re putting together, and it will continue to be an important part of the Social Responsibility of Algorithms Workshop series as well, of course.
Zena: Can you chat a little bit about where you see the podcast headed moving forward and if it will continue to be connected to the remainder of the Social Responsibility of Algorithms Workshop series?
Liz: Yeah, so I should back this up and say this is definitely a collaboration between the two of us. So this may have been an idea that came out of my head probably because I was listening to too many podcasts while I was on parental leave, but it is very much a collaboration between the two of us. And I think probably the best way to answer it is to say that this podcast series, while it started with Social Responsibility of Algorithms, we’ve always designed it so that it would have a life beyond Social Responsibility of Algorithms and could be brought in to explore topics and to bring in voices that, well, first of all, we want to talk to because that’s part of the benefit of running something like this, but also that allow us to explore some of the topics that are of interest to us.
I know you and I both have an interest in human-machine collaboration, human-machine teaming. I think there’s going to be an opportunity to invite people on to explore that. And while that relates topically to Human-Machine Collaboration, our next workshop, it’s not going to be tied in the way that the earlier podcast episodes were tied to Social Responsibility of Algorithms. It’ll be, basically, we’re going to be inviting people that we want to talk to and a bit more detail because they’re going to be interesting from a collaboration perspective, interesting possibly for our audiences, all of those kinds of things. At least that’s the way I’m thinking about it. What do you think? Where would you like to see it going?
Zena: So this idea of having podcast episodes that attach to case study-based workshop sessions was really interesting to me. But having worked on the podcast for, I think it’s been eight months now, I do feel like the podcast has taken on a life of its own. And the process of working on this series and working with you has been, honestly, a real highlight for me this year, so I am excited to see where it goes from here on out.
One of the best parts of this podcast series has been talking to interesting people. So I’m really excited about continuing that and continuing to engage with some really interesting people. And I’m also really excited about broadening the scope of topics that we talk about.
Liz: I’m actually really excited about that too. I think one of the things that we’ve been really careful about up until now is we haven’t really been bringing our own voices into it. I mean, clearly, our own voices are brought into it in the way that the narratives are shaped for each of the episodes. And some of these episodes are built on work that we have done together in a research context back when we were both at the ANU School of Cybernetics.
We haven’t actually been able to bring our own voice into it in quite a direct way, and I actually am really looking forward to doing that. And I think detaching it from the workshop series, which is very much a group effort — there are a lot of people involved — offers an opportunity to do that, if that makes sense.
Zena: No, it does, and I agree with you. So when I was writing a lot of the narratives for each of the episodes, I really tried to remain agnostic because the idea behind each of the episodes was that they were supposed to be inclusive and collaborative episodes that contributed directly to a particular case study. And so, I tried to not so much remove our voices, but I tried to create a more agnostic and neutral presence from the two of us in each of the narratives. But what was really interesting for me talking to these people was, like I said before, those tangents that you and I would constantly push people down.
And I think that those tangents definitely stemmed from our own personal interests, our own perspectives. And I do think that we kind of missed a little bit of that in the narrative. So while I think that those tangents helped us to create what I thought were more interesting narratives, I don’t think that the listeners actually got to see that process of us weaving around the original questions that we had prepared for each of our guests. So I’m really excited for the future episodes where we get to explore that a little bit more and have a little bit more transparency in what our perspectives are, and then the random questions that we ask our guests.
Liz: Yeah. No, I definitely would second that. The episodes that I ended up taking a lead in writing … that idea that we weren’t just putting forward our voices definitely shaped what the narratives ended up looking like. And we were definitely prioritizing the narrative over the exploration that we went on in the actual conversation. And this is one of the things that I love about podcasting as a medium is my favorite podcasts are the long-form podcasts where basically you get to explore tangents with an interviewer, with a guest who maybe you didn’t really expect to find that interesting, but you end up finding fascinating for whatever reason.
These are the kinds of opportunities that you get in a long-form podcast like the one we’ve got that you wouldn’t necessarily get to experience on a day-to-day basis. It’s almost like having a chat over coffee with someone fascinating that you never would’ve met otherwise and getting the chance to participate in that. And I’m really looking forward to bringing in a bit more of that atmosphere as we begin to play with the medium a little bit more, which I think we have an opportunity to do in the next phase.
Zena: One of the most interesting episodes for me was the episode … I think it was episode two, which was with Lyndon Llewellyn from the Australian Institute of Marine Science. And something our listeners didn’t really get to see was how we started talking with Lyndon about the community’s connection to the Great Barrier Reef. So that was something that we illuminated in the narrative, but it wasn’t really a part of the original questions that we had with Lyndon. And it wasn’t until we started chatting with him and going down those little rabbit holes that we started talking about the connection that the community had with the Great Barrier Reef.
And he expanded that to more than just the regulatory bodies and more than just the government organizations who are paid to maintain this reef and to protect it. It was things like the local tourist organizations. It was the local community. It was all of these other actors in the system that we hadn’t really considered when we had originally formulated all of these questions. And that was something that really came out in the narrative of that particular episode was understanding community involvement in the reef and the community’s attachment to that reef. And it really shapes how AIMS does their work. It shapes how the regulatory body makes decisions. It’s such a strong influencing factor in that system and one that, honestly, we weren’t aware of until we had that conversation with Lyndon.
Liz: I’m remembering that same conversation. One of the great things about just talking with Lyndon in general … I love talking with Lyndon … is you learn something amazing and new every day whenever you’re interacting with him. And one of the things that comes across, I think, even more in the raw interview that we did was how passionately he engages with his work and how deeply involved he is in understanding that community, understanding the role that he plays in that whole system: in the Great Barrier Reef, in the monitoring, in the regulatory processes that AIMS contributes to. It was very clear how much he cared about his work and how deeply he thinks about his responsibilities in terms of the role that he plays in the organization. And I think that’s the kind of thing that a podcast interview like that gives you a chance to see a little bit more, which is something that I love.
Zena: Me too. I think recording with Lyndon was one of my favorite episodes. And it was really interesting for me to hear this perspective of this person who had found his career because he followed his passions rather than having a strict five-year plan that he followed. He followed his passions and found himself where he was, and he’s still incredibly passionate about the work that he does. And that, for me, was really inspiring, especially as a woman in STEM. So I think this is something that I’d love to explore in the next podcast episodes.
But you mentioned exploring what it’s like as two people who actually work in this field, and I think the fact that we’re two women who work in what is still quite a male-dominated field is going to be a really interesting perspective to explore, and it’s something that we can explore with a bunch of our podcast guests. But being a woman in STEM has a lot of positives, and it also has a lot of challenges. And I think that influences not how passionate we feel about our jobs, but how we’re able to pursue the passion in our jobs, if that makes sense. It’s this really challenging balance that you have to have.
I don’t think that we always have the luxury of being able to say, “This is what I’m incredibly passionate about in my field and what I want to pursue.” I think we have to navigate some really challenging waters in order to be able to get to a point where we have the luxury of being able to say, “I’m going to do this one thing that I’m super passionate about.” And I think that’s something a lot of people in their fields will probably experience, but particularly for women in STEM or just women in any male-dominated field, really.
Liz: Yeah. Well, any minority, anybody who doesn’t look like the norm, I agree. You have to be much more strategic. I mean, this is something I’m still recovering from as a recovering nuclear physicist. The idea that you have to actually be perfect in some respect when you first stand up and start sharing your work with the field, because you’re going to be judged more harshly, means that the road to getting from your starting point to where you really want to focus your passions can be a longer one and a harder one. And I think, especially looking back to the workshop themes and some of the themes of our podcast episodes today, the bumps in the road, shall I say, are going to be different for everyone. And being able to illuminate that on some level and share the potential impacts of that, both positive and negative, is one of the things that I really like doing with these episodes. I like being able to explore the stories behind the work that you see coming out from various people.
Zena: I really gravitate towards what I would consider atypical or creative outputs. I think that being able to communicate work in a way that engages a wide and diverse audience is a really great skill to have, but particularly in my field. Coming from the aviation sector, unique or creative outputs aren’t the norm. It’s not that they’re not accepted, but they’re just not widely encouraged, I would say. And so, having the courage to pursue that work while still being taken seriously and still having your work seen as credible has been an ongoing challenge within my career. So this podcast has been a really great platform for me to be able to explore different creative outputs.
As I said earlier, writing the narratives was great for me because it allowed me to explore a little bit more creative writing. And working on this podcast as a platform for communicating different messages was also a really great experience for me. But then, more broadly, with my research and my education, it can be a little bit challenging to try and include more unique and creative aspects to your work and also still maintain that high level of credibility that we want. And I think it’s even harder when you are a woman because we already have that challenge of wanting to be credible. We already have that challenge of wanting to be seen as good at our jobs, essentially. And so, coming forward with these outputs that don’t look the same as everybody else’s and are a little bit different, it can be really challenging, I think, to defend those outputs and to show their impact.
Liz: I’m sitting here nodding like mad. You can see me, but obviously, nobody can hear me nodding. Yes, I agree with that. I do think it’s very hard to put out something like this because it takes a significant amount of time that is not necessarily valued the same way that an academic publication, for example, might be valued. I think it’s interesting to think about what an output, a creative output like a podcast actually means in terms of long-term benefit. And so, if you want to map those back to KPIs, for example, to performance indicators, all of these different terms that basically mean what boxes are you trying to check to enhance your future career, to move your career forward? What does an output like this do? How do we really make it work for us, particularly when we are already in a situation where we have to be very mindful of the work that we put out there and how it’s perceived by sometimes some tough critics?
Zena: Yeah, I couldn’t agree more. I think it’s really hard to follow your passions. We work in STEM, but we also work in academia, and we do have KPIs that we have to meet. I think it’s really challenging to be able to pursue our passions in a way that still meets the KPIs that we are required to meet in order to show our impact. But I do think that academia is slowly changing, and among myself and my colleagues, I have seen a lot of different approaches to impact, different approaches to the way that we do research, and the way that we do education. So I do think it’s changing, but change is always slow and gradual over time. And by being a part of that change, we have an opportunity to make a difference. We have an opportunity to try and direct that change. But being a part of the change process means that you are slow to see the results of that process.
Liz: Yeah, definitely. This is making me really think about the episode that my students put together looking at facial recognition in COVID-19 quarantine apps. That was effectively part of their education as PhD students. Putting that together was an opportunity for them to collaborate as a group of PhD students. We have a cohort program, which means that we want them to be able to support each other through their programs. And just having a podcast out there that I was producing, one that was able to give them an opportunity to collaborate and create an episode, that in itself was massively valuable from my perspective as somebody whose role was to guide them through this program. If we are looking for those opportunities to create value in that way in the education space or otherwise, I can only see good things coming out of that. But, of course, I might be biased.
Zena: No, I agree. I thought that episode was a fantastic episode. I think they created a really engaging narrative, and it was refreshing for me to hear different kinds of voices on that episode. I think it was reflective of a broader set of voices, and I think it would’ve appealed to a broader set of audiences because of those diverse voices on that particular episode. And I think it was a great opportunity for them, as well, to be able to create that episode because, as you and I both know, we developed skills from this podcast that I don’t think I would’ve developed otherwise. So I do think that it was a really great educational moment for them.
Liz: Yeah. I’ll have to double-check with them on that.
Zena: They might disagree.
Liz: They might. I don’t know.
Zena: So one thing I’ll add to that maybe just to expand on how it is an educational moment. So one of the biggest things we do in academia is writing. We write journal publications. We write conference papers. Writing is one of the critical parts of our job. And there is a particular style that we write with, particularly in STEM. So journal publications expect a particular style, and the same goes for conference papers. But when we think about broader impact of our research, I think there are some ways in which our style of writing in more traditional academic publications is perhaps not as accessible as we would think, especially for a more general audience. So from an educational perspective, writing the narratives for the podcast was a really great learning moment for myself as well because it required me to write a story in a way that was going to be accessible for a broader audience.
So when I was writing that narrative, I was thinking about audiences that extended beyond STEM, beyond people who worked in STEM, so audiences who may not have been familiar with the jargon that we often use in a lot of our publications and things like that. So from an educational perspective for our students, I think it was a really good opportunity for them to be able to explore different ways of being able to communicate research and to also explore ways of not so much simplifying, but ways of reevaluating how they communicate their ideas and their thoughts to audiences that extend beyond people within their field and within their domain. And that’s actually really challenging to do. It sounds simple, but it’s so hard to explain to someone a very bespoke piece of work that you’ve been working on for years, to explain it to someone who is not familiar with that field at all. It’s really quite challenging.
Liz: Yeah. And it’s becoming more important as well. I think academia is becoming more of a space where you actually do need to be able to communicate your work with external audiences. That’s becoming much more important as time goes on. And the more opportunities that we can create for ourselves and for our students to learn how to do that in a somewhat safe space, the better, I think. So from that perspective, I think that’s something that I would like to take forward in terms of what the future of this podcast looks like, what opportunities can we create, not just for ourselves, but also for our students, for those that we are helping become independent academics or whatever they want to do after they finish their PhDs.
Zena: With the wave of interdisciplinary research that’s coming out alongside all the work that’s being done around trusted autonomous systems, and socially responsible algorithms, and artificial intelligence, being able to communicate your work to a broader field is even more important. So for a lot of our students, they do sit within the interdisciplinary realm. So being able to explain their thoughts, and their perspectives, and their views to an audience that extends beyond their field has never been more critical.
Liz: Yeah, yeah, I agree with that. Actually, in order to have a collaboration with true cross-disciplinary work, where you don’t just sit in the box of your discipline and do a part of the project, but actively work together with people from other disciplines and other perspectives to create something new, you really need to be able to communicate well across the various divides. You need to understand that you might be using the same terms and mean completely different things. How do you deal with that in a collaboration? We touched on this in several of our episodes with our guests. It’s not an easy thing. It’s always a challenging thing. And so, I agree with you, developing the skills to communicate to a broad audience also has knock-on effects in terms of interdisciplinary or cross-disciplinary collaborations that truly don’t just stick some disciplines together to form an output, that really start to weave them together in a way that generates new possibilities.
Zena: I think so. And I think one of the difficult things to navigate with cross-disciplinary work is being challenged on your way of doing something. And I think this goes back to one of my earlier comments around having the courage to pursue outputs that are atypical or more creative than what your discipline usually produces. And so, with cross-disciplinary work, you do get the opportunity to do that, but even your own perspective on the kinds of outputs you want doesn’t always align with that of people from different disciplines, or who are coming from a different perspective. And so you get challenged on the way you think an output should be produced, or on the way you want to approach a problem, because we’re all trained to approach problems differently, depending on the fields that we’ve been educated in. Right?
My background is in engineering. I’ve been trained to approach a problem in a particular way. And so, working in a cross-disciplinary sense has really challenged how I’ve been trained and how I work, especially when I work with people who work more in the social sciences area. They really approach research in a very different way. They approach problem-solving in a really different way. So working with those people, being challenged on the way that I usually do things, it was definitely a growth moment for me. And I think that it was able to expand my skill set, but it was a challenge that I had to navigate.
Liz: I definitely agree with you. Even just realizing that there are different ways to think about how you create knowledge, and what assumptions you bring to it, requires a very different perspective than the one I’ve been trained to take on. So it’s a really good challenge. I enjoy it, but it does take time to understand. It requires the ability to communicate effectively and clearly, and to understand that even though you’re doing your best in that regard, you might still get it wrong and need to work through things with whomever you’re collaborating with.
Zena: No, I agree. And I think we definitely saw that in a lot of the podcast episodes that we recorded when we spoke to people who worked in interdisciplinary or cross-disciplinary sectors, so Alexis and Fred, Lyndon. Pia Andrews was another one as well. They all spoke to those challenges of working in those spaces.
Liz: Yeah, definitely.
I’m really looking forward to this next phase of the podcast with you. I’m looking forward to being able to explore some new formats, to be able to bring in some different voices, to also get to spend some time to explore what our backgrounds are, how we ended up where we are. I know some of your story. You know some of my story, but it would be interesting to have the opportunity to share that as well. And so, I’m really excited for this next phase. I’m really glad we’re working on it together.
Zena: Me too. I’m really excited to continue working on this podcast with you, and hopefully create some more exciting episodes for our listeners.
Liz: Thank you so much for joining us today on the Algorithmic Futures podcast. To learn more about the podcast, the Social Responsibility of Algorithms workshop and our guests you can visit our website algorithmicfutures.org. And if you’ve enjoyed this episode, please like and share with others – it really helps us a lot.
Now, to end with a couple of disclaimers.
All information we present here is for your education and enjoyment and should not be taken as advice specific to your situation.
This is the last podcast we have created in support of the Algorithmic Futures Policy Lab – a collaboration between the Australian National University School of Cybernetics, ANU Fenner School of Environment and Society, ANU Centre for European Studies, CNRS LAMSADE and DIMACS at Rutgers.
The Algorithmic Futures Policy Lab receives the support of the Erasmus+ Programme of the European Union. The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of this podcast episode’s contents, which reflect the views only of the speakers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.