Today, we’re honoured to be joined by Jenny Zhang — a software engineer and writer based in Canada. Her purpose-driven approach to technology development comes through clearly throughout our time with her, and (we think) offers up lessons to anyone seeking to generate beneficial impact in the tech industry.
Listen in as we talk to Jenny about her circuitous path to software development, what it means to be a full stack engineer, her considerations of privacy and safety in voice datasets, values and career trajectories, and more.
This is the last episode for this season of the Algorithmic Futures podcast. We’ll be back next year for season 2!
Credits
Guest: Jenny Zhang
Hosts: Zena Assaad and Liz Williams
Producers: Zena Assaad and Liz Williams
Sound editors: Cyril Buchard (with final edits by Liz Williams)
Transcript
Liz: Hi everyone, I’m Liz Williams.
Zena: And I’m Zena Assaad. And this is the Algorithmic Futures Podcast.
Liz: Join us, as we talk to technology creators, regulators and dreamers from around the world to learn how complex technologies may shape our environment and societies in the years to come.
Zena: Today, we’re joined by Jenny Zhang. Jenny is a software engineer based in Canada. She currently works with Tailscale and is also a writer with a keen interest in exploring the impact of technology on society. As you’ll hear from Jenny in this episode, she came to software development via a circuitous path that, she says, has given her a somewhat different perspective on the industry she now contributes to.
Liz: Her purpose-driven approach to technology development comes through consistently in our conversation covering everything from what it means to be a full stack engineer to her considerations of privacy and safety in voice datasets for her past work with Mozilla’s Common Voice project.
Liz: Hi Jenny. Thank you so much for joining us today. We are really excited to have you on. And I’m really looking forward to hearing more about the journey that you have been on up until now. Where you started, how you ended up where you are now. I was wondering if you might start out by sharing a bit about your backstory with us.
Jenny: Yeah, for sure. I think, contrary to what I hear in a lot of these types of podcasts and interviews, I didn’t know what I wanted to be when I was young. I feel like I maybe often still don’t know what I want to be when I grow up.
But the way I found my way to computers and the internet is I was really drawn to stories and books when I was younger, and the internet is where you find communities for the stories and books that only the geeky fifth grader in the corner of the playground has read.
And so very early on I figured out bulletin board systems and online journaling systems and those kinds of communities. And in the late nineties and early 2000s, you still needed a certain amount of technical knowledge to be able to participate in those communities. And I feel like the way I stumbled into computers is more an accident than anything else, where my thought process was I need to learn this to go talk to my friends, so I’m going to learn this to go talk to my friends.
And when it came time to figure out what I wanted to do professionally, it just never occurred to me that making websites, doing things on the computer was a path, because that always felt like a hobby and it felt like part of my self-expression and part of art and all of those things.
And so I went to business school and I studied sociology and I tried a whole bunch of other different types of things before realizing, “Oh no, I should just go code for a living, because I really enjoy that.” And that was kind of a circuitous way into the industry. But I think in some ways the fact that it was a non-traditional approach has helped me a lot in being able to have a slightly different perspective on the norms and the practices of the industry.
Zena: Can you expand on that a little bit? You said that you studied business and you studied sociology. Was there anything from that particular background that you find you do kind of integrate into your current work?
Jenny: Yeah. It’s funny because I went to business school and I think I realized very early on that business school was not a good fit for me. But I am a first generation immigrant. I come from a family of immigrants. And so the concept of life stability and picking something safe and secure and sticking with it was drilled into me from very early on. It just didn’t occur to me that I could say, “Oh, I’m going to drop this and try something else.” In trying to figure out how to make business school work for me, I first had to find things I found interesting within the financial system, the organizational behavioral system, HR, all of the different trappings that you normally associate with traditional business.
I had to find a way of relating that to my interests around community building and storytelling and those sorts of things. I think that was one of the advantages. And then I think another advantage is that frequently engineering can happen in a little bit of a vacuum in which we look at a computer system and we say, “Okay, what is the platonic ideal of how to solve a given problem?” But the systems we design never exist in a vacuum. They don’t exist independently of the people who use them or the systems that those people exist in. And I think being able to understand the financial system and how economic forces will act on people and their behaviors and how the people might then approach what they use and why, is a really important part of designing good technological systems.
I think there’s a little bit more of an understanding of that now than there was 15, 20 years ago. A lot of the work I see in user research and interdisciplinary studies is starting to emphasize that more. But I think when I was first getting into tech, there was still a very utopian view of “if we build it, they will come”. Or if we just build the perfect system, this problem can be solved. And I feel like I had fewer of those. I wouldn’t say none, but I feel like I’ve had fewer of those delusions going in.
Liz: That makes a lot of sense. I’m curious about how your understanding of financial systems, human responses to economic forces and all of that influenced how you went about your job when you went into software engineering? How did that influence how you approached your role, and were there any particular challenges that came with having this knowledge in a field where this utopian picture you mentioned was pretty common?
Jenny: I’m a full stack engineer and I think if you ask five different full stack engineers what a full stack engineer is, you’ll probably get six different answers. Broadly speaking, the way I think of it is I work on and am in charge of, for lack of a better term, every piece of the technology stack that gets a piece of web software from the server all the way to the user.
That might include a database system; it might include what’s called a backend service, the computer in the cloud that actually serves up the images and the data that a user might be seeing. But it also includes the interfaces on the front end, your browser, your phone, whatever, that a user might be interacting with. Part of what I really enjoy about full stack engineering is looking at a system holistically. To me that system includes the user. The thing that I always remember from day one, having not entered the industry via a traditional computer science path, is that, one, the user is often not who you think they are.
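For listeners less familiar with the jargon, here is a minimal sketch of what one thin slice of that stack might look like in code: a tiny backend endpoint standing in for the server and database layers, and the frontend call a browser would make. Every name, route, and value here is made up for illustration.

```typescript
// Hypothetical slice of a web stack: backend endpoint plus frontend consumer.
import { createServer } from "node:http";

// Backend: a tiny HTTP service standing in for the server and database layers.
type Profile = { id: string; displayName: string };
const fakeDatabase: Record<string, Profile> = {
  "42": { id: "42", displayName: "Ada" },
};

const server = createServer((req, res) => {
  // e.g. GET /profiles/42
  const match = req.url?.match(/^\/profiles\/(\w+)$/);
  const profile = match ? fakeDatabase[match[1]] : undefined;
  if (!profile) {
    res.writeHead(404).end();
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify(profile));
});
server.listen(3000);

// Frontend: the code running in the user's browser that consumes the endpoint.
async function loadProfile(id: string): Promise<Profile> {
  const response = await fetch(`/profiles/${id}`);
  if (!response.ok) throw new Error(`profile ${id} not found`);
  return response.json();
}
```

A full stack engineer, in the sense Jenny describes, is responsible for both halves of that exchange, along with the database, the infrastructure, and the experience of the person on the other end.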
We software engineers can sometimes get in our own heads and we think, “Oh of course it makes sense for this button to look like this or for this system to behave in this way because that’s obvious to everybody,” which is never the case. And the classic example I give is every software engineer has a suite of tools they use every single day that they fight with and that they struggle with and that they’re frustrated by. But every single one of those tools was designed by another software engineer who thought that what they were building was intuitive. All of us are coming from a specific context and a specific framework.
I had seen how somebody who was going into Wall Street and learning to trade and using a Bloomberg terminal interacted with a computer, and how that might differ from somebody who was trying to learn R to do some sort of research survey for a sociology class, and how the different understandings of what a computer is and what a computer does really inform how they approach the system.
It means that I always start every single build or feature or product or technology with a question of “Do we know who we’re building this for and why?” It’s not just, “Are we building a thing because it’s cool? Because we want to try a new technology? Because we want a shiny new thing on the resume?” Those are all perfectly legitimate and good reasons to build something, but if you want somebody to use your thing and to benefit from it, then you have to understand who that person is and why they’re coming to you.
Liz: Yeah, absolutely. It sounds like a really, really important part of making sure that a product you put out there is actually going to work for the people you’re designing it for. I’m curious, we’re talking about a user as a singular, but just looking at your background a little bit, I know you have done a lot of work in thinking about not just one user but a broad range of users, and thinking about the implications of design from those many different perspectives.
For example, thinking about privacy and thinking about how different considerations with regards to how something’s designed in terms of user privacy might have different impacts on different user groups. I’m curious to get a sense of how you think about users – collective – when you’re designing some of these systems.
Jenny: Yeah, I think that’s a really good point. The user is never a monolith. And I think there is very much an understanding of this in software. You’ll hear user researchers and product designers talk about the concept of personas where they’ll come up with a series of prototypical users that they might think of as being the audience for their software. I think some of that work is already being done. For example, the product that I work on right now is really technical and a lot of our users are people who run IT departments for companies or who are software engineers or DevOps people themselves.
And so we speak to that audience very differently than we would speak to a hobbyist or to somebody who is coming in with very little computer literacy or network literacy. But the thing that I like to think about when thinking about the broader impact of the thing I build specifically around things like privacy and potential harms and ethical implications and things like that is, who is not in the room?
I forget who said this: “If you want to build a system well, take care of the edges and the room will build itself.” Instead of starting a design or a system from “what is the best-case scenario, let me build that and fill in the gaps later,” start from “how can the system go wrong” and fix those things first, before you then look at what engineers call the happy path, when everything just works exactly the way it’s supposed to.
And I think, you see this attitude sometimes in talking about accessibility or making sure a website is usable by screen readers or friendly to people with vision impairments and those sorts of things. The attitude is often like “We’ll build a website and then we’ll add the accessibility features later.”
Whereas if you started from the principle of who is the least likely to be able to access this website and how can we make it good for them, you come up with a very different paradigm of how to even start something. This is especially true for things that are very visual, like infographics and data visualizations.
It’s hard to take something that’s been designed to be interacted with by a sighted person and then adapt it for somebody who is blind after the fact. Whereas if you came up with a design that was universally accessible from the start, that can take you in entirely new directions that you might not even have thought of. I obviously am very privileged in a lot of different ways and I do not pretend to be an expert in these fields, but the question that I always try to ask myself is whose voice are we not even aware we’re not listening to.
Zena: I think that’s a really interesting point. And you mentioned earlier that you are the daughter of immigrant parents and that shaped, I guess, your approach and your decisions as you went along. And one of the things I always think about when I’m working on a particular project or helping to create or to build something is that I always imagine my parents using it and I think about the challenges that they would come against. You were talking about the different voices that you try and represent, and we were talking about differently abled people, but do you also sometimes think about migrant people or people from different backgrounds or a diversity of backgrounds and the kind of, I guess, experience that they would have with the systems that you build?
Jenny: Yeah, absolutely. It’s something that I find fascinating, because I think the archetypes we have in our minds of who is and isn’t good at computers are also very much informed by social norms. There was a movement a few years ago, I think, for people to stop saying, “This thing is so easy, even your grandma can use it.” Whereas, well, a lot of the early pioneering software engineers who invented programming languages were women and they’re grandmas now. Maybe tone that down a little bit.
On the flip side, we look at the concept of the digital divide and the Global South, and we think about all of the ways in which networks and internet culture and internet resources are designed for people in the West predominantly. But then you look at something like WeChat in China that is WhatsApp plus Twitter plus Venmo, plus any number of things combined into this holistic ecosystem that’s existed for many years before we had any of those systems, you know, Apple Pay or whatever.
Or we talk about people in India or in Africa whose primary mechanism for connecting to the internet is a cell phone, and they have a very different experience of the internet than people who might start from a desktop device. And I think it’s important when thinking about the migrant experience to remember that it’s not just how can we help them, but also what can we learn from them and how can they help us? Because just because a set of technologies developed differently from the Western trajectory of a desktop to a laptop to your cell phone doesn’t mean it’s worse, doesn’t mean it’s any less innovative or creative. The things that you can do with interactive voice systems and phone trees in areas where a megabyte of bandwidth is exorbitant are really impressive.
Liz: Going back to your point about designing something for accessibility, where accessibility is actually where you start as opposed to something that’s tacked on afterwards: often these systems, when you are designing in that way, end up being more interesting, more creative, and actually everyone benefits from that to some extent. So I think that point of actually “what can we learn?” is a really, really good one. I think it reframes this idea that, “Oh, we need to make sure everybody can access this stuff, so we need to do this extra work.” It reframes it to actually look at it as an opportunity, which I think is quite interesting.
[Music by EvgenyBardyuzha from Pixabay]
Liz: I know you used to work with Mozilla. One of my students, Kathy Reid, is doing a PhD project on voice, and she’s actually how we met. I know you’re not working with them now, but I wanted to talk a little bit about some of the work that you did with their Common Voice project. Mozilla describes it as a project to help make voice recognition open and accessible to everyone. This is straight from their website. I want to give people a sense of how something like Mozilla Common Voice works. What, at least on a basic level, goes into creating something like this? What’s the end goal and what might it become in the future?
Jenny: Yeah. I think Mozilla Common Voice is a really great example of the ideal of what open source can achieve. I’m going to give a brief overview of the technical requirements that went into building a voice model at the time when it started. It’s a little bit out of date, because I’m presenting what Common Voice set out to achieve in 2018, and the state of technology has advanced a lot since then.
But when they started, the thought was, in order to build a voice assistant that could do speech recognition for a given language, you needed anywhere from a few hundred to a few thousand hours of speech for that given language, along with transcripts, along with annotations to be able to train computers to say “these syllables mean these words and this is how a sentence is constructed.” And the problem with that is that data is very hard and expensive to gather, and it’s really only accessible for the dominant languages in our Western paradigm.
As an example, as of 2021, which I think was the last time I checked on this, none of the major voice assistant providers, so Amazon, Apple, Google, had a voice assistant that was capable of recognizing any African languages, which is ridiculous. That’s an entire continent. There are dozens, hundreds of languages quite frankly, and the fact that none of them are served by the major tech companies seemed like a gross oversight.
And part of it is just straight up capitalist concerns. If there’s no money to be made in building Siri for Swahili, they’re not going to do it. But those people who speak Swahili, people who speak languages that major tech companies don’t care about, still deserve to be able to have assistive technologies. How do you give people the tools to create those data sets themselves? And so what Mozilla set out to do is basically build a platform that was entirely community driven from beginning to end. The community would contribute public domain, open source sentences from places like the various forms of Wikipedia, but also books that had fallen out of copyright and stuff like that, which would then be submitted to the Common Voice database.
And those sentences would then be read out, also by volunteers from that language community. Other volunteers from the language community would then listen to the voice clips and say, “Yes, this is a good clip,” or “This is not a good clip.” And then at regular intervals, Mozilla would package up all of those voice contributions and upload them and give them back to the community so that anybody can download them. Which means that it was really trying to be a community hub for people to organize and build their own voice data sets.
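As a rough sketch of the contribution and validation loop Jenny describes, the logic might look something like the following. The types, the vote threshold, and the release rule are illustrative assumptions, not Common Voice’s actual implementation.

```typescript
// Hypothetical model of a community-validated voice clip.
type Clip = {
  sentence: string;   // public-domain text contributed by the community
  audioUrl: string;   // a volunteer's recording of that sentence
  upvotes: number;    // listeners who confirmed it is a good clip
  downvotes: number;  // listeners who flagged it as a bad clip
};

// A clip only counts as validated once enough listeners agree (assumed rule).
function isValidated(clip: Clip, minVotes = 2): boolean {
  return clip.upvotes >= minVotes && clip.upvotes > clip.downvotes;
}

// At regular intervals, validated clips are bundled into an openly
// downloadable dataset and given back to the language community.
function buildRelease(clips: Clip[]): Clip[] {
  return clips.filter((clip) => isValidated(clip));
}
```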
I think the idea of technology being able to give underserved communities and underrepresented communities agency over the tools they’re building was a key part of the promise of open source and a key part of the promise of what something like Mozilla set out to do. I think as momentum picked up around Common Voice, obviously Mozilla engaged in a lot of outreach and partnerships in order to spread the tool to as many different places as possible and to help people get onboarded onto the platform.
And there’s a lot of work that goes into how you make something as complicated as a machine learning data set accessible to people who may have never thought about what goes into your phone being able to transcribe your text messages for you. But fundamentally, I think the strength of something like Common Voice lies in the community, and the strength of the community lies in the promise of what they can build for themselves without necessarily the intervention of Apple and Google and Facebook.
Zena: So I understand that some of your work in this space has touched on some of the links between the privacy of those contributing data and how a product like this is designed. What are some of the concerns you think designers need to take into account when creating something like this?
Jenny: Yeah. That’s a really good question and a really big question. I think the first thing that is always top of mind for me is that when you are giving your data to a group project like Common Voice, like anything else, there’s a certain degree to which the data set can be anonymized. We can strip out your name, your email address, your gender, your anything else.
We can try to basically de-link the person who contributed the voice clip from the clip itself. But at the end of the day, the idea that any data is truly anonymous is a little bit of a fallacy. It’s especially a fallacy for something like voice. Because regardless, if somebody who knows me hears a clip of me talking, they’ll be able to say, “Oh, Jenny was the person who said that.” My voice data is inextricable from my identity.
But I think a thing that people often don’t realize is that it is very easy to pinpoint who somebody is based on the fonts that are installed on a computer. Any browser will have access to all of your computer’s system fonts, and that combination is unique for some absurdly high percentage of users. That kind of browser fingerprinting is really common and really, I would say, insidious, because people don’t realize what’s happening.
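To make the font example concrete, here is a hedged sketch of that kind of probing using the browser’s CSS Font Loading API (document.fonts.check). Real fingerprinting scripts are more elaborate and combine many signals, and the candidate list here is arbitrary, but the idea is the same: each yes-or-no answer adds identifying entropy.

```typescript
// Fonts to probe for; a real script would test hundreds (illustrative list).
const candidateFonts = ["Helvetica Neue", "Segoe UI", "Noto Sans CJK SC", "Comic Sans MS"];

// Returns the subset of candidate fonts the browser can already render,
// which usually means they are installed on the user's system.
function probeInstalledFonts(): string[] {
  return candidateFonts.filter((font) => document.fonts.check(`12px "${font}"`));
}

// Joined into a string, the result becomes one more signal that, combined
// with others, can narrow an anonymous visitor down to a single person.
const fontSignal = probeInstalledFonts().join(",");
```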
I think we need to start from the assumption that true privacy and true anonymity are impossible in a project like this. Then the question becomes, “What do you do to make it as safe as possible?” Common Voice does a lot of work to try and anonymize people and to make sure that demographic data isn’t attached if the sample group is too small. But I think a really key piece of the puzzle is also education: not making promises that you can’t keep about what anonymity and internet privacy mean. We should be able to say to people, “Here are all of the steps we are taking to protect you, but also here are the risks that you run in contributing to something like this.”
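The small-sample rule Jenny mentions can be sketched as a simple threshold check, in the spirit of k-anonymity. The field names and the threshold below are assumptions for illustration, not the project’s actual policy.

```typescript
// Hypothetical demographic metadata attached to a released clip.
type Demographics = { ageBracket: string; accent: string };

// Only attach demographics when enough contributors share the same
// combination; otherwise the combination itself could identify someone.
function attachDemographics(
  demographics: Demographics,
  groupSize: number,   // contributors sharing this exact combination
  minGroupSize = 50    // assumed threshold, not Common Voice's real one
): Demographics | null {
  return groupSize >= minGroupSize ? demographics : null;
}
```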
That kind of honesty applies not just to things like Common Voice, where you are donating your data voluntarily, because I think by and large, most of the people who contribute to a project like this are doing so with a purpose in mind and know what they’re getting into. But I think it applies to any product that tries to say, “We promise to keep your data private.”
There is a technologist, activist, and writer named Maciej Ceglowski, the founder of Pinboard, and he has a really, really good metaphor for large troves of data, which is that data is like nuclear waste.
The promise of what collecting large amounts of data can do is amazing. You want the clean energy, you want the renewable resource, all of that stuff, but we don’t know what we’re doing with it at the end of the day. We have no long term plan for it. It’s just accumulating in a data warehouse somewhere until one forgotten software update later, that data is all over the internet.
And I think the only real promise you can make about data is that the only truly private data is data you don’t collect to begin with. With that in mind, if you are starting out on a project like Mozilla Common Voice or any other community project, for every single data point you collect, you have to ask yourself, “Is this necessary? Why am I asking for this? Am I asking for this because it is nice to have, or because I think I might need it one day?” If that’s the answer, maybe you shouldn’t be collecting that data.
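That data-minimization question can be made almost mechanical at the schema level. Here is a minimal sketch, with hypothetical field names, of collecting only the fields that have an articulated purpose.

```typescript
// Each proposed field must state why it is needed; "nice to have" gets no purpose.
type FieldSpec = {
  name: string;
  purpose?: string;
};

const proposedFields: FieldSpec[] = [
  { name: "audioClip", purpose: "the contribution itself" },
  { name: "language", purpose: "needed to bundle clips into per-language datasets" },
  { name: "email" },            // no purpose articulated
  { name: "preciseLocation" },  // no purpose articulated
];

// Anything without a concrete purpose is never collected in the first place:
// the only truly private data is data you don't collect.
const fieldsToCollect = proposedFields.filter((field) => field.purpose !== undefined);
```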
Zena: I’ve always been really interested in people’s concerns about the privacy of their data. And it’s not that I’m negating those concerns, I think they’re totally relevant, but sometimes I think about the decades and decades of curated data and information that we see on other platforms like in libraries and in museums and things like that. So the concept of storing and keeping data has been around for a really long time. But the debate around data privacy really emerged when we started putting these things online. I wonder what your perspective is on why it’s now a concern that’s at the forefront of discussions, even though technically the concept of storing, keeping and making publicly available other people’s data and information has actually been around in different forms for a really long time?
Jenny: Yeah. I think the biggest difference there is intention. Because I think people who are creating libraries and archives are generally doing so by drawing from sources where the person who created that media source did so intentionally. Like a newspaper, a journal article, a book, a diary, even letters. Those are things that required thought and effort to be put down in a form that was more or less permanent. And then even in those situations, only media that passed some gatekeeping mechanism was published and then was available for the public.
Generally speaking, unless you are somebody super famous who died tragically, your diary is not going to end up in the Smithsonian. I think the thing that the internet and computers have made very different is that most of the data collection is happening unconsciously, unintentionally, and indiscriminately. The example I give is, you wouldn’t expect to go to Starbucks and be able to hear a recording of every single conversation that has ever happened at that Starbucks. But we kind of have that expectation of Twitter in a way that doesn’t really make sense to me.
And I think some of the debates around like, “Oh, this is the public square and this is a forum and whatever,” kind of negate the idea that most of the data we generate is like the carbon dioxide we just breathe out. We’re not trying to make some lasting statement when we make a dumb joke about the Oscars. We are just living our lives. It is very hard now to live your life in a way that doesn’t generate data, and so asking people to opt out of data collection oftentimes is asking them to opt out of having a social life at all.
The fact that there is no meaningful opt out available, and the fact that people are not able to control the parameters and the context of that data collection is, I think, a huge problem and a huge part of why people are concerned. The things that I say to my sister are not things that I would say to my boss, and I don’t think there’s anything wrong with that. But I think within some of the distorted paradigms of transparency that we have, that’s seen as somehow hiding something.
Liz: Makes a lot of sense. Listening to you, I’m contrasting this to the situation we have for research ethics. I’m on one of the human research ethics boards here at the university, and one of the things we are asked to really be careful about is if somebody wants to do a project where covert data collection is happening, there’s a high level of scrutiny in terms of like, “Well, can you collect data in that circumstance? What does consent mean? What’s the risk of that?” It gets back to your question about safety and what safety means in terms of the considerations of, “Is this data you should be collecting or not? And how should that be handled? How should that be communicated?” Maybe we can go back to that concept of safety. How do you think of safety with regards to what data you, as somebody designing one of these systems, might collect?
Jenny: I think safety is a really interesting framework for analysis. Because safety by a different word or by a different phrase is the potential for harm. And I think one of the things that system designers always need to think about is “What are the ways in which your system can be used to harm?”
The thing about safety that I struggle with a lot, personally and professionally, is that what is and isn’t considered safe is very contextual and very historically situated. Something that might seem benign now wasn’t 15 years ago, and may not be again in 50 years. The example that’s currently raging in the United States is, if you are using digital tools for tracking menstruation cycles, will that be used one day in a court subpoena to punish you for abortion? Those are the kinds of things that I always think about when people who are advocates of open data and radical transparency say, “If you don’t have anything to hide, what are you afraid of?” Well, I don’t have anything to hide with my menstrual cycle, but I am afraid of draconian governments coming in and misusing that in some capacity in a way that I don’t have control over. I think it’s very hard to say, “How can you design a system that’s truly safe?” Because I don’t think anybody can predict the future.
What you can do is listen to all of the people who have been harmed in the past, because usually they’re the ones who can see these things coming. The best thing that you can do for a system you design is, if you are going to do something, know why you’re doing it and know what tradeoffs you’re making. You may not be able to erase all of the risk, but you have to know that the risk you’re taking is for a reason.
The example that came up recently with Apple AirTags is “we’re going to design these little Bluetooth things that you can drop literally anywhere and nobody will know that they’re there and they’ll just tell you where the location of that thing is anywhere in the world”, which is great if you’re somebody who worries about lost luggage a lot, but not great if you are somebody who has to worry about stalkers.
And that’s something that I think anybody who has been threatened by an angry ex-partner, anybody who has been in a domestic violence situation, anybody who has been marginalized would understand. That is probably 70% of the world’s population.
All you had to do was listen to them. And I think the fact that we oftentimes don’t start off from the perspective of what’s the potential harm in this, who is going to be the most harmed in it, and how can we address their concerns and their risks, is how we end up in a lot of these situations.
Liz: That makes sense. And I think the idea that safety is actually a dynamic thing and it’s evolving and it changes as a function of time, governments and the context in which we live and how those things evolved with time, I think that’s a really important thing to keep in mind, especially with regards to data that is kept and it’s taken at one time and it might be used for something completely different in another time. The menstrual cycle example in the face of Roe v. Wade being overturned is a really good and scary example of that.
I wonder if now this might be a good time to talk about your history of taking on positions that have allowed you to somehow contribute to creating technology designed with some aim for achieving equity, privacy, fairness. It seems like you’re drawn to these kinds of projects that are designed first to generate some kind of collective benefit. First of all, I’m wondering if that’s fair to say, is that the kind of project you’re drawn to?
Jenny: Yeah, I think maybe not initially. I think when I was 21 and trying to figure out my life, I would go anywhere that would hire me. As I have gained more seniority and agency in my career, I’ve definitely been more drawn towards how can I use the structural power I have to make sure that the system harms fewer people if at all possible.
I would describe myself as a skeptic of technology. I would describe myself as being very critical of technology. But at the end of the day, there’s a part of me that still really believes in what the internet should’ve been, what computers should have been, the things that we could have achieved with it, had we not fallen off the deep end of tech over everything else.
I’m always trying to find a way of, “Can we get back to what this was originally supposed to be? Can we get back to a world in which humans use this cool thing where literally anybody can talk to anybody else at any point in time and do good things with it as opposed to bad?” That sounds really facile, and it sounds really naive. But a friend of mine says that my strength and also my weakness is that I continue to hope, even when I rationally think that maybe that’s a silly thing to do.
Liz: That makes sense. Does that feed into the kind of project that you decide to work on, the kinds of challenges that you choose to take on as an engineer in these spaces?
Jenny: Yeah, absolutely. I’m very lucky at this point in my career to be able to have a lot of control over the projects I take on. And the current market environment is very much a job seeker’s market right now. So I frequently get recruiting emails in my inbox from all sorts of companies I have never heard of. It’s funny how easy it is to eliminate the vast majority of them based on a couple of fairly straightforward principles: I’m not going to work in crypto. I’m not going to work on anything that involves surveillance or advertising. And I’m not going to work on anything gig economy based, because I think gig economy-based tools tend to be primarily a vehicle for subverting labor laws. That doesn’t feel like a very big list. It feels like a very sensible set of criteria to me. And yet it’s remarkable how much that eliminates.
Liz: I guess that gives us a sense of what the job market’s looking like right now in this space.
Zena: You mentioned how when you were first starting out in your career, you took any job you could take, and now that you’ve gained some seniority and some experience, you have, I guess privilege is the right word. You have the privilege of being able to be a bit more careful about the jobs that you choose to take or the projects that you choose to work on. And this is something I’ve definitely experienced in my career. Very much, early on, I don’t think I had the luxury of being as picky, and now that I have a little bit more expertise, I do have that privilege.
My question is around: what advice can we give to young people who are just starting out in their careers so that they’re able to make these decisions that align a little bit more with their values? I think when I was younger, for me it was like I didn’t have anything to bargain with. It’s like I didn’t have the courage to be like, “Oh, I’m not going to take this job” because there weren’t five other job offers on the table. When you’re young and you don’t have a lot of experience and you don’t have a lot of seniority, how are you able to navigate those waters and choose to be a part of things that align more with your values?
Jenny: Yeah, I think that’s a really good question, and I think that’s a really tricky question to answer for yourself, because we do live in this capitalistic environment and there is no way for anyone to be morally pure in this situation. But that doesn’t mean we don’t have agency to decide how we want to contribute to the system. The thing I would think about is two-fold: 1) you have to know what your limits are, and 2) you need to be cognizant of what your alternative options are.
And what I mean by limits is everybody has different lines of things that they absolutely will not do. I would never work for Raytheon, even if Raytheon was my only option. I would rather significantly downgrade my quality of life and take a minimum wage job than contribute to the military-industrial complex. Whatever that line is for you, you have to know it going into the job search, because once you’re at the negotiating table, it becomes really easy to rationalize for yourself, “if I make a compromise here, maybe it’s not that big of a deal.”
The second part of what I said in terms of making sure you know what your alternative options are is it is very rarely the case that a job actually is your only option. I gave the example of the minimum wage job, but I realise that’s a very different proposition in a country like Canada, where we do have some degree of socialized medicine, versus a country like the United States where a minimum wage job isn’t going to give you healthcare. And this is also a very different proposition if you have family or dependents.
But I think sometimes, and I say this because this is how I think about things myself, I was so anxious about having a career, and making sure that career followed the right trajectory, and figuring out what I wanted to do with my life, that it felt like I needed to be making forward progress all of the time. Whereas I would say it doesn’t actually matter if your first job out of university is not something you thought you wanted to do, or is slightly orthogonal to the career path you had set out for yourself. Life is long, and life is weird, and it’s basically impossible to predict the path that your career is going to take or the shape of the economy. When you’re making choices, I would make sure that you’re not just optimizing for some imagined future; you’re also optimizing for how you want to live your life every day.
And the flipside of that question that I would turn a little bit on its head is also: for people like us who have gone through the early, wobbly, uncertain stages of our career and have now more privilege and structural power, what are we doing to make those choices better and easier for the next generation? What are we doing to push the organizations we work for in terms of diversity and equity and inclusion? And what are we doing to make sure that those spaces that we can influence are using their power for good? This could mean if you’re interviewing a candidate and you know that they’re going to ask for less money than they’re worth, can you nudge them in a slightly different direction because you have more power and information in that situation than they do?
But it could also mean being aware of your company’s carbon footprint, being aware of what kind of corporate citizen your organization is, and pushing them to prioritise decolonization and prioritise anti-racism and those sorts of things. Basically, use your power to create more opportunities for good choices for people who are looking for jobs after you. I think the questions we often ask and the advice we often give in these situations tend to be really individualistic, and I don’t think it’s necessarily a fair burden to put on somebody who’s up and coming, who is just trying to figure out how to survive in the industry. It is much more up to people who have benefited from the system already to turn around and say, okay, where can I put up more ladders for people to join me?
Zena: I think this is a really fantastic segue into our next question. Liz and I have both read your blog post “Morals in the Machine”, and we both really enjoyed it. You posted this on your blog phirephoenix.com last year, which we will link in the show notes. Can you share with our listeners what this essay was about, and can you talk a little bit about what inspired it?
Jenny: Yeah, definitely. I would say, broadly speaking, the essay was about my philosophy and feelings about the concept of moral artificial intelligence systems. And it came about because I had been involved in a lot of conversations, both in my professional capacity, but also just based on the communities I was interested in and the nature of the industry at the time around the concept of, “is it possible to build moral AI?”
Because that was around the time when GPT-3 had just been released, which is, depending on how you define large, the world’s largest language processing model. It had gained a lot of press because journalists were using it to showcase, “Oh, you could write an entire article using GPT-3. Humans will be put out of a job.” But it was also around when there had been a lot of reporting, I would say starting really in 2014, 2015, but really reaching a tipping point in the late 2010s, around the harms that were being caused when we delegate decisions to computers.
And governments around the world were starting to look at, “Okay, how can we use computers? How can we use AI to help us do our jobs more quickly, more efficiently and more cheaply?” Which often just ends up being a pathway into austerity measures. But, can we make the welfare system more fair by adding computers to it? Can we make foster care systems less terrible by adding computers to it? And those debates were really important and also really frustrating, because it felt like the conversations that I was having with people who understood the field were very different than the conversations that I was having with my friends who were not computer scientists, or my friends who worked in government, or those sorts of things.
And I really wanted to be able to synthesize the worldview that was embedded in the question, “Can we make moral machines?” Because my feeling on moral machines is “Why?” Why do we want to delegate our decision making to a computer? And if the only reason we want to do that is because we think humans can never be taught to be better, what makes us think that we can teach computers to be better?
That was a thing that had obsessed me and frustrated me for a really long time. And it was one of those things where I just needed to sit down and get it out of my head.
Liz: Yeah, no. That’s totally understandable. And I think it relates to this idea of trust, right? There is a question of, “Well, can we trust humans to get these decisions right?” Okay, maybe not. “Can we create a machine to get these decisions right?” You’re really thinking about what the agents in your system are, and how they are responding to people who, in many of these situations, are vulnerable and do need a trustworthy agent on the other end to respond in an appropriate way. I wanted to get into this concept of trustworthy AI, which I know you’ve spent a little bit of time thinking about. And I’m curious, what does trustworthy AI mean to you?
Jenny: I think trustworthy AI starts from an understandable and noble impulse of having seen the problems with AI that we have seen in the last 10 to 20 years of motion detection systems that can’t detect black skin, and natural language systems that always associate doctors with men and nurses with women, and all these sorts of biased models that came from biased data sets.
I think the trustworthy AI movement set out to say, “Okay, how can we design a framework in which we can have some sort of guarantee and some sort of reason around whether or not a model has been responsibly developed?” I think the principles that often go into trustworthy AI are, what is the provenance of the data? How was it gathered? Who was involved in gathering it? Did the subjects give consent? And then who was developing the model? How was the model tested? Were the affected communities involved in actually building the framework?
I think the reality of trustworthy AI is that it can sometimes be a little bit of ethics washing, because I think it is easy to say, “We have a trustworthy AI working group that has signed off on this, and there’s a bunch of people with PhDs in ethics who have told us what we’re doing is ethically okay, therefore you should pay us millions of dollars and buy this thing.”
But that is a label, the same way that “This building is environmentally certified” is a label. You can slap any number of certifications, any number of descriptions on a thing, and it doesn’t actually mean that you’ve done the due diligence.
And I think it is possible absolutely to build, for example, voice recognition systems that work for all voices and not just male voices, or to build image recognition systems that work for all skin tones and not just lighter skin tones. And I think those things are definitely worthwhile and important. But I also think oftentimes the question that trustworthy AI never asks, and I think the most important question is, “Should this thing be built at all?” And the answer to that oftentimes, I think is “No.” But that’s not going to get you funding.
Liz: Yeah, that’s one of the challenges. The market actually sets us up for the creation of many systems that should never actually be deployed, I think.
Jenny: Yeah, definitely.
Liz: I think that’s a really good question. I’m wondering, is there anything that we haven’t asked that you wanted to explore at all?
Jenny: No, I don’t think so. I think we’ve covered a lot of different things.
Zena: I think we did. You had really great answers to all of our questions. I felt like it wasn’t just a direct answer to the question. I felt like you gave some really thorough and diverse answers and it was really interesting to listen to.
Jenny: Thank you.
Liz: Yeah. No, I really appreciate it. I learned a lot from this conversation. I’m wondering, is there anywhere that our listeners might be able to find you, for example, to read more of your writing or just to connect?
Jenny: All of my sporadic and very occasional writing is on my website, phirephoenix.com. And that’ll also have links out to my Twitter, which is where I spend far too much time. I’m always open to hearing from people and I love talking to people about these sorts of things.
[Edited to note: Jenny can now be found on Mastodon.]
Liz: Awesome. Well thank you so much, Jenny. We really, really appreciate all of the time that you’ve spent with us. We have really enjoyed the conversation.
Zena: It’s been fantastic meeting you and chatting with you.
Jenny: Yeah, you too. Thank you so much for the opportunity.
…
Liz: Thank you for joining us today on the Algorithmic Futures podcast. This is our last episode for the year, but don’t worry: we will be back next year with a new season.
To learn more about the podcast and our guests you can visit our website algorithmicfutures.org. And if you’ve enjoyed this, please like the podcast on Apple Podcasts and share your favourite episodes with others. It really helps us get the word out.
And finally, I can’t help but throw in a shameless plug here: I’ve been working on the next Algorithmic Futures Policy Lab workshop in parallel to this, and we have a great event coming up at the start of December in Paris, France and online, focusing on Human-Machine Collaboration. We have a great program planned, so if you’re interested in that, please go to https://algorithmicfutures.org/hmc22 for more information. And remember: the Algorithmic Futures Policy Lab is made possible with the support of the Erasmus+ Programme of the European Union.
And now for a short disclaimer: This podcast is for your education and enjoyment only. It is not intended to provide advice specific to your situation.
Jenny’s recommended further reading:
- Race After Technology by Ruha Benjamin
- Algorithms of Oppression by Safiya Noble
- Automating Inequality by Virginia Eubanks
- Weapons of Math Destruction by Cathy O’Neil
Feature image by Ana Flávia on Unsplash