
Episode 1: Sputnik, revolutionaries, and algorithmic systems, with Fred Roberts and Alexis Tsoukiàs


In this episode we chat to Fred Roberts, Distinguished Professor of Mathematics at Rutgers University, and Alexis Tsoukiàs, a CNRS research director at LAMSADE, PSL University, Université Paris Dauphine, on how they came to be prominent researchers within their respective fields, complex real-world systems, and how the Social Responsibility of Algorithms workshop series (2022, 2019, 2017) came to be.

Listen and subscribe on Apple Podcasts, iHeartRadio, PocketCasts, Spotify and more.

Fred S. Roberts is a Distinguished Professor of Mathematics at Rutgers University and Director of the Command, Control, and Interoperability Center for Advanced Data Analysis (CCICADA), a US Department of Homeland Security Center of Excellence (COE). For 16 years he directed DIMACS, the Center for Discrete Mathematics and Theoretical Computer Science, which was founded as one of the original US National Science Foundation Science and Technology Centers.  Roberts is author of four books, editor of 25 other books, and author of 200 scientific articles. His recent edited books include the first book on maritime cyber security in 2017, a 2019 book on “Mathematics of Planet Earth,” and a 2021 book on “Resilience in the Digital Age.” His research deals with such topics as meaningfulness in measurement, mathematical social sciences, applications of graph theory, and homeland security, and he has been a leader in the world-wide effort called Mathematics of Planet Earth. Among his awards are the National Science Foundation Science and Technology Centers Pioneer Award and the award of Docteur Honoris Causa by the University of Paris-Dauphine. You can find out more about Fred here.

Alexis Tsoukiàs is a research director at the Centre National de la Recherche Scientifique (CNRS) in the Laboratoire d’Analyse et de Modélisation de Systèmes d’Aide à la Décision (LAMSADE), located at the Université Paris Dauphine. He holds a PhD in Computer Science and Systems Engineering from the Politecnico di Torino and is the author of two books, editor of several books and special issues, and has published more than 90 journal articles. He has a long-standing interest in algorithmic decision theory and was responsible for introducing the concept of policy analytics to the academic world. He has also been actively involved in applying his research to real-world problems. You can find out more about Alexis here and here.

With the support of the Erasmus+ Programme of the European Union

This episode was inspired by work co-host Liz Williams has been doing on the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.

Disclaimers

The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of the contents of the podcast or this webpage, which reflect the views only of the speakers or writers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

All information we present here is purely for your education and enjoyment and should not be taken as advice specific to your situation.

Episode credits

Liz Williams – creator, co-host, producer

Zena Assaad – co-host, producer

Fred Roberts – guest

Alexis Tsoukiàs – guest

Katherine Daniell – assistant producer

Flynn Shaw – background research

Music – Coma-Media from Pixabay

Sputnik news clip – Universal Studios

Sputnik beep – NASA

Episode transcript:

Liz:

Hi everyone, I’m Liz Williams.

Zena:

And I’m Zena Assaad.

And this is the Algorithmic Futures podcast.

Liz:

Join us, as we talk to technology creators, regulators and dreamers from around the world to learn how complex technologies may shape our environment and societies in the years to come.

[musical intro]

Liz:

In today’s episode we talk with the two founders of the Social Responsibility of Algorithms workshop series, Fred Roberts and Alexis Tsoukiàs.

Fred is a Distinguished Professor of Mathematics at Rutgers University.

He is also the Director of the Command, Control, and Interoperability Center for Advanced Data Analysis (CCICADA), a U.S. DHS Center of Excellence (COE). Fred was also the Director of the DIMACS Center, a U.S. National Science Foundation Science and Technology Center, for 16 years.

Zena:

And, Alexis is a research director at the French National Centre for Scientific Research (CNRS) at LAMSADE, PSL University, Université Paris Dauphine. He was Director of the LAMSADE until 2018 and is currently the national coordinator of the Policy Analytics Research Network.

Liz:

We’ve invited Fred and Alexis to join us today because their collaboration has, in many respects, inspired this podcast. I attended an event they organised in Paris back in 2019 — the Social Responsibility of Algorithms workshop — and really enjoyed how the workshop brought people from a wide range of disciplinary backgrounds and career paths to explore what social responsibility means in the context of algorithmic system design, development, deployment, and governance. I enjoyed it so much that I ended up working with them to create the next workshop in the series — Social Responsibility of Algorithms 2022. The Algorithmic Futures Policy Lab event series – and this podcast – came about because of this partnership.

Zena:

In today’s episode we hear about how Fred and Alexis came to be prominent researchers within their respective fields, we explore complex real-world systems, and we get an insight into how the Social Responsibility of Algorithms workshop series came to be.

[musical transition]

Liz:

Oftentimes when people get asked about their professional journeys, or how they began working in their current fields, their stories include points of influence and moments of serendipity that have guided and shaped their paths over the years.

Zena:

We asked Fred and Alexis to each share stories of their past, some of their influences and how they came to be prominent researchers within their respective fields. Alexis explained that, despite working towards being an engineer, he always found himself being drawn towards mathematics.

Alexis:

In reality, I started doing, let’s say, engineering studies. When I was really young, I was convinced that I wanted to become an engineer. Well, what happened is that while I was actually studying engineering, I discovered that it was not what I was interested in. The only classes that motivated me were mathematics. So, I was really considering the idea of quitting and going for, let’s say, mathematics studies.

But it would have made me lose a lot of time, and, perhaps most important, I started following operational research with a very brilliant professor at that time, Anna Ostanello, and that was a huge discovery for me. And this is how I was introduced to decision theory and decision models and mathematics applied to decision making. And that’s it. After that, I did a PhD on these subjects and I’ve been involved in research; the rest followed from that, but it was very much a matter of chance.

Zena:

Alexis talks about being uniquely inspired by the work of Professor Anna Ostanello at the Politecnico di Torino.

Alexis:

You see, most operational research at that time was pure optimization. Instead, Anna had a view of operational research more focused on the problems, not on the methods. So, she was teaching a type of operational research where you start from the problem. And it was that fact, that we were talking more about problems than about methods, that was fascinating.

Now, at the same time, there was the mathematical rigor that was fascinating, because what I really loved was the rigor of mathematics. That was what I was really fascinated with. Now, the fact that I was able to make the connection between a rigorous approach and talking about real problems perhaps was what made me follow in this direction. And I suppose that was very much related to how these subjects have been taught to me as a student.

Liz:

For Fred, the advancements in space exploration in the 1950s were momentous steps forward for our society. Fred was caught up in a national rethinking of the role of science during a time when another nation was the first in space.

Fred:

Well, I was trying to reflect on this. So, I grew up in New York City, and in New York City, there are, at the time I was growing up, there were a handful of specialized high schools that required an entrance exam. These were public high schools, but there was a city-wide competition to enter them. And so, I applied to the Bronx High School of Science, and after the onerous entrance exam was accepted. So, when I was a high school student, I commuted about an hour, one way every day on the subway to go to high school.

So, the Bronx High School of Science, it’s a bit of a misnomer because it was just a very good high school with excellent teachers and not just an emphasis on science. But sometime during my high school career, rather early in fact, we were caught by surprise when the Soviet Union launched Sputnik, the earth-circling satellite.

[Sound of a news report from the time on the Sputnik satellite]

Today a new moon is in the sky: a 23-inch metal sphere placed in orbit by a Russian rocket. Here, an artist’s conception of how the feat was accomplished. A three-stage rocket: number one, the booster, in the class of an intercontinental missile, its weight estimated at 50 tonnes. A smaller second stage took over at 5,000 miles an hour and carried on to the highest point reached. 500 miles up, the artificial moon is boosted to a speed counterbalancing the pull of gravity and released. You are hearing the actual signals transmitted by the earth-circling satellite – one of the great scientific feats of the age.

Fred:

This was quite a dramatic awakening in the United States because we thought, of course, we were ahead in everything. And then all of a sudden the Soviets were able to launch an earth satellite before we did. This started to push young people my age towards science and mathematics. And I don’t recall that there were specific pressures, but I’m sure there were, but certainly many of us were pushed in the direction of science and mathematics. As it turned out, many of us went on and got PhDs in science, math, and engineering.

Liz:

Sputnik clearly left a lasting impression on Fred and his peers. I asked him what it was like hearing the news of the launch for the first time.

Fred:

Well, it was quite exciting if you were interested in what was happening around you. We were glued to the television set over a period of months as the US tried to launch its first earth satellite, and my recollection is that it was a disaster and didn’t work the first time. So, many of us were upset, disappointed, and so on. But it was also fascinating to see this new age where we were going to space, and of course, fast forward a few years and we were going to the moon. And I remember being glued to the television set when we did that. But I do remember also, because I was working as a camp counselor during the summers, that they brought in a television to show the launch of our early earth satellites, because they just wanted all the children to see this was a big deal.

Anyway, fast forward to going to university, I was exposed to a course in mathematical social sciences. And this was a dramatic event in my career. I had been pushed toward mathematics, I was pretty good at it, but this was exciting to me because it wasn’t just mathematics for its own sake. It showed me that mathematics was useful for problems of society. It introduced me to group decision making and voting and things of that sort. And at the same time, they also introduced me to ecological issues. I was hooked. All of a sudden mathematics was not just an abstract subject, it was something that was useful and exciting and so on. And so, I ended up going to graduate school and was interested there in studying the applications of mathematics. I interacted with social scientists, but I also had an awakening interest in environmental problems. I lived in California at the time; you could not go out the door without worrying about what the air condition was, whether it was polluted, the smog was visible, so on and so forth. So, that was an awakening for me.

And when I got my first job, my first real job, working at the RAND Corporation in Southern California, there, it was so obvious that we had a serious pollution problem, because you could not exercise without your lungs bothering you. The air was a real issue. And so I got fascinated by environmental problems. I got involved as an agitator. I helped organize the first Earth Day and things of that sort. And ever since, my interest in my professional life has somewhat paralleled my interest in my personal life. I like to solve problems of society, and that’s the story.

Liz:

As Fred and Alexis progressed in their careers, they were both drawn towards areas of work that had the potential to support complex real-world decision-making. When we asked them why they were drawn to this type of work, each of them reflected on moments of influence from their past and points of serendipity along their journey.

Alexis:

You see, when I was a high school student, this was one of the saddest periods of modern Greek history, because there was a military regime. We were governed by a bunch of colonels, with a regime comparable to the typical authoritarian fascist regimes. Me, as many other people of my age, although we were very young, we were very much involved in, let’s say, political fights against them, which also ended in dramatic moments of Greek history. So, when I was finishing high school, this period had ended, but it left behind a very strong political commitment for the whole generation to which I belong.

And this is something that followed me for a very long time. You see, when I told you before that I didn’t really prepare my national examination for the Greek university, this is because I was leading a students’ committee preparing a revolution. So, I had (laughs) other types of commitments. So, I did that for the whole time of my university studies as well, although this happened in Italy. But once again, in Italy, it was a period of very strong commitment. So, I was once again very much involved with political activities, with the movement of the students, with militant actions, for a bunch of things that have to do with civil rights, with students’ rights, with a lot of things.

In 1984, I was a candidate for the European Parliament. Fortunately, I was not elected, because otherwise I would have been a politician, a full-time politician, and at the same time I realized that I was not really interested in a professional life in politics. But in any case, it left me an interest for this type of problem, for society’s problems, for political problems, and this is something that, let’s say, you can still see behind the type of topics I’m interested in, in the applied part of my research, that has to do with public policy, with social choice and things like that.

Fred:

So again, I can trace a lot of this to my involvement in my first real job at the RAND Corporation in Southern California. The company was a think tank, quote unquote, and we addressed real problems. Originally, it was founded as an offshoot of the US Air Force and dealt with problems of the military. But when I took the job, it was changing in fairly dramatic ways. And it began to think about real-world problems dealing with transportation and with environmental quality and with urban problems, with the growing demand for energy and things of that sort.

And so, I was fascinated by all of those applications. So, we had the New York City RAND Institute that we formed, and we worked on problems such as the growing need for water: would New York City have enough water in the future? And what are all the uses of water, and how do you make decisions that will allow you to conserve water in order to have enough? And that gets you involved with public policy. Should we regulate the kinds of toilets people have? Because some of them flush six gallons of water at a time and others flush one. So, is there a way to regulate new buildings in order to require that all of the toilet tanks be low-water-usage tanks? And that gets you into all kinds of public policy issues.

I got specifically involved with the growing demand for energy, and we put together the first multidisciplinary study of the uses of energy, the sources of energy, the problems of energy use. And we studied, from various points of view, the entire energy spectrum. So, we had a team that included mathematicians, engineers, and social scientists, and we studied for instance, the impact of air pollution or the result of energy use on the quality of air, the potential for regulating traffic and putting in commuter fees in order to reduce the pollution from driving and so on. So, it was very natural to get involved in public policy, decision making and things of that sort.

Liz:

Fred talks about working in a multidisciplinary team, made up of people from a diverse set of backgrounds and expertise. We were curious about his experience working in these teams and asked him to reflect on the benefits and also the complexities of working in this dynamic environment.  

Fred:

Well, we’re still working on how to, nowadays many years later, how to do good multidisciplinary research because it’s very complicated. What I left out as part of my background is that I was very interested in behavioral science as well as some of the other applications that I’ve talked about. And in particular, when I was a graduate student, my research revolved around issues having to do with judgements of preference and indifference and also closeness, noise and measurement of those things. And so I realized that I really needed to be able to speak the language of behavioral science.

So, I went for a postdoc in a psychology department. When I talked about the RAND Corporation being my first real job, my first job was the postdoc at the University of Pennsylvania, where I studied behavioral science and tried to learn some of the language a little bit better than I had as a graduate student and actually hanging around psychologists.

So really, in order to succeed in multidisciplinary activities and multidisciplinary research, you have to be able to step back and understand the language that people are using in different disciplines, and that’s a big challenge for us even today. Finally, those of us who’ve always been interested in multidisciplinarity are finding that this is now the in thing to do. We are facing the same challenges. And that is, how do you actually get enough of a background in somebody else’s discipline and in the way they think and the words they use and so on?

So, we faced that when we were at RAND Corporation, and it was definitely a matter of all of us coming up to speed, and we spent a lot of time coming up to speed on what each of us was working on, each of our backgrounds and what was known.

Zena:

The concept of multidisciplinary teams is quickly becoming a more familiar and common mechanism for diversifying research and work more broadly. We often hear the terms multidisciplinary and interdisciplinary used interchangeably. However, these two terms are actually quite distinct. Alexis provides his insight into how these two concepts differ and why he believes interdisciplinary work is a step towards more diversified work and outputs.

Alexis:

I may add to what Fred said that in reality what you are looking for is making a step from, let’s say, a multidisciplinary perspective to an interdisciplinary perspective, in which you contaminate the others with your vision of a topic, but you allow yourself to be contaminated by how the others see the same topic. So, I also had many, let’s say, real-life experiences on, let’s say, public policy, on supporting public policy, on how to help policy making and things like that, and having to work with policy scientists has been very helpful in evolving my own research.

So, the type of research I continued doing around decision models and very formal topics, about how you represent preferences and how you use them and how you aggregate them and things like that, has certainly been influenced by topics related to policy science, such as, for instance, under which circumstances a recommendation for doing something is legitimated. Not only correct or appropriate, but legitimated with respect to a, let’s say, political context, just to use a very broad term.

Now, if you want to do that, you have to open your mind and your experiences to how these topics are discussed in another discipline. Just to come to the subject that we are interested in, which has to do with algorithms and the responsibility of using them, this is very much contaminated by people working in law, which is obvious, because as soon as you talk about responsibilities and liabilities and things like that, you have to do it with people doing legal studies. But those people have a very different perspective on this topic.

Now, it is interesting to transfer to these people our vision of what an algorithm is, a procedure, a decision and things like that. But it is also extremely important to allow ourselves, as decision scientists or as computer scientists, to be contaminated by the law people on how they see this problem, because they have a completely different perspective with respect to how I see these topics. And I’m sure it will take a long time until we become able to set up a, let’s say, common vision of these issues.

Liz:

But work on algorithmic systems demands more than just interdisciplinarity. In a world where companies like Google, Facebook and Amazon create systems that impact most of the planet, having teams that represent diverse cultural, socioeconomic and geographic backgrounds can be important, too.

Alexis:

There is another aspect which is important. When I started reading about the topic, about what was happening in these areas, there were two things that I noted. One, it was dominated, and it is still the case, it is dominated, by the experience of using these types of devices in the US. Now, this is not bad. I’m not saying it for that. But the society problems that come out in the US are not the same ones that come up in Europe, for instance. To give an example, in the US there is a formal recognition of racial difference. And so there are protected attributes of being Black or Hispanic or whatever. Now, this is something which in Europe is formally forbidden. You cannot even make a statistic about racial difference. So, you cannot talk about discrimination, for instance, in Europe, in these terms, on, let’s say, religious or racial difference, because you are not allowed to consider that it exists as a difference.

This is not to say that it’s not the case. Actually, Europe has many problems that are similar to the US, but it’s not something which is recognizable. Indeed, the discrimination topic in Europe is about wage differences between men and women. It’s not about racial discrimination in job seeking, for instance. There is racial discrimination in job seeking, but this is not what the law considers, because for the law, there is no race. So, there is no such discrimination.

So, that means that in Europe you need a completely different perspective when talking about this argument.

Fred:

And I would pick up on this and say the issue is worldwide, beyond Europe or the US. So, you take, for instance, the United Nations Sustainable Development Goals, one of which is gender equality between men and women, and there you get into a whole wide variety of issues worldwide, whether it’s economic opportunity and participation. So, it’s not just equal wages, but the level and types of jobs that women get and so on. But it’s also political equality. So, it’s things like the number of women in parliament or the number of women who are heads of state and things of that sort. Those are metrics that are now used to compare the gender gap.

And then it’s also access to healthcare and the quality of that care, and it’s educational opportunities and the number of women who are still in school at different ages and so on and so forth. And actually, I think all the data suggest that in terms of education and health, the status of women is improving significantly compared to what it was. Whereas in the first two kinds of things I mentioned, the economic and the political, there are still much more serious gaps. But of course we’re interested in algorithms, and we’re interested in figuring out whether those algorithms lead us to discrimination.

Zena:

The algorithms underpinning automated technologies have been around for a long time. In fact, the concept of an algorithm precedes the establishment of computer science as a discipline by centuries. The term itself dates back to the 12th century, and is linked to the work of Muhammad ibn Musa al-Khwarizmi, a ninth-century Persian scholar, astronomer and mathematician often cited as the “father of algebra”. But the powerful applications we now associate with the word “algorithm” are quite recent developments.

Alexis:

The other thing is that it seemed, and it still seems, that many of the people that are involved in that ignore the fact that automating the decision process is something which is much older than the situation today. It’s not something that happened in the last 10 years. We have had automatic devices since the seventies. The notion of an algorithm is a notion which is anterior to the notion of computer science. So, you don’t need computers to have algorithms.

Fred:

While I agree with Alexis that the idea of the algorithm precedes computer science, it’s the dramatic change in computer power that has allowed us to use algorithms and apply them in so many ways in our lives, whether it’s the decisions we make about hiring somebody, or the decisions we make about giving somebody a loan, or the decisions we make about allowing a prisoner to go free on bail, and so on and so forth.

So algorithms are now used in dramatically important ways. They’re used in terms of policing and in facial recognition of potential criminals. They’re used in commerce dramatically. The financial world and the financial transactions are governed by algorithms. They are used to control our power systems and so on and so forth. And they come into our homes, we’ve all got these smart homes now, and we have smart teapots and we control and monitor our electricity use and so on and so forth.

Algorithms are everywhere, and that’s because of the dramatic increase in computer power. And also because of the change in emphasis on what computer scientists do. We’ve entered the age where applications of algorithms have become the central driving force of new computer science research. And now enters the issue of how fair are those algorithms and are they treating people equally? And should we have a way of telling whether algorithms are being responsible? And that opens up the whole discussion that we’re entering.

Alexis:

I may add a small historical example of this… You see, computing electoral districts. This is a famous problem which is algorithmically solved because it is very complicated. In a country like the US, for instance, it is extremely complicated how you cut the electoral districts. So, it is done using algorithms. But this is something which was discussed before computers. The algorithms of districting have to do with social choice theory before computers. The same with college admission, which is a matching problem. Once again, this is a topic we solve with algorithms because it is very complicated, but it antedates the existence of computers.

Now, something we know from these experiences is that there is no universal way to have a fair algorithm. So, there is no universally fair electoral districting. It depends on what you want to privilege. Now, those were some of the issues that I realized, and realized with Fred, that people were not remembering. They were starting to consider it as if this were an issue that only showed up now. It’s not the case. It is much older. The issue of what happens when you handle a society problem through an algorithm is much older.

Zena:

We asked Fred and Alexis where the idea for the Social Responsibility of Algorithms workshop came from and how they began the series.

Alexis:

So, people are very much aware of issues about fairness, about explanation, et cetera, et cetera. But I thought that we had something to add to this discussion, and this is how the series of workshops started. And this is also how I think we should keep them alive, because there are other people doing excellent research and organizing excellent meetings and workshops. But I think we have a perspective to add to the other perspectives.

Fred:

Well, I think that this has become a really important topic nowadays. Not just nowadays, but starting a while ago. I don’t think we should take the credit for being the ones who invented this, and I don’t think it was the first workshop that dealt with this topic, but it was a natural one for us to say, “Okay, what’s a good topic for the next time we do something?” Maybe Alexis will remember this differently, but I believe it was his idea that we should do this. It’s very clear that as technology has taken over and become so important in our lives, we’ve come to understand the challenges of use and misuse of technology. And it was clearly an important topic that we needed to be involved in.

Alexis:

Yes, well, more or less. Yes, I suggested that to Fred. But what happened is that at a certain point, let’s say, not 10 years ago, but six, seven, eight years ago, I was on the scientific council of CNRS, which is my employer, this very big, let’s say, scientific organization in France. And we had been asked to identify topics that were expected to become challenges for science in the forthcoming years.

And when I was asked to contribute my ideas, one of the topics I suggested, I said, “Something that is becoming more and more important is the impact of, let’s say, the large use of automatic decision devices, or automatic recommendation devices.” And it will become more and more important. So, perhaps we need to have a more scientific, let’s say, position about it. Not only, let’s say, an ethical or a political one. But we need to start thinking scientifically about what these types of challenges mean.

That was accepted. There were some internal discussions. And of course, as I had suggested it, the answer was, “Well, you should take care of it. It’s not only an idea.” One of the first people I discussed it with was Fred, who of course agreed and was extremely supportive of that. And this is how, I think in 2015, 2016, we decided to organize the first workshop at Dauphine, which finally took place in 2017.

Liz:

And finally, we asked Fred and Alexis what the term ‘socially responsible algorithm’ means to them.

Alexis:

(Laughs) This is a funny question. In reality, it is a term that I invented in order to draw attention. Algorithms are not responsible, and the reason is that they’re not liable for what they do. So, you cannot ask an algorithm to pay for what happened after it has been used. The idea is to show that in designing and implementing algorithms, we have a responsibility. It’s not something that happens straightforwardly. There is no universal way through which you can design an algorithm for a certain purpose.

Let me use the example of stable matching, which is what we use in college admissions. It is well known from algorithmics that in the stable matching problem, in which you have men and women who have to be matched with each other, whoever makes the first move, the men or the women, has an advantage in choosing. So, when you implement such an algorithm between students and schools, if you give the first move to the schools, you advantage the schools in choosing the students. If you give the first move to the students, you advantage the students in choosing the schools. This is a political choice or an educational choice.

It is antecedent to the algorithm. Either you go one way or you go the other way. This is well known. So, when we design an algorithm, we have to know what it does and how it does it, and we have to explain to all the potential users, end users, political users, whoever it is, what it means to use it. So, it is a matter of awareness that you are organizing, which is important.
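The matching procedure Alexis describes is usually formalized as the Gale-Shapley stable matching algorithm, whose classic result is exactly the point he makes: the side that proposes (makes the first move) gets its best stable outcome. A minimal sketch, with hypothetical student and school preference lists, shows how swapping the proposing side changes who gets their first choice:

```python
# Sketch of the Gale-Shapley stable matching algorithm, using
# hypothetical preference data. The proposing side obtains its
# best stable outcome, so the choice of who proposes is the
# political/educational choice made before the algorithm runs.

def gale_shapley(proposer_prefs, reviewer_prefs):
    """Return a stable matching {proposer: reviewer}.

    Each argument maps an agent to a list of the other side's
    agents, ordered from most to least preferred.
    """
    # rank[r][p] = position of proposer p in reviewer r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                    # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}   # index of next reviewer to try
    engaged = {}                                   # reviewer -> current proposer

    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]      # best reviewer not yet tried
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                         # reviewer was unmatched
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])                # reviewer trades up
            engaged[r] = p
        else:
            free.append(p)                         # rejected; will try next choice
    return {p: r for r, p in engaged.items()}

# Hypothetical preferences for two students and two schools.
students = {"s1": ["A", "B"], "s2": ["B", "A"]}
schools = {"A": ["s2", "s1"], "B": ["s1", "s2"]}

# Students propose: every student gets their first-choice school.
print(gale_shapley(students, schools))
# Schools propose: every school gets its first-choice student instead.
print(gale_shapley(schools, students))
```

With these preferences, student-proposing yields s1→A and s2→B (each student's top choice), while school-proposing yields A→s2 and B→s1 (each school's top choice), so both matchings are stable but favor opposite sides.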

The other aspect is that there are algorithms where we don’t know exactly what they do. And this is a matter of research for people like us who do computer science, algorithmics, decision models, and things like that. It is a matter of research: understanding, for a new type of procedure, algorithm, or protocol we want to implement, how it is axiomatically characterized.

So, we know what it does, which properties it respects and which properties it does not respect. Because it will not respect everything, by definition. So, there is a topic of awareness for the large community. And there is a matter of research for the specific community of computer scientists and decision analysts, which is understanding our own tools: what these tools do, can do, and should not do for a given purpose.

Fred:

So, I guess I would take a slightly different point of view here. To me, social responsibility of algorithms begins with the way we educate anyone who works on algorithms, and I guess that begins with the very first introductory computer science or engineering courses. And it tells me that the emphasis on whether an algorithm is efficient, effective, speedy, accurate, user-friendly and so on is only one side of the story.

And from the very beginning, we need to consider other things when we design our algorithms and work on applications. We need to consider, as Alexis has said, transparency and explainability in understandable terms, so that non-technical people can grasp what the algorithm is about and what it’s doing. We need to find a way to protect individuals’ privacy, to the extent that we and the algorithms can. We need to ensure that individual freedoms are protected, so that governments and private companies cannot surveil individuals extensively without appropriate regulations and requirements.

And we need to be aware of unintended consequences of our algorithms and constantly rethink them and use the opportunity to study those unintended consequences when they occur to see if we can learn about how to make algorithms better and more responsible. So, to me, it begins with education. And it also, I think, underscores one of the reasons we’re doing these workshops and that is to get more young people involved in understanding that these are the critical issues about algorithms that have not been emphasized enough, and we need to find a way to get people to think about them.

Liz:

Thank you for joining us today on the Algorithmic Futures podcast. To learn more about the podcast, the Social Responsibility of Algorithms workshop series and our guests you can visit our website algorithmicfutures.org. And if you’ve enjoyed this episode, please like and share with others.

Now, to end with a couple of disclaimers.

All information we present here is for your education and enjoyment and should not be taken as advice specific to your situation.

This podcast episode was created in support of the Algorithmic Futures Policy Lab – a collaboration between the Australian National University School of Cybernetics, ANU Fenner School of Environment and Society, ANU Centre for European Studies, CNRS Lamsade and DIMACS at Rutgers University. The Algorithmic Futures Policy Lab receives the support of the Erasmus+ Programme of the European Union. The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of this podcast episode’s contents, which reflect the views only of the speakers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.