
S01E06: Could COVID-19 influence our technological futures?

In this episode, we explore the way technology scales in times of rapid change – such as the time in which we currently live.

We’ll use a single piece of technology to shape our exploration: the South Australian Home Quarantine app. This was rolled out as a trial to enable home quarantine during the COVID-19 pandemic. The app was the focus of intense debate in Australia, and drew attention from commentators in the US, because it used facial recognition combined with GPS data to monitor participants in the program.

We have a fantastic line-up of guests to help us explore this issue, including Peter Wells, Angela Webster, Diego Silva, Gavin Smith, Mark Andrejevic and Lizzie O’Shea.

This episode was created by Amir Asadi, Ned Cooper, Memunat Ibrahim and Lorenn Ruster, who are part of the ANU School of Cybernetics 2021 PhD Cohort. Memunat and Lorenn narrate the story.

Listen and subscribe on Apple Podcasts, iHeartRadio, PocketCasts, Spotify and more. Five-star ratings and positive reviews on Apple Podcasts help us get the word out, so if you enjoy this episode, please share it with others and consider leaving us a rating!

With the support of the Erasmus+ Programme of the European Union

This episode was developed in support of the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.

Disclaimers

The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of the contents of the podcast or this webpage, which reflect the views only of the speakers or writers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

All information we present here is purely for your education and enjoyment and should not be taken as advice specific to your situation.

Episode Credits:

Hosts:

Memunat Ibrahim

Lorenn Ruster

Liz Williams

Zena Assaad

Guests:

Peter Wells

Angela Webster

Diego Silva

Gavin Smith

Mark Andrejevic

Lizzie O’Shea

Producers:

Amir Asadi

Ned Cooper

Memunat Ibrahim

Lorenn Ruster

Liz Williams

Episode Transcript:

Liz: Hi everyone, I’m Liz Williams.

Zena: And I’m Zena Assaad.

And this is the Algorithmic Futures podcast.

Liz: Join us, as we talk to technology creators, regulators, and dreamers from around the world to learn how complex technologies may shape our environment and societies in the years to come.

Zena: Today’s episode is a special one — we’re going to explore the way technology scales in times of rapid change – such as the time in which we currently live.

We’ll use a single piece of technology to shape our exploration: the South Australian Home Quarantine app. This was rolled out as a trial to enable home quarantine during the COVID-19 pandemic. The app was the focus of intense debate in Australia, and drew attention from commentators in the US, because it used facial recognition combined with GPS data to monitor participants in the program.

Liz: This episode has been put together by Memunat Ibrahim, Lorenn Ruster, Ned Cooper and Amir Asadi, all of whom are members of the 2021 ANU School of Cybernetics PhD cohort. They first joined the School as master’s students in February 2020 – just before the first COVID-19 lockdown in Australia.

Memunat and Lorenn are going to narrate this story, which begins in a classroom in 2020. Mem is going to take us there first.

Memunat: It all began in March 2020. We were just a few weeks into our Master of Applied Cybernetics at the Australian National University when we received notice that our studies would be entirely virtual, because of a new virus spreading across the world. We were studying emerging technologies and how they scale, and we were watching a series of global experiments involving technology unfold right in front of us. One of them was moving our course entirely online. Around a year into the pandemic, not long after we started our PhD program, we started following another experiment—one in which governments started using facial recognition technology to help manage quarantine.

Lorenn: First, an app using facial recognition technology was introduced in Western Australia at the end of 2020 so that those returning to WA – Western Australia – from other states within Australia could manage their quarantine period at home. By July 2021, the National Cabinet of Australia had agreed to commence pilot programs using facial recognition technology for those returning to Australia from overseas. SA – South Australia – was announced as the first trial location of the “Home Quarantine App”, intended to improve conditions for people returning to SA from overseas, increase the state’s quarantine capacity and reduce the cost of quarantine.

Memunat: Participating in the trial was relatively straightforward. You would quarantine at home instead of a hotel. When the app pinged, you had to show your face to your phone camera within 15 minutes. The app would verify that you were, in fact, quarantining at home through a combination of facial recognition technology and geolocation. If you failed to complete these steps, the police would be notified to check on you in person.
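To make that flow concrete, here is a minimal sketch of the check-in logic as we understand it from public descriptions of the trial. Everything in it – the function names, the similarity threshold, the geofence radius – is a hypothetical illustration, not the app’s actual (unpublished) implementation.

```python
# Hypothetical sketch of the Home Quarantine App check-in flow, based only on
# public descriptions of the trial. The names, thresholds and escalation step
# are illustrative assumptions, not the real implementation.
from dataclasses import dataclass

CHECK_IN_WINDOW_MINUTES = 15   # participants had 15 minutes to respond
FACE_MATCH_THRESHOLD = 0.8     # assumed one-to-one similarity cut-off
HOME_RADIUS_METRES = 50        # assumed geofence around the quarantine address

@dataclass
class CheckIn:
    selfie_similarity: float      # match score against the enrolment photo
    distance_from_home_m: float   # from the phone's GPS fix
    minutes_to_respond: float     # time between the ping and the selfie

def check_in_passes(check_in: CheckIn) -> bool:
    """True only if the timing, face and location checks all succeed."""
    return (
        check_in.minutes_to_respond <= CHECK_IN_WINDOW_MINUTES
        and check_in.selfie_similarity >= FACE_MATCH_THRESHOLD
        and check_in.distance_from_home_m <= HOME_RADIUS_METRES
    )

def handle_check_in(check_in: CheckIn) -> str:
    # A failed or missed check escalates to an in-person police visit.
    return "verified" if check_in_passes(check_in) else "notify_police"
```

What the sketch makes visible is how little the automated step decides on – a timer, a similarity score and a distance – with every failure handed off to police.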

The problem with this is that introducing facial recognition technology for this purpose has implications beyond the life of the trial. We believe these implications have not been adequately explored.

Lorenn: Little by little, facial recognition technology is becoming mainstream… we are currently witnessing the transition of this technology from niche to ‘new normal’. And what do we, collectively, really think about that?

Memunat: Much of the commentary to date has focused on the technical elements of facial recognition technology. But to understand its impacts requires interrogation of more than the technology itself. There is a range of social and cultural factors that also need time and attention. And through this episode, we intend to shed light on some of the social, cultural, and technical factors that are playing out right now as we consider the place of facial recognition technology in our society.

Lorenn: To do this, we’ll be drawing on a framework called the multi-level perspective…

So, what is the multi-level perspective?

The Multi-Level Perspective, or MLP, has its foundations in the concept of ‘technological regimes’ from evolutionary economics. The technological regime refers to the beliefs and design practices that guide innovators to develop technologies. In 2002, Frank Geels extended the concept of technological regimes to ‘socio-technical regimes’, incorporating ideas from sociology to analyse and understand technological transitions. He introduced the MLP as an analytical framework that bridges science and technology studies and evolutionary economics to understand the long-term co-evolution of technologies and their social environments.

Memunat: To better understand how to apply the Multi-Level Perspective to interrogate the transition taking place with respect to facial recognition technology, we invited one of the leading experts in technology transition studies to join us on this episode of the podcast. His name is Dr. Peter Wells.

Lorenn: Peter is a Professor of Business and Sustainability at Cardiff University. His research interests mainly revolve around the global automotive industry, socio-technical transitions, business models, and cultures of automobility and sustainability. In 2020, Peter published a paper titled “A socio-technical transitions perspective for assessing future sustainability following the COVID-19 pandemic”, in which he explored the Covid-19 pandemic through the lens of the Multi-Level Perspective and suggested potential scenarios for a post-Covid world as a starting point for policy discussions and social engagement.

We asked Peter how he thinks about the MLP:

Peter: For me the MLP, essentially, is just a way of understanding the world. The world is a complex, dynamic place, there’s lots going on, and the MLP is a way of putting that into a simple shape that then allows us to make more sense of what we see going on. And in fact, most science is like this if you think about it: when you do a scientific experiment in the traditional sense, what you try to do is exclude all the other variables and focus in on the variable you’re interested in, and then you measure it and you draw conclusions from that. And in a way the MLP is the same; we see all these things going on, we want to try to both capture some of that complexity, but also to remove some of the noise so we just see the signal. In other words, we just try to identify what we think are the causal factors behind the changes that we can see happening.

Memunat: The MLP consists of three levels – the landscape level, the regime level, and the niche level.

Let’s dive into the middle level first – the regime – and hear from Peter on it:

Peter: The middle level, the regime, is really the world as we know it. It’s what we see going on around us day to day, and it’s very much what we assume will happen day to day as well. And that’s not just about technologies: it’s about everyday life, it’s about behaviors, it’s about cultural attitudes, it’s about how we buy and use things. It’s all of those kinds of aspects bundled into everyday life.

Memunat: The next level in MLP that we will consider is the macro-level or the landscape level. This is where the pandemic fits in.

Peter: Above that we’ve got the landscape level; traditionally this is thought of as being the framing level for all the activities within the regime. In other words, at the landscape level you’ve got the structures and the big events that make a difference to our world. And the nature of landscape change therefore can either be slow and gradual, or it can be sudden and quite dramatic. And quite recently, of course especially with the pandemic, there’s been an interest in understanding these kinds of landscape-level events, these big events that make a difference to lots of people in lots of parts of the world. But equally, landscape events can be more localized, more system-level events, which impact upon only one particular area. So there’s this kind of combination of things going on.

Memunat: And finally, as Peter explains, we have the niche level:

Peter: And then the niche level, you’ve got this idea of innovation and things going on, new things happening. And within these niches they are fairly small scale, they may be geographically limited, they may be simply small scale in terms of the kind of economic value of the activities, or they may be small scale in terms of being kind of distinct social or cultural practices. And those kind of niche kind of level events, they’re going on all the time, lots of little things are happening. Sometimes those niches become more significant and develop and expand, and that’s where we see a dynamic relationship between the established regime, the established way of the world, and these new ideas, these new technologies, new practices, new ways of being which come along and begin to maybe displace that regime. So the MLP therefore is a way of thinking about all of those interactions, condensing it down to some key elements, and therefore being able to kind of make sense of what we see around us.

Memunat: These levels dynamically organize and reorganize all of the time… A transition occurs when some set of conditions emerge. In Frank Geels’ words, “a technological transition occurs when there are major technological transformations in the way societal functions are fulfilled”. A transition describes the phenomenon where something happening in the niche enters into the regime level—in other words, it is normalised in our everyday life.

However, technology transitions are not limited to technological changes — they also include changes in markets, user practices, regulations, culture, infrastructure and so on.

Generally, transitions take a huge amount of time – decades or more. But certain shocks, like a global pandemic, can propel these transitions over much shorter periods — in transition studies these are called “accelerated transitions”.

We asked Peter about the difference between normal and accelerated transitions.

Peter: Broadly speaking if you look at that normal transition we’ll see that the pace of change unfolds over probably decades. And that’s true for most systems most of the time, partly because our systems are so heavily embedded in terms of technologies, in terms of capital investments, and in terms of things like behaviour.

Now when you have those accelerated transitions I think what we mean by that is that all of the changes that we expect to see in the transition occur in a much more compressed period of time. And that means that the technology uptake is more rapid. It means that the market dimensions are growing much more quickly. It means that the government is changing regulatory conditions and governance structures more quickly. It means that society as a whole is seeing bigger cultural changes more quickly, people’s behavior changes quite radically. All of those things come together, and so instead of taking decades you’re talking maybe a few years, maybe even less for some significant changes.

Memunat: Peter also provides some insight on the impacts of landscape-level pressures—such as the pandemic—on transition pathways.

Peter: One of the issues when you’re thinking about the MLP, and then you’re thinking about transitions, and change in society is to think about when you have those big landscape events, not only how significant are they at the time, but how enduring will they be and how far do they shift the important structures in society, important attitudes, cultural beliefs, consumption patterns, economic value, and so on?

One of the features that’s historically tended to happen is that when we have these major societal events then important changes occur to deal with that event, but then remain in place thereafter. And the example of the Napoleonic wars is one of those cases. So in order to fund the war between England and France the government introduced this idea of income tax, which we’d never had before. It was supposed to be “temporary”; quite a long time later I’m still paying income tax. I don’t think we’re at war with France anymore although I’d have to check. But you can see the issue here that changes become embedded in that way.

Sometimes these shocks are temporary, that kind of boing and everything bounces back to the level it was at, and sometimes these changes get embedded, they become part of everyday life, part of the structure of society, and economy and so on, and then we’re kind of stuck with it. And I think this is what people worry about with AI, they think okay, well once you’ve had that initial use you’ve got this idea that somehow, depending on the circumstances, it’s acceptable. And then what happens of course is the circumstances get widened out. Oh, we need it for this, oh, we need it for that, oh, we need it for the other.

Memunat: Peter also discusses his perspective on the influence of the pandemic, specifically, on regimes and systems.

Peter: As an event within the MLP it’s relatively unusual in being both kind of global in coverage and short, sharp shock in terms of the time duration of the event. You can compare that with say global warming, climate change — that is indeed global in its impact but it’s taking a very long period of time to unfold around us. And equally events such as military conflicts or famines within a region, those sorts of events are relatively short, sharp shocks, but localized.

So we can see in that respect that this kind of major conflictual event had an impact on many systems, the energy system, the mobility system, housing system, education system, health system, all these distinct regime systems impacted by this major event, and the continuation of that impact over a very long period of time. So I think there’s something to be said for the idea that those more short duration, high impact events lead not only to an acceleration of the pace of change, but to a greater embeddedness of that change. Because there’s an element of kind of sweeping away the old and bringing in the new, and then we get to a new normal that is different from the old normal shall we say. And in between there is that kind of major event.

Lorenn: Since the pandemic began, many countries around the world have introduced technologies to address different challenges related to contact tracing, epidemic management, compliance with social distancing, and quarantine management. This included technology deployments in South Korea, Taiwan, Poland, and some states in the US to manage quarantine in various ways prior to the introduction of the Home Quarantine App in South Australia.

Memunat: We invited Professor Angela Webster and Dr. Diego Silva to share their thoughts about the application of different technologies to manage public health functions. Angela and Diego are both from the Sydney School of Public Health, at the University of Sydney. Angela is a Clinical Epidemiologist, Nephrologist and Transplant Physician. Diego is a Senior Lecturer in Bioethics, and he works with the World Health Organization on various public health ethics topics on an ad hoc basis.

Lorenn: Angela reflects on how facial recognition technology came to be part of the public health response during the COVID-19 pandemic in Australia:

Angela: With the pandemic, it came quickly and it became very serious very quickly. I think in health we often borrow from other fields to apply things that have worked somewhere else.

Digital surveillance and digital technologies have been much used in security and in border forces and in other kinds of ways. I think because of that, some of the technology was already developed. It meant nobody needed to start from scratch.

That’s why it was relatively easy to borrow and move across some of the technologies used in other fields and start to consider how they might be used in health, which in some ways is a huge boon for health, because it involves some cross-disciplinary collaboration and some sharing of knowledge and resources, and that meets the cost effective and scalable kind of goals really of any public health intervention.

So it’s easy to see why it was appealing and why the uptake of it was quick and fairly varied and widespread across the world, not just in Australia.

In health, we’re always concerned with value and efficiency, and so doing things in a cost-effective way is really important. Obviously digital health is a huge growth area across all kinds of public health and healthcare delivery contexts, because it offers the promise of a scalable, fairly low-cost intervention reaching many people. I think the idea of digital surveillance in public health, particularly in the pandemic can be viewed in that context of a way of doing something that needs to be done efficiently, cheaply, and scalably.

Lorenn: While facial recognition technology may have arrived quickly as part of the public health response to COVID-19, surveillance did not arrive in a vacuum.

Gavin Smith, an Associate Professor in the School of Sociology at the Australian National University, and Mark Andrejevic, a Professor in the School of Media, Film, and Journalism at Monash University, are both Chief Investigators on an Australian Research Council Discovery Project entitled “When your face is your ID: Public responses to automated facial recognition”.

The project studies issues raised by the emerging use of facial recognition technology in public and commercial spaces in Australia. Gavin started interrogating surveillance technology during his PhD.

Gavin: I started off looking, in my PhD at CCTV, generation one, like the dumb CCTV cameras that have to be operated by people. And increasingly what Mark and I and the team are looking at and really interested in is how these CCTV technologies are becoming smarter, the discourse says, and becoming more automated in their operation.

Memunat: Mark has also interrogated surveillance technology for many years, with a focus on everyday attitudes towards surveillance. This started with reality TV in the 1990s and 2000s.

Mark: I was really interested in the way in which reality TV, which is now faded into the background or become part of the regular programming landscape was really a way of repositioning how surveillance functioned socially. And so, I interviewed producers of reality shows and cast members of reality shows about how they thought about what the experience of being watched all the time meant and what it meant to them.

Lorenn: Mark gave us an overview of the history of surveillance technology in Australia leading up to the pandemic.

Mark: The pandemic took place against the background of what might be described as a backlash against surveillance tech. Thinking back to the ’90s, which is really when the widespread deployment of interactivity as a means of gathering information about people took place, largely in online contexts. It’s such a different time and context in terms of thinking about the technology. It was a very optimistic moment about the technology. Interactivity was equated with participation.

This is prior to the consolidation of the economic model that’s now become familiar to us. Fast forward to the period right before the pandemic and we have blockbuster academic books on surveillance capitalism. We have the backlash against Cambridge Analytica and Facebook for the role that they played in the 2016 US election and the Brexit campaign.

We have this wariness about this culture of surveillance, which is what the interactive environment has morphed into – large corporate players capturing huge amounts of data about people.

Against that background, one of the things that happens when the pandemic comes along, of course, because of the moment that we’re living in – this is the first pandemic of the interactive, smartphone-appified era.

One of the first responses is going to be: oh, is there an app for that? How can we manage the pandemic using technology? And so, we get these contact tracing apps, and later on, we get a whole host of technologies for capturing information that can be used to monitor contagion and control spread, things like monitoring whether people are wearing masks, QR code check-ins, even facial recognition apps that are used to monitor whether people are adhering to quarantine restrictions, the use of facial recognition in the workplace to make sure people are social distancing.

So, you’ve got a whole range of digital monitoring technologies that come into play.

Memunat: In the first few months of the pandemic in Australia, the Federal Government released a contact tracing app for mobiles called COVIDSafe. Almost a quarter of Australia’s population had downloaded the app the day after its release. But Mark reminds us that this was followed by widespread interrogation of the app and its privacy implications. Over the course of the pandemic, the level of vigilance has waned while surveillance technology continues to be deployed.

Mark: If you remember when the contact tracing app came out, there was a lot of coverage about what’s going to happen to this data. How’s it being controlled? Is law enforcement going to be able to get hold of it? So, this type of response reflects a reflexive concern about how digital technology might be used to gather information about us and track us.

If you remember that moment, all the media coverage, people were reverse-engineering the app. They wanted the source code released. Now, of course, we live in this moment where very often, we’re asked to check in to the environments that we go to using QR codes. And that’s received a little bit less interrogation. 

Where’s that data going? Who’s got it? Who’s got access to it? Which companies are managing this and what are they doing with the data? I haven’t seen as much conversation about that as I did in the initial concern around the contact tracing app. So, this is probably not a completely unfamiliar trajectory that a monitoring technology comes in, and then it starts to get normalized. And then, it just becomes part of our everyday lives.

Lorenn: The increasing normalization of surveillance technology in Australia during the pandemic raises the question: what’s next?

Here, it is worth clarifying the different ways that facial recognition technology can be deployed: ‘one-to-one’ and ‘one-to-many’. One-to-one recognition checks an image of someone’s face against a single stored image of that person to determine if they are the same person. For example, many smartphones use ‘one-to-one’ facial recognition to unlock the phone. One-to-many recognition, on the other hand, checks an image against a database of images of many different people.
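To illustrate the distinction, here is a minimal sketch over face ‘embeddings’ – the numeric vectors a recognition model derives from a face image. The toy vectors, the 0.9 threshold and the helper functions are our assumptions for illustration; real systems use learned, high-dimensional embeddings.

```python
# Illustrative sketch of one-to-one vs one-to-many face matching over
# "embeddings" (numeric vectors derived from face images). The tiny vectors
# and the 0.9 threshold are toy assumptions, not a real system's values.
import math
from typing import Optional

THRESHOLD = 0.9  # assumed similarity cut-off for declaring a match

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def verify_one_to_one(probe: list, enrolled: list) -> bool:
    """One-to-one: is this face the same person as one enrolled image?"""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def identify_one_to_many(probe: list, database: dict) -> Optional[str]:
    """One-to-many: who, out of everyone in the database, is this face?"""
    best_name, best_score = None, THRESHOLD
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

The privacy stakes follow from the shape of the code: one-to-one verification only ever answers “is this the enrolled person?”, whereas one-to-many identification requires holding – and scanning – a database of many people’s faces.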

The SA Home Quarantine App used ‘one-to-one’ facial recognition. However, Gavin gave us an overview of the potential future pathways for facial recognition technology as it starts to become normalised.

Gavin: I think where things change and there’s a big politics to emerge is when that facial recognition becomes woven into public space. And it becomes a one-to-many form of recognition where individuals are walking around city streets or wherever it might be. And the system is detecting their face, identifying them, and being able to track them through their face as they move around and as they conduct their activities. 

And where that surveillance becomes inferential, where the people using it are looking to make judgments about your emotional state, about your behavioral orientation based on the contours, the geometry of your face, how your gait is being analyzed as you’re walking around, what you’re wearing, where you are, all of these kinds of things. I think that’s where the stakes go a bit higher. 

So, we’re really interested in how new capacities become embedded into these systems and then rolled out as old systems become obsolete.

Mark: These are very speculative technologies. They tend to gravitate around the face, although not exclusively. But the face historically, obviously, and socially is an interface that we use to learn things about people. We see the face, not just as a way of recognizing individuals, but as a way of communicating and understanding them.

And the model of facial recognition is an automated technology that piggybacks on that.

[Musical interlude]

Memunat: The futures of surveillance described by Mark and Gavin speak to the increasingly invasive use of facial recognition technology in Australia. But as Lizzie O’Shea notes, the regulatory framework is still playing catch-up to what we have today, particularly in an Australian context. Lizzie is a lawyer, writer, broadcaster and founder of Digital Rights Watch. She has worked on many significant cases advancing human rights and social justice in relation to technology in Australia.

Lizzie: So the first thing I would say about the Australian context is that we do not have a set of human rights that are enforceable as a part of our legal culture at a federal level. And that’s distinct because most of the other liberal democracies that we might identify with, places like the United States, countries in Europe do have rights regimes that are enforceable. And Australia sits alone in that regard, it’s the only Western liberal democracy or liberal democracy without a Bill of Rights that’s enforceable.

We did have a bill proposed to regulate the use of facial recognition technology. It was proposed by the Department of Home Affairs. It was supposed to be a data sharing arrangement. So I’m in Victoria; VicRoads being the holder of a large amount of biometric information in the form of photos of people’s faces was due to share this information federally so that law enforcement agencies could exchange this and add into the pool of knowledge, training and making use of facial recognition technology for law enforcement purposes.

That bill that was designed to govern that specific use of facial recognition technology was actually rejected by the relevant parliamentary committee for having insufficient safeguards for people’s privacy. And we haven’t seen it tabled again.

So there’s not even any basic regulation of how this information, this processing, this technology might be being used.

And that can mean that we become a place in which this tends to be experimented with and used in lots of different settings. And that then dictates the culture of how it’s used more generally outside of those contexts as well.

Lorenn: The context outlined by Mark, Gavin, and Lizzie signals an increasing normalization of surveillance technology and a lack of regulation in Australia. But what are the benefits of facial recognition technology for public health? We asked Angela and Diego what practitioners must consider for any public health intervention. Angela noted that digitization in health can free up human resources for tasks that might require higher levels of care, but there are always trade-offs to consider.

Angela: I suppose with any public health intervention, there’s always a balance and a trade-off between the public good at a population level and individual good and where the line is drawn between an individual’s values and preferences versus a population’s ability to protect the more vulnerable or not.

In health, often we grab the nearest thing with the right intentions, but the legacy it leaves or what it evolves into can be less aligned with the initial intentions and become, in some cases, a misuse.

Memunat: Diego gave us his perspective on the factors that public health practitioners must keep in balance.

Diego: From a public health perspective, we need to think about liberty, we need to think about the common good, and we need to think about equity and we need to think about justice. You can’t think about liberty and not think about equality and justice, the same way that you can’t think about the common good and not think about liberty. We need to stop thinking that these are sort of oppositional ideas, and recognize that there are points of tension, absolutely.

But we have to acknowledge that these things, the only way they make sense at a conceptual level is to think of them at the same time.

Angela: What we always get caught up in doing is trying to approach this with equality, which is giving everyone the same thing, and that’s never going to work, because people need different things. So there’s always this tension playing out. It’s very clear that the pandemic impacted people differently, and not just in the catching of the infection – the pathology of COVID played out differently in different populations. It’s very clear that the public health measures that were put in place around the world also impacted people differently and extremely unfairly; there was not equity. It’s always a massive tension about what you try to do to address that.

[musical interlude]

Memunat: To unpack how the transition to widespread use of facial recognition technology is playing out – in the context of the COVID pandemic and beyond – we turn to some concepts from the Multi-Level Perspective. The MLP speaks of technological transitions not only in terms of the technology but also in terms of sub-regimes. Sub-regimes refer to changes in cultural practices, in government policies, in industrial activity and in markets.

Lorenn: We are going to start with the culture sub-regime. This revolves around the ways users perceive and interact with facial recognition technology and the cultural practices related to using this technology. We asked our guests what they thought may have contributed to compliance with public health orders, and the general attitude to facial recognition technology in Australia.

Diego and Angela gave their thoughts about what contributed to the public’s compliance with COVID-related public health orders.

Diego: So I think that in terms of why people complied, so I think the first thing to note is that the vast majority, like the really, really, really, really large majority of individuals complied across the world with public health orders or public health directives when it comes to pandemics.

We actually see that in terms of, say, contact tracing and surveillance generally with infectious diseases besides COVID, in non-pandemic situations. People want to comply because they’re genuinely scared for themselves and their families. I think most people comply because they care, right. They think about themselves, they think about their families. They’re concerned about spreading the virus in the community. So I think most people genuinely comply because they want to do the right thing.

Angela: If there’s no trust built between society generally and the public health programs, then things tend to be much more badly received.

In Australia, particularly, we did have very good trust at least initially.

Diego: You don’t build trust during an emergency, trust needs to preexist.

Memunat: While trust and benevolence may have played some role in compliance with public health orders, according to Mark, public safety and security are often used to justify the use of facial recognition technology.

Mark: We did a national survey a couple of years ago on public attitudes towards facial recognition. And it’s probably worth contextualizing by saying that for most people, it probably remains pretty abstract as a question.

What we found is that when it’s framed in terms of security and public safety, there tends to be relatively solid support in Australia. But the interesting thing is the results tend to be really contradictory. So, people will say that they would support an application that enhanced public safety, public security. If it’s couched in terms of protection against terrorism, you get even stronger support. But at the same time, they identify the one-to-many use of facial recognition in public spaces as an invasion of privacy.

Memunat: Gavin pondered where the public get information about facial recognition technology, and how that may impact acceptance of the technology.

Gavin: I think it’s interesting to ask the question, well, where is the public getting their knowledge from? That’s a really important sociological question. Where is this body of knowledge coming from? Mainly, when people think about facial recognition, they’re remembering what they’ve seen in a film, some cultural representation of it – as Mark said, some futuristic CSI thing or Minority Report thing, whatever it might be.

The state obviously drip-feeds them information about why we need facial recognition and how it’s being used. And that’s always framed in very positive terms, always in reductionist, simplified terms, as it being about us against them. It’s going to improve convenience. It’s going to speed things up. It’s going to manage and govern those problem individuals.

Lorenn: Mark and Lizzie similarly discussed how public debate impacts the normalization, or otherwise, of facial recognition technology.

Mark: I think here in Australia, there’s a certain wariness around actually entering into the public debate. There’s a sense, because there are Police Departments that are using facial recognition technology already, and they’re not publicizing that use. They’re not triggering a public debate about it. They don’t really want a public debate about it because they know that in other places, there’s been relatively strong opposition because it sounds like quite an intrusive technology. 

So, one thing I think that we really need to watch out for in Australia is what might be described as the creeping implementation of facial recognition technology without widespread public discussion, deliberation or education about how it’s being used and what the consequences are. 

The other thing that’s happening in Australia that’s worth keeping an eye on is the use of facial recognition technology for access to government services, the government’s trusted digital ID program. And that’s going to be a way to normalize the tech.

Lizzie: I do think public health is something of a stalking horse to erode digital rights, but human rights more generally, if we are thinking about digital rights as being a subspecies of human rights. It’s a very powerful rhetoric to say that it’s necessary to stop the spread of a pandemic that we have to use this kind of technology. It’s very difficult to contest that because you don’t want to be perceived as being opposed to these public health measures.

Lorenn: Peter helps us to make sense of these forces of transition and normalization from the multi-level perspective.

Peter: When you have an event like a pandemic, one of the things it does is shift the perspectives on legitimacy, and particularly I think for the state, because the state is the mechanism by which society responds to a crisis. So this is why I think this happened in Australia; there’s been enough of a consensus around the significance of the moment to allow the state to deploy these technologies legitimately. They have the moral legitimacy to do it, in other words. That’s not an open checkbook – it is not necessarily forever – but it does mean that once that initial deployment has been accepted and you’ve got that sort of first case example being used, it’s easier to carry that into other areas. And say look, it helped us on this, it can help us on that. And that may be a change in the way that Australia functions.

We saw this in a way with the deployment of non-AI cameras; the UK as a society is totally cameraed. You go through any normal day and you’ll probably be on film or on video 17, 20, 30 times, with different people recording you. To the point where people have it in their homes, offices, shops, government places, all over the place. So this notion of legitimacy I think is key to understanding the relationship between the technologies and the applications, and then social acceptance.

Memunat: The COVID pandemic and public health responses have expanded the opportunity for the deployment of surveillance technology. We asked Angela and Diego what public health’s responsibilities are in terms of scaling and decommissioning such technologies.

Diego: I think to the question of responsibility, what public health can no longer be is ignorant of the fact that public health is fundamentally political. We might disagree whether it’s intrinsically political, that’s a debate, but in practice, it is tied to politics.

So when we think about questions of responsibility, and we think about what the levers for change are as we go forward, as we try to scale down, we need to be cognizant of that.

So that needs to be a consideration for all of us going forward, is what are the politics of the situation?

Angela: I think the context of health – the use of this kind of technology in health – is what’s different. It’s already being used in many other walks of our lives explicitly, and we’re okay with that, just not so much in health.

What’s happening in the commercial sector with social media, and God knows what else, we have no idea often. It’s quite clear that some of our very personal data and probably image data is being used across those platforms and we have no control and no insight really. So you’ve got to see things in that context I think too.

[musical interlude]

Lorenn: All data may be repurposed and used in contexts other than the context in which the data was collected. Biometric data collected to operate facial recognition technology may be used for law enforcement, for research purposes, or for other purposes which have an impact on the people whose data have been collected. We asked our guests what is different about collecting biometric data, and facial images specifically, and why repurposing this type of data in other contexts can be an issue.

Lizzie: Once the data exists, it becomes like a honey pot where people who want to access it will start petitioning government. And that may well extend beyond the original mandate of why it was collected, but government may not be concerned about that and carry on. It’s very tempting if you’ve made use of facial recognition in a public health setting for then that to potentially be used if you’ve violated your quarantining app and you haven’t complied with the various quarantining rules, that then that might be a reason why that data is shared with law enforcement.

Diego: So one really big issue is what about secondary use of the data, and who has access to this data via secondary use. That can be in the immediate, in terms of law enforcement. I think there’s a really interesting history that’s going to be told about public health working and not working with law enforcement. We saw in New South Wales, there were stories of police wanting information about individuals from public health, and public health actually saying no, because they wanted to protect the individuals. They knew that working with community was about building trust, and likewise the data that’s kind of driven or generated from it. So I think that the secondary use of data, the use of data beyond public health, is probably a really, really big one. Then the other thing, again within the context of public health and research, is the idea that even if we’re not consenting in the traditional research ethics sense of the word, there’s at least some kind of assent, or at least some kind of tacit agreement, that people have given that data for public health purposes during the pandemic. So there’s this question: are we going to use this data for research purposes outside the pandemic, when the pandemic ends?

Mark: One of the claims that we’ve been making about facial recognition technology is that in a sense what inspires it is the model, which is pioneered online, which is when you move through online spaces, it’s possible to track your movements and your activities and link them to other attributes of your behavior elsewhere.

It’s harder to do that in public space. Although it’s becoming easier with network devices like smartphones but smartphones, in practice, they often are linked to a particular individual but it’s not a guaranteed connection. You can turn them off. You can forget them somewhere. Whereas you can’t do that with your face. It’s with you. And so, it allows for passive information collection in public space that can link an identity to other information about it.

And for populations, who have historically been the subject, especially of data collection by law enforcement and government authorities, this enables an enhanced mode of data collection, linking movements throughout the course of the day with other data. It’s also worth saying something else about facial recognition technology, which is really important. And that’s that it corresponds to a shift in the modality of surveillance from a discrete targeted practice to a population-level form of pattern discrimination.

Memunat: Discussing data collection, storage and re-use in the context of facial recognition technology leads us into deeper consideration of the role and power of governments in this space. The sub-regime of relevance to this topic is the policy sub-regime. This includes the role of existing and emerging policies, and the role of regulation.

For Mark and Gavin, facial recognition technology surfaces a range of questions about the role of government in participating in, and also regulating, the surveillance economy.

Mark: If you had strong protection against the collection and use of biometric data, it would fundamentally change the business model of the online economy. That seems to me to be at least worth having a discussion about. I’m not beholden to that business model. I think it’s a bad business model. I think we’ve got ourselves in a corner by building our whole online system around that business model. But the fact is, disrupting that business model – that would be huge, right? Because we’ve already embedded ourselves in it so much.

But I do think it’s time to start asking questions, like, yeah, what if we create a regulatory system that makes that business model impossible?

Mark: For me, the question about surveillance always comes back to relations of power, control and accountability. Then the question really becomes: who do we want deploying the technology? Who do we want to have access to it, and how do we hold it accountable?

Lorenn: Lizzie also explored questions of accountability. We asked Lizzie: what is the responsibility of government to citizens when deploying facial recognition technology?

Lizzie: There is also then, I think, a responsibility to use these forms of technology that are still in their infancy in only very limited settings, with lots of safeguards. And that’s certainly not what we see. If you were an agency thinking of using this kind of technology, knowing that there are faults within it and that it’s got limitations, then you might expect that a responsibility of a government agency would be to implement safeguards. And that is not what we see.

And so there’s a separate question there that even if we were confident that this technology was very good and worked effectively, does the government have a responsibility to citizens — and I would argue that it does — to minimize uses of this very invasive form of technology to only the absolutely most necessary uses and to only deploy it in quite limited settings? And again, I don’t think we see government carrying out that responsibility.

Lorenn: Lizzie went on to talk not only about failures in government responsibility but also about the tendency for government representatives to underestimate the social bonds that exist between citizens – that is, their sense of responsibility to one another.

Lizzie: In fact, politicians always claim that we’re the worst versions of ourselves and that’s why we need to invest in surveillance systems. That’s why we need to have sophisticated technology to keep track of people, because you can’t assume that you’ll be safe otherwise, and that the state is the great provider of this safety.

And that justifies all sorts of approaches to criminal justice as well as what we’re talking about here, public health settings and enforcement of public health orders. And I think that really underestimates the social bonds that we have. And I feel in a lot of ways, that’s the kind of untold story of this pandemic, just how much people did help each other, did look after each other, did sacrifice huge amounts in the name of protecting vulnerable sections of society. And if I was in a public health setting, that’s what I’d be looking to as a foundation of public health policy.

Lorenn: Angela reflected on trust between public health institutions and citizens as core to the success of public health.

Angela: I think in general, still just about at the moment, there is a lot of trust in public health in Australia, but probably less trust in the relationship between health and politics and less trust in politics generally or politicians generally and the potential nefarious means or motivations that they may have.

So I think maintaining trust in public health will be key however the future unfolds. The use of these technologies will be here to stay, but it may not be front and center of everyday life in the way that it has been before. I think the transparency and the dialogue around how we scale back, and how we reintroduce and why, has been very important, certainly in Australia and I think elsewhere.

Memunat: Speaking of how the future may unfold, we asked Diego and Angela about what we can learn from the past when it comes to pandemics, the use of technology and public health. Diego shared this story:

Diego: I remember when I took introduction to political science all the way back in first year when I was 18 years old in undergrad, and the very first lecture the professor goes, “If you take nothing else from this course, always remember to follow the money.” And I think that applies in public health as well, especially when we think in terms of technologies.

So there is money tied to the continued use of surveillance technologies – or sorry, technologies for the sake of surveillance. We might have lo-fi solutions to problems that arise going forward. The question is, will we take those? What will be the pressures that public health practitioners are going to face in light of the pressure in a capitalist system to continue using these new tools?

Lorenn: This led us to two more sub-regimes of interest – the role of industry in normalizing new technologies, and the creation of new markets. Lizzie and Gavin reflected on the role of industry in driving the uptake of facial recognition technology in recent years, and how it impacts citizens.

Lizzie: Industry can be a big driver of uptake in various different government departments. And I think that is a large phenomenon that’s mostly out of sight for many citizens because it occurs behind closed doors. Governments are lobbied and advocated to by industry. Solutions get proposed by industry for problems that government might not realize it had. And then they’re rolled out and experimented on people after the decision is made, rather than as a consequence of public consultation and public engagement.

Memunat: For Gavin, the role of industry – the biometrics industry in particular – has also resulted in the creation of new markets.

Gavin: I think it’s interesting that we’ve seen the biometrics industry very much able to jump on this moment and render it into an opportunity to mass market their various technologies. This is a really opportune time, in this moment of crisis where everything is about governing circulations and trying to striate different flows – restricting movement, but also enabling the movement of capitalism and capital.

And these technologies, we’ve seen from a meta-distance away, are very much being implemented to do that work – and, in a sense, rendering people into walking sensor platforms. I think that’s been an interesting thing, the way in which citizens are imagined in that way, because they are always, always networked as they move through space.

That’s certainly been an interesting theoretical dimension to this — about how individuals are imagined as these network subjects moving through these digital enclosures, as Mark has previously described them.

Memunat: Angela also commented on the connection between the pandemic and the formation of new markets.

Angela: In some ways the pandemic, in some situations has created a very fertile ground for innovation, because we’ve been in an incredibly constrained and restrained situation and people under pressure innovate. I think if it wasn’t facial recognition, it would be something else.

So the facial part, I think, is a more open argument. It’s more about how technology is used generally in health rather than specifically. I know you are focused on facial recognition technology as an idea, but it’s not the only concerning technology in terms of identifying individuals. So I think we’ll never stop that advance, and it’s more about care and consideration about how and why it’s used, and transparency around that. That’s more important.

Lorenn: Lizzie also emphasized the need for care and consideration when deploying technology. For Lizzie, this includes asking questions about the problems we’re trying to solve with facial recognition technology.

Lizzie: I think the first question is: what is the problem we’re trying to solve? Are we trying to surveil citizens? Or perhaps, I think more accurately, what we’re trying to do is make sure people comply with quarantining orders, to use the example of the facial recognition quarantine app. And if we’re trying to do that, we have to ask why they might not be complying. A big part of the pandemic – of understanding why people might not have complied with orders – is that they were struggling to make ends meet because they were essential workers. They might not have been classified as such because they were food delivery workers or something similar – they worked in the gig economy. But I would argue those people were essential workers on one reading, and they needed to make ends meet.

Lorenn: We can also understand the impact on different markets from the perspectives of different stakeholder groups. Most of our guests spoke about the importance of understanding the impact of the pandemic on different parts of society.

Diego highlighted immigrant groups:

Diego: This is not going to play out the same way for everyone in Australia, let alone the world. It’s going to be easier for me to reject being surveilled – a little bit easier – than it is for, say, an immigrant from Latin America coming to Sydney for the first time. It’s just going to play out vastly differently.

Lorenn: Lizzie drew our attention to the elderly, as well as recent immigrants.

Lizzie: If you think about who’s most vulnerable in the course of this pandemic, it’s been elderly people, in particular, and recent immigrants who don’t have a good grasp of English. And then we are saying that we need to introduce quarantining apps with sophisticated facial recognition technology in order to enforce relevant stay-at-home orders for these people. You could see how there may well be a disconnect there. It requires more resources, in fact, to educate and help these people make use of these technical systems, when there may be much more straightforward answers and quite obvious cultural reasons or otherwise why people may not be complying. But then you’ve created this public health narrative where non-compliance is the fault of the individual rather than the system that we’ve implemented.

Memunat: Turning to surveillance technology used during the pandemic, Mark discussed the differing experiences of majority and minority groups.

Mark: Surveillance typically has a double veillance, right? It’s used to benefit certain groups in terms of convenience, access, and so on, and to disadvantage other groups, who’ve historically been the brunt of surveillance practices. Often people of color, folks who’ve been subject to discrimination, oppression, domination. They tend to get the brunt of the negative consequences of surveillance. 

Memunat: Building on this point, Gavin noted that to expand the public debate around facial recognition technologies, we also need to engage majority groups with the transition our society is undertaking.

Gavin: One of the things that’s going to be a challenge for the wider social debate around facial recognition ubiquity is just trying to engage with middle-class groups in particular about: why not? Why not go down this path if this thing is going to enhance and optimize my life, make customization much more specific and personalized to me, and is not going to negatively impact upon me – in fact it’s going to speed my life up even more, make my life more convenient, securitize my life more… I think where this is really going to be important is to engage middle-class groups about the social implications and impacts of a culture where the face becomes an interface and an absolutely primary medium, which affects your life in quite profound ways.

Memunat: In the face of such complexity, it can be easy to get disgruntled, distracted or complacent. Diego, Peter, Lizzie, Mark and Gavin all highlighted in their own ways the importance of this moment in time and how it may shape the future of these technologies and their impacts.

For Diego – the time is now to consider public health in a wider context.

Diego: Part of that means keep coming back to this question of what is it that we need to do for the sake of public health versus what do we want, you know, versus what can we do because we can do it.  So we always have to keep in mind what is the good of public health that we’re trying to achieve here? And I think it requires acknowledging that public health is not the only thing that matters.

Memunat: For Peter – the time is now to recognise that even if the pandemic is considered over, the experimentation with facial recognition technologies is not finished.

Peter: At what point do you decide that this is over? Now you could of course say, now once the Australian government says we’re fully opening up and there’s no more lockdowns, there’s freedom of travel you can say okay, we’re officially at that point. But are they still using AI technologies for facial recognition? Maybe there’s more concerns from those interested in civil liberties to agitate against this technology. Maybe the companies that deployed those sort of technologies are more interested in growing their market. Maybe there are different agencies who have seen what’s going on and thought you know what, this could be useful for us to do something else. In that sense the genie is out of the bottle and the experiment is never over.

Lorenn: For Lizzie – the time is now to work out how to protect rights and prevent harms, particularly for those most vulnerable.

Lizzie: I see the huge potential of technology to deal with some very serious problems, but I can also see its potentially insidious impact and the huge harms that it can cause. And this is the moment, I think, before these systems become instilled and entrenched, where we have the opportunity to have these discussions and avoid large-scale harm, particularly for the most vulnerable people in society, by thinking through how we can make sure their rights are protected and that harms are prevented before they occur.

Lorenn: For Mark – the time is now to ask ourselves important questions before facial recognition technology is normalised.

Mark: Facial recognition technology is an interesting moment because it gets attention because there’s something quite sensational about being able to recognize people passively as they wander around. It’s not yet reached widespread normalization. And so, it gives us a purchase point to ask that question, how do we want to deal with this new quantum leap in monitoring before it becomes completely normalized?

Lorenn: And for us, as PhD students at the ANU School of Cybernetics, we also believe that the impact of facial recognition technology needs to be seriously considered, in this moment, by the Australian public, and citizens in other countries where FRT is increasingly being deployed. And we hope that this podcast episode serves as a useful addition to the conversation around the deployment and normalisation of facial recognition technology triggered by the COVID-19 pandemic.

Liz: Thank you for joining us today on the Algorithmic Futures podcast. To learn more about the podcast, the Social Responsibility of Algorithms workshop series and our guests you can visit our website algorithmicfutures.org. And if you’ve enjoyed this episode, please like and share with others.

Now, to end with a couple of disclaimers.

All information we present here is for your education and enjoyment and should not be taken as advice specific to your situation.

This podcast episode was created in support of the Algorithmic Futures Policy Lab – a collaboration between the Australian National University School of Cybernetics, ANU Fenner School of Environment and Society, ANU Centre for European Studies, CNRS Lamsade and DIMACS at Rutgers University. The Algorithmic Futures Policy Lab receives the support of the Erasmus+ Programme of the European Union. The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of this podcast episode’s contents, which reflect the views only of the speakers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
