
Episode 3: Public trust and accountability in the digital age, with Pia Andrews

In this episode, we chat with Pia Andrews, a self-described “open government, digital transformation and data geek” with a passion for making the world a better place. She works predominantly on transforming public services, policies and culture through greater transparency, democratic engagement, citizen-centric design, open data and emerging technologies, in the public sector and beyond. For more about Pia, visit her website, pipka.org.

We invited Pia to join us today because she has been thinking deeply about how to create and use technology for public good for the last 20 years. She will also be presenting at Social Responsibility of Algorithms 2022, which is supported by the Erasmus+ Programme of the European Union.

Listen and subscribe on Apple Podcasts, iHeartRadio, PocketCasts, Spotify and more. Five-star ratings and positive reviews on Apple Podcasts help us get the word out, so if you enjoy this episode, please share it with others and consider leaving us a rating!

With the support of the Erasmus+ Programme of the European Union

This episode was created in support of the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.

Disclaimers

The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of the contents of the podcast or this webpage, which reflect the views only of the speakers or writers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

All information we present here is purely for your education and enjoyment and should not be taken as advice specific to your situation.

Episode credits

Podcast Creator

Liz Williams

Hosts

Zena Assaad

Liz Williams

Guest

Pia Andrews

Producers

Zena Assaad

Liz Williams

Episode artwork

Zena Assaad

Audio editing

Liz Williams

Music

Sourced from https://mixkit.co/ and https://pixabay.com/.

Episode transcript:

Liz: Hi everyone, I’m Liz Williams.

Zena: And I’m Zena Assaad.

And this is The Algorithmic Futures Podcast. 

Liz: Join us, as we talk to technology creators, regulators and dreamers from around the world to learn how complex technologies may shape our environment and societies in the years to come.

Zena: In today’s episode we talk with Pia Andrews, an open government leader and special advisor for the Benefits Delivery Modernization program at Service Canada (ESDC) in Ottawa, Canada. Pia works predominantly on transforming public services, policies and culture through greater transparency, democratic engagement, citizen-centric design, open data and emerging technologies, in the public sector and beyond.

Liz: Pia is a legend in the open government space. A self-described “open government, digital transformation and data geek” with a passion for making the world a better place, we invited her to join us in this episode because she has been thinking deeply about how to create and use technology for public good for the last 20 years. She has also been a strong advocate for ‘rules as code’ – the idea of turning legislation into machine-readable code. This turns out to present an interesting opportunity for rethinking how society translates legislation into practice, and it’s something that governments in New South Wales, New Zealand, France and Canada have all been experimenting with. But before we go there, we wanted to start at the beginning – with Pia’s background. We wanted to understand where her interest in public service, policies and culture began.

Pia: My interest in this space is actually a convoluted story. My mom was a techie. I grew up in a household where my mom built a lot of the business systems in the 80s and 90s, in my small country town. And I grew up with the skills and intuition of a techie, but no interest in it. I saw it as a soulless path, not because there’s a problem with what my mom did. I loved what she did, but I didn’t see it as meaningful or purposeful. It was just helping businesses do their thing. It was just a utility.

So, I went and tried a few different degrees, not completing them, just shifting around. And I just didn’t find my truth or purpose in physics or in health or in a bunch of different places. So, I ended up, a couple years later, going and working in an ISP, just to fill time. I thought, “Well, I’ll just go and work in tech for a little while, while I figure things out.” And I discovered free and open-source software. And I discovered a couple of interesting things.

First of all, a place where my skills and my values actually aligned. And suddenly I had an opportunity to apply what I’m good at, which is tech and systems, in a way that actually aligned to what I care about. The second thing I realized, and this was a huge epiphany, was that the people who build the tech are the ones who shape the world. People’s experience and work and personal lives, even in the 90s I could see this, were increasingly being shaped by the tools that we use.

So, I felt this profound and very strong sense of responsibility to ensure that I be one of the people who contributes to ensuring that technology helps people to live well and better, not to be used to limit or constrain or repress. And that’s how I got really interested in tech. Then, I worked in the tech sector for a while, and I loved it and it was fun. And I spent my volunteer time getting involved in Linux user groups, in Linux Australia, in Software Freedom International, in a whole bunch of that community, which was great fun.

I found myself advocating against things like the US-Australia Free Trade Agreement because of the impact it would have on our sector. So, I started interacting with the government, and I remember going to heckle an up-and-coming, well, she’d been around for a little while, but a senator who was the shadow minister of IT. And when I went to heckle her, she actually gave the best speech I’d heard anyone give about technology and about how the US Free Trade Agreement was not going to be good for us. I was like, “Oh. Well, that, that’s what I was going to say.”

So, we then became friends, and then later on she actually poached me to be a tech policy advisor working with her to reimagine government. And that was Senator Kate Lundy. So, we did a whole bunch of really cool work around Gov 2.0, reimagining what government could look like in the 21st Century. This is about 13 years ago now. And it was while working for her that I finished some studies in government, and I developed my passion and interest in the public sector.

Because as a systems person, as a sysadmin originally, I could immediately see that the public service, when it operates well, truly can be a platform that people can stand upon: a means of equitable life, a means of distribution of wealth, a means of supporting the values that I see in society and that I hold. But when it’s done badly, of course, it can be just the worst. So, at that point I committed my life to public sector reform.

So, I’ve been working in public sectors ever since, for the last 10 years. And partly it’s about tech, but largely it’s about the intersection of technology, government, and society, and how we can ensure that we are creating a world where everyone can truly thrive and live well, and where we create the optimistic futures that align with our values, not the pessimistic futures that you fall into by default when you don’t take the steering wheel.

Zena: What’s fascinating about Pia’s story is her view that working with technology actually aligned with her values of ensuring that technology helps people to live well and better. We found this interesting because, from our experience working in the technology space, we often come across a popular narrative of a disconnect between the development of technology and considerations around social impacts. Pia’s story begins to call this dominant narrative into question. In the open-source software community, for example, you can find people from all walks of life volunteering their time and expertise to create, distribute, and maintain free software that they believe can, as Pia puts it, improve the world.

Pia: One of the cool things about, not all open-source, but a lot of open-source communities is that people are putting their values front and center in developing tools that can improve the world. They’re not just trying to scratch a technical itch, they’re actually trying to help people be free and live well and have their best life. One of the reasons I think that technology has got such a bad rap is because there’s a big world of difference between those values-driven technical communities that not a lot of people have had experience with and the traditional IT department.

So, the traditional IT department of any organization, and indeed a lot of the tech companies, particularly, to be fair, through the 90s and 2000s, have basically built their model and their operating process around the idea that technology is magic, “Just trust us, we’ll sort it out, just give us lots of money, and we’ll go and solve these problems,” and at the same time, this notion that technology is just a means of implementing. It’s not a means of innovating, it’s not perceived to be a means of actually creating value, it’s just considered a means of implementing someone else’s need.

When you start to embrace the idea that technology and technical skills are actually not just an end but also a means, a set of skills and intuition and values that you bring into your design process, you bring into your policy process, you bring into your business strategy, you bring into your organizational blueprint, if you actually identify it as a strategic approach rather than just an implementation approach, that changes the game.

But because too many IT departments have been set up to just receive orders and implement orders, a lot of people in the sector have become just, “Okay, once you figure out what you want to do, I’ll just go and implement it. Success for me is successful implementation.” If we can bridge that gap and actually get multidisciplinary teams where you have your designers, your policy people, your visionaries, your data people, your technologists, your engineers, in the room together, now you’ve got something quite profound.

Now, not only do you get the value of all of those disciplines actually merging together to get a holistic approach from the ideation and design right through to implementation and continuous improvement, but you also can then draw on those different instincts and disciplines and such as part of a creative process. And that would dramatically shift the game.

But that would involve, first of all, acknowledging that there is a gap, and that that’s a problem, and it would involve a lot of IT departments actually shifting their culture. It isn’t just about becoming agile. A lot of them have adopted agile practices, but a lot of them still say, “Well, once you’ve figured out what you’ll do, we’ll implement it in an agile way.” But bringing technology as a valuable skill into the creative process is the next step that a lot of organizations still need to go through.

And to be fair, a lot of people have had a very frustrating time dealing with their IT departments because of that operating model. And because of that “Well, we can’t talk to you till you give us a lot of money for project management” mentality, there’s a lot of evolution and maturity that needs to happen, both inside IT departments and our sector, but also in organizations valuing technology as a means to their end, not just the implementation.

Liz: Pia advocates for a fundamental change in the way we look at technology. She believes we have an opportunity to think about technology as an organisational strategy rather than just a new capability that needs to be implemented. This thinking is apparent in Pia’s work on ‘rules as code’ — a movement to translate legislation into machine-readable and actionable code. We asked Pia where the idea for ‘rules as code’ originated and how it works.

Pia: So, because I’m a person who always looks for patterns, and I can’t unsee a pattern once I’ve seen it, one of the things I always look for is, what does government as a platform look like? What is repeatable? How do you scale, not just a solution, but an impact, how do you actually get an exponential response to exponentialism, which is one of the big paradigm shifts that we’re facing right now. And when you look at government and you really stand back and look at the public sector as a whole, you can start to see that there are some repeated patterns.

Regulation and legislation are two sides of the same coin. They are basically the rules, they are basically the operating system that we all operate on. Regulation sets the rules by which regulated entities and markets need to work. Legislation is exactly the same, but focused on departments. So, it’s how you regulate how departments operate, how benefits and services and taxation and such works within the constraints of how government operates. So, legislation and regulation are really just rules.

If you want to get consistency in how rules are applied, you need to have a reference implementation. Of course, rules have been implemented into software for decades now, since computers were invented. But up until now, the way that we have done that as a sector, and particularly across government, is, we’ve effectively used, and I like to joke about this much to my lawyer-husband’s chagrin, lawyers as modems.

Lawyers are translating between analog and digital. They’re working with operational people who then say, “Well, here’s the written version of what that means.” They’re then merging legislative and regulatory rules with operational policies. And you get this spaghetti mess of requirements definitions written down that are then given to IT to implement. Now, IT have no means of checking whether that’s true or not, because their job is not to question what the business does, but just to implement it. And again, this is why this break between the business and IT is such a problem.

So, what you end up with is not just widely diverging interpretations and implementations of the same rules, but you also have no means of tracking it, checking it for consistency, verifying it, et cetera. The second problem here, of course, is the shift to principles-based regulation and legislation. When you get principles-based rules, they require interpretation. And the moment you have lots of interpretations, you get, again, inconsistency of implementation.

And because too many people started to say, “Oh, well, principles-based is just so much better, so let’s just do that,” then, of course, compliance with rules becomes very expensive, because you now need to have lots of lawyers doing that interpretation for you. So, when I was working at AUSTRAC, Australia’s financial intelligence agency and regulator, the first regulatory agency I’d worked for, we could see this very directly: here are the regulations, some of them are prescriptive and some of them are judgment-based.

But even the prescriptive ones, I started saying, “Well, if everyone has to follow these rules, why don’t we make them available as an API? If the software systems of the regulated entities can just test the rules or test their software against our rules, then they could dramatically reduce the cost, the complexity, the risk of implementing the rules themselves.”

So, it dramatically reduces the cost, it dramatically reduces the time and speed to delivery, because a law passes in parliament, and then people have to know about it, find it, interpret it, translate it into code, implement it, test it. It’s months and months and months before you’re getting things put in. Imagine if we actually had rules as code from the start, so that when the human version passes the parliament, the machine-readable version is available as an API that same second.

Imagine if there’s a change to the rules and you can communicate that change through software; suddenly you can dramatically reduce the cost and time of change. So, if we actually had a reference implementation of prescriptive, put an emphasis on that word, “prescriptive”, rules as code, legislative and regulatory, that people can consume, that business systems can consume, that AIs can consume and test against, and if we had common test suites, as it were, for what output you get from a given input, suddenly you have a couple of things happen.

Not only do you get greater efficiency and reduced time and cost of change, but, and this is where it gets very interesting, you can start to say, “Okay, what if I capture, in real time, that this decision or action was taken based on this law at this point in time, as referenced here.” Now, you could capture that in real time. Now you’ve got traceability to the legal basis upon which decisions are made. And that traceability gives you two very important things that are not possible today, not possible in any real-time, automated way, and that is real-time auditing and real-time appealability.
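To make the reference-implementation idea concrete, here is a minimal sketch in Python. The rule, the threshold, and the test cases are all invented for illustration; a real rules-as-code system would publish something far richer, but the shape is the same: a codified prescriptive rule plus a shared test suite that any third-party implementation can be checked against.

```python
from datetime import date

# Hypothetical prescriptive rule, published alongside the human-readable law.
# The name and threshold here are invented for illustration.
PENSION_AGE = 67

def eligible_for_pension(birth_date: date, is_resident: bool, on_date: date) -> bool:
    """Reference implementation of a (made-up) age-pension eligibility rule."""
    age = on_date.year - birth_date.year - (
        (on_date.month, on_date.day) < (birth_date.month, birth_date.day)
    )
    return is_resident and age >= PENSION_AGE

# A shared test suite: given these inputs, the law requires these outputs.
# Any third-party system can run the same cases against its own implementation.
TEST_CASES = [
    ((date(1950, 1, 1), True, date(2022, 1, 1)), True),   # age 72, resident
    ((date(1960, 6, 1), True, date(2022, 1, 1)), False),  # age 61, too young
    ((date(1950, 1, 1), False, date(2022, 1, 1)), False), # not a resident
]

for args, expected in TEST_CASES:
    assert eligible_for_pension(*args) == expected
```

Because the expected outputs are published alongside the rule, a regulated entity’s system can verify itself against the same cases, and a decision record can cite the exact rule it relied on, which is the traceability Pia describes.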

You should be able to communicate to people the basis upon which a decision is made, particularly by government, but also by regulated entities in the areas for which they’re regulated. And in New Zealand, for instance, there’s this wonderful piece of legislation that I don’t think we have an equivalent of in Australia. It’s called Section 23 of the Official Information Act, their equivalent of FOI. And I highly recommend people read it, but basically it says that departments are legally required to provide explainability of their decisions. Legally. It’s a basic premise of administrative law, but that’s a really empowering thing for citizens, isn’t it?

So, this actually provides the ability for a person to say, “Well, under what basis was this made? Oh, under that basis. Oh, but actually that law’s out of date now. Actually, that operational policy you’ve got was conflicting with the law.” I’ve found many cases now in governments around the world where the way that a law was written is different to what was implemented.

Let that sink in, all the people who say, “Oh, but turning the law into code is going to create obfuscation,” and all the rest of it. The thing is, it’s obfuscated now: all of those business systems that have all of these laws and rules and legislation and regulation codified, with no mechanism of checks and balances, of auditing, of appealing. It is right now a black box economy, and we need to turn it into something a bit more traceable.

It’s a very exciting area. It provides the opportunity for integrated services, for more efficient compliance, and of course, for greater access to and visibility of justice. And the final opportunity, of course, is modeling. In places like France, where they have codified the social services and the taxation system, anytime you want to make a change to policy, they can actually demonstrate, with real demographic population data, the impact a change to policy is going to make across the whole thing. Whereas in Australia and New Zealand and most places around the world, when they do modeling of change, they only model it within their particular portfolio, and they can’t tell the unintended consequences until they implement it.

Well, we can then start to say, if we actually create these rules as code, we can start to do some very interesting modeling. Then, you could even have things like, as you might have seen with the Indigenous Protocols for Artificial Intelligence work that’s been done in Australia, what if we supported and worked with Indigenous communities to get Indigenous law as code? Then, suddenly, when a new policy is introduced, you can test it against those laws and say, “Well, actually, this is going to conflict with our ancient laws and protocols and ways.” So, suddenly, you start to get some really interesting opportunities in this space.

Liz: I wanted to understand how this interplay between rules as code, modelling, and community input might work in practice – so I asked Pia to explore an example of how an Aboriginal community might make use of the output of a model of some new piece of legislation’s impact upon them. Who would they talk to? How would the conversation go? What would the potential impact be – both on the community and the legislative code?  

Pia: So, first of all, just an output from a model is a problem. We should actually have open government models. And the wonderful Audrey Lobo-Pulo has been advocating for this for as long as I can remember, for at least 10 years that I can think of. But if you don’t open up the models themselves, then the output, obviously, isn’t all that useful, which is why legislation and regulation as code, combined with open models, is actually really helpful.

But if government proposes, “We’re going to change this benefit so that you’re going from 65 to 70,” or something like that, then, let’s say, a particular Aboriginal community can say, “Well, in our community, actually, there’s much higher mortality before that age. Basically, these benefits you’re putting in place are going to dramatically and disproportionately benefit people who have a longer lifespan.” That’s a problem.

What if a policy might be around how, oh, I don’t know … Let me think of some examples. A policy might be about benefits to a particular group that the government thinks needs help, and an indigenous community might have their own data, such as a lot of the work that’s happening with Iwi in New Zealand, where they actually run the same model, but against the community’s data, which of course is where there are a lot of gaps in government data, because government only sees the data of people coming to government services.

And they might find that, actually, according to their data, you’re going to get quite a different outcome. So, opening up the rules, opening up the models, and encouraging and supporting people to be able to apply those in their own context gives you a richer policy conversation, a richer opportunity to identify problems before they’re normalized. And it also comes back to the heart of trying to complement evidence-based policy with experimentation-based policy.

Evidence-based policy will only ever give you the status quo, because evidence finds you problems, but it never gives you solutions; it’s only ever going to give you a solution based on a trend of where you’ve already come from. Experimentation-based policy is where you’re including people in participatory policy design, participatory service design, where you’re actually saying, “Well, there are five different ways we might be able to solve this problem completely differently from what we’re doing today, let’s try them. Okay, we’re going to try this here, here, here, and here. And now, let’s use that data to identify which of these is working.”

So, I think a lot of people have come from a very heavy evidence-based approach, and think that it’s sufficient, but neither of those approaches is sufficient. They both are necessary, if that makes sense.

Zena: Experimentation-based policy involves, as Pia described, a participatory approach to policy and legislation. This means including people from different communities in the design process to encourage a wider breadth of representation. While participatory approaches may lead to more robust and inclusive policy and legislation, engaging people in the design process is not a simple task.

Pia: So, first of all, I think we all need to acknowledge that most people are 100% slammed. They’re 100% at work and then they’re 100% at home, and in between they’re being inundated with information overload, from their email to their social media to the news, to whatever horrors are coming with the next emergency. So, most people don’t have a lot of capacity. Most public servants don’t have a lot of capacity. Most researchers, most normal people, don’t have a lot of capacity.

So, when we start talking about participatory government, which is absolutely one of my biggest passions, the first thing we need to acknowledge is that, unless we address the capacity issue, we’re never going to see participatory government. I’ll come to what it looks like in just a moment, but one of my dreams is that we would do something like what a lot of Australians used to do, and will again: the obligatory post-school overseas experience.

In Israel, they have mandatory military service. In some countries, they do something where they’re giving back to the community. I would love to see a civic gap year where anyone can opt in to be considered (opt in, probably, because that’s the society we are). And every year, some proportion of the public sector, maybe 5%, maybe 10% … Maybe this would be good for science as well, because too many scientists see the public as just people to be convinced in order to get funding, as opposed to participants in the problem space.

But if some proportion of the public sector were effectively one-year fellowships, as it were, and every year you had a demographically balanced and representative group of people from all different backgrounds be paid the same amount, regardless of the background they’ve come from, to work in whatever policy area they want, that would disrupt the usual self-reinforcing conversations that can sometimes happen. It would dramatically disrupt and keep it very real as to the impact on people. And, to my initial point, it would dramatically create capacity.

So, if we want to have proper participatory governance, we need to have capacity. That’s one strategy. Another one is actually paying people for their time. Of course, in service design, there is a practice of paying for user research, for user interviews, for user testing, so that’s really good. That’s started to happen. But why don’t we pay people to participate in public consultations, for instance?

If there’s a consultation that’s important, why wouldn’t a person’s time for an hour be worth subsidizing, so that they can actually afford to take that time off work, or off whatever else they need to do? So, capacity is critical. Once you start to free up capacity, then it comes down to the nature of how you participate.

I would love to see a future, and I’ve got some modeling of this, and a fabulous virtual reality experience that we built in New Zealand a few years ago as an example of this, if you’d like to see it, where people just have their helper, their Alexa, their whatever, and it knows who you are. It’s independent from anything. It’s not provided by government or by a company, but it is associated to you and to your values and your interests. And it’s just helping you navigate the world. “Hey Peter, a new consultation’s come up about a topic that I know you’ve got a passion for. Do you want me to put in the same sort of stuff you put in last time? Here’s what they’re asking.”

I’d love to see a flex-tax idea, where for 5% of your tax you can say which policy areas or program areas you’d like it to go toward. Not which school, otherwise we’ll get Florida, but the notion that I care about education, or I care about river quality, or I care about these policy areas. If we had the ability for people to direct a small proportion of the tax that they pay, that would be an amazing means of having a bit more representativeness, and also a great way to understand where the public is leaning on things.

Then, people’s ability to participate actually becomes not just direct with government, but through third parties. Why can’t you have public policy modeling tools? Like the very cheekily named “policy difference engine” we built in my team in Canada, which has been a 10-year dream for me. And of course, it’s a bit of a hat tip to Lovelace and Babbage.

Liz: Pia is referring to Ada Lovelace and Charles Babbage here. In the 1820s, Babbage started building a ‘difference engine’ – a massive, mechanical machine that could calculate polynomial functions, which were crucial for navigation, science, and other applications. And Ada Lovelace, who became Charles’s collaborator, was the first to recognise the significant potential of a successor to the difference engine called the Analytical Engine. She wrote a program that such a machine could carry out. Their work, in many respects, laid important groundwork for the modern computer.

Now back to Pia’s policy difference engine.

Pia: But the policy difference engine is a really simple idea. Here’s my policy. Here are either the changes to the policy I want to play with, or a whole different policy in the same area. Here’s population data applied. Okay, what’s the difference? Who become the winners and losers? Where are the unintended consequences? What are the consequences, full stop?
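A toy version of this idea can be sketched in a few lines: run the current and a proposed version of a benefit rule over the same population records and diff the results. The population data and both rules below are invented purely for illustration, not drawn from any real policy.

```python
# Toy "policy difference engine": apply two versions of a benefit rule to
# demographic records and report who gains or loses under the change.
population = [
    {"name": "A", "age": 66, "income": 18_000},
    {"name": "B", "age": 68, "income": 32_000},
    {"name": "C", "age": 72, "income": 12_000},
]

def current_rule(p):
    # Current (made-up) rule: eligible from age 65, no income test.
    return p["age"] >= 65

def proposed_rule(p):
    # Proposed (made-up) change: eligible from 70, with an income test.
    return p["age"] >= 70 and p["income"] < 20_000

# Diff the two rules over the same population.
winners = [p["name"] for p in population if proposed_rule(p) and not current_rule(p)]
losers = [p["name"] for p in population if current_rule(p) and not proposed_rule(p)]

print("winners:", winners)  # []
print("losers:", losers)    # ['A', 'B']
```

In this toy run the proposed change produces no winners and two losers, which is exactly the kind of distributional signal, and potential unintended consequence, that open rules and open models are meant to surface before a policy is enacted.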

And of course, the next step, the next-generation policy difference engine down the track, once you start to play with that, starts to become, “Alexa, here’s a different outcome I want to get. Model me three options.” Because it’s so big and so complex at the moment, if you think about all of the local, state, federal and international rules and laws and policies and all the rest of it, we can’t, as individuals, as humans, navigate that entirely by ourselves anymore. We need to be able to model the complexity in a way that helps us achieve the outcomes, because every tweak is going to have unintended consequences.

So, we need to, A, have better means of modeling it, but B, have better means of monitoring it. So, not just monitoring for my personal KPI, but monitoring for quality of life. If you actually apply quality of life metrics like the human services outcomes framework, I don’t know if you’ve seen it, but it’s amazing, if you can monitor for human quality of life, then you can start to see, oh, well this policy, this service, this regulation, is supposed to have an impact on this. Has it, or hasn’t it? Oh, there’s an unintended trend happening. What are we going to do about it?

Oh, I’ve been notified about a shift that is unexpected. Why is it? Oh, because it’s a lead indicator of some massive emergency about to happen. So, we need to better prepare our systems to be proactively identifying things, trends and patterns and opportunities and risks, rather than just waiting for them to get so bad that you’re just hanging on for dear life by that stage.

Liz: I think this idea Pia is sharing – of being able to both model and monitor changes in policy in order to craft the future in a predictive way – is really powerful. And in many ways, it rings true to what we’re often looking for in any modelling exercise: a way to predict and shape the future in ways that allow us to intervene.

This got me thinking about the power of having something like this – but also the potential challenges of implementing it in a way that is explainable to humans. And I don’t mean experts who deeply understand the tools they’re using here. I mean the people who are being impacted by the policy as it is implemented.

Now add in the idea that some of that legislation is already being created to help shape and manage complex technology that has the capacity to learn and enact change in the world. And that complex technology (which may in itself not be explainable) is likely to be playing a role in policy outcomes. I wanted to know how Pia thought about the challenge of creating explainable legislation when these kinds of technological systems are involved.

Pia: I think there’s two things. First of all, we need to have a categorization of decisions and actions. If a decision or action is one that must be explainable, then we shouldn’t be using technologies that are not explainable to automate them. So, if a person is eligible for a benefit – let’s look at something that actually affects real people who are, in that exact moment, at their most vulnerable. If you want to use black box decision making for that, then you’re not actually living up to the accountability and scrutability intent, let alone the letter, of Administrative Law. You must use explainable technologies for things that need explaining.

The second part of it, though – and I don’t think it’s a replacement for the first, but it’s certainly a corollary – is testability. So, your legislation or regulation is code, and the test suite that accompanies it can be a mechanism for actually testing the outputs of your systems, even for unexplainable AI. So, if you come out with a decision, however you came to it, so long as it’s testable – here’s the input I had, here’s the output that I got – you might have come to it for all kinds of crazy reasons, but you should be able to test it: this input equals that output against the actual law.
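The test-suite idea Pia describes can be sketched in a few lines. Everything below is hypothetical – the eligibility rule, its thresholds and the test cases are illustrative, not drawn from any real statute:

```python
# A hypothetical prescriptive rule codified as a pure function: the outcome
# depends only on declared inputs, so any input/output pair can be checked
# against the written law.
def eligible_for_benefit(age: int, years_resident: int, income: float) -> bool:
    return age >= 65 and years_resident >= 10 and income < 30_000

# Test cases drafted with the policy people: "a person with these attributes
# should get this outcome."
TEST_CASES = [
    ({"age": 70, "years_resident": 12, "income": 20_000.0}, True),
    ({"age": 70, "years_resident": 5, "income": 20_000.0}, False),
    ({"age": 64, "years_resident": 12, "income": 20_000.0}, False),
]

for inputs, expected in TEST_CASES:
    assert eligible_for_benefit(**inputs) == expected, inputs
```

The same input/output pairs work regardless of how a production system reaches its decision, which is the point about testability: even an unexplainable model can be run against the suite and checked against the law.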

Law says, you need to be this age, have these characteristics, this amount of time in the country, et cetera, and then after all that, maybe you need to meet a good character test. If we’re talking about something like citizenship where it’s a combination of judgment-based and prescription-based rules, but you absolutely met the prescriptive-based rules but there was something in the judgment that said you didn’t get it, that’s the point at which you can go and now appeal or have the argument or whatever.

But there should be no argument about the prescriptive ones, because it’s either yes or no. Either you have met or haven’t met the conditions – or the data about you meeting the condition might be wrong as well. So, if a person can say, “Here’s the reasons upon which I got or didn’t get it, based on this data and these rules,” they might not be able to argue about whether they met the rules, but they might be able to say, oh, but that data’s wrong, because, lo and behold, the Department of Immigration might have screwed up the data on that particular record, or because they changed names and so the data was different, or whatever.

Pia: So, explainability for decisions and actions, particularly in government, that require it, is the first part, and testability is the second part. The fact is that there is no excuse for not having explainability of very important things that are going to affect a person’s life. And we shouldn’t be automating and having black box decision making of that variety in the public sector. And I started a very robust Twitter thread the other day, unintentionally, but I quite often hear people in the public sector – and in the research sector, frankly – say, “Oh, if people give all this data to Google and Facebook and stuff, why can’t they just give it to us?”

And the simple answer is, and this is more so in government than it is for research, but it’s the same to some degree because so much research influences government, it’s quite a simple answer, actually; because Google can’t lock you up. Google can’t take your kids. Google can’t create a lifelong debt to your family in the same way that the State can. So, there is rightly a different test and requirement of explainability and trustworthiness on the State than there is for any other sector.

Liz: When a new capability is designed, particularly a software based system, the system goes through a testing phase. Software testing is the process of evaluating and verifying that a software product or application does what it is supposed to do.

For legislation as code – where government legislation is translated into a set of instructions – there is a need for testing that the system is consistent with actual law and for testing to ensure people are not being adversely affected.

A key ingredient for this is data. You need to have some data about the people that might be impacted adversely by a law in order to test the potential impacts of that law’s implementation.

We asked Pia how she thought about the tension between the value that comes from testing rules as code and the consequences that could come from collecting the data needed to do this work.

Pia: I think that there’s probably three aspects to this. The first one is that you can work with … And I’m talking very much about government here at the moment, so about regulation and legislation. You can work with the policy makers, in the first instance, and even down the track – you can work with the policy people to say, “Okay, let’s build some test cases,” where they say, “Well, according to our rules …” Now, they understand the policy, because quite often it’s quite deep and quite excellent, so they are able to say, “Well, a person or a business with these attributes should get this outcome.” That’s how the rules should work.

And if you can work with them to draft some of those test cases, then you can test whether your software gets that outcome. Now, if the software gets a different outcome, it might be because you’ve done it wrong or because they’ve done it wrong. But either way it gives you a point of reference to then work back from. We found one case where we codified a calculation, which is a pretty prescriptive and specific thing, and we were getting quite a different result – up to $20 different – from a particular rebate, actually, in New Zealand.

And we asked to see the code of the business system, which of course we couldn’t – they didn’t even have access to it, the usual story. But they did have the spreadsheet that the calculation in the business system was based upon. And we looked at the spreadsheet, which is a 30-year-old spreadsheet. Oof. And in the macro – or the macros – halfway through the calculation, some bright spark had rounded the number down for the purpose of easier reporting of that attribute.

And of course, that meant that it wasn’t actually lawful. It wasn’t actually in alignment with the law. And New Zealand being New Zealand, when we pointed it out, we were asked, “Are we overpaying or underpaying?” And we said, “You’re overpaying.” And they said, “Oh, that’s all right then.” That was the point in time when I think I fell in love with working in this country, in the public service, because in Australia they would’ve gone after them, wouldn’t they? Anyway.
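The bug Pia describes – rounding partway through a calculation – is easy to reproduce. The rebate formula here is entirely made up (it is not the actual New Zealand calculation), but it shows how a single mid-stream round, applied to a weekly figure that is later annualized, drifts from the lawful result by tens of dollars:

```python
# Hypothetical rebate: 37.5% of a weekly cost, paid annually.
def lawful_rebate(weekly_cost: float) -> float:
    weekly = weekly_cost * 0.375
    return round(weekly * 52, 2)          # round once, at the very end

def spreadsheet_rebate(weekly_cost: float) -> float:
    weekly = round(weekly_cost * 0.375)   # rounded mid-calculation "for easier reporting"
    return float(weekly * 52)

print(lawful_rebate(41.20))       # 803.4
print(spreadsheet_rebate(41.20))  # 780.0 -- a $23.40 discrepancy from one round()
```

Because the rounded intermediate value is multiplied up afterwards, a sub-dollar rounding error is amplified into the kind of $20-scale discrepancy described above.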

So, test cases are your first technique here. And you can do that with your policy people. And some of your test cases will be hypothetical, but they give you one point of testing. The second one, of course, is demographic data. You can use anonymous demographic data to be able to say, how are different people impacted by this policy? Again, none of these are perfect. They need to be used in combination. But demographic data does give you the chance to say, oh, here’s an unintended consequence. We knew that this demographic was supposed to be benefited by this, but look, this other one got unintentionally disadvantaged by it. So, demographic data gives you a chance to test, at least in theory, some of the impact.

Your third strategy has to be about identifying – again, working with the policy people – what the intended policy impacts are. Going back to the rebate I mentioned before: if the intended policy impact is for this rebate to encourage retired people to live at home for longer, in order both to support greater dignity of life and to take pressure off the aged care system, is either of those things happening? Or even just measuring, is this keeping people at home for longer?

Because if it’s not, it doesn’t matter how successful the customer experience is or what the performance measures say – it doesn’t matter how else you measure it, it’s not meeting the policy metric. So, identifying and actually measuring the intended policy impact, and whether the change drives that, is really important, and it actually forces people to quantify that somewhat. Then, the final technique – and this is possibly the most important; it goes back to what I said before – is measuring quality of life.

If you can measure human quality of life – and that will take into account various different attributes – and you measure it somewhat independently from any particular law or regulation, then you can see if there is a corresponding peak or trough or shift in the trend, in either a positive way, or the way that you expect, or an unexpected way. Because part of the challenge is, one of our basic principles for AI should be to do no harm, which is a general principle for public service – to do no harm. But how can you know that you’re not doing any harm? You need to measure for harm.

So, that fourth concept of measuring for human impact, or impact on humans, and having the mechanisms to escalate and mitigate and do something about it, is a critical part of it. So, those would be my four recommended techniques. Oh sorry, those are measurement techniques. The final technique, of course, is the rules themselves. So, it doesn’t matter what software you use or the way that you codify rules, there are basically two very simple tests: can I audit this? – and you can take whatever the auditing rules of your particular jurisdiction are into account – and can I appeal this? Which is just basically a design exercise. The problem I’m trying to solve is, how does a user appeal this?

Well, they need to have the decision communicated to them in a way that they can understand, in a way that’s appealable back to law, in a way that communicates the data and the outcome, and they need to be able to have, effectively, a button, “appeal me”. So, if you can meet that user story, which is why service design and engineering needs to be hand-in-hand for these things, then that’s probably the final technique.
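One way to read that user story is as a data structure: every decision carries its own inputs, its rule references back to law, and the route to appeal. This is a minimal sketch with invented field names and an invented statute, not any real government schema:

```python
from dataclasses import dataclass
import datetime

@dataclass
class DecisionRecord:
    """Everything a person needs to understand, audit and appeal a decision."""
    outcome: str                   # e.g. "rebate declined"
    inputs_used: dict              # the data the decision was based on
    rules_applied: list            # citations back to the legislation
    decided_at: datetime.datetime  # timestamp, for audit purposes
    appeal_url: str                # the "appeal me" button

record = DecisionRecord(
    outcome="rebate declined",
    inputs_used={"age": 64, "years_resident": 12},
    rules_applied=["Hypothetical Rebates Act s 12(1)(b)"],
    decided_at=datetime.datetime(2022, 3, 1, 9, 30),
    appeal_url="https://example.gov/appeals/new",
)
```

Because the record names both the data and the rules, a person can contest either one, which matches Pia’s earlier point that the data about you might be wrong even when the rules were applied correctly.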

Zena: It strikes me that what Pia is really thinking about here is trust. Every aspect of the testing – from doing the initial modelling to providing some way for people in the broader community to understand and appeal decisions made about them – is about creating something that those designing, using, and being impacted by the system can trust. We took some time to explore the concept of trust more deeply with Pia – particularly regarding the information we share and access so freely every day.

Let’s set the scene for this discussion with a bit of history. The information age, at least in the way we are experiencing it, is quite new. Personal computers were first introduced in the 1970s, the world wide web wasn’t a thing until the 1990s, and today, we can access vast quantities of information almost anywhere from a device that fits nicely in the palm of one hand.

The progress we have seen in computing power has made it possible for nearly everyone to share and access information freely and almost instantly.

This has a lot of positives – like improving the knowledge base of the general community – however, the ability for anyone to publish anything at any given time has its drawbacks – particularly in policy settings.

Pia: When I arrived back in New Zealand, in Aotearoa, late last year, I was a little surprised to see a bit of a lack of urgency in transformation. I was, at the time, working for the Canadian government, and I do work in collaboration with lots of governments around the world, and quite frankly, Australia and New Zealand are the exception to the rest of the world.

The rest of the world is on this journey of genuine systemic transformation, with the realization that normal was neither … What did he say? Sorry. There was a minister of homelessness in Pakistan who put this best. He said, “Going back to normal is neither feasible nor desirable.” There you go. It’s neither feasible nor desirable because normal is, and has been for some time, full of holes – it is not fit for purpose. We do not have adaptive, responsive, proactive public services, because our public sectors were invented in the Industrial Age.

The processes, structures, et cetera, were invented in the Industrial Age, had their last major reform to operations in the 80s, and are not really fit for purpose in a Digital Age. They’re not real time, and not distributed, and have a whole bunch of challenges. So, in coming back to Aotearoa, I wanted to write a paper to try to communicate what I saw to be the two main problems facing the country. I think these are common to Australia as well.

The first one is that the lack of integrated government services is a problem for people in myriad ways. It means that people can’t get what they’re entitled to, they can’t engage with the system in a holistic way, they don’t really understand who to talk to about what. So, the State’s not actually that helpful. It’s just there doing its thing, but it’s not helpful, whereas you go into Service New South Wales or Service Canada, and what you get is a helpful experience.

“Welcome to Service New South Wales. How can I help you?” “Welcome to Service Canada. I can see that you’re halfway through a process. Can I help you continue it? Oh, you might also want to check this out. And have you seen that this other organization is doing this?” So, the second problem I pointed out was about the ability for the public to navigate and have confidence online. And if we think about misinformation and fake news, it’s just the start. Humans believe what we see and hear far more easily than what we read.

And when I saw that Tom Cruise deepfake series that that bloke made, that was the first time, actually, in my career that I actually felt fear – proper fear – because everything else is manageable, mitigatable, et cetera. But the recognition that our coming elections are going to be bombarded with videos of literally anyone saying anything, and normal people not having a means to navigate it, really scared me. And I think a lot of governments think that they can regulate the internet. They can’t.

So, what is within your control? Well, what is within your control is the data and the information and the services that you provide. So, how can you make them more trustworthy, more verifiable, so that even if you disagree with what a politician says – because their truth and your truth are not going to be the same – at least you can tell if it’s authentic. So, this is where you get into veracity.

So, there’s a new research institute being set up in New Zealand called Veracity Lab, which I was delighted to find out about, because I’ve been absolutely passionate about this space for many years now. This is what they are looking into: how can you create an environment where critical information is verifiable? And think not just about democratic information, like elections, but emergency response information.

If you are running from something and you are being given emergency response information, how do you know it’s authentic? How do you know you’re not being driven off a cliff? How do you know that your algorithm to build an earthquake response simulation hasn’t been gamed, or indeed the data that’s feeding into it hasn’t been gamed. We are entering a very dark time where people and communities and whole nations are being gamed, and not necessarily for profit and not necessarily for nefarious reasons, sometimes just for fun, or sometimes for a machine imperative.

And machines are not motivated in the same way that humans are. So, a lot of our levers are based on human imperatives. A lot of our regulation is based on: if you do the wrong thing, we’ll find you, you’ll go to jail, or you’ll be embarrassed. Machines don’t care about any of that. So, how do you deal with that? That’s what I tried to do with that paper – and I was invited to put the paper into the Electoral Inquiry. I had written a more generic version of the paper prior to that.

And because New Zealand does this fabulous review after every general election to try to understand the upcoming threats, it was a perfect opportunity to raise some of these things – not just ensuring that official information that comes from government, or indeed from parties, is verifiable (I can disagree with the truth, but I want to know it’s authentic), but also to actually look at, create, and co-develop with the public ways to manage all of this, to help engage. Because a lot of people start from, “Well, we just need to educate people.”

Well, back to my earlier point on participatory democracy, you can’t educate people who don’t have time. No one’s got the capacity to be educated, but you’re going to put the whole onus on them to figure it out for themselves – really? What are the mechanisms to support people to have confidence, to navigate misinformation, to know when they’re being gamed? You can’t just put it all in the education system, and you can’t just put all the onus on people themselves or communities themselves. What role does government play in that context?

We almost need to sit back and look at our entire society, and actually look at, well, the constitution says, if we’re talking in Australia, the constitution says that the role of the Federal Government is X, Y, and Z. In a Digital Age, is there anything that should be interpreted there or looked at? We did a piece of work in New Zealand recently where we looked at the treaty, and we looked at it in the context of a digital age, and it was profound.

One of the parts of the treaty is about governance over land, and we started thinking, well, what’s the digital land that everyone needs to stand on? And how do you govern that in the right way? How do you manage that in the right way? How do you ensure it’s there for the benefit of everyone?

But the fact is that too many people in government think that their role is to just get out of the way, and have forgotten that part of the purpose of the public service is to be the platform we can all stand on, because if the public service isn’t that platform, who is? A company’s always going to have a profit motivation. Research is always going to have a truth motivation, which is good, but not an implementation motivation. The nonprofit sector does really great work, but is driven by all kinds of different purposes.

The public sector is one of those things – it’s a foundation that everyone should be able to rely upon. And yet, when you take away the public service’s purpose of serving the public good, then we’ve got a real problem in how to react to these national challenges.

Liz: Pia wrote a response paper to the 2020 General Election and Referendums Inquiry, entitled Truth, Authenticity and Trust for Election Integrity in New Zealand, Aotearoa. We were curious about what Pia meant by truth, authenticity and trust in this context, so we asked her to tell us more about the Inquiry and her response to it.

Pia: After every general election, the New Zealand government does an inquiry into that election. They look at how they could have improved it. They look at particular topics. And after every election, they also say, “What are the things that we should consider for the next election?” It’s a really fantastic way of them doing a bit of forward planning. I wanted to contribute into that inquiry some concerns and thoughts I had that were a bit more broad, but certainly had a direct relationship to electoral integrity.

Around truth, it is increasingly hard for everyone to navigate truth, and to have confidence and trust in truth when misinformation is just completely out of hand. When I first saw that Tom Cruise video, that was the first genuine time I felt fear in my career. I could see, immediately, that people were about to be a whole bunch more easily manipulated than ever before, because humans are, predictably, more believing of things that we can see and hear than of things that we read.

And when you have literally anyone being able to be portrayed as saying anything, that creates a significant threat to democracy, to social cohesion, to civil society as a whole. So, I wanted to draw that out. And everyone understands the issue of misinformation, because everyone’s last election was affected by it, and of course COVID vaccination challenges have been affected by it too. So, everyone knows there’s a problem, but I don’t think there was a sense of urgency about just how much bigger the problem’s going to get, because everyone’s just dealing with the problem they see right now. I think it’s going to get a whole lot worse.

So, that was part of looking at truth. Trust was about not just trust for information. It was also about trust in, who can you trust? I believe that a well-operating public sector should be independent of politics enough that the public can trust it. It should be run in an accountable, authentic and independently overseen way. It is and should be accountable to the parliament and to the people, not just to the government of the day. And that set of circumstances, systemically, means that public services, at least in countries like ours, should be a tool.

A tool for societal equity. It should be a platform – a social, economic and structural platform – that people can rely upon, and, in the digital age, it should provide the digital public infrastructure that can actually support a highly confident and engaged community. So, trust is not just about asking for trust. Trust is about establishing trustworthiness. I don’t think a lot of people in the public service realize that people trust the government exactly as much as their worst experience with government.

So, if you’ve had a really crappy service, well, that translates through, eventually, to how much you trust the legitimacy of a policy decision, of a report that comes out, of information about vaccines. So, actually, improving the government’s quality of service, quality of public engagement, quality of accountability oversight, and public transparency is directly related to how effectively government policies are able to be implemented. So, trust was going to the heart of, really, trust in the public sector and what needs to be done to support that.

Then, authenticity really went to the heart of veracity. How can you ensure that information is what it says it is? We can disagree on a story, we can disagree on all kinds of things, but is that video I saw actually authentic? Was it actually produced by that political party? Was it actually provided by the Emergency Department? Was it actually produced truly or not? You can’t control the internet by any stretch of the imagination. You can’t filter it, nor is there any mechanism to just constantly watch for and respond to mistruth – of which there are many valiant efforts, and those need to continue to be supported.

But there are some areas where you don’t need to do it after the fact, you can do it in advance. Any government information that is provided by parties, by departments, by government more broadly, should be authenticatable. If there’s a video from this politician, ostensibly put out by a party, did it really come out from the party or not? That’s a really good test, because then I might be able to still have my disagreement with uncle whatever at the Christmas table, but at least I can say to him, “But you realize that video’s not authentic, right?” That’s a very different conversation to whether we agree on what they said.

So, authenticity really gets to the point of veracity. You need to be able to have veracity of decisions, veracity of information, veracity of data, even veracity of the software supply chain. Because what if someone has just switched out a particular library halfway through your transaction and switched it back again, and you’ve now got a decision that’s based on a bogus formula or algorithm or whatever. You need to have end-to-end supply chain veracity of your software.
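One common building block for that kind of supply-chain veracity is content hashing: publish a hash of each artifact through a trusted channel, then refuse anything that doesn’t match. Real systems layer digital signatures and signed manifests on top of this; the sketch below, with a made-up library, shows only the hash-check step:

```python
import hashlib

def sha256_hex(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

# At publication time, the trusted source records the hash of each artifact.
published_library = b"def rebate(costs): return min(costs * 0.375, 1000.0)"
manifest = {"rebate_rules.py": sha256_hex(published_library)}

# At run time, verify what was actually loaded against the manifest before using it.
def verify(name: str, content: bytes) -> bool:
    return sha256_hex(content) == manifest.get(name)

tampered = b"def rebate(costs): return min(costs * 0.5, 1000.0)"  # swapped-out library
print(verify("rebate_rules.py", published_library))  # True
print(verify("rebate_rules.py", tampered))           # False
```

A library switched out mid-transaction, as in Pia’s example, would fail this check the moment its content no longer matched the published hash.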

So, the paper really tried to hone in on what it would take to make the sector and its operations and its systems and the rest of it trustworthy, including better public services, but it also honed in on the absolute critical need to co-develop and co-design with the public what they need to be able to have confidence and trust online.

Zena: Pia has been involved in a lot of discussions and work around trustworthy public infrastructure. In our discussion with Pia, it’s clear that her response to the Inquiry was designed to start a conversation. We were curious about how the conversation her work has provoked has shaped her views on what ‘trustworthy public infrastructure’ looks like.

Pia: Everyone has a role to play in improving our society. I think there are, fundamentally, three things that we need to do collectively. The first one is to create just that bit of capacity, a bit of time, to think about what good could look like, what our optimistic futures are, and what the pessimistic futures are, so that we can optimize towards one and mitigate the other. If we don’t take a bit of time to dream about better futures, if we keep reacting to whatever the current trends are or the next over-the-horizon trends, then we will just keep stumbling in the darkness towards a cliff.

So, the first thing we need to do is dream about where we need to go, and then put that light on that hill, and walk towards that light, because then across every sector, we can then have convergent effort towards something better that we actually create. And to really badly paraphrase Alan Kay, I think it was, “You can’t predict the future, you can only invent it.” The second thing I think we need to do is to have examples of what good looks like. We can invent it, but then we need to demonstrate it.

So, a lot of government departments and a lot of public servants, and frankly, a lot of researchers as well, because of new public management and because of the businessfication and the comsification of those two sectors, are quite hamstrung, actually, from exploring what is needed, from actually representing and being able to drive public good and better futures. So, there is, first of all, reform that needs to happen. And I’ll talk specifically to the public sector. But I think there needs to be a reform to the sector to actually get back to the purpose of it.

And if the purpose of the public sector is to build a platform – a social, economic, et cetera, infrastructure platform – for society and democracy and the economy to have a modicum of stability in order to thrive, then that’s quite a different imperative than just trying to save some cash, than just a 5% reduction on the budget, et cetera. So, getting back to purpose and ensuring that a reasonable amount of the resource and effort is being always driven towards purpose – not just the latest thought bubble of a minister or the latest emergency or urgency, but actually focused on the mission.

Which means that the public service needs to – again, as part of that reform – build up its existential confidence again, because several public services now have been told for decades that the best public service is no public service. So, they have shifted their operating model to basically outsourcing everything. So, unless we build up that existential confidence in the public sector, it’ll continue to abdicate its responsibility to support and drive and be a steward for the longevity of public good. So, there’s reform that needs to happen in the sector.

The third thing that needs to happen is to just shift the paradigm to one of adaptive participatory governance. To be adaptive you need to be monitoring in real time and be able to respond to change. You need to be presuming change from the start. In a digital world you need to be presuming machines as users, not just people as users, so that you are naturally building out systems and processes and regulations that assume machine interactions and can cope with and respond to them.

You need to close the gap between policy and implementation so that you have policy agility, not just service agility. So, that’s adaptive. Then, on the other side, participatory. We need to shift the dynamic where citizens’ participation in democracy is not presumed to stop at the ballot box. It needs to be seen as something where actually having citizens participate in policy and services, design, development, implementation, oversight, et cetera, is a good thing.

Now, of course, that creates a small threat to ministers who would prefer to keep citizens as a communications exercise, and that’s why comsification is one of the most dangerous things that has happened. If normal public servants can engage with the public directly in their work, then we can get true evidence-based, true experimentation-based, true participatory outcomes. But while ever it’s a, “Oh no, our job is just to do what the minister says, and we’ve got to tick a box of consultation” – which necessarily becomes a, “How do we convince you of the thing the minister wants to do anyway,” which is dangerous and problematic – then nothing’s going to change.

So, participatory and adaptive are probably the two key things that would drive a whole bunch of change that would make any of this possible. The final quick thing I’ll say is, gov as a platform – shifting to an idea of government as a platform upon which people can build. It doesn’t take away its job to deliver the service, to deliver the policy, but it does mean that it sees itself as a node in a network and not just a bottleneck to constrain.

And that means not just designing government according to the constitution and the values of the general public, but taking into very special account the long history and lessons that we can take and learn from, and collaborate on, frankly, with indigenous cultures. Those knowledge systems are profoundly different. Many of them – at least the ones I’ve had the privilege to engage with in Australia and New Zealand, more so than other places – as knowledge systems, teach you how to think in the past and the future simultaneously with the present, whereas Western knowledge systems are only good for the next three months, or the next five years if you’re really lucky. And even then it’s very focused on the now.

So, learning to adopt and adapt different knowledge systems and applying that to the concept of gov as a platform, I think can create the profound change that makes all of this possible.

Zena: And finally, we asked our guest what socially responsible algorithms mean to them.

Pia: It’s such an interesting question. First of all, it’s not about my values. It can’t be, because if everyone just does what’s socially responsible for them, they are ignoring the fact that we all have a different value system, we all have different perspectives. So, partly, this depends on who you are and what you do. In the case of the public sector, I want to share a quick story. When we were doing 50 Year Futures a few years back here in Aotearoa, in New Zealand, I invited a whole bunch of people to come and give their view of 50 Years Good.

And I remember inviting a particular gentleman who’s been in public service for, I don’t know, forever, for 50 years or something. It’s not that long, and he will kill me when he hears this. Anyway. So, I won’t name him. But I asked him to share about his view, and he’s, “Oh, well, I’ll just come in and talk about this framework.” I said, “No, no, no. No, no. What’s your view?” “Oh, well, I’ll share about the frame …” We went back and forth and back and forth. And finally he blurted out something really profound. He said, “Pia, I don’t think as a public servant that it’s my job to define what Future Good looks like. I think that Future Good is where people can thrive in however they define thriving to be.”

I said, “That’s perfect. Say that.” He said, “Oh.”

A socially responsible algorithm for me means a couple of key things, and it’s all about the context of public service for my work. If you are using an algorithm, it has to be explainable, and it has to result in an outcome which is auditable and appealable, so that you, as the people who own and run that algorithm, are accountable to the public, the parliament and the government for the impact of that algorithm, unintended or not. I strongly believe that your intent doesn’t matter. If you accidentally punch someone in the face, the fact that you didn’t mean to doesn’t take away the bruise.

You must be accountable, and it’s very hard to be, because a lot of people love to think that “Oh, but I didn’t mean it” is enough of an excuse. We are absolutely accountable for the impact of everything we do, whether we mean it or not. So, socially responsible algorithms are ones that are not just designed to capture that explainability, appealability and auditability, but are algorithms which you are monitoring and assuring do no harm.

Liz: Thank you for joining us today on The Algorithmic Futures Podcast. To learn more about the podcast, the Social Responsibility of Algorithms workshop series and our guests, you can visit our website, algorithmicfutures.org. And if you’ve enjoyed this, please like and share the podcast with others.

Now, to end with a couple of disclaimers.

All information we present here is for your education and enjoyment and should not be taken as advice specific to your situation.

This podcast episode was created in support of the Algorithmic Futures Policy Lab – a collaboration between the Australian National University School of Cybernetics, ANU Fenner School of Environment and Society, ANU Centre for European Studies, CNRS LAMSADE and DIMACS at Rutgers University. The Algorithmic Futures Policy Lab receives the support of the Erasmus+ Programme of the European Union. The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of this podcast episode’s contents, which reflect the views only of the speakers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
