
S01E05: Artificial intelligence policy approaches in Australia and the European Union, with Katherine Daniell and Flynn Shaw

In this episode, Katherine Daniell and Flynn Shaw from the Australian National University (ANU) join us to talk about the ways the Australian and European Union (EU) governments approach artificial intelligence policy. This episode is designed to give attendees of Social Responsibility of Algorithms 2022 a brief overview of the approaches both governments use to shape the present and future of artificial intelligence.

Katherine is a professor in the ANU School of Cybernetics and the ANU Fenner School of Environment and Society, and is the Director of the Algorithmic Futures Policy Lab. Flynn is a researcher in the ANU School of Cybernetics and the ANU Fenner School of Environment and Society.

Listen and subscribe on Apple Podcasts, iHeartRadio, PocketCasts, Spotify and more. Five-star ratings and positive reviews on Apple Podcasts help us get the word out, so if you enjoy this episode, please share it with others and consider leaving us a rating!

With the support of the Erasmus+ Programme of the European Union

This episode was developed in support of the Algorithmic Futures Policy Lab, a collaboration between the Australian National University (ANU) Centre for European Studies, ANU School of Cybernetics, ANU Fenner School of Environment and Society, DIMACS at Rutgers University, and CNRS LAMSADE. The Algorithmic Futures Policy Lab is supported by an Erasmus+ Jean Monnet grant from the European Commission.

Disclaimers

The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of the contents of the podcast or this webpage, which reflect the views only of the speakers or writers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

All information we present here is purely for your education and enjoyment and should not be taken as advice specific to your situation.

Episode Credits

Hosts: Liz Williams and Zena Assaad

Guests: Katherine Daniell and Flynn Shaw

Producers / Writers: Katherine Daniell, Flynn Shaw, Liz Williams

Art selection: Zena Assaad

Episode Narrative

Liz: Hi everyone, I’m Liz Williams.

Zena: And I’m Zena Assaad.

And this is the Algorithmic Futures podcast.

Liz: Join us, as we talk to technology creators, regulators and dreamers from around the world to learn how complex technologies may shape our environment and societies in the years to come.

Zena: In today’s episode, we will explore how governments influence artificial intelligence-enabled technology through policy. We’ll compare two places – Australia and the European Union – that share some common approaches to technology policy, but also offer some interesting differences in how they think about shaping our technological futures.

So let’s set the scene.

Liz: Imagine you’re a policy maker hoping to help technology developers flourish in your community. You can see that artificial intelligence-enabled technologies – technologies that have some capacity to learn and adapt their responses based on inputs from the environment – are increasingly prevalent. These technologies are pivotal for your region’s economic future.

This can be a good thing: your constituents might get access to better music recommendations, find ways to optimise their health through wellness technologies, or benefit from jobs growth that AI development might bring.

But you know the adaptive aspect of this technology can also lead to problems if it’s not carefully managed.

Zena: As a policy maker, how do you go about enabling the development and use of these technologies safely and appropriately? How do you balance the need to support healthy creative (and economically beneficial) development in this space, while minimising the potential for future harm?

These are questions policymakers are currently facing worldwide.

Liz: Today, we’re joined by Professor Katherine Daniell from the ANU School of Cybernetics and ANU Fenner School of Environment and Society, and Flynn Shaw, a researcher in the ANU School of Cybernetics.

We’ve asked them here to help us understand how policy makers in both Australia and the European Union currently regulate and develop policy for artificial intelligence-enabled technologies.

Zena: So why these two places? Well, both tend to use similar categories of policy instruments to influence technological development, but they have very different social, economic, and environmental factors to consider when deciding how to use these instruments to achieve their goals.

Hi Katherine. Hi Flynn. Thank you both so much for joining us today.

Katherine: Hi Liz and Zena, great to be here and thanks for the invitation.

Flynn: Hey guys, thanks for having me – looking forward to it.

Zena: We’ve asked you here because you’ve both been looking into Australian and EU government policies related to artificial intelligence, in preparation for the Algorithmic Futures Policy Lab workshop series we’re kicking off this June. I’m wondering if we can start off with a bit about you. Do you have a personal story you can share that relates to your interest in this topic?

Katherine: Sure, Zena. I’m originally a bridge-builder. I studied civil engineering and arts in Australia, and also for some time in France. And after working in the construction industry for a few years, I realised that I was more interested in how to build bridges between people rather than physical ones. The first relationships and policy dialogues I helped to build were between community members and policy makers on issues of sustainability policy like housing developments and river basin management. I’ve since worked for many years on research and international science diplomacy spanning European, Australian and Pacific comparative governance and policy, particularly around science, technology and innovation as well as issues of climate and water management. In the School of Cybernetics this work has extended into looking at AI-enabled technologies, and I’ve been really excited to see how the underlying comparative cultural and political systems, and the dialogues between different people and their worldviews, are shaping how these technologies are adopted and/or resisted in different countries and regions around the world.

Flynn: So on my end, my academic background is in international relations and political philosophy, and professionally I’ve worked in economic policy. I was originally put onto the idea of working in AI governance by a former tutor of mine, Aaron Tang. The concept was really exciting to me; there was an opportunity here to inform and potentially shape brand-new public policy, and the challenges presented by AI came with their own raft of philosophical debates which I was eager to get into. So when I was offered the chance to work on a comparative study of EU and Australian AI policy at the School of Cybernetics, I jumped at it. It seemed like the perfect marriage of my studies, my professional experience, and my desire to work in AI governance.

Liz: I know this is a broad and rapidly evolving space, so perhaps we could start with a big-picture overview of the policy environment in the EU and Australia. Is there anything we should keep in mind as we explore some of the policy approaches governments in these areas are using?

Katherine: Yes, I think there are a couple of things that are useful for framing. Firstly, both the EU and Australia are interesting in terms of their multi-level governance structures. Both operate as federations of a kind, relying on negotiations between their internal states and territories to set overall directions, and both need to bridge a range of cultures and interests to find policy solutions that can be broadly applied across diverse communities, industries and environments. To do that, values-based approaches drawing on existing negotiated regulations and guidelines are often an easy starting point for integrating new needs like those presented by AI. The other approach we see is policy experimentation in individual states, rather than starting with federal Australian legislation or an EU directive, and we can talk about some of these later.

The second key influence is the make-up of the economies and market structures, which shapes both policy challenges and solutions. Australia’s economy is made up of a lot of primary and tertiary industry – mining and agriculture, plus a lot of service industries including education and finance – whereas in Europe there’s a lot more secondary and tertiary industry – much more manufacturing and industrial R&D. So there are some differences there. But the bigger difference is the market structures.

Zena: Ah yes, the EU has something called the European Single Market doesn’t it?

Katherine: Yes, exactly. This EU market structure basically aims to strengthen economic connection and cooperation between the EU states, and it supports free movement of goods, services and people – so think employers and employees within the region. This was a key pillar of European integration, pushed by Jean Monnet, the famous French internationalist after whom the grant supporting the Algorithmic Futures Policy Lab is named. Monnet’s view was that for the European project to succeed, integration needed to be more than just political, so economic harmonisation and the creation of a common system have been a key part of European development. In recent years this has extended into a new program, the Digital Single Market, which is also relevant to Europe’s development of AI-powered technologies and services.

Liz: Oh, that’s interesting. So what has been the effect of this single market, and is there any difference with Australia as a single country with its own market?

Katherine: Well, the EU market really has the effect of creating a rather large market with relatively less external international competition than producers in Australia would see, for example. Australian producers have to compete on a global stage much earlier to be successful. I think we see this play out in the legislation: in Australia, we hypothesise that there’s much less regulation of artificial intelligence because regulation is seen to hinder innovation and competitiveness in that global market.

Liz: Great. So I’m guessing we’ll find there are some common approaches, but also some differences linked to market forces – have I got that right?

Katherine: Yes, that’s about right.

Liz: Excellent. So Flynn, can you tell me what patterns you’re seeing in the AI policy space in both Australia and the EU?

Flynn: Sure. We had some help here – the EU’s Joint Research Centre and the OECD have done some work on categorising policy instruments, which gave me a place to start. The categories they’ve named are human capital, product development and regulation – and we’ll get into what each of these categories actually looks like in a bit – but the useful thing is that the AI-related policy we’ve uncovered pretty much aligns with these three categories.

Zena: Ok. Maybe we can start with the first category – human capital. What fits in this policy category and why is it interesting for AI?

Flynn: Human capital is focused on all the ways governments strengthen human capacity in a certain area. So support and legislation relating to AI education at all levels, free online courses, and on-the-job training all fit in this category.

We see evidence of this in both Australia and the EU, which makes sense, really – governments are basically trying to set their constituencies up for success in the future economy, and AI looks like it’s going to play an important role in that future.

Liz: So what are some of the examples you see that governments are rolling out in this space?

Flynn: Denmark is an interesting example, actually. They’ve put together a four-year test programme meant to help strengthen technological understanding in primary and lower secondary education. They’ve also set out to increase the number of people in STEM (Science Technology Engineering and Mathematics) disciplines by 10,000 over 10 years – something they call the Technology Pact.

Zena: So – let me see if I understood this correctly. The Technology Pact is about increasing jobs in STEM in general, so it doesn’t directly speak to increasing capacity in AI. But if you’re increasing awareness of and engagement with AI in schools whilst simultaneously increasing the number of jobs in STEM, I’m guessing there’ll be flow-on effects. Is that the idea?

Flynn: Yeah, I think so. Finland is another place doing interesting work in this space. They’ve focused on the existing labour force and are supporting the creation of MOOCs, or massive open online courses – and these are free – with the aim of helping improve human capacity in AI.

Katherine: Yes! Their ‘Elements of AI’ series – MOOCs produced by Reaktor and the University of Helsinki – attracted 100,000 participants. That’s 2% of the Finnish population!

And I’m kind of excited that they’re now translating the courses into all the official languages of the EU, so Finland is really just the start of this initiative.

Zena: How are governments deciding what goes into these educational initiatives?

Katherine: Well, maybe Sweden gives us a good example of this. They’ve started a pilot project designed to inventory the skills their citizens will need to develop in order to make use of new technologies, including AI.

Zena: Are you seeing similar initiatives happening here in Australia?

Flynn: Yes. Interestingly, the Technology Pact that Denmark has rolled out is very similar to the Australian Government’s Job-Ready Graduates program, which is designed to reduce fees for STEM students in higher education. And the Australian national curriculum has incorporated ideas about AI and algorithmic futures into the Technologies curriculum, which is primarily aimed at students in years 5-10.

Katherine: Yeah, and we also see other initiatives in the higher education space. One that’s specifically AI-focused is the Next Generation AI Graduates Program, to which the Australian Government has committed $24.7 million over the next six years. This is basically a national scholarship program designed to support students interested in studying AI, and it focuses on Honours years and beyond.

Liz: So it seems like these are all focused on education sectors. Do you see anything focused on the existing workforce here in Australia?

Katherine: Yes, though mainly in the form of funding and some commitments to establish National AI and Digital Capability Centres at this stage.

Flynn: The National AI and Digital Capability Centres are tasked with rolling out something like Sweden’s pilot project inventorying skills development needs, but notably, they’re focused on the small-to-medium enterprise, or SME, space. They really want to help the SME sector become a strong base for AI development in Australia.

Zena: Why the focus on small-to-medium enterprises?

Katherine: Yeah, that’s a good question, Zena. I think the idea is that SMEs are seen as the biggest Australian growth area for adopting AI, so Australia will need to grow this sector to stay competitive and to create new AI-related jobs. It’s also an area that, compared to the rest of the OECD, has been significantly underfunded and has struggled to connect to research and development, so the Government is likely trying to fill this gap.

Liz: Maybe now we can shift focus to the second category of policy instruments – product development. Flynn, can you tell us what kinds of policies fit into this category?

Flynn: Sure. Product development includes any policy measures that are meant to foster innovation and help AI developments get to market.

Zena: And what kinds of policy instruments do governments typically use to achieve this?

Flynn: Funding’s a big one. If you’re funding research and innovation in a particular area, you are of course likely to see people doing some work in that area. Creating centres is another means of directing funds, though funding is generally tied to more specific goals. And of course, there’s funding and support for moving ideas “from lab to market”, as they say.

Liz: Ok, so existing research funding mechanisms fall into this bucket, right?

Katherine: Yes, absolutely. Things like Horizon Europe funding are good examples of this in the EU. There’s a very large section of that program that’s meant to support risky or breakthrough technologies, as well as those at early stages of technological readiness, that might have a hard time getting to market otherwise.

Zena: And I know here in Australia, the usual channels for research funding – so Australian Research Council and the National Health and Medical Research Council grant schemes, or Cooperative Research Centre schemes – would provide some funding in this space. But is there anything more specific to AI?

Flynn: Kind of. The funding instruments named in Australia’s AI Action Plan, which the Australian Government released last year, are broader in focus and have generally been around for a while – we just mentioned a few things funded under these before. The Government’s University Research Commercialisation scheme is maybe an interesting one to note, in that it has no direct parallel in the EU. It’s focused on helping translate university outputs into commercial goods. This can help get AI products to market, but it’s really about trying to diversify Australian exports across the board.

Liz: Interesting. So we’ve talked about how governments help develop human capacities in AI, and we’ve explored how they help support product development.  What about the third policy category? That was regulation, right?

Flynn: Yeah.

Liz: So what does it include?

Katherine: Regulation includes a range of measures that really shape the AI that is developed and released into the world. Think legislation that determines whether, when and how AI-enabled technologies (or components of them) can be used, and then other instruments like AI ethics guidelines, or standards that establish norms for AI development, whether or not they are followed or can be enforced.

Zena: So what are some of the key examples of this in the EU?

Katherine: Sure, I think the biggest and most prominent one is the GDPR, or the General Data Protection Regulation. This is the legislation behind all those cookie consent pop-ups you see on the internet, but it’s also currently framing how AI legislation is rolled out in the EU. And I think there are some really interesting clauses in there. Specifically, it imposes requirements on profiling and automated decision-making. For example, if you use an AI system for profiling or automated decision-making – say, in hiring processes – you are legally responsible for ensuring three things: first, that the system is fair, which includes preventing individuals from being discriminated against; second, that the system is transparent, which includes providing meaningful information about the logic involved in the AI system; and third, the right to human intervention, which is about enabling individuals to challenge an automated decision.

Liz: GDPR is about data, which I realise is inherent to AI. All AI systems collect and make use of data in some way. I’m curious – can you tell me some of the ways the GDPR influences AI development?

Katherine: Well, the GDPR requires a ‘privacy by design’ approach – basically, whatever you create, if it collects data, you can’t collect and store that data without explicit permission from the person providing it to you. You also have to have transparent data handling practices in place – so you need to say how you’re storing and handling any data you collect.

Flynn: Interestingly, this sounds a lot like some of the requirements put in place by the Australian Privacy Act 1988.

Katherine: That’s right, Flynn; there are some similarities there, but there are some differences too – the GDPR gives individuals a ‘right to be forgotten’, which isn’t in the Australian Privacy Act.

Zena: Can you tell me a bit more about this?

Katherine: Yeah, sure, Zena. The ‘right to be forgotten’ really aims to give individuals some control over their personal data, so in certain circumstances it requires organisations to have a way for individuals to request that their data be deleted within a reasonable timeframe. This doesn’t apply in all circumstances – for example, if the organisation collecting or processing the data still needs it for the purpose they collected it for, or if there’s some legal or public interest task that requires that data, then they’ll be able to keep it. And the right to be forgotten also doesn’t relate to future data.

Liz: It sounds like it can get pretty complicated.  

Katherine: Sometimes yes. We could probably do a whole other episode on the nuances and get some legal experts involved!

Liz: Sounds good. I’m wondering if we can switch focus a bit here and explore some of the AI ethics principles or toolkits that fall into this category. I feel like I see a new one put out almost every week, and I know there’s a huge debate about how to create “ethical AI”. It seems to be a significant challenge.

Flynn: There are so many approaches to this. In the EU, they’ve recently released a set of guidelines for trustworthy AI. This is a non-binding document, so there aren’t any compliance requirements that come with it. But it’s meant to help with the development of so-called ‘ethical AI’ in Europe.

Katherine: And I might just add, it’s perhaps important to note that the EU describes the core principle steering these guidelines as “human-centric” – they want to foster an environment that supports AI that is, as they say in their context and implementation paper, “respectful of European values and principles.” Within this, the concept of dignity is central. But they also specifically mention consideration for the environment, for other living things, and for future generations.

Zena: That’s really interesting. Do you have a sense of the kind of impact these principles are having?

Katherine: Not yet. They’re really pretty new, and they are also non-binding – it’s kind of similar to what’s going on here in Australia, where we have a set of AI Ethics Principles, but we don’t really have a lot of plans yet for enforcing these.

In a lot of places within the EU they have created AI ethics committees to help develop guidelines for AI developers, and there are a couple of examples of AI certification or quality seal systems in Malta and Germany.

Liz: So this is something like a certification scheme, where an organisation is charged with reviewing AI-enabled products and deciding whether they’re ethical or not? Is that about right?

Katherine: Yes, that’s the idea. They’re basically a way to encourage voluntary compliance.

Liz: Ok. So it sounds like it’s still fairly early days for all this – all of these ethics guidelines were released in the last few years. And I know through some of the work we’ve been doing in the School of Cybernetics that they’re not always straightforward to implement. Are there any interesting ways governments are helping AI developers figure these things out?

Flynn: Yeah, one really interesting idea is the “regulatory sandbox”, which is not widespread yet – I think Italy has created one, but this isn’t just for AI.

Zena: Flynn, can you explain how a regulatory sandbox works?

Flynn: Sure. The basic idea is that you create space for AI development by temporarily reducing regulatory burdens. It’s a way of fostering experimentation that would be difficult to carry out otherwise.

We actually see a lot of this at the state government level in Australia, and it’s usually tied to specific technologies. There were, for example, autonomous vehicle trials with Navya in Perth, and also facial recognition software trials in Queensland, New South Wales and Victoria. These were super interesting because they used stadiums as the sandbox setting. The one exception to this is the Enhanced Regulatory Sandbox, which is focused on FinTech, or the financial technology space. None of these are AI-specific, which is why the Australian Human Rights Commission Report on Human Rights and Technology advocated for AI-focused regulatory sandboxes in Australia.

Zena: Can you tell me more about the Australian Human Rights Commission Report?

Flynn: Yeah. They released this report last year, so the recommendations in it are pretty new, and are meant to give Australia guidance on how to maximise the potential opportunities of emergent technologies whilst also minimising potential harms. It’s pretty extensive – there’s a list of 38 recommendations in that report covering everything from national strategy to accessible technology creation. One of the major recommendations coming out of that report was that Australia should appoint an AI Safety Commissioner to help both government and private sector agencies use AI ethically and safely. It’s pretty clear from the report that the emerging tech landscape is complex – especially in relation to AI.

Zena: Are there any signs that the Australian government will appoint an AI safety commissioner, in line with this report?

Flynn: Not so far.

Liz: Interesting. It’s clearly a pretty complex and rapidly emerging space, and I feel like we’ve already covered a lot of ground in this podcast episode. Is there anything else we haven’t touched on yet that we should have a look out for?

Katherine: Yes – one important thing is that the EU put out a proposal for a new regulatory framework for AI last year, which their briefing says is – quote – “the first ever attempt to enact a horizontal regulation of AI.” They’re working on the draft framework now, but one of the interesting things about it is that it classifies and regulates AI technologies based on risk. The risk categories range from unacceptable technologies – so technologies that, for example, employ subliminal techniques or exploit vulnerable groups – to low or minimal risk technologies. The regulatory burden is tied to the risk category a piece of technology falls into.

Flynn: So what might be considered “unacceptable” in this framework?

Katherine: That’s a good question. One example is real-time biometric identification systems – it looks like they would outlaw those in publicly accessible places for law enforcement purposes.

Flynn: We should note that they do allow biometric identification technology for other uses – but they consider it high risk. So it’s allowed in this framework, but meant to be highly controlled. And biometric categorisation – so without the identification component – is apparently limited risk. People deploying those systems just need to provide some transparency measures.

Liz: It’ll be quite interesting to see what happens with that – and what knock-on effects that might have beyond the EU.

Zena: Yes definitely. Any final thoughts you’d like to share?

Flynn: So, something I want to highlight before we finish up is the need to focus on the ‘grey’ in AI policy. There is a tendency in a lot of the discourse surrounding AI to focus on the utopic or the dystopic – you know the idea: AI will either destroy us or create a new era of enlightened human existence. And it’s compelling – it makes a great story – but it isn’t useful if we’re trying to tackle the challenges raised by AI.

Liz: So what you’re saying here is that we need to create policy that actually conforms to the reality we’re living in, and to how AI sits within that. Is that right?

Flynn: Exactly. Over-regulating out of fear of an I, Robot situation, or under-regulating so we can maximise our market gain – both carry significant issues. What’s actually really refreshing to see is that in both the EU and Australian cases, that discourse hasn’t really distorted the regulatory environments both jurisdictions are seeking to create. Equal regard is given, as it really ought to be, to development, responsibility, economic opportunity and the human experience.

Zena: Thanks, Flynn – that’s great advice. And you, Katherine?

Katherine: Thanks Zena. I think something worth thinking about is the cultural values that we are looking to embed in our AI policies. The EU has a really strong value set developed in response to the Second World War, so the precautionary principle and respect for human dignity have really been paramount – trying to avoid those atrocities happening again. In Australia, with our very different history and very different kinds of populations, environments and challenges, we’ve got an opportunity right now, at the beginning of our AI development and deployment, to decide what voices and values are going to be included in our systems. For me, this includes those of Indigenous Australians – Indigenous and First Nations peoples all over the world are already leading conversations globally on how culturally appropriate AI-enabled technologies can be developed. I think we also need to ensure that those who are currently marginalised from such conversations in our communities can be involved, to prevent further marginalisation. The recent initiative by the French association Entourage, who created Will, the world’s first virtual person experiencing homelessness in the metaverse, was a poignant reminder and call to action about what socially responsible development of technology can be. For me it reinforced why we need to create technology, policy and regulation that really enables safer, more responsible and sustainable futures – and why it’s such a great opportunity to be working with the School of Cybernetics and others around Australia, Europe and the world to do just that.

Liz: Thank you both for joining us today! It’s been really lovely to have you on our show.

Katherine: Thank you Liz and Zena.

Flynn: Thank you.

Liz: Thank you for joining us today on the Algorithmic Futures podcast. To learn more about the podcast, the Social Responsibility of Algorithms workshop series and our guests you can visit our website algorithmicfutures.org. And if you’ve enjoyed this episode, please like and share with others.

Now, to end with a couple of disclaimers.

All information we present here is for your education and enjoyment and should not be taken as advice specific to your situation.

This podcast episode was created in support of the Algorithmic Futures Policy Lab – a collaboration between the Australian National University School of Cybernetics, ANU Fenner School of Environment and Society, ANU Centre for European Studies, CNRS Lamsade and DIMACS at Rutgers University. The Algorithmic Futures Policy Lab receives the support of the Erasmus+ Programme of the European Union. The European Commission support for the Algorithmic Futures Policy Lab does not constitute an endorsement of this podcast episode’s contents, which reflect the views only of the speakers, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

Episode References

Please see the literature review we used as a basis for this episode for all the relevant references.
