Podcasts Season 2

S02E05. Safeguarding the future: Responsible AI use in education and beyond, with Simon Chesterman

You have probably heard of ChatGPT – the generative AI language model that is already transforming work and education as we know it. In this episode, we explore the many potential benefits and challenges ChatGPT and models like it pose for education and law with the help of Simon Chesterman, author of We, the Robots? Regulating Artificial Intelligence and the Limits of the Law, David Marshall Professor and Vice Provost of Educational Innovation at the National University of Singapore, Senior Director of AI Governance at AI Singapore, and Editor of the Asian Journal of International Law. This episode has something for everyone who is interested in understanding how we can sensibly make the best use of generative AI models like ChatGPT while mitigating their potential for harm.

Listen and subscribe on Apple Podcasts, iHeartRadio, PocketCasts, Spotify and more. Five-star ratings and positive reviews on Apple Podcasts help us get the word out, so if you enjoy this episode, please share it with others and consider leaving us a rating!

Credits:

Guest: Simon Chesterman

Hosts: Zena Assaad and Liz Williams

Guest co-hosts: Tom Chan and Matthew Phillipps

Producers: Tom Chan, Matthew Phillipps, Robbie Slape, Zena Assaad, Liz Williams, Martin Franklin (East Coast Studios)

Theme music: Coma-Media

Episode transcript:

Liz: Hi everyone, I’m Liz Williams. 

Zena:

And I’m Zena Assaad. 

Welcome to season two of the Algorithmic Futures Podcast. 

Liz:

Join us as we talk to technology creators, regulators and dreamers from around the world to learn how complex technologies may shape our environment and societies in the years to come.

Zena:

In today’s episode, we’re joined by Simon Chesterman. Simon is a guest with many titles: David Marshall Professor and Vice Provost of Educational Innovation at the National University of Singapore, Senior Director of AI Governance at AI Singapore, and Editor of the Asian Journal of International Law. He is also a prolific writer who has penned a number of fiction and non-fiction volumes, including We, the Robots? Regulating Artificial Intelligence and the Limits of the Law, and his latest novel, Artifice.

Liz:

We invited Simon to join us at the request of one of the two new voices you’ll hear in this episode: Tom Chan. Tom came across Simon’s work as a law student at the University of Melbourne, where Simon – a fellow alum – had produced crib notes that were famous for helping students (Tom included) get through exams. In light of the current discussion about how ChatGPT might change education, Tom wondered if there were analogies we could draw between educational supports like Simon’s notes and the role ChatGPT is increasingly playing in classroom settings. He and Matthew Phillipps helped produce this episode, so you’ll hear their voices in the interview today.

Zena:

As is not surprising for a guest with Simon’s wide-ranging expertise, we covered far more than ChatGPT’s impact on education in this episode. There’s something in this for everyone who is curious about how AI is shaping our future – and how we can work to access the best AI has to offer while controlling its very real potential for harm.

Liz:

Simon, thank you so much for joining us today. We’re really looking forward to having you on the Algorithmic Futures Podcast.

Simon:

Thanks so much. Pleasure to be here.

Liz:

Today we’re going to get started with some questions from Tom because he’s actually the inspiration for this episode. He is the one who said we should invite you. So, Tom, would you like to go ahead and get started?

Tom:

Thanks, Liz. Tom here. I’m feeling a bit like a fanboy at the moment, because when I was studying law at Melbourne University, Simon Chesterman was this legendary person that I’d never met. Now, for our listeners who may not know, law degrees are often assessed using exams. They tend to be two- or three-hour-long, really grueling things. The distinguishing thing about them was that they were fully open book. You could wheel in a trolley full of law journals or anything else – textbooks, case books. But the thing that you really wanted was a set of law notes: not just the really beautifully structured, almost choose-your-own-adventure notes that you wrote yourself, but a set of Simon Chesterman’s law notes. So basically, in hindsight, a bit of a performance-enhancing drug or tool.

Now, with that, I’m trying to bring you back to those days in the 1990s when you were a legendary student at Melbourne Uni with legendary notes that everyone wanted. Was it borderline cheating that I used your notes? What are the students doing now, and what kind of super tools are your students using to, well, enhance their performance? This might lead us to: what is the role of ChatGPT and AI?

Simon:

Well, Tom, thank you very much for that overly kind introduction. I hope you did read the small print on the notes, which is that I take responsibility neither for your success nor your failures. There are documented cases of people who had the notes and did badly. I’m glad you did well.

Really, one of the things that I discovered as a student is that a lot of law is quite predictable. Indeed, that’s the purpose of law in many ways: predictability, stability. So I suppose, in a very early way, one of the things I was trying to do, in particular in these doctrinal subjects, was exactly that: what you described as a choose-your-own-adventure could also be described as a kind of rules-based approach to solving problems. To me, one of the fascinating things in law in the context of automation is that there are some aspects of law that could be automated, systematized. Indeed, in the literature on artificial intelligence, there’s long been this sort of hope that a lot of legal practice could be outsourced to machines.

The problem is that, with the irrational exuberance of some of the people working in this space — and I include people like Daniel Susskind, who’s been writing about this for almost 40 years — the underlying assumption has been that law is basically the application of clear rules to agreed facts. In some law exams, that might be the case. But in actual legal practice, the rules are never that clear and the facts are usually not agreed, which is why I think there are real limitations to how far you can go with this idea of systematizing the law. But certainly as a matter of studying law, yeah, that was helpful. There is actually a book, a pretentiously titled book, which I did, called Studying Law at University: Everything You Need to Know …

Tom:

Wow.

Simon:

… much, much shorter than it would be if it was really everything you need to know. That actually came out of the work that I did with the Aboriginal Tutorial Assistance Scheme, which was a program to, in particular, do a little bit more than just hand out notes – actually to train up Indigenous students, back in the ’90s and subsequently. Anyway, I’m glad it worked for you.

It’s a nice precursor to what’s happening today, where we do have this dilemma at universities about what to do with ChatGPT and the like, both in terms of how we ensure the integrity of our systems and avoid cheating and so on – which is what many academics, I think, have unnecessarily focused on. The much more interesting question is: what does it mean for a university education? What does it mean to be a thinking individual? What does it mean for people who work, like most of us, in the knowledge economy?

Zena:

I recall, at the time – I think I was still in primary school, I was still quite young – when autocorrect first started to come about. When we were writing things, there was this real aversion to using autocorrect. Everyone was like, “Don’t use it. You’re not going to learn how to spell if you use it.” Over time, we’ve realized that that’s not in fact the case.

I think with ChatGPT – we were speaking about this earlier, before you came on the call – I had asked it to write an essay for me. I gave it a random topic because I was really curious what would come of it. The essay itself was fine, but it definitely did not have any depth, and I didn’t find that it had a lot of structural integrity to it. But Matthew made the comment that it’s actually about how you use it. So if you’re using ChatGPT in the correct way, you get different results from it. So I’m curious what your perspective is on how it will age with time and how our use of it might change our perspective of it over time?

Simon:

Artificial intelligence has always had this problem of managing expectations. You have the idea of the AI winter, going back to the start of AI, the Dartmouth College retreat where these guys got funding in 1956 from the Rockefeller Foundation for two months of research saying, “In two months over the summer, we think we can basically solve artificial intelligence.” So that was the moment when AI as a term was coined. It was also the start of the challenge of managing expectations. I think there is a lot of hype around ChatGPT, but I personally do think this is really a transformative moment on the scale of the emergence of social media, the widespread dissemination of the internet as a tool.

Going back to your question, how do we think about these tools? Well, autocorrect is a good example of a kind of nudge, a bit like mapping software. It helps us do things where there is an objectively correct answer, either how you spell a term, the grammatical construction, how to get from point A to point B. Does it undermine our capacity to do these things independently? Not initially, but I think there are many people now who don’t understand how to read a map. There are many people who rely overly on correction software, much as I assume most of us don’t memorize as many telephone numbers as we used to because we don’t need to. Now in some ways, that’s a great boon because you don’t need to waste mental energy on these things. You can focus on what’s often called higher order tasks.

But how do we think of something like ChatGPT? Is ChatGPT more like autocorrect, or is it like a kind of friend who does some of the work for you? I tend to think of it, at least at the moment, a bit more like a calculator. A calculator did transform mathematics in a good way and in a problematic way. The good way is that we can do ever more sophisticated, complicated mathematical sums. Obviously, we’ve got much more sophisticated tools now than just a calculator.

But in terms of education, I think we’re quite careful when we give children calculators because this kind of mental exercise is a useful thing, both in terms of the training it offers your brain generally, but also because eventually technology will break down. You don’t want to be stuck somewhere, unable to remember how to get home, how to telephone the police, for example, just because you’ve committed nothing to memory.

The question is how we think about these tools in general. Yeah, I think ChatGPT really is forcing us to reassess the way we think about our relationship to information. I’m less worried about how we use it as prompts at university or in terms of research than I am if it gets into the hands of children in primary school, secondary school where it’s not going to be a shortcut for them, rather it’d be like that child getting a calculator who never learns their times tables. If you never learn how to write an essay, how to construct an argument, then I think it’s going to be very difficult to function as a meaningful member of society or to have a really fulfilling life in many ways.

Zena:

I think that’s a really salient point, especially because it’s not just about learning to write the essay. It’s about learning to construct an argument. It’s about learning to defend an argument, to provide evidence in support of it. I think those are the kinds of skills that you take beyond just writing an essay.

Simon:

This is why, much as I’m all for the productive use of technology, in many of my classes I’ll ban all devices.

To me, the value of classroom time, and this is something that universities are having to grapple with, the value of classroom time is precisely that interaction, the active engagement rather than just the passive consumption of information or just the regurgitation of information that they’ve looked up online.

Liz:

When you tell students that you are going to ask them to put all of their devices away for your classroom, do you get a sense of discomfort, particularly in the early stages, from your students?

Simon:

Yeah. I’ve learned to telegraph it more effectively. The first time I went into a classroom where everyone had their laptops open and said, “Right, close your laptops,” there was a bit of resistance. It’s partly because for some of them it’s a crutch, and for some of them it’s a shield. It’s a crutch because they can’t google for information and so on, or it’s a shield so that they don’t have to look you in the eye – they can be looking at their laptop. I console them somewhat by saying that I do know a guy from New York University who bans pencils as well. He wants them doing nothing. He doesn’t want them even taking notes. There’s a designated note-taker. I think that’s a little bit too much for me. But there’s initial resistance. Although for the most part, when I get student feedback, even those students who complained at the start – most of them say that they were more engaged, they were more in the moment, they were more present through the class, and even if they were not 100% on board, they see the benefit of it.

Liz:

This is making me wonder if you could share a little bit about what your classroom style looks like. How are you asking your students to engage? What does this style look like? Just so we can set the scene in our minds for what this interaction looks like.

Simon:

The role of universities has of course changed. Hundreds of years ago, students would pay a professor like me because I knew things that they didn’t know. They’d come, pay me, and I would give them that information. Actually thousands of years ago, Aristotle is said to have used the metaphor of education as being the filling of empty vessels. But the idea that I have information that they don’t have and I just give it to them, those days are long past. So the purpose of a university being just to transmit information from one person to many, that’s no longer really relevant.

It’s quite rare for me in a class to talk for more than 10 or 15 minutes because I’m much more interested in engaging with the students, partly because I think that’s better educationally and partly because it makes it much more interesting for me. I tend to get bored with the sound of my own voice. I try to be as interactive as possible.

One of the things that I’m trying to do some work on now, in terms of pedagogy, is thinking through how we use online tools, because, again, the pandemic forced us to adapt very quickly. But in many areas what we were really doing is what’s sometimes called digitization rather than digitalization. Digitization means that you’re basically doing the same thing through an online or some other technological medium. You’re doing the same thing you would otherwise do – you’re teaching a class as if you were in a classroom, you just can’t physically be there – and that’s better than nothing. But I do think the digital tools we have available to us, computer-assisted learning and teaching, do offer much more scope for innovation, and a move away from just asking, “Well, is the main debate whether we’re doing things synchronously or asynchronously?” To me, the really interesting question is: how do we ensure that students are actively learning rather than passively consuming?

Liz:

I’ve got two questions that follow on from that. The first is about your practice. In your role as a professor in higher education, how have your practices with regard to the use of technology in the classroom changed over time? And after that, if we can take a step back, I’m curious whether you have any reflections on how you’ve seen the practices of your colleagues more broadly change over time. Are there advantages in terms of what technology is offering people who are delivering education? Are there disadvantages? We’ve already talked about some of those. I’d love to hear your thoughts on that.

Simon:

I think everyone was dragged into the internet age. People who’d never tried video conferencing in their lives were sort of dragged kicking and screaming into doing it. One of the interesting things was I had some older colleagues who basically didn’t want to give it up. One or two people never wanted to go back into a classroom again, sometimes for health reasons, sometimes for convenience reasons, that they could roll out of bed, teach a class, and then roll back into bed, which wasn’t a particularly good argument in favor of doing it that way.

I think many of us also discovered during the pandemic two consistent bits of feedback from students and faculty. One is that hybrid teaching – when you’re trying to manage in-person students and online students simultaneously – is the worst. But secondly, you can do a lot of stuff online, but it’s not as good as in-person. It’s not as effective. There are various studies showing how much mental energy you expend knowing you’re on camera – it’s more tiring, and your recall is diminished. So I think the ability to use Zoom-style classes is, again, better than nothing but not preferable to in-person classes.

But also crucially, and this is something you miss out on Zoom classes, a lot of learning happens on the margins before and after class when students are either walking in or walking out saying, “What the hell was Chesterman talking about today? That didn’t make any sense.” That exchange with the students that you’re sitting alongside reflects a kind of social aspect of technology.

In terms of my own learning, I’m actually right now experimenting with some colleagues on an online module on the governance of artificial intelligence and the ethical use of artificial intelligence. Here, one very basic thing that I think most of us have discovered is that if you are going to do this asynchronous, learn-anytime approach, it’s very convenient for people. But if you want it to be effective, then it can’t just be a recording of you speaking for an hour. And so again, it goes back to that question of, how do you engage people as active learners? Here there are all sorts of interesting things you can do with online quizzes, interactive opportunities for feedback, peer-to-peer learning, sending people off to look at a different video. Those are some of the things that I’ve tried to internalize, but I’m certainly not an expert in this field.

Tom:

Just on that, you’re doing a course in ethics of AI or ethical governance of AI. How would you govern the use of AI by yourself or your students in that course?

Simon:

Maybe I can start by saying how we’ve approached things like ChatGPT at the university more generally, and then apply that to that specific course. I think there was a lot of academic hand-wringing when ChatGPT was released. Academics – we tend to be a little bit self-regarding. We think we’re very important, and we worry about our students being lazy. Of course, every generation thinks that the younger generation is lazy, doesn’t work as hard.

I think there were real reasons for concern that students would cheat by using ChatGPT to hand in essays. As we were saying earlier, ChatGPT can produce a grammatically correct human-like response to a prompt, but it’s not very good. It’s not very insightful, yet. This is kind of amazing that it’s only six months into widespread deployment. There are actually some stories that did the rounds. A few Australian universities actually were reported as saying, “Well, that’s the end of online exams. We’re going back to paper and pen,” as the only way to ensure that the students are being tested rigorously. So I’m less worried about the students cheating. Because if students want to cheat, they can cheat. If they want to get someone else to write their essay for them, they can already do that. If they want to outsource it to someone doing things very cheaply, they can. I appreciate that ChatGPT lowers the bar for that. But at least at university level, I think most students already have the capacity to formulate arguments.

I’m much more concerned, as we were discussing earlier, about people first developing those skills. So in fact, we’re doing the opposite. We’re encouraging students to experiment with ChatGPT, see its limitations, see its promise, because otherwise I think it would be a bit like a mathematics department saying no one will ever use a calculator or indeed a computer science department saying no one will ever use computers. These are going to be a part of our lives. This kind of technology is going to transform our relationship to information.

Zena: 

In what way?

Simon: 

I think some people listening to this podcast maybe are old enough that they searched for information before Google was a verb. But I think for even the most serious researchers, one of the first things you’ll do looking at a new topic is you’ll google it. You’ll look at it online. I’m old enough that we had an encyclopedia at home that we would browse through, and that’s now kind of quaint. I think they’ve stopped producing hard copy encyclopedias. I really do think that generative AI holds the prospect of transforming our relationship to information in a similar way. And that’s good and bad. It’s good in that it will be ever more creative, human-like, useful, but also potentially limiting.

If you switch to a future in which generative AI, like ChatGPT, is integrated into a search engine, then rather than putting in a query and getting a list of responses, some of which are ads, you just get a one-, two- or three-paragraph response to your query.

That creates two problems. One, you might just think that is the answer and take it as correct. That’s a problem because it might not be. We all know about the hallucinations of GPT and others. Secondly, how’s it going to be paid for? Presumably, if you’re not subscribing, then it’s going to be paid for through advertising, but the advertising’s going to be much more subtle. It’ll be built into a single answer. So for those reasons, I’m a little bit wary about what this is going to mean for our relationship to information.

Then lastly, what does this mean when I’m constructing a new module? Full disclosure: I did ask ChatGPT, “Give me an outline for a module on ethical AI.” It did a half-decent job, but I also looked around. I said, “What’s MIT doing, what’s Google doing, what are these other schools doing?” and then decided, “Look, if I’m going to put my name to this and I’m going to ask some people to cooperate with me, then I want to do this work myself.” But maybe I’m just a glutton for punishment.

Tom:

You just reminded me – I went to a talk by Professor Joshua Gans, the economist. He spoke at the Productivity Commission earlier in the year on the question, “Is ChatGPT coming for our jobs?” He experimented on himself: can it do a professor’s job? It sounds like he did a very similar thing. He was teaching a course on AI policy and regulation and wanted a 12-week syllabus, and similarly he said, “Oh, it did a pretty good job.” The problem came when he asked it to produce readings – good readings that were real, that weren’t fake. It didn’t include any Joshua Gans readings in the list it produced, but it did produce assessment tasks and also rubrics for tutors to mark them. Something that would’ve taken him maybe weeks, it gave him as a first draft within a couple of hours. Maybe that’s something that we can also explore: how does this change the work of professors, and not just how students use the technology as a super tool?

Simon:

But this is a much larger opportunity and problem. Because I think in past technological revolutions, the concern is that it’s the low-end wage earners whose jobs get taken. Usually that’s been made up for. So with the invention of the loom and industrial processes, a lot of manual labor got automated, but then people could be upskilled and trained to do different things.

What’s happening with the current so-called fourth industrial revolution is that it’s actually hollowing out the middle. That actually manual labor is quite hard to outsource to machines because hand-eye coordination is something that we developed over millions of years of evolution. But those low-end conceptual, low-end cognitive, I mean I’m using this term pejoratively, the repetitive cognitive functions, essentially what you teach as a basic undergraduate degree, that’s the easiest thing to outsource. So things like really basic accounting, basic radiology, these are some of the repetitive skills that you can train a human to do, but you can get a machine to do much faster.

Now what does that mean for a professor or, in my field, law? It means a lot of the easy work can be done by a machine. Now, that’s going to be great for me potentially because I can then focus on the so-called higher end tasks. In a law firm, it might be great because you can just draw on the partners, and a lot of the grunt work that used to be done by junior associates and so on, law clerks can be outsourced to machines. That’s potentially great. But then even if you keep the senior lawyers, how do you train the next generation of lawyers if people haven’t started off like that? So back to your example of research. Yeah, I can take some shortcuts because I know how to develop a course. I’ve taught a bunch of courses in the past. But if this is your first course and you’re not learning those basic skills, then how do you develop those higher-order skills that we think will be useful, even when the machines play an ever more important role?

Zena:

Listening to your answers around this, I’m thinking about the expectations that we have of people, and I’m going to kind of keep this focused on academia for the purpose of context. Burnout is a very common thing that a lot of academics talk about. There are a lot of expectations on us about what we’re supposed to produce, from education to research to service. When I think about generative AI and these kinds of tools that exist, from the outset they seem like they’re assistive. But when I look at it a little bit more holistically, I actually think it just changes the expectation – that I can now do even more, when I was already kind of struggling to keep things above board, if that makes sense. So I think there’s also something to be said here about the expectations that we put on people, and our capacity. Because at the end of the day, we’re not supposed to be working 24 hours a day. It’s actually not humanly possible to produce this amount of work. So these assistive tools, in some respects, are actually not that assistive.

Simon:

I remember there was a seminar that Joseph Weiler, who’s a professor, gave on how to write a doctorate. One of the things that he would say is, “I’m going to tell you how many hours you need to work on your doctorate per week.” Then he would pause for effect, and then he would say, “Six hours a day, five days a week.” Then he would explain, after a bit of laughter, that what he meant was real work, real conceptual work, not answering emails, not photocopying, not wasting time, but real conceptual work. Because I think there is a limit to how much we can really intentionally do that is productive, and yet, there’s a huge amount of stuff that can actually occupy our time.

So I think, Zena, there is a worry that, as has often been the case in the past, the impact of technology will not be the much dreamed of idea that, “Well, now we’ll just free up time for leisure.” It’ll just be changing expectations. Well, now you’ve got this tool. Instead of taking six months to write a paper, you should be able to write a paper a week. I think there’s a concern there. There’s a whole separate conversation we can have about the economics of publishing, which are shifting from paying for the privilege of reading really good work to paying for the privilege of publishing work, which moves the incentive structure away from publishers curating really, really good content to publishers publishing as much as possible. I don’t think this is going to make the academic grind any easier. It will make it more complex, it might make it more interesting, but certainly not easier, I don’t think.

Matthew:

Simon, one thing I’m aware of is the new generation of ChatGPT, which is currently a paid product using GPT-4, I don’t know if you’ve seen this, but Khan Academy is integrating it into their product as a tutor. It doesn’t tell people the answers. It guides them through the problem-solving process within whichever domain of knowledge that Khan Academy provides and across the whole of the curriculum that is on Khan Academy, which is at the moment pretty much most of the US secondary school curriculum, a lot of the advanced placement courses. What do you think the opportunities are for these technologies in assisting the actual learning process of students?

Simon:

I think there’s tremendous scope if used correctly. People have experimented with this in the past. There was a guy at Georgia Tech half a dozen years ago who had a bunch of teaching assistants. They would mainly interact with students via email. He didn’t tell them, but one year he started experimenting with a chatbot. These computer science students at Georgia Tech, which is a good school, couldn’t tell the difference between the humans and the bots. The main thing that he had to do was ensure that the bots didn’t respond instantly and correctly every time, so he had to introduce six-hour delays and spelling mistakes and so on.

But in terms of Khan Academy and my own work here at National University of Singapore, I think there’s enormous potential for AI and these kind of chatbots to be interactive, supportive tools for education to facilitate understanding. The advantages are they are endlessly patient. They’re always on. So I think there’s great scope for the use of AI tools in supporting students in that kind of personalized learning journey.

I don’t think it should be a replacement for in-person classes with other humans, because a big part of university is not just the education, it’s the socialization. But in terms of students being able to come back and ask questions – rather than a professor having to get an email and then wait to respond and then have a follow-up – if the student can have a chat, at least for some questions, much like, increasingly, if you’ve got a problem with your computer or in various other areas of knowledge, if you can engage with a chatbot, you might get a response much faster, personalized to your needs, that will solve your immediate problem or at least some of those problems. So I think there’s great scope for that kind of personalized, patient, never-getting-angry support that’s willing to go over things again and again, for certain areas of knowledge.

Tom:

So far we’ve been talking with an economics hat on – the uplift in efficiency, or the value of having an AI tutor or an assistant. I wonder if you can talk more about the equity or distributional impacts of this kind of technology, for maybe democratizing education in some ways. I’m thinking as someone for whom English is a second language, and I’ve studied law, competing with people who are native speakers, whose parents, and their parents, were lawyers and so forth. I wonder whether an AI tutor, in law or other disciplines, has equity impacts.

Simon:

Yeah, absolutely. Just on English as a second language: that’s a function ChatGPT is actually extremely good at – cleaning up language. It’s a bit like our example of the spellchecker: cleaning up language, removing errors, making it a little bit more sophisticated. I think that is a great equalizer.

Khan Academy was founded on this idea of mass education, and there’s a lot of scope, certainly, for AI to be an equalizer in that field, to be a leveler. So I do think this kind of generative AI does have the prospect for making not just passive information but the kind of skills that you need to function in today’s world more accessible.

Maybe that’s worth just highlighting: the shift in education, which, as I was saying earlier, used to be about the transmission of knowledge. It’s clearly not that anymore. There’s an element of rote learning everywhere, but education has to be more than rote learning. We tend to think now, well, maybe it’s about skills. We’re teaching skills, but this is something that the AI is coming for right now. So the next layer up is values, perspectives, priorities, and that’s much harder to train. But certainly at the level of skills, the widespread dissemination of AI tutors is going to make a reasonably sophisticated education – maybe not a Harvard brand-name education, but a reasonably sophisticated education – available to many, many more people than can get access to it at the moment.

Zena:

I just wanted to add a point here on the point you were making earlier, Simon, about searching for things. We have this abundance of information and it’s kind of given us the illusion that we can find the things that we’re searching for. I know this is probably not the most credible source, but I was watching an interview with Dylan Moran. He was saying that human beings have always been searching, and the problem with the internet is we now think we found it. Again, I don’t think he’s the most credible source, more of a comedian than a philosopher, I think. But I think this idea of having more and being able to find what you’re after in more is a little bit elusive.

Simon:

There’s different types of knowledge. Knowing that versus knowing how. That’s the idea that knowing the capital of Australia is Canberra is very different from knowing how to ride a bike. I might know intellectually how to ride a bike, but actually getting on a bike is very different from the intellectual aspect of it.

But one of the areas where I get particularly concerned, and this maybe goes back to how you teach an ethics course, is this idea – I’m not sure whether I find it more offensive or insulting – of ethics as a service. The idea is that you feed all of human history and human knowledge into a computer system, then ask it ethical questions and it’ll give you the ethical answer. The reason I find it either insulting or offensive is that it assumes that there is an answer, and that the reason we haven’t found that answer is because we’re too dumb, as opposed to these being inherently contested ideas: What is a good life? How do we distribute resources fairly while also paying attention to encouraging innovation? How do we manage the inevitable conflicts that arise?

I think you’re absolutely right, Zena, that there’s a problem associated with the digitization of information, the reduction of things to ones and zeros, if it suggests that answers are always black and white. If the answers to legal questions were black or white – going back to Tom’s example of the law school notes – lawyers would be out of a job because no one would ever go to court. Outside of a few rare examples, the only reason cases usually go to court is because smart people being paid good money think that they’re going to win, and 50% of the time, on average, they lose. That’s not because they were dumb – occasionally it is, but usually it’s not – it’s because these are inherently contested questions. That’s why we continue to need judges and so on.

That’s why I tend to think that, although a lot of legal services will be outsourced to machines, the role of judges is unlikely to be outsourced to machines because the legitimacy of a decision made by a judge isn’t because he or she is the smartest person in the world, but because he or she exists within a politically accountable structure that provides final answers to questions that we know are unanswerable but that have to have answers anyway and that we need those answers to keep moving. We need certainty, so we set up a process not to remove the questions, but to provide answers so that we can all move forward.

Liz:

This is just making me think of the inherent dynamics of all this. Just like you say, it’s not like there’s this one black and white set of rules or ethical guidelines that we can put in place and we’re done with the job. We are always changing, our circumstances are always changing, and what is likely to be okay in one instance may not in the other. The best example I can think of from my recent history for this is I’ve got two little kids and once in a while I’ll pull out a show from my childhood. Usually what happens is I go, “I cannot believe I was allowed to watch that,” because things have changed so much since I was young, and things are not acceptable now that they were then. It’s a small example, but it’s a very clear one in my mind.

Simon:

This is one of the reasons why I think the hype about outsourcing legal services is exaggerated. Here it’s worth pausing just to look into how machine learning actually functions. Machine learning is statistics, it’s very complicated statistics, but it’s basically backward looking, looking for correlations, looking for patterns, and AI is fantastic at this. But if you, in a thought experiment, transferred legal discretion or political discretion to AI systems, then it would kind of freeze time because AI systems are very good at making predictions based on past behavior. But just imagine if you froze legal development, I don’t know, a hundred years ago. Well, so much for women’s equality, so much for racial equality. If you did it 20 years ago, so much for gender identity equality. If you froze it a couple years ago, well, there’d be no debate about a Voice referendum in Australia about the Indigenous population being represented.

So for all these reasons, I think it is important to understand that some areas of life, politics, law, have no answer, no single answer and that we shouldn’t pretend there is an answer. If there was an answer, we wouldn’t have elections, we wouldn’t have judges. But rather these are works in progress, and you hope they’re going in a positive direction. Usually, statistically, they seem to. Most countries go in a more progressive, more egalitarian, more liberal direction. But, yeah, we have backsliding. I think the idea that you could outsource any of this to machines is a pretty horrifying one.

Tom:

I found what you just said then fascinating, Simon. The picture you paint of law and policy sounds very progressive, innovative, reformist, and the problem with AI is that it is backward looking. But then, looking back at studying law, it’s about understanding the precedents of, and the reasons behind, decisions of dead judges who’ve said something at some point in time that might apply to the facts of the case. You were partly marked on how much you can point to precedents and be able to… I mean, you build on that. So at least for that – and it sounds like you wouldn’t want to outsource that kind of thinking to AI – it seems like AI could be a slave to precedent. In some ways, at least in a legal system, that may not be a bad thing, because we’re all kind of a little bit slavish to precedent. I don’t know. I’m not a lawyer, by the way. I didn’t get admitted. I just got a law degree.

Simon:

That’s all right. Some of my best friends are not lawyers. You raised a really interesting point. The reason why we train like that is the same reason, when you start at school, you do a lot of rote learning. Rote learning is a bit like exercising. You’ve got to develop the muscle memory. You’ve got to develop the skills.

It helped that my first year of law school was 1992. I wrote a research paper on Native Title as the High Court was preparing to issue what became the Mabo decision. I had a cutting 4,000-word essay about how the cowardly High Court would stick to precedent, would never overturn 200 years of precedent. This was a blight on Australia’s history and legal development and so on. Then luckily, the Mabo decision came down four days before the paper was due. So I could throw that out and say, “Of course, the High Court could never have stuck with this precedent. Did the right thing. The reason you have highly trained judges in positions of authority is so that they can make these corrections.”

Now, a lot of legal practice is pretty deterministic. If you think about the area of law that affects most people on a daily basis, it’s probably traffic law. You’re driving or you’re walking or you’re taking a bus. We don’t teach that in law school because it’s so simple. Yet actually it’s not that simple, because most laws… In Australia, if I’m correct, in most states there’s a speed limit. If you exceed the speed limit, you are subject to a fine unless you have a reasonable excuse, and the reasonable excuse gives you a lot of scope for argument. In many countries, including Australia, I think, if an ambulance needs to break the speed limit, the ambulance doesn’t have a license – the driver doesn’t have a license – that says you can break the speed limit any time you like or run a red light. The ambulance has to rely on a doctrine of emergency, which means that you would have to have a weighing exercise. So even these very simple, deterministic areas of law, upon examination, are not that simple.

Bringing it back to AI: there are lots of areas of life that can be optimized. If I think about how to get from point A to point B, I just want the fastest, the safest route, and if an AI system can do that for me, great. I’m much happier having telephone numbers programmed into my phone than struggling to remember them. But if I’m thinking about what job I should do, who I should marry, who I should vote for, I don’t want to be nudged or told by an AI system who that should be. Although dating – that is an area on which AI has been encroaching, with some measure of success, it seems, in terms of predicting successful marriages. Although some matchmakers historically would say that that’s what they were doing all along as well: providing and managing expectations. I’ve watched a few episodes of the Indian matchmaker show, which seems to suggest that maybe we don’t know ourselves as well as we think we do.

Zena:

I could do an entire podcast episode about Indian Matchmaking. That show is amazing. My favorite part is when she tells people, “You have to compromise, and if you get somebody who meets 60% of your criteria, that’s it. That’s good enough.”

Simon:

Sorry, this is a small product placement, but there’s a point. Having completed the very serious, earnest, footnote-heavy work, We, the Robots?, I actually wrote a novel that just came out earlier this year called Artifice, where there’s an AI system that is basically explaining dating to a human and says, “Having examined all of human history and all of the data, I’ve understood the secret to happiness, which is: settle for less.” Actually, there is data that backs that up.

Tom:

It’s low expectations.

Simon:

This comes from mismanaged expectations. That’s the very human side. In terms of achieving optimizable goals, AI is extremely useful; but determining what those goals should be – yeah, I don’t think that’s what we should outsource to AI.

Zena:

I think this is a great segue talking about your book. Your book does include topics around regulation of AI. Can you share with us what you’ve found have been some of the challenges that come with attempting to regulate AI in different contexts?

Simon:

I grew up reading Isaac Asimov. I think many people in this space, although it tends to be older people, younger people often haven’t even heard of him, but even those who’ve not read Isaac Asimov might be familiar with the three laws of robotics. This is the idea that Isaac Asimov, who wrote his first short story on this in 1942, imagined the future in which there are robots walking around with human level intelligence. This is actually a real problem in terms of the expectations for artificial intelligence because it planted in many people’s minds what’s now known as the Android Fallacy, that if we ever really have artificial intelligence, it will be humanoid in form and human level in terms of intelligence. There’s no reason for either of those things to be true. When we started developing autonomous vehicles, no one seriously suggested the path to autonomous vehicles was humanoid robots sitting in a car holding the wheel with robot hands and so on. As for intelligence, if we ever get to human level intelligence, it’s unlikely to stop there.

Now, in terms of how to control these machines, Asimov had this idea of the three laws of robotics: that a robot cannot harm a human being, it should follow orders provided that doesn’t involve harming a human being, and it should prevent harm to itself provided that doesn’t contradict the other two laws. So this was fun and interesting, but I think put in people’s minds the idea that to deal with the problem of artificial intelligence what we needed was new laws. So there have been various efforts, literally hundreds and hundreds of efforts, to come up with a new set of laws or principles or guidelines or frameworks to govern AI.

That misunderstands the problem as both too hard and too easy. It misunderstands the problem as too hard because actually we don’t need a whole bunch of new laws. What we really need to do is apply existing laws to AI. It misunderstands the problem as too easy because it seems to assume that, if only we had the right list of rules, that would solve the problem. But actually the real challenge is in implementing them.

So a big part of my work really has been to show that most laws can govern most AI use cases most of the time. To pick autonomous vehicles, for example, people worry, “Oh, there’s an autonomous vehicle. Who’s going to be to blame if there’s a crash if there’s no driver?” Well, we already have that problem. If I injure you because I’m driving badly, driving negligently, you can sue me. If I injure you because my car’s next to you and it blows up, no point suing me, I’m dead. You go after the manufacturer. So basically what we’ll see in autonomous vehicles is a shift from responsibility of drivers to responsibility of manufacturers. There’s a whole interesting thing to do with insurance there, but that’s not particularly complicated.

What becomes much more interesting is in areas where we don’t want to outsource decision-making to machines. Hopefully one of the contributions of the book is to say, look, there are three reasons why we might care about regulating AI. The first or the dominant one is we want it to be safe. We don’t want it to be harmful. Again, autonomous vehicles, we just want them to be safe, so product liability can deal with a lot of that.

But there are some areas where we shouldn’t be outsourcing decisions to machines at all. The example I focus on is lethal autonomous weapons. So in a battlefield, the kill decision shouldn’t be outsourced to a machine. It should be in human hands. Why? Is it because humans are better at making these decisions? Some people have argued that, and I think that’s a pretty terrible argument. Some people will argue, “Look, the battlefield is complex. Who’s a combatant? Who’s a civilian? What is military necessity?” These are complex questions. I’ve done work on international humanitarian law. I’ve worked briefly for the UN. I’ve interned at the War Crimes Tribunal for Rwanda. A lot of war crimes happen because people are dumb, angry, racist, sexist, all things that we can train machines not to be. But I still think those decisions should be kept in the hands of humans, not because the human will make a better decision, but because a human can grapple with it morally, and most importantly, a human can be held accountable for that decision.

Then there’s a third set of decisions where not just any human should be making a decision, but a particular human should be making a decision. That goes back to the example of the judge. That there are some decisions where we want a particular human to make a decision, for example, a judge, because the legitimacy of that decision doesn’t come from him or her being right. It doesn’t come from him or her being a human rather than a machine. Rather it comes from his or her role within a politically accountable structure, so a judge within a hierarchy and so on. Those are some of the things that I argue in the book.

Maybe one of the things to touch on is the dilemma for medium and small-sized jurisdictions, like Australia or Singapore where I’m based, which is not what the rules should be or why you regulate or even how you regulate, but when you regulate. Here there’s an idea that’s called the Collingridge dilemma. It dates back to David Collingridge, who wrote a book, The Social Control of Technology, back in 1980, not really thinking about artificial intelligence at the time.

What he highlighted was that there’s a dilemma for the regulator, because at an early stage of innovation, it’s easy to regulate and the costs are low. The problem is you don’t know what to regulate. You don’t know what the harms are. The longer you wait, the clearer those harms are, the clearer what you should be doing becomes, but the costs go way up. We can use social media maybe as an example of this. If you wanted to regulate Facebook back in 2004, easy-peasy. Facebook is very small. Social media wasn’t really a thing. But we didn’t know what the harms were. We didn’t know about Cambridge Analytica. We didn’t know about teenage anxiety and depression.

Jump forward to today: those harms are a lot clearer, but it’s much harder to regulate because social media has become such an integral part of the ecosystem. It’s a lot harder to roll that back now, which is why some people are sounding the alarm about generative AI today, because they see something similar happening. The problem is you don’t really know what is happening. Even the Pause Letter, which was probably the clearest example of this – I wrote, two years ago, that it was kind of striking that no one had made a serious argument for a real moratorium on AI development. That’s changed. Now we’ve had a serious proposal, and no one took it seriously. Because why pause for six months? What’s this going to do? Who’s going to benefit? What research would go on? It wasn’t really clear what should happen. But I think we will have more and more debates about what regulators should be doing in the generative AI space in particular, both because the potential benefits are real but the potential harms are very significant, and also because I think we are seeing a real acceleration in terms of development.

Maybe the last thing I’ll say, just in general terms on this, is that one of the ideas I kick around in the book is a role for an international agency. This was modeled on the International Atomic Energy Agency, dating back to the very early days of nuclear energy when it was very clear that nuclear energy had real potential for enormous benefit in terms of energy, agriculture, medicine, and obviously harm, like nuclear bombs. But the very people who were developing nuclear bombs were also trying to work out: how do we get the benefit while mitigating or minimizing the risks? So the IAEA played a role in that. So I at least hypothesized, well, could we have a similar agency at the international level? Because trying to rely on the good faith of every country and every company to just cooperate spontaneously is pretty unrealistic, so maybe some sort of international coordination might help.

Liz:

Well, I think that’s an interesting idea. I’m curious, you’re comparing to the IAEA, when you’re talking about nuclear technologies, typically, you can bring in inspectors. You can use detectors to seek out signs that people are not doing the right thing. With something like generative AI, at least knowing what I do know about the technology, it’s much harder to do that kind of thing, to have that kind of oversight. So I’m wondering, how are you thinking about how that would work from a regulatory perspective?

Simon:

You’re absolutely right. With nuclear energy, there is a limited set of materials. They’re reasonably easily detected. There’s the pool of knowledge – it’s 1940s technology and knowledge – but it’s complicated to build a nuclear bomb, happily. Some of you might be familiar with ChaosGPT, which was a generative AI system based on ChatGPT, or on GPT-4 I think, tasked with destroying humanity. It went out and asked, “How do I acquire or build a nuclear weapon?” and then determined, “That’s a bit difficult.”

What would this actually mean in practice? Well, it’s a thought experiment, but we’ve seen international coordination on areas like data protection, on areas like human rights. It’s not always effective, but at least we draw the lines, and that would be a starting point. 

The two areas where I think we need real coordination are on human control and transparency. It should be illegal to develop uncontrollable or uncontainable AI systems – something that cannot be contained or controlled. The analogy here is not that you’re trying to regulate statistics. It’s more that we don’t want people developing biological or chemical weapons or chemical products or viruses – or, to use the nuclear energy comparison again – in a manner that is uncontrollable or uncontainable. You need a license to develop these things. There are strict safeguards. I think that’s at least defensible as an argument, very hard to implement, because as you rightly stress, Liz, the technology is hard to contain. But you could at least draw red lines. You could have national agencies to enforce it. It is kind of remarkable that, with this 1940s technology of nuclear weaponry, we haven’t had it used in anger since World War II.

The second set of red lines is on transparency. This is a bit harder, but there should be minimum levels of transparency in some use cases – not in all use cases. I was on a panel talking about medical care and AI. A doctor pointed out that we’ve been using aspirin for a hundred years. It’s good for headaches. It’s good for your heart. We don’t know how it works. We don’t know why it works, but we’re prepared to continue. So in a lot of AI use cases, I think we don’t really need to know how it works. But for some decisions, in particular where we’re exercising some form of discretion or there’s potential for bias, then, yeah, we do need auditability and either transparency in advance – that’s the kind of model transparency, understanding how things work – or transparency after the fact. That’s the so-called right to an explanation that’s got a bit of play in Europe.

The reason for that is that if you don’t have any of these things, then you have scandals like the Robodebt scandal in Australia. This wasn’t really a sophisticated AI system, but you had government functions being outsourced to machines which led to real harms. If it hadn’t been whistle-blown and then investigated, then lots of people would’ve suffered. So I think there are all sorts of areas in which we need coordination because that was a national-level scandal, but as AI becomes more widely deployed, you’re going to have a global dimension to this. If we’re going to maintain at least some level of basic coordination of standards, basic coordination of red lines that we don’t want to cross, then you can’t rely on every country and every company acting on their own to do that.

Liz:

Interesting. I have many more questions, but I’m also conscious of time. I’m wondering if there is anything that we didn’t discuss thus far that you really wanted to share with our listeners.

Simon:

The thing we haven’t discussed, which I don’t really lose a lot of sleep about, is the artificial general intelligence prospect. There’s a lot of discussion these days that something like 50% of AI researchers think there’s a 10% chance that AI will wipe out humanity. I think it’s good we haven’t talked about that. I think it’s good we haven’t talked about trolley problems. The idea, do you kill the old man or the old lady or the dog or the cat? Because I think there are much more immediate problems.

The difficulty if you posit existential risk – and this is the problem with it, a bit like the long-term philanthropy ideas that have been kicked around – is that humans aren’t really good at dealing with existential questions. If we’re trying to calibrate how to guard against the end of everything, then that will lead you down a very dark ethical path. Because if you think that there’s infinite harm to avoid, then that will justify infinite responses. I’m much more worried about sleepwalking in a direction that’s going to be unhelpful for us, either because we outsource too much political discretion to AI or because we allow it to affect the most vulnerable people, either in socioeconomic terms or in terms of development, like youth.

I think, as counterintuitive as this might be to an Australian audience, that actually looking at what China’s been doing in terms of regulation is really interesting. Not that I support the Chinese surveillance and coercion that goes on, facilitated through AI – the social credit scoring and so on – but there’s a real willingness to say, “No, kids can’t use TikTok for more than 45 minutes. No, we’re going to ban these. We’re going to require certain degrees of transparency.” They might be doing it because they want to promote communist values or socialist values. But that willingness to say, “We’re going to stand up for certain principles,” I think is an important part of regulation.

Because you usually regulate for one of two reasons, and in Western countries we’ve often been focused on the first: avoiding market inefficiencies. That’s the example of the autonomous vehicle and sophisticated processes. Through the 20th century, we moved away from “buyer beware” to product liability, putting the onus on manufacturers because they’re the ones best able to bear the cost and to mitigate the risk. So market efficiency is a good and important reason for regulation, but it’s not the only one, because we also regulate for social and other purposes – to uphold values that we as a society hold dear.

Even if you could demonstrate that it was efficient, that it would help the market to discriminate on the basis of gender, sex, sexual orientation or anything, we’d say, “No, we’re not going to do that because we don’t think that’s appropriate. We think that crosses a line.” This social set of purposes will be determined by each jurisdiction separately. There’s a degree of overlap, and we tend to be broadly going in the same direction. But I think it’s important, even as AI systems help us achieve those market efficiencies, we don’t lose sight of the values that make those markets worthwhile in the first place.

Liz:

That sounds like a very nice thought to end on. I really appreciate that. Thank you so much for joining us today, Simon. We really appreciate you taking the time to share a bit about your work and all of your thoughts, especially around the AI regulation space and the future of education with us. We will share some links to your books in the show notes. Thanks so much.

Simon:

Thank you very much. Lovely talking with you.
