Podcasts Season 3

S03E02: The art of artificial intelligence, with Eryk Salvaggio

In the age of DALL-E and Stable Diffusion, what counts as art? And what can art tell us about AI? In this episode, we explore these questions and more with the help of Eryk Salvaggio, a US-based artist, designer and researcher whose work explores the fabric of artificial intelligence — and often playfully defies its boundaries. 

Credits

Guest – Eryk Salvaggio

Hosts – Zena Assaad and Liz Williams

Producers – Robbie Slape, Zena Assaad, Liz Williams

Audio Producer – Martin Franklin (East Coast Studio)

Thank you to the Australian National Centre for the Public Awareness of Science for allowing us to use their podcast studio for this episode.

Transcript

Liz 

Hello, listeners! This is season 3 of the Algorithmic Futures podcast, where we talk to technology creators, regulators and dreamers from around the world about how complex technologies are shaping our world. My name is Liz Williams,

Zena

And my name is Zena Assaad, and today we’re joining you from the Australian National Centre for the Public Awareness of Science here at the Australian National University, and we are currently sitting in their podcast studio.

Liz

So for our listeners: Eryk Salvaggio is somebody we’ve known for a few years, and we’ve been following your career. You’re an artist and designer and researcher who applies a critical lens to emergent tech in many fascinating ways. So we’re gonna get into some of those on the episode today. But yeah, thank you so much for joining us and being our guinea pig. Because you know, it’s always fun.

Eryk 

Great to be here, as experimental as it is.

Liz 

So, you’re in Australia at the moment because you’re speaking at the Future of Arts, Culture and Technology Symposium in Melbourne, which is really exciting. During your presentation, you shared your work, “Swim,” which is a video art piece. I’m wondering if you can explain what “Swim” is and what it represents.

Eryk 

Yeah. So “Swim” is part of a kind of practice I have that’s around thinking through technology by using it, and by trying to visualize, as much as I can, the way that the technology is working — just so I can try to understand some of the logic behind the way these things are built. And in particular, I’ve been thinking a lot about diffusion models — things like Stable Diffusion, Midjourney, DALL-E. And “Swim” was a result of thinking a lot about what our relationship is to training data, because we have been talking about training data a lot in a kind of abstract way, where I think we don’t get into, like, what are these machines actually trained on? What are they actually, quote unquote, learning from? And as a result of that, in a kind of research practice, I’ve been looking at these datasets and trying to get into them, trying to figure out what exactly is in them. And also trying to think about them as archives, instead of just, quote unquote, datasets. And one of the things that’s different — and this is not always true, but in the practice of these AI models — is that the training data was really hands-off: not a lot of people involved in thinking through what was in them, not a lot of people reviewing the training data for diffusion-based models, and the decision-making was really completely automated, so —

Liz 

Can we back up a little bit? For our listeners who are new to this, can you talk about what a diffusion model is, and what are the components that go into building one?

Eryk 

Yes, so, um, the diffusion models that I’m interested in kind of emerged around, I don’t really know, maybe around 2020. There are some other sort of versions of it out there. But really, they started hitting the public in probably 2021, with DALL-E and DALL-E 2 from OpenAI. And the idea is, you know, the interface of these things is: you type a prompt, and it gives you an image. You ask for a cat, it gives you a cat; you ask for a cat in a tree, it gives you a cat in a tree. This is a huge shift in the way that we’ve been generating images with AI — much higher quality, much more control; we can say very precisely what we want. But there’s still a level of arbitrariness in it, based on just what’s going on behind the scenes, behind the curtains of the technology. And a lot of that is because it is navigating a kind of randomness. The machine is built to work with randomness, and then sort of constrain randomness. And I’ll talk a little bit about what that means. But it might be worth talking about what “Swim” is, because it’s kind of a visualization of what that means. So, “Swim.” “Swim” is a way of me engaging with an archive. And in this case, it’s this video that’s actually sort of like adult entertainments from the 1950s — but it’s not very risqué: it’s a woman, fully dressed, swimming underwater. And the original video that comes from the archive is about a minute and a half long. But I used AI and an interpolation tool, which basically allows you to do extremely slow motion by sort of faking frames: it draws from one image to the next so that you get this very fluid, slow-motion movement. And so I slowed it down. And then I also slowed down this jaunty jazz soundtrack. It’s this very cheesy, sort of burlesque-type music. But then —

Zena 

When you said jaunty jazz soundtrack, I immediately understood the tone in my head. That was the best way to visualize it.

Eryk 

Yeah, so it’s just one of those types of numbers. But when you slow it down, there’s this very different tonality; it becomes sort of meditative. And I’m thinking about — well, just starting on that level, because there are a couple of layers in the piece — thinking about that archive layer, and what it means to stretch these things, like stretching that archive so that it carries from 1951, when the thing was recorded. Because what we’re doing when we put these images into an archive or a museum or a collection, or a dataset, is we’re sort of saying: this is the thing that represents a certain time and place. When we put it in a dataset and train an AI model on it, we break it down. And this is where I want to get into the second piece of this. All of the training data that goes into these AI models is literally broken down into noise. Information is removed from the image until it’s just a noisy JPEG — like literal static. And I think that process is super fascinating, because then it walks it backwards. It starts with a random frame of noise, but it has learned, quote unquote, what a swimmer looks like — I always have to do the scare quotes around that — it’s memorized the way that noise has moved through an image, and then it can walk that path backward. And if you give it a random image of static, it’ll create a random image of something resembling all of the swimmers that are in the training data. And to me, there’s something very emotionally complex about that. When we think about memory, when we think about cultural memory or social memory: what are all these photos that we’re putting out there in the world? What are all these photos in our archives?
What are all these photos that we’re sharing on social media? How are they being broken down, and what’s being lost when they’re broken down in that way? They’re oftentimes losing a lot of context. And when you look at the training data, there’s a lot in there that’s a little disturbing. And we could talk about that if we want to. But one of the things is really that principle of taking things that are essentially memories with a large context, boiling them down into noise, stripping that context away, and then regenerating stuff that looks sort of like it, but not exactly like it. So I was thinking, in “Swim” you have the swimmer, and she’s moving through the sea of noise. And thinking about that relationship of the things in the training data, the people in the training data, the histories in the training data, and how that is being reduced down. And also the challenges of moving in that world — the challenges of all this noise in our world that we’re surrounded by constantly. One of the things I mentioned at ACMI is that one of the most listened-to audio streams on Spotify is white noise. Like, that’s what people go to listen to. And so we are in an age of noise, is what I would argue. We’ve entered the information age, all this information is overwhelming, and it’s so dense that we actually don’t have any context anymore, which means all this information has essentially become noise. And I’d like to think about how we might swim through that noise as opposed to drowning in it. And this is one of the things that comes through in that piece, I hope. So it’s a long-winded answer, but it gets a little bit into my thinking around diffusion models and AI, how it works, and how I’m trying to work through that as an artist: to visualize these tensions, show these tensions in a different way, and come at them in a different way.
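The forward process Eryk describes — an image dissolved step by step into static — can be sketched in a few lines of Python. This is a hedged illustration, not any particular model’s code: real diffusion models use a tuned variance schedule and a trained network for the reverse walk, and the constant `step_std` here is made up for the demo.

```python
import numpy as np

def dissolve_to_noise(image, steps=40, step_std=0.15, seed=0):
    """Simplified forward diffusion: at each step, shrink the signal
    slightly and add fresh Gaussian noise, so after enough steps the
    result is close to pure static."""
    rng = np.random.default_rng(seed)
    x = image.astype(float)
    frames = [x]
    keep = np.sqrt(1.0 - step_std**2)  # keeps total variance roughly stable
    for _ in range(steps):
        x = keep * x + rng.normal(0.0, step_std, size=x.shape)
        frames.append(x)
    return frames

# A toy 8x8 gradient stands in for a training image. Training pairs
# each noisy frame with the noise that produced it, which is how the
# model "learns" to walk the path backwards from static.
img = np.linspace(-1, 1, 64).reshape(8, 8)
frames = dissolve_to_noise(img)
# The final frame barely correlates with the original image.
corr = np.corrcoef(img.ravel(), frames[-1].ravel())[0, 1]
print(len(frames), abs(corr) < 0.9)
```

Generation inverts this: start from something like `frames[-1]` and iteratively subtract the noise a trained model predicts, until an image resembling the training data emerges.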

Zena 

You know, Eryk, I’m really interested in — you mentioned context. And from my perspective, from what I’m understanding, it sounds like context is removed from data, and it’s really up to a user to kind of reinforce context through the prompts that we put in. I find this fascinating for a few reasons. I used to think context was a little bit more black and white — maybe that’s too strong — but I used to think context was more clear. There’s a really fantastic literary author named Ian McEwan, and he does a really great job of this: when he writes a story, he’ll write about a pivotal moment, and then the story unfolds through all of the people who saw or witnessed or experienced that pivotal moment, but from their own context. And so the story unravels with how different people perceived it based on the context of their own situation, how they responded to it, et cetera. So when I’m thinking about this from the perspective of training data, what I’m really interested in is: in the absence of context, it is just noise, it is just information. But then when you introduce context, and you introduce it from different people’s perspectives, are we getting a wider range of use and of significance out of content that we wouldn’t have gotten otherwise? Does that make sense?

Eryk 

Yeah. Yeah, I think — so, when I’m thinking about it in the training data, there’s that severing of context, right? This information that’s in an archive, that has a text description, is just boiled down into noise, and we kind of lose that context. It just becomes these sort of bullet points. A flower that I have given to someone I love, or a picture of a flower that I’ve sent, just becomes this mass amalgamation of flowers. And so the original meaning is lost to this abstraction of what a flower is. When we’re prompting, we’re asking for that amalgamation to be sort of reformed in the new image, and so we lose the original context, right? You don’t see the image in the training data — you don’t see the picture of a flower that I sent to my wife — in your AI-generated image. You see all of the flowers that everybody sent to their wives or husbands or partners, or whatever. And so there is one way of thinking about these images, which is that they are a visualization of the datasets. They’re an infographic, in a way; they’re a data visualization of all of these things. But they offer no way into understanding exactly what informs a specific image. So we’ve severed context in that way. But then we’re imposing — and maybe imposing is a strong word — our own context through the prompts. We’re saying: there’s a flower in this noise, find it. And then we get a flower, and we react to that flower, and we contextualize it in our own way, right? We were sending that picture of a flower; now we have an AI-generated picture of a flower that we could send to our partners. And that introduces a new context. But there’s a break there. And that break is really interesting to me: that original thing in the training data that’s been collected and analyzed and studied and broken down, and the things that we make from that.
That’s an interesting tension to me. There’s a lot that comes in between that translation — a lot that is lost or shifted or changed — that’s worth thinking about, because I think there’s a lot of risk that comes into that gap as well. When you think about the way things are labeled: well, they’re not always labeled in great ways, right? They’re not always labeled in thoughtful ways. And sometimes it’s not even intentional bias or intentional harm, but sometimes it is. So when that stuff gets into that amalgamation that we get from an AI image, how are we thinking about that? What are we doing to handle that? So to get back to the piece: it’s trying to explore that by actually doing it manually, doing it by hand. I’m taking training data — the swimmer who’s swimming — and I’m dissolving it into noise over the course of nine minutes. It’s slowed down to talk about that long reach of data, where things are completely changed by their context, but then also broken down into noise. And I just wanted to sit in that stew and meditate on that translation for a little while.

Zena 

You’re talking about noise, and I know that you’ve done a lot of work around AI and sound. And I know when you’re talking about noise in this context, it’s different to sound — but I want to shift this to sound. Can you talk to me a little bit about what’s captured your interest in the intersection between AI and sound? Because we see a lot between AI and images —

Eryk 

Yeah.

Zena 

And we’ve now seen a lot with AI and videos, as we know, there’s some new tools and things that have emerged that everybody’s very excited about. 

Eryk 

Yeah.

Zena 

But what is it specifically about AI and sound that has captured your interest?

Eryk 

AI and sound is really interesting because, depending on how you want to define AI, this idea of generativity, which is so interesting in the world of images — this idea of generated music, or generative music — has been with us forever. Maybe not forever, but close to it, right? All music is generated somehow, and it’s usually generated by a tool. But there are also processes that we’ve been thinking about. It goes further back, but for me, it goes back to John Cage. John Cage famously comes out — there’s a concert, he’s supposed to perform a piece called four minutes, 33 seconds — and he sits at the piano, and there’s silence. And that silence continues for four minutes, 33 seconds.

Zena 

Sounds so awkward.

Eryk 

Right? And it’s usually treated as a bit of a joke. But John Cage was not a comedian, really — though he wasn’t not funny, and he did a lot of playful stuff, too. I think what he was really getting at with that piece is the way that we organize our position to sound: to say we’re waiting for a performer to perform something for us. And what he was saying was, actually, shift that — the music is already there, and it’s just a matter of how we tune into it. So the silence wasn’t actually silence, and a lot of people have reflected on it since then. People who were in that original audience said things like, “Oh, I was hearing the rush of blood in my ears,” and realized, “Oh, that can be thought of as music.” So the definitions of music have been very fluid since then, and there’s been a lot of playfulness. John Cage is one example. There’s the Fluxus movement, which was doing a lot of things like writing instructions on note cards and then handing out the note cards, and people could go home, and it would be like, “pour water into a tuba.” And that would be the script, right? These types of ideas strike me as shifting the idea of what music is, which helps us. But then there’s this generativity — creating matrices for creativity, setting the conditions or the environments from which music can emerge — which is a way of understanding generated music and AI music. So now what we have, in a way, is that the AI music we get is actually more constrained than a lot of the ways we’ve been making music before. So it’s this interesting relationship. I started playing with AI and sound while I was here, in 2020, working on GANs — GANs for music, basically. It was a way of looking at the visualization of sounds.
Extending that visually: it takes a picture of the waveforms — you actually get a map of the waveforms — and then, the same way it would predict an image, it would just predict what would come next in the waveform, and then play that sound. And so you could take prior music — and these days I’m a little more wary of talking about it in this way, because the models I was using were trained entirely on copyrighted material, and at the time no one was really thinking about that. But these are bands that had performed, and the tools I was using took the entire library of performances from these musicians. And you could go in and do strange things, like ask for punk bossa nova, and it would fuse the training library for punk and the training library for bossa nova and create this weird hybrid. And again, I was really more interested in these weird hybrids of what can be generated. But I found that over time it was actually kind of boring. The generativity was there, but it always kind of converged towards the center; it always sounded like everything else it was generating. And so for me, that became too constrained. And now I’m more interested in the questions that emerge from improvisation with AI, machine learning and feedback, and how you navigate what the technology allows you to do — thinking about that as almost collaborative, but almost also as adversarial. Sometimes I like to think about it as not necessarily a collaboration with the machines, but a challenge to the machines, to see what I can get them to do that they’re not really designed to do, because that’s where a lot of the fun stuff starts to emerge.
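The “predict what comes next in the waveform” mechanic can be illustrated with a deliberately simple stand-in: a linear autoregressive model fit by least squares. This is not the GAN-based tooling Eryk describes (those work on spectrogram-like representations with learned networks); it only shows the continuation idea, and a two-coefficient model already continues a pure tone.

```python
import numpy as np

def fit_ar(signal, order):
    """Fit linear autoregressive coefficients by least squares:
    predict each sample from the previous `order` samples."""
    X = np.stack([signal[i:len(signal) - order + i] for i in range(order)], axis=1)
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # coeffs[0] multiplies the oldest sample

def continue_signal(signal, coeffs, n_steps):
    """Extend the waveform by repeatedly predicting the next sample."""
    order = len(coeffs)
    out = list(signal[-order:])
    for _ in range(n_steps):
        out.append(float(np.dot(coeffs, out[-order:])))
    return np.array(out[order:])

# A pure sine wave satisfies an exact two-term recurrence, so it
# continues almost perfectly; real audio needs far richer models.
t = np.arange(400)
wave = np.sin(2 * np.pi * t / 50)
coeffs = fit_ar(wave, order=2)
pred = continue_signal(wave, coeffs, 100)
true = np.sin(2 * np.pi * np.arange(400, 500) / 50)
print(bool(np.allclose(pred, true, atol=1e-6)))
```

The “converging towards the center” effect Eryk mentions shows up even here: a least-squares predictor can only reproduce patterns present in what it was fit on.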

Zena 

Trying to stretch those boundaries. 

Eryk 

Yeah.

Liz 

So it sounds like, at least from your perspective, you need the human to actually get that interesting and, I don’t know, transformative experience from an artistic perspective.

Eryk 

I think so. Yeah. 

Liz 

We should, we should translate GANs for our listeners. 

Eryk 

Oh, okay. GANs are much easier to understand with images, I think; with sound it takes a lot more. But to start with images: I actually started my artistic practice with GANs. I used to be a collage artist, and so I would download images from public domain datasets — I have to say this: public domain datasets. I’d go to the Internet Archive or Flickr, and I would download public domain historical images. And what I would do is say, oh, this is a picture of a red apple: I’m going to put this red apple in the category for red; I’m going to put it in the category for fruit; I’m going to put it in the category for round things. And so then whenever I wanted to make a collage, a visual image, I’d be able to say, oh, I need something red, and I’d go to the red folder and see this apple. Or, I need something round; I’d find the apple. And so plums would be in various categories, right? You just assigned categories. What I realized when GANs came around is that I’d been building training sets — massive datasets for training. And so I started playing with GANs, because you take a collection of images with a lot of similarities — and with GANs, you had to get the human involved to sort them. And the funny thing is, I say this now: once you had 5,000 images, you could generate 10,000 more, but you really needed at least 500 to 1,500 in order to get these models working, building on top of past models. But you’d have 500 to 5,000 images, and it would find the patterns that were common across the images in that folder, and give you more images that reflected that pattern.
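The hand-tagging workflow Eryk describes — one image filed under several categories, then pulled back out by category — is essentially an inverted index. A minimal sketch (all filenames here are made up for illustration):

```python
from collections import defaultdict

class ImageIndex:
    """Map tags to the set of image files assigned to them."""

    def __init__(self):
        self._by_tag = defaultdict(set)

    def add(self, filename, tags):
        # One file can live in many categories, like the apple
        # filed under red, fruit, and round.
        for tag in tags:
            self._by_tag[tag].add(filename)

    def find(self, tag):
        return sorted(self._by_tag[tag])

index = ImageIndex()
index.add("apple_01.jpg", ["red", "fruit", "round"])
index.add("plum_07.jpg", ["purple", "fruit", "round"])
index.add("barn_03.jpg", ["red", "building"])

print(index.find("red"))    # ['apple_01.jpg', 'barn_03.jpg']
print(index.find("round"))  # ['apple_01.jpg', 'plum_07.jpg']
```

A GAN training set is then just the contents of one of these folders: enough images sharing a pattern that the model can learn and riff on it.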

Liz 

Okay. Cool. 

Zena 

One thing I’m super interested in is that there are a lot of people who make the argument that if you create art using AI, you have lost that element of creativity: it is not creative, because you used a generative AI tool to make it. But there are other people who claim that’s not the case — that art is fluid, and that you can still create something creative using a generative AI tool, because it’s your prompts, it’s your context, it’s your vision.

Eryk 

Yeah.

Zena 

So I want to talk to you a little bit about AI and creativity. And what you think the impact is of AI on creativity. 

Eryk 

Yeah. So, a little background there. One of the things I’ve done recently is work with a group called AI by Design, where I was the learning experience designer for a residency. And we had six sort-of-traditional animators and illustrators, and six creative technologists who worked with AI. And this was before things got polarized — things have been polarized, we’ll just put that out there. And so they kind of had to figure out together what they were doing and how they were engaging with this technology, and also what their boundaries were: what did the artists feel okay about giving up? What did they feel like they wanted to keep? What did they feel was an unethical use of the technology? Because there are a lot of questions about where this training data has come from, and stuff like that. And one of the things that was amazing to see is that just by thinking critically — and by critically, I just mean consciously — about the tools they were using and how they were using them, they built their own workflows. And so even though they all had the same tools, you had artists figuring out: I want to do this part here, and this part here, and I don’t want to touch this thing. And you would have a very different end product. So just by navigating these tools and figuring out, on their own terms, what they were saying yes or no to, they were building their own systems. And as a result — not really building the tools, but selecting the tools they wanted to use, in the order they wanted to use them — they all came up with some very distinct things and very distinct ways of expressing themselves.
I think there’s something to be said about that — about thinking consciously. Whenever you’re an artist or a craftsperson, you’re thinking about the tools you’re using, and you’re thinking consciously about those tools. It’s the same with design, and I imagine it’s probably the same with engineering — you could tell me? But one of the things I’m very aware of is that you can be an artist who draws by hand, and you can draw stuff that’s very derivative — which I know is one of these fancy art words, right? You can draw stuff and never really draw your own characters. The difference between the tools isn’t what changes your relationship to what you make. What changes the relationship to what you make is thinking about the relationship you have to what you make. So for me, there is a thing where you see a lot of people passively using the tools, and to me it’s almost like there’s a roulette-like element: I type in a prompt, I keep refreshing the prompt, and when I see an image I like, that’s the one I take, and then I say, I made this with AI. And there’s a creativity involved there. But what I ask, fundamentally, is: if we define creativity that way, what are we losing in the process of creativity? Because to me, and to many of the artists I talk to — and this is not universal, and I think actually having a universal definition of art is also one of the challenges of the AI stuff, right? It’s saying, this is how artists work. That’s not how artists work. You talk to three artists — you talk to 3,000 artists — none of them are going to work the same way.

Zena 

Yeah.

Eryk 

So, um, I forgot where I was going — I lost my train of thought there. Oh, okay. So, if you take an image and you passively accept it, then that is a relationship to creativity that you are choosing. But one of the things that’s not happening is the engagement with the process that will give you the challenges, the rewards, the critical insight, and the control that you might want as an artist — to get better, and to think about what you want to make, and how you want to say things and do things. And that’s one of the challenges, I think, of defining creativity in that way. Saying that we’re democratizing creativity by giving people these tools is actually defining creativity in a way that takes away the true democratization of creativity, which you can do with a pencil — which you can do just by drawing something awful, right? Because the problem with saying “this is creativity,” or “this is art, and this is not art,” is that we are telling people the drawing they’ve made, because they’ve taken a pen and put it to paper, is bad. But actually, what about the act of taking that pen to paper and drawing, and learning from what you’ve drawn? Seeing what you’ve drawn and thinking: oh, what if I did this? What if I shift this? That relationship with what we make is actually the thing I think we need to put in focus, as opposed to the end product. And these tools, particularly the AI image-generating stuff, are really focused on the product. So I want to think more about: how do we engage critically with the process?

And one of the things I find really interesting is that, originally, a lot of these tools had a lot of control. Stable Diffusion had a couple of open source modules that gave you real, hands-on, granular control over the images you were making. But over time, and especially as it gets commercialized and simplified for a mass audience, that stuff is what’s lost. That’s the stuff that gets cut off. And now, with DALL-E 3, the prompt you type in isn’t even the prompt that gives you the image. They are connecting your prompt to GPT-4, the large language model, and that is then translating what you’re asking for into language that it knows DALL-E 3 will understand. So that control is even more cut out of the loop. So I worry about that trajectory for them as creative tools. But I’m also interested in exploring what I’ve been calling creative misuse. This goes back to Nam June Paik, an artist who did video art. A brief history of Nam June Paik: he grew up in Korea, and he was in Berlin, and he saw the after-effects of some of these totalitarian governments and the way they were using media, particularly television. And he wanted to ask: how would we actually think about democratizing television? Now, we’re talking about democratizing art in a very different way — he wasn’t talking about that. But he was thinking about making television, because television was very limited. And what he was thinking about doing was literally sticking giant magnets on the TV, changing the images, and interacting with TV in that way. And one of the quotes that I love from Nam June Paik is, “I work with technology in order to hate it properly.” And I think that’s a really great position, and it reflects a lot of the way that I’m engaging with AI.
And it’s strong, you know — I have a lot of criticisms of AI. But I also think, if we step away from it entirely, we give up some of the opportunities to introduce friction to the things that are introduced to us. So I’m really interested in thinking through how I can use these tools in ways that are not prescribed to me. And one of them has been, again, coming back to noise: figuring out how to generate noise with them, because they are not designed to generate noise. They are designed to remove noise from an image of noise, towards something in the training data. But if I’m uncomfortable with what’s in the training data, or I don’t want to reproduce that training data — because, as an artist, I want to create something visually new, air quotes around “new” — I want to make something that doesn’t touch the training data. And so asking it for an image of noise, I’ve found, actually tricks it into a feedback loop: it’s adding noise, removing noise, and it’s just stuck. And it gives me these weird abstract images, which I love. And I think that’s a form of creative misuse that says there’s an agency in me as an artist that I can assert over the system, in order to get something different out of it, in order to use it in a different way. And so these images of noise have really become part of a visual language that I’m trying to work with. And that appears in “Swim” as well — the last piece of the “Swim” element is this background of noise generated by AI systems. I guess it’s called a prompt injection attack, which I didn’t think about. Some hackers told me I was officially a hacker, and I was very deeply honored. But they told me, like, yeah, that’s —

Zena 

Your proudest moment.

Eryk 

So they told me, yeah, that’s a prompt injection attack, and I was like, okay, great. And that’s a kind of misuse, but I’ve been trying to use that. And so in the background of “Swim,” that’s what you see: this noise generated by Midjourney, which is not meant to produce noise — it’s actually meant to remove noise — but I’m getting it to produce these weird abstractions. And so that swimming, between being dissolved into noise and this background of colorful, abstract noise generated by a machine that’s not supposed to make it — that’s the tension the swimmer is swimming through. And that’s where I find some sort of hopefulness in that piece: that we can all swim between that reduction to noise and the agency we have over how we define our relationship to it. So there’s a lot in there, and I guess it’s taken like 40 minutes to get to the conclusion of answering the first question, but yeah, that’s how I see it.

Zena 

You know, I think the art world is often characterized by divine talent. You know, it’s a God-given talent: you’re either born incredibly artistic and creative, or you’re not, and it’s just this talent that you have. And from my perspective, as somebody who isn’t an artist, that has made art inaccessible for me.

Eryk 

Yeah.

Zena 

So I will go to a gallery, and I’ll see a piece of art, and I’ll just look at it, and I don’t get it. And you know, I read the description, and they talk about what the meaning is supposed to be and how talented this person was. And then there’s always the, you know, “they were well ahead of their time.” And it’s hard, and I’m not discrediting it, please don’t misunderstand, but I just don’t get it. And I really struggle with it. And I think what AI has done is break down those barriers, so that art is more accessible to everyone and can be understood a little bit more by everyone. Because I think when it comes to something that’s creative, everybody can, you know, kind of take a step back and say, “Oh, well, you just don’t get it,” or, you know, “you just don’t understand the context.” So I think what AI has done, and I think this is why it’s so contentious in the art world, is that it’s really pulling down that curtain of this divine kind of talent that you’re either born with, or you’re not. And so I think it’s really breaking down the art world and what characterizes it.

Eryk 

Yeah, um, I think there is a frame around art, particularly this distinction between the process and the products of art, where we look at the products and we don’t think about the process. That’s part of the issue, I think. And I think that’s a lot of the reason you see people saying, like, AI images should be considered as an art form. And I don’t disagree with that. I think that anything could be considered an art form. And that definition is actually the thing that we need to start thinking about and redefining. And that’s been a process. You know, Andy Warhol sort of started that, well, I shouldn’t say started that, but this idea of, like, what is an art form anyway? What is a piece of art? I’m more interested in what is the process of art. And so when people say you can’t make art with an AI image model, I really flinch at that, too. Because if you can make art with a broken television, if you can make art with, I don’t know, taping a banana to a wall, right? If you can make art with a banana and a piece of tape, then of course you can make art with the largest dataset of visual cultural information ever gathered on Earth. Right? Of course you can make art with that. But, and I hate the definition of what’s art, what isn’t, right, what I struggle with is when an image generated by default with that tool is then circulated using the definition of art that I resist, which is that it’s the product. If an AI artist comes to me and says, I thought about the prompts, and I thought about the categories, and I understood how it was accessing the dataset and how it was rendering this image,
And I figured out that if I render this image and I put it up against this image, there’s a dialogue between these two pieces, or I’m challenging some of the stereotypes, right? There’s some amazing work. And I wish I remembered the artist’s name, but there was an amazing video that I saw at a film festival, where the artist, I believe, was an African American woman, but may have been from elsewhere. This is a very vague description. But the point was, she was trying to get the AI to generate an image that looked like her. And the entire video is a process of trying to get that image to emerge, an image that looks like her, based on describing herself through all these different lenses: how, you know, the government has described her, how she describes herself, how her mom describes her, right? How someone describes themselves is fascinating, because in my experience it’s often never accurate. I don’t think I’ve ever heard someone describe themselves in a way that I’ve seen them, or that somebody else has seen them. I think our own self-reflection is so fascinating. So to me, that is art.

Zena

Yeah, I agree! If I was going to use an AI image generator tool to try and describe myself, the amount of labor and time and, God, I don’t even know, that would go into that. And then I’d probably show it to Liz, and she’d be like, “What is this?” “I made it myself.”

Liz 

On that note, I would love to ask about a piece that you’ve made that has kind of stuck with me, “Sarah Palin Forever.” Maybe for our listeners, we should probably give some background context on Sarah Palin, because some of our listeners are Australian and maybe weren’t there when she first emerged. But I’d love to hear from you. Maybe you can give a little bit of that background, but then you can talk about what this piece is and where it came from. And yeah, maybe the process of creating it.

Eryk 

So Sarah Palin was John McCain’s running mate against Barack Obama in 2008. And Sarah Palin was the governor of Alaska, had a very distinctive way of speaking. And was umm, hmm, highly controversial.

Zena 

I love how diplomatic you’re trying to be.

Eryk 

Largely not considered qualified for the position. And, you know, I don’t want to say one way or the other, I’ll be politically neutral here, but I will say it is widely considered, even among the Republican Party, that she was not qualified for that position. She said a lot of just blatantly wrong things, and very combatively. And I was actually doing press when she came to Bangor, Maine, you know, northeastern US, and did a campaign rally there. And I remember going to that campaign rally. And one of the things that happened, and this will give you kind of an idea of who Sarah Palin is, is that when she got on stage, she started talking and she said, “We’ll show you who the real know-nothings are,” and pointed to the press, and told everyone in the crowd to look back at the press and boo them. And so I was in the press box at this point. And I just see these people that I go grocery shopping with turn around and start booing me, because I’m in the media, I’m in the press. And so this sort of thing, you know, we’ve seen it with the former president, but it sort of started with Sarah Palin.

Okay, so I should describe the story, right? The story of “Sarah Palin Forever” is that it is the story of a 17-year-old girl who was born into a Sarah Palin rally that repeats every three hours. And so the rally starts, and it goes through all of the exact same motions. Like clockwork, Sarah Palin gets up and gives a speech, everyone applauds, Sarah Palin leaves. As soon as Sarah Palin leaves, everything resets itself, and that’s just the next day. The girl is born into this loop: there’s a mother who arrives, she’s a photographer, she’s pregnant, and she actually has to give birth to this daughter in this scenario. And as the story goes on, as the film goes on, you’re sort of trapped in this environment, and it seems, I don’t know, almost comic in a way. But then the joke goes on too long, and it actually becomes this kind of horror. And I wanted to revisit that with AI, and it’s only been recently that I did. It’s weird, it’s one of these things where, subconsciously, I was just like, yeah, that makes sense for this story. So I used AI-generated images to generate images of a campaign rally. And I told the story that way. And I did it in the style of, like, a 2008 newspaper photo slideshow. So it’s like a frame, and then it goes black, and then there’s a picture. And the narration is actually my voice run through a voice-changing deepfake tool that’s been trained on about 10,000 hours of Sarah Palin talking. And the idea isn’t that Sarah Palin is narrating; it’s that the daughter has grown up hearing Sarah Palin for so long that she talks like Sarah Palin, that she’s internalized this voice of Sarah Palin. And so even though she doesn’t like it there, and even though she’s trying to get out of this rally, she still talks that way. She’s still stuck in the confines of the patterns that are in her life.
And to me, this started to emerge as, actually, that feels like AI in a way: the training data we work with, the datasets we’re in, are predictive. They can predict these patterns. But we really have to think about what the alternatives are to repeating the patterns of history that we want to break out of. Because in this story, we keep coming back to this sort of center, which is Sarah Palin. Everything that comes out of it has Sarah Palin at the center, right? The people in the audience, the people who are introducing Sarah Palin, the speeches that come beforehand, it’s all coming back to the central organizing principle of Sarah Palin. And in AI, in the risks that I see, the dangers of AI when I’m at my most paranoid, it’s this idea that you can have these boundaries, you can have this variety, but it’s going to keep coming back to the center. And I’m really curious to think about: can we get out of that? Can we get out of these loops? Can we get out of these patterns? And how do we do that? So that’s a long walk through “Sarah Palin Forever.”

Zena 

So is this connected to what you were talking about earlier, when you were saying that you try and push the boundaries of the tools that you use to see how much you can get from them, what outputs you can get from them? 

Eryk 

Yeah. Outside of the confines of that particular work, that’s what I’m trying to do. But I’m also trying to use art to illustrate some of the fears and concerns that I have about these systems, which are not, like, I’m not worried about a Terminator robot, I’m not worried about that kind of stuff. I’m more worried about defaulting to predictability and patterns, and the way that that enforces a kind of history onto people moving into the future. If the past were perfect, if we had achieved that sort of egalitarian society, maybe that data would be something I would be more inclined to trust. But right now, when we look at past predictions, and we look at the data we’ve gathered, what the biases were, and the choices, and even the things that don’t get into the dataset because they weren’t allowed to exist, you know, those are the things I’m trying to break out of the box of thinking about when we think about AI.

Zena 

And I think what you’re developing is a really good way of articulating that to a more general audience, right? Like a lot of the work’s —

Eryk 

I hope so – Yeah.

Zena 

Because some of this work, when you’re not trained or educated in this particular area… Science communication is incredibly hard. People think it’s really simple to explain very complex topics in a very simple way. It’s absolutely not. It is much, much harder for me to do general science communication than it is for me to write in a traditional academic sense.

Eryk 

Yeah.

Zena 

So I feel like the work that you’re doing is kind of a middle ground there: it’s being able to communicate quite complex things in an accessible way for different people. And also, and this is what I’m most interested in, it’s interpretable in different ways by different people. And I think that’s really a strength of the work.

Eryk 

That – thank you. And I’ve been trying harder and harder to make it so that if you look at one of my pieces, you don’t have to think about AI at all; you can put your own meanings onto it, you can think about it in terms of memory, or politics, or whatever you want to think about. But there is an engagement with the tools that I’m using as well, so that, you know, the placards and the process that we’ve just been talking about can actually say something about the why: why am I using AI to tell this story or make this thing? And part of that is about that engagement. But I also want people who don’t care about AI, or don’t think about AI, to come in and actually see something and respond to it emotionally on their own terms, too. And I think that permeability is important. Yeah.

Liz 

Maybe this is a good time to talk about your website and your newsletter, because your website is one of the places where people can see all of this stuff you’ve been talking about today. Just briefly, where can people find you?

Eryk 

Yeah, cyberneticforests.com. And when you’re there, you will see a link to my newsletter.

Liz 

And it’s a fantastic newsletter, by the way, everybody should go sign up for it. 

Eryk 

Thank you so much. Yeah, Cybernetic Forests. So I write weekly, or I try to write weekly. It’s free. And, yeah, it’s just talking about things that are going on in the world of AI and trying to think about it, trying to decode the hype, and think about it from a longer historical position. But also just unpacking: if there’s a new model, I will go in, and I’ll try to read the white papers, and I’ll try to put it into plain language so that people can know what’s going on. And that’s one of my goals, demystifying that AI hype cycle, and really thinking about: what is it? How does it work? What’s it doing? What should we actually be worried about? And what should we actually think is kind of cool? Because some of this stuff is really also kind of cool, and I’m clear on that, too. A lot of this stuff is fun and interesting, but we have to be responsible with it too.

Liz 

Yeah. That’s an excellent lesson for our listeners to keep in mind. But is there anything else that you wanted to share that we haven’t covered today?

Eryk 

I can’t think of anything. No, um, it’s been great.

Liz 

It’s been awesome to have you. It’s been a fascinating conversation. And yeah –

Zena 

I feel more creative after this.

Eryk 

Good. Yeah, one thing I guess I think is really important is that if we define creativity as a thing that you have or don’t have, then we’re doing it wrong. And that’s one of the things that I just think: if AI is your outlet, use AI and be creative, but also pay attention to yourself and trust yourself as an artist. You have a voice, right? Everybody has a voice. And I just think that finding that voice, and thinking about the way you express yourself and how that voice is coming through, that’s the really important thing to think about when you’re using technology: how not to lose your voice but how to use your voice. Which I did not intend to sound like a bumper sticker. But that’s where I guess I can end.

Zena 

Bumper stickers being sold on cyberneticforests.com.

Liz  

New e-commerce venture. Yeah, thank you so much, again for joining us. And thanks again to the Australian National Centre for Public Awareness of Science for letting us use their studio. And thank you all for listening as always.
