554. Can A.I. Take a Joke?

Episode Summary

Episode Title: 554. Can A.I. Take a Joke?

Main Points:
- Current AI like ChatGPT is based on neural networks and large datasets, not complex rules like old AI systems. It predicts text outputs based on patterns in the data.
- AI can generate joke structures and punchlines, but the humor often falls flat. It lacks human emotional understanding and creativity.
- Professional comedy writers like Michael Schur worry AI could replace human creativity and lead to derivative, mediocre content. But for non-writers, AI could help improve communication.
- Economist Joshua Gans sees AI as a prediction technology that reduces costs. It can help more people participate in writing, though the most creative parts still require humans.
- Research shows competition encourages creativity up to a point, but too much competition discourages effort. AI may crowd out human creativity.
- We don't know whether AI can ever match human creativity. But focusing on uniquely human skills and emotions may allow cooperation with AI systems.

Episode Show Notes

Artificial intelligence, we’ve been told, will destroy humankind. No, wait — it will usher in a new age of human flourishing! Guest host Adam Davidson (co-founder of "Planet Money") sorts through the big claims about A.I.'s future by exploring its past and present — and whether it has a sense of humor. (Part 1 of "How to Think About A.I.")

Episode Transcript

SPEAKER_09: Hey there, it's Stephen Dubner. Today on the show, a rare occurrence and a welcome occurrence. We have got a bona fide guest host. This is a person whose name will be familiar to many of you, Adam Davidson. Adam, welcome. Thank you so much, Stephen. So Adam, for Freakonomics Radio listeners, you are almost certainly best known for having, is it co-created and then hosted the NPR show and podcast Planet Money, is that correct? Yeah, I sort of had two careers. SPEAKER_10: I had a career doing more human-interest, more narrative stories for This American Life, and a career doing very straight business stories. And with my buddy Alex Blumberg, who was at This American Life at the time, we thought, well, what if we put them together? SPEAKER_09: What if we made peanut butter and chocolate? SPEAKER_10: Peanut butter and chocolate. You may be the one other person on the planet who can fully identify with that thought. And so we first did this big hour about the housing crisis called "The Giant Pool of Money." So that led to Planet Money, which Alex and I ran together for about five years. And then I eventually left for the New York Times and then later left for The New Yorker. SPEAKER_09: Planet Money, we should say, is still alive and very, very well. And you, Adam, as I mentioned, are coming in to guest host this Freakonomics Radio episode, but not just this episode. This is a three-part series on, essentially, how to think about artificial intelligence. Is that about right? SPEAKER_10: That is about right, yeah. And actually I was, in my mind, thinking of that "Giant Pool of Money" show we did so long ago. You know, if you remember back in 2008, all these things nobody had been thinking about, the mortgage market, subprime housing, interest rates, the Fed, suddenly it was this massive force. We didn't know what it was going to do, but it seemed scary and big. And it was worth spending the time to just figure it out: What is this thing? How can I think about it? How can I make it life-size enough that I can just engage with it? SPEAKER_09: What would you say was the main thing, or a main thing, that you wanted most to understand about, let's say, the next year or two of AI? SPEAKER_10: I would say the fundamental question is: is this time different? Is this just the latest thing, or is this a new kind of thing? You know, certainly in my life, I'm finding a few people are all in on AI. A lot of people are saying, eh, I don't know, it seems creepy, I don't want to have anything to do with it.
And I would encourage people, it doesn't mean you have to love it, it doesn't mean you have to hand your life over to it, but the more people who are involved in thinking about how it should be used, probably the better the outcome. SPEAKER_09: I would like to think that one good way to get more people engaged in it is to make a three-part series for this show, so I'm glad you did that. And most of all, I'm just so happy to have you playing on our team, so thanks for joining. SPEAKER_10: Thank you, it was so much fun, I hope that comes across. SPEAKER_09: Thanks, Adam. SPEAKER_10: The thing I want, the thing I've been searching for for about a year now, should be simple. At least I think it should be. Like you, like everyone, I keep hearing about AI, artificial intelligence, and I want to know how to think about it. I want a simple, clear, middle-of-the-road explanation. Here's the deal with AI. Here's how to use it, here's how not to use it. But the problem is that the idea of AI inspires people to start talking about the future in extreme ways. AI is the most existential threat to humanity. Serious people say it will kill us all. But other serious people, they say different things. They say that AI is ushering in a new age, maybe a better age, where humanity can achieve things never before dreamed of. It will eliminate disease and poverty and allow us to live for centuries. I don't know about you, but I find that my brain sort of shuts down when I hear these huge pronouncements. It will kill us all; no, it will bring about heaven on earth. I've spent months now talking to as many smart people as I can find about AI, and I learned a lot. The main thing, the big headline? Nobody knows where AI is heading. That's why there's such a crazy range of predictions. As one expert told me, there are no experts yet. We're still figuring this out. So over the next three episodes, we're going to take a little tour through the world of AI as it is now. We start today with the basics. What is AI? Why is everyone talking about it? How does it work? What can it do now, not what might it do a decade from now? And crucially, what happens when we start asking it to do things we think of as distinctly human? SPEAKER_05: This is Freakonomics Radio, the podcast that explores the hidden side of everything, with guest host Adam Davidson. SPEAKER_10: One major lesson I learned is that the big fears and the big hopes are not really about what we have today. They're not about OpenAI's ChatGPT or Google's Bard. This current generation of AI, which as we'll learn probably shouldn't even be called AI, is not going to kill us. It's more mundane than that. In fact, all the talk of existential threats and complete transformation is distracting us from the current reality, which is really quite interesting and also plenty confusing in itself. Have you played around with ChatGPT or any of the other AI tools? I have, a lot, and I'm continuously struck by two experiences. One is that it can seem magical. I ask it to do something, write a sonnet about basketball, write an essay about the history of farming, whatever. And the AI generates words and sentences and full paragraphs. And it seems impossible that some computer software is creating all that. But the other experience is that those words it generates are a bit off. They're weird. No person would write them. That has become my obsession, and not just mine. It seems to have captured the world's attention. Is AI becoming human, or is it altogether something else?
I wanted to try to get at that by asking a really simple question: can ChatGPT be funny? Can it tell a good joke? Almost. SPEAKER_06: I don't think it's as good as people yet. SPEAKER_10: That's Lydia Chilton. She's a professor of computer science at Columbia University. SPEAKER_06: All it knows how to do is, from a sequence of words, predict the next one. So if you say, tell me a knock knock joke, what would you as a human being predict the next word would be? It would be knock knock. Who's there? And how did you know that? Well, because you've heard it many, many times before. And you don't even have to know what a knock knock joke is to do that. You just follow the patterns. SPEAKER_10: Because the software that is behind ChatGPT, as I understand it, is not looking at words. It's just looking at numbers. So knock would be translated into a number. Joke could be translated into a number. And then it's just doing a bunch of math. And when this number is near this number, then this other number comes up a lot. SPEAKER_06: Computers, at the end of the day, really only know how to operate on zeros and ones. They add them together, they subtract them from each other. That's all they do. But even with just zero and one, you have to figure out how to represent the number two. And with those numbers, I can also represent words. I can line all these up and actually say, if someone has typed A, what's the most likely letter they're going to type next? It's like the dumbest thing you could possibly do, which is great. It's one of my loves of computer science. You take something really complex and make it so simple that a computer could do it. SPEAKER_10: Imagine setting out to write the rules of being funny. You could probably just about do it with knock-knock jokes. Rule one, you say knock-knock. Rule two, the other person says who's there. Rule three, you say a word that kind of sounds like another word. You see where this is going. But when's the last time you actually laughed at a knock-knock joke? If you can write clear rules for how to generate a joke, it's probably not a very good joke. That is essentially why Lydia Chilton gave up on AI the first time she looked at it, which was about 15 years ago. When she was in graduate school, she tried out the cutting edge of AI at the time. It now goes by a phrase I love: GOFAI. That's G-O-F-A-I, good old-fashioned AI. You can think of it as rules-based AI, coming up with super complicated rules to achieve some outcome. SPEAKER_06: So, GOFAI, good old-fashioned AI, is just all the things that we did pretty much before the internet. GOFAI had this vision of allowing computers to see, and sort of the method was, let's, you know, take a picture of a human face and break it up into features. Here's one eyeball, another eyeball, a nose, a mouth, and then test that against all the other eyeballs that are in the database to identify: this is Adam, this is Lydia, this is Barack Obama. SPEAKER_10: This is the classic model of a computer program. You give the computer a series of rules, and it follows the rules in sequence and spits out a result at the end. To GOFAI, recognizing a face, telling a joke, diagnosing a disease, coming up with the fastest route to your mom's house, whatever task you have in mind, it's all just a series of really complicated rules. So the AI researchers would try to write more and more complicated lists of rules. SPEAKER_06: And that didn't work, at least not very well. There's two reasons.
The computers just weren't powerful enough. Turns out this does kind of work, but you need a lot of examples of what eyeballs look like, what everyone's eyeballs look like, to make it work. And unless someone's going to sit there and type in everybody's eyeballs, it's just not going to happen. So it really was a good idea, but the scale wasn't there. SPEAKER_10: Of course it didn't work. The human brain evolved to work in a way quite different from GOFAI's long list of sequential rules. Our brains don't start with a bunch of rules. They start by taking in sights and sounds and smells and the rest. And then they build connections among neurons, which prepare that brain to interact with the world it finds itself in. SPEAKER_06: It's sort of this illusion that computer scientists were under, that if I write down enough rules, I can describe a cat or a table or anything. But it turns out it's really hard to write down those rules. You try table, you know, it's got a flat bit and then some legs, four legs. Oh, but some of them have two legs. Oh, but some tables fold, and so then they have no legs. The world just doesn't break down in this categorical sense. And guess what? That's not how people learn either. We just fumble around as newborns and toddlers and see a bunch of stuff and kind of figure it out. And those toddlers have a lot of data. And so if computers could have that data, or even much, much, much more, maybe they'll just figure it out on their own, with the right information architecture, which is neural networks: just taking in all this data and trying to predict, is that a table? Is that a table? And it doesn't have to conform to hard rules. SPEAKER_10: The reason you're hearing all about AI now, the reason it is getting so much attention, is that AI researchers shifted from good old-fashioned AI, the long list of rules, to what Chilton just mentioned: neural networks, designed to be more like the human brain. The AI software is made up of a huge network of nodes designed to simulate the brain's neurons. You feed this AI tons of data and let it form the connections. Interestingly, this neural network approach has been around for a long time. It was first proposed in 1943 by two researchers at universities in Chicago, a neurophysiologist and a logician, but it wasn't until pretty recently that computers were fast enough, with enough memory, that neural networks fully took off as a powerful tool. Also, for decades, researchers had a problem. A neural network needs a ton of data. If you want it to be able to identify a table, you need to show it a lot of tables. If you want it to predict how human beings communicate, you need a lot of examples of human beings communicating, and you need those examples to be in a form that computers can read. And for most of the 20th century, there was just not that much stuff. SPEAKER_06: Then the internet happened, and people just started dumping information. There are probably 100,000 photos of me on the internet, and of everyone. We all just gave away all our personal information. And so this amassing of data, not just of facts, but of people and personal experiences and thoughts, really created the trove of information that we needed to train these algorithms, rather than trying to engineer rules and figure it out, because there are just too many rules.
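Chilton's contrast between hand-written rules and learning from examples can be made concrete in a few lines of code. Here is a minimal sketch of the learning-from-data idea: instead of enumerating rules for what a table is, we show a tiny neural unit some labeled examples and let it fit its own weights. The feature encoding, the toy examples, and the labels are all invented for illustration; they are not from her research.

```python
# Instead of rules ("a table has four legs..."), fit weights to examples.
import numpy as np

# Toy feature vectors: [has_flat_top, leg_count / 4, is_foldable]
X = np.array([
    [1.0, 1.00, 0.0],  # classic four-legged table
    [1.0, 0.50, 0.0],  # two-legged pedestal table
    [1.0, 0.00, 1.0],  # folding table, no fixed legs
    [0.0, 1.00, 0.0],  # four legs but no flat top
    [0.0, 0.00, 0.0],  # nothing table-like at all
])
y = np.array([1, 1, 1, 0, 0])  # 1 = table, 0 = not a table

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the logistic loss: the "learning from data" step.
for _ in range(2000):
    p = sigmoid(X @ w + b)            # predicted probability of "table"
    grad = p - y                      # error signal for each example
    w -= 0.1 * (X.T @ grad) / len(y)  # nudge weights toward the data
    b -= 0.1 * grad.mean()

# A new folding table it has never seen: the fitted weights, not any rule,
# push the prediction toward "table."
print(sigmoid(np.array([1.0, 0.25, 1.0]) @ w + b))
```

Nothing in the code states a rule about tables; the weights come entirely from the examples, which is the whole shift from GOFAI to neural networks, just at doll-house scale.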
SPEAKER_10: In all that information we dumped on the internet, all those blog posts and Instagram stories and angry comments, as well as movie scripts and just about every book ever, we gave neural networks a ton of examples of our faces and our experiences and our thoughts. Back to humor: if you're a computer connected to the internet, it's very easy to find examples of people being funny, and of people trying to be funny and then being told whether or not they actually are all that funny. So after giving up on rules-based AI, Lydia Chilton decided to give neural network-based AI a chance, because she had this obsession: What makes something funny? And can I make a computer be funny? SPEAKER_06: Well, I will be honest, one thing you get to do in computer science is overanalyze things that you find fascinating but are not good at. And that was me. It's a power, to be able to tell jokes, good jokes. SPEAKER_10: And did you not feel like you were good at it? SPEAKER_06: No, I would say most of my humor is a little bit unintentional. I would say certainly for myself, and maybe other computer scientists, understanding people is a real challenge. For me, it does not come naturally. And so I like studying it, so I can understand these things, so I can feel like a normal human that understands other people. And humor is a big part of that, and always just felt like this nut that I could crack. SPEAKER_10: We know computers can do math well. We know they can store a ton of data. But humor, making another person sincerely laugh out loud, feels so human. SPEAKER_06: People have this intuition that a computer can't be funny because it doesn't have emotions. And that is a challenge. But there are actually ways that AI can get around that. The main way it gets around that is by simulating those emotions. But we all simulate emotions as well. You can do it without feeling it. So can a machine. And it learns it from patterns, just like you did. SPEAKER_10: But the best humor is really surprising. That's the fun of the humor. Like, you never would have thought that person would have said that. So is that also just following rules? SPEAKER_06: There's this sort of myth out there that creativity is somehow magic, and jokes are one of the most creative things. They just come out of nowhere and they don't follow patterns. And it's really hard even for a person to do; unless you're, like, Mozart, Picasso, Shakespeare, Einstein, someone like that, you're not going to come up with something super creative. But it turns out that creativity is not that hard. It's just a lot of hard work. And you always lean on patterns. The trick is that humor has this structure beneath the surface, like a plot, like a chord progression. But what it really is, is it's violating expectations in a very particular way. SPEAKER_10: Chilton and her collaborators did eventually get a computer to make up a joke. Not a great joke, but a joke. Here's how they did it. They focused on the American Voices section of The Onion, the humor website. In American Voices, they take some topic from the news and then have a few fake person-on-the-street reactions. It's a classic setup-punchline structure, but here there's one setup and three punchlines. This is great for a computer science researcher. American Voices, which was originally called What Do You Think, has been around since at least the mid-90s. So: 30 years, 50 setups a year, three to six punchlines.
That's thousands and thousands of jokes with exactly the same structure. All that data allowed Lydia Chilton to come up with a series of 20 steps, she calls them microtasks, that a writer goes through to make a joke. For instance, if you're given a headline, first identify all the elements. In her paper, she looks at a headline that says "Justin Bieber baptized in New York City bathtub." Task one would be to identify four elements: there's Justin Bieber, there's baptism, there's New York City, and there's a bathtub. Then task two would be to figure out what people would normally expect from such a headline. And then, this is where the humor comes in: you subvert that expectation. So with all of her structure and all of that data, how does AI do? Chilton tried one for us. SPEAKER_06: Okay, I like these because I have toddlers. So the real headline is "10-year-olds found working at McDonald's until 2 a.m." AI says: talk about commitment, I can't even get my 10-year-old to finish their vegetables. Another one: well, that explains the finger painting in my Big Mac box last night. Another one: finally, a solution to the never-ending debate of homework versus real-world experience. SPEAKER_10: The punchlines are not great. AI seems to understand the overall structure of a setup-punchline kind of joke, but it's struggling to make those punchlines actually funny, actually work as jokes, or even always make sense. Though to be fair, a lot of us human beings struggle with that, which made me wonder how this looks from the perspective of an actual funny person. SPEAKER_03: My name is Michael Schur. I am a television writer and producer based in Los Angeles. SPEAKER_10: Michael Schur is one of our era's most prolific and successful creators of TV comedies. On his own or with others, he created Parks and Recreation and The Good Place. He was a major force behind Brooklyn Nine-Nine and the American version of The Office, and is an executive producer of the show Hacks. I love all these shows. I think I've seen every episode of television Michael Schur has had anything to do with. At the moment, he's on the negotiating committee for the Writers Guild of America in their ongoing strike. He told me he has dabbled in ChatGPT. SPEAKER_03: I will say that I am generally averse to it, because I know that by playing around with it, you're helping it learn stuff to some extent. And as a longtime fan of science fiction writing, I don't want to contribute in any way to the advancement, the rapid advancement, of these tools. So I have tended to shy away. SPEAKER_10: I mean, that's actually an interesting moral question. Like, as someone with some influence in your industry, you know, I certainly understand the position of, let me not participate, let me not encourage it, which makes a lot of sense. But on the other hand, maybe: let me understand what this thing can do so I can better represent my community, or something like that. Do you see a tension there? SPEAKER_03: Yeah, I do. And I do understand that at some level, both as a writer and as a member of the negotiating committee for the WGA, it is probably part of my job description to understand these things, know how they work, play around with them, that sort of thing. But I've also, I think I get it, you know, and I kind of don't want to encourage it.
SPEAKER_10: Schur's approach to AI, to not use it, to hope it goes away, made me think of the Luddites, the movement of British textile workers who smashed factory machinery in the early 1800s, because of course they've become the go-to historical analogy for anyone who resists a new technology. But the real story of the Luddites is a bit more complex. The Luddite movement was made up of highly skilled textile workers, the very people who were most familiar with the new industrial technology. They weren't against the machines. They were against the way the factory owners were using those machines. Factories were using the technology to make inferior products and, in the process, destroying the pipeline of skilled textile workers. And in that sense, Michael Schur is almost exactly a Luddite. He is not so much worried about the technology itself. He is worried about how industry will use that technology to weaken the power of writers. He is also worried that those studio execs don't even realize that they're being self-defeating. If they damage the current comedy-writing ecosystem, they might find themselves without anyone who knows how to be funny professionally and reliably. Let's take Schur, for example. He got a job at Saturday Night Live when he was fresh out of college. SPEAKER_03: I was extremely bad at the job for a good long time and by all rights should have been fired, but eventually figured it out through observation. The head writers at that time were Tina Fey and Adam McKay. And I had good friends who worked on the show, Dennis McNicholas and Robert Carlock. And I just decided to be a sponge. I just decided to say, like, okay, I'm going to watch these folks. I became a forensic scientist. I would look at their sketches and I would break them down and I would try to understand what made them good and what made them successful. And eventually, through a combination of observation and genuine mentorship, I kind of got to the point where I could do the job. SPEAKER_10: Schur and his fellow WGA members are striking right now for a bunch of reasons, but one big one is AI. Specifically, the writers don't want studio executives to be able to use AI to supplant writers as the creator of a new idea for a movie or TV show. The way Hollywood works is that writers have the most power and make the most money when they generate original ideas, like Schur did with The Good Place. What Schur and the WGA fear is that executives will ask AI to generate a bunch of ideas for TV shows and movies, and then hire writers to flesh those ideas out into scripts. There is no AI program that can actually write a ready-to-shoot full script, at least not yet, but AI can generate a ton of ideas, and at least some of them might be usable. If AI creates the original idea, then the writer is just a hired gun, which means that more of the rewards of the show's success accrue to the studio. SPEAKER_03: The thing that we're fighting for here, very simply, is the concept of writing being a viable career. It's never been remotely this hard for young writers to move to L.A. or New York and begin a career and then sustain that career. I have watched as what was already a difficult path has become nearly impossible. That is essentially why we are fighting this fight, because if it doesn't change, if we can't make it more sustainable, it's going to stop. People will just decide that writing falls into the same category as being a professional basketball player.
Like, I love basketball, but I'm not making the pros, so there's no point. That would be a real shame. We would lose out on a lot of great stories and a lot of great brains and hearts and souls of people who have something to say. SPEAKER_10: For Michael Schur and the WGA, this is existential. It would mean a near-total collapse of the career of writing for movies and TV shows. That fear is about the current generation of AI, the one that cannot yet write a full script. AI, of course, is getting better all the time. SPEAKER_03: My fear is that even if these machines and programs only ever get really, really good at doing the thing that they do, which is predictive text, that they will still at some point, with enough data and with enough computing power, get to the point where they could accidentally stumble into something that might look enough like a genuine human idea that people wouldn't really care one way or the other. And that's what honestly worries me, is the idea that it will be so good at imitating or predicting, based on its vast reservoir of existing knowledge, that people won't really be able to tell the difference when it generates whatever it generates. SPEAKER_10: Obviously, if you write for a living, the idea of AI writing as well as you is pretty worrisome. But what about the rest of us, who don't write TV shows but do consume them? What would it mean if, as Schur says, AI just keeps getting better at predicting things? SPEAKER_03: That's the thing that keeps me up at night and haunts me and makes me feel like there's something very, very dangerous that is right around the corner. SPEAKER_10: That's coming up after the break. I'm Adam Davidson, and this is Freakonomics Radio. SPEAKER_10: Welcome back to Freakonomics Radio. I'm Adam Davidson. I am on a journey to figure out how I should think and feel about AI and its place in our society, in a way that doesn't have the panic or the excitement cranked up to 11. So I knew who to call. SPEAKER_08: I'm Joshua Gans. I'm a professor of strategic management at the University of Toronto. And I guess I'm an economist for a living. SPEAKER_10: I've been turning to Joshua Gans for years for exactly this sort of thing. There is some exciting new trend and everyone is freaking out. What's a calm, grounded way to understand it? Joshua Gans will know. SPEAKER_08: I call that process de-sexification. SPEAKER_10: Meaning, like, we're taking something really exciting and brand new, and how can we make it boring and predictable and like a lot of other things? SPEAKER_08: Exactly. Exactly. That's my mission in life. SPEAKER_10: Gans has co-written two books on AI: Prediction Machines, in 2018, and Power and Prediction, in 2022. He also runs a program on AI at the National Bureau of Economic Research, through which he's written and edited a ton of smart papers on the subject. Nearly everything he writes includes that word, prediction. He says the best way to understand the economics of AI is to think of it as a process that reduces the cost of prediction. SPEAKER_08: And what is prediction? Prediction is taking information that you have and turning it into information that you need. For instance, when we predict the weather, we're taking information on historical weather trends and other things going on at the moment, and we use it to turn it into information we need, which is a forecast. Not to say that these predictions are perfect. They're just better than what we'd have to make decisions with in their absence.
But the big leap was turning things that we didn't normally think of as a prediction problem, realizing they were a prediction problem, and then applying these new methods of statistics to solve it. SPEAKER_10: Let's step back a moment. Earlier, I mentioned that artificial intelligence is not the right term for the current generation of what we have all come to call AI. The word intelligence suggests that there is some active process of thought. But that is not what ChatGPT or any such program is doing. All it is doing is taking in information and using a lot of mathematics to predict what information comes next. The pros call it machine learning. There are no words or pictures or sounds. There are only numbers. Words are turned into numbers, pictures into very long numbers, sounds into numbers, and then the AI does math. It's not even very complicated math. Each step is fairly straightforward. It's just that the software does a lot of math, a lot of linear algebra equations, over and over again. So after being trained on a ton of joke setups from The Onion, say, the AI can use math to more accurately predict what is likely to come next in the punchline. So let's get back to writing. In December of last year, the Harvard Business Review asked Gans and his book co-authors to write an essay about ChatGPT. The team got together and hashed out some big ideas and some key insights they wanted to put in the essay. And then Gans was given the task of turning those rough notes into an actual finished work. SPEAKER_08: What I did instead is, I looked at those and said, ah, I wonder what happens if I just put the notes that we have into ChatGPT and say, write a 700-word piece describing these things at the level of an MBA student in terms of reading and terminology. SPEAKER_10: So pretty low. SPEAKER_08: Pretty low. And so I did that. Pressed enter, and out popped exactly 700 words. We did some light editing; I'd say about 10 percent of it was altered. And off it went to the Harvard Business Review, and people read it and found it interesting. We put a note at the bottom saying we'd used ChatGPT for this purpose, because it was so new that it seemed appropriate to do so. And so you look at that and you say, oh, well, why was I even necessary? And it's true, I saved myself an hour's worth of time doing something that we'd normally call writing. But let's think about that whole task. What really happened? The task of writing was now decomposed into three things: the prompt, the actual physical churning out of the words, and then the sign-off at the end. And when you step back from that and say, what was the important part of this that makes it worthwhile to read? It's not the writing in the middle. It's the prompt, and it's the sign-off at the end. It is not that all of a sudden you can't write or what you've done is not valuable. What it means is that anybody, even if they can't string a few words together, can prompt ChatGPT to churn out their thoughts and then read it and sign off on it. There's this potential for a great explosion in the number of people who can participate in written activity. And that's the change that's going to come from this. SPEAKER_10: Because, to be fair, a lot of human-generated everyday communication is not great. Think of PowerPoint presentations you've sat through, or memos from your colleagues, or the instructions to some new gizmo you bought. We are inundated with communication that doesn't meet the basic hurdle of being clear and comprehensible.
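Gans's decomposition of writing into three parts, the prompt, the machine's churning out of the words, and the human sign-off, maps directly onto a few lines of code. Here is a minimal sketch of that workflow using the OpenAI Python client; the model name, the notes, and the prompt wording are placeholders of my own, not what the Harvard Business Review team actually used. Only the three-step shape comes from Gans's account.

```python
# A sketch of the prompt -> generate -> sign-off workflow Gans describes.
# The model name, notes, and prompt text below are assumptions for
# illustration, not the HBR team's actual inputs.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

notes = """
- ChatGPT lowers the cost of prediction, and writing is a prediction task
- The valuable human steps are the prompt and the final sign-off
- More people can now participate in written work
"""

# Step 1: the prompt -- the human contribution up front.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do here
    messages=[{
        "role": "user",
        "content": "Write a 700-word piece describing these notes "
                   "at the level of an MBA student:\n" + notes,
    }],
)

# Step 2: the machine churns out the words.
draft = response.choices[0].message.content

# Step 3: the sign-off -- a human reads, lightly edits, and approves.
print(draft)
```

The point of the sketch is how little of it is "writing": the human work sits entirely in composing the notes and in judging the draft at the end.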
To a professional writer, AI that is good at writing sounds like a threat. But to a lot of other folks, people who have to communicate but aren't great at it, AI might be a solution. Joshua Gans made me feel a bit calmer about AI, a bit more settled. I can see why some people are afraid of it and others like it. But then I remembered a part of my conversation with Mike Schur. SPEAKER_03: That's what honestly worries me, is the idea that it won't actually be creating a new idea, but it will be so good at imitating or predicting, based on its vast reservoir of existing knowledge, that people won't really be able to tell the difference. SPEAKER_10: By its nature, AI is backward-looking. It looks at whatever it is fed, and then it uses that stuff to make predictions. So what happens if most of the writing we have was produced by AI, and then that AI is trained on all that AI-written stuff to write more stuff? If our TV shows and movies and essays and articles are all created by AI, and then are used to train AI to write more of the same? Think of the funniest thing you've ever seen. Your favorite book or movie or TV show. That thing that surprised you, that came out of left field and just blew you away. For me, I instantly think of Monty Python, or watching Spinal Tap, or seeing Ali G or the UK version of The Office for the first time, and the movie Step Brothers. Your list may be different, but you have one, right? SPEAKER_03: When you're talking about the relationship that audiences have to the art form, what you're really talking about is, can you reach through the screen and grab someone by the lapels of their jacket and shake them a little bit and make them see the world differently or make them understand themselves differently? And the AI piece of this, to me, is giving up on that concept. It's saying that's not the goal anymore. If we go down that road, I don't think we can ever come back. I don't think that there will ever be space for the better version of the art form to break through, because the world will be so cluttered with garbage and dreck and the slurry of other shows and movies that has just run off into a processing machine and been spit back out in a new shape and form, that there won't be any room for the good stuff. That's the thing that keeps me up at night and haunts me and makes me feel like there's something very, very dangerous that is right around the corner from where we're standing right now. SPEAKER_10: You're supposed to be the funny guy. SPEAKER_03: Well, there's nothing funny about this. That's the problem, man. You think I want to be walking in circles for four hours a day and talking about the death of the art form? SPEAKER_10: Much like the Luddites, who saw a flood of inferior machine-made textiles replace the higher-quality, more expensive stuff made by hand, Schur pictures a world of AI-driven dreck, middle-of-the-road stuff produced by a prediction machine, a machine that predicts the most-likely-to-satisfy answer, not the single very best, most amazing thing. No, the average, the middle of the road. So yes, Michael Schur is right. If all of our comedy were written by AI, we would probably only have what I think young people call mid: middle-of-the-road, derivative comedy. And let's be honest, a lot of human-written comedy is pretty derivative, pretty middle-of-the-road. But people, at least some people, do want that grab-you-by-the-lapels experience, that new thing that is fundamentally unlike anything that came before. For now, that requires human beings. Okay.
So if creativity is what human beings can offer that AI can't fully replace, it's pretty important to our economic future. In which case, we should probably know what creativity is, which is easier said than done. SPEAKER_07: There has been, in my view, in the economics literature, kind of an abstraction away from the individual and that individual act of creativity. SPEAKER_10: That's right after the break on Freakonomics Radio. SPEAKER_10: Economists sometimes have a hard time talking about creativity, although one exception is Dan Gross from Duke University's Fuqua School of Business. SPEAKER_07: It's this ephemeral thing. There isn't a broad consensus on what this even is, let alone what a good way to measure it would be. SPEAKER_10: You want to get an economist excited? Tell them there is some vague thing that can't be measured. They'll obsess over how to measure it. Creativity is a deep issue for economics. As you've heard on this show many times, economic growth, where more people have more of their needs met, most often comes from innovation, from the output of creativity. That could mean a new technology or a new TV show. They're both bringing something into the world that wasn't there before. Some societies and some moments in history produce a lot more creativity than others. Economists want to understand that. So they look at the kinds of things economists pay attention to: property rights, population density, interest rates. They don't usually look much at individual people. Partly this is because of the tools that are available. SPEAKER_07: And the data that are available.
There has been, in my view, in the economics literature, kind of an abstraction away from the individual and that individual act of creativity. And that's what I decided I wanted to try to get a little bit more insight into. SPEAKER_10: Gross happened upon something economists love: a natural experiment, a real thing happening in the world that would generate the data he needs. Not something I would have thought of: online logo-design competitions. SPEAKER_07: This work that I did in graduate school, it was studying how competition affects creative production. And in particular, it was examining design competitions, where you have individual designers who are competing for a fixed prize that has been posted by a sponsor, typically a small business that's in need of a logo. SPEAKER_10: So I've done these, by the way; it's kind of awesome. I had a small podcast production company, and we just went on this site and explained what we wanted. And suddenly we had hundreds and hundreds of options. SPEAKER_07: And so let me tell you how this really worked in the setting that I studied. The principal mode of feedback was one-to-five-star ratings. So this design got three stars, this one got one star. The designers can see the ratings of their own work. They can't see what ratings have been given to specific designs by other people, but they can see the overall distribution of ratings. They can see, okay, you know, somebody out there seems to have a winning idea, because there's a five-star floating out there somewhere. And then they can think about what that means for them. SPEAKER_10: What do you do when you get a three-star rating for your design and you know someone else has five stars? You know you're not getting the gig. You're not winning the award unless you do something different. Do you go for broke, try something wild? Do you get even more conservative and do something classical but boring? Or do you just quit? Gross was able to peer into this carefully controlled space to get a real sense of how people respond. SPEAKER_07: What I found here is that when a designer gets their first five-star rating, they'll really transition from trying out different ideas to just iterating on the one that you've rated highly. And that's especially the case if they don't have any high-rated competition that they're aware of. On the other hand, if they're aware that there is other high-rated competition, they'll then be induced to actually revert back to experimenting a little bit more. SPEAKER_10: So with competition, creativity goes up. But, you know, spoiler alert, I did read the paper. So I know there's another part of this story. SPEAKER_07: Let me tell you about the twist here. As a contest gets very crowded, so if there are a lot of high-performing competitors, these individual designers, their incentive to keep investing more effort, to keep putting more in to try to make their designs better, that starts to go down. Because, essentially, in a crowded field, it becomes a bit more of a lottery. The chances that that incremental effort is going to really yield some return for them, they really shrink toward zero, because you have a lot of other good contenders out there. The odds that you're going to slip by them start to become smaller and smaller, even if you have a good idea. And so crowded competitions actually discourage effort. They might actually drive these designers to just stop participating.
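The incentive effect Gross describes can be shown with a toy calculation. Here is a minimal sketch in which winning a crowded contest works like a lottery: every number in it (the prize, the cost of extra effort, the edge that effort buys) is invented for illustration, and this is a stylized model, not Gross's actual estimation.

```python
# A toy illustration of the discouragement effect: with n equally strong
# rivals, your baseline win chance is 1/(n+1). Extra effort gives you a
# small edge, but the value of that edge shrinks as the field gets crowded.
# All numbers below are invented for illustration.
PRIZE = 300.0        # fixed prize posted by the contest sponsor
EFFORT_COST = 20.0   # what one more round of polishing costs you
EDGE = 0.5           # effort multiplies your "ticket count" by (1 + EDGE)

for n_rivals in [1, 2, 5, 10, 25, 50]:
    p_baseline = 1.0 / (n_rivals + 1)
    # With effort, you hold (1 + EDGE) lottery tickets against
    # n single-ticket rivals.
    p_with_effort = (1.0 + EDGE) / (n_rivals + 1.0 + EDGE)
    marginal_value = PRIZE * (p_with_effort - p_baseline)
    verdict = "invest" if marginal_value > EFFORT_COST else "sit out"
    print(f"{n_rivals:3d} strong rivals: extra effort worth "
          f"${marginal_value:6.2f} -> {verdict}")
```

With one or two strong rivals, the extra polish is worth more than it costs; by five rivals it already isn't, and by fifty it's worth almost nothing. That is the lottery logic in Gross's twist: in a crowded field, even a designer with a good idea rationally stops investing.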
SPEAKER_10: What Gross is saying is that there's this magic Goldilocks zone, where there's just enough competition to get the largest number of people to step up and do more creative work. SPEAKER_07: The story of the paper, in a nutshell, is that with too little competition, you don't get a lot of variation. With too much competition, you don't get a lot of effort. And it's somewhere in the middle where at least the incentives to be creative, to produce novel work, to just come up with this stuff, seem to be the highest. SPEAKER_10: AI means that there will be essentially infinite competition for that creative middle of the road. AI can produce so much work in that space that it probably makes sense for people who can only create middle-of-the-road work to bow out, let the AI do it. But Gross's study contains a warning. When the contest gets crowded, it's not just the middle-of-the-road folks who stop competing, it's everyone. One big question: will AI always be stuck in the middle of the road, or can it generate new ideas, new forms of writing, new ways of creating art or telling jokes? Is AI fundamentally different from us, or is it just early on its journey? SPEAKER_06: There are some similarities between what people do and what computers do. SPEAKER_10: That's computer scientist Lydia Chilton again. SPEAKER_06: Certainly both rely heavily on examples. The more examples you look at and analyze, you usually get better at your craft. No one is born knowing how to do these things. We're all learning from examples. The computer really is trying to simulate aspects of human experience, but there are some things, like, if you can't actually feel it, you don't have what we call ground-truth data. You don't know what's real. You're only seeing part of it. You're only sort of guessing. And I think we've all been in experiences where, like, I don't really know what's going on, but I can kind of guess. And so that's what the computer is. It's just guessing, but it's seen enough data that it can guess correctly, often enough. SPEAKER_10: I feel like I want humans to win. And I would love it if you said there's something fundamentally human that computers will never be able to understand. SPEAKER_06: It's hard for me to separate what I think will happen from what I want to happen. And nobody knows. Here's what I want. What I really want is to show people that these things, like creativity, that we think are mysteries: it's not a mystery. You can do it. Now, in this process, if I have accidentally enabled the machine, or helped the machine in any way do better than people, I'd be like, oh, maybe I shouldn't have done that. This is a classic computer science thing, where we're so excited about just showing the computer can do it. Maybe we should have thought about whether it should do it. SPEAKER_10: Your job is not really, fundamentally, to figure out what the implications are. Your job is to advance science and to teach science, right? SPEAKER_06: That's what I'm good at. I'd say I'm not that great at the other thing, at thinking through the implications. I try, but I have to admit I get a little bit stuck. I'm so caught up in the idea of understanding this process. And I do really think it's hard for me to think of, okay, computers can make jokes, like, what comes next? SPEAKER_10: The question about humor is really a question about humanity. Are there things, valuable, important things, that only humans are able to do? If there are, then the answer is clear. People can thrive so long as they focus on the human stuff.
Let AI do whatever it is that AI can do. But if we learn that there are no things, or very few things, that humans can do better than AI, then our position is a lot more confusing. What is our role in a world where we're not needed? We're not there now. That's not today's issue. But it could come soon. SPEAKER_04: GPT-2 was roughly the size of a honeybee's brain. And it was already able to do some interesting stuff. Now I think GPT-4 is roughly the size of a squirrel's brain, last I checked. So we've moved, you know, from honeybee to squirrel. And I was trying to forecast when it would become affordable to train a model the size of the human brain. SPEAKER_10: How long will that take, and what will it mean for humans like you and me? Next week, on part two of our series, How to Think About A.I., we'll answer those big questions and a few others, including: is AI coming for your job? And if so, what can you do about it? That's coming up next week on Freakonomics Radio. I'm Adam Davidson. Thanks for listening. SPEAKER_09: Hey there, it's Stephen Dubner again, and that was our guest host, Adam Davidson. He will be back next time with part two of How to Think About A.I. Until then, take care of yourself and, if you can, someone else too. Freakonomics Radio is produced by Stitcher and Renbud Radio. You can find our entire archive on any podcast app or at freakonomics.com, where we also publish transcripts and show notes. This series was produced by Julie Kanfer and mixed by Eleanor Osborne, Greg Rippin, Jasmin Klinger and Jeremy Johnston. We also had help this week from Daniel Moritz-Rabson. Our staff also includes Alina Kulman, Daria Klenert, Elsa Hernandez, Gabriel Roth, Lyric Bowditch, Morgan Levey, Neal Carruth, Rebecca Lee Douglas, Ryan Kelley, Sarah Lilley and Zack Lapinski. Our theme song is "Mr. Fortune" by the Hitchhikers. The rest of our music is composed by Luis Guerra. As always, thank you for listening. SPEAKER_08: Can I call you J-Man? If you really want to, sure. J-Gans. You know, it's your podcast. SPEAKER_05: The Freakonomics Radio Network, the hidden side of everything. SPEAKER_01: Stitcher.