556. A.I. Is Changing Everything. Does That Include You?

Episode Summary

The episode explores how artificial intelligence (AI) is impacting various fields and people's lives. It focuses on a new job called "prompt engineer" that involves crafting prompts to get better results from AI systems like ChatGPT. The host interviews Anna Bernstein, one of the first prompt engineers, about how she helps make "AI talk good." She explains techniques like using more specific verbs, adding contextual details, and piling on synonyms. The episode also discusses the inner workings of large language models like GPT-3 and Anthropic's Claude. Researchers like Dario Amodei talk about constitutional AI, a way of teaching models human values. Entrepreneurship professor Ethan Mollick demonstrates how AI can help generate business ideas through "constrained ideation." He shows the host how interacting with AI tools can unlock creativity. Overall, the episode explores how this new technology is changing the nature of work and thinking. AI still has limitations but also expands possibilities. The takeaway is that we need to engage with AI thoughtfully as it will transform many fields.

Episode Show Notes

For all the speculation about the future, A.I. tools can be useful right now. Adam Davidson discovers what they can help us do, how we can get the most from them — and why the things that make them helpful also make them dangerous. (Part 3 of "How to Think About A.I.")

Episode Transcript

SPEAKER_04: Freakonomics Radio is sponsored by Amica Insurance. When it comes to auto, home, and life insurance, you want a company that's on your side like Amica. They take the time to understand what you need and tailor a policy to meet your needs. When you need Amica, their representatives put you first and let you know what you can expect from them. As Amica says, empathy is our best policy. So by choosing Amica, you know you'll have someone in your corner when you need it most. Freakonomics Radio is sponsored by Capital One Bank. With no fees or minimums, banking with Capital One is the easiest decision in the history of decisions, even easier than deciding to listen to another episode of your favorite podcast. And with no overdraft fees, is it even a decision? That's banking reimagined. What's in your wallet? Terms apply. See capitalone.com/bank. Capital One, N.A., Member FDIC. Hey there, it's Stephen Dubner. You are about to hear the third and final episode in our three-part series called How to Think About A.I. The guest host for this series is Adam Davidson, one of the founders of NPR's Planet Money. Here's Adam. SPEAKER_05: Can you just tell us who you are and what your job is? SPEAKER_03: This is a question I dread at parties, but my name is Anna Bernstein. I am the prompt engineer at a company called Copy.ai. SPEAKER_05: Have you heard of this new job, prompt engineer? It's a job that could only exist right now, a job that satisfies a need that almost none of us even knew we might have until the last few months. So what is it exactly? SPEAKER_03: The prompt engineer essentially is an expert in being the linguistic intermediary between user input and AI output. SPEAKER_05: Please note how precise Anna Bernstein's language is. That is her superpower, being really, really precise about language. 
If you've played around with AI tools like ChatGPT or Google's Bard, you've seen it. They respond to the precise words you type in. They can't figure out a vibe or a hidden intention. Also, unless you happen to work at an AI company, you don't ever interact with the raw model itself. Whatever you type in is put through a filter, a filter that people like Bernstein design. The filter between you and the raw AI model is there for a bunch of reasons. The first is to avoid the ugly stuff. Since the AI models were all trained on an unfiltered mass of text from the internet, they can easily call up truly horrific, offensive words and ideas if left unfiltered. But another reason for the filter is that the AI in its raw form is not always great at understanding what human beings want. Or put another way, human beings are not always that great at telling the AI precisely what we want. We might type a quick prompt like, write some marketing copy for my new blog. The AI has no way of knowing what length you want, what audience you are targeting, what voice or writing style you're looking for. That's the kind of context Bernstein's work provides. She gives the details that most people don't think to type in. In the previous episode of our series on AI, we talked about how all new technology destroys some jobs and creates new ones, jobs nobody could have imagined. Well, Prompt Engineer is one of those new jobs, and most people have still never heard of it. SPEAKER_03: Honestly, most of the time I would just say I make the AI talk good and then that will get a laugh and everyone would move on hopefully. It's a difficult job to define. SPEAKER_05: As far as you know, do you think you might be the first person to have this job? SPEAKER_03: As far as I know, if someone is like, nope, I had it before that, I'm not going to fight them on it, but of who I know, yes. SPEAKER_05: You are the first one I've heard of and you are the person from whom I learned that this is a job. 
But I now see that it's an in-demand job, right? There's a lot of listings out there for Prompt Engineers. SPEAKER_03: Yeah, I think a lot of companies and a lot of industries are still figuring out how and whether they get a lift from Prompt Engineering, which is also very tied into the question of whether AI will be relevant to them at all and how it's going to be relevant to them. SPEAKER_05: For a job that barely existed a few years ago, Prompt Engineering has become hot. I've seen dozens of Prompt Engineering gigs on job boards. Most pay more than $100,000 a year. I saw one that was more than $500,000. This is just one early example of a new job created by AI. Talk to leaders in medical research, engineering, finance, healthcare, education. Almost all the folks I speak to say roughly the same two things: AI is going to change everything, and we have no idea what those changes will look like. That's what we're talking about today on the third and final episode of our series on AI here on Freakonomics Radio. What does the world look like when AI is everywhere? When AI is just assumed at your job, at school, in your personal life, how will it change the way you live and how will you have to change? SPEAKER_07: This is Freakonomics Radio, the podcast that explores the hidden side of everything with guest host Adam Davidson. SPEAKER_05: When Anna Bernstein talks about making the AI talk good, she is speaking of this new generation of AI programs called large language models, known as LLMs. These LLMs take in massive amounts of data. We don't know exactly how much, but it's some huge percentage of all of human knowledge, meaning just about every publicly available book, blog, news article, as well as, well, whatever Twitter and Reddit and comments on YouTube videos represent. And then the AI does something fairly simple. It assigns probabilities to words. We do this too. 
If you heard me say, I saw a huge school when I was out on that boat, you're probably able to do a quick calculation and figure out, I mean, a school of fish. You might be using a handful of parameters to make that calculation. The word boat makes you realize I was on the water. Maybe you also know something about me and what I've been up to lately and what I'm interested in. Maybe we had just been talking about fish. Research suggests that our brains can actively process about three to seven different chunks of information at any one time. The AI large language models employ hundreds of billions, up to a trillion, discrete parameters. So the words AI writes are not based on a few facts, like that I'm in a boat and we were just talking about fish. They are based on trillions of data points. There's an old saying: knowledge is knowing a lot of facts, and wisdom is knowing which facts matter. In its raw state, an LLM has almost all of human knowledge and almost no human wisdom. There are a handful of LLMs now. There's OpenAI's GPT-4, which fuels ChatGPT as well as Microsoft's Bing. Google's LLM is called PaLM 2. We've heard so much about these LLMs. I started to wonder, what are they exactly? Are they one big box or some network of computers? Are they in one place or just some vague capability in the cloud? Who creates and manages these LLMs? SPEAKER_02: The number of people required to do it has gone up over time. When I was at OpenAI doing GPT-3, it was basically three of us who trained the original model. SPEAKER_05: That is Dario Amodei. I find this amazing. He says that basically three people did the original work on this LLM that eventually changed the world. Amodei quit OpenAI, the company that created ChatGPT, and has a new job now. SPEAKER_02: I'm CEO of an AI company called Anthropic. SPEAKER_05: Before he was CEO of Anthropic, when he was at OpenAI, Amodei was VP of research there. 
SPEAKER_02: Then around the end of 2020, I and a number of our colleagues left to start this company called Anthropic. That was really going to focus on, A, scaling up AI systems, but, B, really thinking about the safety and controllability components of it more so than we felt other folks in the field had done so far. And why did we want to do this? Things are moving very, very fast and we want to move fast too, but we want to move fast in a way that's good. SPEAKER_05: Anthropic has raised billions of dollars from venture capitalists and from Google, which owns a reported 10% of the company. Its goals are not small. It wants to win the race to be the dominant large language model in the world, surpassing OpenAI by being bigger, smarter, and more aligned with humanity. They call their LLM chatbot Claude. There are around 35 people working on it today. I asked Amodei, what does it take to build one of these LLMs? SPEAKER_02: The first stage is actually surprisingly simple. You just take this large language model and you feed it a whole bunch of text. People typically crawl the internet. So this would be something like trillions of words of content, just a wide range of stuff you can find on the internet, from news articles to Wikipedia articles about baseball to the history of samurai in Japan. And you basically tell this language model to look at document after document and, for each document, look through the document up to a point and always predict the next word. So look through the first three paragraphs. What's the first word of the fourth paragraph? Then what's the second word of the fourth paragraph? And you can always tell whether it's guessing right or wrong, because you know what the truth is. SPEAKER_05: And my understanding is early on, it's basically random. It's totally wrong. SPEAKER_02: Oh yeah, it starts totally random. I mean, it's like a newborn baby, right? You know, they can't move any of their limbs or really understand the world. 
They can't really even see effectively. And then what you see is that different things form. First it gets a basic sense of how to spell words, right? So you know, at the beginning it isn't even spouting useful words. And then it starts to get the grammar right. It learns the idea of subject, verb, adjective; the grammatical structure of sentences is correct. But it's still not really making sense. What's the famous sentence? Furious green clouds sleep furiously or something. It's a grammatically correct phrase that doesn't mean anything. It's total nonsense. Then it gets to the point where the sentences start making sense. SPEAKER_05: Let me just say, it's colorless green ideas sleep furiously. SPEAKER_02: Colorless green ideas sleep furiously. So just piece by piece, it starts to put together an actual picture of the world, and some things it learns later, like it takes a long time for it to learn about arithmetic or math or how to program, and some things it learns earlier. It tends pretty early to cotton on to kind of basic facts and basic ideas. Like it learns that Paris is the capital of France pretty early. SPEAKER_05: When you're doing this work, like if it's trillions of words, it would take, I don't know, a hundred thousand people their entire life or something to read all that. SPEAKER_02: We've no ability to do any of this manually. SPEAKER_05: So you're not sitting there going, oh, cool, it just learned Paris. SPEAKER_02: We're seeing statistics about how good it is at predicting the next word. And every once in a while we just ask it to generate things, right? We just talk to it. And you know, you can ask it, like, what's the capital of France? And when it's a week into training, it seems like it has no idea. It says, you know, clouds or something, and it doesn't even understand the type of object. And then it says, you know, Madrid. And then you're like, okay, well, it's starting to understand that it needs to be a city, but it's not getting it right. 
And then it's like Marseille, and you're like, close, but not quite, and a little more training, it's like Paris. You're like, okay, well, it's learning the concept. So you can check these things, although what the model knows as a whole is so vast, right? It's like asking a child a question at different ages. You can't understand everything that they're learning, that's going on in their brain, but you can check one particular thing if you want to, if they've learned it or not. SPEAKER_05: This, by the way, keeps coming up: people who work on AI talk about the models like their children. Some talk as if they are newborns listening to language and slowly figuring out what words mean, the rules of grammar. Although large language models often learn dozens of languages. Others talk about the models like their older kids, though still naive about the world. I sometimes think of those Amelia Bedelia books about the girl who takes everything literally. You ask her to plant a bulb and she puts a light bulb in the dirt. Here's prompt engineer Anna Bernstein. SPEAKER_03: The analogy I often use is it's like I'm picking up a cup and teaching a toddler to drink from the cup, and I like bring the cup to my mouth and drink from it. And the toddler picks up the cup and brings the cup up to their mouth. And I'm like, yes, yes. And then it brings the cup over the floor and I'm like, no. And then it just drops the cup. And I'm like, okay, we were close. SPEAKER_05: I found this surprising. The main work, the hard work at these companies is not building the original large language model. That's a big project. It costs more than a hundred million dollars. It takes several months of intense, although fairly basic, computing. But the hard work comes once all the training and the math are finished and you have a large language model and it's sitting there knowing everything, understanding nothing, equally capable of a romantic sonnet or a racist diatribe. 
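The next-word training Amodei describes can be caricatured in a few lines of Python. To be clear, this is a toy bigram model built from word counts over a tiny hard-coded corpus, nothing like a real neural LLM with billions of parameters; it only illustrates the core idea of assigning probabilities to the next word and picking the likeliest one.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "trillions of words" of training text.
corpus = (
    "paris is the capital of france . "
    "madrid is the capital of spain . "
    "the capital of france is paris ."
).split()

# Count, for each word, which word follows it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, p = predict_next("capital")
print(word, p)  # "of" always follows "capital" in this corpus
```

A real model does the same job with a neural network instead of a lookup table, which is what lets it generalize to contexts it has never seen verbatim.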
OpenAI spent months getting GPT-4 to stop being so offensive, so violent, so ugly, so useless. That requires a ton of human intervention. With Anthropic's Claude, they intervene using a method they call constitutional AI. SPEAKER_02: The idea in constitutional AI is that you write an explicit set of principles, a constitution. Claude's constitution is about five pages long. It has some principles drawn from the UN Declaration of Human Rights, some from Apple's terms of service, and some that we kind of came up with ourselves. And then you essentially train the model to do things in line with the constitution and another copy of the model to tell the first copy of the model, hey, is what you just did in line with the constitution. And so we kind of feed the model back on itself and teach it to be in line with the constitution. And the great thing about the constitution is, although I think this question of what should the values of the model be, how should it interact, I think that's a very hard question. But I think an important advance here is if we're able to make this five-page document and we can say this is what Claude is attempting to do. It might not be perfect about it, but Claude is attempting to respect human rights. Its aim is not to have political bias in any direction. And then we can point to the constitution and we can say, look, we're not perfect at this, but like the training principles are right here. You know, we've not secretly snuck anything into the model. SPEAKER_05: This might be a weird question, but if I just Googled around, I could find, you know, neo-Nazi white supremacist stuff. And I don't like that that stuff exists, but I'm a journalist. Sometimes I want to know, like, what are those folks saying? Why not just let the thing, you know, it represents the collective view of humanity. And that's what people should have. SPEAKER_02: There's a certain amount of sympathy I have for that. 
And one place I have sympathy for it is when the model is originally gobbling up all the data. I don't want to block it from reading almost any content. My view is models, like humans, should learn about these things, but I'd want the model to use its knowledge of history, including the bad parts of history, to understand what it means to respect human rights. That's what I think of as a healthy human being, right? They've seen the good stuff. They've seen the bad stuff. They understand why the bad stuff is bad. My hope is with things like the constitutional AI that we can do the same thing with AI systems. SPEAKER_05: This is a big concept Amodei is proposing, that one of the things we should teach AI is to tell right from wrong. Anthropic is not the first team that's tried this. It has been a major hurdle for researchers. Amodei is cautiously optimistic that Claude's constitution will help it clear that hurdle. And he thinks there are other reasons to feel optimistic about AI's potential. Before he started working on AI, he got a PhD in biophysics. SPEAKER_02: When I think of the places we've made progress and the places where we failed, the diseases that we've done a good job of curing are those that are fundamentally simple, right? Viral diseases are very simple. There is a foreign invader in your body. You need to destroy it. Same with bacterial. The ones we haven't succeeded that well at yet, even though we've sometimes made progress, are cancer. You have a billion different cells in your body that are out of control and rapidly self-replicating. Each of them has a different cocktail of like crazy mutations that causes it to do a different crazy thing. How do you deal with that? It runs into this incredible complexity of number of cells, number of proteins within each cell, all these incredibly complicated regulatory pathways. And my thinking on it as I went from biology to AI was, oh my God, this is beyond human comprehension. 
I'm not sure that humans can completely solve or understand these problems, but I have this feeling that maybe machines can, with still some help from some parts that need to be done by humans. But all the facts, all the incredibly complicated proteins, regulating proteins, tens of thousands of RNA sequences, like this is raw data. This feels like machine language, not human language. I mean, curing cancer has almost become a joke, right? The idea of totally curing a disease like that, to most people in that field and maybe to most people, it's been said too many times and the promise hasn't been delivered. But I actually think AI is a technology that could do it. I mean, as worried as I am about the downsides, I think the upsides are incredible. I mean, we're in a period where science has not progressed as fast as we'd like, but I think AI could unblock it. SPEAKER_05: But what about those downsides? Amodei testified in Congress about his fears of bad actors using AI to develop biological weapons. He points out that today it takes enormous resources, typically the resources of a state, to develop most biological weapons, which is scary enough. But what if AI puts those tools in the hands of terrorists and psychopaths? SPEAKER_02: It just stands to reason that if you can do wonderful things with biology, you can also do horrible things with biology. And I'm very concerned about this. The truth is the scary sounding stuff and the stuff you can get on Google, that's not really the stuff that makes the core experts afraid. What you should think about is there's a long process to really do something bad. I'm not going to talk about it in a public setting, and there are a lot of things I don't know about even, but there are certain steps in that process where the information is really hard to get and the tasks are really hard to do. And what we looked at is, is that information being implicated? And the concern is that we're getting to the beginning of where it is. 
We're not there yet, but two to three years is our best guess. I don't know what's going to happen, but that's our best guess for when we'll get there. That's scary. SPEAKER_05: There's a lot of people with really bad ideas, and the world puts up barriers to them because they don't have knowledge and resources. And so it's hard to think, how do we lower barriers for good stuff and maintain barriers for bad stuff? SPEAKER_02: I think that's the hardest question. The term we use is differential technology development. How do we develop the good stuff ahead of the bad stuff? SPEAKER_05: These debates are essential. Is AI good? Is it bad? How can we encourage the goodness and try to head off the worst of the badness? Recently, Anthropic, OpenAI, Microsoft and Google announced they were forming an oversight group to self-police their industry. It's called the Frontier Model Forum. Amodei and other AI executives have also called for the government to develop regulations for this new technology. There's obviously an inherent tension here. Amodei says AI could be very dangerous and should be regulated. At the same time, he's running a company whose very mission is to make AI more powerful. SPEAKER_02: I think of myself as someone who's trying to do the right thing, but I can't say that my company has all the right incentives here. You're kind of relying on me to go against the company's incentives, and that's true for the other companies as well. Somehow the government needs to play a watchdog or enforcement role while leveraging the expertise of the companies. There's probably a role for nonprofit organizations too. And so there needs to be some kind of ecosystem where the strengths of each component help to ameliorate the weaknesses of the other components. SPEAKER_05: How to regulate AI is a big, important issue that's in the midst of being figured out. In the meantime, AI is here. We can't stop it. 
A lot of us are using it not to develop bioweapons, but to beef up our resumes, inspire ideas for a dinner party or just to play around. So it makes sense to try to figure out what it's good at, and getting to know AI is kind of like getting to know a stranger. It takes time. SPEAKER_03: I mean, at this point I've spent, I don't want to know how many hours with these models. SPEAKER_05: After the break, we'll get to know AI a little better. I'm guest host Adam Davidson, and you're listening to Freakonomics Radio. SPEAKER_04: Freakonomics Radio is sponsored by Mint Mobile. From the gas pump to the grocery store, your utility bills and favorite streaming services, inflation is everywhere. Thankfully, there is one company out there that's giving you a much needed break. It's Mint Mobile. As the first company to sell premium wireless service online only, Mint Mobile lets you order from home and save a ton with phone plans starting at just 15 bucks a month. All plans come with unlimited talk and text, plus high speed data delivered on the nation's largest 5G network. And you can use your own phone and keep the same phone number. To get your new wireless plan for just 15 bucks a month and get the plan shipped to your door for free, go to mintmobile.com slash freak. Cut your wireless bill to 15 bucks a month at mintmobile.com slash freak. Freakonomics Radio is sponsored by Saatva. You ever hear the expression, out with the old, in with the new? Well, that's exactly the strategy most mattress companies employ when they cut their prices and the goal is to move out all their old mattresses to make room for their new models. The lower price is the carrot to get you to take the old ones. But that raises the question, why settle for an older mattress when you can have a brand new freshly made Saatva luxury mattress for considerably less? Saatva's are famously comfortable and because they're sold online, they are made to order and cost half the price of the top retail brands. 
So it comes down to this: an old mattress that's been sitting around, or a freshly made luxury mattress that costs way less. Some decisions have no-brainer written all over them. And right now, save $200 on $1,000 or more at Saatva.com/Freakonomics. That's S-double-A-T-V-A dot com slash Freakonomics. SPEAKER_05: AI, like any new technology, will create winners and losers. For now, Anna Bernstein is one of the winners. When she started this work, the job didn't have a name. She and her bosses barely knew it was a job. SPEAKER_03: They hired me on contract for a month to fix tone at the time. They were like, if we can really nail, you know, friendly or professional or formal or informal, that'll improve the product. And I kind of figured that out for them. And so I was hired full time. And yeah, it's been a really wild ride ever since. SPEAKER_05: I find Bernstein's experience so interesting because she's an early pioneer into this new world of large language model AI. What she has learned doing copywriting will, I think, eventually apply to people in lots of other fields. After all, whatever we do with AI in biological research or aerospace technology or whatever, human beings will need to figure out how to communicate their goals to this computer brain. What Bernstein learned, and what she taught me, is that learning to talk effectively with an AI LLM requires you to spend some time thinking about how you talk to human beings. SPEAKER_03: It's definitely opened my eyes to how much of human communication is inference, is under this tent of context, where when we speak to each other, we think we're speaking in a nuanced and precise way. And we are, but we're doing that through relying on context cues, and even in written communication. I'm not just talking about body language and tone, but that as well. But relying on the shared context with the person we're talking to, this sort of fuzziness or, like, slack they'll give us. 
And you have these models that are trained for both really, really precise and literal instruction that human beings would struggle with because they are very intricate and it's a lot of instruction at once. And at the same time, the same model is supposed to be good at this sort of fuzzier communication with people. And those two really at times are at odds with each other. SPEAKER_05: Think of saying something like, it would be fun to get together soon to another person. That phrase could mean what it says and could be followed up by an email asking to schedule a date. It could also mean, and maybe it's more likely to mean that it would very much not be fun to get together soon. Maybe I'm just saying that to be polite and to get out of this awkward encounter. That's a great and sometimes very confusing aspect of language. The very same words can mean a specific precise thing or the exact opposite. For me, at least so far, this has been a bit maddening when I use AI tools. I'm thinking of something I want them to do. I type out the words, telling it to do the thing I want, and they don't quite do what I was hoping. I look at the words I use and realize I hadn't put in enough context to guide the AI to what I want. For someone like Anna Bernstein, though, that's not maddening. That's the fun. SPEAKER_03: Now, at the intersection of those two things is actually something I really enjoy, which are really, really well-crafted prompts, even like simple prompts where when you get just the right wording, it does exactly what you want. If you can describe exactly what you want, it gives you exactly what you want, where you really hit the nail on the head for describing the type of copy. And that can be so powerful. SPEAKER_05: My early experiences with AI were frustrating. I'd type in a prompt and get a very boring and generic response. Generate a scientifically backed meal plan to lose weight. Give me a list of movies that an 11-year-old boy and his parents might enjoy. 
These are pretty generic. I didn't tell the AI what I like to eat or if I have any dietary restrictions. I didn't explain what my son is into, what I'm into, what my wife's into. If you don't tell the AI, it won't know. So while it has been exposed to so much, it'll just give you a blah, middle-of-the-road answer. The good news is that it's not hard to improve the AI's results. Just be more specific. Give it more context and learn what kinds of prompts lead to better results. Anna Bernstein has a YouTube video that I found helpful. It's called Master the Art of Prompt Writing: 6 Tips to Writing Better Prompts, if you want to check that out. So what's Bernstein's advice? SPEAKER_03: It's going to sound really basic, but just getting the right wording, trying synonyms, trying different syntax variations on the same theme can really unlock capabilities in the AI. You can also pile synonyms on there if you're trying to, like, get it to use a very enthusiastic voice and just enthusiastic isn't quite enough. You can pile on, like, excited, hyped up, and just, like, use all of those at once. SPEAKER_05: That has been a huge realization: often the problem is not the AI, or not just the AI. The problem is me. I haven't actually thought through, wait, what do I want? You know, one thing I did is write an essay about, and then it was the subject. In my case, I'm working with a friend of mine who teaches at Harvard. He's an Assyriologist. So: write a paper about ancient Assyria. And it would just be a really generic paper. He actually said it probably could have gotten like a B-minus at Harvard, but it forced me to realize, well, what do I actually want to do? So use a lot of synonyms, but also use the right words, and particularly use the right verbs, use powerful verbs. Can you explain that? SPEAKER_03: The paper is a great example, because you could maybe change that to something like explore ancient Assyria in the form of a paper, et cetera, et cetera. 
And that might give a more interesting, dynamically written result that does more of what you want, which is to explore the topic rather than just sort of rotely regurgitate facts about the topic. So you really want to think about the purpose of what you're doing and imbue the verb with that. SPEAKER_05: One of the ways in which it's really different from us: you have the rule written down as no negativity allowed, the idea that it really doesn't perform well when you tell it not to do something. SPEAKER_03: It depends slightly on the negative command. But as a general rule, if you tell it, you know, don't be formal, don't be stiff, it may get the message, but it'll at the same time be like, oh, interesting, I know what those words mean, I'm thinking about them as I'm doing this task, and it'll just influence the task. So at best, you'll get no effect. And at worst, it'll be a lot more formal and stiff than when you started. SPEAKER_05: With A.I., it's quite effective to just use normal language. It works well to just write down your thoughts or, even better, speak them like you'd be telling them to a person. A friend told me to think of talking to an intern: a capable, super-eager, but hopelessly naive young intern. Use regular language, but add as much context as you can. So don't just say, I want to go to Montreal for the weekend, give me a list of fun things to do. Give way more context: I want to go to Montreal for the weekend. My wife and I love trying new local foods and out-of-the-way restaurants. Our 11-year-old son loves anything to do with sports. We love to go on long walks in regular neighborhoods, and we don't like touristy spots. You get the picture. Give a lot of context. A few other tricks or tools I find helpful: tell the A.I. to ask you questions. So I might give it a prompt to do something and then I'll add, ask me any questions that might help you fully meet my needs. 
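The tips above, adding concrete context, piling on synonyms for the voice you want, and inviting the model to ask questions, can be sketched as a small prompt-building helper. To be clear, `build_prompt` is a hypothetical function invented for illustration, not part of any AI product's API; it just shows how much more guidance a well-specified prompt carries than a bare request.

```python
# Illustrative only: build_prompt is a hypothetical helper, not a real API.
def build_prompt(task, context=(), voice_synonyms=(), ask_questions=True):
    parts = [task]
    for detail in context:              # specifics the model can't infer on its own
        parts.append(detail)
    if voice_synonyms:                  # pile on synonyms for the desired voice
        parts.append("Write in a " + ", ".join(voice_synonyms) + " voice.")
    if ask_questions:                   # the "interview me" trick
        parts.append("Ask me any questions that would help you meet my needs.")
    return " ".join(parts)

prompt = build_prompt(
    "Give me a list of fun weekend things to do in Montreal.",
    context=[
        "My wife and I love out-of-the-way local restaurants.",
        "Our 11-year-old son loves anything to do with sports.",
        "We avoid touristy spots.",
    ],
    voice_synonyms=["friendly", "warm", "enthusiastic"],
)
print(prompt)
```

The resulting string is just the context-rich Montreal prompt described above; the point is that each tip becomes one more concrete sentence the model can condition on.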
Another trick a friend told me: give it an instruction and then write, do not do anything yet. First, tell me what you think I'm asking you to do and let me know what you find confusing. For what it's worth, I sometimes find this process pretty fun, like I'm learning how to communicate with an alien. And I sometimes find this process maddening, like I'm communicating with an alien. SPEAKER_03: I mean, I love it. I have been on, like, meetings with coworkers where they're like, this prompt isn't working, can you help me? And I start fixing the prompt and they're like, sorry about making you do this tedious thing. And I'm like, what? What tedious thing? I think it also comes from a bit of a poetry-writing background, where at least my process for writing poetry involves, like, really concerningly obsessive editing, where I am resetting the same line break over and over again and giving it time and rereading it. I had this poetry professor who was like, the next right word is usually not your first thought or, like, your hundred and fifty-third thought, but it's really, like, your second thought or third thought. SPEAKER_05: I've been focused on avoiding the hype, the dystopian and the giddy. And truth be told, for all the attention, AI is still in such an early stage that most of the ways I use AI, and that I see others use it, are pretty banal. SPEAKER_08: I think that it helps to have a utopian vision here. SPEAKER_05: That's Ethan Mollick. He's a professor of management at the Wharton School at the University of Pennsylvania, where he studies entrepreneurship and technological change. And I'm not aware of anyone who is having more fun with AI than Ethan Mollick. SPEAKER_08: I've been playing around with it for a while. I've been AI-adjacent for my whole career. I am not a computer scientist, but I've been thinking about uses for a long time. SPEAKER_05: Mollick publishes a weekly newsletter called One Useful Thing, which provides a stream of things you can get AI tools to do.
One area that Mollick finds AI especially helpful with is entrepreneurship, which is what he teaches at Wharton. SPEAKER_08: We're a nation of entrepreneurs-in-waiting. When we do surveys, a third of people have had an entrepreneurial idea in the last five years that they wish they could execute on, and almost nobody does anything. SPEAKER_05: What does that look like, AI helping would-be entrepreneurs actually pursue their dream? After the break, Ethan Mollick and I go into business together. I'm guest host Adam Davidson, and this is Freakonomics Radio. SPEAKER_04: Freakonomics Radio is sponsored by Inspire Sleep. If you struggle with CPAP, you know that getting a good night's sleep can be difficult. There's discomfort and tossing and turning that can make you dread the start of your day. For those still struggling with CPAP, it doesn't have to be this way. With Inspire, you can say goodbye to the masks and the hoses and CPAP machines. But you may be wondering, what is Inspire? It's an implant that works inside your body to keep your airway open while you sleep. To learn more about Inspire and see if it may be right for you, visit InspireSleep.com. Inspire. Sleep apnea innovation. No mask, no hose, just sleep. Inspire is not for everyone. Talk to your doctor to see if it's right for you and review important safety information at InspireSleep.com. Freakonomics Radio is sponsored by Honey Nut Cheerios. We all know that heart health is oh so important and with Honey Nut Cheerios, making heart healthy decisions doesn't have to be complicated. Make Honey Nut Cheerios part of your breakfast with whole grain oats and a touch of real golden honey. Not only do they taste great, but they can help lower cholesterol. Eating a heart healthy breakfast like a bowl of Honey Nut Cheerios can help set you up to make better choices throughout the day. Add a change of heart to your shopping cart.
SPEAKER_05: I'm Adam Davidson, sitting in for Stephen Dubner on this, the final episode of our series on AI. I know for me, one of the best ways to get going on something new, a new business, a new project, a new exercise regimen, is to get a partner, a buddy who knows things I don't, who can share the journey with me. But it is really hard to find good partners. SPEAKER_08: AI is a general purpose technology. It screws up in some areas. You have to use it the right way. But it's really exciting to have a generalist co-founder with you who can give you that little bit of advice, a little bit of encouragement, push you over the line. And that makes a big difference in people's lives. SPEAKER_05: Ethan Mollick teaches entrepreneurs how to start businesses. He's at the Wharton School, one of the top business schools in the world. He also studies what leads to success and what leads to failure when people start businesses. And he says that a major barrier to success is not some fancy formula cooked up in the Ivy League. It's so much simpler. One big reason people fail is that they stop too soon. Many stop before they do anything: they have an idea and they never pursue it. Others pursue it for a while, but hit some blocks, and that's when they stop. AI is far from perfect. Mollick will certainly agree with that. But it's always ready to push another step. It never just gives up. SPEAKER_08: If you ask the AI, especially GPT-4, the most advanced model you can get access to, and you say, what should I do next to do this? The steps that it will give you are perfectly reasonable. Are they the best steps possible? No. Will it make mistakes? Probably. But overcoming that inertia, getting a little push about what to do next, is really helpful. And then it will help you actually do those experiments. I'm forcing all my students to actually interview the AI as an actual potential customer. That's not because it's as good as interviewing a customer.
You absolutely have to interview potential customers, but it actually gets you partway there. It greases the wheels in a lot of ways toward doing that initial testing, and it overcomes those barriers where you might otherwise say, I need to hire someone to do this, or I don't know what to do next, because it can help you overcome that inertia. SPEAKER_05: I find that really exciting. That feels really cool, if every person in the world has an entrepreneurship engine in their home. SPEAKER_08: I absolutely love the phrase entrepreneurship engine, because most entrepreneurship is not, like, big-E entrepreneurship. It's not trying to launch the next Twitter. It is trying to start a small business in your town. SPEAKER_05: We've all had these fantasies, right? Of opening a lovely business in our town. Here's mine. I live in a small town in Vermont. The other day I was two towns over in Bristol, which has the perfect tiny little small-town-Vermont main street. I was with some friends eating lunch, and across the street I saw a sign announcing that the local stationery store was for sale. I started to fantasize, telling my friends how much I would love to buy that place and turn it into my dream stationery store. I love stationery: pens and paper and stamps and the whole thing. I love fancy artisanal fountain pens, and I also love big stacks of regular old copy paper and a giant wall filled with Bic pens. But of course it was just a silly fantasy, some idle conversation. But talking to Ethan Mollick, I started to wonder: what if I had used an AI tool to flesh my idea out a bit, see if it was at all viable? SPEAKER_08: So you say you have an idea, right? But I still think ideation is its own phase, because a lot of ideation is just about combining ideas together with variation. That's something that AI does really well, because it finds connections between ideas. So I might start off with, I want to launch a stationery store in Vermont.
Give me 20 different variations on stationery store ideas that could be great. Give me a hundred ideas. And then I would probably do some constraint thinking. Let's say you have unlimited money: give me 20 ideas with unlimited money. So generally, when you prompt the AI, you want to tell it who it is, what context it's operating under. SPEAKER_05: And let's slow down there for a second, because that was a real big game-changer for me: you literally just tell the AI what it is. SPEAKER_08: You're not actually, like, magically invoking Warren Buffett. When you say you're Warren Buffett, what you're doing is putting it in a context, right? Because it's absorbed all of, sort of, human knowledge, right? So what context is it operating in? Who's its audience? So I would start with that idea of generating a lot of ideas. And by the way, develop the ideas you like. Tell me more about idea two. Give me more variations on idea six. What would the steps be at idea 12? So you're interacting with it. You're not just typing a prompt in and getting a query back. It's not Google. The more you interact, the better. And you can also give it your context. You could say, I'm really interested in stationery as a lifestyle business, I want to do more than break even. The more context you give it, the better off you are. So I would be experimenting with that. And again, you'll find out what works for you. And I would also be super careful: if you know nothing about stationery, you're going to get lied to by the AI. SPEAKER_05: Sometimes I'm actually tempted. Can we just do this? Why don't we do this? Here, I'm logging into my OpenAI account. I'm going to click on GPT-4. Yes. All right. So what should we start with? SPEAKER_08: Well, we can just say, you're very good at coming up with creative ideas. Let's start with, like, the most basic kind of approach.
SPEAKER_05: So, what are 20 ideas for a stationery store in Bristol, Vermont that would, what should I say, that would attract people from all over the world? Something like that? SPEAKER_08: Yes, let's do that. Yep. And you will not know until you type this in how useful this will be. So the point is, you can't just bounce off it once, right? We're going to get this working, right? So let's figure out what we get out of this. It may be the wrong direction. So the great thing is, we can nudge it back into shape, right? SPEAKER_05: All right. So it's saying I could do an eco-stationery store, products made from recycled and sustainably sourced materials. Historical stationery. That's actually kind of neat, I like that one actually. Bespoke stationery, custom-designed stationery. That feels more like a virtual business for me. Vermont-inspired stationery. I'm not quite sure that works. Worldwide stationery, maybe. Locally made artisanal stationery. That one doesn't feel as exciting somehow, because I think that exists. Stationery subscription business, high-tech stationery, ephemera store, vintage stationery, a cafe where people can relax, write, or sketch while enjoying a good coffee or tea. SPEAKER_08: And by the way, I would expect none of these to be perfect ideas, right? Hopefully, though, I'm seeing, I'm watching your eyes here, right? You're hopefully making connections in these cases, where it's like, oh, I hadn't thought about the vintage angle from that way. SPEAKER_05: It surprised me to see just how fun and useful this process, which Mollick calls constrained ideation, could be. Even when the ideas were silly or terrible, it got my brain working in new ways, thinking new thoughts. And since the AI tools are so fast and easy, you can zoom in to explore the most ridiculous ideas. SPEAKER_08: Let's do something that's like, stationery that would be useful to astronauts, or something like that. Right. SPEAKER_05: Gotcha. Okay. So let's see.
Astronomy-themed stationery, stargazing, space-tech accessories. SPEAKER_05: I would definitely drive a few hours for a space-tech accessory store, but I'm both a stationery nerd and a space nerd, so it's got me in two ways. Astronaut memoirs and notebooks, spacecraft blueprints, glow-in-the-dark. All right, I'm liking the theme, but I'm not seeing why that would be in Bristol, Vermont. SPEAKER_08: Absolutely. But you know, it's costless, right? So the other thing we don't quite realize is, like, you just completed an idea-generation session in the minute and a half we were talking, right? This would have been a whole process before. There's no reason not to throw 50 constraints at the system. You spend a half hour and you'll have a list of ideas that you're super interested in. So dig into anything, like, give me five variations on that one. SPEAKER_05: Yeah. Give me five variations on a stationery store that sells historic stationery. And it's just immediately jumping in, right? Ancient-script stationery store. SPEAKER_08: It gets kind of obsessive, right? Like, you're like, ooh, I could keep getting it to generate stuff. Like, my mind is spinning now. Like, I've got ideas for a stationery store to compete with yours one town over, because this sounds great. SPEAKER_05: This was starting to get fun. Now, I don't want to open a space-themed stationery store. That's not the point. The point is that constrained ideation can get your brain open to all sorts of possibilities. I actually started to give that historical, ancient stationery idea some serious thought. I'm so fascinated by old ways of writing. I would love a place where I could buy clay and a stylus to practice cuneiform writing, get some vellum and dip pens, or some real papyrus. I was just reading how in the ancient Middle East, some people wrote by scratching into pliable lead. Is it just me, or would you love to go to a store that had all of that?
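The constrained-ideation loop demonstrated above, generate a batch of ideas under a constraint, then drill into the ones you like, can be sketched as plain prompt templates. These helpers (`ideation_prompt`, `follow_up`) are hypothetical names invented for illustration; they just build the text you would paste into a chat model, not calls to any real API.

```python
# Sketch of constrained ideation as prompt templates. The model call
# itself is omitted; these functions only construct the prompts.

def ideation_prompt(business, n=20, constraint=None):
    """Build the initial idea-generation prompt, optionally constrained."""
    prompt = (f"You are very good at coming up with creative ideas. "
              f"Give me {n} ideas for {business}.")
    if constraint:
        # Constraints (unlimited money, astronauts, ...) steer variation.
        prompt += f" Constraint: {constraint}"
    return prompt

def follow_up(idea_number, kind="variations", n=5):
    """Build a drill-down prompt: interact, don't one-shot."""
    return f"Give me {n} {kind} on idea {idea_number}."

print(ideation_prompt("a stationery store in Bristol, Vermont",
                      constraint="it should attract people from all over the world"))
print(follow_up(6))
```

Because each constraint is one parameter, throwing 50 constraints at the system, as suggested above, is just a loop over 50 strings; the expensive part is the human judgment about which ideas are worth the follow-up prompts.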
And with Mollick's guidance, before long I had a halfway decent business plan and market-research proposal. Now, I'm not actually going to open a historical ancient stationery store, although, man, I do kind of want to. But the point is, the process was fun, and in 12 minutes I accomplished what would probably normally take months. And who knows? I like playing around with different business ideas, and maybe someday I'll actually do one of them. A key point that Mollick showed me is that this is very much not a passive thing. I used to have the idea that AI meant giving over all the thinking work to some computer, that using AI was cheating. And it can be, like all the reports of people getting AI to write their school assignments for them. But Mollick's approach of using AI as a thought partner, pushing me forward, playing around with goofy ideas, fleshing out semi-formed thoughts into more rigorous ones, felt very active, and it got me closer to what I actually think and want. It made my own thinking clearer to me. SPEAKER_08: In every case where we're using this, you should be able to push the AI to a point where you're getting to a back-and-forth interaction with it, where you as a human are adding a lot to that interaction, and it's helping you by giving you the immediate gratification you need, the various ideas that you need. And if you can get there, then I think this becomes really magical. SPEAKER_05: I caught a bit of Ethan Mollick's infectious optimism. It started to help me see that AI doesn't necessarily have to replace us. It can expand the range of human possibilities, allowing us to do far more than we could before. Ethan Mollick's wife, Lilach Mollick, is also at Wharton, where she leads their digital learning programs. They both weren't sure at first how AI would impact education. There are a lot of fears, reasonable fears, that it will hurt education: instead of learning, students will just use AI to pretend to have learned.
But the Mollicks have been experimenting with AI as an educational tool and have found that it can be quite helpful. SPEAKER_08: So for example, as much as we hate them, tests are one of the most powerful ways to learn, because they not only test your knowledge, but they actually increase your future recall. Writing tests is hard to do. This writes tests for you. Educating people is the key to unlocking everything. If we could do that at scale, what does that mean? That's incredibly exciting. SPEAKER_05: We started this series of episodes asking if AI is more good or more bad for humankind. We've made clear throughout that nobody really knows. This thing is so new, so weird, so fast-changing, that any prediction at all, in any direction, is quickly overtaken by a surprising reality. So I'm not going to predict the future. I do think we can say some things with some confidence about right now, and here they are. Number one: the current iteration of AI is not worthy of the hype we've seen. It's a tool. Yes, it's a big deal. Pay attention to it, but it's nowhere near mature. Two: the future of our lives with AI is not yet written. Whether it makes life, on balance, better or worse for most of us is up to us. It is how human beings use AI, and set the rules for AI, that will determine its impact. Three: AI will almost certainly transform your job and your company and your entire industry. You will probably still have a job, if you have one now. You'll be doing work, but it will be done differently, in some ways better and in some ways worse. And four, this is my one recommendation: don't sit it out. Get to know it. You can hate it, you can love it, you can have mixed feelings, but the more you understand it, the better prepared you'll be. Just as I was finishing this episode, a friend told me about something that had just happened to him. My friend was in India because his mother-in-law was in the hospital, 84 years old, on a ventilator.
The doctor said she would pass within a day or so. Her husband, who was 92, was distraught for all the obvious reasons, but for another one too. He wanted to tell her how much she had meant to him, how wonderful their 60-plus years of life together had been, but he didn't know how to say that in words. As it happens, his granddaughter, my friend's daughter, works in AI. She guided her grandfather through some AI prompts, asked him some questions, and entered them into ChatGPT. It produced a poem, a long poem. He said it perfectly captured his feelings about his wife, and that on his own he never would have been able to come up with the right words. He sat next to her, reading the poem line by line. She died soon after, and he said it allows him to know he told her everything. I want to thank you for listening. It has been so fun to work on this show, a show I've loved for as long as it's been around. Thank you for letting me sit in for Stephen, who will be back next week. SPEAKER_04: This is Stephen, and I want to thank Adam Davidson for taking us on this lovely, leisurely stroll through what AI is and isn't and what it can be. If you missed an earlier episode in the series, go catch up on any podcast app or on Freakonomics.com. Just look for the series called How to Think About AI. Coming up next time on the show: the union that represents NFL players recently conducted their first-ever survey about workplace conditions. I would never have thought to ask, are there rats in your locker room? And they gave a letter grade to each of the league's 32 teams. SPEAKER_01: This is really about, are we giving you the inputs you need to be as productive as possible? SPEAKER_04: The NFL is the richest and most successful sports league in history. Each team is worth at least four or five billion dollars. SPEAKER_07: Nobody wants to be known as the cheapskate. Before, when it was rumored you were the cheapskate, it was harder to prove. Now there's data.
SPEAKER_04: And what does the data say? That's next time on the show. Until then, take care of yourself. And if you can, someone else too. Freakonomics Radio is produced by Stitcher and Renbud Radio. You can find our entire archive on any podcast app or at Freakonomics.com, where we also publish transcripts and show notes. This series was produced by Julie Kanfer and mixed by Eleanor Osborne, Greg Rippin, Jasmin Klinger, and Jeremy Johnston. We also had help this week from Daniel Moritz-Rabson. Our staff also includes Alina Kulman, Daria Klenert, Elsa Hernandez, Gabriel Roth, Lyric Bowditch, Morgan Levey, Neal Carruth, Rebecca Lee Douglas, Ryan Kelley, Sarah Lilley, and Zack Lapinski. Our theme song is Mr. Fortune by the Hitchhikers. The rest of our music is composed by Luis Guerra. As always, thank you for listening. SPEAKER_05: I don't know that ChatGPT would ever tell me, by the way, don't do this, this is a dumb idea. SPEAKER_08: Entrepreneurship is this alternating thing between, like, I'm the master of the universe, and, oh crap, this was terrible, I can't believe I did this. SPEAKER_07: The Freakonomics Radio Network, the hidden side of everything. SPEAKER_03: Stitcher. SPEAKER_06: At JPMorgan Chase, we see the potential in people like John Burke, president of Trek Bicycles. SPEAKER_09: My dad started Trek in Waterloo, Wisconsin, and now it spans the globe. You want to take what was given to you and you want to build it and you want to pass it along. SPEAKER_06: That's why we're here, to help make going global happen. JPMorgan Chase. JPMorgan Chase Bank, member FDIC. SPEAKER_01: So many teenagers waiting to be adopted from foster care feel like their lives are over. They've given up hope of having a permanent home and are terrified of aging out with no support system. Right now, more than 113,000 children are waiting to be adopted in the U.S. The Dave Thomas Foundation for Adoption is dedicated to finding them the right family before it's too late.
Learn how you can help at DaveThomasFoundation.org slash learn more.