SPEAKER_03: Welcome to Episode 124 of the all in podcast. My understanding is there's going to be a bunch of global fan meetups for Episode 125. If you go to Twitter and you search for all in fan meetups, you might be able to find the link but just
SPEAKER_00: to be clear, we're not they're not official all in this. They are fans that self organized, which is pretty mind blowing, but we can't vouch for any particular organization, right? Nobody knows what's going to happen at these things. You can
SPEAKER_03: get robbed. It could be a setup. I don't know. But I retweeted it anyway, because there are 31 cities where you lunatics are getting together to celebrate the world's number one business technology podcast. It is pretty crazy. You know what this reminds me of is in
SPEAKER_00: the early 90s, when Rush Limbaugh became a phenomenon. There used to be these things called rush rooms where like restaurants and bars would literally broadcast rush over their speakers during I don't know, like what for the morning through lunch broadcast, and people would go to these rush rooms and listen together. What was it like sex when you were
SPEAKER_03: about 1617 years old at the time? What was it like when you hosted this? It was a phenomenon. But I mean, it's
SPEAKER_00: kind of crazy. We've got like a phenomenon going here where people are organizing. You've said phenomenon three times
SPEAKER_02: instead of phenomenon. He said it's phenomenon. Phenomenal. Why Saxon a good moochma what's going on?
SPEAKER_03: There's a specific secret toe tap that you do under the
SPEAKER_02: bathroom stalls when you go to a rush room. I think you're
SPEAKER_00: getting confused about a different event you want to
SPEAKER_03: let your winners find Rain Man David Saxon. We open source it to the fans and they've just gone
SPEAKER_02: crazy.
SPEAKER_03: There's a lot of actual news in the world and generative AI is taking over the dialogue and it's moving at a pace that none of us have ever seen in the technology industry. I think we'd all agree the number of companies releasing product and the compounding effect of this technology is phenomenal. I think we would all agree a product came out this week called auto GPT and people are losing their mind over it. Basically what this does is it lets different GPT s talk to each other. And so you can have agents working in the background and we've talked about this on previous podcasts, but they could be talking to each other essentially, and then completing tasks without much intervention. So if let's say you had a sales team and you said to the sales team, hey, look for leads that have these characteristics for our cell software, put them into our database, find out if they're already in the database, alert a salesperson to it, compose a message based on that person's profile on LinkedIn or Twitter or wherever, and then compose an email, send it to them if they reply, offer them to do a demo and then put that demo on the calendar of the salesperson, thus eliminating a bunch of jobs and you could run these, what would essentially be cron jobs in the background forever, and they can interact with other LLMs in real time sacks. I've just gave but one example here. But when you see this happening, give us your perspective on what this tipping point means. Let me
SPEAKER_00: take a shot at explaining it in a slightly different way. Sure, not that your explanation was wrong, but I just think that maybe explain it in terms of something more tangible. Sure. So I had a friend who's a developer who's been playing with auto GPT. By the way, so you can see it's on GitHub, it's kind of an open source project, it was sort of a hobby project, it looks like that somebody put up there. It's been out for about two weeks. It's already got 45,000 stars on GitHub, which is a huge number. Explain what GitHub is for the audience.
SPEAKER_03: It's just a code repository. And you can create, you know, repos
SPEAKER_00: of code for open source projects. That's where all the developers check in their code. So you know, for open source projects like this, anyone can go see it and play with it. It's
SPEAKER_02: like Pornhub, but for developers,
SPEAKER_03: it would be more like amateur Pornhub because you're contributing your scenes as it were your code. Yes, but yes, continue.
SPEAKER_00: This thing has a ton of stars. And apparently just last night, I got another 10,000 stars overnight, this thing is like, exploding in terms of popularity. But in any event, what you do is you give it an assignment. And what auto GPT can do that's different is it can string together prompts. So if you go to chat GPT, you prompt it one at a time. And what the human does is you get your answer. And then you think of your next prompt. And then you kind of go from there, and you end up in a long conversation that gets you to where you want to go. So the question is, what if the AI could basically prompt itself, then you've got the basis for autonomy. And that's what this project is designed to do. So what you'll do is what my friend did, he said, Okay, you're an event planner, AI. And what I would like you to do is plan a trip for me for a wine tasting in Healdsburg this weekend. And I want you to find like the best place I should go. And it's got to be kid friendly, not everyone's going to drink, we're gonna have kids there. And I'd like to be able to have other people there. And so I'd like you to plan this for me. And so what auto GPT did is it broke that down into a task list. And every time I completed a task, it would add a new task to the bottom of that list. And so the output of this is that it searched a bunch of different wine tasting venues, it found a venue that had a bocce ball and lawn area for kids, it came up with a schedule, it created a budget, it created a checklist for an event planner. It did all these things. And my friend says, he's actually going to book the venue this weekend and use it. So we're going beyond the ability just for a human to just prompt the AI we're now the AI can take on complicated tasks. And again, they can recursively update its task list based on what it learns from its own previous prompt. So what you're seeing now is the basis for a personal digital assistant. This is really where it's all headed is that you can just tell the AI to do something for you pretty complicated. And it will be able to do it, it will be able to create its own task list and get the job done in quite complicated jobs. So that's why everyone's losing their shit over this freebird
SPEAKER_03: your thoughts on automating these tasks and having them run and and add tasks to the list. This does seem like a sort of seminal moment in time that this is actually working. I think
SPEAKER_01: we've been seeing seminal moments over the last couple of weeks and months, kind of continuously. Every time we chat about stuff or every day, there's new releases that are paradigm shifting, and kind of reveal new applications and perhaps concepts structurally that we didn't really have a good grasp of before some demonstration came across chat GPT was kind of the seed of that. And then all of this evolution since has really, I think, changed the landscape for really how we think about our interaction with the digital world and where the digital world can go and how it can interact with the physical world. It's just really profound. One of the interesting aspects that I think I saw with some of the applications of auto GPT were these almost like autonomous characters in in like a game simulation that could interact with each other or these autonomous characters that would speak back and forth to one another, where each instance has its own kind of predefined role. And then it explores some set of discovery or application or prompt back and forth with the other agent. And that's the kind of recursive outcomes with this agent to agent interaction model, and perhaps multi agent interaction model, again, reveals an entirely new paradigm for, you know, how things can be done simulation wise, you know, discovery wise, engagement wise, where one agent, you know, each agent can be a different character in a room. And you can almost see how a team might resolve to create a new product collaboratively by telling each of those agents to have a different character background or different set of data or a different set of experiences or different set of personality traits. And the evolution of those that multi agent system outputs, you know, something that's very novel, that perhaps any of the agents operating independently, we're not able to kind of reveal themselves. So again, like another kind of dimension of interaction with these with these models. And it again, like every week, it's a whole nother layer to the onion. It's super exciting and compelling. And the rate of change and the pace of kind of, you know, new paths being being defined here, really, I think makes it difficult to catch up. And particularly, it highlights why it's gonna be so difficult, I think, for regulators to come in and try and set a set of standards and a set of rules at this stage, because we don't even know what we have here yet. And it's gonna be very hard to kind of put the genie back in the box.
SPEAKER_03: Yeah. And you're also referring, I think, to the Stanford and Google paper that was published this week, they did a research paper where they created essentially the Sims, if you remember that video game, put a bunch of what you might consider NPCs, non playable characters, you know, the merchant or the whoever in a, in a video game, and they said, each of these agents should talk to each other, put them in a simulation, one of them decided to have a birthday party, they decided to invite other people, and then they have memories. And so then over time, they would generate responses like I can't go to your birthday party, but Happy Birthday. And then they would follow up with each player and seemingly emergent behaviors came out of this sort of simulation, which of course now has everybody thinking, well, of course, we as humans, and this is simulation theory are living in a simulation, we've all just been put into this. Chama is what we're experiencing right now. How impressive this technology is? Or is it? Oh, wow, human cognition, maybe we thought was incredibly special. But we can actually simulate a significant portion of what we do as humans. So we're kind of taking the shine off of consciousness. I'm not sure it's that. But I would make
SPEAKER_02: two comments. I think this is a really important week. Because it starts to show how fast the recursion is with AI. So in other technologies, and in other breakthroughs, the recursive iterations took years, right? If you think about how long did we wait for from iPhone one to iPhone two, it was a year, right? We'd waited two years for the App Store. Everything was measured in years, maybe things when they were really, really aggressive, and really disruptive were measured in months. Except now, these incredibly innovative breakthroughs are being measured in days and weeks. That's incredibly profound. And I think it has some really important implications to like the three big actors in this play, right? So it has, I think, huge implications to these companies. It's not clear to me how you start a company anymore. I don't understand why you would have a 40 or 50 person company to try to get to an MVP. I think you can do that with three or four people. And that has huge implications then to the second actor in this play, which are the investors and venture capitalists that typically fund this stuff, because all of our capital allocation models were always around writing 10 and 15 and $20 million checks and $100 million checks, then $500 million checks into these businesses that absorbs tons of money. But the reality is like, you know, you're looking at things like mid journey and others that can scale to enormous size with very little capital, many of which can now be bootstrapped. So it takes really, really small amounts of money. And so I think that's a huge implication. So for me, personally, I am looking at company formation being done in a totally different way. And our capital allocation model is totally wrong size. Look, Fund Four for me was $1 billion. Does that make sense? No, for the next three or four years? No, the right number may actually be $50 million invested over the next four years. I think the VC job is changing. I think companies startups are changing. I want to remind you guys of one quick thing as a tangent. I had this meeting with Andre Carpathie, I talked about this on the pod, where I said, I challenged him, I said, Listen, the real goal should be to go and disrupt existing businesses using these tools, cutting out all the sales and marketing, right, and just delivering something and I use the example of stripe, disrupting stripe by going to market with an equivalent product with one 10th the number of employees at one 10th the cost. What's incredible is that this auto GPT is the answer to that exact problem. Why? Because now if you are a young industrious entrepreneur, if you look at any bloated organization that's building enterprise class software, you can string together a bunch of agents that will auto construct everything you need to build a much, much cheaper product that then you can deploy for other agents to consume. So you don't even need a sales team anymore. This is what I mean by this crazy recursion that's possible. Yeah. So I'm really curious to see how this actually affects like all of this all of these, you know, it's a singular company. I mean, it's a continuation chumath of
SPEAKER_02: and then the last thing I just want to say is related to my tweet. I think this is exactly the moment where we now have to have a real conversation about regulation. And I think it has to happen. Otherwise, it's gonna be a shit show. Let's put a pin
SPEAKER_03: in that for a second. But I want to get Saxe's response to some of this. So, Saxe, we saw this before it used to take two or three million dollars to commercialize a web based software product app, then it went down to 500k then 250. I don't know if you saw this story. But if you remember the hit game on your iPhone, flappy birds, flappy birds, you know, was a phenomenon it you know, hundreds of millions of people played this game over some period of time. Somebody made it a game by talking to chat GPT for in mid journey in an hour. So the perfect example and listen, it's a game. So it's something silly. But I was talking to two developers this weekend, and one of them was an okay developer. And the other one was an actual 10 x developer who's built, you know, very significant companies. And they were coding together last week. And because of how fast chat GPT and other services were writing code for them. He looked over at her and said, You know, you're basically a 10 x developer now, my superpower is gone. So where does this lead you to believe company formation is going to go? Is this going to be, you know, massively deflationary companies like stripe are going to have 100 competitors in a very short period of time? Or are we just going to go down the long tail of ideas and solve everything with software? How's this going to play out in the in the startup space? David Sachs?
SPEAKER_00: Well, I think it's true that developers and especially junior developers get a lot more leverage on their time. And so it is going to be easier for small teams to get to an MVP, which is something they always should have done anyway, with their seed round, you shouldn't have needed, you know, 50 developers to build your v1, it should be, you know, this the founders really. So that, that I think is already happening. And that trend will continue. I think we're still a ways away from startups being able to replace entire teams of people. I just, you know, I think right now, we're final ways, months, years, decades. Well, it's in the
SPEAKER_00: years, I think, for sure, we don't know how many years. And the reason I say that it's just very hard to replace, you know, 100% of what any of these particular job functions do 100% of what a sales rep does 100% of what a marketing rep does, or even what a coder does. So right now, I think we're still at the phase of this, where it's a tool that gives a human leverage. And I think we're still a ways away from the, you know, human being completely out of the loop. I think right now, I see it mostly as a force for good, as opposed to something that's creating a ton of dislocation. Friedberg, your thoughts,
SPEAKER_03: if we follow the trend line, you know, to make that video
SPEAKER_01: game that you shared took probably a few hundred human years than a few dozen human years, then, you know, with other toolkits coming out, maybe a few human months, and now this person did it in one human day, using this tooling. So if you think about the implication for that, I mentioned this, probably last year, I really do believe that at some point, the whole concept of publishers and publishing maybe goes away. Where, you know, much like we saw so much of the content on the internet today being user generated, you know, most of the content is made by individuals posted on YouTube or Twitter, that's most of what we consume nowadays, or Instagram or Tik Tok, in terms of video content. We could see the same in terms of software itself, where you no longer need a software startup or a software company to render or generate a set of tools for a particular user, but that the user may be able to define to their agent, their AI agent, the set of tools that they would individually like to use or to create for them to do something interesting. And so the idea of buying or subscribing to software, or even buying or subscribing to a video game, or to a movie or to some other form of content, starts to diminish. As the leverage goes up with these tools, the accessibility goes up, you no longer need a computer engineering degree or computer science degree, to be able to harness them or use them. And individuals may be able to speak in simple and plain English, that they would like a book or a movie that does that looks and feels like the following or a video game that feels like the following. And so when I open up my iPhone, maybe it's not a screen with dozens of video games, but it's one interface. And the interface says, what do you feel like playing today? And then I can very clearly and succinctly state what I feel like playing and it can render that game render the code, render the engine, render the graphics and everything on the fly for me. And I can use that. And so you know, I kind of think about this as being a bit of a leveling up that the idea that all technology again, starts central and moves to kind of the edge of the network over time, that may be what's going on with computer programming itself now, where the toolkit to actually use computers to generate stuff for us is no longer a toolkit that's harnessed and controlled and utilized by a set of centralized publishers, but it becomes distributed and used at the edge of the network by users like anyone. And then the edge of the network technology can render the software for you. And it really creates a profound change in the entire business landscape of software and the internet. And I think it's, you know, it's really like we're just starting to kind of see have our heads unravel around this notion. And we're sort of trying to link it to the old paradigm, which is all startups are gonna get cheaper and smaller teams. But it may be that you don't even need startups for a lot of stuff anymore. You don't even need teams and you don't even need companies to generate and render software to do things like that. And you don't even need to do stuff for you anymore.
SPEAKER_03: Chamath, when we look at this, it's kind of a pattern of augmentation, as we've been talking about here, we're augmenting human intelligence, then replacing this replication or this automation, I guess might be a nice way to say it's augmentation, then automation. And then perhaps deprecation. Where do you sit on this? It seems like sacks feels it's like, free break things, hey, maybe startups and content are over. Where do you sit on this augmentation, automation, deprecation journey we're on?
SPEAKER_02: I think that humans have judgment. And I think it's going to take decades for agents to replace good judgment. And I think that's where we have some defensible ground. And I'm going to say something controversial. I don't think developers anymore have good judgment. Developers get to the answer, or they don't get to the answer. And that's what agents do. Because the 10x engineer had better judgment than the 1x engineer. But by making everybody a 10x engineer, you're taking judgment away. You're taking code paths that are now obvious and making it available to everybody. It's effectively like what you did in chess. An AI created a solver. So everybody understood the most efficient path in every single spot to do the most EV positive thing, the most expected value positive thing. Coding is very similar that way. You can reduce view it very, very reductively. So there is no differentiation in code. And so I think freeburg is right. So for example, let's say you're going to start a company today. Why do you even care what database you use? Why do you even care which cloud you're built on? To freeburg's point, why do any of these things matter? They don't matter. They were decisions that used to matter when people had a job to do. And you paid them for their judgment. Oh, well, we think GCP is better for this specific workload. And we think that this database architecture is better for that specific workload. And we're going to run this on AWS, but that on Azure. And do you think an agent cares? If you tell an agent find me the cheapest way to execute this thing. And if it ever gets not, you know, cheaper to go someplace else, do that for me as well. And, you know, ETL all the data and put it in the other thing, and I don't really care. So
SPEAKER_03: you're saying it will, it will swap out stripe for ad yen or it doesn't know for Amazon Web Services, it's going to be ruthless,
SPEAKER_02: it's going to be ruthless. And I think that the point of that that and that's the exact perfect word, Jason, AI is ruthless, because it's emotionless. It was not taken to a steak dinner. It was not brought to a basketball game. It was not sold into a CEO. It's an agent that looked at a bunch of API endpoints, figured out how to write code to it to get done the job at hand that was passed it within a budget, right. The other thing that's important is these agents execute within budgets. So another good example was, and this is a much simpler one. But a guy said, I would like seven days worth of meals. Here are my constraints from a dietary perspective. Here are also my budgetary constraints. And then what this agent did was figured out how to go and use the Instacart plugin at the time, and then these other things and execute within the budget. How is that different when you're a person that raises $500,000 and says, I need a full stack solution that does x, y, and z for $200,000. It's the exact same problem. So I think it's just a matter of time until we start to cannibalize these extremely expensive, ossified large organizations that have relied on a very complicated go to market and sales and marketing motion. I don't think you need it anymore in a world of agents and auto GPTs. And I think that to me is quite interesting because a, it creates an obvious set of public company shorts. And then be you actually want to arm the rebels and arming the rebels to use the toby lutke analogy here would mean to seed hundreds of one person teams, hundreds. And just say go and build this entire stack all over again using a bunch of agents. Yeah, recursively, you'll get to that answer in less than a year. Interestingly, when you talk
SPEAKER_03: about the emotion of making these decisions, if you look at Hollywood, I just interviewed on my other podcast, the founder of you have another podcast. I do. It's called startups. Thank you. episodes you've been on her four times. Please, please don't give him an excuse to plug it. I'm not gonna
SPEAKER_03: plug this weekend startups available on Spotify and iTunes and youtube.com slash this weekend. Runway is the name of this company I interviewed. And what's fascinating about this is he told me on everything everywhere all at once the award winning film, they had seven visual effects people on it and they were using his software. The late night shows like Colbert and stuff like that are using it. They are ruthless in terms of creating crazy visual effects now without and you can do text prompt to get video output. And it is quite reasonable what's coming out of it. But you can also train it on existing data sets. So they're going to be able to take something sacks like The Simpsons or South Park or Star Wars or Marvel take the entire corpus of the comic books and the movies and the TV shows and then have people type in have Iron Man do this have Luke Skywalker do that. And it's going to output stuff. And I said, Hey, when would this reach the level that the Mandalorian TV show is and he said within two years now he's talking his own book, but it's quite possible that that all these visual effects people from Industrial Light Magic on down are going to be replaced with director sacks who are currently using this technology to do. What do they call the images like that go with the script storyboards storyboards Thank you. They're doing storyboards in this right now. The difference between the storyboard sacks and the output is closing in the next 30 months, I would say. Right. I mean, maybe you could speak to a little bit about the pace here because that is the perfect ruthless example of ruthless AI. I mean, you could have the entire team at Industrial Light Magics or Pixar be unnecessary
SPEAKER_00: this decade. Well, I mean, you see a bunch of the pieces already there. So you have stable diffusion, you have the ability to type in the image that you want, and it spits out, you know, a version of it or 10 different versions of it, and you can pick which one you want to go with. You have the ability to create characters, you have the ability to create voices, you have the ability to replicate a celebrity voice. The only thing that's not there yet, as far as I know, is the ability to take static images and string them together into a motion picture. But that seems like it's coming really soon. So yeah, in theory, you should be able to train the model, where you just give it a screenplay. And it outputs, essentially an animated movie. And then you should be able to fine tune it by choosing the voices that you want and the characters that you want. And, you know, and that kind of stuff. So, yeah, I think we're close to it. Now, I think that the question, though, is, you know, every nine, let's call it of reliability is a big advancement. So yeah, it might be easy to get to 90% within two years, but it might take another two years to go from 90 to 99%. And then it might take another two years to get to 99.9 and so on. And so to actually get to the point where you're at the stage where you can release a theatrical quality movie, I'm sure it will take a lot longer than two years. Well, but look at this sex, I'm just gonna show you one image.
SPEAKER_03: This is the input was aerial drone footage of a mountain range. And this is what it came up with. Now, if you were watching TV in the 80s or 90s on a non HDTV, this would look indistinguishable from anything you've seen. And so this is at a pace that's kind of crazy. There's also opportunity here, right? Friedberg I mean, if we were to look at something like The Simpsons, which has gone on for 30 years, if young people watching The Simpsons could create their own scenarios, or with auto GPT. Imagine, you told The Simpsons stable diffusion instance, read what's happening in the news, have Bart Simpson respond to it have the South Park characters parody, whatever happened in the news today, you could have automated real time episodes of South Park, just being published onto some website. Before you move on, did you see the the wonder studio
SPEAKER_00: demo? We can pull this one up. It's really cool. Yeah, please. This is a startup that's using this type of technology. And the way it works is you film a live action scene with a regular actor, but then you can just drag and drop and animate a character onto it. And it then converts that scene into a movie with that character, like Planet of the Apes or Lord of the Rings, right? Yeah,
SPEAKER_03: exactly. So Dacus, you see the person who kept winning all the Oscars. So there it goes after the robot is replaced the human.
SPEAKER_00: Wow, you can imagine like every piece of this just eventually gets swapped out with AI, right? Like, you should be able to tell the AI give me a picture of a human leaving a building, like a Victorian era building in New York. And certainly can give you a static image of that. So it's not that far to then give you a video of that. Right. And so yeah, I think we're we're pretty close for let's call it hobbyists or amateurs to be able to create pretty nice looking movies using these types of tools. But again, I think there's a jump to get to the point where you're just altogether replacing. One of the things I'll say on this is we still keep trying to
SPEAKER_01: relate it back to the way media narrative has been explored and written by humans in the past very kind of linear storytelling. You know, it's a two hour movie, 30 minute TV segment, eight minute YouTube clip, 30 second Instagram clip, whatever. But one of the enabling capabilities with this set of tools is that these stories, the way that they're rendered and the way that they're explored by individuals can be fairly dynamic. You could watch a movie with the same story, all four of us could watch a movie with the same story, but from totally different vantage points, and some of us could watch it in an 18 minute version or a two hour version or a, you know, three season episode episodic version, where the way that this opens up the potential for creators and also, so now I'm kind of saying, before I was saying, hey, individuals can make their own movies and videos, that's going to be incredible. There's a separate, I think, creative output here, which is the leveling up that happens with creators that maybe wasn't possible to them before. So perhaps a creator writes a short book, a short story, and then that short story gets rendered into a system that can allow each one of us to explore it and enjoy it in different ways. And I as the creator can define those different vantage points, I as the creator can say, here's a little bit of this personality, this character trait. And so what I can now do as a creator is stuff that I never imagined I could do before. Think about old school photographers doing black and white photography with pinhole cameras, and then they come across Adobe Photoshop, what they can do with Adobe Photoshop was stuff that they could never conceptualize of in those old days, I think what's going to happen for creators going forward, and this is going back to that point that we had last week or two weeks ago, about the guy that was like, hey, I'm out of a job, I actually think that the opportunity for creating new stuff in new ways is so profoundly expanding, that individuals can now write entire universes that can then be enjoyed by millions of people from completely different lengths and viewpoints and, and models, they can be interactive, they can be static, they can be dynamic, and that the personal personalized, but the tooling that you as a creator now have, you could choose which characters you want to define, you could choose which content you want to write, you could choose which content you want the AI to fill in for you and say, hey, create 50 other characters in the village. And then when the viewer reads the book or watches the movie, let them explore or have a different interaction with a set of those villagers in that village, or you can say, hey, here's the one character everyone has to meet, here's what I want them to say. And you can define the dialogue. And so the way that creators can start to kind of harness their creative chops, and create new kinds of modalities for content
SPEAKER_01: and for exploration, I think is going to be so beautiful and incredible. I mean, Freiburg. Yeah, you can choose the limits of how much you want the individual to enjoy from your content versus how narrowly you want to define it. And my guess is that the creators that are going to win are going to be the ones that are going to create more dynamic range and meet creative output. And then individuals are going to kind of be stuck, they're going to be more into that than they will with the static, everyone watches the same thing over and over. So there will be a whole new world of creators that you know, maybe have a different set of tools that then just just realize to build on what you're saying for you, which I think is
SPEAKER_03: incredibly insightful. Just think about the controversy around two aspects of a franchise like James Bond. Number one, who's your favorite Bond, we grew up with Roger Mays, Roger Moore, we lean towards that, then we discover Sean Connery, and then all of a sudden, you see, you know, the latest one, he's just extraordinary. And Daniel Craig, you're like, you know what, that's the one that I love most. But what if you could take any of the films, you'd say, let me get, you know, give me the spy who loved me, but put Daniel Craig in it, etc. And that would be available to you. And then think about the next controversy, which is Oh, my god, does Daniel does James Bond need to be a white guy from the UK? Of course not. You can even sit around the world and each region could get their own celebrity, their number one celebrity to play the lead and controversy over.
SPEAKER_01: You know, the old story, the epic of Gilgamesh, right? So like, that story was retold in dozens of different languages. And it was told through the oral tradition. It was like, you know, spoken by bards around a fire pit and whatnot. And all of those stories were told with different characters and different names and different experiences. Some of them were 10 minutes long, some of them were multi hour sagas explained through the story. But ultimately, the morality of the story, the storyline, the intentionality of the original creator of that story. Yes, through the Bible is another good example of this, where much of the underlying morality and ethics in the Bible comes through in different stories read by different people in different languages. That that may be where we go, like, my kids want to have a 10 minute bedtime story. Well, let me give them Peter Pan in 10 minutes, I want to do you know, a chapter a night for my older daughter for a week long of Peter Pan. Now I can do that. And so the way that I can kind of consume content becomes different. So I guess what I'm saying is there's two aspects to the way that I think the entire content, the realm of content can be rewritten through AI. The first is like individual personalized creation of content, where I as a user can render content that was of my liking and my interest. The second is that I can engage with content that is being created that is so much more multi dimensional than anything we conceive of today, where current centralized content creators now have a whole set of tools. Now from a business model perspective, I don't think that publishers are really the play anymore. But I do think that platforms are going to be the play. And the platform tooling that enables the individuals to do this stuff and the platform tooling that enables the content creators to do this stuff are definitely entirely new industries and models that can create multi hundred billion dollar outcomes.
SPEAKER_03: Let me hand this off to sacks because there has been the dream for everybody, especially in the Bay Area of a hero coming and saving Gotham City. And this has finally been realized, David Sachs, I did my own little Twitter AI hashtag. And I said, to Twitter AI, if only please generate a picture of David Saxe's Batman crouched down on the peak thinking bridge, the amount of creativity sacks that came from this and this is
SPEAKER_03: something that you know, if we were talking about just five years ago, this would be like a $10,000 image you could create by the way, these were not professional, quote unquote,
SPEAKER_01: artists, these individuals, individuals that were able to harness a set of platform tools to generate this incredible new content. And I think it speaks to the opportunity ahead. And by the way, we're in inning one, right? So sacks, when you see yourself as Batman, do you ever think you
SPEAKER_03: should take your enormous wealth and resources and put it towards building a cave on your mansion that lets you out underneath the Golden Gate Bridge and you could go fight crime so good. So good. Do you want to go fight this crime in Gotham? I think San Francisco has a lot of Gotham like qualities. I
SPEAKER_00: think the villains are more real than the heroes. Unfortunately, we don't have a lot of heroes. But yeah, we got a lot of jokers. Yeah, we got a lot of jokers. Yeah. That's a whole
SPEAKER_00: separate topic. I'm sure we'll separate topic. We'll get to at some point today. You guys are talking about all this stupid
SPEAKER_02: bullshit. Like, there are trillions of dollars of software companies that could get disrupted. And you're talking about making fucking children's books and fat pictures of sax. It's so dumb. No, it's a conversation. conversation is the entertainment industry is doing a great job. No, nobody cares about entertainment anymore because it's totally okay. So why don't you tell the biggest industries that is why don't you teach people where there's going to be actual economic destruction. This is gonna be amazing economic destruction and opportunity. You spend all this
SPEAKER_03:
SPEAKER_02: time on the most stupidest fucking topics. Listen, it's an
SPEAKER_03: illustrative example. No, it's an elitist example that you
SPEAKER_02: fucking circle jerk yourself. It's Batman's not nobody. Nobody cares about movies. Let's bring nobody tweet over everybody. I
SPEAKER_03: mean, I think I think us box office is like 20 a year. I
SPEAKER_00: remember when like, like 100 billion a year payment volume and now it's like hundreds of billions. So ad in and stripe
SPEAKER_02: are going to process $2 trillion almost. Why don't you talk about that disruption? You nanny market size of us media and
SPEAKER_03: entertainment industry 717 billion. Okay, it's not insignificant. Video games are nearly half a trillion a year.
SPEAKER_03: Yeah, I mean, this is not insignificant. But let's pull up chamath tweet. Of course the dictator wants to dictate here all this incredible innovation is being made. And a new hero has been born chamath poly hop atia a tweet that went viral over 1.2 million views already. I'll read your tweet for the audience. If you invent a novel drug, you need the government to vet and approve it FDA before you can commercialize it. If you invent a new mode of air travel, you need the government to vet and improve it FAA. I'm just going to edit this down a little bit. If you create new security, you need the government to vet it and approve it sec more generally when you create things with broad societal impact positive and negative, the government creates a layer to review and approve it. AI will need such an oversight body. The FDA approval process seems the most credible and adaptable into a framework to understand how a model behaves and it's counter factual. Our political leaders need to get in front of this sooner rather than later and create some oversight before the eventual big avoidable mistakes happen. And genius are let out of the bottle. chamath you really want the government to come in. And then when people build these tools, they have to submit them to the government to approve them. That's what you're saying here. And you want that to start now. Here's the alternative. The alternative is going to be the
SPEAKER_02: debacle that we know as section 230. So if you try to write a brittle piece of legislation or try to use old legislation to deal with something new, it's not going to do a good job because technology advances way too quickly. And so if you look at the section 230 example, where have we left ourselves, the politicians have a complete inability to pass a new framework to deal with social media to deal with misinformation. And so now we're all kind of guessing what a bunch of eight 70 and 80 year old Supreme Court justices will do in trying to rewrite technology law when they have to apply it on section 230. So the point of that tweet was to lay the alternatives. There is no world in which this will be unregulated. And so I think the question to ask ourselves is do we want a chance for a new body? So the FDA is a perfect example why even though the FDA commissioners appointed by the president, this is a quasi organization, it's still arms length away. It has subject matter experts that they hire, and they have many pathways to approval. Some pathways take days, some pathways are months and years, some pathways are for breakthrough innovations, some pathways are for devices. So they have a broad spectrum of ways of of arbitrating what can be commercialized and what cannot. Otherwise, my prediction is we will have a very brittle law that will not work. It'll be like the Commerce Department and the FTC, trying to gerrymander some old piece of legislation. And then what will happen is it'll get escalated to the Supreme Court. And I think they are the last group of people who should be deciding on this incredibly important topic for society. So what I have been advocating our leaders, and I will continue to do so is don't try to ram this into an existing body. It is so important, it is worth creating a new organization like the FDA, and having a framework that allows you to look at a model and look at the counterfactual judge how good how important how disruptive it is, and then release it in the wild appropriately. Otherwise, I think you'll have these chaos GPT things scale infinitely, because again, as freeberg said, and as sex said, you're talking about one person that can create this chaos, multiply that by every person that is in anarchist, or every person that just wants to sow seeds of chaos. And I think it's going to be all avoidable. I think regulating what software people can write is a near
SPEAKER_01: impossible task. Number one, I think you can probably put rules and restrictions around commerce, right? That's certainly feasible in terms of how people can monetize. But in terms of writing and utilizing software, it's going to be as challenged as trying to monitor and demand oversight and regulation around how people write and use tools for for genome and biology exploration. Certainly, if you want to take a product to market and sell a drug to people that can influence their body, you have to go get that approved. But in terms of, you know, doing your work in a lab, it's very difficult. I think the other challenge here is software can be written anywhere, it can be executed anywhere. And so if the US does try to regulate, or does try to put the brakes on the development of tools, where the US can have kind of a great economic benefit and a great economic interest, there will be advances made elsewhere, without a doubt. And those markets and those those places will benefit in an extraordinarily out of pace way. As we just mentioned, there's such extraordinary kind of economic gain to be realized here, that if we're not, if the United States is not leading the world, we are going to be following and we are going to get disrupted, we are going to lose an incredible amount of value and talent. And so any attempt at regulation, or slowing down or telling people that they cannot do things when they can easily hop on a plane and go do it elsewhere, I think is is fraught with peril. So
SPEAKER_03: you don't agree with regulation? Sachs? Are you on board with the chamath plan? Are you on board with the free bird? I'll say I
SPEAKER_01: think I think just like with computer hacking, it's illegal to break into someone else's computer, it is illegal to steal someone's personal information. There are laws that are absolutely simple, and obvious and you know, no nonsense laws, those not illegal to get rid of 100,000 jobs by making a piece
SPEAKER_03: of software, though. That's right. And so I think trying to
SPEAKER_01: intentionalize how we do things versus intentionalizing the things that we want to prohibit happening as an outcome, we can certainly try and prohibit the things that we want to happen up as an outcome and pass laws and institute governing bodies with authority to oversee those laws. With respect to things like stealing data, but you can jump on a plane and go do it in
SPEAKER_03: Mexico, Canada, or whatever region you get to sacks. Where do you stand on this debate? Yeah, I'm saying like, there
SPEAKER_01: are ways to protect people. There's ways to protect society about passing laws that make it illegal to do things as the output of the outcome. What law do you pass on chaos? GPT?
SPEAKER_02: Explain chaos. GPT? Give me an example, please. Yeah. Do you
SPEAKER_02: want to talk about it real quick? It's a recursive agent that basically is trying to destroy itself. Try to destroy
SPEAKER_01: humanity. Yeah, but I guess by first becoming all powerful and
SPEAKER_02: destroying humanity and then destroying itself. Yeah, it's a
SPEAKER_01: tongue in cheek. Auto GPT. But it's not it's not it's not a tongue in cheek auto GPT. The guy that created it, you know,
SPEAKER_01: put it out there and said, like, he's trying to show everyone to your point, what intentionality could arise here, which is negative intentionality. I think it's very naive for anybody to
SPEAKER_02: think that this is not equivalent to something that could cause harm to you. So for example, if the prompt is, hey, here is a security leak that we figured out in Windows. And so why don't you exploit it. So look, a hacker now has to be very technical. Today with with these auto GPTs, a hacker does not need to be technical, exploit the zero day, exploit in Windows, hack into this plane and bring it down. Okay, the GPT will do it. So who's going to tell you that those things are not allowed? Who's going to actually vet that that wasn't allowed to be released in the wild. So for example, if you worked with Amazon and Google and Microsoft and said, you're going to have to run these things in a sandbox, and we're going to have to observe the output before we allow it to run on actual bare metal in the wild. Again, that seems like a reasonable thing. And it's super naive for people to think it's a free market. So we should just be able to do what we want. This will end badly quickly. And when the first plane goes down, and when the first fucking thing gets blown up, all of you guys will be like, Oh, sorry, facts. Pretty compelling example here by Chamath, somebody puts
SPEAKER_03: out into the wild chaos GPT, you can go do a Google search for it and says, Hey, what are the vulnerabilities to the electrical grid, compile those and automate a series of attacks and write some code to probe those until we and success in this mission, you get 100 points and stars every time you chase do this such a such a beautiful example, but it's even
SPEAKER_02: more nefarious. It is. Hey, this is an enemy that's trying to hack our system. So you need to hack theirs and bring it down. You know, like you can easily trick these GPTs. Right? Yes, they have no judgment. They have no judgment. And as you said, they're ruthless in getting to the outcome. Right. So why do we
SPEAKER_02: think all of a sudden, this is not gonna happen? I mean, it's
SPEAKER_03: literally the science fiction example. You say, Hey, listen, make sure no humans get cancer. And like, okay, well, the logical way to make sure no humans get cancer is to kill all the humans. But you might Can you just address the point? So what do
SPEAKER_01: you think you're regulating? Are you regulating the code? Here's what I'm saying to write. If you look at the FDA, no, you're allowed to make any
SPEAKER_02: chemical drug you want. But if you want to commercialize it, you need to run a series of trials with highly qualified measurable data, and you submit it to like minded experts that are trained as you are to evaluate the viability of that. And but there are power, hold on, there are pathways that allow you to get that done in days under emergency use. And then there are pathways that can take years depending on how gargantuan the task is at hand. And all I'm suggesting is having some amount of oversight is not bad in this specific example. I
SPEAKER_01: get what you're saying. But I'm asking tactically how what are you overseeing? You're overseeing chat GPT, you're overseeing the model, you're doing exactly what I chips.
SPEAKER_02: Okay, look, I used to run the Facebook platform, we used to create sandboxes, if you submit code to us, you would we would run it in a sandbox, we would observe it, we would figure out what it was trying to do. And we would tell you, this is allowed to run in the wild. There's a version of that that Apple does when you submit an app for review and approval. Google does it as well. In this case, all the bare metal providers, all the people that provide GPUs will be forced by the government, in my opinion, to implement something. And all I'm suggesting is that it should be a new kind of body that essentially observes that has PhDs that has people who are trained in this stuff, to develop the kind of testing and the output that you need to figure out whether it should even be allowed to run in the wild on bare metal. Sorry, but
SPEAKER_01: you're saying that the mod the model Sorry, I'm just trying to understand some of the points you're saying that the models need to be reviewed by this body and those models, if they're run on a third party set of servers, if they're running the wild, right, so if you're a computer on the on the open internet, we cannot run an app on your computer, you know that, right?
SPEAKER_02: It needs to be connected to the internet, right? Like if you wanted to run an auto GPT, it actually crawls the internet, it actually touches other API's, it tries to then basically send a push request, sees what it gets back parses the JSON figures out what it needs to do. All of that is allowed because it's hosted by somebody, right? That code is running not locally, but it's running. So the host becomes sure if you want to run it
SPEAKER_02: locally, you can do whatever you want to do. But evil agents are
SPEAKER_01: going to do that, right? So if I'm an evil agent, I'm not going to go use AWS to run my evil agent, I'm going to set up a bunch of servers and connect to the internet. How I could use
SPEAKER_01: VPNs. The internet is open, there's open people who are in
SPEAKER_03: another rogue country, they can do whatever. I think that what
SPEAKER_02: you're going to see is that if you for example, try to VPN and run it out of like, to GK stand back to the United States, it's not going to take years for us to figure out that we need to IP block rando shit coming in push and pull requests from all kinds of IPs that we don't trust anymore, because we don't now trust the regulatory oversight that they have for code that's running from those IPs that are not us domestic.
SPEAKER_03: Let me steal man to mouth position for a second, Jason,
SPEAKER_01: hold on, I think the ultimate if what you're saying is the point of view of Congress, and if to mouth has this point of view, then there will certainly be people in Congress that will adopt this point of view. The only way to ultimately do that degree of regulation and restriction is going to be to restrict the open internet, it is going to be to have monitoring and firewall safety protocols across the open internet, because you can have a set of models running on any set of servers sitting in any physical location. And as long as they can move data packets around, they're going to be able to get up to their nefarious activities.
SPEAKER_03: Let me still man that for you, freeburg. I think, yes, you're correct. The internet has existed in a very open way. But there are organizations and there are places like the National Highway Traffic Safety Administration, if I were to steal man to mouth position, if you want to manufacture a car, and you want to make one in your backyard and put it on your track and on your land up in Napa somewhere, and you don't want to have brakes on the car and you don't want to have, you know, a speed limiter or airbags or seatbelts and you want to drive on the hood of the car, you can do that. But once you want it to go on the open road, the open internet, you need to get you need to submit it for some safety standards like NHT sa like Tesla has to afford has to so sex where do you sit on this or is let's assume that people are going to do very bad things with very powerful models that are becoming available. Amazon today said they'll be Switzerland, they're gonna put a bunch of LLMs and other models available on AWS, Bloomberg's LLM, Facebook's, Google bard, and of course, chat, JPT, open AI and Bing, all this stuff's available to have access to that. Do you need to have some regulation of who has access to those at scale powerful tools? Should there be some FDA or NHT sa? I don't think we know how to regulate it yet. I think it's
SPEAKER_00: too early. And I think the harms that we're speculating about we're making the AI more powerful than it is. And I believe it will be that powerful. But I think that it's premature to be talking about regulating something that doesn't really exist yet. Take the the chaos GPT scenario. The way that would play out would be, you've got some future incarnation of auto GPT. And somebody says, okay, auto GPT, I want you to be, you know, WMD AI, and figure out how to cause like a mass destruction event, you know, and then it creates like a planning checklist and that kind of stuff. So that's basically the the type of scenario where we're talking about, we're not anywhere close to that yet. I mean, the chaos GPT is kind of a joke, it doesn't produce, it doesn't produce a checklist. I can give an example that would actually be completely
SPEAKER_03: plausible. One of the first things on the chaos GPT checklist was to stay
SPEAKER_00: within the boundaries of the law because it didn't want to get prosecuted.
SPEAKER_03: Got it. So the person who did that had some sort of good intent, but I can give you an example right now. That could be done by chat GPT and auto GPT that could take down large swaths of society and cause massive destruction. I'm almost reticent to say it here. Say, well, I'll say it and then maybe we'll have to delete this. But if somebody created this, and they said, figure out a way to compromise as many powerful peoples and as many systems, passwords, then go in there and delete all their files and turn off as many systems as you can. Chat GPT and auto GPT could very easily create phishing accounts create billions of websites to create billions of logins, have people log into them, get their passwords, log into whatever they do, and then delete everything in their account. Chaos, you're right to be done today. I don't think we've done
SPEAKER_00: today. simpler than this. How about how about you fish?
SPEAKER_02: Yeah, pieces of it can be created today. But you're
SPEAKER_00: you're accelerating the progress. Yeah, but you can
SPEAKER_02: automate. Yeah, exactly. And by the way, I'm accelerating it in weeks. Why
SPEAKER_02: don't you just spoof the bank accounts and just steal the money? Like that's even simpler. Like people will do this stuff because they're trying to do it today. Holy cow, they just have a more efficient way to solve the problem about banking. So look, he's
SPEAKER_00: so number one, this is a tool. And if people use a tool in nefarious ways, you prosecute them. Number two, the platforms that are commercializing these tools do have trust and safety teams. Now in the past, trust and safety has been a euphemism for censorship, which it shouldn't be. But you know, open AI has a safety team and they try to detect when people are using their tech in a nefarious way and they try to prevent it. Do you trust that? Well, no, not on censorship, but I think that they're probably people are using their policing. Are you
SPEAKER_02: willing to abdicate your work societal responsibility to open AI to do the trust and safety? What I'm saying is, I'd like to
SPEAKER_00: see how far we get in terms of the system. Yes, you want to see the mistakes. You want to see where the mistakes are, and how
SPEAKER_02: bad the mistakes are. I'm saying it's still very
SPEAKER_00: early to be imposing regulation, we don't even know what to regulate. So I think we have to keep tracking this to develop some understanding of how it might be misused, how the industry is going to develop safety guardrails. Okay. And then you can talk about regulation. Look, you create some new FDA right now. Okay. First of all, we know what would happen. Look at the drug process. As soon as the FDA got involved and slowed down massively. Now it takes years, many years to get a drug approved appropriately. So yes, but at least with a drug, we know what the gold standard is, you run a double blind study to see whether it causes harm or whether it's beneficial. We don't know what that standard is for AI yet. We have no idea. You can absolutely study in AI.
SPEAKER_02: What? No, we don't know that somebody review the code. You
SPEAKER_03: have two instances in a code to do what? No, sax.
SPEAKER_00: Auto GPT. It's benign. I mean, my friend use it to book a
SPEAKER_00: wine tasting. So who's going to review that code and then speculate say, Oh, well, in 99.9% of cases, it's perfectly benevolent and fine and innocuous. I can fantasize about some cases someone might do how long how you supposed to resolve that very simple.
SPEAKER_03: There are two types of regulation that are concurrent in any industry, you can do what the movie industry did, which is they self regulate and they came up with their own rating system where you can do what happens with the FDA and what happens with cars, which is an external government based body. I think now is the time for self regulation, so that we avoid the massive heavy hand of government having to come in here. But these tools can be used today to create massive farm, they're moving at a pace we just said in the first half of the show that none of us have ever seen every 48 hours, something drops. That is mind blowing. That's never happened before. And you can take these tools. And in the one example that Chumath and I came up with the top of our head in 30 seconds, you can create phishing sites, compromise people's bank accounts, take all the money out, delete all the files and cause chaos on a scale that has never been possible by a series of Russian hackers or Chinese hackers working in a boiler room. This can scale. And that is the fundamental difference here. And I didn't think I would be sitting here steel manning. Chumath's argument. I think humans have a horrible ability to compound. I
SPEAKER_02: think people do not understand compound interest. And this is a perfect example, where when you start to compound technology at the rate of 24 hours, or 48 hours, which we've never really had to acknowledge, most people's brains break, and they don't understand what six months from now looks like. And six months from now, when you're compounding at 48 or 72 hours is like 10 to 12 years in other technology solutions. This is
SPEAKER_03: compounding. This is this is different because the compounding I agree with that the pace revolution is very
SPEAKER_00: fast. We are on a bullet train to something. And we don't know exactly what it is. And that's disconcerting. However, let me tell you what would happen if we create a new regulatory body like the FDA to regulate this, they would have no idea how to arbitrate whether a technology should be approved or not. Development will basically slow to a crawl to slight drug development. There is no double blind stand. I agree. What can we do? What software regulation can we do? There is no double
SPEAKER_00: blind standard in AI that everyone can agree on right now to know whether something should be approved. And what's going to happen is the thing that's made software development so magical and allowed all this innovation over the last 25 years is permissionless innovation. Any developer, any dropout from a university can go create their own project, which turns into a company. And that is what has driven all the innovation and progress in our economy over the last 25 years. So you're going to replace permissionless innovation with going to Washington to go through some approval process. And it will be the politically connected, it'll be the big donors who get their projects approved. And the next Mark Zuckerberg, who's trying to do his little project in a dorm room somewhere will not know how to do that will not know how to compete. And that highly political process. Chima, I think you're mixing a bunch of things together. So
SPEAKER_02: first of all, permissionless innovation happens today in biotech as well. It's just that it's what Jason said, when you want to put it on the rails of society, and make it available to everybody, you actually have to go and do something substantive. In the negotiation of these drug approvals, it's not some standardized thing. You actually sit with the FDA, and you have to decide what are our endpoints? What is the mechanism of action? And how will we measure the efficacy of this thing? The idea that you can't do this today in AI is laughable. Yes, you can. And I think that smart people so for example, if you pit deep minds team versus open AI team, to both agree that a model is good and correct, I bet you they would find a systematic way to test that it's fine. I just want to point out Okay, so basically, in order to do what you're saying, okay,
SPEAKER_00: this entrepreneur, who just dropped out of college to do their project, they're gonna have to learn how to go sit with regulators, have a conversation with them, go through some complicated approval process. And you're trying to say that that won't turn into a game of political connections. Of course it will. Of course it will. Which is self regulation. Yeah, well, let's get to that. Hold on a second. And let's look at the drug approval process. If you want to create a drug company, you need to raise hundreds of millions of dollars. It's incredibly expensive. It's incredibly capital intensive. There is no drug company that is two guys in their garage. Like many of the biggest companies like many of the biggest companies in Silicon Valley started.
SPEAKER_02: That is because you're talking about taking a chemical or biological compound and injecting into some hundreds or thousands of people who are both racially gender based, age based, highly stratified all around the world or at a minimum all around the country. You're not talking about that here, David, I think that you could have a much simpler and cheaper way where you have a version of the internet that's running in a huge sandbox someplace that's closed off from the rest of the internet, and another version of the internet that's closed off from everything else as well. And you can run on a parallel path, as it is with this agent. And you can easily, in my opinion, actually, figure out whether this agent is good or bad, and you can probably do it in weeks. So I actually think the approvals are actually not that complicated. And the reason to do it here is because I get that it may cause a little bit more friction for some of these mom and pops. But if you think about what's the societal and consequences of letting the worst case outcomes happen, the AGI type outcomes happen, I think those are so bad. They're worth slowing some folks down. And I think like, just because you want to, you know, buy groceries for $100, you should be able to do it, I get it. But if people don't realize and connect the dots between that and bringing airplanes down, then that's because they don't understand what this is capable of. I'm not saying we're never
SPEAKER_00: going to need regulation. What I'm saying is, it's way too early. We don't even know what we're relating. I don't know what the standard would be. And what we will do by racing to create a new FDA is destroying American innovation in the sector, and other countries will not slow down. They will beat us to the punch here.
SPEAKER_03: Got it. I think there's a middle ground here of self regulation and thoughtfulness on the part of the people who are providing these tools at scale to give just but one example here and this tweet is from five minutes ago. So to look at the pace of this, five minutes ago, this tweet came out. A developer who is an AI developer says AI agents continue to amaze my GPT for coding assistant learned how to build apps with authenticated users that can build and design a web app, create a back end, handle off logins, upload code to GitHub and deploy. He literally while we were talking is deploying websites. Now if this website was a phishing app, or the one that Chammap is talking about, he could make a gazillion different versions of Bank of America, Wells Fargo, etc, then find everybody on the internet's email, then start sending different spoofing emails, determine which spoofing emails work, iterate on those and create a global financial collapse. Now this sounds insane. But it's happening right now. People get hacked every day at 123%. Saks fraud is occurring right now in the low single
SPEAKER_03: digit percentages identity theft is happening in the low single identity percentages. This technology is moving so fast that bad actors could 10 x that relatively easy. So if 10% of us want to be hacked and have our credit cards hacked, this could create chaos. I think self regulation is the solution. I'm the one who brought up self regulation. What I said, I
SPEAKER_00: brought it up first. I brought it up first. I get credit. No,
SPEAKER_03: good. No, no, it's not about credit. I'm no self regulation.
SPEAKER_00: I never got to finish my point about it because you interrupt when you talk for eight minutes. So if you have a point to make,
SPEAKER_03: you should have got in the eight minutes. Oh my god, you guys
SPEAKER_00: kept interrupting me. Go ahead. What I said is that there are trust and safety teams at these big AI companies, these big foundation model companies like open AI. Like I said, in the past, trust and safety has been a euphemism for censorship. And that's why people don't trust it. But I think it would be appropriate for these platform companies to apply some guardrails on how their tools can be used. And based on everything, I know they're doing that. So websites, so the open web with chat GP for and he's going to have it do it
SPEAKER_03: automated, you're basically postulating capabilities that
SPEAKER_00: don't yet exist. I just tweeted the guy he's doing it. He's got
SPEAKER_03: a video of himself doing it on the web. What do you think? Freebird? That's a far cry from basically running like some
SPEAKER_00: fishing expedition that's going to bring down the entire banking system.
SPEAKER_03: Literally a fishing a fishing site and a site with OAuth are the same thing. Go ahead freebird. I think that that guy
SPEAKER_01: is doing something illegal if he's hacking into computers, into people's emails and bank accounts, that's illegal, you're not allowed to do that. And so that action breaks the law, that person can be prosecuted for doing that. The tooling that one might use to do that can be used in a lot of different ways. Just like you could use Microsoft Word to forge letters, just like you could use Microsoft Excel to create fraudulent financial statements. I think that the application of a platform technology needs to be distinguished from the technology itself. And while we all feel extraordinarily fearful because the unbelievable leverage that these AI tools provide, again, I'll remind you that this chat GPT-4 or this GPT-4 model, by some estimates is call it a few terabytes, you could store it on a hard drive, or you could store it on your iPhone. And you could then go run it on any set of servers that you could go set up physically anywhere. So, you know, it's a little bit naive to say we can go ahead and, you know, regulate platforms and we can go regulate the tools. Certainly, we should continue to enforce and protect ourselves against nefarious actors using, you know, new tools in inappropriate illegal ways. You know, I also think that there's a moment here, that we should all kind of observe just how quickly we want to shut things down, when, you know, they take away what feels like the control that we all have from one day to the next. And, you know, that the real kind of sense of fear that seems to be quite contagious for a large number of people that have significant assets or significant things to lose, is that, you know, tooling that's creating entirely newly disruptive systems and models for business and economics, and opportunity for so many needs to be regulated away to minimize, you know, what we claim to be some potential downside when we already have laws that protect us on the other side. So, you know, I just kind of want to also consider that this set of tools creates extraordinary opportunity. We gave one sort of simple example about the opportunity for creators, but we talked about how new business models, new businesses can be started with one or two people. You know, entirely new tools can be built with a handful of people, entirely new businesses. This is an incredible economic opportunity. And again, if the US tries to regulate it, or the US tries to come in and stop the application of models in general, or regulate models in general, you're certainly going to see those models have continued to evolve and continue to be utilized in very powerful ways that are going to be advantageous to places outside the US. There's over 180 countries on earth, they're not all going to regulate together. It's been hard enough to get any sort of coordination around financial systems, to get coordination around climate change to get coordination around anything on a global basis to try and get coordination around the software models that are being developed, I think is is pretty naive. You don't want to have a global organization, I think you need
SPEAKER_02: to have a domestic organization that protects us. And I think Europe will have their own thing again, FDA versus Emma. Canada has its own Japan has its own China has its own and they have a lot of overlap and a lot of commonality and in the guard rules they use. And I think that's what's going to happen here. This will be beneficial only for political insiders who will
SPEAKER_00: basically be able to get their projects and their apps approved with a huge deadweight loss for the system because innovation will completely slow down. But to build on freeberg's point, which is that we have to remember that AI won't just be used by nefarious actors, it'll be used by positive actors. So there will be new tools that law enforcement will be able to use. And if somebody is creating phishing sites at scale, they're going to be probably pretty easy for, you know, law enforcement AI is to detect. So let's not forget that there'll be copilots written for our law enforcement authorities, they'll be able to use that to basically detect and fight crime. And a really good example of this was in the crypto space, we saw this article over the past week that chainalysis has figured out how to basically track, you know, illicit Bitcoin transactions. And there's now a huge number of prosecutions that are happening of illegal use of Bitcoin. And if you go back to when Bitcoin first took off, there was a lot of conversations around Silk Road. And the only thing that Bitcoin was good for was basically illegal transactions, blackmailing, drug trafficking, and therefore we had to stop Bitcoin. Remember, that was the main argument. And the counter argument was that well, no, Bitcoin, like any technology can be used for good or bad. However, there will be technologies that spring up to combat that to combat those nefarious or illicit use cases. And sure enough, you had a company like chainalysis come along. And now it's been used by law enforcement to basically crack down on the illicit use of Bitcoin. And if anything, it's cleaned up the Bitcoin community tremendously. And I think it's dispelled this idea that the only thing you'd use Bitcoin for is black market transactions. Quite the contrary, I think you'd be really stupid now to use Bitcoin in that way. It's actually turned Bitcoin into something of a honeypot now, because if you used it for nefarious transactions, your transactions recording the blockchain forever just waiting for chainalysis to find it. So again, using Bitcoin to do something illegal be really stupid. I think in a similar way, you're going to see self regulation by these major AI platform companies combined with new tools are used new AI tools that spring up to help combat the nefarious uses. And until we let those forces play out. I'm not saying regulate never, I'm just saying we need to let those forces play out. Before we leap to creating some new regulatory body that doesn't even understand what its mandated mission is supposed to be. The
SPEAKER_03: Bitcoin story is hilarious, by the way. Oh my gosh, journal story. It's
SPEAKER_02: unbelievable. Pretty epic. It took years. But basically, this
SPEAKER_03: guy was buying blow on Silk Road, and he deposited his Bitcoin. And then when he withdrew it, he there was a bug that gave him twice as many bitcoins. So he kept creating more accounts, putting more money into Silk Road and getting more Bitcoin out. And then years later, the authorities figured this out again with you know, chain analysis type things. Look at James Zong over there. Look at James Zong. He accused had a Lamborghini a Tesla a lake house and was living his best life apparently, when the feds knocked on his door and found the digital keys to his crypto fortune in a popcorn tin in his bathroom, and in a safe in his basement floor. So they have a well, the reason the reason I posted this was I
SPEAKER_02: was like, what if this claim that you can have all these anonymous transactions actually fooled an entire market, because it looks like that this anonymity has effectively been reverse engineered, and there's no anonymity at all. And so what Bitcoin is quickly becoming is like the most singular honeypot of transactional information that's complete and available in public. And I think what this article talks about is how companies like chain analysis, and others have worked now for years, almost a decade, with law enforcement to be able to map all of it. And so now every time money goes from one Bitcoin wallet to another, they effectively know the sender and the recipient. And I just want to make one quick correction here. It wasn't
SPEAKER_03: actually exactly popcorn. It was Cheetos spicy flavored popcorn. And there's the tin of it, where he had a motherboard of a computer that held is there a chance that that this project was actually introduced
SPEAKER_02: by the government? I mean, there's been reports of tour anonymous or network that
SPEAKER_03: the CIA had their hands all over tour. Tor if you don't know it, which is an anonymous like multi relay, peer to peer web browsing system, and people believe it's a CIA honeypot, an intentional trap for criminals to get themselves caught up in. All right, as we wrap here, what an amazing discussion, my Lord, I didn't I never thought I would be.
SPEAKER_01: I want to say one thing. Yes. We saw that someone was arrested for the murder of Bob Lee. That's what I was about this morning. Yeah, which turns out that the report of the SFPD is arrest is that it's someone that he knew that also works in the tech industry, someone that possibly right. So still breaking news. Yeah,
SPEAKER_03: yes, possibly. But I want to say two things. One, obviously,
SPEAKER_01: based on this arrest, and the storyline, it's quite different than what we all assumed it to be, which was some sort of homeless robbery type moment that has become all too commonplace in SF. It's a commentary for me on two things. One is how quick we all were to kind of judge and assume that, you know, a homeless robber type person would do this in SF, which I think speaks to the condition in SF right now, also speaks to our conditioning that we all kind of lacked or didn't even want to engage in a conversation that maybe this person was murdered by someone that they knew. Because we wanted to kind of very quickly fill our own narrative about how bad SF is. And that's just something that I really felt when I read this this morning, I was like, man, like, I didn't even consider the possibility that this guy was murdered by someone that he knew, because I am so enthralled right now by this narrative that SF is so bad, and it must be another data point that validates my point of view on SF. So, you know, I kind of want to just acknowledge that and acknowledge that we all kind of do that right now. But I do think it also does, in fact, unfortunately, speak to how bad things are in SF, because we all are, we've all have these experiences of feeling like we're in danger and under threat all the time, we're walking around in SF, in so many parts of San Francisco, I should say, where things feel like they've gotten really bad. I think both things can be true, that we can kind of feel biased and fill our own narrative by kind of latching on to our assumption about what something tells us. But but it also tells us quite a lot about what is going on in SF. So I just wanted to make that point.
SPEAKER_03: In fairness, and I think it's fine for you to make that point. I am extremely vigilant on this program to always say when something is breaking news withhold judgment, whether it's the Trump case or Jesse Smollett or anything in between January six, let's wait till we get all the facts. And in fact, quote from sacks. We don't know exactly what happened yet.
SPEAKER_00: Correct. Literally sacks started with that. We do that every
SPEAKER_03: fucking time on this program. We know when there's breaking news to withhold judgment. But you can also know two things can be true. A tolerance for ambiguity is necessary. But I'm saying I didn't even do that. But as soon as I heard this, I was like, I
SPEAKER_01: was like, Oh, assumption. But you know, David, that is a fine
SPEAKER_03: assumption to make. That's a fine assumption.
SPEAKER_00: Listen, you made that assumption for your own protection. We got all these reporters who are basically propagandists trying to claim that crime is down in San Francisco. They're all basically seeking comment from me this morning sending emails are trying to dunk God on us because we basically talked about the Bob Lee case in that way. Listen, we said that we didn't know what happened. But if we were to bet, at least what I said is I bet this case, it looks like a lot like the Brianna Kupfer case. That was logical. That's not conditioning or bias. That's logic. And you need to look at what else happened that week. Okay, so just the same week that Bob Lee was killed. Let me give you three other examples of things that happened in Gotham City, aka San Francisco. So number one, former fire commissioner Don Carmagnani was beaten within an inch of his life by a group of homeless addicts in the marina. And one of them was interviewed in terms of why it happened. And basically Don came down from his mother's house and told them to move off his mother's front porch, because they were obstructing her ability to get in and out of her apartment. They interpreted that as disrespect. And they beat him with a tire iron or a metal pipe. And one of the hoodlums who was involved in this apparently admitted this. Yeah, play the video. Somebody over the head like that and attack him. He was he was
SPEAKER_02: this disrespectful. We were disrespectful. That was a big old kind of bald haired old man. Don Don. So he was being
SPEAKER_03: disrespectful. And then but is that enough to beat him up? Yeah, sometimes. Oh, my lord. I mean, so this is case number one. And apparently
SPEAKER_00: in the reporting on that person who was just interviewed, he's been in the marina kind of terrorizing people, maybe not physically, but verbally. So you have, you know, bands of homeless people encamped in front of people's houses. Don Carmagnani gets beaten within an inch of his life. You then had the case of the Whole Foods store on Market Street shut down in San Francisco. And this was not a case of shoplifting like some of the other store closings we've seen. They said they were closing the store because they could not protect their employees. The bathrooms were filled with needles and pipes that were drug paraphernalia, you had drug addicts going in there using it, they were engaging in altercations with store employees. And Whole Foods felt like that to close the store because again, they cannot protect their employees. Third example, Board of Supervisors had to disband their own meeting because their internet connection got vandalized. The fiber for the cable connection to provide their internet got vandalized. So that's a basically disband their meeting. Aaron Prescott was the one who announced this and you saw in the response to this. Yeah, my retweeting him went viral. There were lots of people said, Yeah, I've got a small business and the fiber the copper wire, whatever was vandalized. And in a lot of cases, I think it's basically drug addicts stealing whatever they can they steal $10 of copper wire, sell that to get a hit. And it causes $40,000 of property damage. Here's the
SPEAKER_03: insincerity sacks. Literally, the proper response when there's violence in San Francisco is, hey, we need to make this place less violent. Is there a chance that it could be people who know each other? Of course, that's inherent in any crime that occurs that there'll be time to investigate it. But literally, the press is now using this as a moment to say there is no crime in San Francisco, or that people are acting and like, I just have the New York Times email me during the podcast. Heather Knight from the Chronicle, San Francisco Chronicle, in light of the Bob Lee killing appearing to be an interpersonal dispute. She still doesn't know right? We don't have all the facts with another tech leader. Do you think the tech community jumped to conclusions? Why are so many tech leaders painting San Francisco as a dystopian hellscape with the reality with the reality is more nuanced. I think it's a little typo there.
SPEAKER_00: Yes. It's like, of course, the reality is nuanced. Of course,
SPEAKER_03: it's a hellscape. Walk down the street. Heather, can I give you a theory? Please?
SPEAKER_02: I think it was most evident in the way that Elon dismantled and manhandled the BBC reporter. Oh my god, that was brutal. This is a small microcosm of what I think media is. So I used to think that media had an agenda. I actually now think that they don't particularly have an agenda, other than to be relevant, because they see waning relevance. And so I think what happens is whenever there are a bunch of articles that tilt a pendulum into a narrative, they all of a sudden become very focused on refuting that narrative. And even if it means they have to lie, they'll do it. Right. So, you know, I think for months and months, I think people have seen that the quality of the discourse on Twitter became better and better. Elon is doing a lot with bots and all of this stuff, cleaning it up. And this guy had to try to establish the counter narrative and was willing to lie in order to do it, then he was dismantled. Here, you guys, I don't have a bone to pick so much with San Francisco. I think I've been relatively silent on this topic. But you guys as residents and former residents, I think have a vested interest in the quality of that city. And you guys have been very vocal. But I think that you're not the only ones Michelle Tandler, you know, Schellenberger, there's a bunch of smart, thoughtful people who've been beating this drum, Gary Tan. And so now I think reporters don't want to write the end plus first article saying that San Francisco is a hellscape. So they have to take the other side. And so now they're going to go and kick up the counter narrative, and they'll probably dismantle the truth and kind of redirect it in order to do it. So I think that what you're seeing is, they'll initially tell a story, but what then there's too much of the truth, they'll go to the other side, because that's the only way to get clicks and be seen. So I think that that's what you guys are a part of right now. They are in the business of protecting the narrative. But I
SPEAKER_00: do think there's a huge ideological component to the narrative, both in the Elon case, where they're trying to claim that there was a huge rise in hate speech on Twitter. The reason they're saying that is because they want Twitter to engage in more censorship. That's the ideological agenda here. The agenda is this radical agenda of decarceration, they actually believe that more and more people should be led by prison. And so therefore, they have an incentive to deny the existence of crime in San Francisco and the rise in crime in San Francisco, if you pull most people in San Francisco, large majority of San Francisco believe that crime is on the rise, because they can see it, they hear it. And what I would say is, look, I think there's a pyramid of activity, a pyramid of criminal or anti social behavior in San Francisco, that we can all see, the base level is you've got a level of chaos on the streets, where you have open air drug markets, people doing drugs, sometimes you'll see, you know, a person doing something disgusting, you know, like people defecating on the streets or even worse, then there's like a level up where they're chasing after you or you know, harassing you, people have experienced that I've experienced that, then there's a level up where there's petty crime, your car gets broken into or something like that, then there's the level where you get mugged. And then finally, the top of the pyramid is that there's a murder. And it's true that most of the time, the issues don't go all the way to the top of the pyramid where someone is murdered. Okay, but that doesn't mean there's not a vast pyramid underneath that, of basically quality of life issues. And I think this term quality of life was originally used as some sort of way to minimize the behavior that was going on saying that they weren't really crimes, we shouldn't worry about them. But if anything, what we've seen in San Francisco is that when you ignore quality of life crimes, you will actually see a huge diminishment in what it's like to live in these cities like quality of life is real. And that's the issue. And I think what they're trying to do now is that say that because Bob Lee wasn't the case that we thought it was that that whole pyramid doesn't exist, doesn't exist pyramid exists, we can all experience Oh, my God, I mean, and that's the insincerity of this, it is insincere. And the
SPEAKER_00: existence of that pyramid that we can see and hear and feel and experience every day is why we're willing to make a bet we called it a bet that the Bob Lee case was like the Brianna Kupfer case. And in that with a disclaimer with a disclaimer,
SPEAKER_03: and we always do a disclaimer here. And just to George Hammond from the Financial Times who emailed me, here's what he asked me. There's a lot of public attention lately on whether San Francisco status has one of the top business and technology hubs in the US is at risk in the aftermath of the pandemic. Duh, obviously it is. I wondered if you had a moment to chat about that and whether there is a danger that negative perceptions about the city will damage its reputation for founders and capital locators in the future. So essentially, the end is as the obviously a lot of potential for hysteria in this conversation, which I'm keen to avoid. And it's like, have you walked down the street? And I asked him, have you walked down the street in San Francisco? Jason, the best response is send him the thing that SAC sent,
SPEAKER_02: which is the amount of available office space in San Francisco. People are voting. Companies are voting with their feet. So it's already if the quality of life wasn't so poor, they stay. This is the essence of gaslighting is what they do is
SPEAKER_00: that people who've actually created the situation in San Francisco with their policies, their policies of defunding the police, making it harder for the police to do their job, decriminalizing theft under $950, allowing open air drug markets, the people who have now created that matrix of policies have created the situation. What they then turn around and do is say, no, the people who are creating the problem are the ones who are observing this. That's all we're doing is observing and complaining about it. And what they try to do is say, well, no, you're you're running down San Francisco. We're not the ones creating the problem. We're observing it. And just this week, another data point is that the mayor's office said that they were short more than 500 police officers in San Francisco. Yeah, nobody who's going to become a police officer here.
SPEAKER_03: Are you crazy? Well, and there was another article just this week about how
SPEAKER_00: there's a lot of speculation, rumors are swirling of an unofficial strike, an informal strike by police officers who are normally on the force who are tired of risking life and limb. And then you know, they basically risk getting in a physical altercation with a homeless person. They bring them in, and then they're just released again. So there's a lot of quiet quitting that's going on in the job. It's like this learned helplessness because why take a risk and then the police commission doesn't have your back. It seems like the only time you have prosecutorial zeal by a lot of these prosecutors is when they can go after a cop. Not one of these repeat offenders. And you just saw that by the way in LA look motherboard and New York Times just emailed and DMed me
SPEAKER_03: and then and then did you guys say that? Instead of solving
SPEAKER_02: these issues? The Board of Supervisors was dealing with a wild parrot? What was it? The meeting that was disbanded, they had or Yeah, they had
SPEAKER_01: scheduled a meeting to vote on whether the wild air force was parrots are the official animal of the city of San Francisco. So that was the scheduled meeting that got disbanded.
SPEAKER_03: Also, can I just clarify what she must talk about with the interview, a BBC reporter interviewed Elon and said, there's much more race and hate and hate speech in the feeds on Twitter. And he said, Can you give me an example? And he said, Well, I don't have an example. But people are saying this is it which people are saying it. And the BBC reporter said, Well, just different groups of people are saying it. And you know, I have certainly seen you said, Okay, you saw it and for you, he goes, No, I stopped looking at for you. He said, so give me one example of hate speech that you've seen in your feed. Now we without speaking about any inside information, which I do not have much of, they've been pretty deliberate of removing hate speech from places like for you. And here, it's a very complicated issue when you have an open platform, but the people may say a word, but it doesn't reach a lot of people. So if you were to say something really nasty, it doesn't take a genius to block that and not have it reach a bunch of people. This reporter kept insisting to Elon that this was on the rise with no factual basis for it that other people said it. And then he said, but I don't look at the feed. He said, so you're telling me that there's more hate speech that you've seen, but you just admitted to me that you haven't looked at the for you feed in three months. And it was just like this completely weird thing. I just had mother called in a lie. He called him in a
SPEAKER_00: lie. He caught him in a lie. And this is the thing. If you're a
SPEAKER_03: journalist, just cut it down the middle. Come prepared with facts. Listen, stop taking a position either way. I want to
SPEAKER_01: connect one dot, which is that he filled in his own narrative, even though the data wasn't necessarily there in the same way that, you know, we kind of filled in our narrative about San Francisco with the Bob Lee, you know, murder being another example. We put a disclaimer on it. We said, hold on a second.
SPEAKER_00: We said we knew we didn't know. And furthermore, we're taking great pains this week to correct the record and explain what we now know. Yeah, we're he was intellectually honest. This is just intellectual
SPEAKER_03: honesty. Honestly, you're you're you're doing soft here,
SPEAKER_00: freeburg. You're getting gaslit by all these people.
SPEAKER_01: Okay, anyone? I think the guy totally totally had zero data. By the way, when you're a journalist, you're supposed to report on data and evidence. So he's certainly, you know, I think completely with Don Carmen, Yani. It's the same story.
SPEAKER_00: Yeah. This is a Don Don happened to survive. Guys. I love I love
SPEAKER_01: you, but I gotta go. Goodbye. Here's what Maxwell from mother
SPEAKER_03: body. Have fun. There's been a lot of discussion about the future of San Francisco and the death has quickly become politicized. Has that caused any division or disagreement from what you've seen? Or has that not been the case? The press is
SPEAKER_00: gleeful right now. They're gleeful. Like, oh, my God. They're fighting. Just like the right was gleeful with Jesse
SPEAKER_03: Smollett. Having gotten himself beaten up or you know, setting up his own. All right, everybody for the Sultan of Science currently conducting experiments on a beach to see exactly how burned he can get with his SPF 200 under an umbrella wearing a sun shirt and pants. freeburg freeburg on the beach was the
SPEAKER_02: same outfit astronauts wear when they do spacewalks. Hey, stay able diffusion. Make me an image of David freeburg wearing a
SPEAKER_03: full body bathing suit covered in SPF 200 under three umbrellas sunny beach. Thank you. Oh, my God for the dictator. Chima polyhappity creating regulations. And the regular Oh, the regulator you can call me the regulator regulator. See you tonight when we'll eat our orchelons what's left of them. The final four or five orchelons in existence or in a cage. Otherwise, I'm putting you on the B list today. If you're like,
SPEAKER_02: I will be there. I'll be there. I promise. I promise. I promise.
SPEAKER_03: Can't wait to be there. And the rain man himself. Namaste. I didn't even get putting Ron. Oh, we'll talk versus Nicky.
SPEAKER_03: I think you should ask auto GPT how you can eat more endangered
SPEAKER_00: animals. Yes, we have a plan for you. Yes. And then have it go
SPEAKER_03: kill those animals. Real world. Put something on the dark web to go kill the remaining rhinos and bring them to chamat house for poker night. I don't think rhinos are taste good. Wasn't
SPEAKER_00: that the movie? It was a Oh, did you guys see his cocaine bear
SPEAKER_02: out yet? No, it was a Matthew Broderick Marlon Brando movie
SPEAKER_00: right where they're doing the takeoff on the Godfather was the first. Yeah, yeah, yeah. It's like a conspiracy to eat
SPEAKER_00: endangered animals. Yes, the freshman. The freshman came out in 1990. Marlon Brando did it with Matthew Broderick. And like Bruno Kirby. They actually they that was the Kirby. That's a
SPEAKER_02: deep they were actually they were eating endangered animals.
SPEAKER_03: What do you what do you think he too? Is that going to be good? Sax I know he's one of your favorite films. Me too. It's awesome. Is there a sequel coming? They're gonna do he took
SPEAKER_03: the novels already come out. Adam drives all the novel. Yeah,
SPEAKER_00: he's amazing. He does. One of those movies where when it comes
SPEAKER_00: on, you just can't stop watching. To screener. Best bank
SPEAKER_00: robbery slash shootout in movie history. You know, that is
SPEAKER_03: literally the best film ever. Like it's up there with like the Joker with Reservoir Dogs. The Joker in that Batman movie where he robs the bank like, I mean, what I love you guys. All right, love you besties. And for blah, blah, blah, blah, blah. This is gonna be all in podcast 124. If you want to go to the fan meetups and hang out with bye bye. Bye bye.
SPEAKER_03: What your winners ride Rain Man David
SPEAKER_00: we open source it to the fans and they've just gone crazy with it. Love us
SPEAKER_02: Queen of
SPEAKER_00: besties are dog taking we should all just get a room and just have one big huge or because they're all like this like sexual tension but they just need to release I have your Beep beep, wet your beep. Wet your beep. Beep. Wet.
SPEAKER_03: We need to get merch.