The Truth About Building AI Startups Today

Episode Summary

The Light Cone podcast episode explores the current state of AI startups. The hosts - Gary, Jared, Harj and Diana - are Y Combinator group partners who work with AI founders. They explain why so many of their recent batch companies use large language models (LLMs). Rather than YC seeking out AI startups, ambitious founders are choosing to build companies using this new technology. Many observers dismiss these startups as mere "GPT wrappers", but the hosts push back on that characterization. Well-designed user experiences still matter, even when leveraging LLMs under the hood. The best ideas solve specific problems with customized data and business logic, not just generic automations. As examples, they discuss startups parsing government contracts, improving sales workflows, and streamlining regulatory compliance.

The hosts explore other promising spaces, like data privacy tools for enterprises sharing sensitive data with LLMs. They also examine the tradeoffs between general foundation models and custom-trained models tailored to niche use cases. Overall, there is a proliferation of new company ideas thanks to recent advances in AI. However, the hosts warn entrepreneurs to avoid "AI tar pits" - ideas that sound exciting but have little actual product-market fit. The AI copilot concept draws interest, but many customers struggle to find practical uses. Broad promises to fine-tune open source models also rarely retain customers. The most durable startups solve concrete problems for specific customers.

In closing, the hosts reflect on how this AI boom mirrors prior technological revolutions. Once again the earliest pioneers are hardcore technologists dismissively labeled as "geeks" - yet history suggests they may build the next generation of iconic companies. For entrepreneurs willing to search for valuable problems hidden in plain sight, it is an unprecedented time full of opportunity.

Episode Show Notes

In the first episode of the Lightcone Podcast, YC Group Partners dig into everything they have learned working with the top founders building AI startups today. They share the ideas that are working particularly well and the mistakes to avoid, and take a look at the competitive landscape among the current AI giants.

Episode Transcript

SPEAKER_02: How would you differentiate between an idea that could be a great foundation for a billion-dollar company and an idea that is likely to get run over by GPT-5?

SPEAKER_00: Something that's boring might actually be an incredible business, but why is that? Yeah, let's talk about GPT wrappers.

SPEAKER_03: Are people worried about giving these datasets to OpenAI?

SPEAKER_01: All these AI agents are passing the Turing test. I mean, this is why I think the chat interface is wrong.

SPEAKER_02: You want to do something in AI? This is a good place to look into. Big generational companies are getting built as we speak.

SPEAKER_02: Great startup ideas are just lying on the ground. You'd trip over them.

SPEAKER_03: This might actually be a once-in-a-lifetime opportunity, and I think I actually agree.

SPEAKER_00: What a time to be alive! Welcome to the very first episode of The Light Cone. I'm Gary, this is Jared, Harj and Diana, and we're group partners at Y Combinator, and we get to work with some of the best founders in the world. Jared, why are we calling it The Light Cone?

SPEAKER_02: Well, in special relativity, the light cone is the path that light takes from a flash of light. You can imagine a flash of light, and it spreads out in a cone shape. And in special relativity, you think about it spreading out in a cone both into the future and into the past. And in this podcast, we are here in the present, but we are going to talk about both the past and the future of technology. So that's how we came up with the name.

SPEAKER_00: And one of the things that we're all seeing is the encroachment of AI into almost every piece of society at this point. Every business transaction, everything we do with computers: suddenly a new burst of technology is entering everything we're doing. And we're seeing it in the startups that we're funding, which is why we're so excited about it. What's the percentage of companies you've backed right now that are using large language models?

SPEAKER_02: I think for the Summer 2023 batch it was close to 50% of the batch. And it's pretty interesting. A lot of people see that number and think, oh, YC must have funded so many AI companies because we have this thesis about AI, and it's just easier to get into YC if you're an AI company, because we just love funding AI companies. And it's funny to us, because we know that's not true. And yet that's probably how 90-plus percent of people actually think YC works. How does it actually work? Shall we tell people how it actually works?

SPEAKER_02: Actually, it's interesting.

SPEAKER_03: The smart founders apply to us with what they want to work on, and we fund the smart founders, irrespective of what they want to work on, actually.

SPEAKER_02: Exactly. And so the fact that half the batch is working on AI says something much more interesting than just "the YC partners think AI is cool." It's an emergent phenomenon of what the smart founders want to work on right now.

SPEAKER_01: It's where they think there's the highest beta to build the largest company. And I think the most ambitious and smartest founders are going after this, because the exciting thing about right now with AI is that it's real. There have been a lot of waves of AI and multiple AI winters, but this one is different: GPT-3.5 and then GPT-4 blew a lot of tasks out of the water. It impressed a lot of smart people. When a lot of smart people start paying attention and building in this current idea maze, I think big generational companies are getting built as we speak.
SPEAKER_03: One thing I'm seeing that's interesting is that a lot more founders are dropping out of college to start working on AI. Because there's a FOMO.

SPEAKER_01: Yeah, there's an actual FOMO.

SPEAKER_03: And usually it's so funny: my interview question is almost always, what's the rush? Why do you want to drop out of college? Why don't you just graduate? Because it makes a lot more sense to graduate and then do a startup. And the reply is usually, well, this might actually be a once-in-a-lifetime opportunity. And I think I actually agree.

SPEAKER_02: And the other cool thing is that this is an opportunity where college students, young founders, are particularly well positioned to work on it, because there's no one walking around with four years of LLM experience. So everyone is starting from the same playing field, and if you can learn fast, you're going to be at the same level as everybody else.

SPEAKER_03: That's right. And one area where I've seen that come into play is developer tools for prompt engineering. I've been seeing these sorts of tools get uptake: the ability to chain together different prompts, test your prompts, and see the second-order effects. And actually, a lot of college students are the people who are just playing around with prompting models and seeing what comes out. It's a really easy startup idea for them to just build the tools that they want, and the tools that they want are literally setting the standard for what every developer should want. I know a lot of the headlines are all around AGI and the fancy stuff, and the really cool demos of multimodal AI, like AI-generated video. The stuff I've seen in the batches that's actually taking off is a little more mundane. I'd say a lot of it is workflow automation: finding places where there was a human doing some repetitive task, usually involving searching for things or filling out forms, and then using LLMs to replace that.

SPEAKER_02: It feels very obvious to us, the people who work at YC, that this is an amazing opportunity. There are so many jobs in the world that are basically very mundane information processing, typically stuff that's hidden in some back office somewhere, where there's somebody who's just reading stuff and summarizing it, re-entering it from one system into a different system in a slightly different format. And it's such a perfect fit for LLMs. LLMs are perfect for this job.

SPEAKER_02: And yet we actually don't get that many applications from people working on this. And there are a lot of founders out there searching for a great idea. So if you're out there and you're looking for a great startup idea and you want to do something in AI, this is a good place to look into.
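To make that kind of back-office automation concrete, here is a minimal sketch of using an LLM to pull structured fields out of a free-text record so they can be re-entered into another system. The field names, prompt, and model choice are illustrative assumptions, not details from any company mentioned in the episode; it assumes the openai Python client (v1+) with an API key in the environment.

    # Minimal sketch (illustrative only): use an LLM to turn a free-text record
    # into structured fields that can be re-entered into another system.
    # Assumes the openai>=1.0 Python client and an OPENAI_API_KEY in the
    # environment; the field names and prompt are invented for this example.
    import json
    from openai import OpenAI

    client = OpenAI()

    def extract_fields(document: str) -> dict:
        prompt = (
            "Extract the vendor name, invoice number, total amount, and due date "
            "from the text below. Reply with one JSON object using the keys "
            "vendor, invoice_number, total, due_date.\n\n" + document
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},  # ask for machine-readable output
            temperature=0,
        )
        # The parsed dict can then be pushed into whatever downstream system
        # a human previously re-typed these fields into.
        return json.loads(resp.choices[0].message.content)

    # Example: extract_fields("Invoice #4471 from Acme Industrial, $12,300 due March 15.")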
SPEAKER_03: I'll give you an example. Last batch I had a company I worked with called Sweetspot, and we funded them. The idea was something about food ordering from food trucks, something random, and they pivoted immediately, looking for a new idea. And the idea they found was using LLMs to automate searching for government contracts to bid on. Oh my God, that's such a good idea. Yeah, and submitting the proposal. That sounds so boring. What could be more boring than searching through a list of all the government contracts?

SPEAKER_03: You know how they found it? They were exploring startup ideas, and then they realized that one of their friends had a job working for one of these government contractors, and his whole day was just spent refreshing this government website to find things and then submit proposals. And they were like, what? That's exactly it. That's so boring. Wouldn't you like a tool that did this for you? And they launched, and pretty much straight out of the gate got a decent amount of traction, because they're opening it up to the people who would actually do this work. It becomes easier to find government contracts to bid on when it's all automated away and software does it for you.

SPEAKER_00: You know, we all know that something that's boring is actually kind of awesome. But why is that? Just off the bat, we have a sense that something that's boring might actually be an incredible business. There's an old PG essay where he talks about this, and he quotes a phrase: where there's muck, there's brass.

SPEAKER_02: Nice. It's almost like old English. You want to explain it, Harj? It just means that you can find treasure in surprising places. Yeah.

SPEAKER_01: And I think the cool thing is you have to go deep and vertical and solve a very concrete problem. Which gets at some of the problems with... let's maybe talk about AI tar pits.

SPEAKER_02: A tar pit idea is an idea that from the outside looks really shiny and attractive. It looks like a great startup idea, so lots of founders go and start working on it. And then you realize, once you're in it, that it's actually not a good startup idea, but by the time you're there, you're stuck in it. And so it just attracts founder after founder, and they just get stuck in the tar pit idea. We see this a lot at YC because we see all these applications, so it's really obvious to us when 500 people apply to YC with the same idea. But they don't know that 499 other founders are also stuck in the same tar pit. What's tricky, I think, about tar pit ideas for AI is that we only know something's a tar pit in hindsight, once enough people have been stuck in it.

SPEAKER_03: And with AI, it's so new that we don't know yet. So I have a couple that I'm actually keen to get your thoughts on. A very common one is the AI copilot. It's like, hey, I'm going to make it easy for people to build an AI copilot for their product or service. It's this really unusual type of phenomenon where there's so much interest from potential customers who want a copilot. It's actually quite easy to start getting inbound leads if you pitch this, and it's even easy to get people to pay you money upfront. But what's really hard is to get them to actually use the copilot, because they don't actually know what they want it for. They just heard that AI copilots might be changing the future of software, so they figure they should have an AI copilot. But they don't actually know what their customers will use it for.
SPEAKER_00: I think, for me, and maybe I just have a mental block around chat interfaces, but I've never been that big a fan of chat, because it puts so much of the emphasis on the user knowing how to speak to a computer. In the next five or ten years, I think we will get far more used to using it that way. But I think the low-hanging fruit right now is just using the large language model to actually do the sort of knowledge work that a human being could do, and then packaging it into a UI, whether it's a mobile app or a web app, that is familiar, roughly what people use to do their work right now. The LLM is better used as this thing that's sprinkled in, so the software suddenly does something really powerful, but you don't have to change the way you would want to use the software as it is.

SPEAKER_02: It's an example of a phenomenon I think we have seen in the past, when some technology gets really hot and all of a sudden all these companies are being asked, what's our AI strategy? They're like, oh, well, we'd better get an AI strategy. Or with crypto, everybody needed a blockchain strategy. And even before that, everybody needed a mobile strategy. For a moment in time, it's easy to sell them something that placates their desire to check some box. But in the end, you've got to actually make it successful for them. Otherwise it's not going to stick.

SPEAKER_03: I agree. And so perhaps with this AI copilot thing, maybe it's too early to call it; perhaps they actually will find product-market fit.

SPEAKER_02: Maybe with something that's not a chatbot UI. They'll keep iterating on the UI until they find something that's an AI copilot people actually want. Or maybe it's just going to fizzle, and it turns out most people don't need an AI copilot. Some of the advice I've been giving those specific companies is another old PG essay: if you're trying to sell technology to someone and they're not buying, see if you can just build a competitor.

SPEAKER_03: And so it's like, hey, if you're trying to sell a fintech company a copilot and they're not buying it, well, if you are convinced they should have a copilot, why don't you just build the company with the copilot as the main experience and see if you can out-compete them or not?

SPEAKER_01: I like that. I think it's about getting people to focus on the use case. The problem with this whole gold rush is that people are selling the shovels and the tools, and even in this case it is a bit of that, but a lot of people aren't digging for gold yet. The reality is this is such a new technology, and even the end applications that apply AI are so early that they don't have product-market fit. So it's a bit of the blind leading the blind here. How would I even know what the pattern is for a copilot? It sounds cool just to join the cool-kid club of "we're doing AI" and get the checkmark. So I think that's the danger for a lot of these startups. It seems that they're getting traction, as you mentioned, but then when we poke closer, is anyone actually using you? What are the actual use cases? And the founders start blanking at us: oh, but look at all the sign-ups, look at the revenue. But then they're not really using your product.
SPEAKER_00: I mean, we're seeing even the second-order effects, right? A bunch of us are funding dev tools companies that sell to AI companies, selling tooling. But then they might sell an enterprise contract to someone who, upstream, has a Fortune 100 customer that said they'd pay $100,000 a year for that contract. And then six to nine months later, that Fortune 100 went back to the incumbent, some leading IBM or Salesforce or something like that, because the incumbent ended up adding large language model technology to what they were doing, and people just switched back. And suddenly the dev tool company realizes, oh, I had five contracts, but three of them went away because my customer actually lost their customer. So it's actually remarkable how fast this is evolving right now in 2024.

SPEAKER_03: A specific type of idea I'm curious to get thoughts on here as well is broadly offering fine-tuning of open source models as a service. That was a very popular idea over the course of 2023. Here's what I've seen. Why is there any demand for a fine-tuned open source model at all? Initially, I think the big driver was cost: OpenAI and ChatGPT were expensive, and people wanted a cheaper version. So it was very easy to get customers with the pitch of, hey, we can fine-tune an open source model and it's just going to be much cheaper. What I think a bunch of the companies in the space are seeing is that that's not enough to keep the customers, especially because the cost of all of OpenAI's models just keeps going down, and that's going to keep happening; OpenAI has a plan for all of that.

SPEAKER_01: So there's something more that all these fine-tuning companies need to do.

SPEAKER_02: It has to be better, not just cheaper.

SPEAKER_01: Where I think it has more legs is when these companies need to customize models on private data sets. So you have the open, general, big foundation model, but then you have to tune it on the specific data sets that, for example, healthcare or fintech companies can give out, and they don't have the team of experts to do it. I think the one company doing that, which Brad worked with, was Credal.

SPEAKER_03: What are you seeing about the concern around data privacy? That's another big reason. Are you seeing that as being enough? Are people worried about giving these data sets to OpenAI?

SPEAKER_00: It's really interesting. Whenever you have something so new like this, it sort of resets the clock on the competitive landscape, so you can almost expect all the same things to happen again. Just as 10 or 15 years ago cloud was brand new, and then you had cloud cybersecurity and CrowdStrike and all these companies come out, we're seeing the first wave of cybersecurity companies here, like PromptArmor. They wrap your API calls. And what they've figured out is that for a lot of large language models, if you do any sort of fine-tuning or training with private data, you can actually just talk to the model and get it to spit out your private data again. And they have a solution that stops that. It's so interesting, because it's entirely possible they're basically creating a new industry again: cybersecurity for LLMs, in the same way that cloud opened up that space and created cybersecurity for the cloud.
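The episode doesn't go into how PromptArmor actually does this, so the following is only a toy sketch of the general pattern of wrapping model calls in a guard layer that filters what comes back. The regexes, labels, and wrapper function are illustrative assumptions, not anyone's product.

    # Toy sketch of a guard layer around model I/O: wrap any prompt-to-text
    # function and redact sensitive-looking strings before they reach the
    # caller. The patterns and labels are illustrative assumptions only.
    import re
    from typing import Callable

    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def guarded(generate: Callable[[str], str]) -> Callable[[str], str]:
        """Wrap a prompt->text function so obvious secrets never leak through."""
        def wrapper(prompt: str) -> str:
            text = generate(prompt)
            for label, pattern in PATTERNS.items():
                text = pattern.sub(f"[REDACTED {label}]", text)
            return text
        return wrapper

    # Usage: safe_generate = guarded(my_model_call), where my_model_call is
    # whatever function sends the prompt to your (possibly fine-tuned) model.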
SPEAKER_03: Yeah, I definitely think that whole world of controlling, within an enterprise in particular, which LLM has access to what data and who has permissions, is a really ripe space for building interesting software.

SPEAKER_01: I think the other exciting area where a lot of tools are getting built, and this is a step beyond fine-tuning, is purpose-trained models that are smaller. Take, for instance, Llama, and get it to run locally on machines for inference. When you customize a model and train it on a specific domain and targeted data, it's going to perform better than the general model. The general model was trained on all of human language for all tasks. But if you wanted to build, say, the best language model for parsing SQL queries, you would train it very specifically on just the data set for SQL queries. Some of the interesting companies we fund here include Ollama, which is trying to make the development process for running all of these locally a lot faster, and I think we're also funding some that are custom models for coding. The thing that surprised me, learning from some of the startups building coder-type copilots, which I think is a use case that is working out, making a lot of the programming workflow faster with autocomplete and copilot-type features, is that they're training on older GPT models. They don't even need the newest one. I asked, why is that? Even for one of the companies we funded last batch, Metalware, which does this for hardware, they're not using the state-of-the-art model. The older GPT, I forget which one, the older 2.5 or 3, was sufficient and actually produced good enough results, because the vocabulary for a specific domain like hardware or software is a lot smaller than all of human language. So this is another area where the customized open model is, I think, going to win and compete against the big one for specific domains. There are lots of companies doing this.

SPEAKER_00: Yeah, Tobi Lütke from Shopify actually still dabbles with this stuff. I think he actually built the internal copilot for Shopify. And what he was saying is: the best way to use GPT-4 or whatever the latest closed source models are, the most expensive ones with the most parameters, is to think of them as a prototyping tool. Anything you can do with those prompts, you can get your own model to do with a little bit more training.

SPEAKER_01: It's kind of like when people build hardware: you have the analogy of prototyping with FPGAs, which are very expensive, right? And then when you have the right architecture for the hardware, you do the circuit path and build the custom SoC. So right now, for some of these tasks, the large language model, GPT-4 or whatever, is sort of like your FPGA. And then when you customize it, you build a super-efficient one for, I don't know, a coding assistant for Shopify, or hardware, or software, and that becomes your SoC that you train and customize, which is cool. I think that pattern is emerging.
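As a concrete illustration of that "small, purpose-trained model running locally" pattern, here is a minimal sketch that sends a narrow, domain-specific task to a locally served open model. It assumes an Ollama server running on its default local port with a pulled model; the model name, table schema, and prompt are made up for the example.

    # Minimal sketch: send a narrow, domain-specific task (question -> SQL)
    # to a locally served open model. Assumes an Ollama server on its default
    # port with a pulled model; the model name and schema are invented here.
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"
    SCHEMA = "orders(id, customer_id, total, created_at), customers(id, name, region)"

    def question_to_sql(question: str, model: str = "llama3") -> str:
        prompt = (
            f"Given the tables {SCHEMA}, write one SQL query that answers: "
            f"{question}\nReturn only the SQL."
        )
        resp = requests.post(
            OLLAMA_URL,
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=60,
        )
        resp.raise_for_status()
        # Non-streaming responses put the full completion in the "response" field.
        return resp.json()["response"].strip()

    # Example: question_to_sql("Total order value by region last month")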
SPEAKER_03: So as I hear you talk about that, Diana, what's interesting is I just think of how many different startups could be built. It just feels like we've never had this moment. At least I don't feel like I've ever experienced a moment where there were so many potential startup ideas to be built all at once.

SPEAKER_02: Yeah, there absolutely hasn't been. We definitely saw this in the last batch with all the pivoting companies. Oh, yes.

SPEAKER_02: People don't always realize this, but many of the companies that get into YC are, within a month after we fund them, looking for a new idea, because the old thing didn't work or they lost interest in it or something. It's normally not actually that easy to find a great startup idea for a team to work on, but man, was it easy last summer. Great startup ideas were just lying on the ground. You'd trip over them.

SPEAKER_02: Yeah, that was fast.

SPEAKER_01: I think you actually had a tweet about it that went pretty viral, about how this was the batch, in your whole career working at YC, where founders got to good ideas the fastest ever.

SPEAKER_03: And Harj has been here even longer. Yeah, it definitely feels unique. I've never had so many successful pivots.

SPEAKER_03: And Gary, to your point about the ChatGPT wrapper, I think back and I feel like that meme really came out just about a year ago. Yeah, let's talk about GPT wrappers. I feel like the first group of ideas I saw in the batch were generative AI ideas built on top of ChatGPT. So it was stuff like, hey, automate your marketing copy, or automate your creative content, or something like that. And that term got thrown around: these things are all just wrappers on top of ChatGPT, and OpenAI is going to take all of that. You're just going to build all of these things, and then they're going to release their app store and it's just going to take all the value, and these things will die. All of SaaS software is just MySQL wrappers.

SPEAKER_00: Exactly.

SPEAKER_02: I think this is a great analogy. You can think about any SaaS product as basically a database wrapper. You could imagine negging any SaaS product that way, because the first version of a SaaS product is basically just a CRUD app: you took MySQL and then you built a website on top of it. And I think people are going to look back on this term "GPT wrapper" similarly to how we would look at the term "database wrapper," which just seems silly.

SPEAKER_00: I mean, this is why I think the chat interface is wrong. I actually think there is value accrued to really great UX: good copy, good interaction design, information hierarchy. Being able to approach a product and say, this is the job to be done, and for users to come in and just naturally understand what to do. There is a craft to building software that is timeless and that transcends whether or not you're using a large language model. And that, I think, is what I mean when I say SaaS software is not a MySQL wrapper.

SPEAKER_02: Well, here's a question I'd be interested in everyone's thoughts on. Suppose you're a new founder and you really want to build a big company, and you want to do something on top of LLMs. How would you differentiate between an idea that could be a great foundation for a billion-dollar company, and an idea that is likely to get run over by GPT-5 and is probably not a good starting point?
SPEAKER_01: I think if a founder is working on something too general, they're not solving a specific need for a user they can actually go talk to. So I worry about the ones that are too generic, building and going after some kind of abstract "it will solve all the things."

SPEAKER_03: Yeah, if it's "throw your data in here and we'll do automations on top of it, for everything," that's probably hard to compete with whatever one of the foundation models might offer. But if it's "give us your sales log data and we'll spit back suggested next actions for your salespeople, to make them better at sales," that's probably going to work better.

SPEAKER_01: Or "give us all your compliance checklists to pass HIPAA compliance and we'll process that." That's very specific, with lots of business logic. Or "give us all of your data for processing government forms," right? So a lot of custom business logic, and it's the same thing as in the SaaS era. In how you build applications, there's always the separation of business logic; that's how a lot of the architectures for these apps are structured. And a lot of the value of the company accrues in that business logic, which is so custom per company. And there's a whole set of programming patterns for how people separate those layers.
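A minimal sketch of that "thin model, thick business logic" idea, under stated assumptions: the model drafts suggestions from a sales log, and custom vertical-specific rules decide what actually gets surfaced. The field names, allowed actions, and the generate() parameter are illustrative, not any particular company's product.

    # Sketch of "thin model, thick business logic": the LLM drafts suggestions,
    # but vertical-specific rules decide what is actually surfaced. Field names,
    # allowed actions, and the generate() parameter are illustrative assumptions.
    import json
    from dataclasses import dataclass
    from typing import Callable, List

    ALLOWED_ACTIONS = {"send_followup_email", "schedule_call", "send_pricing", "no_action"}

    @dataclass
    class NextAction:
        account: str
        action: str
        priority: str  # "high" or "normal"

    def suggest_next_actions(sales_log: str, generate: Callable[[str], str]) -> List[NextAction]:
        prompt = (
            "From the sales call log below, suggest one next action per account as a "
            "JSON list of objects with keys account, action, priority. Allowed actions: "
            + ", ".join(sorted(ALLOWED_ACTIONS)) + ".\n\n" + sales_log
        )
        suggestions = json.loads(generate(prompt))
        actions = []
        for item in suggestions:
            # Business logic lives here, not in the model: drop anything outside
            # the allowed set, and force human review for pricing-related steps.
            if item.get("action") not in ALLOWED_ACTIONS:
                continue
            priority = "high" if item["action"] == "send_pricing" else item.get("priority", "normal")
            actions.append(NextAction(item.get("account", "unknown"), item["action"], priority))
        return actions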
SPEAKER_03: Yeah, and as this all goes multimodal, this is going to be really interesting. In the early days we've seen companies work on voice AI apps to be a sales rep. And I think an interesting example of the kinds of ideas that might be possible now with AI is where you take something like Salesforce and you try to reimagine what Salesforce would do if it were started today with all the power of AI. It would almost certainly do more than just be a CRM, right? It would find who your leads might be, maybe now it can make the calls for you, it could set them up, maybe it goes all the way to implementing the first version of the product for them. The scope of software you can build with AI now is so big. I think that's another good way to find ideas: look at software today and reimagine it with the power of AI today. And we fund a number of companies that are effectively AI voice agents for small businesses, because if you're, I don't know, a flower shop or an AC repairman in the middle of the US...

SPEAKER_01: There are a lot of calls for you to schedule, and you don't have a lot of it automated. And there are these YC companies building AI voice agents to basically be their receptionist.

SPEAKER_03: I know one of our partners, Paul Buchheit, is quite worried about this, actually. He's worried there's going to be a world of all these AI agents out there trying to do malicious things, and that we're going to need our own good, defensive AI agents making sure we don't get scammed out of all of our money.

SPEAKER_00: I mean, this is actually why I'm such an advocate for open source AI, because these are real considerations. Can you imagine there being only one hyper-dominant AGI, and it's totally closed source, owned by one company, and only available to the highest bidder? Imagine being someone who just had to go to the doctor, and on the other end is some health insurance company that bought access and blocked it out from everyone else, and you get on the phone and you're not able to navigate around or push back against this impenetrable AGI that can get around anything your side might throw at it. We actually want some form of equity at the AI level. We want not merely the biggest companies to own the most capable AIs; we want all consumers, from the bottom up, to have access to that same technology. That's the best insurance against tyranny.

SPEAKER_01: I'm certain that's what a lot of people think, not just founders but the smartest researchers who are really at the cutting edge. So I went to NeurIPS this past December, and it was incredible to see the energy there. The conference has grown so much; I think there were over 10,000 attendees and more than 3,000 papers accepted. Back in 2017 there were only around 600 papers, and when I went back in 2010 it was just in a ski lodge with maybe 100 papers. The exponential growth is crazy. One of the big topics of interest was AI ethics and regulation and how we measure that, which was interesting. But the thing that was different about this conference was the amount of interest from researchers wanting to start companies too. One interesting data point: a lot of this GPT era came out of one foundational paper, "Attention Is All You Need," which was released at NeurIPS back in 2017. It was a team at Google trying to figure out how to make machine translation between languages cheaper, because English translation to any language is actually pretty good, but if you wanted to do, I don't know, German to Japanese, there's not enough data. So they figured out this way to compress data, which became the transformer models behind GPT. It was groundbreaking, and it's the foundation for LLMs. That paper came out in 2017, and the fun fact, I was just looking this up, is that of its eight authors, seven started different companies, and all of those companies together are worth more than six billion dollars in valuation. Now people are seeing all these industry pioneers do this, and it's creating this new crop of founders that I don't think would have started companies otherwise, because I've talked to a lot of AI researchers and I don't think they wanted to be founders. I got a lot of this question: how can I turn my paper into a company? Which I think is cool, because this is going back to the roots of what I see as funding hardcore technical founders, and it's cool to see that energy there. So when we went and hosted our event, we didn't plan for it and it was 3x oversubscribed.

SPEAKER_00: Standing room only, huh?

SPEAKER_01: Yeah.

SPEAKER_00: That sounds like really the new Homebrew Computer Club: NeurIPS in December.

SPEAKER_01: We've got to mark it on the calendar. We'll come back. Yep.
SPEAKER_02: Diana, I love your point about how this is sort of returning YC to its roots. It definitely felt that way last summer, because when YC got started, the internet was really new, and the people who were building stuff on the internet were mostly technologists. It was actually pretty hard to build websites back then, and pretty hard to build good software. And as building software and building websites got commoditized, a lot more people came into the space. So this is a cool reversion back to the origins, where the people building the most interesting stuff are mostly really hardcore researchers and technologists, because there's actually real new technology being invented. It's not just innovating on business models on top of commoditized technology.

SPEAKER_03: And again, just like every great technology, it's being dismissed, right? Much like the ChatGPT wrapper meme. I actually think that was great for YC, because it meant we only got the people who could tune that out, the people who say, hey, I'm just so interested in this technology that I don't care what the memes are, or I'm just too busy building to pay attention to the memes on Twitter, which is also great. But I feel like this has always been the case, right? The Homebrew Computer Club, PCs dismissed as toys, the internet dismissed as a toy, all of these things. So it feels like that moment again.

SPEAKER_00: Yeah, there's a classic essay that I love that I saw off Hacker News. Do you guys remember this? It's "Geeks, MOPs, and sociopaths in subculture evolution." And I think that's the one thing that's quite durable and keeps returning: it's always the geeks who are going to be into the tech, no matter what. They're on the cutting edge. I always think of Steve Wozniak talking about how they started Apple Computer with no idea it would ever be a company; they just wanted computers for themselves and their friends. And at some point the sociopaths come along and start monetizing the people who come to the scene, and then the cycle returns and repeats. So that's why I like being at the beginning of a new cycle, and clearly AI is exactly that. So don't count it out. Don't write it off. It's one of the most interesting things happening out there. But there are clearly things to be careful of: don't be attracted to the new shiny thing. Instead, look for the muck, because where there's muck, there's brass. So that might be a great place to call it for the very first episode of The Light Cone. We'll see you next time.