Liquid AI's Ramin Hasani on liquid neural networks, AI advancement, the race to AGI & more! | E1928

Episode Summary

In episode E1928 of "This Week in Startups," host Jason Calacanis interviews Ramin Hasani, CEO and co-founder of Liquid AI, about his company's innovative approach to artificial intelligence (AI): liquid neural networks. Hasani explains that Liquid AI's mission is to design AI systems from first principles, rooted in biology and physics; he invented liquid neural networks during his PhD program. These networks are inspired by the nervous system of the C. elegans worm, which shares 75% of its genes with humans and has a fully mapped nervous system.

Unlike traditional AI systems that are fixed after training, liquid neural networks remain adaptable, offering more dynamic and robust responses. Hasani's work focuses on making AI models smaller, more efficient, and capable of running on minimal hardware like a Raspberry Pi, without sacrificing performance. This approach could reshape the AI field by reducing the energy and computational resources needed to train and deploy AI models. Liquid AI has raised significant funding and is collaborating with major system integrators worldwide to commercialize the technology across sectors including finance, healthcare, and automotive.

The conversation also touches on the broader implications of AI advancement, including the race toward Artificial General Intelligence (AGI), the potential for AI to solve major global challenges, and the ethical considerations of AI development and deployment. Hasani believes that significant leaps in AI capabilities will arrive within the next two to five years, possibly including the first versions of AGI. He emphasizes the importance of explainability, so that AI systems remain understandable and controllable, contrasting Liquid AI's approach with the black-box nature of the larger, less interpretable models currently dominating the industry.
Overall, the episode provides a deep dive into the cutting-edge work being done at Liquid AI, the potential of liquid neural networks to transform AI, and the philosophical and practical considerations of advancing towards AGI.

Episode Show Notes

This Week in Startups is brought to you by…

LinkedIn Jobs. A business is only as strong as its people, and every hire matters. Go to LinkedIn.com/TWIST to post your first job for free. Terms and conditions apply.

Experimentation is how generation-defining companies win. Accelerate your experimentation velocity with Eppo. Visit https://geteppo.com/twist

Attio - A radically new CRM for the next era of companies. Head to attio.com/twist to get 15% off for your first year.

*

Today's show:

Liquid AI’s Ramin Hasani joins Jason to discuss the mission and the concept of Liquid AI's liquid neural networks (1:09). They dive into liquid neural networks’ applications (16:07), transition from theory to execution (21:37), their efficiency on small devices (27:30), and more!

*

Timestamps:

(0:00) Liquid AI CEO and co-founder Ramin Hasani joins Jason

(1:09) Liquid AI's mission and concept of liquid neural networks

(7:06) LinkedIn Jobs - Post your first job for free at https://linkedin.com/twist

(8:34) Demo of Liquid AI: traditional vs. Liquid neural networks in autonomous driving

(16:07) Practical applications of Liquid AI

(20:07) Eppo. Accelerate your experimentation velocity with Eppo. Visit  https://geteppo.com/twist

(21:37) Commercializing worm-inspired AI systems, building a team, and solving problems across various sectors

(27:30) Efficiency of liquid neural networks in compact devices like the Raspberry Pi and the transformative potential of AI modeled after worms.

(34:15) Attio - Head to https://attio.com/twist to get 15% off for your first year.

(35:24) Data ownership in AI and incentivizing data providers

(41:18) Role of AI in real-world applications

(43:57) Societal impact of AI, job displacement, and the optimistic view on AI's potential

(50:25) Explanation of physical models vs statistical models in AI and the challenge of understanding black box AI systems

(57:01) Speculations about AGI's market cap, who might achieve AGI first, and the potential of using AI systems to build various applications

(1:00:12) Comparison between open-source and closed-source models in AI and the trend of open-source moves in the AI industry

*

Mentioned on the show:

https://www.raspberrypi.com/products/raspberry-pi-5

https://www.capgemini.com/us-en

https://www.ctc-g.co.jp/en

https://www.accenture.com

https://www.ey.com/en_gl

https://www.cnn.com/videos/media/2024/04/02/the-daily-show-jon-stewart-ai-work-force-jobs-orig.cnn

*

Source of C. elegans worm footage:

https://www.youtube.com/watch?v=zjqLwPgLnV0&t=1s

*

Follow Ramin:

X: https://twitter.com/ramin_m_h

LinkedIn: https://www.linkedin.com/in/raminhasani

Check out Liquid AI: https://www.liquid.ai

*

Follow Jason:

X: https://twitter.com/Jason

LinkedIn: https://www.linkedin.com/in/jasoncalacanis

*

Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp

*

Thank you to our partners:

(7:06) LinkedIn Jobs - Post your first job for free at https://linkedin.com/twist

(20:07) Eppo. Accelerate your experimentation velocity with Eppo. Visit https://geteppo.com/twist

(34:15) Attio - Head to https://attio.com/twist to get 15% off for your first year.

*

Great 2023 interviews: Steve Huffman, Brian Chesky, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland

*

Check out Jason’s suite of newsletters: https://substack.com/@calacanis

*

Follow TWiST:

Substack: https://twistartups.substack.com

Twitter: https://twitter.com/TWiStartups

YouTube: https://www.youtube.com/thisweekin

Instagram: https://www.instagram.com/thisweekinstartups

TikTok: https://www.tiktok.com/@thisweekinstartups

*

Subscribe to the Founder University Podcast: https://www.founder.university/podcast

Episode Transcript

SPEAKER_01: If you have AGI, as you said, you can solve the energy problem. Once you solve the energy problem, you are basically the most valuable company on earth. Think about that. I mean, if you can solve economy, if you can solve politics, basically, the structure of governments, this is the thing that we are hoping to get. And there's a race to getting there.

SPEAKER_03: Do you think everybody gets there at the same time? Like AGI feels, no, you don't. You feel some people will get to AGI first? First, yes. Yes.

SPEAKER_00: This Week in Startups is brought to you by LinkedIn Jobs. A business is only as strong as its people, and every hire matters. Go to linkedin.com slash twist to post your first job for free. Terms and conditions apply. Eppo. Experimentation is how generation-defining companies win. Accelerate your experimentation velocity with Eppo. Visit geteppo.com slash twist. And Attio, a radically new CRM for the next era of companies. Head to attio.com slash twist to get 15% off for your first year. All right, everybody.

SPEAKER_03: Welcome back to This Week in Startups. We've got a great guest for you today with a great idea. Ramin Hasani is the CEO and co-founder of Liquid AI. And we're going to hear all about what Liquid AI is doing in a moment. But they're kind of headed in a new direction, trying to make smaller and more efficient language models. Welcome to the program, Ramin.

SPEAKER_01: Thank you for having me.

SPEAKER_03: Maybe, you know, just by way of introduction here, explain to me what the mission of Liquid AI is. And then let's get into, you know, sort of language models and, you know, the size of models and making them more efficient.

SPEAKER_01: Yeah, definitely. So I started a company to design, basically from first principles, systems that we can understand from scratch,
on a completely new base for artificial intelligence that is rooted in biology and physics. So we started looking into brains and seeing how we can get inspirations from there to design kind of a new math that we can understand and we can scale, basically. And that became the liquid neural network technology that I invented during my PhD program.

SPEAKER_03: Okay, liquid neural networks. What does it mean compared to, say, a traditional AI model, a large language model? So what's the difference? What is a liquid neural network? Let's explain that. And is that the term you came up with, or is this an industry term?

SPEAKER_01: Yes, that's something that I came up with. So believe it or not, about seven years ago, I started looking into the brain of a little worm. The worm is called C. elegans. The worm, in the tree of evolution, is one of our fathers. Okay, so basically nervous systems and our cellular kind of organization and everything evolved from this animal. This worm has already won four Nobel Prizes for us. It shares 75% of its genes with humans, and its entire genome is actually sequenced. It's one of the only animals on Earth, we actually have two animals now, whose entire nervous system is mapped. That means we know exactly, anatomically, how each part of the nervous system is actually connected to each other. Another nice behavior of this biological organism is the fact that its nervous system is... differentiable. What does that mean? Today's AI systems, as you know them, are basically a set of neurons in a layer-wise architecture next to each other. And they're connected through synapses, or weights of the neural network. And they become like a giant neural network that can do what ChatGPT can do today. We scale those kinds of neural networks into this kind of regime. Now, the way we train these systems on massive amounts of data is with a technology called backpropagation.
OK, backpropagation of errors. The underlying mathematics of these systems is differentiable. That means you can propagate errors without interruption inside the neural network, inside this gigantic functional form of neural networks. OK, this property doesn't exist in the human brain. In the human brain, neurons spike. So you've seen, I don't know, EEG kind of signals and stuff. You can see that there are spiking neural networks, okay? Spikes, we haven't understood yet. From nervous systems, we don't know why spikes work. We have no idea. We still don't know what's the purpose of the spike. I mean, some people say they do an analog-to-digital kind of conversion to propagate information much faster. We know a little bit about the learning theory around that. Like, Geoffrey Hinton is actually working on some forward algorithms, you know, non-backpropagation-based kind of methods and stuff. So there are local kind of learning rules and stuff that we figured out, but there is still so much that we don't know about how the brain actually does learning. But when we go back through animals, until we arrive at this worm, we don't have any nervous system that doesn't spike. So that's why I like this worm, because its nervous system is something that is very similar to the mathematics that we design artificial intelligence with. So I started basically modeling the behavior of cells inside this worm. And then this system became a new type of learning system. This learning system is flexible in its behavior, meaning that when you train it on data, the system still stays adaptable to incoming inputs. This is not the case with today's artificial intelligence systems. When you train them, they become kind of a fixed system.
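[Editor's note: to make "stays adaptable" concrete, here is a minimal sketch of Hasani's published liquid time-constant (LTC) neuron idea: each neuron's state follows a differential equation whose effective time constant depends on the current input, so even a trained network with frozen parameters changes its dynamics with the data it receives. The parameter values, the sigmoid gate, and the Euler step size below are illustrative choices, not from the episode.]

```python
import math

def ltc_neuron_step(x, u, dt=0.01, tau=1.0, w=0.5, b=0.0, a=1.0):
    """One Euler step of a single, simplified liquid time-constant neuron:

        dx/dt = -(1/tau + f(u)) * x + f(u) * a

    The effective time constant 1/(1/tau + f(u)) varies with the input u,
    so even with frozen parameters the dynamics shift with incoming data.
    (In the full LTC model the gate also depends on the state x.)
    """
    f = 1.0 / (1.0 + math.exp(-(w * u + b)))  # input-dependent gate (sigmoid)
    dx = -(1.0 / tau + f) * x + f * a
    return x + dt * dx

# The same frozen parameters respond differently to different input streams:
x = 0.0
for u in [0.0, 0.0, 5.0, 5.0, 0.0]:
    x = ltc_neuron_step(x, u)
```

Contrast this with an ordinary feed-forward layer, where the mapping from input to output is the same fixed function for every input once the weights are trained.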
So when you train the weights of a neural network, let's say in the case of GPT-4: GPT-4 has 1.8 trillion parameters. That corresponds to 1.8 trillion weights in the system. These weights are already trained, and they're fixed. Now that they're fixed, it has become an intelligent system. You can input information and then take output information. But the system is fixed. Liquid neural networks, on the other hand, are not fixed. They can stay adaptable to the incoming inputs. That's the major difference between the two.

SPEAKER_03: And that's an advantage because it will make the answers more dynamic or more real-time. What's the advantage? We're more robust. Okay, let me cut to the chase right now, because I know you're busy, and everyone is hiring right now. And you know, there's a lot of competition for the best candidates, right? Every position counts. The market's starting to come back. You need to get the perfect person. You want a bar raiser in your organization, somebody who will raise the bar for the entire team. And LinkedIn is giving you your first job posting for free to go find that bar raiser: linkedin.com slash twist. And if you want to build a great company, you're going to need a great team. It's as simple as that. LinkedIn Jobs is here to make it quick and easy to hire these elite team members.
I know, it's crazy, right? LinkedIn has more than a billion users. We all watched this happen when it was tens of millions, then hundreds of millions, and now a billion people using the service. This means you're going to get access to active and passive job seekers. Active job seekers, they're out there looking. Passive job seekers, they've got a job, but it's not as good as the job you're offering them. So you want to get in front of both of those people. Maybe somebody got laid off, wasn't their fault, and they're an ideal candidate; get that active job seeker. And LinkedIn also knows that small businesses are wearing so many hats right now, and you might not have the time or resources to devote to hiring. So let LinkedIn make it automatic for you. Go post an open job role, you get that purple hiring ring on your profile, you start posting interesting content, you watch the qualified candidates just roll in. And guess what, the first one's on us. Call to action, very simple: linkedin.com slash T-W-I-S-T, linkedin.com slash twist. That'll get you your first job posting for free on your boy J-Cal. Terms and conditions do apply. A demonstration would be great here, because this all sounds quite theoretical. So maybe we could walk through your product demo or your PowerPoint on how this all works.

SPEAKER_01: Yeah, definitely. Definitely. So we were talking about still the science of things, like how this became Liquid AI.

SPEAKER_03: And this is all based on a worm. What worm is it?

SPEAKER_01: It's a worm called C. elegans. It's a two-millimeter-long worm. It's a very, very tiny worm. Got it. But it's a very popular worm. Let me show you what would happen when you train a liquid neural network versus a typical neural network.

SPEAKER_03: Okay. And for those of you not watching, you can go to This Week in Startups on YouTube and find this episode. Just look at the recent videos, Tim.

SPEAKER_01: Yes.
What I'm showing is basically a dashboard of an autonomous driving system. It's a neural network, what I'm showing here in the middle. You see layers of neural networks stacked on each other; they receive camera inputs, and they make a final decision. They make a driving decision, basically. Now, this system has been trained on a massive amount of driving data. This is just the lane-keeping task, because we've done that at MIT during our research, basically. So what we see here, this is actually an actual car that is getting driven by this neural network. On the top left, what you see is the camera view. And on the bottom left, what you see is an attention map of this neural network. That means: where is this neural network paying attention when it is taking driving decisions? This neural network has 500,000 parameters. It's a rather small neural network, okay? Now, as you see, there is also a little bit of noise on top of the image. You know, on the camera, you see we put a little bit of noise so that we can disturb it and see how robust the decision making of the system is. Okay. And as we see in a typical kind of neural network, which you see in the middle, all of these dots that are glowing are basically single neurons that are getting activated and deactivated. It is very hard to say what this neural network is doing, right? Because there are a lot of them, and there are 500,000 parameters. How can I actually say what each individual part of this system is doing in this task? But again, in an abstract way, if I bring it back to this image on the bottom left, what you see is the attention map. The lighter regions are the regions the network is paying attention to when it's taking a driving decision. Got it.

SPEAKER_03: And that would be the road, I guess. Exactly.
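[Editor's note: the "attention map" described here is a saliency visualization: you measure how strongly each input pixel influences the network's driving output. One simple way to compute such a map is by finite differences. The sketch below uses a toy linear "driving" model whose weights, image size, and road-region assumption are all made up for illustration; the demo's actual method isn't specified in the episode.]

```python
import numpy as np

def saliency_map(model, image, eps=1e-4):
    """Finite-difference saliency: how much the model's scalar output
    changes when each pixel is nudged. Brighter = more attention."""
    base = model(image)
    sal = np.zeros_like(image)
    for idx in np.ndindex(image.shape):
        bumped = image.copy()
        bumped[idx] += eps
        sal[idx] = abs(model(bumped) - base) / eps
    return sal / (sal.max() + 1e-12)  # normalize to [0, 1]

# Toy "driving" model: the steering output depends only on the bottom
# half of the image, a stand-in for the road region (purely illustrative).
rng = np.random.default_rng(0)
w = np.zeros((8, 8))
w[4:, :] = rng.normal(size=(4, 8))

def model(img):
    return float((w * img).sum())

sal = saliency_map(model, rng.normal(size=(8, 8)))
# The map lights up only where the model actually "looks": the bottom half.
```

Real attention maps over deep networks are usually computed with gradients rather than per-pixel perturbation, but the interpretation is the same as in the demo: light regions are the pixels that drive the decision.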
SPEAKER_01: It has to be the road. Yeah. It has to be the road, but here it is basically outside of the road. In this case, as you see, the attention is kind of outside and is affected by the noise that we put at the input, you know. So that's why it is not that reliable. This is how a typical artificial neural network works. Now, let me change that to a liquid neural network. All we did was switch out the parameter-heavy part of this neural network. We kept the eyes of the network, which is these layers, as you see, convolutional layers, basically. But we replaced the parameter-heavy part of the system with 19 neurons. 19 liquid neurons. Neurons that are modeled after the worm's brain. And then, you know, the synapses, the connectivity, it looks a little bit more scattered. They're kind of recurrent connections. You can see a lot of unstructured connections in this system. But this system has 19 neurons and around 1,000 parameters, as opposed to the previous system that I showed you that had 500,000 parameters. The system became very small.

SPEAKER_03: Yeah, it becomes much smaller. And does that make it more accurate? Or does it make faster decisions? Or both? Or do we just not know?

SPEAKER_01: Both, actually. So now let's look at the bottom left again, the attention map of the system. As you see in the attention map, now the focus is on the road and on the sides of the road. So the system, without any prior, actually figured out how to perform decision-making without being disturbed by anything else. Now, not only is this system very much smaller than a transformer architecture, it can also give you a much more robust representation, very similar to how biological systems perform decision making.

SPEAKER_03: So net-net, a worm is a better driver than a human brain. Yes. Is what you're telling us.
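[Editor's note: the quoted numbers, 19 neurons and roughly 1,000 parameters versus the 500,000-parameter baseline, are consistent with a small, densely recurrent head. A back-of-the-envelope count, where the 32 input features are a hypothetical figure chosen only to make the arithmetic concrete:]

```python
def recurrent_head_params(n_neurons, n_inputs, n_outputs=1):
    """Rough parameter count for a small fully recurrent layer:
    input weights + recurrent weights + biases + output weights."""
    return (n_inputs * n_neurons      # input -> neuron weights
            + n_neurons * n_neurons   # neuron -> neuron (recurrent) weights
            + n_neurons               # per-neuron biases
            + n_neurons * n_outputs)  # neuron -> steering-output weights

print(recurrent_head_params(19, 32))  # 1007, i.e. "around 1,000 parameters"
```

The point of the comparison in the demo is that the convolutional "eyes" are kept, and only the large decision-making head is replaced by this much smaller recurrent core.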
That seems counterintuitive. Aren't human brains better than worm brains? And is this because the silicon these things run on and the cameras aren't able to process fast enough in real time, like a human brain? So actually a worm brain might be a little bit simpler and easier to run on today's silicon. Is that what I should read into this?

SPEAKER_01: I mean, yeah, to some extent. But these are just modeled after how nervous systems perform computation in the brain of the worm. Now we can take those mathematical inspirations and build machine learning systems that are not just mimicking a worm or anything. They're just basically the fundamentals of computation in nervous systems. Now, the reason I told you the worm, in the tree of evolution, is one of our fathers, is the fact that these principles actually scale. That means if nature actually evolved worms into humans, we can take these inspirations from neural computation and even go beyond that. So there's an opportunity to build AI systems powered by how nature designed nervous systems.

SPEAKER_03: Okay, so the worm system is less robust and narrower than humans, but you could scale it up. And if a worm had a billion neurons or a million neurons, I don't know how many it has.

SPEAKER_01: Actually, the worm has 302 neurons. It's a very tiny worm.

SPEAKER_03: Okay, so it's got 302. How many neurons does a human have?

SPEAKER_01: A human has 100 billion neurons.

SPEAKER_03: Got it. Okay, so there's a big gap between those two. But the worms are simpler and easier to define, or easier to emulate, than a human, because humans are much more complex with 100 billion. Yes.
SPEAKER_01: Yes, we can understand the brain of this worm much better than we can understand the brain of a human being. Because we still have a lot of questions. We don't understand mice. We still don't fully understand monkeys. We don't understand the small fruit fly. So that's why we need to start from somewhere. So I wanted to take a step back and, as a computer scientist, see how we can look at the origin of these nervous systems, and where we can find principles that we can at least confirm exist in biology, and then take these systems and build a new type of learning system. Okay. Got it.

SPEAKER_03: Okay.

SPEAKER_01: That's an inspiration, basically.

SPEAKER_03: I understand. Yes. Okay. So this is pretty trippy, but I think I'm following. So let's keep going.

SPEAKER_01: Yes. Yeah. Yes. So since then, we managed to drive cars autonomously with these small nervous systems. We showed that you can fly drones with them, okay? Recently, the United States Air Force actually showed that you can also fly full-blown F-16 jets with them. These types of worm-inspired systems can actually, you know, do a lot more than just, you know, navigation.

SPEAKER_03: So it might not be able to handle an existential crisis, or making a season of The Sopranos, something complex and creative like that. But it might be able to do something incredibly simple and basic, like stay in the middle of this road, you know, dead center, or, you know, keep this drone in the sky, not crashing into something.

SPEAKER_01: Exactly. That's what we thought at the beginning, right? That this was going to be the property of this learning system. But then we started to see that you can also do much more complex tasks, way better than how artificial intelligence systems are performing them. For example, what?
Predictive models for financial markets, predictive models for biological signals. Let's say you want to predict the mortality rate of people in the ICU based on their biomarkers; if you want to do predictive tasks like that, you can see that these models are really good at doing that. In principle, we figured out that this new type of technology is really good at modeling time series data. It could be video data. It could be audio data. It could be text. It could be user behavior. It could be financial time series, medical time series. So the type of models we develop are basically a general-purpose computer. We've applied them and checked over the last seven years, actually. We have seen that these systems are really good at performing these kinds of sequential decision-making processes, right? And that became basically the point where we thought, okay, so now it's time to start making larger and larger systems off of these general-purpose computers, so that we can change the spectrum of how AI is done today. Because today we are working with a base called the transformer architecture, right? We're basically changing that foundation, transformers and GPTs, generative pre-trained transformers, into a new foundation, which is called liquid foundation models, LFMs, basically. So it's a new thing that is coming.

SPEAKER_03: Okay. And a time series, just so people know, when you say time series, it's very simple. It's a series of a similar data point, but over time. So a perfect example would be a stock price over every minute on the stock exchange. Or, as you talked about driving, it would be the steering wheel's alignment, or the speed of the vehicle, over every second or millisecond. That's a time series. And these are particularly good at studying a time series, is what you're saying.
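[Editor's note: Jason's definition maps directly onto how sequence models are trained: the series is sliced into (history window, next value) pairs, and the model learns to predict the next step. A minimal sketch, where the window length and the prices are invented for illustration:]

```python
def to_supervised(series, window=3):
    """Slice a time series into (history window, next value) training pairs."""
    pairs = []
    for t in range(len(series) - window):
        pairs.append((series[t:t + window], series[t + window]))
    return pairs

# e.g. a stock price sampled once per minute
prices = [101.2, 101.5, 101.4, 101.9, 102.3, 102.1]
pairs = to_supervised(prices, window=3)
# first pair: ([101.2, 101.5, 101.4], 101.9)
```

The same framing covers every example named in the conversation: audio samples, video frames, steering angles, biomarkers; only the sampling rate and the meaning of each value change.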
SPEAKER_01: Yes, yes. And also these podcasts, you know, the audio signal that you're hearing is basically a time series. The video that you're seeing is also a time series. So all video data, in some sense, if you think about it, is a time series. You know, audio is a time series. Video is a time series. But then language is a little bit different. Language is also a kind of sequential data, but the time element is different. It's basically just a sequence of words coming after each other. So you could also technically apply liquid neural networks to those kinds of problems as well.

SPEAKER_03: Are you tired of slow A/B testing? I'm sure you are. Do you have any trouble trusting your experiment results? I know I do sometimes. Well, get ready to 10x your experiment velocity with Eppo. That's E-P-P-O. Whether you're a scrappy startup, a tech giant, or anybody in between, their feature management platform will turn your risky launches into clear-cut experiments. Data teams, of course, love Eppo, and so will your product, growth, and machine learning teams. The executives are going to love it too, because they're going to love the results and the discipline that comes from defining really important product experiments and then executing on them really well, because it gives data teams better coordination and faster innovation. Online marketplace Inventa has cut down experiment time by 60%. And ClickUp, the project management company, has cut down analyst time by over 12 hours. And Eppo's cloud-based system is all about empowerment. You get instant access to your experiments from anywhere in the world, boosting flexibility and teamwork.
And you get a system that grows with your needs, easily scaling up to handle more tests as your business grows. With Eppo, the daunting becomes doable. I love that. So your team can rely on the experiment results and make faster decisions. Here's your call to action: experimentation is how generation-defining companies win. Accelerate your experimentation velocity with Eppo. Visit geteppo.com slash twist. And let's get some experiments running. Let's get that product-market fit. And thanks to Eppo for supporting independent media like This Week in Startups and all the startups who are listening. Well done. All complex. Where are you at in terms of this being theory versus execution? So we see ChatGPT-4, we see FSD 12. Where is your company at in terms of, you know, commercializing this? And did this all come out of MIT? I heard you mention MIT earlier. So you went to MIT, you studied this. Yes. And, you know, this jet fighter that, you know, was AI-based, is that your software? Or did they also study this? So explain to us where you're at with this company. And maybe some demos of the product? Yeah.

SPEAKER_01: Yeah, definitely. Definitely. So we started the company exactly maybe one year ago, one year and three days ago, actually. The company has four co-founders, all MIT people. It's myself; it's Mathias Lechner, our CTO, and we actually co-invented the technology together. Then we have Alexander Amini, another PhD student from MIT, and he's a graduate now. And then the director of the Computer Science and Artificial Intelligence Laboratory at MIT, Daniela Rus, is also a co-founder of ours. As a team, we started this company on this new technology, because our lab at MIT was focused on real-world applications of AI, you know; we really wanted to design AI systems that can go into the real world and solve real-world problems.
And that's why we always had our AI systems deployed in society; they were always deployed in an environment doing a task. This could be an autonomous car. This could be manipulation with a robotic arm. This could be any kind of task, humanoid robot control.

SPEAKER_03: Do they refer to that as CSAIL?

SPEAKER_01: CSAIL, yes.

SPEAKER_03: At MIT, the Computer Science and Artificial Intelligence Laboratory, which is known, correct me if I'm wrong, for a lot of the robotics that we see in the world. Absolutely. So the Roomba and some of those projects came out of that, yeah?

SPEAKER_01: 100%, yes.

SPEAKER_03: So a lot of the fingerprints on robotics come out of this.

SPEAKER_01: Yes.

SPEAKER_03: MIT's CSAIL lab, and you were part of that. Exactly. So where are you at in terms of providing this as a product? Is there an API? Are people starting to use this yet? How old is the company? How much have you raised? Tell me a little bit about it, you know, now that we've got the background on the science behind this, and the science is worms. We get it. Yes. Super interesting. Let's talk about the application, and making the startup a reality. Because going from theoretical, spinning something out of a university, and then making it a reality, that's a jump that very few companies are able to make. So explain to me where you're at with that big jump.

SPEAKER_01: Yeah, definitely, definitely. So we started the company last year, on the 30th of March; it's very fresh, it has been 12 months now. We raised a substantial amount of seed money. We first had a seed round of $5 million at a $50 million valuation. And then we actually did a seed two, basically. And that seed two, I think, eventually became $37 million. So overall, we raised about $42 million of seed money at a $300 million valuation.
The reason for raising the money was basically building the superstar team, which is one of the things that we have. Because we're building something completely different from 99% of the companies; every company in the generative AI space is working on top of a technology called transformers, and we are changing that foundation. So you need to have like-minded people from all over the world. I actually gathered them mostly from MIT and Stanford, and some of the students of Yoshua Bengio as well. So we gathered this team of brilliant people. They have all invented new technologies for efficient alternatives to machine learning systems. People that have worked on explainability of AI systems. We have all sorts of capabilities in the team. With the purpose of wrapping this technology of ours, building on top of the core technology, which is liquid neural networks, for enterprise solutions, with a horizontal look at the market. So we are basically going after verticals because, as I told you, it's a generalist system. I can solve financial problems for large banks. I can solve problems in the space of biotech. I can solve problems in the space of autonomy, right? So it's a horizontal play. Now, as a startup, it's always a weird way to actually go after all of them; I want to solve all of them.

SPEAKER_03: But we're talking about the boil-the-ocean problem. So yeah, are you a platform that's going to provide an API to people? Or are you going to go after one of these verticals? I guess that's the question everybody has.
SPEAKER_01: Yes, yes. So we are building an AI infrastructure in which you can train, fine-tune, play around with, and use liquid foundation models. This product is an enterprise-facing product. It comes with a developer package that we give to enterprises. Enterprises can use this technology and actually enjoy its performance. They can see the efficiency of the models. Mostly, you can develop models on the edge. We have, today, language models that run on a Raspberry Pi.

SPEAKER_03: A Raspberry Pi, just so people know, is essentially the smallest computing unit in the open-source hardware community. These Raspberry Pis go for $10, $25. It has a certain amount of power to it. So you're telling me you're going to be running this neural network on a Raspberry Pi, which is like running it on a thumb drive, basically.

SPEAKER_01: Exactly.

SPEAKER_03: People can imagine that. Yeah.

SPEAKER_01: That's one of the beauties of the technology. The models can run on something very, very tiny. They're very energy efficient, you know; they can be small, but they can be very powerful. Now, in terms of how we are going to market, how we are commercializing, and how we are managing to be the AI platform for all the verticals: we have established contracts across the globe with some of the system integrators of the world. In Europe, we have a contract with Capgemini, which is one of the largest system integrators in Europe. In Japan, we are working with Itochu's CTC, which is basically the Accenture of Japan, you know. In the United States, we are signing up with EY, and we're in conversations with Accenture. So the target is that system integrators would take the platform and be able to integrate it into the verticals that they're interested in.
SPEAKER_03: So you don't have to worry about the commercialization of this. You provide it to the people who do the commercialization and license it to them. So this seems incredibly disruptive. If you are able to do this for a fraction of the cost, and a fraction of the hardware, what does this do, if you're successful, to NVIDIA? What does this do to OpenAI? They're putting together billions of dollars, tens of billions of dollars in supercomputers to train these models. You're claiming you're going to be able to do this, because of the worm brain, with a much more efficient process and a fraction of the hardware footprint. So, head to head, what's going to happen to big iron in AI if you're successful? SPEAKER_01: Yeah, definitely. So there are two costs in developing AI systems. One cost is designing the AI systems, and the other cost is the usage of AI systems, right? That usage is inference. On the inference side, as I told you, we can be between 10 and 1,000 times more efficient than the models that are available today. That's basically the energy footprint of the models, okay? On the training side, we can be between 10 and 20 times more efficient than the transformer models. That means if I train, let's say, a 10-billion-parameter liquid model, it's going to cost me, depending on how much information it can process, which we call context length, right? Depending on the context length, it can be between 10 and 20 times more efficient to develop this kind of system. So that means instead of requiring $10 billion to develop GPT-4-quality models, you would need a fraction of that. SPEAKER_03: Yeah, maybe $500 million or something. Or $100 million. A serious fraction. What does that mean for...
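As a back-of-the-envelope check on those numbers (purely illustrative: the $10 billion baseline and the 10-20x factors are the figures used in the conversation, not audited costs):

```python
# Hypothetical arithmetic only: the baseline cost and efficiency factors
# come from the conversation, not from any published accounting.

def training_cost(baseline_cost_usd: float, efficiency_factor: float) -> float:
    """Cost to train a comparable model on an architecture that is
    `efficiency_factor` times more training-efficient than the baseline."""
    return baseline_cost_usd / efficiency_factor

baseline = 10_000_000_000  # the "$10 billion for GPT-4-quality" figure above

print(f"10x more efficient: ${training_cost(baseline, 10):,.0f}")  # $1,000,000,000
print(f"20x more efficient: ${training_cost(baseline, 20):,.0f}")  # $500,000,000
```

The $500 million the host lands on is exactly the 20x case.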
you know, somebody like OpenAI, Microsoft, some of these cloud computing platforms? Are they building all this extra hardware and focused on the wrong problem? The hardware is not the problem; it's the architecture and the framework and the paradigm under which they're building, and they're just building under a much less efficient paradigm. Is that your claim here? SPEAKER_01: I would say, you know, the beauty of the transformer architecture, and what OpenAI and everybody else is after, is the fact that these systems scale really nicely. You can scale them to larger amounts of data and also larger model sizes. What motivates the generative AI community is the fact that the larger you make the systems, the more powerful they become. Now, if you look at where we are today with the state of the art, we have Claude Opus, which is the most powerful model. I expect this model to be on the order of three to five times bigger than GPT-4. That means this model is, I would say, maybe in the range of a 10-trillion-parameter model. SPEAKER_03: Now, Anthropic hasn't released what that model's size is, but it is number one on Hugging Face now with the Elo ratings. SPEAKER_01: It's even number one. 100%. It's the number-one-performing AI system in the world right now. Okay. Now, Anthropic is talking about 10x-ing the size of the models every year going forward. That means we can expect, by the end of next year, a hundred-trillion-parameter transformer model. The reason why they're doing that is because as the models get larger, they become better and better. And maybe we can get to AGI and generalist AI systems by enlarging the architecture. And the focus is just that. There are two companies in the world whose absolute focus, I think, is building AGI: OpenAI and Anthropic right now.
So there are gutsy moves like what we are doing. We are changing the fundamental architecture. We are building new scaling laws on top of this thing. The scaling laws: let's see if we can make liquid neural networks also scale. That means if I have a one-trillion-parameter liquid model, it might actually be as performant as a 20- or 50-trillion-parameter transformer model. The other direction is also true: if I have a 100-trillion-parameter liquid model, it might be better than a 20x-larger transformer-based model. These are basically the kinds of moves that we want to make. SPEAKER_03: I mean, so far, if you're successful, when will Anthropic move over to your platform, do you think? Or are you a competitor to them? SPEAKER_01: Right now we are going into another fundraising, the Series A of Liquid. And I think after this round, we are getting prepared to train very, very large models. After the release of those models, probably by the end of the year, I would say, the community is going to see that there are alternative models that can come in and disrupt the way transformers are dominating, and that can scale the way transformers scale. SPEAKER_03: And what hardware do you use? What platform are you using? SPEAKER_01: Right now we are using NVIDIA GPUs as well. It's very similar. It's just that the amount of GPUs that we consume is about 10 to 20 times less. SPEAKER_03: Got it. 5%, 10% of what they're using.
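The "new scaling laws" claim can be made concrete with a hedged sketch. Assume, purely for illustration, that both architectures follow a Chinchilla-style power law L(N) = a·N^(-alpha) with the same exponent but a lower constant for the hypothetical alternative; you can then ask how much bigger the baseline model must be to match the smaller one. None of these constants are Liquid AI's measured numbers.

```python
# Illustrative scaling-law sketch; the constants a and alpha are invented
# for this example and do not describe any real model family.

def loss(n_params: float, a: float, alpha: float) -> float:
    """Power-law loss curve L(N) = a * N ** (-alpha)."""
    return a * n_params ** (-alpha)

A_BASE, A_ALT, ALPHA = 400.0, 300.0, 0.076  # hypothetical constants

n_small = 1e12  # a 1-trillion-parameter model on the better curve

# Solve A_BASE * n_big**(-ALPHA) == loss(n_small, A_ALT, ALPHA) for n_big.
n_big = (A_BASE / loss(n_small, A_ALT, ALPHA)) ** (1 / ALPHA)

print(f"baseline must be {n_big / n_small:.0f}x larger for equal loss")
```

With these made-up constants the ratio comes out around 44x, in the same ballpark as the speaker's "20 to 50 times" claim: a small vertical shift in a power law translates into a large horizontal one.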
SPEAKER_03: Startups, small businesses, listen up. You want a CRM that neatly organizes all your customer data so that you can avoid missed opportunities and deliver personalized service. Rigid CRMs can't adapt to your fast-growing needs, and that's where Attio comes in. Attio delivers the goods. It's a custom CRM that's flexible and deeply intuitive. Attio is built for the modern company headed into the next era of business. It connects your data sources, adjusts easily to your specific setup, and suits any business approach, whether it's self-serve or sales-driven. Attio automatically enriches all your contacts. Think about that. You might be missing a first name, a last name, an email, an address, all of that stuff. It's going to sync your emails and calendars, enrich those contacts, and give you powerful reports. It's also going to let you quickly build Zapier-style automations, if-this-then-that-type automations. The next generation deserves more than a one-size-fits-all CRM. Join ElevenLabs, Replicate, Modal, and more, and get ready to scale your startup to the next level. Head to attio.com/twist and you'll get 15% off your first year. That's attio.com/twist. And so, talk to me about data, because it does seem like this is the next big shoe to drop: licensing data, balkanization of data. Hey, maybe Reddit is available to Gemini, but not OpenAI. Twitter, or x.com, has now closed up access for people. And the New York Times is in a lawsuit with OpenAI, which obviously trained on their data without permission. How do you see all of this resolving itself? Because people are rightfully saying, hey, I own this data. I have the archive of the New York Times. Or I'm Disney, I own this archive of IP from Star Wars to Marvel. Or I'm an author, and I have these books. How do you see all this shaping up in the coming years? Is the data you have access to going to be the limitation?
Or is it going to be that synthetic data rules the day and you're going to be able to just make your own data to train on? How do you see all this unfolding? SPEAKER_01: Yeah, definitely. I believe, at the end of the day, that data providers should be incentivized to provide their data, and they should know that their data is being used. Basically, you need to have a payment scheme for the people whose data you're using in your systems. SPEAKER_03: What should that be, in your mind? How would that work? Do you have any ideas? SPEAKER_01: We haven't gotten there yet. I think this would be a challenge to think about. But at the moment, what we are trying to do is basically what everybody does: we are purchasing data, right? You're paying for the data that you use. SPEAKER_03: And you believe this is a good idea because it will keep people making data? So journalists, artists, writers, thinkers: you believe this is a fair deal, some sort of licensing arrangement where they get paid some reasonable fee to train your models, or Claude's models, or OpenAI's models, or Google's models? Yeah. SPEAKER_01: 100%. The reason being: take, for example, a content creator on YouTube, right? People come and look at their content and get inspired to build something off of that. AI is basically doing the same thing: it's looking at the data that is available, and it's getting inspired by that data, even if it's not directly a copy of that data, right? And the scheme for how we do it through, let's say, social media channels has to apply in very similar ways here. We can incentivize users of social media, users of AI, and providers of data for AI systems to have this understanding that this is basically...
the same thing; the same kind of scheme can apply here. There might be analogies here. But again, you really have to go after this problem systematically, and it is one of the main issues we were thinking about as we scale our company. SPEAKER_03: It does feel like it's fair, if somebody's put a lot of work into it, that if an AI was built on top of the New York Times corpus, there would have to be permission to do that, because it is something where you could partner with the New York Times, as opposed to OpenAI building this, and build it and monetize it with them. It's their opportunity to create an AI based on the New York Times data, not OpenAI's or Gemini's. Everybody should have the ability to opt into these things. I feel like that's a pretty smart approach that you're taking. How long before people will be able to use your platform and swap out Gemini or swap out Claude or swap out OpenAI for yours? SPEAKER_01: So the first batch of products is already in use with some of the clients. It's a developer package, as I told you, for solving AI problems. This could be, let's say, a predictive task where you have video data from surgical procedures, and at the output you want to predict what phase of the surgery we are in, for example. That's the kind of case study where a developer can take our package and use our system in that kind of real-world application to solve that task. This is already ready, and it's available to some of the enterprises through our system-integrator contracts and directly with some of them. We are already working in the financial sector, in healthcare and biotech, and we have been very active in automotive. Okay, this is already available.
SPEAKER_03: What's your definition of AGI? How do you determine that a system is generally artificially intelligent? You must have heard a million different definitions when you were at MIT, and there's a big debate around it. But what do you think? SPEAKER_01: For AGI, I just want to stick to something that we can actually still understand and talk about. For example: a system that can perform beyond human capabilities, given the same resources. That means, provided that the same resources are given to the human and to the AI system, the AI system is able to perform the task better, or orders of magnitude better, than humans. SPEAKER_03: Got it. So, given the same resources, we both have access to the internet, we both have broadband. Can I beat this system at chess? No. SPEAKER_02: Okay. SPEAKER_03: But at a new game that just came out today, could it beat me? I guess that's the question. SPEAKER_01: Exactly. And AGI can exist in a virtual world as well. As you were mentioning, these are possibilities inside a virtual kind of existence, just existing in an internet system. But in the real world, you also need embodiment. That's why a lot of work is going towards humanoid robots: the OpenAI-backed Figure, and the new work that is going on. At MIT, there are many, many people working on humanoid research, and also on other types of AI systems that you can integrate into society in a safe way. SPEAKER_03: So the point is, there's the virtual stuff, and we know those capabilities are creeping up, like getting an answer to a legal question or making a marketing plan or writing. SPEAKER_02: Exactly.
SPEAKER_03: ...something, you know. And obviously in chess and verticalized games like Go, it's crushing humans. But it's got to be able to translate into the real world. If it's going to be picking strawberries, we're going to need a robotic arm, we're going to need computer vision, but all those things seem to be aligning. We had a company called Root AI, which I think actually had some of its origins at MIT as well, with robotic hands able to pick strawberries in the real world better than a human: faster, picking the right ones, not crushing them, putting them in a box. I think we're kind of there today. We're pretty close to it. SPEAKER_01: For those kinds of applications, yeah. But think about applications of play. I want to have a robotic soccer team or a basketball team. Can we have those kinds of things, right? SPEAKER_03: That's a level of fine motor skill. Probably not. Yes. Yeah. So when do you think we hit AGI, in your definition that it's able to beat a human at any task? Could be basketball, could be cooking. SPEAKER_01: I think the next two to five years are going to be very, very exciting, and we are going to see leaps in the performance of these models as their size grows. I would say we might actually see the first versions of it very soon, maybe after 100 trillion parameters. This is where, in terms of capacity, in terms of number of parameters, we would be roughly equivalent to a human. What is that, two more boosts of 10x? So we have two more boosts: Anthropic training their Claude 4 and Claude 5 models.
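The parameter arithmetic in that answer can be written out explicitly. The roughly one-trillion-parameter starting point and the 10x-per-year growth rate are the conversation's rough guesses, not confirmed figures:

```python
# "Two more boosts of 10x" from a ~1T-parameter model reaches the ~100T
# scale the speaker associates with human-level capacity. All numbers
# here are the speaker's speculation, written out as arithmetic.

def params_after(start_params: float, boosts: int, factor: float = 10.0) -> float:
    """Parameter count after `boosts` multiplicative jumps of size `factor`."""
    return start_params * factor ** boosts

print(f"{params_after(1e12, 2):.0e} parameters")  # 1e+14, i.e. 100 trillion
```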
SPEAKER_03: So somewhere around Claude 5 or ChatGPT 6, something in that range, two more jumps, which might take, you said, two to five years. And we get something that feels smarter than any human on the planet. That was my definition: smarter than any human on the planet, able to beat any human on the planet at any task. Now, robotics might be hard, because you do have some physical fine motor skills; basketball and soccer seem out there. 100%. But to work in a factory, or to cook, maybe it does work pretty quickly. So how do you think about job destruction and societal changes? This is always something that folks in your career, coming out of MIT, debate late at night when you're having drinks, or whatever you're imbibing, whatever the vibes are. When you're off duty, talking with the people who are building this stuff, how do you think about retiring a whole swath of jobs that are arduous and painful, but that also provide meaning and purpose to some degree, or employment generally, for humans? Working in a factory, picking strawberries, writing marketing copy: all this stuff seems to be at risk. So how do you think about job destruction? And what's the back channel on this? Is it coming fast and furious? Or do you think we're going to be able to manage it as a species?
SPEAKER_01: I think we can manage it. Any technology that comes in is going to be disruptive. Think about the evolution of technology in all the things that are in our hands; it changed the types of jobs people have, but it's not going to just replace them, because right now you can use these systems as an assistant. In some sense, I think this AI revolution, this one in particular, is helping us evolve into better versions of ourselves. Every application that generative AI enables today is in the productivity space, right? It's increased productivity. We can do things faster. We can build things faster because of AI. And I feel like this is going to be the trend: we're going to frame AI systems as helping us become better versions of ourselves and get things done faster. For me, the moment I'm dreaming of is when AIs can actually discover new physics, new mathematics, new science. I want to give the AI system Einstein's equations, Maxwell's equations, and the theory of everything that cosmologists are working on, and tell the AI system: hey, continue from here, and go figure out the next thing. Now, if you solve physics, then you can solve the way we build structures, the way we do science. If you solve mathematics, you can solve the economy of the world. If you make progress on the humanitarian sciences, on the conflicts that we have, we might actually have AI helping governments solve conflicts. There might be so many use cases of AI enabling new opportunities for work. But this is how I see AI helping us: as an assistant, as an elevator of the way we live.
SPEAKER_03: Yeah, this is, I think, the most positive spin on it, which is: hey, you might get rid of some arduous jobs, just like we got rid of being a phone operator. People used to have that job. People used to work in the mailroom. I remember when I was starting my career in the 90s, working in the mailroom or being a bike messenger was like a major career. There were many jobs like that you could do, and you got paid really well; bike messengers got paid a sick amount of money in New York to run documents back and forth for law firms from Wall Street to Midtown. And they don't exist anymore, for all intents and purposes. You don't have to run documents, because the fax machine and email changed that forever. But yeah, you're right. What if we could actually solve existential problems, or science problems around clean energy, around farming, around calories, around health? Maybe we just live with massive abundance. And I think that's what people have to keep in mind. There was this short-term look at it on the Jon Stewart show. I don't know if you saw that trending. What was your take on the Jon Stewart take that, oh my God, we're just doing job destruction here? I got a little cameo in there, because I was interviewing Brian from Airbnb and he was talking about how, hey, we're not going to need a bunch of customer support people answering repetitive questions. Which, I don't know if that's a great career or not. There are some people who love being in customer support because they like interacting with people, but maybe it's not a great job long-term.
SPEAKER_01: I think we just get better choices as a species. You would have a choice to interact more with humans, right? Because, let's say, for a customer support job, why would a person be interested in that job? I would say the human aspect of it, right? I like to talk to people. I like to interact with humans. You can do that in the presence of AI, just in a different way. It might actually be less involved than how you're forced to do it today for that kind of human interaction. I would say AI, and intelligence in general, is giving us choice. Choice is an important element of human civilization as well. The way we evolved and became the most powerful species in the world is the fact that we have a lot of choice; choice is integrated into our psyche. And again, as I always say, I'm going back to this: of course AI has downsides and upsides. It's not all green and everything, but I think the right version of AI is going to be extremely useful. What do I mean by the right version of AI? One of the things that is concerning is making today's AI systems larger and larger as black boxes. If you don't understand what a system is doing, then no matter what, you're losing control; you're not going to have much control over that system. Look at the fact that Anthropic is putting something like 20% of their workforce on explainability, right? SPEAKER_03: So explain what this means, for people who don't understand, because this is a topic that I think is super important and underreported: understanding what the machine is doing. It's hard for people to believe that we don't actually understand what the neural networks are doing. So take a minute to explain this to folks.
SPEAKER_01: Yeah, definitely. So let's first define what I mean by explanation. What do I mean when I say I can explain a system? I'll give you an equation that I think most of your audience will be able to relate to: E = mc². That's Einstein's equation, right? May I ask you: do you think this equation is explainable? What does that mean? It means that if I have an object and I know the mass of this object, then the equation gives me the energy equivalent of that mass: the mass times the speed of light squared. Yeah, you can explain this. Yes. You can explain it in full. It's explainable across time; at any given point in time, I can just give you this equation. This is called a physics equation, or a physical model. This is the best type of modeling framework that scientists have ever designed. A physical model is a model that is completely, 100% explainable, and it explains a kind of reality that you can relate to. SPEAKER_02: Right. It's understandable in reality. That's it. Exactly. SPEAKER_01: On the other side of the spectrum, you have statistical models. I said physical models; now we have statistical models. Statistical models do not 100% explain the behavior of a system. They observe data, and from data they infer the construct of the thing being modeled. Take ChatGPT: ChatGPT is a statistical model. SPEAKER_03: Okay. It's guessing the next word. It's figuring out what the next thing in this thread should be. SPEAKER_01: Just by observing data, right? Whereas E = mc² doesn't need data anymore. It's explainable. You just plug in your numbers and it will always give you the answer. SPEAKER_03: But if you were to say "the quick brown fox jumped over the lazy dog," this is something that you would get...
SPEAKER_01: Now it's a probabilistic kind of thing: you have to see what the likely next word is. ChatGPT and systems like that are statistical models. Now scale these statistical models to billions of parameters. This is today's AI systems, right? Today's AI systems are black boxes, because when an input comes in and an output gets generated, we cannot really understand why that output is being generated. There is no explanation for the input-output mapping. SPEAKER_03: No citation to a source, exactly. What's the source material? "Explain your work," or "show your work," is what people tend to do in PhDs, right? In graduate school you have to show your work. How did you come to this conclusion? You can't just solve the math equation; you've got to show us how you solved it, so we get an idea of that. And in these neural networks, people have not been doing that. SPEAKER_01: Exactly. And now we are, almost hopelessly, trying. There is a term called mechanistic interpretability. Mechanistic interpretability tries to point at a part of a gigantic system and say: based on this interaction here, I suspect this part of the system is responsible for, say, the biases in my system. Now, in the middle of the two ends of the spectrum I plotted for you, the statistical models and the physical models, there is a set of models which we call causal models. SPEAKER_03: Okay, causation. SPEAKER_01: Yeah, exactly. That means x implies y. And if x implies y, then what follows? Basically, there is more structure in the way you're designing the learning system.
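The physical-versus-statistical distinction above can be sketched in a few lines of Python (a toy illustration of my own, not anything from Liquid AI): the physical model E = mc² needs no data at all, while a toy bigram "next-word guesser" knows nothing until it has observed text.

```python
from collections import Counter, defaultdict

# Physical model: E = m * c**2. No data required; plug in a mass and
# every step of the computation is explainable.
C = 299_792_458.0  # speed of light in m/s

def rest_energy_joules(mass_kg: float) -> float:
    return mass_kg * C ** 2

print(f"1 g of mass = {rest_energy_joules(0.001):.3e} J")  # ~9e13 joules

# Statistical model: a toy bigram next-word predictor. It only "knows"
# what it has counted in its training data, and its output is a
# frequency-weighted guess, not an explanation.
def train_bigram(corpus: list[str]) -> dict[str, Counter]:
    model: dict[str, Counter] = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model: dict[str, Counter], word: str) -> str:
    # Most frequently observed successor of `word`.
    return model[word].most_common(1)[0][0]

corpus = ["the quick brown fox jumped over the lazy dog",
          "the quick brown fox runs"]
model = train_bigram(corpus)
print(predict_next(model, "quick"))  # -> brown
```

Scaled to billions of parameters and trained on the internet instead of two sentences, the second kind of model is, in the speaker's framing, today's black-box AI system.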
What I understood about liquid neural networks, and I actually proved theorems around this in my PhD thesis, is that liquid neural networks are dynamic causal models. They are one step ahead of the statistical models. That means you can understand, to some extent, the behavior of what goes in and what comes out, and you can explain a bit of the cause and effect of tasks inside the system. Not 100%, but to a really good extent compared to the statistical models. SPEAKER_03: Because they're simpler, they're more basic. SPEAKER_01: Exactly. They are more basic, and the math itself is tractable. The math is something that you, as a technical person, would be able to use to understand the machine. Now, as I was telling you, the mission of Liquid AI is to design AI systems that we can understand and efficiently deploy in our society, because we understand the math behind our systems. It's not like a transformer architecture that I just take and scale; because it scales, it gives rise to very nice capabilities as a black box. We are designing systems that are kind of white boxes, where at every step of the way we have a lot more control over how these AI systems are making decisions. And that creates more safety. Yeah, exactly. Exactly. SPEAKER_03: And this is where I think there are some weird incentives around taking the time to slow down. If you're OpenAI, Anthropic, or Gemini, and you're working on some big project, to slow down and say, hey, we don't want to make this model bigger until we understand it a little better... there are perverse incentives here in capitalism, and in this race to see who can get to AGI first, or who can monetize this first,
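To make "the math itself is tractable" concrete, here is a toy single-neuron sketch in the spirit of the liquid time-constant (LTC) equation from Hasani and colleagues' academic papers, in which an input-dependent gate changes the neuron's effective time constant. This is my own simplified illustration with arbitrary constants, not Liquid AI's implementation:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def ltc_step(x: float, i_in: float, dt: float,
             tau: float = 1.0, a: float = 1.0,
             w: float = 1.0, b: float = 0.0) -> float:
    """One forward-Euler step of  dx/dt = -(1/tau + f) * x + f * a,
    with f = sigmoid(w * i_in + b). Because f depends on the input, the
    effective time constant 1 / (1/tau + f) is "liquid": it shifts as
    the input changes, which is the adaptability discussed earlier."""
    f = sigmoid(w * i_in + b)
    return x + dt * (-(1.0 / tau + f) * x + f * a)

# Drive the neuron with a constant input; the state settles toward the
# input-dependent equilibrium x* = f * a / (1/tau + f), which can be
# read straight off the equation by setting dx/dt = 0.
x = 0.0
for _ in range(1000):
    x = ltc_step(x, i_in=2.0, dt=0.01)

f = sigmoid(2.0)
print(abs(x - f / (1.0 + f)))  # effectively zero: state sits at equilibrium
```

Being an explicit ordinary differential equation is what makes the behavior analyzable: the equilibrium can be derived by hand, which is the kind of white-box tractability the conversation contrasts with large statistical models.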
and get their next version out: OpenAI 5, 6, 7, you know, Claude versions 4, 5, 6, Gemini, whatever. Is there not an incentive to not slow down and not understand it? Why put 20% of your engineers on explainability? Why not put 0% of them on it? Why do explainability when you could just add more servers, get more data, and beat everybody else? That's the perverse issue here, right? The alignment of incentives. SPEAKER_01: But I know what their incentive is. Let's say: what's the market cap of AGI? The market cap of AGI is $600 trillion. SPEAKER_03: Yeah, I mean, it would be the market cap of human existence. SPEAKER_01: It's the world. That's it. It's the world. That's the market. So that's where these companies are heading, you know? If you have AGI, as you said, you can solve the energy problem. And once you solve the energy problem, you are basically the most valuable company on earth. Think about that. If you can solve the economy, if you can solve politics, the structure of governments... this is the thing they're hoping to get, and there's a race to get there. SPEAKER_03: Do you think everybody gets there at the same time? Like AGI feels... no, you don't. You feel some people will get to AGI first? SPEAKER_01: First, yes. Yes. SPEAKER_03: Who? SPEAKER_01: Of course, a lot of people have... I mean, I would say OpenAI and Anthropic would be the first bets, both of them. I don't know which one first, but I think they have a head start and they have a lot of information in-house to get there. I don't know about Google; I don't know where their priorities are. But the two companies that are focused on really scaling AI systems into more and more powerful beings, I would say at this point, are going to be Anthropic and OpenAI.
SPEAKER_03: But they're both taking the approach that you can just use their system to build whatever you want on top of it. It's not open source, but it's available to people who pay for it. So then, if there were the ability to, I don't know, figure out which stocks are going to go up, you might have a thousand developers realize Claude and OpenAI are great at this: I'm going to make the best trader in the world and go trade stocks. SPEAKER_01: That's true, but that's why the release of those kinds of huge models is, by itself, a huge challenge. I would say today we haven't seen those kinds of systems yet. But for the systems that are coming in the next two years, as I was saying, even the release of those systems to the public has to be a rollout. It has to be trial and error. We really have to see what the reaction is. Internally, these systems get massively tested, you know? It's not like they just get it today and then tomorrow they enable everybody to get access to it. You need to do a lot of testing of the system, to see the capabilities, how they come about. SPEAKER_03: Do you think that's why there was that chaos at OpenAI? Maybe they felt like that next version was getting close? There was that whole speculation: maybe they did feel like this thing was getting, you know, AGI-ish, let's say. Do you think they're close? SPEAKER_01: I don't know. I seriously don't know, because it's all behind closed doors. I really just don't know. The only thing I would say is that it looked more like a conflict over mission, I would say, as opposed to whether AGI has been achieved or not, you know? Yeah. And this is...
SPEAKER_03: The open-source models seem to be doing pretty strong as well. Do you think open source wins the day? Or do you think open source can keep up with the closed systems, or no? SPEAKER_01: The unfortunate answer is no, because a lot of concentrated resources are going into the closed-source models. That's just a simple allocation question. Think about resource allocation: the massive concentration of resources in the hands of OpenAI, NVIDIA itself, Google, Microsoft, all of these companies. That alone slows down open source. Open source is always going to play catch-up, and the gap between closed-source capabilities and open source is actually growing as well. That's another thing. So I don't think the gap is shrinking, unless there's going to be a bigger open-source move. You know, Meta has moved to open-source models, and Apple's doing open-source models. So it's going to be really interesting to see. And think about delayed open source, about how Llama 2 came up: delayed open source is again the same story, right? Llama 2 came out under a commercial license first, right? And then they decided to open-source it. Now, let's see how Llama 3 gets released. So it is important to think about the timing of the open-source moves. It is true that some companies are just putting models out. For example, Mistral also played an amazing role in the open-source community, right? They put a model out, but then immediately they put the more powerful models behind a paywall, right? SPEAKER_03: Got it. SPEAKER_01: So you always have to think about what is happening in the game. And I would say the unfortunate truth is the fact that the closed-source models are really amazing. SPEAKER_03: All right. So I think you're hiring, and things are going pretty well for the firm. If people want to join the firm, where can they learn more and come join the Liquid team?
SPEAKER_01: Yeah, liquid.ai, basically. There's a "get involved" section. Today we have around 25 of the smartest people on earth, I would say. It's a really crazy concentration of people. We have Olympiad medalists on the team. These are people that are solving literally really complex problems for us. We have on the team inventors of very important AI technologies. And we have good philosophers in-house as well; Joscha Bach is also part of our organization. It's a privilege for me to work with such an amazing team of talent, because this has been the power of Liquid AI: we have been very good at bringing key players into the space to build something from scratch, a kind of white-box intelligence, and then hopefully scale it into something that is meaningful. And again, we are obviously hiring as well. SPEAKER_03: Continued success with it. And thanks for sharing this crazy vision. And, you know, be thoughtful about releasing this stuff. Let's not end the world; let's make life awesome for everybody. And we'll see you all next time on This Week in Startups. Bye-bye. Thanks so much.