When your headphones listen to you with Ramses Alcaide of Neurable

Episode Summary

Ramses Alcaide is the co-founder and CEO of Neurable, a company developing non-invasive wearable brain-computer interface technology. As a child, Ramses was fascinated by computers and did computer repair jobs in his neighborhood. A family tragedy, in which his uncle lost his legs in a truck accident, motivated Ramses at a young age to want to use technology to help create more natural prosthetics.

In graduate school, Ramses worked on brain-computer interface research to help people with disabilities communicate. He developed machine learning techniques to interpret brain activity more accurately, which allowed systems to work faster and better for users. After grad school, Ramses started Neurable to bring brain-computer interface technology to everyday consumer devices like headphones. Their main product is Enten, a set of headphones that can track your natural working rhythms and alert you when to take breaks to prevent burnout. The headphones detect your level of focus and fatigue by analyzing brainwaves. Neurable has developed algorithms to extract and boost relevant signals from brain activity, which allows the technology to work effectively from sensors near the ear instead of a full EEG cap.

In the future, Ramses envisions brain-sensing wearables replacing other health-tracking devices by consolidating their functions and enabling new capabilities. Silent speech technology could allow basic device control through thought. Brain-computer interfaces may also unlock the potential for early diagnosis of conditions like Alzheimer's disease. Neurable aims to make the technology accessible to all and empower developers to build new applications. The headphones will be commercially available in late 2022. Ramses is excited to see what solutions people create once everyday brain-computer interfaces become a reality.

Episode Show Notes

Our brain activity can reveal a lot about our physical and mental health. And thanks to Ramses Alcaide and his team at Neurable, we’ll soon be able to glean insights from our brainwaves in our own homes — without ever setting foot in a laboratory...

This week on How I Built This Lab, Ramses recounts the inspiration behind launching a brain computer interface company, and previews his company’s first product: headphones that detect and interpret your brain activity to help you do your best work. Plus, Ramses’ vision of a future with frictionless communication — where you’ll be able to send a text, look up a restaurant or random factoid, and control your playlist entirely with your mind.


This episode was produced by Rommel Wood and edited by John Isabella, with music by Ramtin Arablouei. Our audio engineer was Robert Rodriguez.

You can follow HIBT on Twitter & Instagram, and email us at hibt@id.wondery.com.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Episode Transcript

SPEAKER_06: Here's a little tip for your growing business. Get the new VentureX business card from Capital One and start earning unlimited double miles on every purchase. That's one of the reasons Jennifer Garner has one for her business. That's right. Jennifer Garner is a business owner and the co-founder of Once Upon a Farm, providers of organic snacks and meals loved by little ones and their parents. With unlimited double miles, the more Once Upon a Farm spends, the more miles they earn. Plus, the VentureX business card has no pre-set spending limit, so their purchasing power can adapt to meet their business needs. The card also gets their team access to over 1,300 airport lounges. Just imagine where the VentureX business card from Capital One can take your business. Capital One. What's in your wallet? Terms and conditions apply. Find out more at CapitalOne.com slash VentureX business.

SPEAKER_01: Apple Card is the perfect credit card for every purchase. It has cashback rewards unlike others. You earn unlimited daily cashback on every purchase, receive it daily, and can grow it at 4.15% annual percentage yield when you open a high-yield savings account. Apply for Apple Card in the Wallet app on iPhone and start earning and growing your daily cash with savings today. Apple Card is subject to credit approval. Savings is available to Apple Card owners subject to eligibility requirements. Savings accounts provided by Goldman Sachs Bank USA, member FDIC. Terms apply.

SPEAKER_06: Football season is back, and Whole Foods Market has everything you need to host a successful watch party or tailgate on game day. In the meat department, there's animal welfare certified marinated chicken wings, organic chicken sausages, hot dogs, and more. And you can grab football-ready sides in a flash. Everything from mac and cheese and potato salad to sushi.
I love picking up Whole Foods guacamole, which if you haven't eaten it, you're about to get your mind blown because it's actually amazing. By the way, catering from Whole Foods makes tailgates a breeze. Explore the menu at shop.wfm.com. Save 20% from September 20th through October 17th with promo code FALLCATERING, all one word. Don't sleep on the build-your-own-taco bar. It's always a winner. Terms apply. Elevate game day at Whole Foods Market.

This episode is brought to you by State Farm. If you're a small business owner, it isn't just your business. It's your life. Whatever your business might be, you want someone who understands. And that's where State Farm Small Business Insurance comes in. State Farm agents are small business owners too, and know what it takes to help you personalize your policies for your small business needs. Like a good neighbor, State Farm is there. Talk to your local agent today.

Hello and welcome to How I Built This Lab. I'm Guy Raz. So right now, it's the beginning of the show and you're probably listening pretty actively. At least, I hope you are. But the reality is that sometimes our attention drifts. So what if right before you reach that point, your headphones or your earbuds triggered a small audible tone that basically told you you're starting to lose focus? Well, that's where today's guest comes in. Ramses Alcaide is the co-founder and CEO of the non-invasive wearable tech company, Neurable. Neurable's main product is Enten. It's a set of headphones that adapt to your natural working rhythms to help prevent burnout. Neurable actually began in 2015 when Ramses was getting his PhD in neuroscience at the University of Michigan, but his interest in technology goes back much further.

SPEAKER_02: I came from Mexico. I was born in Mexico and I came to the United States when I was five. And I remember that I was so into computers. My parents bought me one that was basically a computer that was a throwaway computer by some large company.
And I remember taking it apart and being just so fascinated by it that I wanted to fix other people's computers. And so I would post up signs in my neighborhood where I would basically do like computer repair for people for like $20 an hour. And that was great money back when I was a kid, like, especially when you're like anywhere between six, seven, like eight years old, right? And I remember I'd show up and they'd open the door and they're like, oh, it's so cute. You brought your son with you. And my dad's like, no, I don't know anything about computers. Like he's going to fix whatever your problem is.

SPEAKER_06: So from an early age, you had this knack for computers and clearly your passion for tech continued. And from what I understand when you were a kid, your family experienced a tragedy that really sparked your interest in robotics. Can you tell me a little bit more about that?

SPEAKER_02: Yeah, I mean, really the idea of Neurable and kind of the concept of what I've dedicated my life to started when I was about eight years old. My uncle got into a trucking accident in Mexico. And he, you know, he lost both his legs and it was a really intense time for the family. We brought him from Mexico to the United States to get his prosthetics made. And my uncle's kind of a genius. You know, he's been an inventor all his life. He's always been a hard worker and seeing him through that struggle and seeing how unnatural the prosthetic systems he started using were, that's what really motivated me into how do I leverage this curiosity that I have with electronics and computers to try to make something more natural for him.

SPEAKER_06: Like in your head, you were thinking one day I'm going to make something that's going to let him control his prosthetics. Exactly. I read that you got an undergraduate degree in electrical and electronics engineering. And it seems like you could have taken a path towards robotics, right? Especially inspired by what happened to your uncle.
Was that a path that you were potentially pursuing?

SPEAKER_02: Yeah, definitely. So when I was studying electrical engineering, I worked with the prosthetics teams at the University of Washington. And then, you know, I was like, I really wanted to bring the brain into these prosthetic limbs. So I went to grad school and I started working with brain computer interfaces more. And that's when I worked with people with ALS and children with cerebral palsy. And I was like, wow, this is just a whole other level. I mean, as terrible as it is, for example, my uncle having to go through this experience and having prosthetics that don't work with him naturally, what if you can't even move your eyes? Right? Because that's just a whole different level of need. And so that's really what pivoted me from something that I thought I was going to dedicate my life to, robotics, to something much broader.

SPEAKER_06: All right, let's talk about brain computer interfaces. Because I think a lot of people, when they hear that term, they think of EEGs, right? Correct. EEG, electroencephalography. Nodes that are attached to people's heads, you know, that track brain waves. But what were you trying to solve with this research and this concept?

SPEAKER_02: Yeah, definitely. So essentially, the work that I did as my graduate student work is I was working with children who had severe cerebral palsy. And the issue there is that we were not able to essentially give them these tests that they needed to be allowed to get physical rehabilitation. And the reason for that is because they couldn't communicate, at least not in the traditional means like talking or pointing. And so we used brain computer interfaces to solve that problem. And the blocker there was that, you know, you have this eight-year-old kid, he just got 20 minutes worth of setup with goop and gel in his hair to make the technology work. There would be about 10 minutes worth of, like, calibrating the system.
And then it would take sometimes between one to five minutes to even get a response from them. And so the work that I did was essentially machine learning classification so that we could interpret their brain activity at a much higher level of accuracy. And what that enabled us to do is reduce the response time from one to five minutes to anywhere between 30 seconds and a minute, which is enormous for an eight-year-old kid because, you know, having them sit there for five minutes to answer a yes or no question is crazy.

SPEAKER_06: All right. Help me understand what the technology is that you're talking about. You said it would take, you know, five to 10 minutes of setup. And what was it that you were doing differently?

SPEAKER_02: Yeah. So the reason the setup takes so long is because getting brain data is incredibly difficult. There's a lot of noise in the environment. Even your blinking, you know, talking, the electromagnetic noise, even the lights impact the ability to collect this data, because the sensors are so sensitive and brain signals are so small. And so we essentially developed an artificial intelligence that was able to use brain data that we previously collected and then also brain data that was coming in, and essentially increase the signal-to-noise ratio in order to be able to do classifications of what a person intended to do. In this case, the child was selecting a multiple choice question at a greater fidelity. And when you can do that at greater fidelity, you don't need to repeat the question numerous times. And through less repetition, you know, it gave people a better user experience. And so being able to make that brain-computer interface work in a seamless way really unblocks a lot of its use cases.

SPEAKER_06: All right. So how much data can brain waves tell us?
Like in theory, could you look at a person's EEG reading and know, you know, for example, how they'd answer a simple yes or no question, or even that they're thinking about, like, pushing a button or moving an object?

SPEAKER_02: Yeah, I mean, there's a lot that you can take from brain data. So for example, just using EEG, you can identify a person's focus. You can identify, you know, for example, measures of stress, whether, you know, for example, they're going through a stroke, epilepsy, sleep responses, like REM sleep. So there's a lot that you can do, because your brain is the central hub for everything. But the main issue is, in order to tap into those types of applications, you usually need to have a giant gel cap system with lots of setup. Right. So, you know, imagine like you're an eight-year-old kid and we set you up with a system and there's, like, gel running down your face and you don't want to be there, but, you know, your future is depending on it, right? Or imagine we try to bring this to the real world. Like, no one's going to wear that, right? No one's going to want to wear a swimmer's cap with gel in their hair. And so how do we unlock all these incredible value propositions? We created IP at the University of Michigan that helped us increase the signal-to-noise ratio of those brain waves so that we could actually bring it to everyday devices instead of having them be trapped in the laboratory setting. Even if we could just unlock what we already have in the lab, it'd be a major step and it would accelerate so many fields. And so that's what the company was focused on. Essentially, how do we create an everyday brain-computer interface? How do we unlock the brain for billions of people?

SPEAKER_06: It makes sense, because the brain just sends out electric signals to the rest of our body.
And so if you could somehow harness or capture those signals, maybe you could make them work in ways that we haven't been able to make them work yet.

SPEAKER_02: Exactly. And even just the stuff that we were doing in the laboratory, for example, no one has an EEG device at home, right? But what if instead you just put on your Apple AirPods to go take a call? And then every single time you do that, we're tracking your brain just a little bit more, to be able to tell you, hey, actually, you know what? You're trending toward Alzheimer's. You know, now you're getting older. We're starting to see this cognitive decline. This is when you should go seek out a doctor, not 10 years into the disease when it's, you know, already too late, right? And so how do we unlock all these incredible value propositions of brain-computer interfaces to the masses? That's really the main work that we do at Neurable.

SPEAKER_06: We're going to take a short break, but when we come back in just a moment, more from Ramses about bringing brain interface technology to the masses. Stay with us. You're listening to How I Built This Lab.

SPEAKER_05: Angie has made it easier than ever to connect with skilled professionals to get all your home projects done well. Whether it's routine maintenance, an emergency repair or a dream project, Angie lets you browse homeowner reviews, compare quotes from multiple local pros and even book a service instantly. So the next time you have a home project, just Angie that and start getting the most out of your home. Download the free Angie mobile app today or visit angie.com. That's A-N-G-I dot com.

SPEAKER_00: Wedding season is in full bloom, and if you've been wanting a straighter smile, look no further than Byte. Byte offers clear teeth aligners that help you transform your smile from the comfort of your home or wherever you'll be this time of year. Forget the endless trips to the dentist.
Byte's clear aligners are doctor-directed and delivered straight to your doorstep. Just take an impression mold of your mouth, preview your 3D smile and order your all-day or at-night aligners. Byte also knows that wedding season is expensive enough as it is. Their aligners cost thousands less than braces. It's time to let your smile shine. Get started on your smile journey by visiting byte.com and use code WONDERY at checkout to get your at-home impression kit for only $14.95. That's B-Y-T-E dot com, code WONDERY, to get over 80% off your impression kit.

SPEAKER_06: Welcome back to How I Built This Lab. I'm Guy Raz. My guest today is Ramses Alcaide, who launched his company, Neurable, back in 2015 to develop brain computer interface technology. All right, so you basically, while you're a student, launch what eventually becomes Neurable, right? Starting out just looking at how you could use brainwaves and patterns to help people and now it's evolved into wearable devices. But essentially it's about the technology. It's about building up a way that you could really measure what's happening inside of our bodies in an accurate way by measuring brainwaves.

SPEAKER_02: Exactly. And the hypothesis there was if we just build a reliable brain computer interface system, which is what our core technology enabled us to do, it doesn't mean it's going to scale, because it has to bring people value, right? And so that's why we ended up going toward focus and helping individuals essentially prevent, you know, burnout from occurring. And then on top of that, with some of our groups, we actually do it for safety. So preventing injuries due to fatigue. For example, the Air Force, you know, really big problem, right? So it can save billions of dollars. And then that enables us to have a really strong, concrete step one for us to build a business and enable large amounts of these systems to go out.
And then from there, really open it up to others to help solve some of these other problems as well.

SPEAKER_06: All right. So basically we're trying to solve this challenge, this problem, which is how do you gather data from brainwaves in an easier way, right? Like for example, I've got an Apple Watch. And so I wake up and it can give me some data about my sleep. It'll tell me my heart rate, blood oxygen. So I mean, you know, given what we can already gather from just our heart rate, right, which is quite a bit of data, beyond all the things we talked about, what other things potentially could we learn about our health or, you know, our general state from these devices, from these brainwaves?

SPEAKER_02: Yeah. You know, what's really interesting is that brain-detecting devices are actually the ultimate wearable. A lot of the devices that you wear right now, for example, accelerometers for movement or for heart rate, they can either be picked up through brain data or they originally come from the brain and those are just secondary sources of signals, right? So for example, you can actually pick up Parkinson's responses using the Apple Watch. The issue is that by the time you pick it up at the hands or through walking metrics, your brain's already been dealing with it for the past 10 years. And so with the brain, you can actually pick up a lot of those things earlier. So there's two parts. One is that brain-based wearables are going to replace all the other wearables that you have. That's step one. And so all that data and all that value that we're seeing with existing wearables are going to be all consolidated into one device. But then two, there's certain things that you can only pick up from the brain. You know, for example, traumatic brain injury information, tracking ALS, right? Seizure detection.
There's so many other things that you can only do with the brain that, you know, not only are you taking care of your previous wearables, but now you're adding a whole plethora of medical use cases that have already been tested out in scientific literature, but now are able to be used at scale.

SPEAKER_06: All right. So let's talk about these headphones, the Enten headphones that your team has been working on and is getting ready to release later this year. What are you able to track using, you know, putting these headphones on people now? What can you actually find out about?

SPEAKER_02: Yeah. So there's kind of like three areas where we use the technology right now. On the first end is, for example, understanding an individual's focus over time, when they're fatiguing, when they should be taking a break in order to maintain it. Your brain is kind of like your body with dehydration. In the case of the body, you should be drinking water throughout the day; likewise, you should be taking breaks throughout the day too. Even though you may not feel thirsty or you feel tired, you should be doing that in order to maintain a high level of hygiene for your own work and life balance. And so that's kind of the first area. The second one is in control. So we have the ability to, for example, use brain activity to do very minimal controls on the hardware as well too. So changing music tracks, playing and pausing music. And then on the third end is there's so many incredible biomarkers that can be picked up using this type of technology, like I said, tracking Alzheimer's or cognitive decline or, you know, other types of biometrics. So essentially just like how the Apple Watch started out as a system for tracking your movement, now it can actually pick up like heart arrhythmias. And so all of that medical landscape and biomarkers are also available through these brain-computer interfaces.
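As an aside for the technically curious: the break-recommendation idea Ramses describes — watch a focus score over time and suggest a break when a drop is sustained rather than momentary — can be sketched in a few lines. This is purely an illustrative toy; Neurable's actual algorithms are not public, and every name, threshold, and number here is invented for the example.

```python
# Toy sketch of break-recommendation logic: track a rolling "focus score"
# (0 = distracted, 1 = focused) and flag a break only when the score stays
# below a threshold for a sustained window, so a momentary dip is ignored.
# All thresholds and window sizes are made up for illustration.
from collections import deque

def should_recommend_break(scores, threshold=0.4, window=5):
    """Return True if the last `window` scores are all below `threshold`,
    i.e. the loss of focus is sustained rather than a one-off dip."""
    if len(scores) < window:
        return False
    recent = list(scores)[-window:]
    return all(s < threshold for s in recent)

# Simulated stream of per-minute focus scores for one work session.
stream = [0.9, 0.8, 0.85, 0.7, 0.5, 0.35, 0.3, 0.32, 0.28, 0.31]
history = deque(maxlen=60)  # keep at most the last hour of scores
alerts = []
for minute, score in enumerate(stream):
    history.append(score)
    if should_recommend_break(history):
        alerts.append(minute)  # minute 9 is the first sustained drop
```

Note the design choice: requiring the whole window below threshold is what distinguishes "zoning out" from a brief glance at a passing cat, exactly the distinction Ramses draws later in the interview.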
SPEAKER_06: But how do headphones, how do sensors around your ear, capture information as accurate as those other sensors?

SPEAKER_02: Yeah, I guess to answer that, we have to break it down into two steps. First is, like, how do we capture those signals from headphones? Well, the brain is very conductive, right? So for example, one of the brain signals we look at is called P300. We don't really have to get into the details of it, but essentially that comes from an area of your brain called the parietal lobe. It's around the back of your head. And even though that signal comes from around the back of your head, the signal is such a strong response that it actually goes all over your brain. But the farther it goes from the signal source, the smaller that signal becomes, so it becomes harder to read. So we know that these signals go across the head, but you lose the ability to record them easily. And so that's where our AI comes in. It picks up these signals, even though they don't come from the most perfect location; they come from the areas that headphones are at. And from there, we're able to boost those signals to a level that makes them usable for different applications.

SPEAKER_06: So presumably when people have access to initially the headphones, there'll be an interface, either on their watch or on a webpage or their smartphone, where they can see this data. Correct, yes. And initially, what kind of data will they have access to?

SPEAKER_02: Yeah, you know, it really depends on the audience. So for most individuals buying our tech, they're just going to be able to see their focus scores whenever they do work and, you know, suggestions on how to improve their focus over time, reduce fatigue, create more balance in their life. But there is a whole bunch of raw data and filtered data that is more granular that's going to be available to scientists or people who want to dig in deeper.
And from that, that's where you can pick up a lot more detail as to, like, you know, sleep measures or, you know, epilepsy, et cetera.

SPEAKER_06: And so the idea is you would wear the headphones all day and you would just go about your day?

SPEAKER_02: Yeah, you would wear the headphones whenever you're working. So like right now I'm wearing headphones. So I just do, you know, a couple hours with the work, wear the headphones, and then they would notify me when I should be taking a break in a way that's optimal for my mental health. So like what kind of notification? It would just be an audible notification. You know, we recommend you to take a break and you can ignore it if you want to. Sometimes, you know, you're in the middle of something, you need a few more minutes, that's fine. And then you would take a break and it can tell you how long you should be taking a break. And then when you come back, it's actually really surprising, especially in our user testing, like people don't realize how much they needed it and they come back and then they just crush their work. And they're like, wow, like I didn't think I needed this break, but I came back so energized and, like, focused. And it's having people have that feeling consistently and end their day feeling really motivated about the work that they did so that they don't feel guilty about, you know, taking some time off when they get home. That's essentially the feeling we're able to give people with the technology.

SPEAKER_06: So it essentially is recommending when you need to take a break, right, when you're working. But what is that based on? What kind of data is it getting to suggest that you need to stop working?

SPEAKER_02: Yeah, so we did a large study with a professor from Harvard who's now a professor at Worcester Polytechnic Institute. And he had created this incredible method for identifying an individual's focus.
And so what we were able to do, and we did close to a thousand individuals' worth of data collection on this, is we tracked people just doing their work, leveraging this algorithm that we co-developed with this professor from Harvard. And what we saw is that there were very clear breaks in the data where we could see that if we were to recommend them a break at this time point, it enabled them to have three to four hours of higher productivity afterwards, feel more refreshed, and reduce the errors in the work that they do, instead of just burning themselves out and feeling really, you know, bad about their day, essentially.

SPEAKER_06: Okay, so it can basically sense certain brain waves that would suggest, according to this research, that it's time to take a break. What else can it do while you're working and you've got the headphones on?

SPEAKER_02: One of the best parts about the technology is you can actually build things on top of it. So a few of the things that people are building, for example, the first one is with Audible, right? You can listen to an audio book and then when you get distracted, it'll actually automatically pause it, which is really great because this happens to me a lot when I read or when I'm doing audio books. I have ADHD, so I'll start reading or listening and then I'll just start zoning off into something else. And then I'll get to the end of the page of the book or the end of that paragraph in the audio book and realize I haven't paid attention at all and I've got to rewind it. So there's ways to, for example, automatically pause things, and there's a whole bunch of other things that you can do with the technology too, but essentially it's that reliable.

SPEAKER_06: On that note, because this happens to me too, right? When I'm reading a book and I just zone out or even listening to something, how does it know that you're zoning out?

SPEAKER_02: Yeah, it's the same algorithm for focus and fatigue.
Essentially we notice that there's a sharp decrease in an individual's focus, and once we identify that, if it remains consistent, then we know that the person's not focused on the task that they were on previously. And so then we're able to just pause it, or in the case of reading, for example, I have it output a small audible tone and it just reminds me, hey, that's right, I should be reading right now. I just got distracted by a random cat that walked by and now I should get back to reading.

SPEAKER_06: We're going to take another quick break, but when we come back, more from Ramses on just how close we might be to mind control technology. Stick around. I'm Guy Raz and you're listening to How I Built This Lab.

SPEAKER_04: With Audible, you can enjoy all your audio entertainment in one app. You can take your favorite stories with you wherever you go, even to bed. Drift into a peaceful slumber with the Audible Original Bedtime Stories series hosted by familiar voices like Emmy winner Brian Cox, Keke Palmer, Philippa Soo and many more. As a member, you can choose one title a month to keep from the entire catalog, including the latest bestsellers and new releases. You'll also get full access to a growing selection of included audiobooks, Audible Originals and more. New members can try Audible free for 30 days. Visit audible.com slash wondery pod or text wondery pod to 500-500 to try Audible free for 30 days. Audible dot com slash wondery pod.

SPEAKER_03: Hi, I'm Lindsey Graham, the host of Wondery's podcast, American Scandal. We bring to life some of the biggest controversies in U.S. history, events that have shaped who we are as a country and that continue to define the American experience. American Scandal tells marquee stories about American politics, like the break-in at the Watergate Hotel, an event that led to the downfall of a president and raised questions about the future of American democracy.
We go behind the scenes looking at devastating financial crimes like the fraud committed at Enron and Bernie Madoff's Ponzi scheme. And we tell stories of complicated public figures like Edward Snowden and Monica Lewinsky, people who found themselves thrust into the spotlight and who spurred debates about the future of the country. Follow American Scandal wherever you get your podcasts. You can listen ad-free on the Amazon Music or Wondery app.

SPEAKER_06: Welcome back to How I Built This Lab. I'm Guy Raz and my guest today is Ramses Alcaide, co-founder and CEO of Neurable. There are a bunch of companies working in this space, just like with AI and other categories. Many of those companies are massively well funded, hundreds of millions of dollars in funding. You raised 20 million, which is impressive, but tiny compared to some of these other companies, as I'm sure you know. How do you prevent those companies from just supercharging this technology with their cash and leaving smaller companies behind?

SPEAKER_02: Yeah, I think a lot of those companies aren't really our competitors. That's why. Neuralink is not our competitor. We're non-invasive. We don't require surgery. Most of those companies that do invasive need a ton more money. It makes sense. The way I would think of Neuralink is kind of like a hip replacement. No one really wants a hip replacement until you actually need one. Even if they end up being 10 times better than your actual hips in the future, you probably want to be able to keep your hips for as long as possible regardless. When it comes to the competitors more in our space, which is non-invasive, we're probably one of, if not the best funded company in the world. That's enabled us to continue staying ahead. At first, it was because of our technology. We're at least five to 10 years ahead of anybody else on the market. But now it's because of our business partnerships that we have that we continue to grow.
SPEAKER_06: Let's talk about the products that you're developing. Right now, it's going to be a commercial product, right? Headphones initially, is that right?

SPEAKER_02: Yeah, headphones and then very soon after, earbuds.

SPEAKER_06: Let's talk about the headphones for a moment. When will they be available?

SPEAKER_02: They're going to be available essentially Q4 this year or Q1 next year. And then from there, we'll continue improving things and working with the customers that we have to help us further evolve the product and help scientists unlock more capabilities. It's really going to be a community effort to build these types of devices.

SPEAKER_06: Okay. So, presumably the idea is to build on this and to be able to create other features down the road. But lots of companies are working on ways for our thoughts to be turned into actions. Tell me about that side of the research that you're doing, because presumably down the road, you'd want to do something around that.

SPEAKER_02: So you know, some of this research, it's called silent vocalization research, has been around. And the main issue is that when you collect this type of data, you usually need sensors around the mouth and the face. And they pick up primarily muscle activations that are completely invisible to the user. And so we're essentially doing a very similar methodology, but inside the wearable devices that are going to be compatible with our platform. And with that, we're going to be able to open up more capabilities for individuals to interact with their technology. And the best part is if you have any headphones or earbuds that are Neurable-compatible, you'll just be getting all of this through software updates essentially. But you know, at first we're going to introduce a system for very simple forms of control, just launching Spotify, play, pause, next track.
And then as we collect more data, with individuals' consent, obviously, we're going to be able to further expand those capabilities. But the main goal is: how do we essentially create more of a seamless interaction system between humans and their computers? SPEAKER_06: Also, on your web page, in this video, it shows a young woman walking and thinking in her mind of a message she's sending to her dad, like a text message. She's like, hey, dad, meet me at home for dinner. And she's just thinking it. How far away are we from that reality? I mean, are we 10 years away, five years away, a year away, 20 years away? SPEAKER_02: So that thinking-to-text perspective, at least the very first versions of it, are the ones that I was discussing earlier, where it's very simple, like play, pause, next track. And then as we collect more data and build out the system, it'll enable more things similar to the video that you saw. SPEAKER_06: So just to clarify, you're saying that in a short period of time, you'll be able to wear these headphones and just think in your brain, without saying it, play music or play this song, and it'll play it? SPEAKER_02: It's a little bit more nuanced than that, but essentially, yes. And like I said, we already have working systems of that in the lab. And so really, everything in that video is wholly within the scope of what we're building. So it's not like a vision video of 100 years from now. At least V1 is going to be available within the next two years. SPEAKER_06: So just for me to understand, I mean, when I'm thinking of the word play, is my brain making a specific brainwave for that specific word? SPEAKER_02: So think about, for example, you have an athlete, and the athlete thinks about throwing a football. Well, when they think about that, even though they're not throwing the football, the area of the brain that is associated with throwing a football is activating, and those muscles are activating. It's just not happening at a visible level. 
You don't see the football player moving his arm. Just thinking about it activates those areas. And so whenever you think of a word like play, pause, or next, the same thing happens in the brain. And so we're able to pick up those signatures even though you're doing it silently. You're not saying anything out loud, but we're able to pick up those signals and then use that, essentially, as a source for how we control devices. SPEAKER_06: So just let me go back and sort of reverse engineer this. You know when you walk down the street and you see somebody talking really loud, and you now know that they're on a phone call, like they're wearing earbuds and they're on a phone call? But even 10 years ago, even five years ago, that was still jarring. You'd be like, wait, what are they doing? And then, oh, they're on a phone call. And now it's normal, totally normal, to see people walking down the street just talking to themselves. But if you took a human from the 1950s and dropped them in, you know, some modern-day city, and they just saw people talking to themselves, they wouldn't understand what's going on. In 10 years from now, are we likely to see a version of that, except in silence? People maybe having conversations with other people through their, you know, their device, their earbuds or whatever it might be, but just in silence? SPEAKER_02: I mean, I wouldn't say 10 years from now, right? I think it's going to take longer. But what I would say is that, you know, within the next 10 years, people are going to be communicating with their technology silently. You know, I think that communicating via voice is still going to be a more efficient method of communication with somebody, at least in the near term. But when it comes to, for example, let's say you're having a conversation with somebody and you get a notification, right? 
Being able to push it out of the way or reply to it real quick in a way that doesn't break the conversation would be very valuable, right? Or let's say, for example, you were talking to somebody about a really great place that you went to eat at, but you forgot the name, right? Having it pull up that information in a way that doesn't disrupt things, without the whole, oh, hold on, let me pull out my phone and, you know, just wait a second while I figure everything out. So I think that we're going to be communicating with our technology seamlessly and invisibly, and that enables us to also free up some of our cognitive load so that we can continue to have these more engaged and connected conversations in the way that we traditionally do. SPEAKER_06: I mean, the idea presumably is to initially sell the headphones, but tell me more about the broader vision, certainly around, you know, making this a profitable business. SPEAKER_02: Yeah, definitely. You know, at least for us, the number one step is: how do we unlock the full potential of brain-computer interfaces for the world, right? And so the first step is we work with different companies, OEMs, including some of the largest in the world, and we help them release neuro-powered products. The first one is going to be a pair of headphones. Eventually it's going to be earbuds, AR glasses, helmets. We also work with a few groups that build helmets, for example, for pilots or for individuals who are in high-risk environments. And so imagine being able to, at least as step one, track their mental health and track their fatigue to prevent accidents that could happen. And then longer term, enabling them to control their technology much more easily. And then even longer term, being able to make sure that key health markers can be caught ahead of time so that they're able to get care earlier and we're able to accelerate a lot of research. 
SPEAKER_06: So, the idea long term is to have not just consumer products, but enterprise products. SPEAKER_02: Exactly. Yeah. And we work across the realm. A lot of people think that what we're doing is building headphones, but the reality of it is our technology can scale across any type of head-worn device. And so we partner with different head-worn device companies, and we help make their devices compatible with our platform. And then that enables them to get access to this portfolio of use cases that can help their, you know, employees, that can help students, that can be used for medical applications, et cetera. SPEAKER_06: Ramses, I'm not sure what science fiction book it was. Somebody listening will remember. But there's at least one book about how, in the future, it'll be possible to read our thoughts. Now, obviously, we're talking about science fiction today, but if we think about where this technology is going, you can take that leap of faith and imagine that within my lifetime and yours, we might get to a place where our thoughts could be read. And it's amazing if we can achieve that as humans, but it's also really scary. Like our brains, the stuff between our ears, that's one of the last private places left. You know, everyone's got a camera, there are drones everywhere. You know, we talk on cell phones. The only place where we can really be private is in our heads. And that might go away. SPEAKER_02: Yeah, I guess I'm a little bit less worried about that. You know, the main reason for that is just because when you're talking about non-invasive, so non-surgical, measures, you know, it's one of these things where we're just so far away from that level of detail that I'm less worried about it. And at the end of the day, you can just take it off, right? You can just take off your headphones or your earbuds. Where that really becomes more scary is invasive. 
Like with invasive, we can get to that type of future, I agree. But at the same time, you know, and this, I was actually at a panel with the DOD where they were asking us questions about, hey, should we be worried about people leveraging brain-computer interfaces, at least the invasive kind, for all this kind of scary stuff, you know, like controlling jets or something. And at the end of the day, it's like, one, everyone's more focused on how do we help that person with ALS communicate. Two, it's going to be way easier to fly a jet the way you fly it now for the next 100 years. So let's not really worry about that right now. Let's just help that person with ALS. Let's help them at least reliably say yes, no, and, you know, that they love somebody. And let's get through that breakthrough. You know, I mean, obviously in the far, far future, anything is possible, but I'm more of an optimist when it comes to where we're headed. I think we already have good enough ways to destroy one another. We don't need to do it in a more complicated and difficult way. SPEAKER_06: You know, I just saw the film Oppenheimer, like probably millions of other people, and you just see the challenges that they were dealing with and, you know, just how quantum physics developed from nothing into this incredibly powerful discipline in the course of a lifetime, half a lifetime. What is something that you're trying to figure out that kind of keeps you up at night but that you're really excited about? SPEAKER_02: Yeah, I mean, for me, there are two parts to that. One is what are some of the challenges, and the other is what are some of the excitements? 
So the challenges are, essentially, what we're doing right now is trying to validate as many of our assumptions as possible, trying to build products that are really sticky with customers and that add a lot of value to them, so that when the product comes out, you know, it is going to be successful. This is kind of like when the iPhone first came out. You know, we didn't really know how much impact it would have until it came out, right? For example, Uber wouldn't have existed without the iPhone. We wouldn't have GPS in a person's pocket, right? So there are going to be so many solutions that people are going to start building with this that we don't even know about yet. Once you have brain-computer interfaces as part of your everyday life, I'm really excited to see what others create from what we're building. SPEAKER_06: Ramses, thanks so much. SPEAKER_02: Yeah, thank you. It was an absolute pleasure. It's a pleasure meeting you as well. Yeah, nice meeting you. Good luck. SPEAKER_06: Thank you. That's Ramses Alcaide, co-founder and CEO of Neurable. Hey, thanks so much for listening to How I Built This Lab. Please make sure to follow the show wherever you listen on any podcast app. Usually there's just a follow button right at the top so you don't miss any new episodes, and it is entirely free. If you want to contact our team, our email address is hibt at id.wondery.com. This episode of How I Built This Lab was produced by Ramell Wood and edited by John Isabella, with music by Ramtin Arablouei. Our production team at How I Built This includes Neva Grant, Casey Herman, JC Howard, Carrie Thompson, Alex Chung, Elaine Coates, Chris Masini, Carla Estevez, and Sam Paulson. I'm Guy Raz, and you've been listening to How I Built This Lab. Hey, Prime members. You can listen to How I Built This early and ad-free on Amazon Music. Download the Amazon Music app today, or you can listen early and ad-free with Wondery Plus in Apple Podcasts. 
If you want to show your support for our show, be sure to get your How I Built This merch and gear at wonderyshop.com. Before you go, tell us about yourself by completing a short survey at wondery.com slash survey. Hey, it's Guy here. And while we're on a little break, I want to tell you about a recent episode of How I Built This Lab that we released. It's about the company TerraCycle and how they're working to make recycling and waste reduction more accessible. The founder, Tom Szaky, originally launched TerraCycle as a worm poop fertilizer company. He did this from his college dorm room. Basically, the worms would eat trash and then turn it into plant fertilizer. Now, his company has since pivoted from that, and they recycle everything from shampoo bottles and makeup containers to snack wrappers and even cigarette butts. And in the episode, you'll hear Tom talk about his new initiative to develop packaging that is actually reusable, in hopes of phasing out single-use products entirely and making recycling and TerraCycle obsolete. You can hear this episode by following How I Built This and scrolling back a little bit to the episode, Making Garbage Useful with Tom Szaky of TerraCycle, or by searching TerraCycle, that's T-E-R-R-A-C-Y-C-L-E, wherever you listen to podcasts.