SPEAKER_01: TED Audio Collective. You're listening to TED Talks Daily. I'm your host, Elise Hu. It turns out that the things human babies can master, like picking up tiny blocks, are giant challenges for robots. In his 2023 talk from TEDxMarin, the roboticist Ken Goldberg takes us through how advances in AI and deep learning have led to big strides in training robots to do even the most precise tasks, like untying tangled cables. I don't know about you, but if robots could untangle my necklaces, I'd line up for that for sure. Here is the talk after the break.
SPEAKER_00: I have a feeling most people in this room would like to have a robot at home. It'd be nice to be able to do the chores and take care of things. Where are these robots? What's taking so long? I mean, we have our tricorders, and we have satellites, we have laser beams, but where are the robots? I mean, okay, wait, we do have some robots in our home, but... not really doing anything that exciting, okay. Now, I've been doing research at UC Berkeley for 30 years with my students on robots. And in the next 10 minutes, I'm going to try to explain the gap between fiction and reality. In the field, there's something that explains this that we call Moravec's paradox.
And that is: what's easy for robots, like picking up a large, heavy object, is hard for humans. But what's easy for humans, like picking up some blocks and stacking them, well, it turns out that is very hard for robots. And this is a persistent problem. So the ability to grasp arbitrary objects is a grand challenge for my field. Now, by the way, I was a very klutzy kid. I would drop things. Anytime someone would throw me a ball, I would drop it. I was the last kid to get picked for a basketball team. I'm still pretty klutzy, actually. But I have spent my entire career studying how to make robots less clumsy.
Now let's start with the hardware. So the hand, it's a lot like our hand, and it has a lot of motors, a lot of tendons and cables, so it's unfortunately not very reliable. It's also very heavy and very expensive. So I'm in favor of very simple hands. This one has just two fingers. It's known as a parallel-jaw gripper. So it's very simple, it's lightweight and reliable, and it's very inexpensive. Now, actually, in industry, there's an even simpler robot gripper, and that's the suction cup. And that only makes a single point of contact. So again, simplicity is very helpful in our field.
Now, let's talk about the software. And this is where it gets really, really difficult, because of a fundamental issue, which is uncertainty. There's uncertainty in the control, there's uncertainty in the perception, and there's uncertainty in the physics. Now, what do I mean by the control? Well, if you look at a robot's gripper trying to do something, there's a lot of uncertainty in the cables and the mechanisms that causes very small errors, and these can accumulate and make it very difficult to manipulate things. Now, in terms of the sensors, yes, robots have very high-resolution cameras, just like we do. And that allows them to take images of scenes in traffic or in a retirement center or in a warehouse or in an operating room. But these don't give you the three-dimensional structure of what's going on. So, recently, there was a new development called LiDAR. And this is a new class of cameras that use light beams to build up a three-dimensional model of the environment.
And these are fairly effective. They really were a breakthrough in our field, but they're not perfect. So if the objects have anything that's shiny or transparent, well, then the light acts in unpredictable ways, and you end up with noise and holes in the images. So these aren't really the silver bullet. And there's one other form of sensor out there now called a tactile sensor. And these are very interesting. They use cameras to actually image the surfaces as a robot makes contact. But these are still in their infancy. Now, the last issue is the physics. We take a bottle on a table, and we just push it, and the robot's pushing it in exactly the same way each time.
But the bottle ends up in a very different place each time. And why is that? Well, it's because it depends on the microscopic surface topography underneath the bottle as it slides. For example, if you put a grain of sand under there, it would react very differently than if there weren't a grain of sand. And we can't see if there's a grain of sand, because it's under the bottle. It turns out that we can predict the motion of an asteroid a million miles away far better than we can predict the motion of an object as it's being grasped by a robot. Now, let me give you an example. Put yourself here into the position of being a robot. You're trying to clear the table, and your sensors are noisy and imprecise. Your actuators, your cables and motors, are uncertain, so you can't fully control your own gripper.
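The unpredictability of contact physics can be sketched with a toy Monte Carlo model. This is purely an illustration of the idea, not a real physics simulator: a hidden friction parameter stands in for the unseen surface topography (the grain of sand), and the numbers are made up. The commanded push is identical every run, yet the outcomes scatter.

```python
import random

def push_bottle(push_distance, seed):
    # Toy model: the bottle's final position depends on a microscopic
    # friction value the robot cannot observe before pushing.
    rng = random.Random(seed)
    friction = rng.uniform(0.2, 0.8)   # hidden surface variation
    # Same commanded push, different slip: more friction, less travel,
    # and an unpredictable rotation.
    travel = push_distance * (1.0 - friction)
    rotation_deg = friction * rng.uniform(-30.0, 30.0)
    return travel, rotation_deg

# Five identical pushes of 10 cm; the results are scattered.
outcomes = [push_bottle(10.0, seed) for seed in range(5)]
travels = [t for t, _ in outcomes]
spread = max(travels) - min(travels)
```

The spread is nonzero even though every call commanded the same push, which is exactly the situation the robot faces: identical actions, different results.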
And there's uncertainty in the physics, so you really don't know what's going to happen. So it's not surprising that robots are still very clumsy. Now, there's one sweet spot for robots, and that has to do with e-commerce. And this has been growing, it's a huge trend, and during the pandemic, it really jumped up. I think most of us can relate to that. We started ordering things like never before. And this trend is continuing, and the challenge is, to meet the demand, we have to be able to get all these packages delivered in a timely manner. And the challenge is that every package is different. Every order is different. So you might order some nail polish and an electric screwdriver.
And those two objects are going to be somewhere inside one of these giant warehouses. And what needs to be done is someone has to go in, find the nail polish, and then go and find the screwdriver, bring them together, put them into a box, and deliver them to you. So this is extremely difficult and requires grasping. So today, this is almost entirely done by humans. And the humans don't like doing this work. There's a huge amount of turnover. So it's a challenge. And people have tried to put robots into warehouses to do this work. It hasn't turned out all that well. But my students and I, about five years ago, came up with a method using advances in AI and deep learning to have a robot essentially train itself to be able to grasp objects.
And the idea was that the robot would do this in simulation. It was almost as if the robot were dreaming about how to grasp things and learning how to grasp them reliably. This is a system called Dex-Net that is able to reliably pick up objects that we put into these bins in front of the robot. These are objects it's never been trained on, and it's able to pick them up and reliably clear these bins over and over again. So we were very excited about this result, and the students and I went out to form a company, and we now have a company called Ambi Robotics. And what we do is make machines that use the algorithms, the software we developed at Berkeley, to pick up packages. And this is for e-commerce. The packages arrive in large bins, all different shapes and sizes, and they have to be picked up, scanned, and then put into smaller bins, depending on their zip code. We now have 80 of these machines operating across the United States, sorting over a million packages a week.
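The simulate-and-learn loop described above can be illustrated with a deliberately tiny, self-contained sketch. Everything here is hypothetical: a one-line "physics" rule stands in for the grasp simulator, and a threshold sweep stands in for training a deep grasp-quality network. The real system works on 3D object meshes and depth images; the point of the sketch is only the self-supervision pattern, where simulated outcomes supply the training labels.

```python
import random

def simulate_grasp(obj_width, grasp_offset):
    # Crude stand-in for physics: a parallel-jaw grasp succeeds when it
    # lands close to the object's center, relative to the object's width.
    return abs(grasp_offset) < 0.25 * obj_width

def generate_training_data(n, rng):
    # Self-supervision: random objects, random candidate grasps, and the
    # simulated outcome is the label -- no human annotation required.
    data = []
    for _ in range(n):
        width = rng.uniform(1.0, 5.0)
        offset = rng.uniform(-3.0, 3.0)
        data.append((width, offset, simulate_grasp(width, offset)))
    return data

def learn_threshold(data):
    # "Training": sweep for the offset/width ratio that best separates
    # successful from failed grasps (stand-in for fitting a network).
    best_ratio, best_acc = 0.0, 0.0
    for i in range(1, 100):
        ratio = i / 100
        acc = sum((abs(o) < ratio * w) == label
                  for w, o, label in data) / len(data)
        if acc > best_acc:
            best_ratio, best_acc = ratio, acc
    return best_ratio

rng = random.Random(0)
model_ratio = learn_threshold(generate_training_data(2000, rng))
```

After 2,000 self-generated trials the learner recovers the hidden success rule (a ratio near 0.25) without ever being told it, which is the "dreaming in simulation" idea in miniature.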
Now, that's some progress, but it's not exactly the home robot that we've all been waiting for. So I want to give you a little bit of an idea of some of the new research that we're doing to try to make robots more capable in homes. And one particular challenge is being able to manipulate deformable objects: strings in one dimension, sheets in two dimensions, and objects in three dimensions, like fruits and vegetables. So we've been working on a project to untangle knots. And what we do is we take a cable, we put that in front of the robot, and it has to use a camera to look down, analyze the cable, figure out where to grasp it and how to pull it apart to be able to untangle it. And this is a very hard problem, because the cable is much longer than the reach of the robot. So it has to go through and manage the slack as it's working. And I would say this is doing pretty well.
It's gotten up to about 80 percent success at untangling when we give it a tangled cable. The other one is something I think we're also all waiting for: a robot to fold the laundry. Now, roboticists have actually been looking at this for a long time, and there was some research done on this, but the problem is that it's very, very slow. So this was about... three to six folds per hour. Okay? So we decided to revisit this problem and try to have a robot work very fast. So one of the things we did was think about a two-armed robot that could fling the fabric the way we do when we're folding. And then we also used friction, in this case, to drag the fabric to smooth out some wrinkles. And then we borrowed a trick which is known as the two-second fold.
You might have heard of this. It's amazing because... the robot is doing exactly the same thing, though it takes a little bit longer, so we're making some progress there. And the last example is bagging. You all encounter this all the time. You go to a corner store, and you have to put something in a bag. Now, it's easy, again, for humans, but it's actually very, very tricky for robots. Because as humans, you know how to take the bag and how to manipulate it. But for robots, the bag can arrive in many different configurations. It's very hard to tell what's going on
and for the robot to figure out how to open up that bag. So what we did was have the robot train itself: we painted one of these bags with fluorescent paint, and we had fluorescent lights that would turn on and off, and the robot would essentially teach itself how to manipulate these bags. And we've now got it up to the point where we're able to solve this problem about half the time. So it works, but I'm saying we're still not quite there yet. So I want to come back to Moravec's paradox. What's easy for robots is hard for humans. And what's easy for us is still hard for robots. We have incredible capabilities. We're very good at manipulation.
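The fluorescent-paint trick described above is a self-labeling scheme: comparing a frame under normal light with a frame under UV light tells the robot which pixels belong to the painted bag, with no human annotation. A toy, hypothetical sketch with 1-D "images" and a made-up brightness threshold:

```python
def auto_label(normal_img, uv_img, thresh=50):
    # Pixels that brighten markedly when the UV light switches on are
    # fluorescent paint, i.e., the bag; the before/after comparison
    # supplies segmentation labels for free.
    return [1 if uv - n > thresh else 0
            for n, uv in zip(normal_img, uv_img)]

# Tiny 1-D "images": pixel 2 fluoresces under UV, so it is the bag.
normal = [10, 12, 11, 200, 10]
under_uv = [12, 11, 180, 205, 13]
labels = auto_label(normal, under_uv)  # -> [0, 0, 1, 0, 0]
```

Note that the already-bright pixel 3 is not labeled, because it is bright under both lights; only the *change* between the two frames marks the paint.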
But robots still are not. I want to say, I understand. It's been 60 years, and we're still waiting for the robots that the Jetsons had. Why is this difficult? We need robots because we want them to be able to do tasks that we can't do or don't really want to do. But I want you to keep in mind that these robots, they're coming. Just be patient, because we want the robots, but the robots also need us to do the many things that robots still can't do. Thank you.
SPEAKER_01: PRX.