What is an AI anyway? | Mustafa Suleyman

Episode Summary

In his TED 2024 talk, Mustafa Suleyman, CEO of Microsoft AI, challenges the simplistic notion of AI as mere tools, suggesting a deeper, more complex relationship between humans and machines. He introduces a metaphor to better conceptualize AI, proposing that it should be viewed as a new digital species. This perspective is not meant to be taken literally but is intended to help us grasp the profound implications of AI's integration into society.

Suleyman recounts the evolution of AI from a fringe subject to a powerful force capable of outperforming humans in various tasks, such as language translation, disease diagnosis, and even creative endeavors like art and music. He reflects on the rapid advancements in AI, highlighting the exponential growth in computational power and the vast amounts of data these systems can process. These developments have led to AI becoming increasingly autonomous, capable of performing complex tasks without human intervention.

The metaphor of AI as a digital species emphasizes the need for careful consideration of how these technologies are developed and controlled. Suleyman argues that to harness the benefits of AI while mitigating potential risks, we must rethink our approach to these technologies, treating them not just as tools but as entities with the potential to act independently. He stresses the importance of embedding safety and ethical considerations into AI development to ensure that these technologies enhance human well-being and reflect the best aspects of humanity.

Suleyman's vision for AI is both optimistic and cautionary. He envisions AI as a transformative force capable of solving critical global challenges, but also recognizes the potential for unintended consequences if these technologies are not managed wisely. His talk calls for a balanced approach to AI, one that embraces its potential while vigilantly guarding against its risks.

Episode Show Notes

When it comes to artificial intelligence, what are we actually creating? Even those closest to its development are struggling to describe exactly where things are headed, says Microsoft AI CEO Mustafa Suleyman, one of the primary architects of the AI models many of us use today. He offers an honest and compelling new vision for the future of AI, proposing an unignorable metaphor — a new digital species — to focus attention on this extraordinary moment. (Followed by a Q&A with head of TED Chris Anderson)

Episode Transcript

SPEAKER_03: TED Audio Collective. You're listening to TED Talks Daily. I'm your host, Elise Hu. Today, a compelling new way to think about AI from one of the primary architects of the AI tools we're using. In his talk from TED 2024, Microsoft AI CEO Mustafa Suleyman says calling AI just tools doesn't adequately capture what's happening between humans and machines. He lays out a metaphor for all of us to better understand it. Coming up after a short break.

SPEAKER_04: Hi, I'm Ben. I suffer from a condition called writer's block. It strikes when I'm at work. That's why I choose Canva MagicWrite. It works fast, generating text in seconds thanks to AI.

SPEAKER_02: Common side effects include increased productivity, compliments from coworkers, feelings of satisfaction.

SPEAKER_04: Now I can say bye-bye to writer's block.

SPEAKER_02: Ask your boss if Canva MagicWrite is right for you at canva.com designed for work.

SPEAKER_03: Support for this show comes from Factor. I know all of us have really crazy springs. You, like me, probably have a super busy life, and it can be stressful to meal plan or to even have to think about what exactly we're going to eat or our families are going to eat at lunch and dinner. You can eat stress-free this spring with Factor's delicious ready-to-eat meals. They are fresh, never frozen, and every meal is chef-crafted, dietitian-approved, and ready to eat in just two minutes. I love how not fussy this is and how easy it is to clean up. Get chef-prepared meals on the table in two minutes. Head to factormeals.com slash TEDtalksDaily50 and use code TED Talks Daily 5-0 to get 50% off your first box plus 20% off your next box. Pretty good deal. That's code TED Talks Daily 5-0 at factormeals.com slash TED Talks Daily 50 to get 50% off your first box plus 20% off your next box while your subscription is active. Thank you so much for having me. Choose from over 40 themes. Buy all the stocks in a theme as is or customize to better fit your investing goals. All in a few clicks. Schwab Investing Themes is not intended to be investment advice or a recommendation of any stock or investment strategy. Learn more at schwab.com slash thematicinvesting.

SPEAKER_00: I want to tell you what I see coming. I've been lucky enough to be working on AI for almost 15 years now. Back when I started, to describe it as fringe would be an understatement. Researchers would say, no, no, we're only working on machine learning, because working on AI was seen as way too out there. In 2010, just the very mention of the phrase AGI, artificial general intelligence, would get you some seriously strange looks and even a cold shoulder. You're actually building AGI, people would say. Isn't that something out of science fiction? People thought it was 50 years away or 100 years away, if it was even possible at all. Talk of AI was, I guess, kind of embarrassing. People generally thought we were weird. And I guess in some ways, we kind of were.

It wasn't long, though, before AI started beating humans at a whole range of tasks that people previously thought were way out of reach. Understanding images, translating languages, transcribing speech, playing Go and chess, and even diagnosing diseases. People started waking up to the fact that AI was going to have an enormous impact, and they were rightly asking technologists like me some pretty tough questions.
Is it true that AI is going to solve the climate crisis? Will it make personalized education available to everyone? Does it mean we'll all get universal basic income and we won't have to work anymore? Should I be afraid? What does it mean for weapons and war? And of course, will China win? Are we in a race? Are we headed for a mass misinformation apocalypse? All good questions. But it was actually a simpler and much more kind of fundamental question that left me puzzled, one that actually gets to the very heart of my work every day.

One morning over breakfast, my six-year-old nephew, Caspian, was playing with Pi, the AI I created at my last company, Inflection. With a mouthful of scrambled eggs, he looked at me plain in the face and said, "But Mustafa, what is an AI anyway?" He's such a sincere and curious and optimistic little guy. He'd been talking to Pi about how cool it would be if one day in the future he could visit dinosaurs at the zoo, and how he could make infinite amounts of chocolate at home, and why Pi couldn't yet play I Spy. Well, I said, it's a clever piece of software that's read most of the text on the open internet, and it can talk to you about anything you want. Right. So like a person, then.

I was stumped, genuinely left scratching my head. All my boring stock answers came rushing through my mind. No, but AI is just another general-purpose technology, like printing or steam. It'll be a tool that will augment us and make us smarter and more productive. And when it gets better over time, it'll be like an all-knowing oracle that will help us solve grand scientific challenges. You know, all of these responses started to feel, I guess, a little bit defensive and actually better suited to a policy seminar than breakfast with a no-nonsense six-year-old. Why am I hesitating? I thought to myself. You know, let's be honest. My nephew was asking me a simple question that those of us in AI just don't confront often enough. What is it that we are actually creating? What does it mean to make something totally new, fundamentally different to any invention that we have known before?

It is clear that we are at an inflection point in the history of humanity. On our current trajectory, we're headed towards the emergence of something that we are all struggling to describe. And yet, we cannot control what we don't understand. And so the metaphors, the mental models, the names, these all matter if we're to get the most out of AI whilst limiting its potential downsides. As someone who embraces the possibilities of this technology, but who's also always cared deeply about its ethics, we should, I think, be able to easily describe what it is we are building, and that includes the six-year-olds.

So it's in that spirit that I offer up today the following metaphor for helping us to try to grapple with what this moment really is. I think AI should best be understood as something like a new digital species. Now, don't take this too literally, but I predict that we'll come to see them as digital companions, new partners in the journeys of all our lives. Whether you think we're on a 10-, 20- or 30-year path here, this is, in my view, the most accurate and most fundamentally honest way of describing what's actually coming. And above all, it enables everybody to prepare for and shape what comes next.
Now, I totally get this is a strong claim, and I'm going to explain to everyone, as best I can, why I'm making it. But first, let me just try to set the context. From the very first microscopic organisms, life on Earth stretches back billions of years. Over that time, life evolved and diversified. Then a few million years ago, something began to shift. After countless cycles of growth and adaptation, one of life's branches began using tools. And that branch grew into us. We went on to produce a mesmerizing variety of tools. At first slowly, and then with astonishing speed, we went from stone axes and fire to language, writing and eventually industrial technologies. One invention unleashed a thousand more, and in time, we became homo technologicus. Around 80 years ago, another new branch of technology began. With the invention of computers, we quickly jumped from the first mainframes and transistors to today's smartphones and virtual reality headsets. Information, knowledge, communication, computation. In this revolution, creation has exploded like never before.

And now a new wave is upon us: artificial intelligence. These waves of history are clearly speeding up as each one is amplified and accelerated by the last. And if you look back, it's clear that we are in the fastest and most consequential wave ever. The journeys of humanity and technology are now deeply intertwined. In just 18 months, over a billion people have used large language models. We've witnessed one landmark event after another. Just a few years ago, people said that AI would never be creative, and yet AI now feels like an endless river of creativity, making poetry and images and music and video that stretch the imagination. People said it would never be empathetic, and yet today, millions of people enjoy meaningful conversations with AIs, talking about their hopes and dreams and helping them work through difficult emotional challenges. AIs can now drive cars, manage energy grids and even invent new molecules. Just a few years ago, each of these was impossible.

And all of this is turbocharged by spiraling exponentials of data and computation. Last year, Inflection 2.5, our last model, used five billion times more computation than the DeepMind AI that beat the old-school Atari games just over 10 years ago. That's nine orders of magnitude more computation. 10x per year, every year, for almost a decade. Over the same time, the size of these models has grown from first tens of millions of parameters to then billions of parameters, and very soon, tens of trillions of parameters. If someone did nothing but read 24 hours a day for their entire life, they'd consume eight billion words. And of course, that's a lot of words. But today, the most advanced AIs consume more than eight trillion words in a single month of training. And all of this is set to continue. The long arc of technological history is now in an extraordinary new phase.

So what does this mean in practice? Well, just as the internet gave us the browser and the smartphone gave us apps, the cloud-based supercomputer is ushering in a new era of ubiquitous AIs. Everything will soon be represented by a conversational interface, or to put it another way, a personal AI. And these AIs will be infinitely knowledgeable, and soon, they'll be factually accurate and reliable. They'll have near-perfect IQ. They'll also have exceptional EQ. They'll be kind, supportive, empathetic. These elements on their own would be transformational.
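As a quick sanity check on the scaling figures quoted above, here is a minimal back-of-the-envelope sketch in Python. It uses only the round numbers cited in the talk (a five-billion-fold increase in training computation over roughly a decade, eight billion words in a lifetime of nonstop reading, eight trillion words in a month of training); these are the talk's own figures, not independent measurements, and the variable names are illustrative.

```python
import math

# Round figures as quoted in the talk, not independently measured values.
compute_ratio = 5e9            # Inflection 2.5 vs. the Atari-playing DeepMind agent, ~10 years apart
lifetime_words = 8e9           # words one person could read in a lifetime of nonstop reading
monthly_training_words = 8e12  # words a frontier model consumes in one month of training

orders_of_magnitude = math.log10(compute_ratio)   # ~9.7, i.e. roughly "nine orders of magnitude"
years_at_10x_per_year = orders_of_magnitude       # 10x per year adds one order of magnitude per year
lifetimes_per_month = monthly_training_words / lifetime_words

print(f"Compute growth: ~{orders_of_magnitude:.1f} orders of magnitude")
print(f"Equivalent to ~{years_at_10x_per_year:.0f} years of 10x-per-year growth")
print(f"Training data: ~{lifetimes_per_month:,.0f} lifetimes of reading per month")
```

Run as written, this prints roughly 9.7 orders of magnitude, about a decade of tenfold annual growth, and a thousand lifetimes of reading per month of training, which is consistent with the rounded numbers in the talk.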
Just imagine if everybody had a personalized tutor in their pocket and access to low-cost medical advice. A lawyer and a doctor, a business strategist and coach, all in your pocket 24 hours a day. But things really start to change when they develop what I call AQ, their actions quotient. This is their ability to actually get stuff done in the digital and physical world. And before long, it won't just be people that have AIs. Strange as it may sound, every organization, from small business to nonprofit to national government, each will have their own. Every town, building and object will be represented by a unique, interactive persona. And these won't just be mechanistic assistants. They'll be companions, confidants, colleagues, friends and partners as varied and unique as we all are.

At this point, AIs will convincingly imitate humans at most tasks. And we'll feel this at the most intimate of scales: an AI organizing a community get-together for an elderly neighbor, a sympathetic expert helping you make sense of a difficult diagnosis. But we'll also feel it at the largest scales, accelerating scientific discovery, autonomous cars on the roads, drones in the skies. They'll both order the takeout and run the power station. They'll interact with us and, of course, with each other. They'll speak every language, take in every pattern of sensor data, sights, sounds, streams and streams of information far surpassing what any one of us could consume in a thousand lifetimes.

So what is this? What are these AIs? If we are to prioritize safety above all else, to ensure that this new wave always serves and amplifies humanity, then we need to find the right metaphors for what this might become. For years, we in the AI community, and I specifically, have had a tendency to refer to this as just tools. But that doesn't really capture what's actually happening here. AIs are clearly more dynamic, more ambiguous, more integrated and more emergent than mere tools, which are entirely subject to human control. So to contain this wave, to put human agency at its center and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species.

Now, it's just an analogy. It's not a literal description, and it's not perfect. I mean, for a start, they clearly aren't biological in any traditional sense. But just pause for a moment and really think about what they already do. They communicate in our languages. They see what we see. They consume unimaginably large amounts of information. They have memory. They have personality. They have creativity. They can even reason to some extent and formulate rudimentary plans. They can act autonomously if we allow them. And they do all this at levels of sophistication that are far beyond anything that we've ever known from a mere tool. And so saying AI is mainly about the math or the code is like saying we humans are mainly about carbon and water. It's true, but it completely misses the point.

And yes, I get it, this is a super arresting thought, but I honestly think this frame helps sharpen our focus on the critical issues. What are the risks? What are the boundaries that we need to impose? What kind of AI do we want to build or allow to be built? This is a story that's still unfolding. Nothing should be accepted as a given. We all must choose what we create, what AIs we bring into the world or not. These are the questions for all of us alive at this moment.
For me, the benefits of this technology are stunningly obvious, and they inspire my life's work every single day. But quite frankly, they'll speak for themselves. Over the years, I've never shied away from highlighting risks and talking about downsides. Thinking in this way helps us focus on the huge challenges that lie ahead for all of us. But let's be clear. There is no path to progress where we leave technology behind. The prize for all of civilization is immense. We need solutions in health care and education to our climate crisis. And if AI delivers just a fraction of its potential, the next decade is going to be the most productive in human history.

Here's another way to think about it. In the past, unlocking economic growth often came with huge downsides. The economy expanded as people discovered new continents and opened up new frontiers. But they colonized populations at the same time. We built factories, but they were grim and dangerous places to work. We struck oil, but we polluted the planet. Now, because we are still designing and building AI, we have the potential and opportunity to do it better. Radically better. And today, we're not discovering a new continent and plundering its resources. We're building one from scratch. Sometimes people say that data or chips are the 21st century's new oil. But that's totally the wrong image. AI is to the mind what nuclear fusion is to energy. Limitless, abundant, world-changing.

And AI really is different. That means we have to think about it creatively and honestly. We have to push our analogies and our metaphors to the very limits to be able to grapple with what's coming. Because this is not just another invention. AI is itself an infinite inventor. And yes, this is exciting and promising and concerning and intriguing all at once. To be quite honest, it's pretty surreal. But step back, see it on the long view of glacial time, and these really are the most appropriate metaphors that we have today. Since the beginning of life on Earth, we've been evolving, changing and then creating everything around us in our human world today. And AI isn't something outside of this story. In fact, it's the very opposite. It's the whole of everything that we have created, distilled down into something that we can all interact with and benefit from. It's a reflection of humanity across time. And in this sense, it isn't a new species at all.

This is where the metaphors end. Here's what I'll tell Caspian next time he asks. AI isn't separate. AI isn't even, in some senses, new. AI is us. It's all of us. And this is perhaps the most promising and vital thing of all that even a six-year-old can get a sense for. As we build out AI, we can and must reflect all that is good, all that we love, all that is special about humanity. Our empathy, our kindness, our curiosity and our creativity. This, I would argue, is the greatest challenge of the 21st century, but also the most wonderful, inspiring and hopeful opportunity for all of us. Thank you.

SPEAKER_01: Thank you, Mustafa. It's an amazing vision and a super powerful metaphor. You're in an amazing position right now. I mean, you are connected at the hip to the amazing work happening at OpenAI. You're going to have resources made available.
There are reports of these giant new data centers, $100 billion invested and so forth, and a new species can emerge from it. I mean, in your book, as well as painting an incredibly optimistic vision, you were super eloquent on the dangers of AI. And I'm just curious, from the view that you have now, what is it that most keeps you up at night?

SPEAKER_00: I think the great risk is that we get stuck in what I call the pessimism aversion trap. We have to have the courage to confront the potential of dark scenarios in order to get the most out of all the benefits that we see. So the good news is that if you look at the last two or three years, there have been very, very few downsides. It's very hard to say explicitly what harm an LLM has caused. But that doesn't mean that that's what the trajectory is going to be over the next 10 years. So I think if you pay attention to a few specific capabilities, take, for example, autonomy. Autonomy is very obviously a threshold over which we increase risk in our society, and it's something that we should step towards very, very closely. The other would be something like recursive self-improvement. If you allow the model to independently self-improve, update its own code, explore an environment without oversight and without a human in control to change how it operates, that would obviously be more dangerous. But I think that we're still some way away from that. I think it's still a good five to 10 years before we have to really confront that, but it's time to start talking about it now.

SPEAKER_01: A digital species, unlike any biological species, can replicate not in nine months but in nine nanoseconds and produce an indefinite number of copies of itself, all of which have more power than we have in many ways. I mean, the possibility for unintended consequences seems pretty immense. And isn't it true that if a problem happens, it could happen in an hour?

SPEAKER_00: No, that is really not true. I think there's no evidence to suggest that. And I think that's often referred to as the intelligence explosion. And I think it is a theoretical, hypothetical maybe that we're all kind of curious to explore, but there's no evidence that we're anywhere near anything like that. And I think it's very important that we choose our words super carefully, because you're right, that's one of the weaknesses of the species framing, right? We will... design the capability for self-replication into it if people choose to do that. And I would actually argue that we should not. That would be one of the dangerous capabilities that we should step back from. So there's no chance that this will emerge accidentally. I really think that's a very low probability. It will happen if engineers deliberately design those capabilities in and if they don't take enough efforts to deliberately design them out. And so this is the point of being explicit and transparent about trying to introduce safety by design very early on.

SPEAKER_01: So thank you. I mean, your vision of humanity injecting into this new thing the best parts of ourselves, avoiding all those weird, biological, freaky, horrible tendencies that we can have in certain circumstances. I mean, that is a very inspiring vision. And thank you so much for coming here and sharing it at TED. Thank you.

SPEAKER_00: Good luck. Thanks a lot.