Generative AI: Its Rise and Potential for Society

Episode Summary

Darío Gil, IBM's Senior Vice President and Director of Research, discussed the rise of generative AI and its potential impact on business and society in a conversation with host Malcolm Gladwell. Gil explained that when he joined IBM two decades ago, AI did not have a good reputation in the scientific community. However, advances in deep learning and computing over the past decade have led to major leaps forward. The recent explosion in generative AI marks an inflection point similar to the advent of the internet and web browsers in the 1990s. The technology is now good enough, and easy enough to use, that adoption is rapidly accelerating across many industries.

When asked where we are in the evolution of AI, Gil responded that we are at a catalytic moment: the technology works well and is being democratized, so that many more people can build and leverage AI systems. This will likely lead to surprises as creative people find new applications that even the original creators did not anticipate. The future directions will be largely unpredictable, just as the early internet's full potential was impossible to foresee.

Gil and Gladwell also discussed AI's potential impact on income inequality and access to opportunity. Gil believes AI will be highly democratized in terms of usage, boosting productivity broadly. However, value creation will likely concentrate among those who can represent proprietary data well inside AI models. He advised institutions not to be just AI users but AI value creators, in order to capture a sustainable competitive advantage.

The conversation touched on many real-world examples, from AI's implications for the future of creative fields like screenwriting to transforming medical school curricula. Gladwell emphasized that much of the coming revolution is non-technical: it will require rethinking human arrangements across institutions and societies. Gil agreed that technology progresses within the complex contexts of philosophy, politics, and democracy. Overall, the discussion provided an insightful overview of where AI is headed and the vast challenges and opportunities that lie ahead.

Episode Show Notes

In order to stay competitive in a rapidly changing marketplace, businesses need to adapt to the potential of generative AI. In this special live episode of Smart Talks with IBM, Malcolm Gladwell is joined onstage at iHeartMedia’s studio by Dr. Darío Gil, Senior Vice President and Director of Research at IBM. They chat about the evolution of AI, give examples of practical uses, and discuss how businesses can create value through cutting-edge technology.

Watch the video version of the conversation here: https://www.youtube.com/watch?v=WOwM__St6aU 

Hear more from Darío on generative AI for business: https://www.ibm.com/think/ai-academy

Visit us at: https://www.ibm.com/smarttalks/

This is a paid advertisement from IBM.

See omnystudio.com/listener for privacy information.

Episode Transcript

SPEAKER_01: All right. Welcome, everybody. You guys excited? Here we go.
SPEAKER_02: Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season, we're continuing our conversations with new creators, visionaries who are creatively applying technology in business to drive change, with a focus on the transformative power of artificial intelligence and what it means to leverage AI as a game-changing multiplier for your business. Today's episode is a bit different. I was recently joined on stage by Darío Gil for a conversation in front of a live audience at the iHeartMedia headquarters in Manhattan. Darío is the Senior Vice President and Director of IBM Research, one of the world's largest and most influential corporate research groups. We discussed the rise of generative AI and what it means for business and society. He also explained how organizations that leverage AI to create value will dominate in the near future. Okay, let's get on to the conversation. Hello everyone. Welcome. I'm here with Dr. Darío Gil. And I wanted to say before we get started, this is something I said backstage: I feel very guilty today, because you're inarguably one of the most important figures in AI research in the world, and we have taken you away from your job for a morning. It's like if Oppenheimer's wife in 1944 said, let's go and have a little getaway in the Bahamas. It's that kind of thing. What do you say to your wife? I can't... We have got to work on this thing I can't tell you about. She's like, get me out of Los Alamos. No, so I do feel guilty. We've set back AI research by about four hours here. But I wanted to... You've been with IBM for 20 years?
SPEAKER_00: 20 years, yeah, this summer.
SPEAKER_02: So how old were you when you... Not to give away your age, but how old were you when you started?
SPEAKER_00: I was 28.
SPEAKER_02: Okay. Yeah. So I want to go back to your 28-year-old self now. If I asked 28-year-old Darío: what does the future hold for AI? How quickly will this new technology transform our world, et cetera, et cetera. What would 28-year-old Darío have said?
SPEAKER_00: Well, I think the first thing is that even though AI as a field has been with us for a long time, since the mid-1950s, at that time AI was not a very polite word to say. Meaning, within the scientific community, people didn't use that term. They would have said things like, maybe I do things related to machine learning, or statistical techniques in terms of classifiers and so on. But AI had a mixed reputation. It had gone through different cycles of hype, and also moments of a lot of negativity towards it because of lack of success. So I think that would be the first thing. I would probably have said, AI? What is that? Respectable scientists are not working on AI defined as such. And that really changed over the last 15 years only. I would say with the advent of deep learning over the last decade is when AI reentered the lexicon and became a legitimate thing to work on. So I would say that's the first contrast we would have noticed with 20 years ago.
SPEAKER_02: Yeah. So at what point in your 20-year tenure at IBM would you say you snapped into present, kind of, wow mode?
SPEAKER_00: I would say in the late 2000s, when IBM was working on the Jeopardy! project, and just seeing the demonstrations of what could be done in question answering.
SPEAKER_02: Jeopardy! is literally this crucial moment in the history of AI.
SPEAKER_00: Yeah. You know, there had been a long and wonderful history inside IBM on AI. So for example, in terms of these grand challenges: at the very founding of the field, which is this famous Dartmouth conference that IBM actually sponsored to help create, there was an IBMer there called Nathaniel Rochester, and there were a few others who, right after that, started thinking about demonstrations of this field. And for example, they created the first program to play checkers, to demonstrate that you could do machine learning on that. Obviously, we saw later in the 90s, with chess, a very famous example of that.
SPEAKER_02: That was Deep Blue.
SPEAKER_00: With Deep Blue, yeah, right, and playing against Kasparov. But I think the moment that was really... those other ones felt kind of like brute force, anticipating sort of like moves ahead. But this aspect of dealing with language and question answering felt different. And I think for us internally, and for many others, that was a moment of saying, like, wow, what are the possibilities here? And then soon after that, connected to the advancements in computing and with deep learning, the last decade has just been an all-out front of advancements. And I just continue to be more and more impressed. And the last few years have been remarkable too.
SPEAKER_02: Yeah. I want to ask you three quick conceptual questions before we dig into it, just so I sort of get, we all get, a feel for the shape of AI. Question number one is: where are we in the evolution of this? You know, obviously we're all suddenly aware of it; we're talking about it. Can you give us an analogy for where we are in the kind of likely evolution of this as a technology?
SPEAKER_00: So I think we're at a significant inflection point. It feels like the equivalent of the first browsers, when they appeared and people could imagine the possibilities of the internet, or more than imagine, experience the internet. The internet had been around for quite a few decades; AI has been around for many decades. I think the moment we find ourselves in is that people can touch it. Before, there were AI systems that were behind the scenes, like your search results or translation systems, but people didn't have the experience of, this is what it feels like to interact with this thing. So that's what I mean. I think maybe that analogy of the browser is appropriate, because all of a sudden it's like, whoa, you know, there's this network of machines, and content can be distributed, and everybody can self-publish. And there was a moment that we all remember that. And I think that is what the world has experienced over the last nine months or so. But fundamentally, what is also important is that this is the moment where the ease, the number of people who can build and use AI, has skyrocketed. So over the last decade, you know, technology firms that had large research teams could build AI that worked really well, honestly. But when you went out and said, hey, can everybody use it? Can a data science team in a bank, you know, go and develop these applications? It was more complicated. Some could do it, but the barrier to entry was high. Now it's very different because of foundation models and the implications that that has for the moment.
SPEAKER_02: At the moment when the technology is being democratized.
SPEAKER_00: Being democratized. And frankly, it works better for classes of problems like programming and other things. It's really incredibly impressive what it can do. So the accuracy and the performance of it is much better, and the ease of use and the number of use cases we can pursue is much bigger. So that democratization is a big difference.
SPEAKER_02: When you make an analogy to the first browsers: if we do another one of these time-travel questions, back at the beginning of the first browsers, it's safe to say that many of the potential uses of the internet and such, we hadn't even begun, we couldn't even anticipate. Right. So we're at the point where the future direction is largely unpredictable.
SPEAKER_00: Yeah, I think that is right, because it's such a horizontal technology. The intersection of the horizontal capability, which is about expanding our productivity and tasks that we wouldn't be able to do efficiently without it, has to marry now the use cases that reflect the diversity of human experience and institutional diversity. So as more and more institutions say, you know, I'm focused on agriculture, to be able to improve seeds in these kinds of environments, they'll find their own contexts in which it matters, contexts that the creators of AI did not anticipate at the beginning. So I think the fruit of that will be surprises: like, wow, we didn't even think it could be used for that. And also, clever people will create new business models associated with that, as happened with the internet, of course, as well. And that will be its own source of transformation and change in its own right. So I think all of that is yet to unfold, right? What we're seeing is this catalytic moment of technology that works well enough and can be democratized.
SPEAKER_02: Yeah. My next sort of conceptual question: you know, we can loosely understand or categorize innovations in terms of their impact on the kind of balance of power between haves and have-nots. Some innovations, you know, obviously favor those who already have; they make the rich richer. Some are a rising tide that lifts all boats. And some are biased in the other direction; they close the gap. Is it possible to predict which of those three categories AI might fall into?
SPEAKER_00: It's a great question. You know, a first observation I would make on your first two categories is that both will likely be true: the use of AI will be highly democratized, meaning the number of people who have access to its power to make improvements in terms of efficiency and so on will be fairly universal, and the ones who are able to create AI may be quite concentrated. So if you look at it from the lens of who creates wealth and value over sustained periods of time, particularly, say, in a context like business, I think just being a user of AI technology is an insufficient strategy. And the reason for that is: yes, you will get the immediate productivity boost of just making API calls, and, you know, that will be a new baseline for everybody, but you're not accruing value in terms of representing your data inside the AI in a way that gives you a sustainable competitive advantage. So what I always try to tell people is: don't just be an AI user, be an AI value creator. And I think that will have a lot of consequences in terms of the haves and have-nots, as an example, and that will apply both to institutions and to regions and countries, et cetera.
So I think it would be kind of a mistake, right, to develop strategies that are just about usage.
SPEAKER_02: But to come back to that question for a moment, to give you a specific: suppose I'm an industrial farmer in Iowa with $10 million of equipment, and I'm comparing it to a subsistence farmer, someone in the developing world who's got a cell phone. Over the next five years, whose well-being rises by a greater amount?
SPEAKER_00: Yeah, I mean, it's a good question, but it might be hard to do a one-to-one attribution to just one variable, in this case AI. But again, provided that you have access to a phone, right, and some way to be connected, I do think so. For example, in that context, we work with NASA, as an example, to build geospatial models using some of these new techniques. And I think, for example, of our ability to do flood prediction; I'll tell you why it would be a democratizing force in that context. Before, building a flood model based on satellite imagery was actually so onerous, so complicated and difficult, that you would just target very specific regions. And then, obviously, countries prioritize their own, right? But what we've demonstrated is that you can actually extend that technique to have global coverage. So in that context, I would say it's a force towards democratization: everybody would have access, so long as you have some kind of connectivity.
SPEAKER_02: So that Iowa farmer might have a flood model. The guy in the developing world definitely didn't. And now he's got a shot at getting one.
SPEAKER_00: Yeah, now he has a shot at getting one. So there are aspects of it where, so long as we provide connectivity and access to it, there can be democratizing forces. But I'll give you another example that can be quite concerning, which is language, right? There's so much language in English. And there is this reinforcement loop that happens: the more you concentrate, because it has obvious benefits for global communication and standardization, the more you can enrich base AI models based on that capability. If you have very resource-scarce languages, you tend to develop less powerful AI with those languages and so on. So one has to actually worry about, and focus on, the ability to represent, in that case, a language, as a piece of culture, in the AI as well, such that everybody can benefit from it too. So there are a lot of considerations in terms of equity about the data and the data sets that we accrue, and about what problems we are trying to solve. I mean, you mentioned agriculture, or healthcare and so on. If we only solve problems that are related to marketing, as an example, that will be a less rich world in terms of opportunity than if we incorporate a much broader set of problems.
SPEAKER_02: What do you think are the biggest impediments to the adoption of AI as you think AI ought to be adopted? I mean, what are the sticking points?
SPEAKER_00: Look, in the end, I'm going to give a non-technological answer as a first one. It has to do with workflow. Even if the technology is very capable, the organizational change inside a company to incorporate it into the natural workflow of people, of how we work: it's a lesson we have learned over the last decade, and it's hugely important. So there are a lot of design considerations.
There's a lot of: how do people want to work, how do they work today, and what is the natural entry point for AI? So that's number one. And then the second one, for the broad value-creation aspect of it, is the understanding inside companies of how you have to curate and create data, and combine it with external data, such that you can have powerful AI models that actually fit your need. And that aspect, of what it takes to actually create and curate the data for this modern AI, is still a work in progress. I think part of the problem that happens very often when I talk to institutions is that they say, yeah, yeah, yeah, I'm doing it; I've been doing it for a long time. And the reality is that that answer can sometimes be a bit of a cop-out. It's like, I know you were doing machine learning; you were doing some of these things. But actually, the latest version of AI, what's happened with foundation models, not only is it very new, it's very hard to do. And honestly, if you haven't been assembling very large teams and spending hundreds of millions of dollars on compute and so on, you're probably not doing it. You're doing something else that is in the broad category. And I think the lessons about what it means to make this transition to this new wave are still in the early phases of understanding.
SPEAKER_02: So what would you say... I want to give you a couple of examples of people in real-world positions of responsibility. Imagine them sitting right here. So imagine that I am the president of a small liberal arts college. And I come to you and I say, Darío, I keep hearing about AI. My college is making this much money, if that, every year; my enrollment is declining. I feel like this maybe is an opportunity. What is the opportunity for me? What would you say?
SPEAKER_00: So it's probably in a couple of segments. One has to do with: what are the implications of this technology inside the institution itself, inside the college, and how we operate? Can we improve, for example, efficiency? If you have very low levels of margin to be able to reinvest: you run IT, you run infrastructure, you run many things inside the college. What are the opportunities to increase productivity, or to automate and drive savings, such that you can reinvest that money into the mission of education, as an example? So number one is operational efficiency; that's a big one. I think the second one is, within the context of the college, there are implications for the educational mission on its own. How does the curriculum need to evolve, or not? What are acceptable-use policies for some of this AI? I think we've all read a lot about what can happen in terms of exams and so on, cheating or not cheating, and what are actually the positive elements of it, in terms of how curricula should be developed and professions sustained around that. And then there's a third dimension, which is the outward-oriented element of it, which is prospective students, right? That's, frankly speaking, a big use case that is happening right now, which in the broader industry is called customer care, or client care, or citizen care. In this case it would be education: you know, hey, are you reaching the right students who may apply to the college? How can you create for them, for example, an environment to interact with the college and answer questions, which could be a chatbot or something like that, to learn about it, and personalization.
So I would say there are at least three lenses with which I would give advice, right?
SPEAKER_02: The second one, let's pause on that, because it's really interesting. So I really can't assign an essay anymore, can I? Can I assign an essay? Can I say, write me a research paper and come back to me in three weeks? Can I do that anymore?
SPEAKER_00: I think you can.
SPEAKER_02: How do I do that?
SPEAKER_00: Look, there are two questions around that. I think that one goes and explains the context: like, what is it? Why are we here? Why this class? What is the purpose of this? And one starts by assuming an element of decency in people, that people are there to learn and so on. And you just give a disclaimer: look, I know that one option you have is to just, you know, put in the essay question and click go and get an answer, but that is not why we're here, and that is not the intent of what we're trying to do. So first, I would start with the norms of intent and decency and appeal to those, as step number one. Then, we all know that there will be a distribution: for some people that will come in one ear and go out the other, and they'll do it anyway. And so, for a subset of that, I think the technology is going to evolve in such a way that we will have more and more ability to discern when something has been AI-generated. It won't be perfect, right? But there are some elements you can imagine: inputting the essay, and it says, hey, this is likely to have been generated. And for example, one way you can do that, just to give you an intuition: you could have an essay that students write with pencil and paper at the beginning, so you get a baseline of what their writing is like. And then later, when something is generated, there will be obvious differences in the kind of writing that has been produced.
SPEAKER_02: Everything you're describing makes sense, but in this respect, at least, it seems to greatly complicate the life of the teacher, whereas the other two use cases seem to clarify and simplify the role, right? Suddenly, you know, reaching prospective students sounds like something I can do much more efficiently. I can bring down administration costs. But the teaching thing is tricky.
SPEAKER_00: Well, until we develop the new norms, right? I mean, I know it's an abused analogy, but calculators, we dealt with that too, right? And it was, well, with a calculator, what is the purpose of math? How are we going to do this? And so on.
SPEAKER_02: Can I tell you my dad's calculator story?
SPEAKER_00: Yes, please.
SPEAKER_02: My father was a mathematician; he taught mathematics at the University of Waterloo in Canada. And in the 70s, when people started to get pocket calculators, his students demanded that they be able to use them. And he said no, and it went to the administration, and he lost. So he then changed all of his old exams completely and introduced new exams where there was no calculation. It was all, like, deep think: figure out the problem on a conceptual level and describe it to me. And the students were all deeply unhappy that he had made their lives more complicated. But to your point, the result was probably a better education; he just removed the element that they could game with their pocket calculators. I suppose it's a version of...
SPEAKER_00: I think it's a version of that. And so I think people will develop the equivalent of what your father did. And I think people will say, you know what, if everybody's doing these kinds of things generically, then none of it has any meaning, because all you're doing is pressing buttons, and the intent of this was something else, which was to teach you how to write, or to think. So there may be a variant of how we do all of this. I mean, obviously, one version of that which has happened is: okay, we're all going to sit down and do it with pencil and paper, no computers in the classroom. But there will be other variants of creativity that people will put forth to say, you know what, that's a way to solve that problem too.
SPEAKER_02: But this is interesting, because to stay on this analogy, we're really talking about a profound rethinking, just using a college as an example, a really profound rethinking of the way... There's no part of this college that's unaffected by AI. In one case, I've made everyone's job easier. In another, I'm asking us to really rethink from the ground up what teaching means. In another, I've automated systems that I didn't think of. I mean, it's like...
SPEAKER_00: That's right.
SPEAKER_02: That's a lot to ask of someone who got a PhD in medieval language and literature, you know, 40 years ago.
SPEAKER_00: Yeah. But you know, I'll tell you a positive development that I'm seeing in the sciences around this, which is, you're seeing more and more examples of applying AI technology within the context of what, say, historians do. Where you have archives, you know, you have all these books, and it can actually help you as an assistant around that. And not only with text now, but with diagrams, right? And I've seen it in anthropology too, right? And in archaeology, with examples of engravings and translations and things that can happen. So as you see, in diverse fields, people applying these techniques to advance how to do physics, or how to do chemistry, they inspire each other, right? And they say, you know, how does this actually apply to my area? So as that happens, it becomes less of a chore of, like, my God, how do I have to deal with this? Instead, it's triggered by curiosity. There will be faculty who will say, you know what, let me explore what this means for my area. And they will adapt it to the local context, to the local language and the profession itself. So I see that as a positive vector: it is not all going to feel like homework. It's not all going to feel like, oh my God, this is so overwhelming. Rather, it will be very practical: what works, what have I seen others do that is inspiring, and what am I inspired to do? How is this going to help my career? I think that is going to be an interesting question for those faculty members, for the students, and for professionals.
SPEAKER_02: Sorry, I'm going to stick with this example, because it's really interesting. I'm curious, following up on what you just said: one of the most persistent critiques of academia, but also of many corporate institutions in recent years, has been siloing, right?
Different parts of the organization are going off on their own and not speaking to each other. Is a real potential benefit of AI that it's a simple tool for breaking down those kinds of barriers?
SPEAKER_00: That's an elegant way of saying what I really think. I was actually just having a conversation with a provost very much on this topic, very recently, exactly on that. There's all this appetite, right, to collaborate across disciplines, and there have been a lot of attempts toward that goal: creating interdisciplinary centers, creating dual-degree programs, or dual-appointment programs. But actually, a lot of progress in academia happens by methodology too, right? When some methodology gets adopted... I mean, the most famous example of that is the scientific method. When you have a methodology that gets adopted, it also provides a way to speak to your colleagues across different disciplines. And I think what's happening in AI is linked to that. Within the context of the scientific method, as an example, the methodology by which we do discovery, the role of data, the role of these neural networks, how we actually find proximity of concepts to one another, is fundamentally different from how we've traditionally applied it. So as we see people across more professions applying this methodology, it is also going to give them some element of a common language with each other, right? And in fact, in this very high-dimensional representation of information that is present in neural networks, we may find amazing adjacencies or connections of themes and topics, in ways that the individual practitioners cannot describe, but which will be latent in these large neural networks. We are going to suffer a little bit from causality, from the problem of, hey, what's the root cause of that? Because I think one of the unsatisfying aspects of this methodology is that it may give you answers without good reasons for where the answers came from. And then there will be the traditional process of discovery, of saying: if that is the answer, what are the reasons? So we're going to have to do this sort of hybrid way of understanding the world. But I do think that common layer of AI is a powerful new thing.
SPEAKER_02: Yeah. Well, a couple of random questions that occurred to me as you talked. In the writers' strike that just ended in Hollywood, one of the sticking points was how the studios and writers would treat AI-generated content, right? Would writers get credit if their material was somehow the source for a... but more broadly, did the writers need protections against the use of... I could go on; you know what, you're probably familiar with all of this. Had either side called you in for advice during that? Had the writers called you and said, Darío, what should we do about AI, and how should that be reflected in our contract negotiations? What would you have told them?
SPEAKER_00: The way I think about that is that I would divide it into two pieces. First is what's technically possible, right? And anticipate scenarios: well, you know, what can you do with voice cloning, for example? It is now possible... take dubbing, let's just take that topic, right?
Around the world, there were all these folks who would dub people into other languages. Well, now you can do these incredible renderings, I mean, I don't know if you've seen them, where you match the lips; it's your original voice, but speaking any language that you want, as an example. So basically that has a set of implications, just to give one example. So I would say: create a taxonomy that describes the technical capabilities that we know of today, and the applications to the industry, with examples like, hey, I could film you for five minutes and I could generate two hours of content of you; and then, if you get paid by the hour, obviously I'm not paying you for that other thing. So I would say: technological capability, and then map, with their expertise, the consequences of how it changes the way they work, or the way they interact, or the way they negotiate, and so on. That would be one element of it. And then the other one is a non-technology-related matter, which is an element almost of distributive justice: who deserves what, right? And who has the power to get what? And that's a completely different discussion, which is to say: well, if this is the scenario of what's possible, what do we want, and what are we able to get? And I think that's a different discussion, which is as old as life.
SPEAKER_02: Which one do you do first?
SPEAKER_00: I think it's very helpful to have an understanding of what's possible and how it changes the landscape, as part of a broader discussion, right? And a broader negotiation. Because you also have to see the opportunities. There will be a lot of ground to say: actually, if we can do it in this way, and we can all be that much more efficient in getting this piece of work done, or this filming done, but we have a reasonable agreement about how both sides benefit from it, then that's a win-win for everybody. So I think that will be a golden triangle.
SPEAKER_02: Here's my reading, and I would like you to correct me if I'm wrong, and I'm likely to be wrong. When I looked at that strike, I said: the writers are worried about AI? That seems silly. It should be the studios who are worried about the economic impact of AI. Doesn't AI, in the long run, put the studios out of business long before it puts the writers out of business? I only need the studio because the costs of production are sky-high and overwhelming. Whereas if I have a tool that introduces massive technological efficiencies to the production of movies, then why do we need a studio? Why weren't they the scared ones? Or maybe you need, like, a different kind of studio.
SPEAKER_00: Or a different kind of studio.
SPEAKER_02: A different kind of studio. But in the strike, the frightened ones were the writers, not, you know, the studios. Wasn't that backwards?
SPEAKER_00: I haven't thought about it. It can be. But the implications of it, it goes back to what we were talking about before. The implications, because they're so horizontal, it is right to think about what this does to the studios as well, right? But then, you know, the reason it happens that way is the order of negotiations, or who first got concerned about it and did something about it, right, which in this case was through the strike.
You know, I don't know what equivalent conversations are going on inside the studios, and whether they have a war room asking what this is going to mean for them, right? It doesn't get exercised through a strike, but maybe through a task force inside, you know, the companies, about what they are going to do, right?
SPEAKER_02: Well, to go back to the thing you said: the first thing you do is make a list of what the technological capabilities are. But don't technological capabilities change every... I mean, they do. You're racing ahead so fast. So can you have a contract? I'm sorry for getting into the weeds here, but this is interesting. You can't have a five-year contract if the contract is based on an assessment of technological capabilities in 2023, because by the time we get to 2028, it's totally different, right?
SPEAKER_00: Yeah, but, you know, where I was going is that there are some abstractions around that: like, what can be done with my image, right? If I get the general category right, that my image can be reproduced, generated as content, and so on, then let's talk about the abstract notion of who has rights to that, or whether we both get to benefit from it. If you get that straight, yes, the nature of how the image gets altered or created will change underneath, but the concept will stay the same. And so I think what's important is to get the categories right.
SPEAKER_02: Yeah, yeah. If you had to think about the biggest technological revolutions of the post-war era, the last 75 years, we can all come up with a list. Actually, it's really fun to come up with a list. I was thinking about this earlier; you know, containerized shipping is my favorite. The Green Revolution, the internet. Where is AI on that list?
SPEAKER_00: So I would put it first. In that context that you put forth, since World War II, undoubtedly computing as a category is one of those trajectories that has reshaped our world. And I think within computing, I would say the role that semiconductors have had has been incredibly defining. I would say AI is the second example of that, as a core architecture that is going to have an equivalent level of impact. And then the third leg I would put in that equation will be quantum, quantum information. I like to summarize it by saying that the future of computing is bits, neurons, and qubits: the idea of high-precision computation, the world of neural networks and artificial intelligence, and the world of quantum. And the combination of those things is going to be the defining force of the next hundred years in that category of computing. But it makes the list for sure.
SPEAKER_02: If it's that high up on the list, and this is a total hypothetical: if you were starting over, if you were starting IBM right now, would you say, oh, our AI operations actually should be way bigger? Like, how many thousands of people work for you?
SPEAKER_00: Within the research division, it's about 3,500 scientists.
SPEAKER_02: So in a perfect world, if it's that big a deal, isn't that too small as a group?
SPEAKER_00: Yeah, well, that's just the research division. I mean, across IBM overall there are tens of thousands of people working on that.
SPEAKER_02: But I mean, starting from first principles: we've got a technology that you're ranking with computing, you know, up there in terms of a world-changer.
What I'm basically asking is: are we underinvested in this?
SPEAKER_00: Yeah, so it's a good question. What I would say is that we should segment: how many people do you need for the creation of the technology itself, and what is the right size of research and engineering and compute to do that; and how many people do you need for the application of the technology, to create better products, to deliver services and consulting, and then ultimately to diffuse it through, you know, all spheres of society. And the numbers are very different, and that is no different than anywhere else. Since you were talking in the context of World War II: how many people does it take to create an atomic weapon, as an example? It's a large number. I mean, it wasn't just Los Alamos; there were a lot of people in Oak Ridge. It's a large number, but it wasn't a million people, right? So you can have highly concentrated teams of people who, with enough resources, can do extraordinary scientific and technological achievements. And that is always, by definition, going to be a fraction, like 1%, compared to the total volume that is required to then deal with it.
SPEAKER_02: Yeah. But the application side is almost infinite.
SPEAKER_00: Exactly. So that is where, in the end, the bottleneck really is. With thousands of scientists and engineers, you can create world-class AI, right? So no, you don't need 10,000 to be able to create the large language model or the generative model or something; but you need thousands, and you need a very significant amount of compute and data. You need that. The rest is: okay, I build software, I build databases, or I build a software product that allows you to do inventory management, or I build, you know, a photo editor and so on. Now, that product incorporating the AI, modifying it, expanding it and so on: well, now you're talking about the entire software industry. So now you're talking about millions of people, right, who are required to bring AI into their products. Then you go a step beyond the technology creators in terms of software, and you say: well, okay, now what are the skills to help organizations go and deploy it in the Department of, you know, the Interior, right? And then you say, okay, well, now you need consultants and experts and people to work there to integrate it into the workflow. So now you're talking about many tens of millions of people. So I see it as these concentric circles. But to some degree, for many of these core technology areas, saying, well, I need a team of 100,000 people to create AI, or a new transistor, or a new quantum computer: it's actually a diminishing return, right? In the end, too many people connecting with each other is very difficult.
SPEAKER_02: But on the application side, I was just thinking about it, to go back to our example of that college: just the task of sitting down with a faculty and working with them to reimagine what they do, with this new set of tools in mind, with the understanding that the students coming in are probably going to know more about it than they do. That is a Herculean people problem.
SPEAKER_00: It's a people problem. Yeah. That's why I started there, in terms of the barriers to adoption.
In the context of IBM, as an example, that's why we have a consulting organization, consulting that complements IBM technology. And the IBM consulting organization has over 150,000 employees because of this question, right? Because you have to sit down and say: okay, what problem are you trying to solve? What is the methodology we're going to use? And here are the technology options that we can bring to the table. In the end, adoption across our society will be limited by this part. The technology is going to make it easier and more cost-effective to implement those solutions. But you first have to think about what you want to do, how you're going to do it, and how you're going to bring it into the life of, in this context, a faculty member, or, you know, an administrator, in this college, right?
SPEAKER_02: And that notion about Hollywood I thought was really interesting: that in the Hollywood strike, you have to have a conversation about distributive justice. That's a really hard conversation to have, right? So this brings me to my next point, which is, we were talking backstage: you have two daughters, one in college, one about to go to college.
SPEAKER_00: That's right.
SPEAKER_02: And they're both science-minded. So tell me about the conversations you have with your daughters. You have a unique conversation with your daughters, because your advice to them is influenced by what you do for a living.
SPEAKER_00: Yes, it's true.
SPEAKER_02: So did you warn your daughters away from certain fields? Did you say, whatever you do, don't be, you know...
SPEAKER_00: No, no, that's not my style. For me, I try not to be, you know, preachy about that. So for me, it was just about showing by example the things I love, right, and the things I care about. And then, you know, bringing them to the lab and seeing things, and the natural conversations about things I'm working on or interesting people I meet. So to the extent that they have chosen that, and obviously this has an influence on them, it has been through seeing it, you know, perhaps through my eyes, right, seeing what I do and that I like my profession.
SPEAKER_02: But one of your daughters, you said, is thinking that she wants to be a doctor. And being a doctor in a post-AI world is surely a very different proposition than being a doctor in a pre-AI world. Do you think... have you tried to prepare her for that difference? Have you explained to her what you think will happen to this profession she might enter?
SPEAKER_00: Yeah, I mean, not in an incredible amount of detail, but yes, at the level of understanding what is changing: this information lens with which you can look at the world, and what is possible, and what it can do; what is our role, and what is the role of the technology, and how that shapes things. At that level of abstraction, for sure. But not at the level of, don't be a radiologist, you know, because this is what I want for you.
SPEAKER_02: I was going to say, if you're unhappy with your current job, you could do a podcast called Parenting Tips with Darío, which is just an AI person giving you advice on what your kids should do, based on exactly this. Like: should I be a radiologist, Darío? Tell me. It seems to be a really important question. I'm joking, but let me ask this question in a more serious way.
I don't mean to use your daughter's example, but let's imagine we're giving advice to someone who wants to enter medicine. A really useful conversation to have is: what are the skills that will be most prized in that profession 15 years from now? And are they different from the skills that are prized now? How would you answer that question?
SPEAKER_00: Yeah, I think, for example, this goes back to: how is the scientific method, in this context the practice of medicine, going to change? I think we will see more changes in how we practice the scientific method, as a consequence of what is happening with the world of computing and information, how we represent information, how we represent knowledge, how we extract meaning from knowledge as a method, than we have seen in the last 200 years. So what I would strongly encourage is not, hey, use these tools for doing this or doing that. It is that the curriculum itself, the understanding of how we do problem-solving in the age of data and data representation and so on, needs to be embedded in everybody's curriculum. I would say that applies quite horizontally, actually, but certainly in the context of medicine and the sciences, for sure. And to the extent that that gets ingrained, it will give them a lens such that, no matter what specialty they go into in medicine, they will say: actually, the way I want to tackle improving the quality of care, in addition to all the elements that we have practiced in the field of medicine, is this new lens. Are we representing the data the right way? Do we have the right tools to be able to represent that knowledge? Am I incorporating that with my own knowledge in a way that gives me better outcomes, right? Do I have the rigor of benchmarking, too, and the quality of the results? So that is what needs to be incorporated.
SPEAKER_02: In a perfect world, if I asked you and your team to rewrite the curriculum for American medical schools, how dramatic a revision is that? Are we tinkering with 10% of the curriculum, or are we tinkering with 50% of it?
SPEAKER_00: I think there would be a subset of classes that is about the method, the methodology, what has changed, to give them this lens with which to understand it. And then, within each class, that methodology will represent something that is embedded in it, right? So it will be substantive, but that doesn't mean replacing the specialization and the context and the knowledge of each domain. But I do think everybody should have a basic knowledge of the horizontal, right? What is it? How does it work? What tools do you have? What is the technology? And, you know, what are the dos and don'ts around it? And then in every area you say: that thing that you learned, this is how it applies to anatomy. And this is how it applies to radiology, if you're studying that. Or this is how you apply it in the context of discovery, right, of cell structure, or protein folding, and this is how it works. So that way you'll see a connecting tissue throughout the whole thing.
SPEAKER_02: Yeah. I mean, I would add to that, because I was thinking about this: it's also this incredible opportunity to do what doctors are supposed to do but don't have time to do now, which is, they're so consumed with figuring out what's wrong with you that they have little time to talk about the implications of the diagnosis.
What we really want is, if we can free them of some of the burden of what is actually the quite prosaic question of what's wrong with you, to leave them the hard human thing: should you be scared or hopeful, what do you need to do, let me put this in the context of all the patients I've seen. That conversation, which is the most important one, is the one that seems to me... So if I had to, I would add, if we're reimagining the curriculum of med school, with, by the way, very little time, maybe we have to add two more years to med school...
SPEAKER_00: That's not going to be popular.
SPEAKER_02: But a whole thing about bringing back the human side: you know, now that I can give you 10 more minutes, how do you use those 10 more minutes?
SPEAKER_00: But that reconceptualization that you just did is what we should be doing around this. Because I think the debate as to whether we are going to need doctors or not is actually not a very useful debate. Rather, this other question: how is your time being spent? What problems are you getting stuck on? I mean, I generalize this with the obvious observation that, if you look around at our professions, at our daily lives, we have not run out of problems to solve. So as an example: hey, I'm spending all my time trying to do diagnosis; if I could do that 10 times faster, it would actually allow me to go and take care of the patients and all the next steps of what we have to do about it. That's probably a trade-off that a lot of doctors would take, right? And then you say: well, to what degree does it allow me to do that, so I can do these other things that are critically important for my profession? So when you become less abstract, and we get past the futile conversation of, oh, there are no more jobs, AI is going to take all of it, which is kind of nonsense, you go back to saying: in practice, in your context, for you, what does it mean? How do you work? What can you do differently? Actually, that's a much richer conversation. And very often we find that there's a portion of the work we do where we say: I would rather do less of that; this other part I like a lot. And if it is possible that technology could help us make that trade-off, we'll take it in a heartbeat. Now, poorly implemented technology can also create another problem. You say: hey, this was supposed to solve things for me, but the way it's being implemented is not helping me, right? It's making my life much more miserable, or I've lost connection with how I used to work, et cetera. So that is why design is so important. That is why workflow is also so important in being able to solve these problems. But it begins by going from the intergalactic to the reality of it: of that faculty member in the liberal arts college, or a practitioner of medicine in a hospital, and what it means for them, right?
SPEAKER_02: Yeah. What struck me, Darío, throughout our conversation is how much of this revolution is non-technical. That is to say, you guys are doing the technical thing here, but the revolution is going to require a whole range of people doing things that have nothing to do with software, that have to do with working out new human arrangements. Talking about that, I mean, I keep coming back to the Hollywood strike thing: that you have to have a conversation about our values as creators of movies.
How are we going to divide up the credit and the... That's a conversation about philosophy.
SPEAKER_00: It is. It is. And it's in the grand tradition of why a liberal education is so important, in the broadest possible sense. There's no common conception of the good. That is always a contested dialogue that happens within our society, and technology is going to fit in that context too. So that's why, personally, as a matter of philosophy, I'm not a technological determinist. And I don't like it when colleagues in my profession start saying: well, this is the way the technology is going to be, and by consequence, this is how society is going to be. I'm like, that's a highly contested goal. And if you want to enter into the realm of politics, or other realms, go and stand up on a stool and discuss whether that's what society wants. You will find there is a huge diversity of opinions and perspectives, and that's what makes a democracy, the richness of our society. And in the end, that is going to be the centerpiece of the conversation: what do we want? Who gets what? And so on. And actually, I don't think that's anything negative. That's as it should be, because in the end it's anchored in who we want to be as humans, as friends, families, citizens; we have many overlapping sets of responsibilities. And as a technology creator, my responsibility is not just as a scientist and a technology creator. I'm also a member of a family, I'm a citizen, and I'm many other things that I care about. And I think that sometimes, in the debate, the technological determinists start butting into what is the realm of justice and society and philosophy and democracy, and that's where they get the most uncomfortable, because it's like: I'm just telling you what's possible. And when there's pushback, it's: yeah, but now we're talking about how we live, and how we work, and how much I get paid or not paid. So the technology is important; technology shapes that conversation. But we're going to have the conversation with a different language, as it should be. And technologists need to get accustomed to that. If they want to participate in that world, with its broad consequences, hey, get accustomed to dealing with the complexity of that world: politics, society, institutions, unions, all that stuff. And you can't be whiny about it, like, they're not adopting my technology. That's what it takes to bring technology into the world.
SPEAKER_01: Yeah. Well said.
SPEAKER_02: Thank you, Darío, for this wonderful conversation. Thank you to all of you for coming and listening. Thank you. Darío Gil transformed how I think about the future of AI. He explained to me how huge a leap it was when we went from chess-playing models to language models. And he talked about how we still have a lot of room to grow. That's why it's important that we get things right. The future of AI is impossible to predict, but the technology has so much potential in every industry. Zooming in on an academic or medical setting showed just how close we are to the widespread adoption of AI. Even Hollywood is being forced to figure this out. Institutions of all sorts will have to be at the forefront of integration in order to unlock the full power of AI, thoughtfully and responsibly. Humans have the power and the responsibility to shape the tech for our world. I, for one, am excited to see how things play out. Smart Talks with IBM is produced by Matt Romano, Joey Fishground, David Jha and Jacob Goldstein.
We're edited by Lydia Jean Cott. Our engineers are Jason Gambrell, Sarah Bruguier and Ben Toleday. Theme song by Gramascope. Special thanks to Andy Kelly, Kathy Callahan and the 8Bar and IBM teams, as well as the Pushkin marketing team. Smart Talks with IBM is a production of Pushkin Industries and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts or wherever you listen to podcasts. I'm Malcolm Gladwell. This is a paid advertisement from IBM.