The peril (and promise) of AI with Tristan Harris: Part 2

Episode Summary

In the second part of the conversation with Tristan Harris on "How I Built This Lab," the focus shifts to the rapid development of artificial intelligence (AI) and its potential to disrupt societal trust and the very fabric of reality as we know it. Tristan Harris, co-founder of the Center for Humane Technology, delves into the dangers and ethical dilemmas posed by AI, particularly generative AI, which can forge documents, videos, and even personal identities with alarming precision. This advancement, while promising in many respects, also risks undermining the authenticity of information, leading to a breakdown in societal trust and the potential for widespread misinformation.

The episode explores the excitement surrounding AI in the tech community, particularly in the Bay Area, where there is a rush to capitalize on the next big technological breakthrough. Harris, however, argues for a more cautious approach, advocating for changes in incentives to ensure that AI development does not outpace our ability to manage its societal impacts. He draws parallels with the early days of social media, highlighting how legal protections such as Section 230 of the Communications Decency Act inadvertently shielded companies from liability for the harms caused by their platforms. Harris suggests that taking the opposite approach with AI, holding companies liable for potential harms, could encourage a more responsible pace of development.

The conversation also touches on the tangible ways AI is already affecting our lives, such as AI-generated content that is indistinguishable from real human creations. Harris warns that as AI continues to improve, it will become increasingly difficult to discern what is real and what is fabricated, posing significant challenges to our understanding of truth and reality. He advocates for the development of secure and authenticated communication methods to counteract the potential for deception.

Despite the daunting challenges posed by AI, Harris remains hopeful that humanity can navigate this technological transition responsibly. He calls for a collective effort to reimagine our legal and societal frameworks to accommodate the new realities brought about by AI, including new norms around the open sourcing of AI models, and to ensure that the development of AI technologies is aligned with societal well-being rather than unchecked profit motives. Harris's work, including his involvement in shaping a White House executive order on AI, aims to raise awareness and prompt action on these critical issues. He stresses the importance of public engagement and legislative action to establish guardrails for AI development, ensuring that technology serves humanity's best interests. Despite the challenges and uncertainties, Harris's message is one of love and hope for a future where technology enhances rather than undermines the human experience.

Episode Show Notes

What if you could no longer trust the things you see and hear?

Because the signature on a check, the documents or videos presented in court, the footage you see on the news, the calls you receive from your family … They could all be perfectly forged by artificial intelligence.

That’s just one of the risks posed by the rapid development of AI. And that’s why Tristan Harris of the Center for Humane Technology is sounding the alarm.

This week on How I Built This Lab: the second of a two-episode series in which Tristan and Guy discuss how we can upgrade the fundamental legal, technical, and philosophical frameworks of our society to meet the challenge of AI.

To learn more about the Center for Humane Technology, text “AI” to 55444.


This episode was researched and produced by Alex Cheng with music by Ramtin Arablouei.

It was edited by John Isabella. Our audio engineer was Neal Rauch.


You can follow HIBT on X & Instagram, and email us at hibt@id.wondery.com.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Episode Transcript

SPEAKER_03: Wondery Plus subscribers can listen to How I Built This early and ad-free right now. Join Wondery Plus in the Wondery app or on Apple Podcasts.

Today's business travelers are finding that fitting in a little leisure time keeps them recharged and excited on work trips. I know this because whenever I travel for work, I always try and meet up with a friend to catch up, have a great dinner, or hit a museum wherever I am. So if you're traveling for work, go with the card that puts the travel in business travel, the Delta SkyMiles Platinum Business American Express card. If you travel, you know.

TurboTax makes all your moves count. Filing with 100% accuracy and getting your max refund, guaranteed. So whether you started a podcast, side hustled your way to concert tickets, or sold Hollywood memorabilia, switch to TurboTax and make your moves count. See guarantee details at TurboTax.com slash guarantees. Experts only available with TurboTax Live.

Our friends at Corient provide wealth management services centered around you. And you know what? Corient's goal is to exceed your expectations and simplify your life. Corient can help high achievers just like you preserve your wealth and provide for the people, causes, and communities you care about. Corient has extensive knowledge across the full spectrum of planning, investing, lending, and money management. They're one of the largest integrated fee-only U.S. registered investment advisors, and they have deeply experienced teams in 23 strategic locations. Teams that put the collective power of their expertise into building you the custom wealth, investment, and family office solutions that can help you reach your holistic financial goals, no matter how complex they may be. Real wealth requires real solutions. For more information, speak with an advisor today at Corient.com. That's Corient.com.

Hello and welcome to How I Built This Lab. I'm Guy Raz. So what if you could no longer trust the things you see and hear? I'm not talking about conspiracy theories. I'm talking about the breakdown of what we now consider to be hard facts, evidence. What if you couldn't trust the signature on a check, the documents or videos presented in court, the footage you see on the news, the calls you receive from your family? Because they could all be perfectly forged by artificial intelligence. The breakdown of trust in our society... that's just one of the risks that could be headed our way as AI gets smarter and smarter. And that's why my guest today, Tristan Harris, is sounding the alarm about the rapid development of AI.

This episode is part two of my conversation with Tristan. He's the co-founder of the Center for Humane Technology. And if you haven't listened to part one yet, you'll want to go back and listen to that first. In that episode, Tristan talked about how so many of the technological tools we use every day, things like social media and search engines, were designed to grab as much of our attention as possible. And that's had some really damaging effects on our society. We also talked about the exponential development of AI and how it's advancing so quickly that even the people who work on it aren't aware of the full scope of its capabilities. Today, Tristan is back to talk more about how AI is changing our lives, what we need to worry about, and how we can protect ourselves from some of the scary stuff. But not everyone is worried about the dangers of AI, which presents its own challenge.
I'm here in the Bay Area. You are too. And from time to time, I'll go to events, meetups, just to observe what people are talking about around generative AI. And there's a lot of excitement about what's happening in San Francisco, and talk about, you know, this being the next generation, the next big thing. People are saying it's like what it felt like to be here in 2003, 2004 with Web 2.0. So there's a lot of excitement around it and not that much skepticism. And so I wonder, in a world where profit is incentivized, obviously we live in a capitalist system, what would stop somebody from pursuing this at lightning speed? I mean, if they're incentivized by financial rewards to pursue it.

SPEAKER_05: Well, if they're fully incentivized to go as fast as possible and there's no counter-incentive that says you're liable, let's say, for the harms that might show up, then of course you're going to go as fast as possible. And so that's why, in our work, people think that we're criticizing Sam Altman or OpenAI or one company, or we're criticizing AI overall. No, neither of those things are true. What we're criticizing are perverse incentives that lead to bad outcomes, because we are true futurists who want the good future. And we see that to get that future, we have to change the incentives that we're currently operating with. And a good example of this is liability.

So what lesson did we learn from social media? For those who don't know or remember this, in 1996, there was this thing called the Communications Decency Act, in which there's a section famously called Section 230, that basically gave all internet companies an immunity shield: you would not be liable for anything on your online bulletin board, where someone posts hate speech or something like that, or tells people to commit suicide, or smears someone. You would not be liable for any of those harms. And that made sense when the internet was just a bunch of bulletin boards and it was not powered by AI. But we used that immunity shield and applied it to social media companies when they came along later. And so when they go and intentionally addict children, use social comparison, use variable schedule rewards, use social proof and social validation and direct messaging and all that to try to jack up their products, we allowed social media companies to not be liable for any of the downstream harms that we're now living with.

And a correction we could make with AI companies is that instead of incentivizing them to race as fast as possible, what do we want to incentivize? We want to have them move at the pace that we can get this right. Well, "get this right" would mean: what if everybody was liable for the downstream harms that could occur, and they all moved at a slower pace? Not a generically slower pace, but a pace at which we're doing the relevant safety work. And how would we rebalance that equation so that everyone is doing a race to safety versus a race to power and capabilities?

SPEAKER_03: How will people start to see this impact their lives?
I was on Instagram this week, and I got delivered a surfing video. And it was a video of Kelly Slater surfing in beautiful, pristine waters. And he was just weaving in and out of other surfers, maybe 40 surfers. It was an amazing video. And I looked at the comments, and they were uniformly, you know, you'd go down the thread and it was like, wow, this guy's the GOAT. This is amazing. Wow. And then finally, there was one comment, and it was like, guys, this is AI-generated. And I looked at this video, and I'm pretty sure it was AI-generated, you know. That's already happening. Some of it is very good, and it's not even a fraction as good as it's going to be if, as you say, there's this exponential curve, as good as it'll be in a year, five years, ten years from now. What are we talking about? Lay it out for me, Tristan. I mean, are we talking about a world where nothing is real and everything is... I mean, it's like Aldous Huxley again, on steroids. We just will not be able to even know if a call from our spouse is real.

SPEAKER_05: Yeah. So AI is not going to get worse at emulating someone's voice, someone's handwriting, someone's likeness. That's what generative AI does: it gives you those capabilities. Anything that can be emulated will be emulated. And that's why it's generative AI: it's generating text, generating images, generating 3D models from scratch, generating architectural designs, generating movie scripts, generating amicus briefs, generating, you know, fake articles about people. Anything that can be simulated will be. In New Hampshire, someone deepfaked Joe Biden and automated some robocalls with Joe Biden's voice telling people not to vote in New Hampshire. And I actually listened to it. To be honest, that one, I would have thought that that sounded like Joe Biden. And the point is that that's the worst that it will ever be. So if you're not impressed today with where it is, just look at the growth rate of how much better and how quickly it's getting better.

And what can you do in the face of that? Well, by default, yes, if we live in the world that we live in today, we won't know what's true. But I was just talking to the digital minister of Taiwan, Audrey Tang, and she was talking about the need for authenticated, privileged messages. So anytime the government sends a message, it now comes through one number. If you get a text from the government from that number, you know it's the government. If you don't get a text from that number, it's not the government. Apple and Google could start working on an interoperable standard saying that we're going to verify and make sure that when there's a phone call, there's a real handshake, a new secure, encrypted handshake. It's just like we went from the default on the internet being HTTP to HTTPS, secure. We went from kind of an unsecured, open, unencrypted internet to more secure and encrypted connections. I think that in the age of generative AI, we're going to move to these more privileged and secure environments.
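The authenticated, privileged channel Tristan describes boils down to a very old cryptographic primitive: a digital signature checked against a public key the receiver already trusts. Below is a minimal sketch in Python using the `cryptography` library. The election-office message and the single published key are hypothetical illustrations; real systems (HTTPS on the web, or caller-verification standards like STIR/SHAKEN for phone calls) layer the hard parts, key distribution and revocation, on top of this same primitive.

```python
# A minimal sketch of the "authenticated message" idea: the receiver trusts
# a message only if it verifies against a public key it already holds for
# the sender. This is an illustrative toy, not any real standard; the
# sender and message here are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sender side (done once): generate a keypair and publish the public key
# through a trusted channel, like the single government number Audrey Tang
# describes.
sender_key = Ed25519PrivateKey.generate()
published_public_key = sender_key.public_key()

# Sender signs each outgoing message.
message = b"Polls are open until 8pm. - Election Office"
signature = sender_key.sign(message)

# Receiver side: accept the message only if the signature checks out
# against the key on file. A forged or altered message fails verification.
def is_authentic(public_key, message: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

print(is_authentic(published_public_key, message, signature))          # True
print(is_authentic(published_public_key, b"Do not vote!", signature))  # False
```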
SPEAKER_03: I just wonder if it's like putting a finger in the dike, and there's more and more water building, and that dam is just about to burst. And I just think, well, already people have used ChatGPT-4 to figure out how to break into passwords. I mean, even something as simple as this: every now and again, I'm sure this happens to you, I'm sure this happens to a lot of people listening, you get a text and it looks like it's from your bank and it says fraud detected on your account. And many people who aren't as used to getting these things might click on it. That's simple. But you know what I mean? It's just a matter of time before it's able to just break through all of these systems that are designed to protect us. It's going to get smarter.

SPEAKER_05: Yeah, well, and this, by the way, I think is how we frame the way that we're worried about the risk: we're simply releasing more capabilities into society faster than society has the immune systems to absorb and adapt to all the new changes that accompany all of that AI getting released. You know, the first time someone released that open-source code, the AI that said, with three seconds of your voice, I can speak to your bank, was every bank in the world prepared for that and planning for that years in advance? No. They don't know what new AI capabilities are going to be released. And that's just one tiny one. There's literally hundreds of them per week. It's hard to track. In our AI Dilemma talk, we quote the co-founder of Anthropic, Jack Clark, who said that unless you're scanning Twitter every single day for all these updates, you are missing updates that are critical for national security and sort of what it means to have a safe world.

And so that's where you would say, okay, so why don't we just stop all this? Why don't we just not race? Why don't we just stop releasing all AI? Well, then people would respond to that with: if China doesn't stop, then the US is just going to fall behind. But I want to push back against this, which is not to say that I think we should just stop in the US. We have to get smarter about what it means to beat China. Because if they race so fast that they release stuff that then undermines their own society, that's not in their interest either. We have to be smarter than that. The US has to lead and say, we need to set the terms of the race. And it's actually a race to the responsible and conscious deployment of technology that, in its effect, strengthens your society relative to other ones. That's the true competition.

SPEAKER_03: We're going to take a quick break, but when we come back, more from Tristan on the measures we could take to responsibly deploy AI, and the role Tristan played in a recent White House executive order on AI. Stay with us. I'm Guy Raz, and you're listening to How I Built This Lab.

As a business-to-business marketer, your needs are unique. B2B buying cycles are long and your customers face incredibly complex decisions. Isn't it time you had a marketing platform built specifically for you? LinkedIn Ads empowers marketers with solutions for you and your customers. LinkedIn Ads allow you to build the right relationships, drive results, and reach your customers in a respectful environment. You'll be able to drive results with targeting and measurement tools built specifically for B2B. In technology, LinkedIn generated two to five times higher return on ad spend than other social media platforms. Terms and conditions apply.
Hey, small business leaders. At How I Built This, we hear all about how founders have built their companies from the ground up. Today's sponsor, JustWorks, is all about supporting that small business growth. Whether you're looking for help with payroll, benefits, HR tools, or compliance, JustWorks has you covered. Do you ever get tired of doing it all, or feel like you're too busy cutting checks, filing forms, and browsing benefits to even think about the rest of your to-do list? Running a business takes a ton of work, but you don't have to do it alone. Let me tell you how JustWorks can help your business. JustWorks can help handle some of the administrative work you don't love. With their easy-to-use platform, you can manage onboarding, payroll, and PTO all in one place. JustWorks' cloud-based platform enables managers and employees alike to quickly and securely access benefits, payroll, and other HR functionality from anywhere, anytime. So if it ever feels like your business is running you, visit justworks.com slash podcast to see how JustWorks can help you run your business. That's justworks.com slash podcast.

This episode is sponsored by Miro. If you haven't heard of it, Miro is an incredible online workspace. Our team relies on Miro for a lot of our own brainstorms and processes. And I think it's super useful to try out if you want to build something great with your team. One of my favorite features is the Miroverse. It's this collection of over 2,000 pre-made templates made by ordinary Miro users for all sorts of use cases, like collecting feedback, running meetings, icebreakers. It saves you the hassle of building from scratch. We actually partnered with the folks over at Miro to create a How to Build a Podcast Miroverse template to help you kickstart your journey on making your own podcast. Check it out and let me know what you think. You can find our template at Miro.com slash H-I-B-T. That's M-I-R-O dot com slash H-I-B-T to check out our Miroverse template for yourself.

Welcome back to How I Built This Lab. I'm Guy Raz, and I'm talking with Tristan Harris, co-founder of the Center for Humane Technology. And he says that advancements in the technology could unravel the very fabric of our society.

Societies, human societies, more or less depend on a sense of trust, that there's common information. And even if there are differences of viewpoints and so on, I mean, you can trigger riots, conflicts, violent demonstrations with misinformation that is so credible, that seems so real. I mean, videos. I keep thinking about this movie, The Running Man, that came out in like the 80s with Arnold Schwarzenegger. Have you seen that movie? It's vaguely familiar. Remind me of the plot. So basically, I think, if I'm recalling correctly from my childhood, Arnold Schwarzenegger is a Bakersfield cop, and he's a good guy, but he's disliked by his superiors or his colleagues or whatever. And there's a scene where they're all in a helicopter, and there's an anti-government riot, and the helicopter fires on these demonstrators and massacres them. And they essentially frame Arnold Schwarzenegger, who tries to prevent the other pilots from doing this. They frame him as the guy who did it. And they create a video where it looks like he is the Butcher of Bakersfield. And so he's sent to prison. But you can imagine that future. I mean, it's so crazy that that film...
What happened in that film could easily happen. You can imagine, in a court of law, documentary evidence being presented, video evidence being presented, signatures on documents, photographs, recordings, all of these generated by AI that are so good, it's impossible to discern them from real evidence. And again, I'm on a rant here, but it seems like this is going to completely upend how we think about communication, what we believe, what we present as fact and evidence, how we function as societies.

SPEAKER_05: Yeah, 100%. I mean, here's a metaphor. Imagine that the whole world is run on top of Windows 95. You know, it's running the world's computers, and everything in the world runs on Windows 95. Governments run on Windows 95. Banks run on Windows 95. Hospitals run on Windows 95. Legal documents, court cases, lawyers, it all runs on Windows 95. And then imagine one day someone publishes this code to the whole internet, and it basically teaches you how to hack any Windows 95 computer in the world. So now Windows 95, which runs the world, is not secure anymore. It's insecure.

So in this metaphor, the way that our whole society has been constructed is like sitting on top of this box called Civilization 2000s, right? Like we've sort of been living on an early-2000s world stack of assumptions: that paperwork and signatures are actual signatures, that photographic evidence is actual photographic evidence, and that people's voices are real and can only represent their own voice. But suddenly, with AI, we collectively undermined that set of assumptions. And so what do you do when this happens? Well, you don't try to pretend, let's all keep running the world on Windows 95.

This moment with AI is forcing a kind of rite of passage. Humanity has to kind of go through a bar mitzvah or bat mitzvah to upgrade the systems that we have been relying on to accommodate the new assumptions. And we've done those upgrades before. In democracies, when the printing press came out, the printing press both killed the previous forms of government, the feudal governments, and it made way for democracies, first through a really unstable period. And then ultimately, you could have public education, you could have the Fourth Estate and news articles. It forced this reorganization of what kind of governance we need to live in. We are in this uncomfortable, but necessary, adaptation period where we need to upgrade everything: the basic legal and philosophical mechanisms. We have to come up with new meaning for what is evidence in a world where AI can generate that evidence. And there are ways of doing that. We could live in a world where any media you see on the internet will only be on the internet if it's watermarked, because then we know that it was real. So there are things like this that are the building blocks, the puzzle pieces of this upgrade, but there are about a million pieces that have to happen. And I know that can sound daunting to people, but I almost want us to be collectively saying, okay, we're going to hold hands together, and we've got to go through this transition. And yes, it's going to be a little bit rocky, and we have to make these changes together.
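One way to picture the watermarking Tristan mentions: provenance efforts such as the C2PA content-credentials standard attach a cryptographically signed manifest to a piece of media at capture time, so any later edit breaks verification. The sketch below is a deliberately simplified, hypothetical version of that idea, hash the media bytes and then sign the hash; it is not the actual C2PA format, and the camera key is invented for illustration.

```python
# A simplified sketch of provenance "watermarking" in the C2PA spirit:
# bind a signed manifest to the exact bytes of a media file. If a single
# byte is edited, the hash changes and verification fails. This is a toy
# illustration, not the real C2PA format.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()  # hypothetical key embedded in a camera
camera_cert = camera_key.public_key()      # published so anyone can verify

def make_manifest(media: bytes) -> dict:
    """Capture device signs the hash of the media it actually recorded."""
    digest = hashlib.sha256(media).digest()
    return {"sha256": digest, "signature": camera_key.sign(digest)}

def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Anyone can check the media is unmodified and came from this camera."""
    digest = hashlib.sha256(media).digest()
    if digest != manifest["sha256"]:
        return False  # media was altered after signing
    try:
        camera_cert.verify(manifest["signature"], digest)
        return True
    except InvalidSignature:
        return False  # manifest itself was forged

photo = b"...raw image bytes..."
manifest = make_manifest(photo)
print(verify_manifest(photo, manifest))                    # True
print(verify_manifest(b"...doctored bytes...", manifest))  # False
```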
SPEAKER_03: I know that a big part of what you do is just creating public awareness. But you also went to the White House to help put together an executive order around this stuff late in 2023. Tell me what that order actually, in practical terms, will do. Will it slow down this process? Will it actually create actionable protections for us? Or is it just... I don't know. I mean, again, it's an executive order. It's not, you know, a congressional law. What does it do?

SPEAKER_05: Well, so there are multiple parts to answering this question. So, I mean, this is, I think, like 111 pages. It was done in record time, in six months. It touches algorithmic bias: so, AI used in current systems that are biased, and how to deal with those issues. It deals with AI and biological weapons, and needing to lock down the supply chains for where people can get dangerous materials, and saying we need to handle that better. And for the next GPT-5 and GPT-6 systems, it says that if you train a system that uses more than 10 to the 26th, I believe, FLOPs, or floating-point operations, to use technical jargon, then you have to notify the government. That's basically like saying, if you're building a nuclear weapon that's really powerful, the government has to know.

But to your real point, the real question you're asking is, what can that executive order do? Because it's not law, it's an executive order. It's not legally binding for making sure that all the companies have to do all these things. A lot of it is changing what's called federal terms and conditions. So to get federal funding, if you're a biology lab, you will not be able to get that funding if you don't take these new protective measures for the dangerous biological materials. So what that's doing is using the leverage of the government and its funding power to start to incentivize different aspects of the supply chains of the world, educational environments, banks, et cetera, to do more of the things that are AI-resilient. So think of it as a movement and a signal, like a big bat signal blasted into the sky that says the US government is taking AI seriously.

Now, it's not the security blanket that suddenly makes the world safe, or that suddenly makes OpenAI and Anthropic stop everything they're doing and, before they do more research, consult the executive order. They're still racing to build AI as fast as possible. And we mentioned the nuclear metaphor. How did we get to nuclear proliferation safety? How did we get to nuclear nonproliferation and controls? There were also, back then, a lot of what are called Track II dialogues: informal conversations between American nuclear scientists and, back then, Soviet nuclear scientists, about basically making sure that we had safer controls on nuclear weapons so they couldn't accidentally go off. I'm happy to report that, informally, some of those dialogues are happening between Chinese AI scientists and US AI scientists about the risks. As I say all this: is this adequate to where we need to go? No, it is not. It is a small drop in the pond compared to what needs to happen.

What we really need now is for people to demand from their lawmakers that we take these issues seriously. And I think things like liability as a regulatory framework are powerful because people understand it. You as an AI company shouldn't be worried about being liable for the harms if there are not going to be any harms or risks. So if you don't think there are risks, then go ahead and release it. But if you do think there are going to be risks, and you're liable for them, what that does is it has everybody move at a slower pace, at the pace that we can get this right.
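For a sense of scale on that 10-to-the-26 reporting threshold: a common rule of thumb from the scaling-laws literature estimates training compute as roughly 6 times parameters times training tokens. The sketch below just runs that arithmetic; the model sizes are made-up examples chosen to show the calculation, not real disclosures.

```python
# Back-of-envelope check against the executive order's reporting threshold
# of 1e26 floating-point operations. A common rule of thumb puts training
# compute at roughly 6 * parameters * training tokens; the model sizes
# below are hypothetical examples, not real figures.
THRESHOLD_FLOPS = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * n_params * n_tokens

for name, params, tokens in [
    ("70B-parameter model, 2T tokens", 70e9, 2e12),
    ("1.8T-parameter model, 15T tokens", 1.8e12, 15e12),
]:
    flops = training_flops(params, tokens)
    flag = "must notify" if flops > THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: {flops:.2e} FLOPs -> {flag}")
```

By this rough estimate, the first hypothetical run comes out around 8.4e23 FLOPs, well under the line, while the second lands around 1.6e26 and would trigger the notification requirement.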
SPEAKER_03: We're going to take a quick break, but when we come back, what Tristan thinks it'll take for the world to unite against the dangers of AI, and how he stays motivated in the face of such an enormous challenge. Stay with us. I'm Guy Raz, and you're listening to How I Built This Lab.

SPEAKER_01: You're at a place you just discovered. And being an American Express Platinum card member with Global Dining Access by Resy helped you score tickets to quite the dining experience. Okay, chef. You're looking at something you've never seen before, much less tasted. After your first bite, you say nothing because you're speechless. That's the powerful backing of American Express. See how to elevate your dining experiences at americanexpress.com slash with Amex. Terms apply.

SPEAKER_03: We've all been there. One confusing email turns into 12 confused replies, and then a meeting to get aligned. And who has time for that? Grammarly is a trusted AI writing partner that saves your company from miscommunication and all the wasted time and money that goes with it. I personally love using Grammarly to help me strike the right tone when I'm sending important emails to my teams and business partners. I was amazed at how seamlessly it works with all the different communication tools I use every day. Grammarly works everywhere you work, integrating seamlessly across 500,000 apps and websites. No cutting, no pasting, no context switching. Personalized, on-brand writing help is built into your docs, messages, emails, everything. So join the 70,000 teams who trust Grammarly to work faster and hit their goals while keeping their data secure. Learn more at grammarly.com.

Welcome back to How I Built This Lab. I'm Guy Raz, and I'm talking with Tristan Harris, co-founder of the Center for Humane Technology. Tristan has compared the development of AI to the development of nuclear weapons. But in some ways, the AI problem is even more complicated.

I keep thinking about the nuclear analogy, right? Because there are nine nuclear powers, and probably will be ten with Iran eventually. And we're talking about states, nation-states, and all of them pursued this more or less for power, right? To increase their power. This is different, because it's not just countries. It's not like it's just China or Iran or the United States or North Korea. It's individual companies. It's individual people. I mean, not to say that some guy working in a basement in Ukraine or Belarus is going to build something as effective as what OpenAI will do. But every day, there's a new company that is researching generative AI capabilities and what they might be able to build. So how do you create mechanisms to control all of that?

SPEAKER_05: Yeah, I want to say that, you know, you could have been there in 1945, and you see the first nuclear bomb go off, and you get that there's going to be a nuclear arms race. And you could say, I'm going to throw up my hands. The world is over. Every country is going to get a nuclear weapon. There's going to be conflict, and then there's going to be a nuclear escalation, and the world's going to be over. Notice that we made it through that. It's a miracle we made it through.

SPEAKER_03: Even for the next 40 years, it didn't feel like that was going to happen.
SPEAKER_05: That's right, for a long time. And it didn't happen just because humans are good or humanity got lucky. There were a lot of people who worked very hard. There was the Pugwash movement. There was the Russell-Einstein Manifesto. There was the Union of Concerned Scientists, the Atomic Bulletin, all the nuclear nonproliferation work: building satellites that could detect when people are moving nuclear weapons around, doing better controls and understanding of all the sources of uranium in the world. We built this whole global infrastructure to try to have better understanding of safety and control, of what would make a world with nuclear technology safe. That required a lot of people working really hard. So now, I want to say, the situation looks pretty similar. You know, building towards artificial general intelligence and going faster every day looks pretty bleak. It does. It's not as tractable or easy as nukes, because back then you needed to have state-level resources and access to uranium, which is a very specific and hard-to-find thing, not easy to get. In this case, what uranium was for nuclear weapons, advanced NVIDIA GPU chips are for AI. So when you see that the Biden administration has created the CHIPS Act and is actually restricting sales of NVIDIA chips to China, that's basically like saying we need to start controlling and looking at the global supply and flows of NVIDIA GPU chips. Now, how do you get out of this? With a new union of concerned AI scientists, and a movement of tech engineers, and a movement of the public and legislators that are calling to action. And so there's going to be that kind of effort here.

SPEAKER_03: All right, so what does that effort look like? Like, what do we need to do to, you know, prevent the AI version of a nuclear catastrophe, right? Especially when so many of these AI tools are publicly available for anyone to use.

SPEAKER_05: We need there to be different norms around that, where we probably don't want to open-source the really advanced AI systems that are coming. Think of it this way, for those who don't know, by the way: what we're talking about is open-source AI models, which is different from open-source code. Open-source code is more safe and more secure, because if I do Linux in an open-source-code way, more people look at the code, they can identify the bugs, they can improve the code. It makes the overall thing safer, more secure, more trustworthy, because it's so transparent. But AI models that are open source mean that anybody can retrain them to do even more dangerous things. So for example, Facebook released Llama 2, their open model, and they tried to tune it to be safe. So if you ask it, how do I make a biological weapon? It will not answer. It would say, sorry, I can't answer that. But once it's out there in the open, I won't go into the technical details, but basically for about $100 you can retrain all the safety controls off of it. So you can say, be the worst evil version of yourself, or your evil twin personality, and it'll suddenly happily answer any questions about biological weapons. Now, it's not smart enough to have really deeply accurate instructions about how to do that. But we probably don't want to be releasing Llama 3, Llama 4. And Mark Zuckerberg has publicly stated within the last week or so that he wants to build open-source artificial general intelligence, which is the most dangerous thing you could possibly do.
SPEAKER_03: You know, I still think most of us can't fully imagine how quickly our lives are going to change. And it's already created chaos, a certain level of chaos, but manageable chaos. And I don't think that most of us can imagine what could happen. And sometimes I wonder, like, is it effective? Is it effective to scare people, or to create these kinds of, you know, doomsday scenarios in people's minds? But at the same time, I think about, and you referenced this in your talk, this film, The Day After, that came out in like the mid-80s. I was like eight or nine years old when that came out. And for the life of me, I don't know why my parents let me watch it with them. And I was terrified. I mean, I remember the scenes of the bombs exploding in Kansas City, Missouri, and... it was just terrifying. I had nightmares for years. And that film really did, I mean, not to say that it, you know, resulted in major treaties, but it did create a sense, it built a consciousness, at least in the United States. Because at the end of the film, it says: this is just a representation of what could happen in nuclear war, and in fact, it will be much, much worse than what you've seen. And I don't know, is there a world where it's worthwhile creating something like a Day After around AI, so people just understand what we're possibly facing?

SPEAKER_05: Yeah. I'm so glad you're bringing this up. And what's interesting about it is Reagan had military advisors saying, we can win a nuclear war. We just have to keep... We've got to keep building them, building bombs. Keep building them, have more of them. Yeah, exactly. And if one side believes that the other side actually believes that they're going to try to win a nuclear war, that's what creates the risk. Because then everyone's on hair-trigger alert for anything that looks like it could be a nuke. And then something that's an accident, like a flock of birds coming across the radar, and you almost hit the button.

So what we needed to do was create a new trustworthy basis for coordination, so that the US and Russia would trust that they're actually so existentially terrified by Armageddon that both of them would fear everyone losing more than they fear me losing to you. And I think what that film, The Day After, did is it painted a picture of how everyone loses if this happens. So this actually can have a really big impact. And the point of this, from a metaphorical stance, is that we as public communicators, you, Guy, with this podcast, and people who are listening to this, we have to make the dark future legible so that we can steer towards the light. If we don't make the dark future legible, and people just want to focus on AI making cancer treatment better and giving us solutions to climate change, but not really seeing how the incentives pull us toward racing to roll out capability as quickly as possible and destabilizing society, if we're not honest with ourselves about that, we're going to get the thing that we're not honest with ourselves about. And it's by being honest with ourselves about that risk side that we can actively, collectively choose to steer towards the light side. And that's if all the open-source developers agree on those risks. That's if China agrees on those risks. That's if the UAE, which is also building an open-source model called Falcon, agrees with those risks. That's the world that we need to create. How much time do we have?
Well, like many things, with climate change too, we should have started more than a decade ago. The next best time is today.

SPEAKER_03: I just think the gravity of this is enormous, and how quickly it's happening is enormous. And we have very few choices in this game, you know. It's... I feel disempowered, because-

SPEAKER_05: Yes, I hear you. And that's why, I mean, in my mind, it's the next 12 months. It's like everything has to happen, and don't get anxious about that. Just say, okay, what can we all do over the next 12 months that amounts to the maximum set of things shifting the incentives? You know, this isn't a problem with a solution. This is a predicament with responses and ways of navigating. And this is about how we find the wisest, clearest, most steady-handed path through this that we can. And I think that we all have to stay resolved and calm and say, what will it take for the world to go well, and to work every day at assuring that outcome? By the way, if you're interested, you can text AI to 55444. We're interested in gathering sort of public support and power around demanding the kind of AI guardrails and safety that we want. There are many groups that you can get involved with online, demanding from Congress and legislators that we need better mechanisms of having liability for AI systems. There's a lot that can happen, but really, just sharing this around and having more people talk about it is one of the best ways to make an impact.

SPEAKER_03: Tristan, I imagine that you get attacked. You're in the Bay Area, and you come from that world. I imagine you get attacked, not just praised. I mean, a lot of people love what you do and your message, but there are probably people who really hate what you do and claim that you're overhyping this. And, you know, you're not making money off this, right? This is a nonprofit organization. I mean, what is your incentive? What drives you to keep doing this, even with all the pushback that you get?

SPEAKER_05: It's really simple, Guy. It's love. I want to be able to live in a future, and have other human beings and life forms be able to enjoy and love the future that we're creating. Just like we have to care about the planet and the health of the environment underneath our feet that supplies our air, we also have to care about protecting the social fabric, trust, and the shared reality upon which everything else depends.

SPEAKER_03: Yeah. Do you think that humans are going to be around in 500 years?

SPEAKER_05: I don't know, is the honest answer. I don't know. In our work at the Center for Humane Technology, we often think about this moment as an initiatory threshold, like a rite of passage for humanity: that we cannot keep doing technology the way that we have been doing it. You know, we did DuPont chemistry, whose motto was "better living through chemistry." And we all loved that. We reverse-engineered this whole field of organic and inorganic compounds, and we can synthesize anything with chemistry. And we got a lot of amazing things out of that that everyone's grateful for. But we also got forever chemicals, and forever chemicals literally never go away. That's why they're called forever chemicals. Your body can't degrade them.

SPEAKER_03: We all have them in our bodies.
SPEAKER_05: Every single one of us, you and me and everybody listening to this. If you go to Antarctica right now and you open your mouth to drink the rainfall, the rain in Antarctica will put levels of forever chemicals into your body that are higher than what the EPA currently says is safe for human health. Now, we have created this mess. The answer isn't that we should be self-hating primates who don't want to build any technology. It's: how do we do technology without externalities? How do we do social media without destroying the mental health of teenagers? How do we do smartphones without destroying attention spans? How do we do food packaging without creating forever chemicals and plastics and plastic pollution? I think we can be pro-technology and anti-externalities, and that's the nuanced position that I want everybody to be in, rather than saying you're either for tech and for acceleration, or you're a decel. It's like, no, I'm for getting this right. I hope we get this right as a species.

SPEAKER_03: Me too. I really do. Me too. Tristan, thanks so much. Thanks so much, Guy. I really appreciate it. That's Tristan Harris, co-founder of the Center for Humane Technology.

And thanks for listening to the show this week. Please make sure to click the follow button on your podcast app so you never miss a new episode of the show. And as always, it's free. This episode was researched and produced by Alex Cheng, with editing by John Isabella. Our music was composed by Ramtin Arablouei. Our audio engineer was Neal Rauch. Our production team at How I Built This also includes Carla Estevez, Chris Messini, J.C. Howard, Catherine Seifer, Carrie Thompson, Malia Agudelo, Neva Grant, and Sam Paulson. I'm Guy Raz, and you've been listening to How I Built This Lab.

If you like How I Built This, you can listen early and ad-free right now by joining Wondery Plus in the Wondery app or on Apple Podcasts. Prime members can listen ad-free on Amazon Music. Before you go, tell us about yourself by filling out a short survey at wondery.com slash survey.

SPEAKER_00: The global smartwatch industry is worth $45 billion annually. The Apple Watch is the undisputed bestseller, but Apple's dominance wasn't always a given. In the wake of Steve Jobs' death, Samsung was ready to capitalize on the company's uncertain path and beat Apple to market with the first smartwatch. By 2013, Samsung had become an electronics powerhouse, a far cry from its humble origins as a family grocery store. It was ready to take on Silicon Valley's finest. In this face-off, both companies will have to sway consumers while surviving PR disasters as they open the Pandora's box of interactive biometrics. Hi, I'm David Brown, the host of Wondery's show Business Wars. We go deep into some of the biggest corporate rivalries of all time. And in our latest season, we're clocking the fierce battle over wearable technology between Apple and Samsung. Make sure you follow Business Wars wherever you get your podcasts. You can listen ad-free on the Amazon Music or Wondery app.