Chatbot

Episode Summary

Robert Epstein, one of the founders of the Loebner Prize for artificial conversation, spent two months exchanging emails with a chatbot named Ivana that he believed was a Russian woman interested in a romantic relationship. That even an expert in chatbots was fooled shows how far chatbot technology has advanced.

The Loebner Prize challenges chatbots to pass the Turing test, proposed by Alan Turing in 1950. In the test, a judge communicates with both a human and a computer, and the computer tries to respond convincingly enough to pass as human. Turing predicted that within 50 years computers would be able to fool 30% of judges after just five minutes of conversation, which proved fairly accurate: in 2014, a chatbot named Eugene Goostman fooled some judges into thinking it was a 13-year-old Ukrainian boy.

One of the first chatbots was ELIZA, created by Joseph Weizenbaum in the 1960s. ELIZA imitated a therapist by rephrasing users' statements as questions. People enjoyed conversing with ELIZA, even asking to speak to it in private, and some saw potential for chatbots like it to greatly increase the efficiency of psychotherapy. Weizenbaum himself, however, worried that people would settle for poor substitutes for human interaction.

Today chatbots are ubiquitous, handling simple tasks like customer service inquiries, while more advanced ones like Babylon Health can assess medical symptoms. Most modern chatbots don't try to seem human. The exceptions tend to serve unethical purposes: running fake profiles on dating sites, or stoking outrage and spreading misinformation on social media.

Overall, chatbots work best when focused on simple, specialised tasks while humans handle complex interactions. This aligns with Adam Smith's ideas on productivity through the division of labour: automation through chatbots reshapes jobs rather than eliminating them.
However, we must be cautious about letting convenience and efficiency stop us from having meaningful human interactions. Rather than chatbots fooling humans, the ideal may be leveraging them to free up more time for humanity.

Episode Show Notes

It's claimed that some computers can now pass the Turing test: convincing people that they are human. Tim Harford asks how important that distinction is, and what it means for the future of human interaction.

Episode Transcript

SPEAKER_00: Amazing, fascinating stories of inventions, ideas and innovations. Yes, this is the podcast about the things that have helped to shape our lives. SPEAKER_06: 50 Things That Made the Modern Economy, with Tim Harford. SPEAKER_03: Robert Epstein was looking for love. The year being 2006, he was looking online. He began a promising email exchange with a pretty brunette in Russia. Epstein was disappointed. He wanted more than a pen friend, let's be frank, but she was warm and friendly. Soon, she confessed she was developing a crush on him. SPEAKER_01: I have very special feelings about you. It, in the same way as the beautiful flower blossoming in my soul, I only cannot explain. I shall wait your answer holding my fingers have crossed. SPEAKER_03: The correspondence blossomed. It took a long while for Epstein to notice that Ivana never really responded directly to his questions. She'd write about taking a walk in the park, having conversations with her mother, and repeat sweet nothings about how much she liked him. Suspicious, he eventually sent Ivana a line of pure bang-on-the-keyboard gibberish. She responded with another email about her mother. At last, Robert Epstein realised the truth. Ivana was a chatbot.
What makes the story surprising isn't that a Russian chatbot managed to trick a lonely middle-aged Californian man. It's that the man who was tricked was one of the founders of the Loebner Prize, an annual test of artificial conversation in which computers try to fool humans into thinking that they, too, are human. One of the world's top chatbot experts had spent two months trying to seduce a computer programme. Each year, the Loebner Prize challenges chatbots to pass the Turing test, proposed in 1950 by the British mathematician, code-breaker and computer pioneer Alan Turing. In Turing's imitation game, a judge would communicate through a teleprinter with a human and a computer. The computer's job was to imitate human conversation convincingly enough to persuade the judge that it was the human. Alan Turing thought that within 50 years computers would be able to fool 30% of human judges after five minutes of conversation. Not far off: it took 64 years, although we are still arguing over whether Eugene Goostman, the programme that in 2014 was trumpeted as passing the Turing test, really counts. Like Ivana, Goostman managed expectations by claiming not to be a native English speaker. He said he was a 13-year-old kid from Odessa in Ukraine. One of the first and most famous early chatbots, Eliza, would not have passed the Turing test but did, with just a few lines of code, successfully imitate a non-directive human therapist. She... it... was programmed by Joseph Weizenbaum in the mid-1960s. If you typed, my husband made me come here, Eliza might simply reply: SPEAKER_02: Your husband made you come here. SPEAKER_03: If you mentioned feeling angry, Eliza might ask: SPEAKER_04: Do you think coming here will help you not to feel angry? SPEAKER_03: Or she might simply say, please go on. People didn't care that Eliza wasn't human. At least someone would listen to them without judging them or trying to sleep with them.
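The rephrasing trick described above — reflect first-person words into second-person and turn the statement back as a question — really can be done in a few lines. The sketch below is a minimal illustration of the ELIZA-style technique, not Weizenbaum's original DOCTOR script; the keyword rules and reflection table are invented for this example.

```python
import re

# Minimal ELIZA-style rephrasing: swap first- and second-person
# words, then echo the user's statement back via keyword rules.
# Illustrative only; not Weizenbaum's original program.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

def reflect(text):
    # "my husband made me come here" -> "your husband made you come here"
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(statement):
    # A couple of keyword rules, then a generic fallback.
    m = re.search(r"\bi feel (.+)", statement, re.I)
    if m:
        return f"Do you often feel {reflect(m.group(1))}?"
    m = re.search(r"\bmy (.+)", statement, re.I)
    if m:
        return f"Your {reflect(m.group(1))}."
    return "Please go on."

print(respond("My husband made me come here"))
# prints: Your husband made you come here.
```

Like the real ELIZA, it has no understanding at all — just pattern matching and a fallback ("Please go on.") — which is exactly why its apparent attentiveness surprised so many users.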
Joseph Weizenbaum's secretary asked him to leave the room so that she could talk to Eliza in private. Psychotherapists were fascinated. A contemporary article in the Journal of Nervous and Mental Disease mused that several hundred patients an hour could be handled by a computer system. Supervising an army of bots, the human therapist would be far more efficient. And indeed, cognitive behavioural therapy is now administered by chatbots such as Woebot, designed by a clinical psychologist, Alison Darcy. There's no pretence that they're human. Joseph Weizenbaum himself was horrified by the idea that people would settle for so poor a substitute for human interaction. But like Mary Shelley's Frankenstein, he'd created something beyond his control. Chatbots are now ubiquitous. They handle complaints and inquiries. Babylon Health is a chatbot that quizzes people about their medical symptoms and decides whether they should be referred to a doctor. Amelia talks directly to the customers of some banks, but is used by Allstate Insurance to provide information to the call-centre workers who are themselves talking to customers. And Alexa and Siri interpret our voices and speak back, with the simple goal of sparing us from stabbing clumsily at tiny screens. Brian Christian, author of The Most Human Human, a book about the Turing test, points out that most modern chatbots don't even try to pass it. There are exceptions. Ivana-esque chatbots were used by Ashley Madison, a website designed to facilitate extramarital affairs, to hide the fact that few human women used the site. It seems we're less likely to notice a chatbot isn't human when it plugs directly into our libido. Another tactic is to rile us up: one effective chatbot, MGonz, tricks people by starting an exchange of insults. Politics, most notoriously the 2016 US election campaign, is well seasoned with social media chatbots pretending to be outraged citizens, tweeting lies and insulting memes.
But generally, chatbots are happy to present as chatbots. Seeming human is hard. Commercial bots have largely ignored the challenge and specialise in doing small tasks well, solving straightforward problems and handing off the complex cases to a person. Adam Smith explained in the late 1700s that productivity is built on a process of dividing labour into small, specialised tasks. Modern chatbots work on the same principle. This logic leads economists to believe that automation reshapes jobs rather than destroying them. Jobs are sliced into tasks; computers take over the routine tasks; humans supply the creativity and the adaptability. That's what we observe, for example, with the digital spreadsheet, the cash machine or the self-checkout kiosk. Chatbots give us another example. But we must be wary of the risk that, as consumers, producers and perhaps even as ordinary citizens, we contort ourselves to fit the computers. We use the self-checkout, even though a chat with a shop assistant might lift our mood. We post status updates, or just click an emoji, filtered by social media algorithms. As with Eliza, we're settling for the feeling that someone's listening. Brian Christian argues that we humans should view this as a challenge to raise our game. Let the computers take over the call centres. Isn't that better than forcing a robot made of flesh and blood to stick to a script, frustrating everyone involved? We might hope that rather than fooling humans, better chatbots will save time, freeing us up to talk more meaningfully to each other. SPEAKER_06: The perfect guide to the history of chatbots is Brian Christian's The Most Human Human. For a full list of our sources, please see bbcworldservice.com slash 50things.