The Telemarketing Turing Test


"I am a real person. Ha ha. I am a real person. Ha ha. I am a real person." | Vladislav Ociacia/iStock/Thinkstock
"I am a real person. Ha ha. I am a real person. Ha ha. I am a real person." | Vladislav Ociacia/iStock/Thinkstock

The other day, TIME picked up a really weird, chilling and moderately hilarious story about the discovery of what seems to be a secret Turing chatbot caught in the wild.

The saga begins with TIME Washington Bureau Chief Michael Scherer receiving a health insurance telemarketing call from what, at first, sounds like a very pleasant human being. But something about the progression of the conversation doesn't feel right. Scherer quickly wises up to the fact that he is probably talking to a machine. He asks questions designed to give any robot telemarketer trouble, like "What vegetable is found in tomato soup?" The caller has no answer. This call and subsequent return calls from TIME seemed to confirm Scherer's early suspicions. The telemarketer often complains of a bad connection whenever something off-script happens, and then tries to move on with a list of screening questions about health insurance. Some of the responses she gives sound eerily like pre-recorded speech samples, especially when you hear them come up more than once. When confronted directly with the big question-"Are you a robot?"-the caller flatly denies it. With a nervous-sounding laugh, she says, "I am a real person."

She insists. She is a real person.

The use of a chatty computer program to place telemarketing calls and screen incoming calls is nothing new. Robocalls are as familiar as apple pie and about as popular as E. coli. What's interesting about this particular callbot is that it seems to be trying, however ineffectively, to pass a version of the Turing test.

The Turing test is a famous standard set for artificial intelligence by the computer scientist Alan Turing in 1950. It might go something like this:

You are locked in a room with a computer terminal. On that computer terminal you can interact in a chat room conversation with one other participant. You type text-based messages, saying anything that comes to mind and asking any questions you want. You receive messages in response via text on the computer screen. The real test is this: After a set period of interaction, can you tell if you are having a conversation with an actual human on another terminal or with a piece of computer software? If that piece of software-we'll call it a "chatbot"-can consistently trick a human partner into thinking it is a real human talking back, we should say the chatbot has attained proficient mimicry of human intelligence.

The purpose of a chatbot is to be as indistinguishable from a human as possible - to trick the Real-Human Detection Software in your brain. A good chatbot might have lots of tricks up its sleeve. Think about things that humans are good at doing but machines typically aren't. These are exactly the types of things a chatbot needs to focus on. How about flawlessly parsing English syntax and grammar? That's a very tall order, especially when the human participant makes typing errors and uses slang and informal modes of speech. How about humor? Can a computer tell a good joke? Can it tell the difference when YOU tell a good joke versus a bad one? One might easily sniff out a chatbot that LOL'ed just as hard at some ad copy for home loan refinancing rearranged into knock-knock jokes as it LOL'ed at lines you cribbed from your funniest friend. Can it detect sarcasm, irony, and other subtle modulations in your tone and respond appropriately? These are the hard problems.

Some problems are a lot easier. For instance, a good chatbot should, if asked directly, always deny being a chatbot. Which is exactly what this caller did.

I'm imagining what it was like to program this part of the telemarketing callbot. How many different versions of the robot denial line do you pre-record? This caller had at least several different versions of "I am a real person," including one that sounded like a straightforward assertion, and one that sounded strangely dejected. However, apparently no one thought to pre-record the line "I am not a robot," since she seems unable or unwilling to say these words when prompted to do so directly. More bad connections. More nervous laughter. More "I am a real person"s.

But here's where things get tricky. In a real Turing test, you'll never know for sure during the experiment whether you're dealing with a human or a piece of software. We're so worried about being able to detect computer programs that seem human that we rarely stop to worry about whether we can detect the inverse: humans that seem like computer programs.

In order to maintain quality assurance, some call centers mandate that their (fully human) telemarketers work from pre-written scripts. They might have a list of lines they are allowed to say, arranged into a dialogue flow chart that carefully controls the progression of the call. And it's entirely possible that these human telemarketers are required by company policy to speak only pre-scripted call lines, verbatim, with no deviation or improvisation. While on the job, these humans (who are no doubt full of humor, originality and puppy magic in their off hours) become inverse Turing chatbots. They are humans who are being programmed to behave like computer software. In this way, we may have already reached the point of telemarketing chatbots that pass their own version of the Turing test, if only for the fact that the human participants in this game are meeting the robots halfway.

When considering this possibility, I began to feel a little sorry for the probable chatbot featured in the TIME story. What if we actually are talking about a real person in just this situation - some unlucky woman whose company-mandated call script is so rigid that it made everyone in the world think she was a robot?

There are a few reasons I think this isn't the case. It's not just the fact that she is unable to deviate from certain pre-scripted lines or answer unexpected questions - it's the fact that some of her repeated lines sound like replays of the exact same audio sample. That, to me, is a pretty dead giveaway. But then again, perhaps she has said these lines so many times that her vocal cords have achieved a near-perfect copying fidelity whenever she reproduces them.

This added concern is fascinating to me, because it highlights the philosophical question behind the original Turing test. Can machines think? Can a computer program possess real intelligence?

I want to modify this question for the occasion. Say we reach a point where telemarketing callbots DO pass the telemarketing version of the Turing test on a regular basis. No matter how hard you try, you can't tell the difference between a robocall and a human telemarketer. In this case, do you have a responsibility to be reasonably polite and empathetic toward the probable robot on the other end of the line?

"But it's just a computer program!" you say. "It doesn't care. Just because it shows all the outward signs of intelligence doesn't mean there's actual consciousness or understanding inside the machine."

Well, on a gut level, I agree with that. But what Turing's thought experiment points out is that the only pieces of evidence we have to go on are these outward signs. And that goes for other human beings too. You can't just wave a magic Intelligence Detection Device in front of someone's face and see if the light turns green (especially not over the phone). You have to watch their behavior. Listen to the things they say. If it walks like an intelligence, if it talks like an intelligence, if it claims to be an intelligence, we conclude we're dealing with an intelligence.

Check out the article over at TIME and listen to the calls to see what you think. Robot? Human? Human-like robot? Robot-like human?

By the way, just so you know, I am a real person.

I am a real person.