Before a Bot Steals Your Job, It Will Steal Your Name

The future of AI looks a lot like Tessa, Ernie, and Amy.

[Illustration: a pixelated blue name tag reading "Tessa" in white letters, on a black background. Joanne Imperio / The Atlantic]

In May, Tessa went rogue. The National Eating Disorder Association’s chatbot had recently replaced a phone hotline and the handful of staffers who ran it. But although it was designed to deliver a set of approved responses to people who might be at risk of an eating disorder, Tessa instead recommended that they lose weight. “Every single thing that Tessa suggested were things that led to the development of my eating disorder,” one woman who reviewed the chatbot wrote on Instagram. Tessa was quickly canned. “It was not our intention to suggest that Tessa could provide the same type of human connection that the Helpline offered,” the nonprofit’s CEO, Liz Thompson, told NPR. Perhaps the organization didn’t want to suggest a human connection, but why else give the bot that name?

The new generation of chatbots not only converse in unnervingly humanlike ways; in many cases, they have human names too. In addition to Tessa, there are bots named Ernie (from the Chinese company Baidu), Claude (a ChatGPT rival from the AI start-up Anthropic), and Jasper (a popular AI writing assistant for brands). Many of the most advanced chatbots—ChatGPT, Bard, HuggingChat—stick to clunky or abstract identities, but human names such as Maya, Bo, and Dom keep joining the already endless ranks of customer-service bots.

As generative AI continues to advance, expect a deluge of new human-named bots in the coming years, Suresh Venkatasubramanian, a computer-science professor at Brown University, told me. The names are yet another way to make bots seem more believable and real. “There’s a difference between what you expect from a ‘help assistant’ versus a bot named Tessa,” Katy Steinmetz, the creative and project director of the naming agency Catchword, told me. These names can have a malicious effect, but in other instances, they are simply annoying or mundane—a marketing ploy for companies to try to influence how you think about their products. The future of AI may or may not involve a bot taking your job, but it will very likely involve one taking your name.

The very first chatbot, ELIZA, wasn’t capable of much. A therapist bot created by the MIT professor Joseph Weizenbaum in the mid-1960s, ELIZA was more parrot than psychoanalyst, often doing little more than repeating and rephrasing questions that users asked it. Still, people credited this janky form of AI with more understanding, creativity, and personality than Weizenbaum had expected. A decade after ELIZA’s debut, Weizenbaum remarked that he was “startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it.” Today, the projection of human traits onto computers has a name: the ELIZA effect.

The following decades brought chatbots with names such as Parry, Jabberwacky, Dr. Sbaitso, and A.L.I.C.E. (Artificial Linguistic Internet Computer Entity); in 2017, Saudi Arabia granted citizenship to a humanoid robot named Sophia. But that was before AI was convincing enough to feel real. In this new era of generative AI, human names are just one more layer of faux humanity on products already loaded with anthropomorphic features.

Although Microsoft’s official name for its chatbot is simply Bing Chat, the AI initially appeared to have an alter ego that called itself Sydney. It was suppressed after professing its love for a journalist, but maybe not permanently. “If you want it to be Sydney,” Microsoft’s chief technology officer said of Bing Chat in May, “you should be able to tell it to be Sydney.” ChatGPT, meanwhile, sends out responses word by word, as if it’s thinking. A rectangle blinks as it types, not unlike when a friend is typing over iMessage. “These are design choices, very thoughtfully done, to create that verisimilitude,” Venkatasubramanian said. “These bots are designed to create an impression of sentience, which, as humans, we are particularly susceptible to.”
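The word-by-word "typing" effect described above is simple to reproduce. This is a minimal sketch of the general technique, not any vendor's actual implementation; the function name and delay value are illustrative assumptions:

```python
import sys
import time

def stream_reply(text, delay=0.03):
    """Print a reply one word at a time, mimicking the 'typing'
    effect chat interfaces use to suggest the bot is thinking."""
    for word in text.split():
        sys.stdout.write(word + " ")
        sys.stdout.flush()      # show each word immediately
        time.sleep(delay)       # brief pause between words
    sys.stdout.write("\n")

stream_reply("Hello! I'm Tessa, a new voice assistant.")
```

The pause is pure theater: the full response is already available, and the delay exists only to create the verisimilitude the researchers describe.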

Names are an easy way to make products feel smarter and more personal. That seems to be especially true of the customer-service bots that companies have been turning to for years, and especially post-ChatGPT. Every bank seems to have its own Erica (Bank of America), Sandi (Santander), or Amy (HSBC). People craving White Castle sliders can now place their order through the company’s drive-through bot, Julia. The bot displays its name on the screen before taking orders—“I’m Julia, a new voice assistant”—and shamelessly encourages customers to order extra food and drinks. Queries to Lufthansa can be directed toward its AI, Elisa—a human-seeming touch that would provide little comfort if the airline lost my luggage. But giving a bot a real name can translate to sales. Research from 2021 found that giving customer-service chatbots anthropomorphic features, including a human name, has “a direct, beneficial relationship with transaction outcomes.”

The proliferation of chatbots with human names follows the popularity of Amazon’s Alexa, but the bots don’t “wake up” when their name is called—a problem so pervasive for people named Alexa that it helped inspire a nonprofit organization dedicated to renaming the device. Still, like Alexa, many of the customer-service bots are female-coded products whose sole purpose is to obey commands, though that is not universally true. A spokesperson for Anthropic said the company named its chatbot Claude because it “wanted a warm, friendly name for our model” and “noticed a convention of naming assistants with female names that we wanted to buck.”

With Alexa and other home assistants, “you can still physically see the product and know that at the end of the day, it is a technology gadget,” says Merve Hickok, the senior research director at the Center for AI and Digital Policy. “Chatbots are disembodied. Our interactions with chatbots are similar to how we communicate with other humans.” In the ChatGPT era, people might already assume that bots are sentient; addressing one by name doesn’t help. The risk could be especially apparent for chatbots’ most vulnerable users, says Gavin Abercrombie, an AI expert at Heriot-Watt University, in Edinburgh—such as children and adults suffering from dementia. If voice assistants like Alexa could encourage a 10-year-old girl to touch a penny to a live electrical outlet, then a generative AI that can communicate more like a person, and is named like one too, seems destined to backfire. “Giving a device a human name is not necessarily the wrong choice, but it has to be really thought out,” Abercrombie told me. “What are we trying to do? What kind of relationship do we expect the users to have with this?”

White Castle’s Julia, which simply facilitates the purchase of hamburgers and fries, is no one’s idea of a sentient bot. But as we enter an era of ubiquitous customer-service chatbots that sell us burgers and plane tickets, such attempts at forced relatability will get old fast—manipulating us into feeling more comfortable and emotionally connected to an inanimate AI tool. Resisting the urge to give every bot a human identity is a small way to let a bot’s function stand on its own and not load it with superfluous human connotations—especially in a field already inundated with ethical quandaries.

But for now, bots with human names are becoming unavoidable. My name has so far evaded Silicon Valley, but I doubt it’ll be long before I end up expressing my concerns to an AI-powered Jacob.