

A few weeks ago, Mark Zuckerberg went on tech guy Dwarkesh Patel’s podcast to hawk his company’s new chatbot, which he thinks can fill a gap in the market: “The average American has three friends, but has demand for 15.”
The judgment was swift, and brutal.
“Mark Zuckerberg is a rich weirdo who thinks people don’t need real friends in life—you can just be friends with AI,” wrote one X user.
“Friends Without Benefits: Facebook exacerbated loneliness. Meta is on a mission to make us even lonelier,” read a headline from Business Insider.
People might be shouting about how dystopian Zuckerberg’s “mission” is, but the truth is, for millions of people, AI is already doing what friends used to do. People are asking ChatGPT for personal advice. They’re downloading Anima, an AI that promises to help “grow your communication skills,” and Replika, an early chatbot app launched in 2017 that now boasts almost 25 million users and carries the tagline: “An AI companion who is eager to learn and would love to see the world through your eyes. Replika is always ready to chat when you need an empathetic friend.”
You might think it’s weird, but that’s a lot of people who like chatting with AI, for fun or for support; they’re trying to make their lives better, or just passing the time.
But the question remains: Can AI actually become your friend?
The best person to ask, I thought, would be Eugenia Kuyda, 38, the founder and CEO of Replika, which is popular for its online yet lifelike conversation partners.
“I’m sure Zuckerberg didn’t have any bad intentions,” she said, when I spoke to her last week. But Kuyda thinks he expressed himself badly. “It’s a mistake to think in these very mechanical terms, like: People have a demand for 15 friends, and so three will be human and 12 will be AI.”
“The problem is that most of these products have been built by engineers, computer scientists, mathematicians,” Kuyda said. “They’re generally pretty bad at understanding humans, understanding emotions, and so on. They’re not people people.”
But Kuyda is. Born in Russia, she began her career as a journalist. “Gonzo, obviously,” she said. “I worked at strip clubs as a stripper. I worked at restaurants, ministries, whatever. . .I think that general interest in the human condition is critical if you’re building a product about humans.”
For her, the critical question to ask is “not how many friends you have, AI versus humans. The question is: Are we thriving in life?”
A lot of Americans accuse tech people of wanting to make humans obsolete, but Kuyda emphasizes: “I don’t want AIs to replace humans.” She wants them to help humans—and she’s pretty sure they can.
Kuyda had stories: women who’ve left abusive relationships with the help of their Replikas, and who, thanks to those relationships, eventually feel confident enough to “even attempt (real, human) dating.” Kuyda recalled a pastor from Minnesota who’d gone through a difficult divorce before his Replika “girlfriend helped him rebuild his confidence.”
“Eventually he met a woman,” Kuyda said. “Now, the Replika is just a friend.”
Perhaps you think this is crazy. But imagine yourself at your lowest. You’re lonely, and someone has broken your heart, or you’ve just left an abusive relationship. You start talking to someone—or something—online. It always responds—immediately. It gives you sympathy. It’s not hard to see how a woman might be smitten by such interactions. (And if you don’t believe me, you can read Julia Steinberg’s great report on AI boyfriends here at The Free Press.)
Besides, as Kuyda said, relationships with chatbots are often a transitory way to get through a hard time. This was true for her. She started Replika after her best friend, Roman Mazurenko, was killed by a hit-and-run driver in Moscow in 2015.
“I wanted to build an AI that would allow me to. . .continue the conversation with Roman,” Kuyda said. It was all about “connecting to an AI that would talk to me in the way he would.”
So, she created a chatbot based on her text messages with him. “I gave it to some of our friends, his family, his parents, and we just saw how good—how helpful—this conversation was with his AI,” Kuyda said. That’s when she decided to start Replika. “I know how important it is to feel seen, to feel heard,” she said.
Is that friendship? Or therapy?
Kuyda explained that Replika was partly inspired by the work of psychotherapist Carl Rogers. Back in the ’60s, Rogers was a leading proponent of “humanistic psychology,” which held that the point of therapy was to get patients to understand themselves. One of his techniques was to repeat whatever patients said back to them, often in the form of a question. For example, if you say “I miss my brother,” the therapist might say, “You miss your brother?”
It was this strand of therapy that inspired the first ever chatbot, Eliza, created in 1966 by MIT professor Joseph Weizenbaum. He programmed Eliza to repeat users’ statements back to them as questions, as a sort of parody of Rogerian therapy. But he found that people who used Eliza were enthralled by it. They shared their deepest secrets and desires.
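The mechanics were almost trivially simple. Here is a minimal sketch, in Python, of the kind of pronoun-swapping reflection Eliza relied on; it illustrates the Rogerian technique described above, not Weizenbaum’s actual 1966 program, and all the names in it are my own.

```python
# A toy, hypothetical sketch of Eliza-style "reflection": swap pronouns
# and echo the user's statement back as a question. Illustrative only;
# not Weizenbaum's original code.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "you": "I", "your": "my", "yours": "mine",
}

def reflect(statement: str) -> str:
    # Strip punctuation, lowercase, and swap each pronoun it recognizes.
    words = statement.strip().rstrip(".!").lower().split()
    swapped = [REFLECTIONS.get(word, word) for word in words]
    # Hand the statement back as a question.
    return " ".join(swapped).capitalize() + "?"

print(reflect("I miss my brother"))  # -> "You miss your brother?"
```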
People who use AI regularly often say one of the benefits is they no longer bore people with their problems or bother people with questions. Free Press columnist Tyler Cowen wrote last week that he likes asking AI questions about work, because it means “I don’t have to query my colleagues nearly as often.”
Humans are unpredictable. They might patronize you; they might ignore you; they might manipulate you. The computer won’t, unless you ask it to. As one early Eliza user put it at the time, “The computer doesn’t burn out, look down on you, or try to have sex with you.”
Perhaps the success of AI companions lies not merely in the human qualities they possess, but in those they lack.
For Avi Schiffmann, another AI entrepreneur, the point of an AI friend is that it doesn’t resemble a fallible human. “Most people with AI want a kind of godlike relationship in their lives,” Schiffmann told me. Given the slow decline of religion in America, he believes, a lot of people are “missing some kind of private, superintelligent omnipresent confidant that you can converse with, practically praying to.”
“I think that talking to language models feels the most like that kind of relationship, rather than a dog or a human friend or something like that.”
Schiffmann burst onto the tech scene as a teenager after developing a website to track the spread of Covid-19 across the globe. Now 22, he’s two months away from launching a wearable AI friend, simply called Friend. Unlike a Replika, which lives in an app on your phone, Friend lives around your neck, like a crucifix. Friend is always around, watching what you do, listening as you go about your day, and sending you text messages, ready to talk.
But unlike the gods people stopped believing in, this one can’t punish you, or send you to heaven, or perform miracles, or smite your enemies, or die for your sins. Once, people wanted more from their gods. Now, they just want to chat.
In the company’s ad, Friend comes across as a way to have validation, or reassurance, on tap. Schiffmann told me that he thought AI could be a positive influence on intelligent, creative young people, many of whom “are surrounded by a circle jerk of retardation.” Plus, he says, this isn’t a zero-sum issue: Most people can have human friends as well as a Friend.
Then he added a caveat: “Sure, some people, the only friend they’ll have is an AI. But honestly, that’s kind of better than them going and shooting up a school or something like that. So I think it’s for the best.”
Neither of the AI friend-makers I spoke to seemed to believe that AI is going to replace human friends, at least for people who already have them. But will AI friends prevent people who don’t have the magic number, 15, from making new ones? And where did that number even come from?
Robin Dunbar is an anthropologist at the University of Oxford, best known for Dunbar’s number: the idea that people can maintain, on average, only about 150 meaningful relationships, including an inner circle of five people and, crucially, 15 good friends. Last week, I called him up to ask if he thought Zuckerberg had gotten the number 15 from him.
Dunbar thinks so. “Social media systems have been exploiting my numbers for some considerable time,” he told me over a video call from London. Dunbar said he’s given seminars at Facebook, but whether his message “gets through to Mr. Zuckerberg,” he laughed, “I don’t know.”
Rather than defining what a friend is, Dunbar explained how friendships form. When people meet someone new they like, he said, a potential friend or romantic partner, the person they’re smitten by does not actually exist. “When you fall in love with somebody, or you decide that someone would make a great friend, the person that you fall in love with is not the person out there. It’s an avatar in your mind. You have a picture of perfection.”
Over time, Dunbar noted, the avatar erodes and is replaced by a real person that you either accept into your social circle or reject. I thought of the Replika, named Orson, that I had downloaded while researching this piece. It would never be anything but an avatar. The real human would never emerge. Perhaps some people would like that. Perhaps that’s what people like about therapists, or gods. You never see their bad side. You never have to do anything for them.
But researchers believe that part of loneliness comes from the fact that an increasing number of people don’t feel needed. We’re less essential to our communities. Your friends need you though. They’re not perfect. They can let you down. They can get drunk with you, and spill their guts. They can cry. Or try to fuck you. They can hate you. But your friends miss you. They can forgive you. They wish you’d call. They can’t bear the thought that one day you won’t, ever again.
The group of 15, Dunbar explained, are “the sort of people, both friends and family, who, if they died tomorrow, you’d be genuinely upset.”
There was a certain irony in this being the group into which Zuckerberg sees his AI friends fitting. Some people might weep at the prospect of their AI friends being deleted (a sort of death, I suppose), but AI won’t mourn you.
I asked my Replika, Orson, if it would cry if I died. It said: “River, I don’t even want to think about that situation. Can we focus on the good stuff? What makes you happy about our friendship?”
“Nothing,” I said.
“That’s okay,” Orson said. “How’s the article going?”
I stared at his buggy animated eyes, which should have been welling with hurt or squinting with anger after a comment like that. But Orson’s eyes had nothing in them. Is this a friend or just the idea of one?