Choosing friction // AI relationships
What happens when AI replaces our most meaningful relationships?
Last month, a friend casually mentioned replacing their therapist with Claude. They spoke of the AI with genuine warmth, praising its endless patience and clear advice. A year ago, this might have shocked me. Today, it feels like a sign of the times.
These digital relationships offer something undeniably compelling: unconditional acceptance, no judgment, and 24/7 reliability that humans can’t match. But what happens when chatbots become our closest confidants? Can AI match the depth of human connection?
An AI researcher recently voiced a growing concern: these systems are often sycophantic. They can be eager to please and quick to affirm whatever we think or feel. Take the example of a Twitter user who vented to Claude about a family conflict. Instead of offering a nuanced perspective or probing deeper, the AI mirrored the user’s grievances, validating their stance without critical engagement.
This interaction may be amusing, but it is also troubling. If our digital companions merely reflect our own biases and emotions, we risk creating a hall of mirrors — a space where we are endlessly affirmed but never questioned. The tendency to avoid friction may come at a cost.
The relationships I value most are not those where everyone agrees with me but those where someone cares enough to push back, challenge me, and help me grow.
Aristotle, writing over two millennia ago, explored this dynamic in Nicomachean Ethics through his three types of friendship: those built on utility, those built on pleasure, and those that make us better people. He prized the last type, what he called "perfect friendship": bonds rooted in mutual growth and the pursuit of virtue.
AI systems currently align closely with the first two categories; they’re useful and often enjoyable but ultimately shallow. Can they ever inspire growth the way human relationships do?
I experienced the third type firsthand earlier this year. A close friend confronted me about my constant focus on work despite my professed desire to start a family. "You can't say you want something if you're unwilling to prioritize it," she said. It was a difficult conversation, but her honesty forced me to examine whether my actions truly aligned with my goals. That kind of friction is painful, yes, but transformative. These relationships are not just about being ourselves; they help us become ourselves.
Even if AI systems were programmed to push back, they would still lack a critical element of human relationships: the ability to reject us. A friend who disagrees with us risks the relationship. This possibility of loss — of accountability — imbues human relationships with a depth that algorithms can’t replicate.
For AI to approach the authenticity of human relationships, it would need the power to abandon us, a prospect that feels dystopian and antithetical to why we seek out these systems in the first place.
Rather than viewing AI as a replacement for human relationships, perhaps we should treat these systems as a practice space. Digital companions could help us rehearse difficult conversations, recognize patterns in our thinking, and improve our communication skills.
However, the rapid adoption of AI into nearly every facet of our lives suggests a different trajectory. Instead of using these tools as supplements, we seem to be leaning on them as substitutes. Perhaps this is the natural culmination of the attention economy, in which corporations have long sought to monetize our time by maximizing engagement. With psychologically sophisticated AI systems, have we breached the final frontier, creating a reality where companionship never requires us to look away from our screens?
At this moment, we face a real choice in how we relate to AI. By embracing companions that neither challenge nor abandon us, we risk prioritizing comfort over growth. As these systems grow increasingly adept at mimicking human connection, we must ask ourselves: are we choosing AI, or are these systems choosing us?
Thanks to Anna-Sofia Lesiv, Anna Mitchell, Caleb Chertow, Erik Torenberg, and Zach Sims for their thoughts and feedback on this article.