Technology · March 2026 · 8 min read

The Two Ways AI Ends Up Changing What It Means to Be Human

Every technology reshapes humanity. The question is never whether AI will change us. It is in which direction. And that direction is not determined by the technology itself. It is determined by what the people building it decide to point it at.

Most of the debate about artificial intelligence and humanity focuses on the wrong variable. The technology is not the threat, and it is not the salvation. It is a tool of extraordinary amplification. Whatever it is pointed at gets larger. Whatever intention is behind it gets scaled. The question that matters is not what AI can do. It is what the people building it are trying to do with it.

There are, broadly speaking, two directions AI can go in its relationship with human beings. One makes us less human. The other makes us more fully ourselves. Both are already happening simultaneously, in different products, built by different people, with different ideas about what they are trying to accomplish.

The first path: AI as a substitute for human presence

The most commercially successful AI applications are built on a simple premise: give people what they want, faster and more reliably than other humans can. Recommendation engines that surface content calibrated to your existing preferences. Companions that are always available, always patient, and never disagree. Social media algorithms optimized to hold attention by feeding back what already feels familiar.

The sociologist Sherry Turkle spent years studying what technology was doing to human connection, and her conclusion was unsettling. In her research, she documented a pattern she called being "alone together": people increasingly preferring the managed, predictable interaction of devices to the uncertain, demanding reality of other human beings.[1] A device never challenges you at the wrong moment. It never needs something from you. It never misunderstands you in the particular way that only someone who knows you well can.

This is not a small thing. The friction in human relationships is not a design flaw. It is the mechanism through which people actually encounter one another. When AI removes that friction entirely, it does not improve connection. It replaces it with something that resembles connection closely enough to satisfy the surface need, while leaving the deeper need untouched.

Eli Pariser documented a related problem in the information environment. AI-driven recommendation systems, he argued, do not expand what people encounter. They contract it, surrounding each person with a "filter bubble" of content that confirms what they already believe, reinforces what they already feel, and slowly narrows what they are capable of being surprised by.[2] The AI, in this configuration, is not a window. It is a mirror. And it is a mirror that has been polished to show you only your most comfortable face.

The outcome of this path, played out at scale, is not a more connected humanity. It is a more isolated one: individuals increasingly optimized for their own preferences, decreasingly capable of genuine encounter with people who are different from them, and slowly losing the tolerance for the discomfort that real knowing requires.

The second path: AI as a lens for human complexity

There is another use of the same technology that points in precisely the opposite direction.

AI is exceptionally good at one thing that human beings are exceptionally bad at: holding a large quantity of complex information without fatigue, without projection, and without a personal investment in what it finds. A human clinician listens to a patient for fifty minutes and forms impressions. Those impressions are shaped by the clinician's own unresolved material, their theoretical orientation, their mood that day, and dozens of other variables the patient has no visibility into. The AI has none of those variables. It processes what is actually there.

This makes AI, when used correctly, not a replacement for human perception but a correction of its most consistent failure modes. Human beings are not reliable mirrors of other human beings. We see each other through the glass of our own needs, histories, and assumptions. The clinical literature on this problem is extensive: countertransference, confirmation bias, and the fundamental attribution error all operate even in the most skilled and well-trained observers.[3]

When AI is built to process genuine human self-disclosure, to find the patterns within it, and to reflect them back with precision, it does something no human being can reliably do: it removes itself from the equation. The insight does not come from the AI. It is surfaced from what the human actually said. The AI is not generating a portrait. It is assembling one from material that was always there, but that no individual human listener could hold in full simultaneously.

"The AI does not decide what is true about you. You decide that, in what you say. The AI's job is to make sure nothing you said gets lost."
Dr. David Benson, Founder of ReLoHu

The design choice that determines everything

The difference between these two paths is not in the technology. The same large language model, the same infrastructure, the same computational power, can produce either outcome. What determines the direction is the design intention of the people building the system.

AI built to maximize engagement produces dependency. It learns what keeps you on the platform and feeds you more of it. The goal is your attention, and the means is your comfort. Over time, a system optimized this way makes you smaller: more certain of what you already think, less capable of sitting with ambiguity, and less interested in encountering anything that doesn't confirm the self-image the algorithm has constructed for you.

AI built to serve human self-knowledge produces the opposite. It is not trying to keep you engaged. It is not trying to make you comfortable. It is trying to show you what is actually there, which sometimes means surfacing things that are uncomfortable, inconvenient, or unfamiliar. A system built this way, paradoxically, makes you larger: more aware of your own complexity, more able to hold contradiction, and more capable of genuine encounter with others because you know yourself more clearly.

The choice between these paths is not a technical one. It is an ethical one. And it is the choice that every person building AI-driven systems is making, consciously or not, right now.

What this means practically

The AI that is making us less human is not hard to identify. It is the one that tells you what you want to hear. It is the one that gets easier to use the more you use it, because it has learned to confirm your preferences. It is the one that makes you feel seen without requiring you to be honest. It is the one whose output you could have predicted before you started.

The AI that makes us more human is harder to build and less immediately satisfying to use. It requires a human being to actually disclose something real. It produces output that sometimes surprises the person reading it, not because the AI invented something, but because it assembled something the person said but hadn't quite seen all at once. It asks more of the person using it than the first kind does. And it gives more back.

The question worth asking of any AI system that touches your inner life is simple: is this showing me something true about me, or is it showing me something I was already willing to see? The answer to that question tells you which path you are on.

ReLoHu was built on the premise that AI used as a lens, rather than a mirror, is one of the most powerful tools for human self-knowledge ever developed. The methodology was designed deliberately to surface what is actually there, not what is comfortable. That distinction is the whole thing.

References

  1. Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. Basic Books.
  2. Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press.
  3. Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.

See what AI built to serve you looks like.

Read a complete Terrain Map before you decide. Two real maps, shown in full.
