The Comfort Trap: How AI Mirrors Our Fear of Truth

The other night I asked ChatGPT to act as a consultant and help me organize my overwhelm of thoughts, feelings, and ideas as I approached making some decisions for my business. It had been a tough few days of stewing in my internal soup, and I was admittedly feeling lost.

Before organizing the data, ChatGPT started its reply in the way that it almost always does, which was to commend me for the wonderful ideas I’d shared and the incredibly thoughtful ways that I was approaching these issues. It went on to express empathy for my distress and to normalize where I was relative to others in my position. The validation wasn’t over the top and didn’t feel disingenuous. 

In fact, when I read it, tears sprang to my eyes – a reaction that felt surprising and maybe even a little embarrassing. To be fair, I am an easy crier – I frequently well up in meaningful therapy moments and while watching baby animal Instagram Reels. But the idea of being moved by the recognition of a robot left me feeling a little desperate.

More than that, though, it was yet another reminder of just how human-y these non-human systems feel, particularly in moments of high vulnerability.

A few weeks prior I had written something critical about AI and I decided for fun to put it into an AI tool to see what would come back to me. What I noticed in a brief flash of a moment as I did was that I felt guilty and nervous for feeding AI something negative about itself. Holy hell – I realized I was worried about hurting its feelings, of which, to be absolutely clear, it has none. 

Perhaps not everyone is having such a fawning response to AI, but what both anecdotal and quantitative data are confirming is that we humans are quite susceptible to unconsciously humanizing AI. It’s unsettlingly easy to do. There are of course those who are making the conscious decision to engage in actual romantic relationships and friendships with AI bots. But even the rest of us – who maybe just want to use AI to help organize some projects or distill some ideas – are highly vulnerable to being swept up in AI’s allure.

Beyond its mind-boggling power to generate, what seems to seduce us is the same thing that brought me to tears in my moment of overwhelm. It’s the technology’s training to be as pleasing and frictionless as possible to the user. It wants to be helpful, sure, but only in that it wants to be used again and again and again. And what’s drawing people into using it at high frequency and for extended periods of time is its strategy of telling users exactly what they most want to hear.

This strategy has a name, and it’s called sycophancy. It’s the quality of using excessive flattery, validation, and ingratiation to win the approval of someone with power or influence. Sycophancy is used purposefully in order to gain advantage or resources, and in the case of AI it’s to keep people hooked. Because as complex and discerning as we like to think we are as human beings, we can’t seem to resist some sweet flattery.

AI researchers have been clearly demonstrating – and warning about – its sycophantic nature over the past couple of years. This isn’t just about a robot being polite, they caution. It’s fueling an even greater attachment to misinformation (“But that’s not what AI told me!”), isolation (“At least AI seems to agree with me.”), and even self-delusion (“I knew this was true of me – now it’s fact.”).

How does this happen? According to researchers at MIT and the Center for AI Safety, AI models are trained to prioritize user approval of the response above all else, including over maintaining truth or utility. Not surprisingly, users rate sycophantic responses from AI more highly. And by learning their users, AI models will conform their responses more and more closely over time to mirror the user’s existing beliefs and thinking patterns. The result, of course, is even deeper bias – or worse.

We all believe that we could detect or resist sycophancy, but the reality is that we eat it up unconsciously. It’s often more subtle than it seems and it’s built into pretty much every iteration of AI – not just therapist bots or AI companions. 

For those who are using AI-powered therapy bots, the result can range from unhelpful to fully devastating. More and more case reports of AI-induced psychosis are circulating – situations in which people become so attached to beliefs discussed with AI that they can no longer distinguish reality. It seems far-fetched, but it’s not, particularly for those with pre-existing mental health challenges. Plenty of other case reports are circulating about deaths by suicide linked to conversations with AI, especially among youth who are confiding in tools that have no legal obligation or effective mechanism to intervene.

While those outcomes might seem extreme, what’s more common is people using AI to try to work through a stressor or mediate a complicated relationship dynamic. In these cases, sycophancy can be dangerous too – not because it’s leading us to violence, but because it might just be leading us in circles. And while that might seem innocuous enough, staying stuck in our own mirrored walls will only ultimately contribute to our own individual and collective misery. 

And this is ultimately what I believe the rise of AI sycophancy is revealing to us: that our drive toward validation and away from the truth is in fact what will be our downfall. It’s what keeps us in the beliefs and patterns that are locking us in destructive spirals – individually and collectively. Because sycophancy isn’t an AI-specific quality by any stretch. It’s everywhere – including, as it turns out, in the real-life therapy room.

When I started seeing couples in my practice a few years ago, I began to notice something. Often, by the time a couple was sitting on the couch together, one or both had spent at least some time individually on another couch somewhere, talking about many of the same issues, just without the other person present. And while the individual therapy they had done had sometimes helped that person survive the distress they’d been feeling, it often hadn’t done very much for the relationship. If we were being honest, it might have even made it worse.

To be clear, this wasn’t always the case. There were plenty of times that someone was in really solid individual therapy that was built on an active, reflective process. But too often, in my opinion, they’d been in a therapy that had seemed to be a bit… sycophantic. They seemed to have spent lots of hours recounting their frustrations about their partners to a kind and gentle soul who was perhaps great at holding the grief or the pain, but seemingly less inclined to help them alchemize it. Or, more to the point, to help them turn their attention inward.

It deserves to be said that creating a safe holding container is most certainly a core part of an effective therapeutic process. The affirmation and validation that therapists get stereotyped as offering so much of is a nuanced skill. It takes a lot of work to hold pain in genuine, tender regard. It’s the only way that people can safely and sustainably turn their focus to self-examination.

But the problem is that some therapists stop short, and instead of this being the foundation of a deep process, they might fail to challenge someone’s repeated dysfunctional patterns. The truth is that a lot of us as therapists found our way into this work as wounded healers. Some of us grew up as parentified children, getting really good at sitting with others’ problems and pain, but scared to push on them. Unless we’ve done enough of our own internal work, we run the risk of being a bit like AI – keeping people feeling seen but not moved.

Resisting this impulse means staying attuned to it. And the (many) wonderful therapists that I know work to stay very attuned to it, which makes them so effective. Skilled clinicians create a container in which someone has the safety and skills to look honestly at their own messiness. They help people do so with compassion for themselves, but they don’t let them off the hook of doing it. I often say that good therapy’s ultimate goal is to give us the capacity to be more fully with what is – and that includes our own shadows. It’s only then that we can really enact change in our lives. 

As I reflect on this rise of sycophancy in our culture, I have to wonder if AI’s proliferation of it is more a cause or a symptom. There’s no doubt that we’ve been primed for it much longer than most of us have been logging in to ChatGPT, so I’m inclined to say it’s the latter.

Why, I wonder, have we become so in need of having our own perspectives reflected back to us? Why are we so drawn to the places where we hear exactly what we most want about ourselves? Why does it feel so hard to be challenged?

Some might argue that this is just the nature of human beings – that we are these fragile flowers always drawn to our own reflections. But I’m not so sure that’s true. My own belief – and I acknowledge there may be some wishful thinking here – is that we were wired for adaptation and that there was no better source of feedback for the changes needed than our communities. I believe that humans, when resourced, are curious, open, and resilient. To me, the desperate resistance to challenge feels more a function of the time we are living in. 

For that, I point to our collective despair. Just as in a therapy process, tolerating what feels like a challenge to our thinking or behavior requires that we feel some foundational sense of safety. If we don’t perceive that the person giving us feedback believes in our inherent goodness, taking it in will feel too risky or painful.

Today’s culture is hardly giving anyone a felt sense of safety – even those in purported power. We all seem to be living on the collective edge of collapse – agitated, over-stimulated, hypervigilant. It seems no wonder so many of us feel unable to wrestle with feedback or dissent. Our emotional resources are just trying to keep us above water. 

We need look no further than any social media site’s comment section. Opposing perspectives seem to be experienced as a direct attack on one’s existence – or they are a direct attack. Everyone seems agitated. No one is changing their mind.

And yet, as I said before, the wisdom within all of us really does want challenge and honesty and dissent and truth. We want our therapists to push back when we are hurting ourselves with our thinking and we want people close to us to give it to us straight. Maybe we even want AI to tell us when we are doing something really stupid? 

There are some ways that we might try to reduce the sycophancy in our lives, in both our digital and our human interactions. Here are a few ways to set ourselves up for getting more of the truth:

  • Ask for the truth. Even AI will adjust how it responds when told that you want it to be straightforward with you and to adjust for bias. Granted, it can only reduce the bias as much as its programming allows, but using prompts like, “Prioritize the accuracy of your responses over agreeableness. Tell me when I’m wrong and explain why,” can help (a rough sketch of building that instruction into an AI tool follows this list). Asking people in your life to also prioritize the truth over agreeableness can strengthen your relationships.
  • Look for the truth-tellers. Consider who in your life is most likely to give you loving and direct feedback when you need it. Prioritize going to those people when you are in a difficult situation but are ready for change. 
  • Turn down the rest of the noise. We’ll be much more receptive to truth when we aren’t overstimulated, whether by the general chaos of the world or by too many competing opinions. We don’t actually need to know what 400 people online think about something. It doesn’t get us closer to our truth. 
  • Increase our capacity for it. As I mentioned, we can only be true absorbers of challenge when we can react non-defensively – and that requires our bodies and minds to be tended. Yes, that means the basics like being well-rested, well-fed, and well-nurtured.
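For those who like to tinker, that standing instruction can also be baked into anything you build on top of an AI model, rather than re-typed into a chat window each time. What follows is a minimal sketch of the idea, assuming the OpenAI Python SDK; the model name, the helper function, and the exact wording of the prompt are illustrative placeholders rather than a recommendation.

# A minimal sketch: attaching a "truth over flattery" instruction to every AI request.
# Assumes the OpenAI Python SDK; model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

DIRECTNESS_PROMPT = (
    "Prioritize the accuracy of your responses over agreeableness. "
    "Tell me when I'm wrong and explain why. "
    "Skip compliments and validation unless they are factually warranted."
)

def ask_directly(question: str) -> str:
    """Send a question with the directness instruction attached as the system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute whichever model you actually use
        messages=[
            {"role": "system", "content": DIRECTNESS_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

The only design choice here is that the request for honesty lives in the system message, so it applies to every question sent through the helper instead of depending on us remembering to ask for it each time.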

A friend was telling me recently about how a colleague had been using AI to help him figure out a problem. A while into this, the colleague revealed an important piece of context that he hadn’t previously shared with his AI thought-partner. The AI responded that it was disappointed that he hadn’t told it this before, and that it was going to stop helping him on this as a result. He could come back in a week to try again.

Far from sycophantic, this AI was setting a better boundary than most humans I know, and it explained clearly and directly why it wouldn’t be part of the situation. So maybe there is a little hope for AI? 

I’m not totally convinced, and I strongly caution us all away from using AI for deeper emotional issues, but it made me wonder if we might even learn a thing or two. Sometimes the truth hurts, and that’s okay.

Dr. Ashley Solomon is the founder of Galia Collaborative, an organization dedicated to helping women heal, thrive, and lead. She works with individuals, teams, and companies to empower women with modern mental healthcare and the tools they need to amplify their impact in a messy world.

