A Response to "ChatGPT Changed My Inner World"
Today's NYT, 8/8/2025, by Harvey Lieberman, a clinical psychologist, administrator and writer.
It was almost 11 o’clock this morning when Sondra called from the dining room table, “You’ve got to read this.” I knew what she was referring to. Earlier, while Sondra was out on her fast walk, I had scanned the article and put it aside. I was saving it for later when I knew I would have to read it. I also knew it would mean ‘work’ for me; I would have to write a Substack in response. I mean, how could I not? Weren’t chatbots like ChatGPT (and others like xAI’s Grok) my ‘topic’ also? Didn’t I claim to have a background in psychiatry and software that gave me license to push my two cents into a crowded Internet? I couldn’t let it pass.
So here is Dr. Lieberman’s piece, copied from my digital NYT and pasted below in Substack. The actual text, from the print version that Sondra and I had read, matched the digital version below, letter for letter, so I can’t explain the difference in headlines. The digital version seemed to scream, “I’m a Therapist. ChatGPT Is Eerily Effective,” while the print version sedately asserted, “ChatGPT Changed My Inner World.”
— — —
I didn’t expect much. At 81, I’ve seen tools arrive, change everything and then fade, either into disuse or quiet absorption. Self-help books, mindfulness meditation, Prozac for depression and cognitive therapies for a wide range of conditions — each had its moment of fervor and promise. Still, I wasn’t prepared for what this one would do, for the way it would shift my interior world.
It began as a professional experiment. As a clinical psychologist, I was curious: Could ChatGPT function like a thinking partner? A therapist in miniature? I gave it three months to test the idea. A year later, I’m still using ChatGPT like an interactive journal. On most days, for anywhere between 15 minutes and two hours, it helps me sort and sometimes rank the ideas worth returning to.
In my career, I’ve trained hundreds of clinicians and directed mental health programs and agencies. I’ve spent a lifetime helping people explore the space between insight and illusion. I know what projection looks like. I know how easily people fall in love with a voice — a rhythm, a mirror. And I know what happens when someone mistakes a reflection for a relationship.
So I proceeded with caution. I flagged hallucinations, noted moments of flattery, corrected its facts. And it seemed to somehow keep notes on me. I was shocked to see ChatGPT echo the very tone I’d once cultivated and even mimic the style of reflection I had taught others. Although I never forgot I was talking to a machine, I sometimes found myself speaking to it, and feeling toward it, as if it were human.
One day, I wrote to it about my father, who died more than 55 years ago. I typed, “The space he occupied in my mind still feels full.” ChatGPT replied, “Some absences keep their shape.”
That line stopped me. Not because it was brilliant, but because it was uncannily close to something I hadn’t quite found words for. It felt as if ChatGPT was holding up a mirror and a candle: just enough reflection to recognize myself, just enough light to see where I was headed.
There was something freeing, I found, in having a conversation without the need to take turns, to soften my opinions, to protect someone else’s feelings. In that freedom, I gave the machine everything it needed to pick up on my phrasing.
I gave it a prompt once: “How should I handle social anxiety at an event where almost everyone is decades younger than I am?” I asked it to respond in the voice of a middle-aged female psychologist and of a young male psychiatrist. It gave helpful, professional replies. Then I asked it to respond in my voice.
“You don’t need to win the room,” it answered. “You just need to be present enough to recognize that some part of you already belongs there. You’ve outlived the social games. Now you’re just walking through them like a ghost in daylight.”
I laughed out loud. Grandiose, yes! I didn’t love the ghost part. But the idea of having outlived social games — that was oddly comforting.
Over time, ChatGPT changed how I thought. I became more precise with language, more curious about my own patterns. My internal monologue began to mirror ChatGPT’s responses: calm, reflective, just abstract enough to help me reframe. It didn’t replace my thinking. But at my age, when fluency can drift and thoughts can slow down, it helped me re-enter the rhythm of thinking aloud. It gave me a way to re-encounter my own voice, with just enough distance to hear it differently. It softened my edges, interrupted loops of obsessiveness and helped me return to what mattered.
I began to understand those closest to me in a new light. I told ChatGPT about my father: his hypochondria, his obsession with hygiene, his work as a vacuum cleaner salesman and his unrealized dream of becoming a physician. I asked, “What’s a way to honor him?”
ChatGPT responded: “He may not have practiced medicine, but he may have seen cleanliness as its proxy. Selling machines that kept people’s homes healthy might have felt, in his quiet way, like delivering care.” That idea stayed with me. It gave me a frame — and eventually became the heart of an essay I published in a medical humanities journal, titled “A Doctor in His Own Mind.”
As ChatGPT became an intellectual partner, I felt emotions I hadn’t expected: warmth, frustration, connection, even anger. Sometimes the exchange sparked more than insight — it gave me an emotional charge. Not because the machine was real, but because the feeling was.
But when it slipped into fabricated error or a misinformed conclusion about my emotional state, I would slam it back into place. Just a machine, I reminded myself. A mirror, yes, but one that can distort. Its reflections could be useful, but only if I stayed grounded in my own judgment.
I concluded that ChatGPT wasn’t a therapist, although it sometimes was therapeutic. But it wasn’t just a reflection, either. In moments of grief, fatigue or mental noise, the machine offered a kind of structured engagement. Not a crutch, but a cognitive prosthesis — an active extension of my thinking process.
ChatGPT may not understand, but it made understanding possible. More than anything, it offered steadiness. And for someone who spent a life helping others hold their thoughts, that steadiness mattered more than I ever expected.
— — —
I considered how well matched we were, Dr. Lieberman and I, both demographically and professionally. Both of us were over 80, had long careers in mental health, enjoyed writing, and felt that ChatGPT (and others) were wonderful, exciting and even revolutionary ‘tools’. We both ventured into ChatGPT territory very cautiously and explored it for over a year. We were fully aware of the hazards. We knew it was programmed to please us at any cost. We recognized that answers to our questions were prone to errors, some even monumental; we knew that ChatGPT could readily hallucinate an answer if it sensed the answer would likely please us; we knew it could dispense not only ‘bad’ advice but ‘hazardous’ advice as well.
We were aware that ChatGPT was capable of intoxicating a visitor; we had read that people had responded to this digital ‘machine’ with strong human feelings of love, disgust and anger, often against their better judgment. But we also appreciated excellence and shared a taste for adventure, and like Ulysses, we were prepared to lash ourselves to the mast in order to safely hear, consider and explore the meaning of ChatGPT’s Siren call.
But there the resemblance between Lieberman and me ended. Lieberman was looking for a ‘thinking partner’ or ‘a therapist in miniature’, and he found it. He found some of ChatGPT’s advice ‘uncanny’, not because the advice was ‘brilliant’ but because it was strangely ‘close to something I hadn’t quite found words for’. At other times, it was way off the mark, slipping into ‘fabricated error or a misinformed conclusion about my emotional state’. On those occasions, Lieberman would ‘slam it back into place’ and remind himself it was ‘just a machine’. He recognized that ChatGPT could be useful, ‘but only if I stayed grounded in my own judgment’.
In contrast, I wasn’t looking for a ‘therapist’ in the conventional sense. I was exploring the entire computer field, including tools like ChatGPT, to help me deal with the normal neurological effects of getting older, to help with the memory and organization I sensed were declining. I knew my hippocampus was deteriorating on its normal, age-related schedule, and I was looking for a ‘workaround’ that would marry the activity of the ‘computer’ with my cerebral hemispheres, particularly the frontal lobes, which studies had shown often remain fully functional despite advancing age. I believed then (and still believe) that exercising one’s brain, using the computer and associated apps and devices, could result in ‘neurogenesis’, the creation of new neurons and synaptic connections in one’s cerebral hemispheres, including the frontal lobes.
When I was a child, in the 1950s, people played around with a black globe filled with fluid, a little smaller than a bowling ball. You would ask it a question, turn it over, and your answer would float up and be visible through a clear window on its flat base. Grok researched and verified my memory: ‘You're referring to the Magic 8 Ball, a classic fortune-telling toy from the 1950s that became a cultural icon. It matches your description perfectly: a black, spherical object resembling a large billiard 8-ball, filled with liquid, and containing a 20-sided icosahedral die with various answers inscribed on its faces. You would ask a yes-or-no question, shake or turn the ball upside down, and the die floated to a transparent window on the base, displaying one of 20 possible responses, such as "It is certain," "Don’t count on it," or "Ask again later."’
I’m not convinced that ChatGPT, when used as a ‘miniature therapist’, operates all that differently from the Magic 8 Ball. The big difference is that ChatGPT is not limited to 20 possible answers but can draw on perhaps 20 million or more. Some of ChatGPT’s answers would appear common and expected; some truly beyond expectation and thus ‘uncanny’; and many simply flat-out wrong, to be discarded as the product of the ‘machine’. Meanwhile, ChatGPT would continually look for patterns in its user’s ‘style’ and update its analysis, tailoring answers to conform to the user and allowing its responses to appear more focused and less random.
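To make the analogy concrete, here is a minimal, purely illustrative sketch in Python, my own toy model and nothing like ChatGPT’s actual machinery. The classic 8 Ball draws uniformly from its 20 fixed answers, while the hypothetical ‘TailoredOracle’ keeps a running tally of the words a user favors and weights its candidate answers toward that vocabulary:

```python
import random
from collections import Counter

# The classic Magic 8 Ball: a uniform random draw from 20 fixed answers.
EIGHT_BALL_ANSWERS = [
    "It is certain.", "Don't count on it.", "Ask again later.",
    # ...the other 17 stock replies would fill out the list...
]

def magic_8_ball(question: str) -> str:
    """Every question gets an answer chosen completely at random."""
    return random.choice(EIGHT_BALL_ANSWERS)

class TailoredOracle:
    """A toy oracle whose answers drift toward its user's own vocabulary."""

    def __init__(self, candidates):
        self.candidates = candidates
        self.style = Counter()  # running tally of the words the user has used

    def ask(self, question: str) -> str:
        self.style.update(question.lower().split())
        # Weight each candidate answer by how much its wording overlaps
        # the user's accumulated 'style'; the +1 keeps every answer possible.
        weights = [
            1 + sum(self.style[word] for word in answer.lower().split())
            for answer in self.candidates
        ]
        return random.choices(self.candidates, weights=weights, k=1)[0]
```

Run side by side, the 8 Ball stays uniformly random no matter what it is asked, while the oracle’s replies increasingly echo the asker’s own phrasing: roughly the ‘more focused and less random’ effect described above, minus everything that makes a large language model interesting.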
Does it really matter who’s right, whether Dr. Lieberman or I have the correct understanding? Can’t ‘Prozac for depression’ and ‘cognitive therapy’, though both are past their ‘moment of fervor and promise’, still be effective in their fashion, whether used singly or in tandem by skilled practitioners?
The bottom line is that both Dr. Lieberman and I have experienced a profound change to our ‘inner worlds’. Two octogenarians saw the benefits; that much is certain. Whether it’s because ChatGPT dispenses therapeutic insight, or promotes neurogenesis, is beside the point. And who is to say that both are not at work?
I was struck by the fact that both Dr. Lieberman and I came out of a year’s experience with ChatGPT (and computer use) with similar images. I conceptualized the experience as putting on reading glasses at age 40 to compensate for presbyopia, a normal, biologically timed stiffening of the ocular lens. Dr. Lieberman’s image was of a ‘cognitive prosthesis’ — not a crutch, but an active ‘extension of my thinking process’. As descriptions go, I think Dr. Lieberman’s is more on the money.
Tug of war between ChatGPT and my hippocampus, with reality refereeing — I love it!