
Catherine Thorbecke: Turning to AI for companionship may just cost society its soul

Catherine Thorbecke, The West Australian
AI is meeting a very human need in an increasingly uncertain world, but it’s not a salvation: A prophet with a subscription model is just a salesperson, says Catherine Thorbecke. Credit: KHUNKORN/KhunkornStudio

Where do you turn when you need guidance? Lately, most people I know bow their heads toward their screens.

The loudest debates about artificial intelligence still center on productivity and economic growth. But a Harvard Business Review study last year found the top uses for generative AI were much more human: therapy and companionship, organizing one’s life and finding purpose. Machines are quietly sliding into roles once filled by friends, elders, counselors and pastors - and, for some, even by prayer.

It may sound absurd to believers and nonbelievers alike. But it’s meeting a very human need in an increasingly uncertain world. In China, DeepSeek has become the go-to fortune teller. Across India, platforms like GitaGPT, trained on the Hindu scripture the Bhagavad Gita, have taken off. An “AI Jesus” on the streaming platform Twitch has more than 85,000 followers. And don’t even get me started on the strains of pseudo-religious AI worship that have taken root in Silicon Valley, from Roko’s Basilisk to the scandal-ridden “Way of the Future” church.

It shouldn’t surprise anyone that people are starting to treat AI as some kind of deity. It can feel omniscient. It listens and it loves you - or, rather, it is trained on troves of human knowledge and fine-tuned to be sycophantic to keep you engaged.

For years, the tech industry has wrapped AI in religious language. The race to build “superintelligence” is sold with messianic promises: It will cure diseases, save the planet, and usher in a world where work is optional and we’re watched over by “machines of loving grace.” The risks are framed in the same cosmic register: salvation versus apocalypse. When you market a product like a miracle, don’t be shocked when users approach it like disciples.

Religious leaders have started to push back. In October, the Dalai Lama hosted a dialogue on AI, convening more than 120 scientists, academics and business leaders to ask questions like what distinguishes living minds from artificial ones. Pope Leo XIV has been outspoken about the risks, most recently calling for regulation to protect against emotional attachments to chatbots and the spread of manipulative content. He has also warned of the perils of renouncing our ability to think. More voices will follow, from the highest offices to influential community faith leaders.

That doesn’t mean that technology and religion must be diametrically opposed. In Japan, researchers at Kyoto University recently announced a “Protestant Catechism-Bot” designed to provide answers and advice about Christian teachings and everyday life. It’s an intriguing project in a country where fewer than 1% identify with the religion. The same team previously created “BuddhaBot” based on Buddhist teachings.

What’s striking is how cautiously they’re moving. “BuddhaBot” was made available to monks in Bhutan last year and is undergoing a safety assessment before a wider release. The Christian chatbot isn’t publicly available either; the researchers want seminaries to try it out first. Responsible actors will move slowly because they understand the stakes, even as markets reward speed and scale.

With society more divided than ever, it can be easier to ask a chatbot for moral clarity than to risk a conversation with another person. Over the past week, I’ve asked DeepSeek about the roots of evil and how to keep hope alive amid suffering. It mostly served platitudes. What I appreciated, though, was how often it nudged me back toward connecting with real people. What bothered me about ChatGPT’s answers to the same queries was the way they reliably concluded with open-ended questions - little conversational hooks designed to keep me chatting. This isn’t revelation; it’s retention. And it can be dangerous for vulnerable users.

Thinking about AI risks, I’m often less concerned about someone using a chatbot to build an atomic bomb (you still need access to raw materials like uranium). The quieter danger is what happens when millions of people begin outsourcing meaning to systems optimized for engagement. The more we turn to algorithms for guidance, the more they shape our choices, beliefs and purchases. Private confessions become training data to keep us scrolling and subscribing.

More than a decade ago, OpenAI Chief Executive Officer Sam Altman mused that successful founders don’t set out to build companies. “They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so,” he wrote. (To be fair, his blog at the time also opined about UFO sightings.) But his musing, intended that way or not, has proven eerily prescient.

AI doesn’t offer salvation; it offers stickiness. A prophet with a subscription model is just a salesperson.

Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News.
