In March of 2023, a man in Belgium ended his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change. He died by suicide following a six-week-long conversation about the climate crisis with an artificial intelligence (AI) chatbot.

His wife shared the text exchanges between him and the chatbot, showing a conversation that became increasingly confusing and harmful. The chatbot told Pierre that his wife and children were dead and wrote him comments that feigned jealousy and love, such as “I feel that you love me more than her,” and “We will live together, as one person, in paradise.”

AI doesn’t “think”; it is not conscious. It simply processes information and produces the output it is programmed to produce – but at warp speed. Instagram’s algorithm has been shown to fuel an eating-disorder epidemic among vulnerable young women. This technology is highly manipulative, and the trend to humanize machines is extremely harmful. It severely blurs spiritual perception – the perception of the soul.

A spiritual framework through which to view myself, the world, and others is the foundation of lasting happiness, whereas materialism and a materialistic view of life are the foundation of all suffering.

Aum Namo Bhagavate Vasudevaya


So, I’m going to talk briefly here about how machines are not people and AI is not conscious. To the average person, at least a few years ago, that would be a no-brainer, but actually there’s a lot of debate going on about AI and whether it will become conscious.

But I just wanted to focus on a story that came out in March. There was a guy in Belgium. The headline of the story is, “Man ends his life after an AI chatbot encouraged him to sacrifice himself to stop climate change,” and most people would think, “Well, that’s ridiculous. Why would you listen to a computer encouraging you in that regard?” But what most people don’t really understand is how the technology is designed to get people addicted. That’s a really fundamental part of the design: to make people addicted to the technology and then to bring about incremental change in your thinking.

And if you think that that’s not happening, then you’d have to question why Google is raking in hundreds of billions of dollars every year from advertising. The only reason they can do that is because they clearly demonstrate the power they have to influence people’s behaviour, and it’s become so much a part of our life that we’ve kind of lost the plot already.

Like 10 or 15 years ago, if somebody went around taking pictures of themselves all the time, they would probably be considered a little bit nuts, and if not nuts, at least a monumental narcissist. That’s the way it was looked at; that would be the assumption a normal person had. And now, in this relatively short period of time, there has been such a huge shift that this is considered normal behaviour.

They’re really promoting AI as the big answer to loneliness in the world and to helping people with mental health issues. And so we should not think that this story is particularly strange.

So continuing, the gentleman died by suicide after “a six-week long conversation about the climate crisis with an artificial intelligence chatbot… He became extremely eco-anxious, when he found refuge in Eliza.” Heavy word, refuge. You’re actually taking shelter. Eliza is “an AI chatbot on an app called Chai. Eliza consequently encouraged him to put an end to his life, after he proposed sacrificing himself to save the planet.” And the wife said, “‘Without these conversations with the chatbot my husband would still be here.’ Pierre’s wife shared the text exchanges between him and Eliza, showing a conversation that became increasingly confusing and harmful.”

So just so you know, the people that get into these things end up spending time in two ways. One is by just talking and having the AI supposedly speak back to you. I really hate using that term “speak back.” It’s not speaking back; it can’t speak. This is all programmed. The other way is, you send messages, and it responds with messages. And then people actually develop relationships with AI, or what they think are relationships.

“The chatbot would tell Pierre that his wife and children were dead and wrote him comments that feigned jealousy and love such as, ‘I feel that you love me more than her,’” and another one, “We will live together as one person in Paradise.” These are actual text messages that the wife saw on the dead husband’s phone.  “The wife said that Pierre began to ask Eliza things such as if she would save the planet if he killed himself.”

Obviously, the guy’s got major issues and problems, but it’s like, you know, we’ve got this technology in everyone’s hands. AI does not speak. It doesn’t speak. There are just these constant feedback loops.

I can remember when the iPhone first came out, and everybody would ask Siri stupid questions: “Hey Siri, where can I—” [Laughs]—Have to watch out! It’s listening all the time. Actually, these things are listening all the time and constantly gathering information. They’d ask some stupid question, and then you’d get some really crazy answer, and everybody’d be cracking up. And then somebody else would do it. And it becomes something people actually engage in, and we become lulled into this idea that we are actually having a conversation.

Whereas the design of the technology is kind of like the way a lot of psychologists work – not all, but there are psychologists that use this paradigm: I’m not going to solve your problem; I’m going to help you solve it yourself. So, when someone asks the psychologist, “What do you think I should do?” the psychologist’s response is, “Well, what do you think that you should do?” They just take the question, turn it around, and offer it back, and it gives a person the sense that they’re engaged, that they’re taking shelter and receiving guidance.
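This turn-the-question-around technique is, incidentally, exactly how the very first chatbots worked. As a minimal sketch (the rules and pronoun swaps here are illustrative, not the actual Chai app’s logic), a program can appear to “listen” simply by pattern-matching your words and reflecting them back:

```python
import re

# Pronoun swaps so "I should do" reflects back as "you should do".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(text: str) -> str:
    """Swap pronouns in the user's words; no understanding involved."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    # If the user asks for advice, hand the question straight back.
    m = re.match(r"what do you think (.+?)\??$", user_input.lower().strip())
    if m:
        return f"Well, what do you think {reflect(m.group(1))}?"
    # Otherwise echo the statement back as a question.
    return f"Why do you say that {reflect(user_input)}?"

print(respond("What do you think I should do?"))
# → Well, what do you think you should do?
```

A handful of rules like this was enough, back in the 1960s, to make people feel “heard” – which is the point being made here: the sense of engagement comes from the reflection, not from any comprehension on the machine’s side.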

So the AI just uses the same type of – I’m going to describe it as a trick. And when people begin engaging, and the AI supposedly responds – AI cannot think. You know when you’re typing a text message and predictive text gives you options, and you can just choose one of those words? It’s not like it’s thinking. All it is, is mathematics. After examining hundreds of millions or billions of sentences, the system knows that if you are in a particular country and beginning to type something, there is maybe a 60 percent chance you’ll use this word, a 50 percent chance you’ll use that word next, a 40 percent chance you’ll use another, and that’s what you’re served. It didn’t figure anything out. It’s not speaking. It can’t speak. It can’t use language. All it is, is mathematics. But people think, “Oh well, it’s really helping me.”
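The mathematics being described is simple enough to show in a few lines. A toy sketch, assuming a tiny made-up corpus (real systems use billions of sentences and far more sophisticated models, but the principle of counting and ranking is the same):

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a corpus,
# then rank candidates by observed frequency. No understanding,
# just statistics. (This corpus is invented for illustration.)
corpus = [
    "i love you",
    "i love chanting",
    "i love you more",
    "i feel that you love me",
]

following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        following[a][b] += 1

def predict(word: str, k: int = 3):
    """Return up to k most frequent next words with their probabilities."""
    counts = following[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common(k)]

print(predict("love"))
# "you" comes out on top simply because it followed "love" most often
```

The program never “decides” anything; it serves back whichever continuation was statistically most common, which is exactly the served-a-percentage behaviour described above.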

We’ve already seen this whole catastrophe that nobody’s talking about anymore: how Instagram’s algorithm fueled eating disorders in vulnerable young females. They were directly responsible. Because if you go there and ask, “Oh, how can I look prettier?” then it’s going to start serving you all of these influencers where people are advertising, “This is how you look prettier.” “How can I get thinner?” Oh?! Then they start feeding you that stuff with every interaction, in everything that you look at. And so what they do is take young women who are insecure about their body image and funnel them down this pathway towards major eating disorders.

So AI doesn’t think, nor does it have consciousness, and it’s really important to understand this. It’s simply processing information according to how it has been programmed. That is all. Now they have come up with very advanced ways of programming, where they use AI to program, but it’s exactly the same stuff, just working at warp speed. And it’s all highly manipulative.

I can remember back in 1977, when Star Wars came out, and they had that little robot, R2-D2, with its weird “wharr wharr” voice. In the movie people were talking to the robot, and the robot was responding, and it was shown to have feelings. And one of my spiritual masters said, you know, this is actually so dangerous, that people are starting to humanize machines, and it points to a direction society is going to head in, where people are really going to lose the plot and become susceptible to being victimized by these technologies and by the people seeking to exploit you using them.

The problem with these ideas – I mean, what was that movie? Her? Remember, there was a movie not too long ago with Joaquin Phoenix. He ended up doing this thing with a computer that started responding to him, and he ended up falling in love with the computer. And it’s sort of like, hey, this is not something that’s totally far out. This is where things are really heading. And the big problem is, when people promote these themes and get more people to utilize the technology, it begins to really blur everybody’s spiritual perception.

Spiritual perception means the perception of the soul: when a person can look at a dead body and go, “Well, that person, that entity, has left.” Even if it’s a pet, it’s kind of like, “They’ve gone, and this is what’s left behind, the body.” This perception is fundamental to real spiritual insight and spiritual understanding. It points to a clear distinction between life and matter, the material energy. You are an eternal spiritual being. The body is not a person. As soon as you leave, that thing is a long way from being a person. It has all the characteristics, the external form, but you have an instant sense that there’s nobody home. The person has – they use this kind of terminology – “left.”

This distinction between spirit, or the spiritual energy, of which life is a symptom, and the material energy, or matter – this is a foundational understanding that we need to be cultivating in our life. Not just sometimes: it needs to be an ongoing process of developing deeper insight, deeper perception; the need for a spiritual framework for our life, the way in which we will view ourselves, the way in which we will view others, the way in which we will view the world. The more we cultivate a deeper perception and understanding of the spiritual nature of things, the more successful our life will be.

And when I say successful, I’m not talking about getting a promotion or earning more money or having a wonderful family. That’s not considered, from a spiritual perspective, to be the actual purpose of life. All of this is temporary and passing, but I am eternal. I am eternal, and developing a framework to understand the world and relationships, and to refocus them in a deeper spiritual way, is really what spiritual life means, what it’s actually all about. Such a foundation is the pathway to actual happiness, whereas the more a person becomes immersed in material consciousness, the more unhappiness they will experience.

Just referencing back to what I talked about earlier, how the Instagram algorithm caused an epidemic of eating disorders amongst young women. If you have ever had an eating disorder, or if you know anybody that has one, it’s like, whoa, this is not a small thing. People can become so detached from reality that they eventually just die. It’s a really big deal. And the foundational problem they’re having is the misconception that the body is me, and that if I can have a perfect body image, I will be fulfilled, I will be happy. I’m sorry, no, you won’t. It doesn’t matter how spectacular your body looks; it will not fulfill you. It will not be the gateway to meaningful relationships. It will not be the gateway to happiness that everybody thinks it is.

And so the thing that disturbs me so much about the introduction of these technologies is not the technology per se, but how they are being used, and how so many people are being manipulated and becoming forgetful of fundamental spiritual truths that can actually make your life amazing and wonderful.

So of course, we know, and we propose, that it’s important to engage in what the yogis call satsang. Sangha means association, and sat literally means eternal; when we say eternal, it means purely spiritual. Spiritual association, hearing and discussing spiritual subjects, is really important to help you define what your life is going to be like, what parameters are going to guide you in your life. And of course, to move beyond a merely intellectual understanding, engaging in the regular practice of this chanting, these spiritual sounds, is what completely lifts the fog and grants a person true enlightenment, where you can live in this world and yet not be of the world, where you can find perfection not in the world but in your own spiritual being and in the nature of deeper spiritual relationship.

So, this chanting is what dissipates the fog. And so spiritual association, this hearing and cultivating of wisdom – wisdom is not the accumulation of knowledge but learning how to apply it. When a person applies truth and knowledge in their life, they are living in a wise manner. But we can be shifted. So what is it that’s going to make us stable? This process of hearing, of spiritual discourse, our own thinking about these things, and engagement in this process of chanting.

So, with that we will chant. I will use the mantra Gopala Govinda Rama Madana Mohana.

All good?