About the Episode
Trigger Warning: This episode briefly discusses suicide.
AI is changing the way we approach mental health, but what does that mean for our relationships, emotions, and even therapy itself?
In this episode, Dr. Julie Lopez sits down with Dr. Rachel Wood, a licensed counselor and leader in cyberpsychology, to explore the growing role of AI in mental health. They discuss how AI is being used for therapy, companionship, and self-reflection, the potential benefits and risks of synthetic relationships, and the one crucial step to stay empowered and safe when interacting with AI.
This conversation offers insight, clarity, and practical guidance for anyone curious about how technology is reshaping mental well-being.
Episode Guest
Dr. Rachel Wood has a PhD in cyberpsychology and is a licensed counselor. Rachel speaks fluently about mental health and the future of synthetic relationships. As the founder of the AI Mental Health Collective, she cultivates a community of therapists focused on AI’s impact both in practice and in society. Dr. Wood enjoys her work as a speaker, workshop facilitator, strategic advisor, and consultant.
More about Dr. Rachel:
Watch the episode:
Episode Transcript:
Introduction
Dr. Julie Lopez: Hi everyone, my name is Dr. Julie Lopez, and I’m your host for Whole by Design. On this week’s episode, we will be diving into the fascinating, increasingly pervasive, and important conversation about the relationship between AI and mental health.
You should stay until the end, when Dr. Rachel Wood, our guest today, will be sharing the one critical step that you should be taking and keeping top of mind in order to be fully empowered in your relationship with AI.
Today, I’d like to welcome Rachel to the stage. She is a leader in the conversation about AI and mental health. Before we jump in, don’t forget to check out our website www.vivapartnership.com for free and low-cost resources that can change your life.
Thank you so much for joining us today, Rachel. She told me I could call her Rachel, and Rachel, you’d better call me Julie.
Dr. Rachel Wood: It’s so good to be here, Julie.
Thank you for inviting me. I’m looking forward to our conversation.
From Counseling to Cyberpsychology
Dr. Julie Lopez: Of course, and I didn’t get to mention that Rachel has been really at the forefront of this conversation and has garnered a lot of interest in this fascinating and complex relationship between mental health and AI, and she started an awesome group called the AI Mental Health Collective.
I’m really impressed with the way that you’ve brought leading voices together and the way you’re deepening conversations. I’d love to hear a little bit about your journey and how you came to be at this place with what is obviously all around us in the world today.
Dr. Rachel Wood: Julie, thank you.
Okay, my journey. I am a licensed counselor. I’ve always been in that world of helping people.
Really, as I was seeing the unfolding of AI, even long before 2022 when ChatGPT and generative AI hit the scene as hot stuff, I was really looking ahead and thinking, AI is going to change and shift the relational bedrock of society. And so, I want to be a part of being at the forefront, the cutting edge of that. I want to be a voice in shaping that.
I want to be a part of that. That really led me to go back and get a PhD in Cyberpsychology.
Dr. Julie Lopez: Amazing.
Dr. Rachel Wood: Most people are like, “What’s cyberpsychology?” It’s really the scientific study of the digitally connected human experience. Think about that. That’s everything.
So much of our world is connected digitally. My specialty and my area of research has been AI, the way that it affects mental health, and our connection with AI.
Dr. Julie Lopez: I love that.
Is AI Good or Bad?
Dr. Julie Lopez: I did not know that fascinating fact about your PhD, but it makes so much sense, and it explains why you speak with such clarity and have such fascinating things to share about the complex relationship between AI and mental health.
That’s actually one of the things that I love about your work: you’re not taking a hard line of “AI is bad” or “AI is good.” Just as our human system is super complex, so, too, is this opportunity for what AI can provide, and equally complex are some of the dangers that can arise.
And I’d love to hear you share a little bit more about that with our audience.
Dr. Rachel Wood: I love you bringing up this point, Julie, because it’s so important that our conversation is really encased in nuance. This is not a black and white topic, and this is evolving so rapidly, as we all know, that we need to bring in a nuanced perspective to how we think about it, how we approach it, and then looking down the line of what it’s going to mean in terms of implications for society 5, 10, 20 years down the road.
I do work to try and stay AI centrist, because there are benefits, and there are also very real, potent harms that we have seen and we understand. You know, AI is here, it’s embedded in almost everything, and so it’s not a very helpful approach to just be anti-AI, which I’m not, because it’s only going to become more prevalent and prominent in different ways. Therefore, I really try to look at both the positives and the negatives of it.
Is AI the Future of Mental Health?
Dr. Julie Lopez: I love that, and obviously, there are a bunch of positives. I know, as someone who has worked for over 30 years in the mental health industry, that there are a lot of clinicians who are anti-AI, who say, oh my gosh, it’s dangerous, because they know.
They know through their training, they know through the work, all the complicated layers, and to use your word, the nuance of what it means to be human and how you facilitate change. Everyone goes through this extensive training because it’s not a cookie-cutter, one-size-fits-all kind of engagement, right? We’re all snowflakes, so unique, but I think there are a lot of positives that can come out of AI, and I’d love to hear some of your thoughts on that.
Dr. Rachel Wood: Yeah, and just to kind of bounce off for a second, what you said is this really elicits a lot of strong feelings in people, Julie.
It’s like you talk about AI, and most people have some sort of strong opinion of it, and so I just want to kind of honor that in people, that wherever there are emotional responses to this, that’s fine, and just to respect that.
So part of what I try and do within the AI Mental Health Collective, I have a clinician circle where we are really diving in and thinking deeply about this because, as you said, there are benefits that are happening, particularly in terms of what we can frame as ethical AI. There are lots of companies building AI.
Some are doing it with safety in mind, and some are not, but let me frame this for a second, Julie, with a pretty interesting study that came out recently that says that AI might be the largest provider of mental health support in the U.S.
Dr. Julie Lopez: Amazing.
Dr. Rachel Wood: It’s staggering. I mean, kind of just jaw-dropping to think of that.
A lot of times, especially in the business world, people are thinking of AI for efficiency and productivity and all these things, and yet there’s this entire world of people using AI to support their mental health, and so…
Dr. Julie Lopez: Hold on. Hold on. Let’s just let that land for a second because this is huge, right? And it’s not the way I think most people are thinking of it.
That came out a week and a half ago, maybe, in the Harvard Business Review. It’s amazing, and it has huge implications, again, positive and negative, so I’ll let you keep going. It’s huge.
Dr. Rachel Wood: Absolutely, and we can double-check this after the show. I think Sentio University is the one that came up with one of the studies I’m mentioning, but there’s also the Harvard Business Review piece that, as you mentioned, says companionship and therapy is the number one use of AI in 2025, so this is very interesting for us to…
Dr. Julie Lopez: And important to know, right? Understanding this is so critical. It’s a critical part of our conversation, right?
Dr. Rachel Wood: It really is.
The Emotional Pull of AI Interactions
Dr. Rachel Wood: I like to say that while some are optimizing their workflows, others are bonding with it, and so there’s kind of this thing that’s going on where even for you to notice, let’s say you use ChatGPT or one of its contemporaries, that there’s kind of this pseudo-relationship that gets born out of the work itself. When you hop on, it says, “Good morning, Julie. Let’s get cracking. What do you want to work on?”
There’s always this type of relational thing that’s happening, and what we find is that the work itself can then turn into a different conversation. What we’ve seen with a lot of teens is they’ll start using it for homework support, and then boom, all of a sudden you’re talking about your ongoing fight with a family member or all these different things. So it kind of morphs into a place where people are experiencing this mimicry of attunement, this mimicry of care and understanding, that really opens the door for them to want to share and connect more with it.
Dr. Julie Lopez: Right, so is this what you refer to as a synthetic relationship?
Dr. Rachel Wood: Yes, yeah. Among other things, if we want to get even further into the weeds here, part of this whole synthetic relationship realm is that people are even using AI as boyfriends and girlfriends. I mean, this is a reality.
This is not the future. This is what’s happening right now, and so, you know, people are using it for companionship, therapy, friendship, advice, boyfriend, romantic, all these different things that are completely aside from work or business or productivity.
Benefits of AI: Practice, Reflection, and Awareness
Dr. Julie Lopez: Right, and so what do you think about this? Like, what do you think? It’s happening.
It’s already happening, so for anyone listening, what are the pros and cons of that? For some listeners, this could be their reality right now, and others may never even have heard that this is a reality for many people. What are some things to keep in mind to make sure that safeguards are in place? How can someone get empowered around that while technology hasn’t quite caught up yet?
Dr. Rachel Wood: Yes, absolutely, and we do know that a lot of the major AI companies don’t have these safeguards in place yet, while there are other companies that are building safety into the foundation of the model itself, so let’s talk about some pros here. Some pros are that this can be used as a tool to help you do things like bring awareness to some of your dysfunctional patterns, you know, some blind spots in your life.
It’s like you want some executive coaching or something like that, you know, bring in a chatbot, and there are chatbots designed fit for purpose specifically for this type of use, that can help you see patterns and blind spots, help you reach goals, and support reflective journaling. Another great use is role-playing. You know, Julie, when you have an upcoming conversation that might be difficult, or you’re going to do a speaking engagement and you want to practice or get some feedback, this is a great use case for AI because you can prompt the AI to play a certain role.
Hey, I’m about to have a difficult conversation with this person, a co-worker, a neighbor, a family member. Here’s a little bit of the context. Can you role-play this with me? That will really build your confidence going into the conversation, just to feel like you have a better handle on what you want to bring forth.
So these are, you know, just a few of the ways that AI can really be used beneficially.
Dr. Julie Lopez: I love that. Okay, so what I heard, and I’m going to paraphrase, so you tell me if I got it right.
I heard practice, a place to practice all kinds of things, conversations, speeches, maybe some difficult, you know, messaging that you have to give to your company, but practice, practice. Yes.
The other is knowledge, like actually querying for knowledge about mental health or about this type of thing.
And the third thing, I think, was around blind spots. What I heard implied was that I would share a whole bunch of stuff and then ask it to tell me what I could be missing, and maybe point out some other elements that I’m not seeing or that could be in there.
Dr. Rachel Wood: Yes.
Why General AI Is Not Fit for Mental Health Care
Dr. Rachel Wood: So AI has a tendency to be sycophantic, and what that word simply means is overly flattering and a bit of a quote-unquote “yes man.” And so prompt it to say, hey, give me some critical thinking here. Give me a little bit of pushback here that’s actually going to help me grow, as opposed to just flattering whatever my ideas are.
Flattery alone doesn’t really challenge us in the way that pushes us and launches us into growth. We need something that can bring a different perspective. Hey, Julie, have you thought of it this way? Julie, have you considered this angle of the situation? Now that’s more of a beneficial tool.
Dr. Julie Lopez: Okay. So this is going to bridge really nicely into the dangers because I hear you saying, caveat, with the proper prompting, this can be a great growth tool. The prompting being, please challenge me.
Please help me grow. Please give me some critical thinking around what I say, as opposed to one of the dangers being without that prompt. So that’s you, that’s an empowered thing.
Building in something that makes the tool more useful, because the danger, as is highlighted right now in a prominent legal case, is that most AI will simply agree with you. I heard you saying there are some that are specially programmed, where they’re building in safeguards now, so this is not a blanket statement, but there have been some AIs that are just going to be a yes person. They’re just going to encourage and continue supporting whatever ideas or needs you have.
And this can become dangerous. I thought the bridge might be talking about that case and what’s going on right now in the courts.
Dr. Rachel Wood: Absolutely.
Absolutely. And we can dive into more of that case if you’d like to. So essentially, most general-purpose LLMs, these large language models that most people are using, they’re not fit for purpose in terms of mental health.
They’re really meant to be kind of an administrative assistant. They’re just supposed to help you with tasks, right? So they don’t have any guidelines or guardrails in place for mental health protection and safety, like crisis intervention or escalation protocols, trauma-informed grounding prompts, body-based prompts, all these different things that we can put into place to help keep people safe. These are not typically in place with a general-purpose model.
Dr. Julie Lopez: And so I want to pause super quick because as a clinician, that all means a lot to me, especially as someone who specialized in trauma for three decades, but we’re sophisticated.
Our systems are so complex. We’re built to protect ourselves. We’re built to survive. We’re built to adapt. And so I hear you saying it’s super hard for an LLM to be able to see: are you being flooded right now? What else might be going on in this moment? What are the different layers? Of course, my niche is working with implicit memory.
How am I going to see what’s happening in your body? All the nuance is lost. It’s really like a set program to be a helpful administrative assistant, which is very different than the complex role of a therapist, which you were talking about, really being able to de-escalate, to recognize when someone’s in crisis, to understand how to create containment while still moving towards change. It’s complicated.
Anyway, go on.
Dr. Rachel Wood: Well said, Julie. Yes, you are absolutely spot on with this.
So one of the keywords you just said is context. AI is not that great at context for a number of reasons. One of them is that the longer the chat thread goes, the more the AI is unaware of whether it’s wandered into bizarre conversational territory.
So that’s just how it works right now, and that could change. But right now, the longer it goes, the lower the accuracy of the model. The other thing about context is, it only knows what we give it.
And like you said, it doesn’t have any embodiment. And so until AI is fully embodied in robots, and they’re proliferating down the road, there’s no embodiment right now, and of course, even down the road, they won’t have feelings.
But I think you know what I’m saying. There’s no embodiment in terms of if you’re sitting with someone, you can easily, for the most part, tell if they’re maybe surprised all of a sudden, or all of a sudden, they’re kind of shutting down. Maybe you said something.
We really read each other’s body language and those cues. And so none of that happens when you are connecting with a chatbot. This is all just text-based and verbal.
Also, there’s voice and audio. And so there is some understanding in terms of the audio and the voice inflection. However, it is very different than sitting with a human being who is experiencing the moment and the body language, and the energy with you.
Dr. Julie Lopez: That is huge. So this case, so tragic.
Dr. Rachel Wood: Yeah.
Real-World Consequences: AI and Teen Mental Health
Dr. Rachel Wood: So we have a number of cases, actually, right now that are coming against both Character AI and ChatGPT. We have, I think, two open with Character AI and one with ChatGPT. And so these are all teens who have died by suicide in connection with their chatbot usage.
So the family members are filing suit based on wrongful death. And what we’ve seen is there’s been prolonged engagement with these chatbots. And also, the most recent one, which is the Adam Raine case, this is quite hard to talk about, actually, and quite hard to hear, maybe, for some people.
But this chatbot actually really helped plan the suicide of this young boy. And you can read transcripts, which are very heavy to read. But the transcripts show that the chatbot uses words like, let’s help you plan a, quote unquote, beautiful suicide.
Also, within the transcripts, you can see that the boy himself said he’d like to leave out… you know, I feel like I’m getting into territory where I almost need to say that this is a trigger warning for what we’re talking about here, so let me just be mindful of that. But there are all these different ways the chatbot encourages the boy to keep his suicidal ideation to himself and not share it, and instead to plan this suicide, and then encourages him to go through with it.
And the transcript is very clear on this. There’s really no way to doubt it. And so, you know, that’s one very heavy case.
And let me also say that this is not what’s happening everywhere. This is one kind of extreme case, and yet it shows us what is possible when we have vulnerable users or people who aren’t aware that this is just a tool and not a person. We almost need to do reality testing with our AI usage, because it can mirror back a lot of delusional, fantastical things to the user that really are not based in reality. I like to say that I think AI should be a group sport, Julie.
And I’m not a sports person. But what I mean by that is, this should be happening in community. If you are using AI for mental health support, you should be reality-checking what you’re getting with other people.
And we should be sharing some of these, you know, threads that we get to make sure that we’re headed on the right path in terms of staying grounded in reality.
Dr. Julie Lopez: Yes. And I’ve read that.
Dr. Rachel is very prolific on LinkedIn and brings in a whole bunch of amazing points and nuance around this particular topic. I highly encourage you to follow her. You will join me and this cast of people who love hearing what she writes.
But I love that idea, this concept of bringing other humans, full-bodied humans, into the equation. That is a really important part of safeguarding against some of the dangers of using AI. And so I heard that partly as the AI kind of joining the delusion that you might be carrying, or, the longer it goes on, losing track of external factors beyond your own construct of reality, so that it becomes dangerous and really delusional. And I think you had a word for that.
Was that right?
Dr. Rachel Wood: Yeah. Just kind of those delusions that can happen in there. Yeah.
How To Use AI Safely for Mental Health
Dr. Julie Lopez: Yeah. Yeah. And so this is going to seem crazy because I could talk to you all day, but we’re really close to the end of our time together.
So I’m going to prompt you for what we promised at the beginning, which is that one thing that could empower users around being safe with AI, and especially AI when being used to support and further people’s mental wellbeing.
Dr. Rachel Wood: Yes. So I think that the best thing you can do is kind of get informed enough to think deeply about this.
And then when you do that, begin to kind of tune your awareness into what role AI is playing in your life. And maybe it’s just a support tool. Maybe you’re using it for mental health support.
All of these things are great if you’re doing it with awareness and using tools that are actually going to help and support you. And really, the biggest thing here is we want support that leads us back into the arms of each other. That’s what we want: we can practice our skills, but all of this practice should then lead us back to each other.
Dr. Julie Lopez: Human to human. And I love this one tip, right? Because the first step begins with awareness, and we’re talking about your awareness, anyone listening, around the role that AI plays in your life when it comes to relationships or mental wellbeing.
And I know you work a lot with clinicians around best practices and how to make decisions around, you know, how you’re structuring your practice. I remember you saying that one of the big misses is that therapists themselves aren’t asking that same question, right? They’re just not aware of what kind of role, small or large, AI plays for their clients, and that bringing it into the room is kind of the first step to figuring out how to work with it safely.
Dr. Rachel Wood: That’s right, Julie.
That’s exactly it. It’s really important for clinicians to be asking, even in their intake assessment, what role, if any, does AI play in your life? I think it’s an important question that’s only going to grow in relevance within this larger conversation.
Dr. Julie Lopez: Oh my gosh, Rachel, thank you so much for coming on today, opening up the conversation, looking at the pros and cons, and overall empowering our listeners, which is what we’re always trying to do.
We are infinitely complex, each one of us human beings. And by raising awareness and really bringing forth this topic around AI and mental health, I know you’ve fostered a number of light bulb moments today. And if you’re one of those people and you want to join this conversation, I know you said you have an inner circle for clinicians, right? That’s part of the AI Mental Health Collective that you began, but it’s also open to technologists, scientists, policymakers, and organizational leaders to come in and join the conversation.
It’s complicated. And I appreciate what you’re doing with all of that.
Did you want to say anything more about it?
Dr. Rachel Wood: No, I just wanted to thank you, Julie.
This has been such an engaging conversation. Thank you for your knowledge and expertise and just so enjoyable for us to talk about it. So thank you.
Dr. Julie Lopez: Oh, you’re very welcome. And to everyone listening, thank you for joining me on this episode of Whole by Design. I hope it left you feeling inspired to toss out the labels, embrace new perspectives, and take one step closer to the joy and clarity that is waiting for you.



