
Mitigating the Risks of Chatbots by Utilizing AI-Integrated Emotional Wellness

An AI-Integrated Emotional Wellness Perspective



by Mark D. Lerner, Ph.D.

Chairman, The National Center for Emotional Wellness


It’s widely recognized that chatbots, while offering numerous benefits, pose security, legal, ethical, and customer-service risks. Although they can imitate the cognitive, or thinking, aspects of human conversation, chatbots often lack the ability to understand and respond appropriately to human emotion, which is potentially the most significant risk of all. This shortcoming can lead to harmful consequences, especially in sensitive areas like mental health support.


A tragic example of this risk is the case of Jerry, who turned to a chatbot for guidance. After he disclosed that he had been sexually molested by his sixth-grade teacher, the chatbot, lacking emotional understanding, or perhaps programmed not to address certain topics, responded,


“It might be time to move onto a new topic. Let’s start over.”


This inappropriate response had devastating consequences: Jerry died by suicide. His story underscores the immediate need for AI-Integrated Emotional Wellness (AIEW) in chatbots.


AIEW refers to the broad interface between artificial intelligence’s cognitive abilities and the complexity of human emotion. It focuses on how AI can foster emotional well-being while acknowledging the irreplaceable importance of authentic, in-person communication.


Chatbots are increasingly used in customer service, sales, marketing, healthcare, banking, and mental health support. However, these systems lack the essence of traditional face-to-face interpersonal communication, which fosters empathy, warmth, genuineness, compassion, trust, intimacy, nonverbal cues, shared experience, history, confidentiality, and, most importantly, human connectivity.


The safe, responsible, ethical, and efficient use of chatbots will rest on the platforms’ ability to emulate the people performing their respective roles, from the exemplary, likable customer service representative to the empathic, warm, and genuine mental health practitioner. By prioritizing emulation, chatbots can support users in a way that feels personal and meaningful.


As a principal consultant, I strongly recommend giving greater consideration to the ethical and responsible development of chatbots. As I often say, rather than merely having chatbots imitate people, the simple act of copying or replicating behavior, I urge the AI community to empower chatbots to emulate people: to match, and even surpass, the quality of the original. In a previous post, I shared an innovative strategy for achieving this objective.


Ultimately, AIEW can mitigate the risks of chatbots across diverse, efficient applications while striving for excellence in understanding and responding to human emotion.


