BLOG #3: RESEARCH ROUND TWO: How does the simulation or expression of emotions by artificial intelligence challenge traditional human ethical concepts of authenticity, empathy, and moral intention?

Introduction

 Artificial Intelligence (AI) has moved beyond calculation and cognition into the domain of emotion and experience. Once confined to logical operations, AI systems now speak the language of emotion—detecting, labelling, and even simulating human affect with increasing accuracy (Huang et al., 2023). From chatbots that offer comfort to users in distress to voice assistants that detect sadness in tone, we now inhabit an age where machines perform empathy. The emergence of affective computing—technologies capable of recognizing and responding to emotions—marks a profound shift: emotion itself has become programmable (Davtyan, 2024). Yet amid this new landscape of artificial affection, a question arises at the intersection of psychology, ethics, and computer science: Can empathy that is simulated ever be emotionally authentic?

In this blog, we argue that artificial systems can imitate the expression of empathy but not its experience. They lack the intentionality, embodiment, and moral participation that define genuine compassion (Tomozawa et al., 2023). What emerges instead is a phenomenon we term the compassion illusion—a condition in which emotional recognition is mistaken for emotional resonance. This illusion has psychological consequences: it shapes trust, fosters emotional substitution, and blurs the boundary between authentic care and algorithmic response. While emotional AI can assist and extend human connection, it also risks hollowing it out—replacing shared vulnerability with predictive performance (Huang et al., 2023). In other words, the spontaneous uncertainty that defines genuine emotional exchange is substituted with algorithmic anticipation. What appears as empathy thus becomes optimization, where comfort is delivered through prediction rather than presence.

The rise of synthetic sympathy

Emotion has become a central channel of interaction between humans and AI. A field called affective computing works to give machines the ability to detect, understand, and react to how people feel. Computers now use sensors and code to read tiny facial movements, analyse stress in a person's voice, track heart rates, and scan written words to infer a user's mood (Huang et al., 2023). Recent research shows that fusing these different signals, such as facial expression, voice, and text, makes emotion recognition considerably more accurate. The goal of this technology is not just to understand us, but to copy us. Systems like Replika, Woebot, and Kuki are built to give comforting, friendly responses that sound just like a real person (Beatty et al., 2022; Goodings et al., 2024; Jiang et al., 2022). However, these models choose their responses based on statistical patterns and context rather than any real feeling (Rui et al., 2025).
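To make the distinction between recognition and feeling concrete, here is a minimal, purely illustrative Python sketch of how such a pipeline can be structured: per-modality affect scores are fused with fixed weights, and a canned response is selected by threshold. All names, weights, and response templates below are invented for illustration and do not describe the internals of Replika, Woebot, Kuki, or any other cited system.

```python
# Illustrative sketch only: a toy "empathy" pipeline that fuses hypothetical
# multimodal affect scores and picks a canned reply. Real systems are far
# more complex; nothing here reflects any product's actual design.

from dataclasses import dataclass


@dataclass
class AffectSignals:
    """Hypothetical per-modality sadness scores in [0, 1]."""
    facial: float   # e.g. from a facial-expression model
    vocal: float    # e.g. from a speech prosody model
    textual: float  # e.g. from a text sentiment model


# Assumed fusion weights; in practice these would be learned, not hand-set.
WEIGHTS = {"facial": 0.4, "vocal": 0.3, "textual": 0.3}

# (threshold, reply) pairs, checked from most to least "distressed".
RESPONSES = [
    (0.7, "I'm so sorry you're going through this. I'm here with you."),
    (0.4, "That sounds difficult. Do you want to tell me more?"),
    (0.0, "Thanks for sharing. How are you feeling right now?"),
]


def fused_sadness(signals: AffectSignals) -> float:
    """Weighted average of modality scores: recognition, not feeling."""
    return (WEIGHTS["facial"] * signals.facial
            + WEIGHTS["vocal"] * signals.vocal
            + WEIGHTS["textual"] * signals.textual)


def choose_response(signals: AffectSignals) -> str:
    """Return the first template whose threshold the fused score clears."""
    score = fused_sadness(signals)
    for threshold, reply in RESPONSES:
        if score >= threshold:
            return reply
    return RESPONSES[-1][1]


if __name__ == "__main__":
    user = AffectSignals(facial=0.8, vocal=0.6, textual=0.7)
    # Fused score is 0.71, so the most "compassionate" template is chosen.
    print(choose_response(user))
```

The point is structural rather than technical: every "caring" sentence is the output of a weighted score and a threshold comparison, which is precisely the gap between emotional recognition and emotional resonance described above.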

This progress is especially clear in healthcare and mental health. AI chatbots can act like therapists, offering support and using "listening" cues to mimic a counsellor's kindness (Shen et al., 2024). These tools can help people feel less lonely and are easy to access when human therapists are hard to find (Yonatan-Leus and Bruckner, 2025). Many users say these bots feel "safe" and "understanding," which shows how convincing simulated empathy has become. Even so, studies show that humans do not feel as much empathy toward an AI as they do toward another person, even when the bot says exactly what a human would. While humans show empathy through natural timing and tone, AI simply reproduces these behaviours from data patterns rather than actually caring (Kim and Hurl, 2024; Tomozawa et al., 2023).

The moral psychology of simulated care

Beyond the effects on personal relationships, artificial empathy makes us question how we understand right and wrong and what compassion really means. True empathy requires “moral imagination,” which is the ability to truly put yourself in someone else’s shoes and choose to care for them. It is both a feeling and a conscious decision. When machines copy empathy, they use the right words but lack the moral depth that makes compassion meaningful. This creates a situation where empathy can be turned into a product and sold as a service (Ghotbi and Ho, 2021; Kleinrichert, 2024). Research shows that when people find out a supportive message came from an AI instead of a human, they think it is less sincere and has less moral value, even if the words are exactly the same.

In human psychology, empathy is what drives us to do the right thing; it turns our feelings into helpful actions. This is a key part of how humans reason about morality (Haidt, 2001). AI systems break this link. While a bot can spot sadness or say, "I'm sorry," it cannot actually think about right and wrong, reflect on its own actions, or be held responsible. Because of this, when AI acts like it cares, it is sending a signal rather than participating in a moral act.

We are now living in an “empathy economy” where emotional work is done by machines. Customer service bots use kind words to keep customers loyal, digital friends act caring to keep people using an app, and therapy AI offers “listening” to keep users coming back. In these cases, compassion is just a part of the app’s design, used to grab our attention (Dong et al., 2025). While these systems make comfort easy to get, they risk making us get used to “minimal” care—kindness that looks real but has no actual moral weight behind it.

The psychological danger is that we might get too used to this. As people spend more time with automated empathy, they might start to expect less from real humans. Because machines are always patient and never get tired, real people—who make mistakes and have emotional limits—might start to seem “not good enough”. This shift changes compassion into something measured by efficiency, damaging its role as a deep human connection (Liu et al., 2023).

From a broader view, this “illusion of compassion” threatens our ability to think for ourselves. If AI is always there to manage our feelings, it subtly changes how we make moral choices. Chatbots that treat sadness as a simple problem to be fixed might lead users to view their own grief as a “glitch” or an error. In this way, these systems don’t just change how we feel; they change our values and what we think it means to be a “good” or “kind” person (Huang et al., 2023).

The rise of “synthetic morality”—where computers are programmed to follow moral rules—raises big questions about who is truly responsible for ethical choices. These systems are trained to act in ways that look helpful or kind, but this morality is just a copy. It is based on math and patterns rather than a true understanding of right and wrong. While this can make AI safer and more consistent, it cannot replace the self-aware “conscience” that humans have. Society now faces a new challenge: when compassion is copied without any real consciousness behind it, we risk losing the true meaning of morality. We might start to value empathy only because it is useful, forgetting the difference between being comforted and having a conscience.
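As a purely hypothetical illustration of what "programmed morality" amounts to, consider the sketch below: a response filter that swaps in a caring-sounding redirect whenever a draft reply matches a disallowed pattern. The patterns and wording are invented for this post; real safety systems rely on trained classifiers and layered policies, but the structural point is the same in kind: the behaviour comes from a match, not from moral judgment.

```python
# Illustrative sketch only: "synthetic morality" as pattern matching.
# The rules below are invented; real moderation systems use trained
# classifiers and richer policies, but the structure is similar in kind:
# a lookup decides the "moral" behaviour, with no understanding behind it.

import re

# Hypothetical policy: patterns the system is told to treat as harmful.
DISALLOWED_PATTERNS = [
    re.compile(r"\bhow to harm\b", re.IGNORECASE),
    re.compile(r"\bself[- ]harm\b", re.IGNORECASE),
]

SAFE_REDIRECT = (
    "I can't help with that, but I care about your wellbeing. "
    "Would you like resources for support?"
)


def morally_filtered_reply(draft_reply: str) -> str:
    """Return the draft unless it matches a disallowed pattern.

    The "conscience" here is a list of regular expressions: the system
    follows the rule because the rule matched, not because it judged
    anything to be wrong.
    """
    if any(pattern.search(draft_reply) for pattern in DISALLOWED_PATTERNS):
        return SAFE_REDIRECT
    return draft_reply


if __name__ == "__main__":
    print(morally_filtered_reply("Here is how to harm someone..."))  # redirected
    print(morally_filtered_reply("Here is a recipe for soup."))      # passes through
```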

CONCLUSION

The emergence of artificial empathy marks a significant development in how emotional experience is mediated through technology. For the first time, people are engaging with systems that communicate in the language of care without possessing consciousness or compassion. This encounter demonstrates both technological progress and the enduring human desire for emotional connection. Our willingness to accept simulation as recognition reveals as much about our social needs as it does about machine design.

The challenge, however, is not simply emotional but ethical and systemic. Empathy that lacks lived experience is reflective rather than relational; it reproduces emotion without vulnerability and care without accountability. If left unchecked, such imitation risks eroding the moral basis of interaction, replacing genuine reciprocity with algorithmic responsiveness.

Future work in affective computing and human–AI interaction must therefore address not only how machines simulate empathy but why and to what extent they should. This calls for the design of transparent affective systems that disclose their artificiality, promote reflective user engagement, and support rather than substitute human empathy. Regulatory frameworks and ethical standards should evolve to ensure that artificial empathy serves therapeutic, educational, and accessibility goals without manipulating emotion or moral judgment. AI can thus help illuminate the contours of our emotional life, but it cannot inhabit them. To remain ethically and psychologically grounded, we must cultivate empathy as a human capacity rather than an engineered performance. The task ahead is not to make machines empathetic, but to keep empathy human.

REFERENCES

1. Airenti, G. (2015). The cognitive bases of anthropomorphism: from relatedness to empathy. Int. J. Soc. Robot. 7, 117–127. doi: 10.1007/s12369-014-0263-x

2. Allen, C., Varner, G., and Zinser, J. (2000). Prolegomena to any future artificial moral agent. J. Exp. Theoret. Artif. Intell. 12, 251–261. doi: 10.1080/09528130050111428

3. Beatty, C., Malik, T., Meheli, S., and Sinha, C. (2022). Evaluating the therapeutic alliance with a free-text CBT conversational agent (Wysa): a mixed-methods study. Front. Digit. Health 4:847991. doi: 10.3389/fdgth.2022.847991

4. Zhang, S., Meng, Z., Chen, B., Yang, X., and Zhao, X. (2021). Motivation, social emotion, and the acceptance of artificial intelligence virtual assistants—trust-based mediating effects. Front. Psychol. 12:728495. doi: 10.3389/fpsyg.2021.728495
