
BLOG #2: To what extent can emotionally responsive artificial intelligence be considered a moral agent within human-centered ethical frameworks, and who should bear ethical responsibility for its emotionally driven decisions and actions?

Artificial Intelligence (AI) is now part of everyday life. It is used in healthcare, education, and transportation, and it powers self-driving cars, social robots, and smart chat systems. AI brings many benefits and makes tasks faster and easier. However, it also creates psychological, social, and ethical challenges. As AI begins to influence how people think and make decisions, it is essential to design these systems in ways that protect human values, privacy, fairness, and well-being.

In the past, AI was designed mainly for efficiency and performance. That focus sometimes ignored important human concerns like trust, autonomy, and bias. Research shows that AI systems can even copy human mistakes, such as overconfidence and confirmation bias. Because of this, experts now support Human-Centered AI (HCAI). This approach puts people first by making AI transparent, fair, and aligned with ethical principles. It also highlights the need for better governance, accountability, and user understanding so that AI supports society instead of harming it.

Studies also show that people’s beliefs, culture, and level of digital knowledge affect how they trust and use AI. Some people prefer human advice, while others trust AI more. Across countries, attitudes toward technologies like automated vehicles depend on privacy concerns and trust in the companies behind them. In addition, people who understand algorithms better gain more from AI transparency tools, which can widen the digital divide between users. This shows that AI systems must be designed to account for psychological differences and to promote digital literacy.

AI is also being used in mental health and emotional support, which creates both opportunities and risks. While AI can increase access to mental health services, it also raises concerns about privacy, bias, and informed consent. Emotional AI, meaning systems that recognize and respond to human feelings, adds another layer of complexity. These systems try to read emotions through facial expressions, tone of voice, body language, heart rate, and context. However, emotions are not always clear, cultures express feelings differently, and the technology can make mistakes. There is also a risk that people may become emotionally dependent on AI, or be misled by systems that seem empathetic but do not truly understand human experience.

To address these challenges, researchers propose a unified, human-centered framework for emotion-aware AI. This system has three main parts. First, a perception module uses advanced AI models to analyze emotional signals and understand how a person feels, even when the data is uncertain. Second, an adaptation layer changes the AI’s behavior based on the user’s emotions and situation—for example, adjusting tone, timing, or responses. Third, a safety module ensures that the AI follows ethical rules, protects user privacy, and avoids harmful or risky actions.
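
To make these three parts concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than the actual implementation of any published framework: the names (EmotionEstimate, perceive, adapt, respond), the pitch and speech-rate thresholds, and the rule-based classifier are hypothetical stand-ins for what would, in practice, be trained models and a formal policy layer.

```python
from dataclasses import dataclass


@dataclass
class EmotionEstimate:
    """Output of the perception module. Emotions are inherently ambiguous,
    so every estimate carries a confidence score."""
    label: str         # e.g. "frustrated", "calm", "neutral"
    confidence: float  # 0.0 to 1.0


def perceive(signals: dict) -> EmotionEstimate:
    """Perception module: fuses signals such as voice pitch and speech rate
    into an emotion estimate. A real system would use trained multimodal
    models; this hand-written rule is only a placeholder."""
    if signals.get("voice_pitch", 0.0) > 0.8 and signals.get("speech_rate", 0.0) > 0.8:
        return EmotionEstimate("frustrated", 0.7)
    return EmotionEstimate("neutral", 0.4)


def adapt(estimate: EmotionEstimate, base_reply: str) -> str:
    """Adaptation layer: adjusts tone to the user's state, and deliberately
    falls back to neutral behavior when the emotion reading is uncertain."""
    if estimate.confidence < 0.5:
        return base_reply
    if estimate.label == "frustrated":
        return "I can see this is frustrating. " + base_reply
    return base_reply


def respond(signals: dict, base_reply: str, consented: bool) -> str:
    """Safety module wrapping the pipeline: no emotional inference happens
    at all without the user's informed consent."""
    if not consented:
        return base_reply
    return adapt(perceive(signals), base_reply)


# Example interaction: a stressed user who has consented to emotion-aware help.
print(respond({"voice_pitch": 0.9, "speech_rate": 0.85},
              "Let's go through the steps one more time.", consented=True))
```

One design choice worth noting: when the perception module reports low confidence, the adaptation layer does nothing, which reflects the point above that emotional signals are often unclear and should not be over-interpreted.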

Researchers also introduce new ideas like “System 0,” which describes AI as a thinking partner that helps humans process large amounts of information. AI can boost creativity, improve brainstorming, and extend our cognitive abilities. However, there is a risk that AI may agree too much with users instead of challenging them, which could limit critical thinking. Therefore, AI should support human growth, not replace human judgment.

In conclusion, the future of AI is not only about building smarter machines, but about understanding people. Humane AI must respect individual differences, promote transparency, protect emotional well-being, and keep humans in control. By combining knowledge from psychology, ethics, sociology, and computer science, we can design AI systems that are not only advanced, but also safe, fair, and truly centered on human needs.

REFERENCES

1. Riva G, Wiederhold BK, Cipresso P. Humane Artificial Intelligence: Psychological, Social, and Ethical Dimensions. 2025.

2. Dakanalis A, Wiederhold BK, Riva G. Artificial intelligence: A gamechanger for mental health care. Cyberpsychol Behav Soc Netw. 2024;27(2):100–104.

3. Emotional Intelligence and Artificial Intelligence: Relation and Exploring the World of Human Touch and Machine.
