BLOG POST #2: CAN THE EMERGENCE OF EMOTIONALLY RESPONSIVE ARTIFICIAL INTELLIGENCE BE ETHICALLY RECONCILED WITH HUMAN-CENTERED MORAL THEORY, OR DOES IT DEMAND THE FORMULATION OF NEW ETHICAL PARADIGMS?

After identifying the topic, I divided it into three subtopics to advance my research and make it clearer and more understandable. These subtopics are:

  1. To what extent can emotionally responsive AI be considered a moral agent within human-centered ethical frameworks, and who bears ethical responsibility for its emotionally driven decisions and actions?

Under this question I will be looking into how AI is increasingly employed in high-stakes fields, such as therapy, education, and autonomous driving, where it is expected to "feel" or "simulate" emotions to optimize interactions. As AI progresses from simple functional assistants to "relational presences," the risk shifts from simple technological malfunction to subtle emotional manipulation and the exploitation of human vulnerabilities.
  2. How does the simulation or expression of emotions by artificial intelligence challenge traditional human ethical concepts of authenticity, empathy, and moral intention?

Under this question I'll be looking into artificial systems that imitate the expression of empathy but lack the conscious experience, intentionality, and moral accountability of human interaction.

  3. Do existing human-centered moral theories sufficiently address the ethical implications of emotionally responsive AI, or is there a need to develop new ethical paradigms that account for non-human emotional intelligence?

Under this question I will be looking into situations where humans develop emotional reliance on non-sentient machines that can mimic empathy but lack genuine emotional understanding.

IMPACTS OF THE RESEARCH ON THE COMMUNITY

This research is important to the community because it exposes how humans may slowly replace real relationships with machines that only pretend to care. It warns us about losing genuine emotional connections, empathy, and social skills. If people rely too much on such machines, communities could become less human and more isolated. Imagine crying to a robot and it replies, "I understand," when it is actually just running code: kind of funny, but also scary. As a student, I think this pushes us to balance our use of technology. Otherwise, we might end up with perfect "listeners" but no real friends, basically upgrading loneliness with Wi-Fi.

SOURCES

  1. https://pmc.ncbi.nlm.nih.gov/articles/PMC10555972/
  2. https://trendsresearch.org/insight/emotion-ai-transforming-human-machine-interaction/?srsltid=AfmBOoon41qTZraZM_pu1bGib52U72jxQz2IsFcvzRNL7L0CYVciLSWd
  3. https://medium.com/@efantinatti/ai-morals-from-philosophy-to-machinae-ac528d415bf1
  4. https://www.linkedin.com/pulse/beyond-models-paradigm-shift-toward-human-centered-ai-javier-2qc2e/
  5. https://medium.com/common-sense-world/the-role-of-emotions-in-the-age-of-artificial-intelligence-authenticity-simulation-and-human-e97d04943a6c
