As artificial intelligence (AI) continues to influence all facets of our personal and professional lives, questions abound, such as “Can people have actual feelings for robots?” and “Can a chatbot comfort someone in distress?”

Penn State faculty members and Rock Ethics Institute senior research associates Daryl Cameron and Alan Wagner are working to better understand these questions. Recently, Cameron, associate professor of psychology, and Wagner, associate professor of aerospace engineering, collaborated with other scholars on an article examining empathy for and from robots from an interdisciplinary perspective. They published their work in the journal Current Directions in Psychological Science.

The article was co-written with Martina Orlandi, a former postdoctoral scholar in the Rock Ethics Institute; Eliana Hadjiandreou and Stephen Anderson, doctoral alumni of the Department of Psychology and Cameron’s Empathy and Moral Psychology Lab; and India Oates, a former post-baccalaureate research associate in Cameron’s lab.

Cameron and Wagner discussed the potential for empathy between humans and AI in the Q&A below.

Q: How did you end up collaborating on this article?

Cameron: Alan and I have been talking about empathy and robots for about 10 years now and have worked on several projects together. We applied for and received the Center for Socially Responsible Artificial Intelligence’s Moral Psychology of Human-Technology Interaction seed grant, which allowed us to develop this theory paper on the complexities of human-robot relationships.

In the paper, we go into what empathy is and how difficult it is to define. If we’re talking about compassion, we can picture those cute little robots that make expressions that seem to convey warmth. Then there’s perspective taking, which is being able to predict and understand the minds of others around you. That doesn’t involve emotion in the same way; empathy isn’t one thing but many processes, some emotional and some that predict emotions.

Someone might say, “I value this in terms of my personal wellbeing. This robot makes me feel cared for; it makes me feel happy.” But then you might have some scientists or philosophers or engineers who say, “Well, you’re not really happy.” We suggest it’s important to consider how people relate to their own feelings when they empathize with robots, and to recognize that those experiences may have some value.

Q: How do humans and AI agents interact in ways that resemble empathy?

Wagner: People are interacting a lot with AI agents like chatbots — and treating them like their best friends. If you made the AI more empathetic, could that help people? It’s tough to say; we don’t know at this point. AI can say, “Sorry you’re having a bad day,” but it’s just words from an algorithm. It doesn’t have the experiences to feel those things — it doesn’t know what it feels like to have a family member die of cancer. But on the other hand, even just saying “sorry” can help people. It could be helpful, but there’s still a lot of territory to explore.

Q: What are some areas where more empathetic AI might be useful?

Wagner: One area that could be beneficial is with AI agents that interact with the elderly. You have a growing population of elderly people who have no children or maybe no family members at all. Is it OK to develop agents for them to serve as virtual companions? What are the potential positives and negatives of that? It’s been shown that older adults who have more social interactions are less likely to suffer from depression — there’s a whole slew of positive psychological benefits. But does that carry over to an AI agent? Is it deceptive?

In Japan, it’s been suggested that robots could be companions for children so that their parents could work longer hours — essentially, children being raised by robots. There’s also this field called “grief tech,” where you get text messages from people who’ve died. You now have products where you can make recordings of yourself and collect data on yourself, so that an artificially intelligent agent can reproduce your likeness after you’ve passed away. It wouldn’t just reproduce things you’ve said; it would answer questions the way you would have answered them. Maybe it’s helpful for loved ones, but we have no idea what the long-term psychological impact is.

Q: What are the potential harms of human-AI interplay?

Cameron: Something that’s always in the back of my mind is, if I just start treating the chatbot like a jerk, will that filter into my own interactions with other people? If we begin to liken human interaction too much to our AI interactions, will we treat humans as tools that do our bidding? And that’s part of the broader concern about overly agreeable AI. If AI always agrees with what we say, when we do confront the friction of actual human interaction, we might not be prepared for it.

Building off that, something we discuss in the paper is that there could also be some benefits to treating AI like humans. From a character development standpoint, being polite in those spaces is a chance to practice, and that politeness can translate outward.

Wagner: Where this work gets interesting — and also scary — is how gentle nudges, small cues you don’t even notice, can influence your psychological behavior. Now, with a chatbot that reaches 400 million people, you can imagine how implementing a gentle nudge could influence a lot of people in whatever way the designer chooses.

Q: What else could AI teach us about empathy?

Cameron: It would be interesting to study how the perspective provided by AI could help us reflect on our own experiences and offer feedback and guidance. We know human empathy can be limited in various ways by biases, prejudices and first assumptions, and those can be hard to break out of. There’s a growing body of work showing that large language models can be useful for debunking certain kinds of misinformation and for countering biases that people hold.

In our paper, we talk about whether interactions with robots could provide a platform to think about the limits of our own empathy in different ways. If you find yourself saying “please” to a robot, you might be developing your ability to empathize with other people. That sort of interaction would be interesting to test.

There are some who are very opposed to the idea that there could be positive upshots of these interactions. But I think the potential benefits are sometimes underestimated, and it does us a disservice not to study them.
