A study by researchers from three universities, Cornell University, Olin College, and Stanford University, finds that the empathy displayed by AI conversational agents (CAs), such as Alexa and Siri, is rather limited. The findings, submitted to the CHI 2024 conference, indicate that while CAs are good at showing emotional reactions, they struggle when it comes to interpreting and exploring users' experiences.
Biases and discrimination uncovered
Drawing on data collected by Stanford researcher Andrea Cuadra, the study measured how CAs detect and respond to different social identities among humans. Testing 65 distinct identities, the researchers found that CAs are inclined to categorize individuals, and that identities concerning sexual orientation or religion are the most vulnerable to this habit.
Because CAs draw their knowledge from large language models (LLMs) trained on vast volumes of human-created data, they can inherit the harmful biases present in that data. They are especially prone to discrimination; CAs can even express solidarity with ideologies that have negative effects on people, such as Nazism.
The implications of automated empathy
The researchers see varied applications for automated empathy in education and the healthcare sector. At the same time, they place great emphasis on the need for humans to remain vigilant and to avoid the problems that may arise with such advances.
According to the researchers, LLMs demonstrate a high ability to provide emotional responses but lack sufficient ability to interpret and explore users' experiences. This is a drawback, as CAs may be unable to engage users in emotional interactions that go deeper than the surface level.