A study by researchers from three universities, Cornell University, Olin College, and Stanford University, has found that the ability of AI conversational agents (CAs) such as Alexa and Siri to display empathy is rather limited. The findings, submitted to the CHI 2024 conference, indicate that although CAs are good at displaying emotional reactions, they struggle when it comes to interpreting and exploring a user's experience.
Study uncovers biases and discrimination
Using data collected by Stanford researcher Andrea Cuadra, the study aimed to measure how CAs detect and respond to different social identities among humans. Testing 65 distinct identities, the researchers found that CAs are inclined to make categorical judgments about individuals, and that identities concerning sexual orientation or religion are the most vulnerable to this habit.
Because CAs draw their knowledge from large language models (LLMs) trained on vast volumes of human-created data, they can carry the harmful biases present in that data. They are prone to discrimination, and in some cases CAs can even show solidarity with ideologies that harm people, such as Nazism.
The implications of automated empathy
The researchers noted that automated empathy could have varied applications in education and the health care sector. At the same time, they placed a great deal of emphasis on the need for humans to remain vigilant and to guard against the problems that may arise with such advances.
As stated by the researchers, LLMs demonstrate a high ability to provide emotional responses, but they lack sufficient ability to interpret and explore a user's experience. This is a downside, since CAs may be unable to engage users in deeper emotional interactions beyond surface-level responses.