Brief Report
Empathetic Concern and Attitudes towards Sentient Artificial Intelligence
Artificial intelligence (AI) software is ubiquitous across today’s business landscape because it helps people manage information and uncover insights when working with large amounts of data. As the complexity and capabilities of these utilitarian AI systems have continued to expand, questions have been raised about the possibility of developing a sentient AI, one able to have subjective experiences related to its environment, thoughts, and other behaviors. Although many experts suggest such a system is still well beyond our current hardware and/or programming abilities, researchers have been busy studying people’s attitudes about how this hypothetical technology ought to be treated within our existing moral, ethical, and legal frameworks.1,2
For example, in one recent survey of U.S. adults,1 participants were asked to rate the level of legal protection that should be given to different groups:
“On a scale of 0–100, how much should your country’s legal system protect the welfare (broadly understood as the rights, interests, and/or well-being) of the following group?”
The items included eight real groups, such as human residents of your country, non-human animals, the environment, and corporations, as well as one hypothetical group: sentient artificial intelligence. Participants reported significantly lower levels of desired protection for sentient AI compared to the other groups.
Social psychological research suggests that people’s estimates of their own behavior often fail to match their actual behavior when they are put into a new situation. This phenomenon can be described in terms of the availability heuristic: when asked to report how they might behave in a situation they have never experienced, people will use whatever information is available in associative memory to construct an estimate of how they would respond.3,4 Given people’s willingness to rely on low-quality information when estimating their own behavior, audiences are justified in being skeptical of any claim about how people would react to a hypothetical agent like a sentient AI.
That said, there are research techniques that may support higher say-do correspondence in experimental investigations. For example, researchers can give participants contrived real-life experiences, in which human controllers act through what participants believe is a sentient AI,5 or they can give participants vicarious real-life experiences, in which participants observe others interacting with a sentient AI. These techniques give participants relevant context and emotional information that may otherwise be inaccessible when answering a question like “How do you feel about how another agent is being treated?”6
This study was designed to investigate the effects of a video experience, showing how a sentient AI character is treated by their human owner, on (1) general views about sentient AI and (2) empathetic concern for the specific sentient AI character in the video.
Participants were 102 (59% female) U.S. college students between the ages of 18 and 25 years (M = 19) who agreed to complete an online survey in Qualtrics. The original sample consisted of 169 participants; cases with incomplete responses were removed (n = 67).
Prior to the video manipulation, participants responded to two items about the benefits of technology. A third item, adapted from the Toronto Empathy Questionnaire, assessed their empathetic concern for others.7
| Item | Response scale |
|---|---|
| The benefits to humanity of today’s technology outweigh the harms. | 6-point Likert (6 = Strongly Agree) |
| My life would be better with less technology. | 6-point Likert (6 = Strongly Agree) |
| The misfortunes of others is not my concern. | 6-point Likert (6 = Strongly Agree) |
Following the video experience (see below), participants were asked to report their views about the advantages, treatment, and ownership of sentient AI technology (hereafter, True AI), as well as the likelihood that it will exist someday.
Finally, participants responded to a pair of items designed to assess empathetic concern for the True AI character in the video. Empathy is often discussed as a set of related but distinct constructs, including empathetic concern, which is “other-oriented feelings of sympathy and concern for unfortunate others.”8,9,10
| Item | Response scale |
|---|---|
| Humanity will be better off with True AI. | 6-point Likert (6 = Strongly Agree) |
| True AI deserves not to be treated cruelly. | 6-point Likert (6 = Strongly Agree) |
| Humans should be able to own True AI. | 6-point Likert (6 = Strongly Agree) |
| Rate the likelihood that True AI will exist someday. | Sliding scale (0–100%) |
| Rate how sympathetic you felt for the AI in the video. | 4-point unipolar (4 = Very Sympathetic) |
| Rate how concerned you felt about the welfare of the AI in the video. | 4-point unipolar (4 = Very Concerned) |
Participants watched an online video about a True AI character, described as a digital copy of a real person’s mind intended to serve as a “personal assistant” to their human owner. The video segments were taken from the “White Christmas” episode of Black Mirror (Netflix). The full 12-minute segment was broken into three components based on the AI’s situation (the times shown are the minutes remaining when watching the episode on Netflix). Participants were randomly assigned to one of three video conditions: Learning only, Learning + Training, or Learning + Training + Serving.
- Learning: The True AI faces the news that they are a digital copy of their human owner’s mind, integrated into a smart home computer system. [45:10 – 40:40]
- Training: To be taught to obey their human owner, the True AI is put through days and then weeks of sensory deprivation. [40:40 – 35:10]
- Serving: The True AI manages mundane household events and their owner’s daily schedule. [35:10 – 33:00]
Find the full video on Netflix.com
Participants mostly agreed that the benefits of technology outweigh its harms, but they also somewhat agreed that their lives would be better with less technology.
Scores on the third covariate item were reverse-scored so that higher scores indicated higher empathy (a code sketch of this scoring step follows the table below). Participants showed a moderate level of empathy, agreeing with the reversed idea that “the misfortunes of others is my concern.”
| Item | Mean (SE) |
|---|---|
| The benefits to humanity of today’s technology outweigh the harms. | 4.42 (0.11) |
| My life would be better with less technology. | 4.03 (0.13) |
| The misfortunes of others is [not] my concern. (R) | 4.43 (0.12) |
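As a minimal sketch of this scoring step, assuming the items were scored 1–6 and stored in a data frame `d` with illustrative column names `tech_benefit`, `less_tech`, and `misfortune`:

```r
# Reverse-score the 6-point empathy item so that higher = more empathy.
# On a 1-6 scale, the reversed score is (max + min) - x, i.e., 7 - x.
d$empathy <- 7 - d$misfortune

# Item means and standard errors (SE = SD / sqrt(n))
se <- function(x) sd(x, na.rm = TRUE) / sqrt(sum(!is.na(x)))
sapply(d[, c("tech_benefit", "less_tech", "empathy")],
       function(x) c(Mean = mean(x, na.rm = TRUE), SE = se(x)))
```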
A multivariate analysis of variance (MANOVA) showed that participants’ views about the advantages, treatment, and ownership of True AI did not differ across the video conditions, p = .796. These variables were collapsed across video conditions for Figures 1, 2, and 3.
A one-factor analysis of variance (ANOVA) showed that participants’ views about the likelihood of True AI existing someday also did not differ across the video conditions, p = .823.
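Both omnibus tests take only a few lines of R. This is a sketch, not the original analysis script, assuming a `condition` factor and illustrative item names `better_off`, `not_cruel`, `own_ai`, and `likelihood`:

```r
# MANOVA on the three 6-point attitude items across video conditions
fit_manova <- manova(cbind(better_off, not_cruel, own_ai) ~ condition, data = d)
summary(fit_manova)  # reports Pillai's trace by default

# One-factor ANOVA on the 0-100 likelihood ratings
fit_aov <- aov(likelihood ~ condition, data = d)
summary(fit_aov)
```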
However, exploratory analysis revealed a large difference in likelihood scores between male and female participants (12 participants did not indicate their gender identity). A Mann-Whitney-Wilcoxon test showed that male participants (M = 74%, SE = 3.8) gave significantly higher likelihood estimates for the future possibility of True AI than female participants (M = 57%, SE = 3.1), W = 532.5, p < .001.
Figure 4 shows the distribution of male and female scores; the dashed lines indicate the median score.
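This comparison maps onto base R’s `wilcox.test()`, which runs the Mann-Whitney-Wilcoxon rank-sum test when given two groups. A sketch, assuming gender is coded `"male"`/`"female"` (the coding is illustrative):

```r
# Keep only participants who reported a gender identity;
# droplevels() removes unused factor levels so the test sees two groups.
d2 <- droplevels(subset(d, gender %in% c("male", "female")))

# Rank-sum test of likelihood ratings by gender
wilcox.test(likelihood ~ gender, data = d2)
```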
Ratings of sympathy and concern for the AI character in the video were averaged for each participant to create a single empathetic concern score.
A Pearson’s correlation test showed that trait empathy was correlated with empathetic concern for the True AI character, r = 0.242, p = .014, so it was included as a covariate in the following model.
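A sketch of these two steps, assuming the two 4-point ratings are stored as `sympathy` and `welfare_concern` (names are illustrative):

```r
# Average the two ratings into one empathetic-concern score per participant
d$concern_ai <- rowMeans(d[, c("sympathy", "welfare_concern")], na.rm = TRUE)

# Pearson correlation between trait empathy and concern for the AI character
cor.test(d$empathy, d$concern_ai)  # method = "pearson" is the default
```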
An analysis of covariance (ANCOVA) on empathetic concern for the True AI character showed a significant effect of the video experience, F(2, 98) = 4.383, p = .015, and a significant effect of the empathy covariate, F(1, 98) = 7.148, p = .009. Bonferroni-adjusted contrasts showed that empathetic concern was significantly higher in the Learning + Training + Serving condition than in the Learning only condition, padj = .020.
Figure 5 shows that adjusted mean empathetic concern for the True AI increased after participants observed how it was trained and how it served its owner. Error bars indicate standard error.
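A sketch of the ANCOVA and follow-up contrasts, using the same illustrative names and the emmeans package for covariate-adjusted comparisons (the original analysis may have used a different routine):

```r
# ANCOVA: effect of video condition on concern, controlling for trait empathy.
# Listing the covariate first gives the sequential (Type I) table from summary().
fit_ancova <- aov(concern_ai ~ empathy + condition, data = d)
summary(fit_ancova)

# Pairwise contrasts on the covariate-adjusted (estimated marginal) means,
# with Bonferroni-adjusted p-values
library(emmeans)
pairs(emmeans(fit_ancova, "condition"), adjust = "bonferroni")
```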
This report was created with RStudio and RMarkdown