Virtual assistants have come a long way since the days of Microsoft Word’s Clippy, an animated paper clip that asked whether you needed help with your document. Every day, virtual assistants are becoming more sophisticated and human-like, with the aim of making programs and apps easier to use. But new research suggests that human-like virtual assistants may actually deter some people from seeking help on tasks that are meant to measure achievement.
"We demonstrate that anthropomorphic features may not prove beneficial in online learning settings, especially among individuals who believe their abilities are fixed and who thus worry about presenting themselves as incompetent to others," says psychological scientist and study author Daeun Park of Chungbuk National University. "Our results reveal that participants who saw intelligence as fixed were less likely to seek help, even at the cost of lower performance."
Other research has shown that people are inclined to treat computerized systems as social beings given only a couple of social cues. This social dynamic can make the systems seem less intimidating and more user-friendly, but Park and co-authors Sara Kim and Ke Zhang wondered whether it would hold in situations where performance matters, such as online learning platforms.
"Online learning is an increasingly popular tool across most levels of education and most computer-based learning environments offer various forms of help, such as a tutoring system that provides context-specific help," says Park. "Often, these help systems adopt humanlike features; however, the effects of these kinds of help systems have never been tested."
In one study conducted online, the researchers had 187 participants complete a task that supposedly measured intelligence. In the task, participants saw a group of three words and had to come up with a fourth word related to all three. On the more difficult problems, they automatically received a hint from an onscreen computer icon: some participants saw a computer “helper” with humanlike features, including a face and a speech bubble, while others saw a helper that looked like a regular computer.
Participants reported greater embarrassment and greater concern about self-image when receiving help from the anthropomorphized computer than from the regular one, but only if they believed that intelligence is a fixed trait rather than a malleable one.
The findings indicated that just a couple of anthropomorphic cues are enough to raise concerns about seeking help, at least for some people. Park and colleagues tested this directly in a second experiment with 171 university students.
In this experiment, the researchers manipulated how participants thought about intelligence by having them read made-up science articles that highlighted either the stability or the malleability of intelligence. Participants then completed the same kind of word problems, but this time they could freely choose whether to receive a hint from the computer “helper”.
The results showed that students who were led to think of intelligence as fixed were less likely to request hints when the helper had humanlike features than when it did not, and, as a result, they answered more questions incorrectly. Those who were led to think of intelligence as malleable showed no such differences.
The findings could have implications for our performance on online learning platforms. "Educators and program designers should pay special attention to unintended meanings that arise from humanlike features embedded in online learning features," says Park. "Furthermore, when purchasing educational software, we recommend parents review not only the contents but also the way the content is delivered."
The findings of this research were published in Psychological Science.