The Uncanny Valley: Why Consumers Distrust Lifelike AI
Guest Writer
Despite the rise of voice assistants like Amazon Alexa, people are uncomfortable with lifelike AI. For example, Google unveiled the “Duplex” feature for the Google Assistant last year. The human-sounding AI could make simple phone calls on behalf of users, mainly for booking restaurant reservations.
The AI sounded too lifelike. Call recipients reported feeling “creeped out” by the Duplex bot because it was almost indistinguishable from a human. This is an example of “the uncanny valley”: the eerie feeling people get when human-like AI imitates a person yet falls short of seeming completely real. This gap in realism leads to feelings of revulsion and mistrust.
Building warm, trustworthy relationships with customers takes careful design. For AI to win consumer trust and deliver better business outcomes, developers need a solid grasp of the uncanny valley and its consequences.
Before adopting AI, businesses should weigh the many ways it may affect consumers’ trust.
AI’s increased realism is unnerving, but this negative emotional response is nothing new. Looking at lifelike dolls, corpses, and even prosthetic limbs can trigger the same effect. This is because lifeless yet human-esque objects remind us of our own mortality. Sci-fi and horror films utilize this phenomenon to great effect, conjuring images that are too close for comfort.
Lifelike AI is also disturbing because humans are biologically incentivized to avoid those who look sick, unhealthy, or “off.” This instinct, known as “pathogen avoidance,” evolved to protect us against dangerous diseases. Lifelike AI seems almost human, but almost human isn’t enough.
Humans have evolved to control their environment. As a result, we hesitate to delegate tasks to algorithms that are neither fully understood nor failsafe. So when AI fails to perform to human standards, which is often, we’re acutely aware.
For example, Uber’s self-driving cars have yet to operate safely without human oversight. And according to research from UC Berkeley, one AI mortgage-lending system charged minority homeowners higher interest rates on home loans.
Even in the case of Google Duplex, users doubted whether the AI could correctly understand the simple details of their restaurant reservation.
AI is perceived as untrustworthy because its failures stick out, no matter how often it succeeds. Though convenience is appealing, consumers demand reliability, control, and comfort when using the technology.
Voice assistants like Amazon Alexa occupy a happy medium for users: the AI isn’t too lifelike, and it’s easy to understand how to control it. People only trust what they understand, and lifelike AI remains poorly understood.
To gain trust, AI developers and businesses must ensure a more comfortable AI experience for users. Foremost, this means that the AI should appear and sound less human.
People want technology such as Google Duplex to announce itself as AI; knowing they are talking to a machine makes them more comfortable. Visually, AI can be designed to appear cute rather than anatomically accurate. If the AI is easily distinguishable from a human, people are more likely to adopt it.
Although many machine learning models are too complex for most people to fully understand, transparency and explainability engender trust. To this end, sharing information about AI decision-making processes can shine a light into the “black box” of machine learning algorithms. In one study, people were more likely to trust and use AI in the future if they were allowed to tweak the algorithm to their satisfaction.
This suggests that both a sense of control and familiarity are key to fostering acceptance for lifelike AI.
Finally, if consumers won’t trust a business’s AI system, fall back on the old-fashioned way and use humans to communicate with customers, seeking help from third-party sources like virtual assistants so the task doesn’t become overwhelming.
To open people up to lifelike AI, companies must avoid the uncanny valley. Familiarity, education, and visual distinction are needed to help people be comfortable in the presence of humanoid technology.