Will People Trust Voice Assistants to Make Medical Appointments?

By Centific Editors

Do you trust a voice assistant to make your medical appointments? Amazon is betting that your answer is “Yes.”

Amazon and virtual care provider Teladoc recently announced that they are making it possible for patients to request a doctor through Amazon's Alexa voice assistant. Through a voice-activated virtual care program, patients can get non-emergency medical help by telling Alexa (via an Amazon Echo device) that they want to see a doctor. A Teladoc physician will then call them back.

This partnership could be a step forward for voice assistants fueled by artificial intelligence.

A Matter of Trust

Using voice assistants for medical care requires that people trust them to do things more personal and consequential than checking the weather. Trust operates on a rational level (“Is Alexa accurate?”) and an emotional one (“How do I feel about talking to a machine?”). Amazon and Teladoc are betting that Alexa has earned both rational and emotional trust, which would be a breakthrough for AI if the service takes off. But building trust in any AI product is difficult.

Rational trust is intuitive to understand. People have to feel confident that their voice assistant will do the tasks it was designed to do. When it comes to managing medical information, voice assistants still have plenty of room for improvement, which is probably why Amazon and Teladoc are focusing on using Alexa to schedule medical appointments rather than to manage complex medical issues. We’re not quite ready to rely on voice assistants to answer questions about our health.

Emotional trust is more complicated, but it’s just as important as rational trust. When we trust a voice assistant emotionally, we’re comfortable having it ever-present in our most intimate living spaces, as we are with devices such as clocks and stereos. To trust a device emotionally, we need to feel comfortable with its appearance and, perhaps more importantly, with the tone of voice it uses. The University of Waterloo conducted a study in which people interacted with Amazon Alexa, Google Assistant, and Apple Siri. To measure consumer trust, the subjects were asked to describe how they perceived each virtual agent and, finally, what each digital assistant would look like if it were a human being. The more humanlike a voice assistant seemed, the more likely people were to trust it. This is why technology companies such as Amazon continue to refine their voice assistants with different vocal tones.

There is one other important aspect of building emotional trust that is often overlooked: how inclusive the device is. All segments of a population need to feel that a device caters to their needs before it can build trust at scale. If only one segment is comfortable using a device – say, white Americans whose first language is English – then trust does not happen at scale. Rather, trust is limited to that audience.

This is why businesses are paying more attention to capabilities such as AI localization: training AI-based products and services to adapt to local cultures and languages. A voice-based product, e-commerce site, or streaming service must understand the differences between Canadian French and European French, or know that in China, red is considered an attractive color because it symbolizes good luck. AI-based products and services don’t know these things unless people train them with fair, unbiased, and locally relevant data, and an AI engine requires that data at far greater scale. (For example, for one of our clients, Centific delivered 30 million words of translation within eight weeks.) Consequently, more people are needed to train AI to deliver a better result.

Alexa has plenty of room to grow. Alexa can speak English, Spanish, French, German, Italian, Hindi, Japanese, and Portuguese. American, British, Australian, Canadian, and Indian dialects are available for English; Spanish, Mexican, and American variants for Spanish; and European and Canadian dialects for French. This is a great start. Now consider that there are 6,000+ languages across the world, and the scope of the challenge becomes clearer.

Alexa is making progress. In 2020, Amazon launched a multilingual mode on Echo devices that allows bilingual customers to switch between English and Spanish, and vice versa, as they instinctively do while chatting with friends and family. Amazon did this in response to the reality that more multigenerational families now live together, which in some communities brings multiple languages into the home. Amazon also launched Live Translation for U.S. customers, an Alexa feature that assists with conversations between people who speak two different languages.

The Importance of Mindful AI

We believe that Amazon’s partnership with Teladoc can succeed by embracing Mindful AI – making AI more valuable, trustworthy, and inclusive. We define Mindful AI as developing AI-based products that put the needs of people first. Mindful AI pays special attention to the emotional wants and needs of all the people for whom an AI product is designed – not just a privileged few. When businesses practice Mindful AI, they develop AI-based products that are more relevant and useful to everyone they serve.

Mindful AI is not a solution; it’s an approach. There is no silver bullet that will make AI more responsible and trustworthy. By its very nature, AI will always be evolving. But Mindful AI takes the guesswork out of the process. Learn more about Mindful AI in this blog post, and contact Centific to get started.

Photo source: https://pixabay.com/photos/telemedicine-doctor-laptop-6166814/