VUI, alongside voice patterns, is quickly becoming one of the most important aspects of software design. Human voices are diverse, complex and nuanced, which makes designing voice commands tricky. Have you ever misunderstood a request from a colleague? If it can happen so easily between people in real life, it's no wonder that processing voice commands through a computer is daunting. Humans frame thoughts in a multitude of ways and have plenty of cultural differences that affect communication, including slang, abbreviations and more. All of this has an impact on the way we interpret and comprehend voices and words.
What makes a great experience with voice interactions
This means that software designers and engineers who are incorporating a chatbot or voice assistant into their products have a challenge to tackle: cultivating trust between the user and the artificial intelligence. That's where Voice User Interfaces come into play. VUI refers to the auditory, visual and tactile interfaces that make interaction between people and devices possible. These run a wide gamut, but they all work to drive the usability of your product and, done well, can build better experiences for your customers.
We can find voice interactions in everything from phones to TVs to smart homes and more. Voice recognition is quickly getting more advanced, and VUIs are very different from graphical user interfaces – they require totally different guidelines. Users have their own expectations about how spoken communication takes place and carry these over into communication with devices. To create an awesome product, you need a basic understanding of how people naturally communicate with their voices, as well as some of the fundamentals of voice interaction. If you don't understand the principles that govern human communication, you run the risk of frustrating users when things go wrong.
Important voice patterns
A great voice experience allows for the ways people set out to create meaning and intent. In order to evolve from building voice keyboards to creating conversational, voice-first experiences, you need to take a close look at certain attributes of voice communication between people. You can't just add voice to your app and call it a voice experience – you have to think about the whole experience from a voice-first perspective. There are subtle but crucial differences between designing for voice and designing for screen. Here are four voice design patterns that are important to voice-first interactions.
1. Be personal
Creating a personal experience for users is almost essential these days. In voice interactions, the interaction itself should be personalized. A singular interaction for everyone will feel generic and robotic instead of sincere. Voice-first interactions should be predictable but also varied and engaging. For example, instead of saying "Hello" every time, you can say "Hey there!" or "Happy Monday!" and so on, depending on the tone you're trying to set. If you're designing an experience with memory, you can make it even more personal so users can pick up right where they left off, as in "Hey Dave, you have logged in for 6 days in a row, you're a hero! Do you want to resume with your normal class or try something different?"
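The idea above can be sketched in a few lines. This is a minimal, hypothetical example (the `user` dict and its fields are illustrative, not any particular platform's API) showing varied greetings plus memory-based personalization:

```python
import random

# Illustrative sketch: vary the greeting each time, and use stored
# session state (a hypothetical user dict) to resume where the user left off.
GREETINGS = ["Hello!", "Hey there!", "Happy Monday!"]

def greet(user):
    """Return a varied, personalized greeting.

    `user` is a hypothetical record like {"name": "Dave", "streak_days": 6}.
    """
    opener = random.choice(GREETINGS)  # avoid the same robotic opener every time
    if user.get("streak_days", 0) > 1:
        return (f"{opener} {user['name']}, you have logged in "
                f"{user['streak_days']} days in a row, you're a hero! "
                "Do you want to resume with your normal class or try something different?")
    return f"{opener} {user.get('name', 'there')}!"
```

In a real assistant the greeting pool and user state would live in your session store; the point is simply that the response is drawn from several options and shaped by what the system remembers.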
2. Be adaptable
VUIs operate by listening for the user’s intent or what they want to accomplish. Phrases like “Now What,” “Yep,” “Sounds great,” and “Super” are called utterances and should be incorporated into a conversational UI. Through Automatic Speech Recognition and Natural Language Understanding, you can resolve all of these and more back to an “Okay” intent. Interfaces can also provide clues and guidelines for a user to lead them in the right direction since there is no navigation to scroll through. For example, you can design software to share a list of things users can say to get started. People should be able to talk just like they do in everyday life.
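The utterance-to-intent resolution described above can be sketched as follows. In production this mapping would come from an ASR/NLU pipeline rather than a lookup table; the table here is an assumption made for illustration:

```python
# Illustrative sketch: many different utterances resolve to one "Okay" intent.
# A real system would use an NLU model, not a hand-written set.
OKAY_UTTERANCES = {"now what", "yep", "sounds great", "super", "okay", "sure"}

def resolve_intent(utterance: str) -> str:
    """Map a raw user utterance to an intent name."""
    normalized = utterance.strip().lower().rstrip("!.?")
    if normalized in OKAY_UTTERANCES:
        return "Okay"
    return "Unknown"
```

An "Unknown" result is where the interface would offer those clues and guidelines, for example by listing a few things the user can say to get started.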
3. Be available
Designing for voice is very different from designing for screen. Voice UIs don't benefit from hierarchical menus: they hinder what the user is capable of doing and make the experience more confusing. If you're designing conversational, freeform interactions, you need all available features presented at the top level. For example, if a user visits their account to find their account number, you wouldn't want them to need several voice commands to sift through the information architecture to find it. They should simply be able to say "Find my routing number." It makes sense to program a variety of different, often-used commands into the software so users can try different things until they get what they are looking for.
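A flat, top-level command registry like the one described might look like this. The command phrases and action names below are hypothetical, standing in for whatever your backend exposes:

```python
# Illustrative sketch: every action is reachable from the top level,
# with several common phrasings mapped to the same action.
COMMANDS = {
    "find my routing number": "get_routing_number",
    "what is my routing number": "get_routing_number",
    "find my account number": "get_account_number",
    "show my balance": "get_balance",
}

def dispatch(utterance: str) -> str:
    """Resolve an utterance to an action in one step - no menu tree to walk."""
    action = COMMANDS.get(utterance.strip().lower())
    return action or "prompt_for_clarification"
```

Contrast this with a hierarchical menu, where the same request would take several turns ("Account" → "Details" → "Routing number"); here every often-used command is one utterance away.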
4. Be relatable
The best voice-first UIs are cooperative. Both participants in the conversation need to look beyond the explicit statements to find the implicit meaning and advance the conversation. If the user says "I'm looking for a winter coat," the VUI might respond with, "I have a number of winter coats in mind. Are you looking for extra warmth or a coat that's not too bulky?" With this question, the VUI is eliciting values like size, material, etc. It is programmed with synonyms like thin, thick, Gore-Tex, sub-thermal, etc. that match what a person says to what your API expects.
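The synonym matching described above can be sketched as a phrase-to-attribute lookup. The attribute values (`insulated`, `lightweight`, `waterproof`) are hypothetical API expectations invented for this example:

```python
import re

# Illustrative sketch: map a shopper's words to the attribute values a
# (hypothetical) product API expects. A real system would use NLU slots.
SYNONYMS = {
    "warm": "insulated", "thick": "insulated", "sub-thermal": "insulated",
    "thin": "lightweight", "not too bulky": "lightweight",
    "gore-tex": "waterproof",
}

def extract_attributes(utterance: str) -> set:
    """Return the API attribute values implied by the user's phrasing."""
    text = utterance.lower()
    # Word-boundary matching so "thin" does not fire inside "something".
    return {value for phrase, value in SYNONYMS.items()
            if re.search(r"\b" + re.escape(phrase) + r"\b", text)}
```

This is the cooperative move in code: the user never says "waterproof," but "Gore-Tex" carries that implicit meaning, and the interface translates it into terms the backend understands.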
These important voice patterns are crucial to developing rich and compelling VUIs. It’s time to join the designers who are changing the way people think about technology by using voice in creative ways.