When designing for voice, it may be tempting to think many of the same old UX rules apply. In reality, designing for voice is fundamentally different from designing other digital products. You need to understand both the psychology of how people naturally use their voices to communicate, and how you want them to use their voice to get the most out of your product. Whether you are designing a product where voice serves as a complement to other features, or you are designing hardware where voice is the main feature, here are a few best practices to consider.
How do people naturally communicate?
You are probably familiar with the notion that more than 90 percent of communication is nonverbal. The researcher Albert Mehrabian is most often credited with the formula for nonverbal communication: 7 percent is verbal, 38 percent is tone of voice, and 55 percent is body language. While there is some debate over the accuracy of these numbers (every situation is different), there is some truth to be found here. Mainly, when we use our voice to communicate, we are used to pairing it with additional cues to convey our true meaning to the other person. Obviously, when a user is talking to a device, they can't rely on body language. When designing for voice, we have to both account for that missing channel and leverage the user's intuition.
Voice Design Best Practices
- Set expectations. Whether the product includes other features in addition to voice, or it relies entirely on verbal commands to operate (like Amazon’s Alexa), voice is meant to be a more convenient option. So avoid making it more cumbersome than it needs to be. It does no good to use voice and ask for concert tickets if that’s out of the scope of what the product can do. When designing for voice, the first thing you should do is set user expectations so they know how to use their voice with your product. Siri does this with a combination of audible and visual cues. How many times have you accidentally activated Siri, only to see these words flash across the screen: “Some things you can ask me: ‘FaceTime Lisa’”? This is Apple’s way of setting realistic expectations for what Siri knows how to do. The same goes for your own product’s voice interface design. As soon as possible, tell the user what they can expect to get when they use their voice to make a command.
- Hold the user’s verbal hand. Unlike devices that rely primarily on text or navigation, where a user can clearly see where they are or can control their ability to get to where they want to go, voice-based products need to keep the user aware of the path they are following. The Interaction Design Foundation uses the example of a weather app. Rather than responding with “sunny and dry,” a better response would be “Today’s weather forecast is sunny and dry.” This not only confirms the user’s presumed intention (maybe they wanted the forecast for the next hour, not the entire day), it also provides an example of how they can ask for information in the future. Design conversation paths so you can direct the user where they want to go.
- Incorporate visual cues. If most communication is nonverbal, then it makes sense to leverage that reality in voice interface design. Example: when a user asks Alexa a question, lights appear to show that the device is processing the request. Similarly, when a user asks Siri to search online for an answer to a question, Siri will display the text of the question as she understood it. Both of these are different types of visual cues that reduce frustration for the user. Think about the visual cue that makes the most sense for your product, and incorporate it into the overall voice interface design.
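Two of the practices above, restating the user's interpreted intent and setting expectations when a request is out of scope, can be sketched in a few lines. This is a minimal illustration with hypothetical names (`respond`, `EXAMPLE_COMMANDS`), not the API of any real voice platform:

```python
# Hypothetical response builder illustrating two voice-design practices:
# 1. Echo the interpreted intent back in a full sentence ("hold the
#    user's verbal hand"), which also models how to phrase future requests.
# 2. On an out-of-scope request, set expectations by listing example
#    commands instead of failing silently.

EXAMPLE_COMMANDS = [
    "What's the weather today?",
    "Set a timer for ten minutes",
]

def respond(intent: str, result: str) -> str:
    """Build a spoken response that restates the user's request."""
    if intent == "weather":
        # "Today's weather forecast is sunny and dry" rather than just
        # "sunny and dry" -- the restatement confirms what was understood.
        return f"Today's weather forecast is {result}."
    if intent == "timer":
        return f"Timer set for {result}."
    # Out-of-scope request: tell the user what the product *can* do.
    examples = "', '".join(EXAMPLE_COMMANDS)
    return f"Sorry, I can't do that yet. Some things you can ask me: '{examples}'."

print(respond("weather", "sunny and dry"))
print(respond("concert_tickets", ""))
```

The same pattern applies whatever the platform: every response either confirms the interpreted intent or redirects the user toward commands the product actually supports.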
Find your voice
As people increasingly use devices to multitask, voice functionality will likely continue to grow. Botsociety can help you approach voice interface design thoughtfully. By keeping the unique characteristics of verbal and nonverbal communication top of mind, you can design the best possible user experience.
Also published on Medium.