A lot of bot designers shy away from broaching difficult topics. Anything that might be deemed sensitive, personal, or embarrassing gets stamped “too risky” and scrapped.
It’s true that sometimes this impulse is spot on. It would be disastrous to replace crisis hotline responders with bots—these are conversations that need to be highly nuanced, and an error could mean the difference between life and death. You don’t really want that on your hands.
But by and large, I think people are too cautious in their assumptions about what may work as an automated conversation.
Provided the context is right and your interaction is designed thoughtfully, you can be really successful, and potentially do a lot of good.
Take this example from the journal Computers in Human Behavior. A 2014 study called “It’s only a computer: Virtual humans increase willingness to disclose” demonstrated exactly what you’d guess from the title: when people believed they were talking to a virtual health interviewer rather than a real one, they felt more comfortable divulging sensitive information and expressing sadness. The same effect has been observed with virtual human interviewers in other contexts. It’s not that people want to lie; it’s that having a human in front of us triggers our natural instinct to “impression manage,” that is, to control how the other person perceives us out of fear of being judged. With a computer, that pressure is greatly reduced.
I’ve found this to be true in my own experience in the healthcare industry. Initially, I had many of the same fears as others whenever a potentially dicey topic came up: Won’t people feel weird saying something so personal to a robot? Will they worry about some kind of malicious intent? I was quickly proven wrong. When we paid careful attention to crafting the interactions in a supportive, trustworthy way, people felt comfortable disclosing things I never would have imagined: self-harm, depression, low food access, low medication adherence, and even health status markers like the results of a Hepatitis C test. Not only could we save time for busy clinical staff by handling these matters in an automated way, but we could also do a better job of helping patients feel safe enough to answer honestly. As a result, people were more often connected with the help and resources they needed.
So don’t underestimate the power of automated conversations. There are still plenty of ways to go wrong when designing these tough interactions, though. Let me walk through some things I’ve learned in the field.
Frame it with care
Even though it’s a bot, you can’t be completely sterile and expect people to share at the same rates. A bot that barks “Do you have depression??” is going to feel a whole lot different than a bot that says, “Some people tend to feel a bit sad or low after leaving the hospital. Is this something you’re experiencing?” Make sure your users feel safe and supported. And don’t forget your response post-disclosure: if someone does tell you “yes,” you need to acknowledge it in some way before moving on.
Something like “I’m sorry to hear that, I’ll make a note” will make someone feel a lot better than a bald “okay.”
Even with a bot, sharing sensitive things can make people feel vulnerable, so make sure you meet those feelings with care and warmth.
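As a rough sketch, here’s one way to encode that guidance in a dialog definition. The structure, field names, and copy are all illustrative, not from any particular bot framework: the idea is simply that a sensitive question carries its supportive framing with it, and that every possible answer gets an acknowledgment before the conversation moves on.

```python
# Hypothetical dialog-node structure for a sensitive yes/no question.
# The prompt normalizes the experience before asking, and each answer
# has an acknowledgment so a disclosure is never met with silence.
SENSITIVE_QUESTION = {
    "id": "mood_check",
    "prompt": (
        "Some people tend to feel a bit sad or low after leaving the "
        "hospital. Is this something you're experiencing?"
    ),
    "acknowledgments": {
        "yes": (
            "I'm sorry to hear that. I'll make a note so your care "
            "team can follow up with you."
        ),
        "no": "Glad to hear it. Thanks for letting me know.",
    },
}

def respond(node: dict, answer: str) -> str:
    """Return the acknowledgment for an answer, falling back to a
    neutral, warm thank-you if the answer isn't recognized."""
    return node["acknowledgments"].get(
        answer.strip().lower(), "Thanks for sharing that with me."
    )
```

However your platform represents dialog, the design point is the same: the acknowledgment is defined alongside the question, so no one can ship the question without deciding how to respond to a “yes.”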
Explain why you’re asking
Things don’t go over well when you don’t explain the purpose of a question and what you intend to do with the answer. If your user doesn’t have a reasonable way to infer all that, lay it out for them. Something like “I’d like to confirm your birth date so I can make sure it’s really you before we proceed” will be met with less resistance than “What’s your birth date?” In this age of privacy concerns, even asking people for things like their date of birth, email address, or phone number can provoke the same kind of anxiety as asking for sensitive health information.
Don’t make people feel targeted
In a similar vein, it’s important to make sure people don’t feel singled out by your content. Even if an individual question is designed sensitively, it can still make someone uncomfortable if it carries special weight for them based on their identity. For example, questions about drug use could be offensive to people, such as African Americans or Latinos, who have been inaccurately stereotyped as using drugs at higher rates. They may wonder whether you’re asking only because of the demographic information you have on them. So if you’re asking the same questions of everybody who uses your service, say so. And if you are targeting people in some way, such as asking specific questions of new parents, explain how you got that information so they don’t feel creeped out.
Give them an out
It’s also important to provide a means for people to opt out of a sensitive conversation or question. Sometimes, however carefully you frame something, people just won’t want to engage, and you want to respect that. Let them know they can skip, and if possible, explain how they can accomplish the task another way that may be more comfortable for them. When an opt-out is part of the design itself, fewer users will feel compelled to abandon the interaction altogether. Using autonomy-supportive language that affirms people’s choices is also a good way to build trust overall, even with users who would have felt fine answering the question. If it feels like you’re forcing people to do things, you can trigger psychological reactance, a resistance response that makes people want to do the opposite.
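One way to bake the opt-out into the design is to treat skipping as a first-class answer rather than an error. Here’s a minimal, self-contained sketch; the skip phrases, prompt copy, and function names are all hypothetical:

```python
# Hypothetical handler that treats opting out as a valid choice,
# not a misunderstood input that triggers an error message.
SKIP_PHRASES = {"skip", "pass", "prefer not to say"}

PROMPT = (
    "If you'd rather not answer, just say 'skip' and we'll move on. "
    "You can always discuss this with your nurse directly instead."
)

def handle_answer(answer: str) -> str:
    """Route an answer, respecting an explicit opt-out."""
    if answer.strip().lower() in SKIP_PHRASES:
        # Respect the choice and point to a more comfortable alternative.
        return (
            "No problem, we'll skip that one. You can always bring it "
            "up with your care team whenever you're ready."
        )
    return "Thanks for sharing that with me."
```

Because “skip” is recognized explicitly, the user who declines gets a warm confirmation instead of an “I didn’t understand that” loop, which is exactly the kind of moment that drives people to abandon the conversation.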
Know when to scrap it
Although, on the whole, these sensitive conversations are avoided more often than they should be, you also need to know when you’re working with a bad idea. You’re usually fine when a conversation is simple, like a yes/no question. But if your topic is complex and your users’ input is likely to be diverse, you run the risk of misunderstanding their intent, leading to confusion and error messages. For non-sensitive complex topics, it’s sometimes fine to take your best guess, put it out into the world, and then iterate; the cost of a hiccup isn’t so high. But as you’d imagine, you risk a lot more damage in sensitive conversations.
I hope this primer helped you start thinking about how to handle difficult topics in your bot. Have any other insights to share? Thinking about a tricky conversation you may want to broach in your bot? Let me know in the comments! Check out Botsociety to get started designing questions of your own today!