Let’s get right into it: we hope these lessons will aid others on the journey toward creating successful chat products by pretending to be a bot:
Be upfront about who is behind the bot and nail the intro… eventually: By listening to the questions our users asked as they interacted with our fake bot, we learned exactly what content we’d need to provide in the bot’s introduction: who made the bot, what it does, and how privacy is handled.
Different bots will likely raise slightly different questions in users’ minds as they’re just starting out, so while pretending to be a bot, feel free to leave the intro out. Then, while your testers are being observed, you can hear them ask their questions aloud and learn which ones are truly important to answer right up front.
Then, once you understand your users’ questions, take the time to make your bot’s introduction thoughtful and helpful while still being concise. If we’d pretended to be a bot with intro content already included, it’s doubtful users would have told us which parts they didn’t need. Listening first helped us keep the intro shorter and sweeter than if we’d front-loaded every piece of information we assumed might be helpful.
‘More is more’ when it comes to directing users toward what to do next: In our ‘fake bot’ test period, users told us they were unsure how to respond and what they were supposed to be doing. By age 15 (the start of our target range), people know how to talk to a human: how to break the ice, how to ask questions, how to interpret what the other person is saying. Not so yet with bots! And that’s part of the delight of a chatbot done well at this early stage; people don’t know what to expect, so we have the potential to delight them, or to disappoint in a big way.
We found that quick-reply buttons can help users know how to respond, especially early in an experience; they set the tone for how the user should engage, so they don’t have to think about what to type. We learned the hard way how important it is to be even more explicit about what’s expected of users than you would be in a conversation with a person, or in a traditional digital experience.
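On most messaging platforms, quick-reply buttons are just structured data attached to an outgoing message. Here’s a minimal sketch of that idea; the function and field names are illustrative, not any specific platform’s schema:

```python
def with_quick_replies(text, options):
    """Attach tappable quick-reply buttons to a bot message so users
    don't have to guess what to type next. Purely illustrative payload
    shape; real platforms each define their own fields."""
    return {
        "text": text,
        "quick_replies": [
            {"title": opt, "payload": opt.upper().replace(" ", "_")}
            for opt in options
        ],
    }

# The first thing the bot sends can narrow an open-ended prompt
# down to a few tappable choices.
msg = with_quick_replies(
    "Hi! What would you like to do first?",
    ["Learn about me", "Ask a question", "Just chat"],
)
```

Even while running a fake bot by hand, offering a short list of choices like this in the opening message does the same job: it shows users the kind of response that’s expected.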
Take advantage of this opportunity to try all the features you want and see what sticks! We found that the chatbot reaching out, say with a reminder or notification, was an exciting feature for our 15–25-year-old audience!
As designers and product strategists with full-time jobs, we often feel inundated by reminders as it is, so we were surprised to hear how excited this audience was that we’d be able to reach out with reminders for things they care about. While it will take a little while to build that into the app, it was easy to offer the feature as humans, and it let us see how users responded to the capability. Now it’s on our roadmap; if we’d gone on gut feel alone, we might not have prioritized it so highly.
Set expectations for your fake bot’s response time: We did our best to respond to users’ questions and conversations as soon as we could, and even made ourselves available after office hours to meet the needs of our first users. But occasionally we took too long to respond and broke our users’ trust; when you text a human, you expect the conversation to be asynchronous. But when you text a bot, something seems wrong if it doesn’t reply right away. So set expectations up front about what hours the bot will be available during this fake-bot testing phase, and then be sure to stick to them.
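One way to keep that promise honest is a tiny off-hours auto-acknowledgement, so a slow human reply doesn’t read as a broken bot. A rough sketch, assuming you’ve advertised specific hours (the hours and wording below are made up):

```python
from datetime import time, datetime

# Hypothetical advertised availability window for the fake-bot phase.
OPEN, CLOSE = time(9, 0), time(21, 0)

def off_hours_reply(now):
    """Return a canned reply if a message arrives outside stated hours,
    or None when a human is available to answer live."""
    if OPEN <= now.time() <= CLOSE:
        return None
    return "I'm away until 9am, but I'll reply first thing!"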
Make it easy for users to provide ongoing feedback about the experience. After our initial 30-minute session with each test user, we asked them to interact with the bot for the following week. And the most enlightening part of that week wasn’t how often they used it or how they answered the bot’s questions, but what they told our team directly through the bot. We told them they could preface any message to the bot with an asterisk if they wanted to be sure we saw it, and that we’d appreciate feedback sent this way. Here’s an example of something a user let us know through this mechanism:
“*one thing the bot should/could be prepared to help/aid/assist with is anxiety concerning test results… what I like so much about the last response ("what is in your control") is that it gave me power. Anxiety (I think) is a fairly common thing among survivors and current patients. And the thing is we're pretty anxious bc all the things aren't in our control.”
This rich piece of feedback was sent in the moment, with very little effort from the user, and it opened our eyes to a whole new content area to explore.
And make it easy and straightforward for yourself to ask follow-up questions. Another thing that worked well was having a scripted reply prepared so we could paste it in whenever someone left feedback with an asterisk, letting them know they were heard (“*Noted, thanks for the feedback!”). This set a precedent for asterisk-prefixed messages living outside the normal bot chat flow.
This meant we could even send personal messages or requests to participants, always starting with an asterisk and introducing ourselves by our real names to keep it clear who they were talking to. We used this method to ask clarifying questions about their asterisk comments, request that they pick a time for a follow-up session, and, in the case below, ask for permission to use a message in our blog post!
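If the fake bot later becomes a real one, the asterisk convention translates directly into a message router. A minimal sketch (the function names are ours, not from the post; only the asterisk convention and the canned acknowledgement come from it):

```python
CANNED_ACK = "*Noted, thanks for the feedback!"

def route_message(text):
    """Route an incoming message: asterisk-prefixed notes go to the
    research team and get the canned acknowledgement; everything else
    stays in the normal bot chat flow."""
    if text.startswith("*"):
        return "feedback", CANNED_ACK
    return "chat", None
```

Keeping the convention identical across the fake-bot and real-bot phases means participants never have to relearn how to flag something for the humans behind the curtain.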