
Lessons learned from life as a chatbot: Part 1

This is the first in a two-part series of articles exploring some of our learnings from working with chatbots.

It was a blindingly sunny day in the Bay Area, but inside a corporate meeting room, the shades were drawn. Kylie was FaceTiming with a 16-year-old cancer survivor, asking questions about platforms like Facebook Messenger. For a while, the conversation sounded like any you might expect from a 30-something asking a teenager about what the kids are using nowadays.


“Yeah, I use Messenger every day,” he replied nonchalantly. “All my friends are on it.”

“Ok, good to know. Do you have your Messenger app open? We have something in the works that we’d like you to take a look at...”

Kylie glanced at Janise, sitting just out of view of the camera. Her hands hovered over the keyboard, knuckles cracked and ready to fly at a moment’s notice. The Facebook Messenger UI open and waiting, the Google Doc of pre-written responses at the ready… it was bot time.


At HopeLab, when extensive needs-based user research surfaced the idea of a chatbot as a promising concept, we had a few options for next steps.
  • We could ask our audience what they thought about bots, and get them to anticipate how they might use one. But we’d already heard our audience didn’t know what bots were, and we knew humans — even super-switched-on teenagers — aren’t famous for their ability to predict the future accurately.
  • We could build out our idea as a lightweight bot and then test that. But without some indication that it was the right thing to build, why waste even a week of developer time to put together our best guess at what would be helpful to our users?
  • Or we could go with the fastest, cheapest, and most accurate way to test the idea: we could pretend to be a bot.
So we recruited a small group of users and asked them to chat with our trial automated service, Cancer Mindshift, over Facebook Messenger — with the knowledge that it was early days and the service was still only partly automated, partly human. With just a handful of users and a few days of part-time work alongside our normal jobs, our understanding of the product and our users skyrocketed. By pretending to be a bot, we learned more than we could have hoped for. Here are a few of our big takeaways:
  1. There's still plenty of confusion about what a bot even is. “Is it a human or a machine? Why isn’t it an app? What will happen when I talk to it?” Though bots are becoming more common on platforms such as Facebook Messenger, almost no one we spoke to admitted to having chatted with a “bot” or an automated text-based system before. And these answers came from technologically savvy millennials, who are very likely to have interacted with chatbots in the past. As designers of technology, we sometimes forget how long it takes for new tech to really sink into a culture’s understanding. Though people have been talking about bots for years in Silicon Valley, the concept is still very new to many of our users. And pretending to be a chatbot helped us find out exactly what was confusing them and what they needed to get on board.
  2. People really want signals that they're heard. This is true even if they know that the chatbot may not understand them yet and is still delivering generic responses. We found that bot responses don’t have to be long to be helpful — subtle responses that we as people use all the time in our spoken and text conversations are often sufficient; a minimal sketch of this pattern follows the list. Examples include “Sorry to hear that” or “Sounds like you’re dealing with a lot” or even an emoji.
  3. People sometimes open up to bots even more than to other people. We think this happens because people know that the feedback and advice coming back is truly objective. Though the idea was hard to wrap their minds around at first, once they got going, our users really started to open up. Here’s something we heard from a test user in this crucial test phase, after she asked for help with a stressful emotional situation she was experiencing. She was happy to let us share her feedback: “This is actually a true thing that is happening, and it was almost better to talk to the bot about it... Friends just say, ‘damn, that sucks’; and it's hard. The bot reminded me to break it down into small manageable bits — that so much of it is not in my control. So what can I do to feel empowered? That was super important to hear.”
  4. Bots are good at serving up content quickly, but people pretending to be bots aren’t. We quickly saw how hard it is to reply fast as a human, even with a simple script to copy and paste from. This is something that bots were made for — and people weren’t. And our users noticed the lag immediately, complaining that the system was too slow even when a response took only two seconds.
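
To make lessons two and four concrete, here is a minimal sketch in Python of how a bot can send a short, generic acknowledgment the instant a message arrives. It is purely illustrative: the function name and canned phrases are our own invention, not the actual Cancer Mindshift implementation.

    import random

    # Short, generic acknowledgments of the kind our users told us were enough.
    ACKNOWLEDGMENTS = [
        "Sorry to hear that.",
        "Sounds like you're dealing with a lot.",
        "💙",  # even a single emoji can signal that someone was heard
    ]

    def acknowledge(user_message: str) -> str:
        # Reply instantly with a canned acknowledgment, making no attempt
        # to understand the message yet. A real bot does this in
        # milliseconds; a human copying and pasting from a script cannot.
        return random.choice(ACKNOWLEDGMENTS)

    print(acknowledge("I'm really stressed about my scan next week"))

The point isn’t sophistication; it’s that even this trivial automation beats a human wizard on the one thing users noticed most: response time.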

So many lessons learned in so little time. We’re extremely grateful to our small but mighty group of testers, who were able to suspend disbelief in the concept of a bot long enough to give it a shot and to let us know what they thought as they interacted with it.

From just five 30-minute user sessions and a week of those five people texting intermittently with the system, we validated our product idea and found a plethora of ways to increase its likelihood of success going forward.

Stay tuned for Part Two, which will focus on lessons for designers and developers considering pretending to be a bot to validate their ideas.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
