At Thoughtworks, we often have clients who are new to user-centered design, or are unsure how to best harness the value of a research team. They may be accustomed to using analytics and have seen their product teams make decisions based on research results, but are unsure how to take full advantage of what different research methodologies have to offer.
The goal of this article is to provide you with insight into the value of different types of research and how you can use them to build software products that delight your customers and achieve commercial success.
Let’s start with the definition of research. Research isn’t something that happens once, conducted by one person or one team before a project starts. It takes many forms and is conducted by different members of the team before, during and after the development process. In the broadest sense, research is anything that either helps you understand what you should do, or helps you understand if what you did was successful. Different types of research can help you to:
Understand your user(s) and their needs, goals and pain points.
Understand the affordances and limitations of technology.
Untangle how to meet user needs with the given technology.
Measure the efficacy of a given solution.
Understand your own processes, products, and people.
The key is being able to use the right type of research at the right time, and to tie different types of research together to get the best results for yourself, your team and your product.
Research when you’re creating software products is fundamentally different from academic research: the goal should be to learn things that help you create a better product and drive the success of your business. As such, the research plan and results should emphasize quality over quantity, where quality means measurable, positive outcomes (as opposed to the number of studies or the thickness of the report). Research should provide you with the information needed to make data-driven decisions to achieve your business goals.
In the drive toward outcomes, the question you are trying to answer and the metrics for success should determine what type of research to conduct. For example, if you want to answer the question, “Is this solution effective?” then you need to define effective and have some form of the solution to test — anything from a paper prototype to working code. The definition of effective is key, since this can range from user satisfaction (e.g. 80% of customers rate the product 5 stars) to task completion (e.g. customers are able to complete key tasks on the first try) to meeting specific criteria (e.g. the deployment process can be completed in half the time).
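To make this concrete, here is a minimal sketch, in Python with invented data, of how two of these definitions of “effective” might be computed (the ratings and task flags are hypothetical stand-ins for whatever your survey tool or analytics actually produce):

```python
# Hypothetical data -- stand-ins for real survey and analytics exports.
star_ratings = [5, 4, 5, 5, 3, 5, 5, 4, 5, 5]        # one rating per customer
first_try_success = [True, True, False, True, True]  # one flag per key-task attempt

# Definition 1: user satisfaction -- share of customers rating the product 5 stars.
five_star_rate = sum(1 for r in star_ratings if r == 5) / len(star_ratings)

# Definition 2: task completion -- share of users who completed on the first try.
first_try_rate = sum(first_try_success) / len(first_try_success)

print(f"5-star rate: {five_star_rate:.0%} (target: 80%)")
print(f"First-try task completion: {first_try_rate:.0%}")
```

Whichever definition you choose, the point is that it is measurable before you run the study, not decided after the fact.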
On the other hand, if you want to understand the affordances and limitations of a specific piece of technology, then you need a combination of experience with that technology, and expert knowledge about it. If the goal is something broader like “we need to transform,” then you will need an understanding of what “transform” means, which will likely require its own investigation.
For example, the success metric “Kids eat a nutritious meal” is decidedly different from “Kids like everything on their plate.” In the first case, the key metric is nutrition, so input from a nutritionist is key. If you relied on the kids’ opinions, you might end up giving them a plate full of candy for dinner.
Example success metrics with research methods:
Kids eat a nutritious meal --> Get input from a nutritionist
Kids like everything on their plate --> Ask the kids and observe what gets eaten
Note that these methods aren't mutually exclusive, and that you might need a combination. For example, if the definition of success is: “Kids eat everything on their plate AND it is nutritious” then you need both the input of a nutritionist and to measure the emptiness of the plate. You may also need to make tradeoffs.
Also note that this can sometimes be automated and may need to be iterative. The amount of broccoli eaten, the number of applications deployed and time to completion can be measured on an ongoing and automated basis. If the results are below expectations, then you should examine why, which may require anything from debugging to engaging with users.
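As a sketch of what such automation can look like, the check below compares an ongoing measurement against an expectation and flags misses for follow-up (the data source, target and numbers are all hypothetical):

```python
from statistics import median

# Hypothetical ongoing measurement: deployment times pulled from CI/CD logs.
deploy_minutes = [12.5, 14.0, 11.2, 30.8, 13.1]
TARGET_MINUTES = 15.0  # hypothetical expectation, e.g. "half the old time"

observed = median(deploy_minutes)
if observed > TARGET_MINUTES:
    # Results are below expectations: trigger the "examine why" step --
    # anything from debugging the pipeline to interviewing its users.
    print(f"Median deploy time {observed:.1f} min exceeds target of {TARGET_MINUTES} min")
else:
    print(f"Deploy time on target: {observed:.1f} min")
```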
This section is designed for reference when deciding what type(s) of research to do for a specific situation. Please note that the different needs and methods are not mutually exclusive. In fact, it’s always a good idea to triangulate your data.
Measure time on task or number or percent of tasks completed --> Leverage analytics
Learn if users are able to complete key tasks --> Measure task completion
Evaluate and select a technology --> Use expert knowledge
Create best practices/standards --> Use expert knowledge
Achieve broad goals such as “Modernize our practices” --> Conduct discovery research
Learn about end-user habits (what they actually do) --> Leverage analytics and observation
Learn about end-user goals (what they want to do) --> Conduct interviews & surveys
Determine if “it worked” --> Start with the definition of success
Answer “Why” questions (Why they do this or that) --> Conduct interviews
Understand why users, internal or external, didn’t “get it right” or aren’t doing “what they're supposed to do” --> Conduct interviews
Decide between two or more options --> Use A/B testing (see the sketch after this list)
Make UI decisions --> Harness heuristics and existing patterns
Determine what to build --> Organize workshops
Determine what to build next --> Evaluate iterative feedback
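For the A/B testing row above, here is a minimal sketch of how such a decision might be made quantitatively, using a two-proportion z-test on conversion counts (the numbers are invented, and in practice an experimentation platform or statistics library would handle this for you):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert the z-score to a two-sided p-value via the normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: variant A converted 120/1000, variant B 150/1000.
p_value = two_proportion_z_test(120, 1000, 150, 1000)
print(f"p-value: {p_value:.3f}")  # below 0.05 suggests a real difference
```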
If something has been thoroughly researched already, and there is no reason to believe the answers have changed, then there is no need to repeat that research. Similarly, if the results of one study apply to another situation, as often happens with user interface decisions (e.g., when to use radio buttons vs. drop-downs), then harnessing those answers will both save you time and create consistency in your applications.
Beware of organic knowledge that doesn’t have hard research behind it. Many companies have “urban myth” knowledge of their users: an unproven hypothesis that somehow hardened into accepted fact, only for people to later learn that it’s wrong. The only way to know whether a fact applies to your case is to understand the research behind it, specifically the methodology and respondents.
It’s also important to understand that you are not your users. What’s obvious or intuitive for someone who designs software can often be mysterious or confusing for someone in other fields. Domain knowledge matters — I once did a series of interviews with doctors and nurses, and while they could amputate a limb or sew it back on, they couldn’t figure out how to click the link in a chat. Make sure that the existing knowledge is representative of the target users.
Qualitative and quantitative research can both be part of any study. An evaluative study of an existing prototype can measure time on task quantitatively while gathering qualitative information about how users feel about the task, the interface or any other aspect of the experience. For example, present a user with a prototype and a task. While they complete the task, note their comments, ask about their understanding and ask clarifying questions. In this way, you gain a qualitative understanding of their views while simultaneously measuring task completion.
This combination of “why” questions with other studies may lead to insights that you weren’t directly studying. For instance, while testing the usability of a starter kit, you might learn more about a specific group’s processes that impact their needs.
The qualitative responses themselves can also lead to quantitative information. By examining the responses, you can extract categories for further quantitative evaluation. For example, if you ask the open-ended question, “What is your favorite fruit?” you’ll soon see quantifiable patterns emerge, such as mangoes and strawberries being mentioned 50% of the time but the grapple only once. You may then decide to do further qualitative research to understand “why” and end up learning about cultural influences. Likewise, if you ask potential alpha customers what infrastructure needs they have, you may be able to find patterns in their responses.
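As a sketch of that extraction step (the responses and categories below are invented for illustration), simply counting category mentions across the open-ended answers is often enough to surface the pattern:

```python
from collections import Counter

responses = [
    "mango, definitely", "strawberries", "I love mangoes",
    "strawberry shortcake counts?", "grapple", "mango",
]

# Hypothetical categories extracted from a first read of the responses.
categories = {"mango": "mango", "strawberr": "strawberry", "grapple": "grapple"}

counts = Counter()
for answer in responses:
    for keyword, category in categories.items():
        if keyword in answer.lower():
            counts[category] += 1

for category, n in counts.most_common():
    print(f"{category}: mentioned in {n / len(responses):.0%} of responses")
```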
Don’t stop testing after you release! In fact, some testing needs to happen after a product is released: Analytics will tell you what and how people really use something, and surveys will allow you to learn about customer satisfaction (CSAT) once the product is in use as a part of the users’ routine.
Remember, your research team is there to support you and help you create the best product. If you’re running analytics of any kind, talking to users, doing design reviews or even just looking at how a product is performing in the market, then you’re already doing research. The key is to do it deliberately and systematically, followed by synthesizing all of the data (analytics, interviews, user studies and more) to draw a complete picture that moves you closer to your business goals.