Getting the most out of research

At Thoughtworks, we often have clients who are new to user-centered design, or are unsure how to best harness the value of a research team. They may be accustomed to using analytics and have seen their product teams make decisions based on research results, but are unsure how to take full advantage of what different research methodologies have to offer.

 

The goal of this article is to provide you with insight into the value of different types of research and how you can use them to build software products that delight your customers and achieve commercial success.

 

Definition of research

 

Let’s start with the definition of research. Research isn’t something that happens once, conducted by one person or one team before a project starts. It takes many forms and is conducted by different members of the team before, during and after the development process. In the broadest sense, research is anything that either helps you understand what you should do, or helps you understand if what you did was successful.  Different types of research can help you to:

 

  • Understand your user(s) and their needs, goals and pain points.

  • Understand the affordances and limitations of technology.

  • Untangle how to meet user needs with the given technology.

  • Measure the efficacy of a given solution.

  • Understand your own processes, products, and people.

 

The key is being able to use the right type of research at the right time, and to tie different types of research together to get the best results for yourself, your team and your product. 

 

Outcomes over output

 

Research when you’re creating software products is fundamentally different from academic research, in that the goal of this research should be to learn things that help you create a better product and drive the success of your business. As such, the research plan and results should emphasize quality over quantity, where quality means measurable, positive outcomes (as opposed to the number of studies or the thickness of the report). Research should provide you with the information needed to make data-driven decisions to achieve your business goals.

 

Choosing a research method

 

In the drive toward outcomes, the question you are trying to answer and the metrics for success should determine what type of research to conduct. For example, if you want to answer the question, “Is this solution effective?” then you need to define effective and have some form of the solution to test — anything from a paper prototype to working code. The definition of effective is key, since this can range from user satisfaction (e.g. 80% of customers rate the product 5 stars) to task completion (e.g. customers are able to complete key tasks on the first try) to meeting specific criteria (e.g. the deployment process can be completed in half the time).  

 

On the other hand, if you want to understand the affordances and limitations of a specific piece of technology, then you need a combination of experience with that technology, and expert knowledge about it. If the goal is something broader like “we need to transform,” then you will need an understanding of what “transform” means, which will likely require its own investigation. 

 

For example, the success metric “Kids eat a nutritious meal” is decidedly different from “Kids like everything on their plate.” In the first case, the key metric is nutrition, so input from a nutritionist is key. If you relied on the kids’ opinions, you might end up giving them a plate full of candy for dinner.

 

Example success metrics and research methods

 

  • Success = User satisfaction (e.g. kids like the food on their plate) --> Ask the users via interview or survey.

  • Success = Task completion (e.g. kids eat everything on their plate) --> Define those tasks and measure against them. As these criteria are not opinion- or satisfaction-based, the product’s value measures (e.g. % of plate emptied) should drive them.

  • Success = Criteria met (e.g. kids have a nutritious meal on their plates) --> Define those criteria and measure against them. As these criteria are not opinion- or satisfaction-based, the definition of correct should come from a governing body or a domain expert in that area (e.g., kids need n milligrams of Vitamin A per day).

 

Note that these methods aren't mutually exclusive, and that you might need a combination. For example, if the definition of success is: “Kids eat everything on their plate AND it is nutritious” then you need both the input of a nutritionist and to measure the emptiness of the plate. You may also need to make tradeoffs.

 

Also note that this can sometimes be automated and may need to be iterative. The amount of broccoli eaten, the number of applications deployed and time to completion can be measured on an ongoing and automated basis. If the results are below expectations, then you should examine why, which may require anything from debugging to engaging with users.

 

I need to...

 

This section is designed for reference when deciding what type(s) of research to do for a specific situation. Please note that the different needs and methods are not mutually exclusive. In fact, it’s always a good idea to triangulate your data.

 

Measure time on task or number or percent of tasks completed --> Leverage analytics 

  • Clearly define start and end points or qualifications. For example, if you want to measure the time to complete onboarding, you first need to define what indicates the start of onboarding and what indicates its completion. You also need to decide whether it is a measure of all of the related tasks or a pure stopwatch function (e.g., are you only counting the time it takes to complete specific tasks, or are you starting the timer when one task begins and running it until a set of following tasks is complete, regardless of whether other unrelated tasks are also being completed during that time?).
    • When possible, automate the measurement, and present it in a format that is usable to the primary consumer (e.g. spreadsheet or chart). A sketch of such a measurement follows below.
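
As a rough sketch of what an automated time-on-task measure might look like, assuming a simple event log exported from an analytics tool, with hypothetical event names like “onboarding_started” and “onboarding_completed”:

```python
from datetime import datetime

# Hypothetical event log: (user_id, event_name, timestamp) rows, as might be
# exported from an analytics tool. The event names are illustrative; the point
# is that "start" and "end" must be explicitly defined before measuring.
events = [
    ("u1", "onboarding_started",   "2024-05-01T09:00:00"),
    ("u1", "profile_saved",        "2024-05-01T09:04:10"),
    ("u1", "onboarding_completed", "2024-05-01T09:07:30"),
    ("u2", "onboarding_started",   "2024-05-01T10:15:00"),
]

def time_on_task(events, start_event, end_event):
    """Stopwatch-style measure: elapsed seconds from start to end event,
    per user, regardless of what else happened in between."""
    starts, durations = {}, {}
    for user, name, ts in events:
        t = datetime.fromisoformat(ts)
        if name == start_event:
            starts[user] = t
        elif name == end_event and user in starts:
            durations[user] = (t - starts[user]).total_seconds()
    return durations  # users without an end event (u2) drop out: incomplete tasks

print(time_on_task(events, "onboarding_started", "onboarding_completed"))
# {'u1': 450.0}, i.e. 7.5 minutes for u1; u2 never completed onboarding
```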

 

Learn if users are able to complete key tasks --> Measure task completion

  • Anything from paper prototypes to high-fidelity, clickable prototypes pulling real data can be used for evaluative user testing. In these studies, it’s essential to start by understanding the key tasks to test and who the right users are, then to create an appropriate script for testing.
 

Evaluate and select a technology --> Use expert knowledge 

  • Experienced experts will know how to evaluate the affordances and limitations of a technology — their experience allows them to know both what questions to ask and how to evaluate something.
    • You should ask the expert when:
      • The opinions of the users are not the key to success, or may run contrary to the definition of success (e.g., nutritious).
      • Experience matters.
      • Knowledge of existing products or processes is key.
      • They can act as proxies for large numbers of users. Experts can take many forms: SMEs, support personnel, and people who have been “in the trenches.”
 

Create best practices/standards --> Use expert knowledge

  • SMEs have a breadth and depth of industry experience they can draw on to help create standards and best practices. SMEs exist in all disciplines, from engineering to customer experience.
 

Achieve broad goals such as “Modernize our practices” --> Conduct discovery research

  • Broad research will help you understand products, people and processes. This is done through activities such as interviews and workshops that uncover what will move you forward or hold you back, and that help you understand your customers and yourselves.
 

Learn about end-user habits (what they actually do) --> Leverage analytics and observation

  • What people say they do and what they actually do are often different. Learning what people actually do can be achieved through means such as analytics and observational studies, and augmented through interviews. 
    • Analytics allow you to measure users’ actual actions in an existing product after it’s launched. Augmenting this quantitative data with qualitative interviews allows you to understand why they do what they do. Is it because the software forces them to? Is it because there is another process which they need to follow? Is it because they don’t actually understand how something works and they’re trying a hack to achieve an end goal? 
 

Learn about end-user goals (what they want to do) --> Conduct interviews & surveys 

  • While instrumentation will tell you what people actually do, talking to them will tell you what they want to do. More importantly, it will tell you why.  
    • Interviews allow you to dig into habits, goals and barriers.  
    • Surveys can be helpful to capture information from a large number of users and quickly aggregate results.
    • Depending on the case, you might consider forming advisory groups whom you can interview multiple times to learn how their specific experiences change.
 

Determine if “it worked” --> Start with the definition of success 

  • The only way to determine if something worked is to first define what “it worked” means. Use the definition of success as a guide for how to conduct research.
    • Beta testing - releasing to a small group of target users who are specifically there to give you feedback - lets you gather early feedback before a broader release and see how your product works in a real-world context.
 

Answer “Why” questions (Why they do this or that) --> Conduct interviews 

  • Interviews not only provide rich data on goals, but can also reveal when users are doing workarounds. 
 

Understand why internal users didn’t “get it right” or aren’t doing “what they're supposed to do” --> Conduct interviews 

  • Interviewing internal customers can reveal whether training issues, business processes or politics are playing a role.
 

Understand why external users didn’t “get it right” or aren’t doing “what they're supposed to do” --> Conduct interviews 

  • Interviewing external users may reveal that they don’t understand how to do something, or that they are trying to do something else and this is the closest solution they could find.
 

Decide between two or more options --> Use A/B testing  

  • Depending on the specific project, you can either test with a prototype in a usability setting — swapping which version users see first to account for any order bias — or A/B test with a live site.
    • If an application is already live and you are considering changes, you can A/B test by releasing an updated version that’s only visible to a subset of the population, allowing you to see actual usage results. One simple way to split that population is sketched below.
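
As a minimal sketch of splitting users between variants, assuming you are rolling your own bucketing rather than using a feature-flag or experimentation platform (the function and parameter names here are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into variant 'A' or 'B' so the same
    user always sees the same variant across sessions."""
    # Salting the hash with the experiment name makes different experiments
    # bucket users independently of one another.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "B" if bucket < split else "A"

# The same user and experiment always yield the same variant.
print(assign_variant("user-42", "new-checkout-flow"))
```

Measured outcomes (task completion, conversion and so on) can then be compared between the two groups.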
 

Make UI decisions --> Harness heuristics and existing patterns

  • Rules of thumb offer a quick way to answer common questions and help create a more usable product by default.
    • Using existing patterns within your ecosystem will also allow consistency between your products and make it easier for your users to understand new applications.  
 

Determine what to build --> Organize workshops 

  • Working with current and target customers as well as stakeholders will allow you to understand your customers, your products and yourselves.
    • Workshop exercises include Value Stream Mapping, User Journey Mapping, Elevator Pitch and more.
 

Determine what to build next --> Evaluate iterative feedback 

  • Continually re-evaluate what you need to build based on customer feedback from channels such as support and usage data, and combine this with an understanding of your evolving business needs.  

 

When not to do research

 

If something has been thoroughly researched already, and there is no reason to believe that the answers have changed, then there is no need to repeat that research. Similarly, if the results of one study apply to another situation, as often happens with user interface decisions (e.g., when to use radio buttons vs. drop-downs), then harnessing those answers will both save you time and create consistency across your applications.

 

Beware of organic knowledge that does not have hard research behind it. Many companies have an “urban myth” knowledge of their users, which started as an unproven hypothesis that somehow became fact, only for people to later learn that it’s wrong. The only way to know whether a fact applies to your case is to understand the research behind it, specifically the methodology and respondents.

 

It’s also important to understand that you are not your users. What’s obvious or intuitive for someone who designs software can often be mysterious or confusing for someone in other fields. Domain knowledge matters — I once did a series of interviews with doctors and nurses, and while they could amputate a limb or sew it back on, they couldn’t figure out how to click the link in a chat. Make sure that the existing knowledge is representative of the target users. 

 

When to use qualitative vs. quantitative research

 

Qualitative and quantitative research can both be a part of any study. An evaluative study of an existing prototype can measure quantitative time on task and gather qualitative information about how users feel about the task, the interface or any other aspect of the experience. For example, present a user with a prototype and a task. While they complete the task, note their comments, ask about their understanding and ask clarifying questions. In this way, you gain a qualitative understanding of their views while simultaneously measuring task completion.

 

This combination of “why” questions with other studies may lead to insights that you weren’t directly studying. For instance, while testing the usability of a starter kit, you might learn more about a specific group’s processes that impact their needs.

 

The qualitative responses themselves can also lead to quantitative information. By examining the responses, you can extract categories for further quantitative evaluation. For example, if you ask the open-ended question, “What is your favorite fruit?” you’ll soon see quantifiable patterns emerge, such as mangos and strawberries being mentioned 50% of the time but the grapple only once. You may then decide to do further qualitative research to understand “why” and end up learning about cultural influences. Likewise, if you ask potential alpha customers what infrastructure needs they have, you may be able to find patterns in their responses.
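
As a minimal sketch of turning open-ended answers into counts, with made-up responses standing in for real survey data:

```python
from collections import Counter

# Hypothetical open-ended answers to "What is your favorite fruit?"
responses = [
    "Mango", "strawberry", "mango ", "grapple",
    "Strawberry", "mango", "banana", "strawberry",
]

# Normalizing the free text (casing, whitespace; real data would also need
# synonym and spelling cleanup) and counting it turns qualitative answers
# into quantitative categories.
counts = Counter(r.strip().lower() for r in responses)
total = len(responses)
for fruit, n in counts.most_common():
    print(f"{fruit}: {n} ({n / total:.0%})")
# mango: 3 (38%), strawberry: 3 (38%), grapple: 1 (12%), banana: 1 (12%)
```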

 

Final thoughts

 

Don’t stop testing after you release! In fact, some testing needs to happen after a product is released: analytics will tell you what people really use and how they use it, and surveys will let you learn about customer satisfaction (CSAT) once the product is in use as a part of the users’ routine.

 

Remember, your research team is there to support you and help you create the best product.  If you’re running analytics of any kind, talking to users, doing design reviews or even just looking at how a product is performing in the market, then you’re already doing research. The key is to do it deliberately and systematically, followed by synthesizing all of the data (analytics, interviews, user studies and more) to draw a complete picture that moves you closer to your business goals. 

Are you ready to get the most out of your research?