Agile Project Management | Technology

How estimating with "story counts" worked for us

Huimin Li

Published: May 14, 2013

For about two years now, a norm has emerged on the Mingle team: “Every story is 4 points.” As a BA on our team, I quipped, “Well, that’s because our BAs are particularly good at writing stories.” :) And then I started digging into the data to understand why.

Let’s analyze our data


I created two charts below using data from one of Mingle’s previous releases and found them to be strikingly similar.

This chart maps the story count over 3 months for a release:

This chart maps story points over 3 months for a release:

Aside from the Y-axis scale, can you tell any obvious difference? I bet not.
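If you want to try this comparison on your own data, here is a minimal Python sketch (not our actual tooling; the completion log below is entirely hypothetical and stands in for whatever your project tool can export):

```python
import numpy as np

# (week_completed, story_points) for each finished story -- hypothetical data
completed = [(1, 2), (1, 4), (2, 4), (2, 8), (3, 2), (3, 4), (4, 4), (4, 4),
             (5, 1), (5, 8), (6, 4), (6, 2), (7, 4), (8, 4), (8, 2)]

weeks = range(1, max(w for w, _ in completed) + 1)
count_curve = [sum(1 for w, _ in completed if w <= wk) for wk in weeks]
point_curve = [sum(p for w, p in completed if w <= wk) for wk in weeks]

# Normalize both burn-ups to [0, 1] so the different Y-axis scales drop out,
# then compare the shapes of the two curves.
norm = lambda xs: np.array(xs) / xs[-1]
correlation = np.corrcoef(norm(count_curve), norm(point_curve))[0, 1]
print(f"shape correlation between the two burn-ups: {correlation:.3f}")
```

With the scales normalized away, only the shape of the two curves is compared, which is exactly what the two charts above show.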

Why is that the case?

  • Stories got broken down into the same size range during team conversations

When we estimate on the Mingle team, we always have representatives of every role, if not the entire team. During estimation, everyone is involved in breaking big stories into more digestible pieces. We use a 1-2-4-8 scale, with 8 as our threshold: anything estimated bigger than 8 becomes a placeholder for further breaking down. Below is the distribution of the estimates used in the burn-up charts above.

Similar story sizes were the result of the conversations in our estimation sessions, and this contributed to the similarity of the earlier burn-up charts.

  • Size differences got evened out over time

    Treating story sizes as roughly normally distributed, the spread of the total relative to its mean shrinks as the number of stories grows, so individual size differences average out over longer horizons (the simulation sketch after the forecast charts below illustrates this).

Forecast of story count vs. story point, 2 weeks out

Forecast of story count vs. story point, 1 month out

Forecast of story count vs. story point, 3 months out
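A rough simulation makes this evening-out effect concrete. The 1-2-4-8 sizes come from our scale; the weights and horizon sizes below are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
scale, weights = [1, 2, 4, 8], [0.15, 0.25, 0.45, 0.15]  # assumed distribution

for horizon, n_stories in [("2 weeks", 10), ("1 month", 20), ("3 months", 60)]:
    # Simulate many possible batches of n_stories drawn from the scale.
    totals = rng.choice(scale, size=(10_000, n_stories), p=weights).sum(axis=1)
    # Relative spread of the total: how far off a forecast based purely on
    # story count (i.e. average story size) would typically be.
    rel_spread = totals.std() / totals.mean()
    print(f"{horizon:>8}: relative spread of total points = {rel_spread:.1%}")
```

The relative spread shrinks roughly with the square root of the number of stories, which is why the 3-month forecasts based on counts and on points end up looking so much alike.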

Which is why we refactored our process 

Looking at our data, we didn't find that story points provided any additional value for progress tracking. So we have transitioned from story points to story counts:

  1. We still hold our estimation sessions. We highly value the team conversation catalyzed by gauging the size of the work.
  2. We leave the estimate as a reference on the card, which can help inform prioritization, but we do not translate those numbers into scope or capacity.
  3. We use story counts in our burn-up charts.


We believe that the key to progress reporting is not an “accurate” prediction, but visible signals that we can act on. We look to our burn-up chart to tell us: “Hey, it looks like we might not be able to get everything done by the expected date. Let’s have a conversation.”
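As an illustration of the kind of signal we mean, here is a small sketch (all numbers hypothetical) that projects a finish date from the remaining story count and recent throughput, and flags when it slips past the target:

```python
from datetime import date, timedelta

# Hypothetical inputs -- in practice these come from the burn-up chart.
as_of = date(2013, 5, 14)                    # date of the latest data point
remaining_stories = 42
recent_weekly_throughput = [5, 6, 4, 5]      # stories finished in recent weeks
target_date = date(2013, 8, 1)

avg_per_week = sum(recent_weekly_throughput) / len(recent_weekly_throughput)
weeks_needed = remaining_stories / avg_per_week
projected_finish = as_of + timedelta(weeks=weeks_needed)

if projected_finish > target_date:
    print(f"Projected finish {projected_finish} is past {target_date} -- "
          "let's have a conversation about scope or dates.")
else:
    print(f"On track: projected finish is {projected_finish}.")
```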

We are happy with this change

It has resulted in these two significant benefits:

  1. Fewer metrics, more conversations: In estimation meetings, we have shifted focus from numbers to a collaborative conversation. This provides a better platform for our team to discuss and eventually establish a shared understanding about what to build and how. We noticed that subsequent development work became much smoother after these conversations.
  2. Less math, more effective planning: In scope planning meetings where we used points, we had to scratch our heads to figure out the exact number of points to put in or take out. Freed from these calculations, we now focus more on business value and are more responsive to ad-hoc requirements.

Yes, the estimation column is empty! As estimation points had naturally phased out of our process, we had an explicit conversation during a retrospective about whether we should reinstate them. We decided not to, and we have been happy with that decision.

In summary, I would like to quote Martin Fowler to support our decision: “So whenever you're thinking of asking for an estimate, you should always clarify what decision that estimate is informing. If you can't find one, or the decision isn't very significant, then that's a signal that an estimate is wasteful.”

How do you estimate on your project? Check out other insightful perspectives on estimation in this ebook.
