
To Err is Human: Making Decisions with an Algorithmic Rule Engine

If you're human, then you're biased. Even if you were somehow able to become the Platonic ideal of a judge, a veritable modern-day Solomon, you could still get tired, lonely or distracted.

It’s inevitable. And it gets worse.

An organization is a collection of people: tiny autonomous decision makers throwing grain after grain of sand into its finely honed gears. An analyst has a particularly fine morning coffee, and his assessment goes up a fraction of a point. A manager looks at my resume while she's tired, and I miss the job that could have made me a millionaire before 30 (or so I tell myself).

All these little variations add up to enormous hidden costs. Like the imperceptible sub-atomic particle that collides with just the right section of DNA, causing a runaway cancer, these insidious variations are almost impossible to see on a balance sheet. Over time, they can rot a healthy organization by distorting the processes you rely upon.



We can never permanently defeat our own biases or the noise they create, but we don't have to. Instead, we can create tools to help manage our decision-making.

I will describe below a very basic algorithmic decision-making rule engine. It sounds complicated and expensive, but we built a working version with only an Excel spreadsheet, some basic formulas, and a sample set of previous decisions.

Human Bias Creates Noise, Noise Creates Problems 

This sounds theoretical, but the decision-making engine has practical applications in the real world. One example comes from a time when our team was advising a client product strategist on identifying and prioritizing features to build into a software product. The obvious benefit was more reliable decision making, but there were other benefits we could not have predicted.

How often have you seen prioritization conversations become heated after someone feels their personal judgement is under attack? How many times have you waded through the soupy quagmire of poorly defined goals?
When we started framing conversations with this tool, they stayed constructive. It's much easier to talk using a quantified, transparent and impersonal framework.

The engine also gave rigor to our conversations with senior leaders. When discussing product vision, they complimented our team as the easiest to understand. Because we had provided a "clear window into our reasoning," they also found it easier to work with us on setting priorities. In the ultimate compliment, one senior manager even requested a copy of the template so she could experiment on her own, building models for the different products under her group.

Creating the Tool

Creating your own algorithmic decision-making rule engine is simpler than it sounds: all you need is an Excel spreadsheet, some basic formulas, and a sample set of previous decisions.
  1. Select six to eight variables that are distinct from one another and related to the outcome you want to predict.
  2. Take the data from your set of previous cases and compute the mean and standard deviation of each variable.
  3. For every case in the set, compute a "standard score" for each variable. The standard score, or z-score, places all variables on the same scale so they can be compared and averaged:
Standard score = (value - mean of set) / standard deviation
  4. Compute a "summary score" for each case. This is the number you will use for comparisons:
Summary score = average of the variables' standard scores
  5. Order the cases in the set from high to low summary score and determine the appropriate action for different ranges. Hint: finding the quartiles can be helpful.
Example: "If the ROI of this project is in the bottom quartile, we will end it immediately. If it is above that, we will pursue it for another six months. If it is in the top quartile, we will scale it immediately and invest the remaining budget."
  6. Consider calculating outlier thresholds. Be mindful that you'll need a roughly normal distribution and enough data for this to work. If you do have outliers, examining them might help you spot an otherwise hidden opportunity or avoid an obscured but catastrophic risk.
  7. When you get a new case, add it to the data summary and see where it lands in your ranges to help make the decision (see the code sketch after this list).
  8. From time to time, reexamine the variables used in the decision-making process to see whether the long-term outcomes are what you hoped for or expected.
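
If you would rather script it than build the spreadsheet, here is a minimal Python sketch of steps 2 through 5 and 7. The project names, variables and numbers are invented purely for illustration; our original version lived entirely in Excel.

```python
from statistics import mean, stdev, quantiles

# Hypothetical historical cases: each maps a variable name to its observed value.
# In practice, flip the sign of any variable where lower is better before averaging.
past_cases = {
    "Project A": {"roi": 1.8, "team_size": 5, "duration_months": 6},
    "Project B": {"roi": 0.9, "team_size": 12, "duration_months": 14},
    "Project C": {"roi": 2.4, "team_size": 4, "duration_months": 5},
    "Project D": {"roi": 1.1, "team_size": 9, "duration_months": 10},
}
variables = ["roi", "team_size", "duration_months"]

# Step 2: mean and standard deviation of each variable across the past cases.
stats = {
    v: (mean(c[v] for c in past_cases.values()),
        stdev(c[v] for c in past_cases.values()))
    for v in variables
}

# Steps 3 and 4: standard (z) scores per variable, averaged into a summary score.
def summary_score(case):
    return mean((case[v] - stats[v][0]) / stats[v][1] for v in variables)

scores = {name: summary_score(case) for name, case in past_cases.items()}

# Step 5: rank the past cases and use quartiles of the summary scores as thresholds.
for name, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {s:+.2f}")
q1, _, q3 = quantiles(scores.values(), n=4)

# Step 7: score a new case against the same means and standard deviations.
new_case = {"roi": 1.5, "team_size": 7, "duration_months": 8}
score = summary_score(new_case)
if score < q1:
    print("Bottom quartile: end it immediately")
elif score > q3:
    print("Top quartile: scale it and invest the remaining budget")
else:
    print("Middle range: pursue it for another six months")
```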



At first, your reasoning tool will primarily be useful for retrospection: examining past projects through a more quantified lens. We found it quite interesting to compare a ranked set of results with our personal perceptions. Sometimes each of us was blind to the root cause of a success or failure; by framing our discussion around discrete criteria, we discovered the missing pieces in our perception.
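
As a toy illustration (both rankings below are invented), a few lines of Python make it easy to spot exactly where intuition and the engine part ways:

```python
# Hypothetical rankings: the engine ranking comes from the summary scores,
# the gut ranking from a quick team poll taken before anyone saw the numbers.
engine_rank = ["Project C", "Project A", "Project D", "Project B"]
gut_rank = ["Project B", "Project A", "Project C", "Project D"]

# Flag projects where intuition and the engine disagree by more than one place.
for project in engine_rank:
    gap = abs(engine_rank.index(project) - gut_rank.index(project))
    if gap > 1:
        print(f"{project}: engine #{engine_rank.index(project) + 1}, "
              f"gut #{gut_rank.index(project) + 1} -- worth a conversation")
```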

When new projects begin, the tool can then aid in projections. One of our principal consultants assembled a Monte Carlo simulation of likely time and costs for a future project based on the same data gathered here. This projection of possibilities and their likelihoods is far more informative than a simple project plan that accounts for only a single expected result.
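
We can't reproduce that model here, but the general idea is straightforward: sample plausible durations and costs from the historical variation, then read off a range instead of a single number. A rough sketch, with made-up means and standard deviations:

```python
import random

random.seed(7)  # make the example reproducible

# Hypothetical inputs derived from historical data: mean and standard deviation
# of project duration (months) and monthly cost (thousands).
duration_mean, duration_sd = 9.0, 3.0
monthly_cost_mean, monthly_cost_sd = 40.0, 8.0

trials = 10_000
total_costs = []
for _ in range(trials):
    duration = max(1.0, random.gauss(duration_mean, duration_sd))
    monthly_cost = max(0.0, random.gauss(monthly_cost_mean, monthly_cost_sd))
    total_costs.append(duration * monthly_cost)

# Report a range of outcomes rather than a single point estimate.
total_costs.sort()
p10 = total_costs[int(0.10 * trials)]
p50 = total_costs[int(0.50 * trials)]
p90 = total_costs[int(0.90 * trials)]
print(f"Total cost (thousands): P10={p10:.0f}, median={p50:.0f}, P90={p90:.0f}")
```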

A simpler planning example: suppose you notice that previous projects with a certain profile almost always go over their planned resource allotment. You might set a rule in the sheet to flag any new project that matches those criteria, alerting you to the danger long before it would otherwise become apparent.
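
In the spreadsheet this is just a conditional formatting rule; in code, a sketch might look like the following, where the overrun profile itself is hypothetical:

```python
# Hypothetical rule: small teams on long planned durations have historically
# overrun their resource allotment, so flag new projects that match.
def overrun_risk(case):
    return case["team_size"] <= 5 and case["duration_months"] >= 12

new_projects = {
    "Project E": {"team_size": 4, "duration_months": 15},
    "Project F": {"team_size": 10, "duration_months": 6},
}
for name, case in new_projects.items():
    if overrun_risk(case):
        print(f"{name}: matches the overrun profile -- review the plan")
```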

It takes a little sophistication and upfront work to get a rule engine like this going. But don't be intimidated: your computer will handle the hard formulas. While statistics is an enormous field, you only need a beginner's understanding—and the courage to use it—for this simple tool to bring clarity and focus to your decisions.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
