My First Change Management Experiment

Posted by Chris Murman

11 October 2016

In software, speed can be everything. After all, it’s an industry where Mark Zuckerberg’s advice to “move fast and break things” is sometimes taken as a mantra. But sometimes you can go too fast for your own good. This is the story of several teams at a company who were tasked with moving very fast, but who were unhappy and whose product was suffering as a result. Here’s how I coached them into a happier, more productive workflow that served both the teams and their clients’ needs.

The Problem

I was working as an Agile coach inside a company that serviced large clients. Over the past year, as I took on new responsibilities as our agile champion, my boss had often pulled me aside to discuss team challenges. One day, the challenges he described should sound familiar to many: the teams were having trouble meeting deadlines, so clients were frustrated, and the resulting turmoil had team members begging off the project. In general, this group of five teams needed something drastic to change, and my boss asked if I was interested in helping.

I was humbled and, at the same time, completely out of my depth. Most of my coaching opportunities had been one-on-one up to that point. He was asking about an entire division of the company.

Just how does one get started when given this chance?

Understand the Issue Before you Plan

Before putting a change plan together, I interviewed many of the leads from every discipline (project management, development, quality assurance, DevOps, art/UX). The main issue they saw with their current process was consistent across disciplines:

Work was being shipped, but nobody was super-happy about how it was going out the door. Stop me if you’ve heard this before.

I decided to make small changes, rather than huge overhauls. My initial change management plan started out Lean, taking a page out of Jason Little’s playbook.

First, everyone would run Scrum by the guide. I chose that particular Agile framework because I was working with many teams and needed something easily teachable and repeatable. If I received any questions about my methodology, the guide was an easy reference. Some team members also jumped between teams depending on release schedules, and I wanted the transition to be easy. Scrum was not the solution, but it would easily reveal the solution.

Second, I’d create a survey. Leadership had asked me to consider measuring change on teams, and I had previous success with anonymous surveys. The survey would not necessarily serve as the retrospective tool, but gathering data should be done in conjunction with the event. As an anonymous radiator for morale, it could also serve as a great heartbeat for the team. Again, it would not serve as the solution, but would reveal one.

I had no idea what data to capture or how to measure it, though. So, I enlisted an expert to help me out. Together, we settled on a survey that would measure five key areas.

Designing and Measuring a Quality Survey

We didn’t want the survey to be super-long, or nobody would take it. The questions also shouldn’t be slanted to only tell the story we had predetermined. We decided to make the survey short and general. Using a scale of 1-10 and open text fields to capture why team members were feeling the way they were, we were all set to kick off.

After the first five sprints under the new Scrum structure, I surveyed the team and looked at the results. My survey partner encouraged me to compare the score averages to each other, and not view them in a vacuum. This was because when people answer these types of surveys, they answer all the questions in relation to each other.

The lowest average score was in “overall satisfaction”, which wasn’t a surprise to me because you could see it on their faces. The second lowest score was for “quality of deliverable”, which was a surprise because this company prided itself on amazing quality.
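The relative comparison my survey partner suggested is easy to sketch in code. This is a minimal illustration, not the actual tooling we used; the response data below are made up, and the area names are only the three mentioned in this post rather than all five we measured.

```python
from statistics import mean

# Hypothetical responses: each respondent scores the survey areas on a 1-10 scale.
responses = [
    {"overall satisfaction": 4, "quality of deliverable": 5, "sprint satisfaction": 6},
    {"overall satisfaction": 3, "quality of deliverable": 6, "sprint satisfaction": 7},
    {"overall satisfaction": 5, "quality of deliverable": 5, "sprint satisfaction": 6},
]

def average_scores(responses):
    """Average each area's scores across all respondents."""
    areas = responses[0].keys()
    return {area: mean(r[area] for r in responses) for area in areas}

def ranked(averages):
    """Order areas from lowest to highest average, so scores are read
    relative to each other rather than in a vacuum."""
    return sorted(averages, key=averages.get)

avgs = average_scores(responses)
print(ranked(avgs))  # lowest-scoring area comes first
```

The point of `ranked` is the one my partner made: what matters is not whether “quality of deliverable” scored a 5 or a 6 in isolation, but where it sits relative to the other areas in the same survey.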

It turned out that the QA team was doing too much context switching: they might have to test two to three different apps in the same day, and would lose context of which issue was in which app. The teams also had too much work-in-progress because of the client’s desire to speed things up. All the work was in progress at once, making daily work feel more like spinning plates. And like spinning plates, one can’t do it indefinitely.

Everyone agreed on two pivots. The first was to either staff up the QA team to match the number of app releases in progress, or limit the number of apps in progress. That allowed each release to have a subject-matter expert who could drill deep on quality.

The second involved adopting a “definition of ready” (DoR). All the teams had a documented definition of “done”, but hadn’t done the same for “ready”. As a result, work was being pushed by clients and leads without the team having any ability to push back. Tasks that everyone knew wouldn’t be completed would be added to the sprint simply because they happened to be up next. Asking all teams to adhere to a DoR meant that only work that was actually ready could be committed to.

Did it work?

After five more sprints, I did the survey again and looked at the numbers. With just those small tweaks, every metric rose! The two greatest jumps were in sprint satisfaction and quality of work delivered. We all did the happy dance and put together a deck for leadership. It wasn’t the end of our transformation, just the beginning of things getting better.

To my knowledge, teams are still staffing an appropriate number of testers for each release. They also created a pre-release checklist for clients as a result of adding a DoR. This lets clients know everything implementation teams need before they can get started. While not every item is delivered before work starts, it gets the conversation going about what’s needed for success.

Best of all, the requests for team members to transfer out of the department have started going down.

What I Learned

Numbers are your friend: I didn’t think the numbers would really help in any way other than to assist teams in quantifying how they felt. I was wrong. What I didn’t realize was that sometimes metrics can tell your story best, even if they appear to lean toward vanity. Leadership certainly paid more attention to them, and it gave me instant credibility. It would have been harder to prove my point with just anecdotal stories of change. Hard facts, regardless of their subjective context, are an amazing weapon for change agents.

Small can be mighty: Short iterations allowed us to make many small tweaks along the way; teams didn’t make that many wholesale changes. You don’t have to move mountains to make life better for teams. Often, a few tweaks will be enough.

Speed has a cost: I was asked to help teams move faster. While not specifically called out in the 12 agile principles, there is an inherent sense of management using the methodology to help teams move faster. Instead of asking how to go faster, I should have asked “why” more. If we can all learn to slow down and pay attention to our work, its quality and efficiency will increase.

Just do it: Anyone can be an agent for change. The best way to learn about change management is just to get out there and start learning in the field!

Note: This post is based on a case study I presented at this year’s Agile Alliance conference on the theme of what happens when we move way too fast. You can read more of my posts or follow me on Twitter, @chrismurman.
