
Disruptive Testing: Part 4 - Matt Heusser

Welcome to Part 4 of our series, where we interview “disruptive” testers who inject fresh perspectives, leadership and enthusiasm into the testing community. Today we chat with Matthew Heusser, who combines testing expertise with a refreshing writing style as a prolific writer, trainer and presenter. He is perhaps best known for his writing: as lead editor of "How To Reduce The Cost Of Software Testing" (Taylor and Francis, 2011), managing editor of StickyMinds.com, a top-rated blogger, a former contributing editor to Software Test and Quality Assurance magazine, and a contributor to SearchSoftwareQuality, InformIT, and other popular industry publications. Matt has also organized the Agile Alliance-sponsored Workshop on Technical Debt and has served on the board of directors of the Association for Software Testing.

Q Hi Matt, we're excited to have you on our testing series. Let’s start with a concept you have pioneered - what is Lean Software Testing? How does it differ from Agile Testing?

Lean Software Testing (LST) consists of tools to measure, manage, and improve the flow of testing, regardless of method. It plugs well into Agile software processes, since it is human-centered and emergent, provides big visible charts, and so on, but you can use it with anything. The big benefit of LST is that it points the way to significant improvement in flow — the kind of improvement that can allow a “traditional” team to get testing “done” within an iteration boundary.

The ‘tools’ of LST alone won’t get you excellent testing, as it improves what you already have. If what you have is bad testing, pure LST will just get you bad testing much faster. So you also need test craftsmanship - what tests should I run right now? What did those tests tell me? How can I figure out what is important to the customer? Are we being fooled here? And you want some technical skill. When I teach LST, I prefer to do a three-day course, where day two hammers those craft ideas, and day three is application and synthesis. So LST is compatible with Agile, or Kanban, or even waterfall - but I think in spirit it is certainly an Agile method.

Q We’ve heard a lot about scaling Agile. How does one scale testing in the enterprise?

I’ve heard this problem a lot, and I’ve been reluctant to go on record, for reasons that I hope are obvious, but let’s just get this out there. Scaling agile isn’t hard. You create one single multi-disciplinary team and let them determine their own processes. Staff it with volunteers, put that team at the heart of your most critical project and you can get an immediate increase in overall throughput.

If you only have one ‘killer to the company’ project, like an ERP upgrade, you can declare victory. Keep the team around and give them the next killer initiative. If you want more, then take volunteers again. Eventually, the folks who don’t volunteer end up doing Maintenance, Ongoing Operations and Support Engineering (MOOSE), the kind of work well suited to people who prefer more traditional, plan-driven approaches. Or you can stop anywhere in the middle.

Now for the honesty part: This kind of change makes people scared. Every low-value architect, every project manager who finds value in knowing the bureaucratic business process, every manager and director of a single defined, specific role, and, I am sorry to say, some of an entirely different character, are going to be scared of loss. They won’t know how they fit in the new world. So they fight the change. They find every excuse you can imagine. We end up with vague problems like ‘scaling’ agile.

Adopting Agile in an organization without resistance is just not that hard. Resistance is a culture problem; it requires leadership. Scaling? That is something you do to a fish after you catch it.

Q What are your thoughts on Continuous Delivery (CD)? What are the challenges in applying CD to small / medium / large enterprises?

Let’s assume you have a continuous integration (CI) system in place, have done the infrastructure work, that your code is of higher quality than industry norms because of Test-Driven Development (TDD) and shared understanding, and that you have fewer regressions than usual.

Oh, and you have added continuous monitoring and push-button rollback.

Also, your app should not touch money. That is, either your users use the application for free, like Facebook, Twitter or Google, and you charge vendors based on the rendering of advertisements, so if the website is down, they don’t pay. Or else you cut off the money part, which is audited, from the CD part. Or perhaps your app is a giveaway designed to create ‘eyeballs’ you will ‘monetize’ later, say a downloadable game designed to promote Mountain Dew or Taco Bell.

In that case, you've got the magic combination to make CD work.

Notice it is a combination of technical practices and business model. CD is not for everybody. The folks I have seen fall down with Continuous Delivery to production are the ones that have one piece of that but not the other.

Now continuous deployment to staging? I think that has much broader applicability business-model-wise and requires a little less technical work.

In the middle, you can do continuous delivery to a beta server and then, on login, direct a certain class of users to the beta.
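As an illustration of that middle ground, here is a minimal sketch of login-time routing. The user structure, the “beta” group name, and the URLs are hypothetical placeholders, not anything from the interview:

```python
# A minimal sketch of login-time beta routing. The user model, the
# "beta" group, and the server URLs are all hypothetical placeholders.
BETA_URL = "https://beta.example.com"
PROD_URL = "https://www.example.com"

def post_login_redirect(user):
    # Direct a chosen class of users (say, internal staff or opted-in
    # customers) to the beta server; everyone else sees production.
    if "beta" in user.get("groups", ()):
        return BETA_URL
    return PROD_URL

print(post_login_redirect({"name": "pat", "groups": ["beta"]}))  # beta
print(post_login_redirect({"name": "sam", "groups": []}))        # production
```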

Q What do you consider to be valuable testing metrics?

I have little hope for test metrics in isolation. Even if they work, they can lead to sub-optimization of the process — getting really good at testing does not guarantee that the customer sees working software providing value at the end!

So I want metrics that help me understand whether testing is the bottleneck, and, if not, I want to work on something else. When it comes to figuring that out, I tend to prefer inquiry metrics (“I wonder why …”) to control metrics. Take ‘bug count’, for example. If you wonder what is happening with bug count over time, and do a query, you are likely to get something that resembles reality. However, once you tell the team that you are measuring bug count and tracking it every week, it becomes a control metric - and you will see programmers (measured for low bugs) fighting with testers (measured for high bugs) over what is a bug. You’ll see what could be one root-cause bug reported five times to get the count up, and testers seeking easy-to-find, low-value bugs over powerful but complex, hard-to-find bugs.
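As a concrete illustration, here is a minimal sketch of the kind of one-off inquiry query described above, assuming a hypothetical CSV export of bug records with a “created” date; the file name and column name are invented:

```python
# A minimal sketch of an "inquiry" metric, assuming a hypothetical
# bugs.csv export from your tracker with a "created" ISO date column.
# Run once out of curiosity to see the trend; publishing it as a weekly
# target is what turns it into a control metric people will game.
import csv
from collections import Counter
from datetime import datetime

def bugs_per_week(path):
    weeks = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            created = datetime.fromisoformat(row["created"])
            year, week, _ = created.isocalendar()
            weeks[(year, week)] += 1
    return sorted(weeks.items())

for (year, week), count in bugs_per_week("bugs.csv"):
    print(f"{year}-W{week:02}: {count}")
```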

LST provides a set of out-of-the-box metrics. Here are a few of them (a small scripted sketch of how they might be computed follows the list):

Cycle time - Calendar time spent testing each story or, better yet, the entire development flow from ‘work begins’ until ‘release’.

Cadence - Release-test time from ‘build’ to ‘deployed’.

Lead time - Calendar time from ‘defined work item’ to ‘deployed’.

Touch time - Percentage of lead time actively spent working on a typical individual story.

Batch size - The smallest increment of test work management can recognize, in minutes, hours or days. (Ideally, I like a batch size of “one story”, because that ties testing to business outcomes, but you meet teams where they are.)

Work in progress (WIP) - Number of batches currently assigned to test.

Queue size - Number of batches not in WIP but “backing up”.

Throughput - Number of batches accomplished per week.

Failure demand - Percentage of batches that only need to be done because of a preventable bug somewhere else in the system. I don’t strive for 0% failure demand, but in many cases it can easily be cut in half.
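To make a few of these concrete, here is a minimal sketch of how they might be computed from per-story timestamps. The field names (defined, started, released, touch_days) and the sample records are invented for illustration; real data would come from your tracker or board:

```python
# A rough sketch of computing a few LST flow measures from per-story
# timestamps. The field names and in-memory records are hypothetical.
from datetime import date

stories = [  # hypothetical export: one record per finished story
    {"defined": date(2014, 3, 1), "started": date(2014, 3, 10),
     "released": date(2014, 3, 14), "touch_days": 1.0, "failure_demand": False},
    {"defined": date(2014, 3, 2), "started": date(2014, 3, 11),
     "released": date(2014, 3, 20), "touch_days": 2.0, "failure_demand": True},
]

def days(a, b):
    return (b - a).days

cycle_times = [days(s["started"], s["released"]) for s in stories]
lead_times = [days(s["defined"], s["released"]) for s in stories]
touch_pct = [100 * s["touch_days"] / days(s["defined"], s["released"])
             for s in stories]
failure_demand_pct = 100 * sum(s["failure_demand"] for s in stories) / len(stories)

print("avg cycle time (days):", sum(cycle_times) / len(cycle_times))
print("avg lead time (days): ", sum(lead_times) / len(lead_times))
print("avg touch time (%):   ", sum(touch_pct) / len(touch_pct))
print("failure demand (%):   ", failure_demand_pct)
```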

Many of the teams I work with want to do CD, or “automation”, or some other ambitious goal, when the existing state is a cadence of days, weeks, or months - WIP and queues are huge, cycle time is in weeks, and touch time is a single-digit percentage. What we really want is a smaller cadence and higher throughput. By focusing on improving these measures, we create the state I detailed above, where automation or CD makes more sense technically and socially.

Q With the boundaries between roles blurring, what is the role of the tester in risk management - specifically in helping out with a black swan event?

Testers can do a few things here. First, asking “what if” questions, at the story and project level, can bring out critical success factors no one has realized. (One term for this is “failure modes and effects analysis.”) The tester can help create consensus on a minimal standard of correctness. For instance, on an internal application, clearly wrong data might lead to the odd error page, which might be acceptable. On an external application, it might not. Testers can help create that shared understanding. They can interview the end customer to see if that understanding is right. I am also a fan of having a portion of release testing be exploratory in nature.

When I was at Socialtext, there were perhaps two times when the production server turned into a brick during a software version upgrade. One of those times, we had a problem on staging where the indexer to the database ran rogue on update, grabbing as many system resources as it possibly could, indefinitely. The programmers were insistent that it was a staging-only event. We were fooled.

As a tester, I can do two things. First, I can say, “Okay, but let’s spend the time to create a staging clone and try the upgrade again to make sure it never happens again.” (Which we testers did not do.) Second, I can help create the institutional memory to say, four months later, “Hey, remember that indexing problem? This sounds a bit like that.” (Which we did.)

Finally, a very senior tester can become an advisor to management and actually make recommendations on what to invest time in, what to fix, and how to steer the project. I have had that happen a few times in my career; it is a good feeling.

Q What are your thoughts on the healthcare.gov fiasco? Do you put it down to bad testing?

Well, you can read my article on the subject, but I will give you the short version for testing: I don’t think this was a testing problem. The application was so horribly broken that the testers must have known what was going on.

No, this was a communication and management problem. Think about it. At every level, the employees were probably doing what was right: The tester said it was horribly broken and would never work. The supervisor said the application had serious problems and needed to be fixed. The manager said the app had problems that needed to be worked on. The director, who managed both dev and test resources, told his boss, the VP, that yes, there were problems and that his team was working on them. The VP told his boss, the CIO, that there were a few problems but he was on top of it. The CIO told the CEO that he had it handled … at each point, the person was telling the truth from their perspective.

The problem is too many levels in the chain of command, and no one at the higher levels doing any management by walking around (MBWA). When senior-level folks do MBWA, mid-level folks get uncomfortable. To fix that, you need to reorganize and add transparency. That’s not a testing problem.

But it is more than that. The folks on that team were under incredible pressure to deliver on a fixed time schedule. Incredible pressure. Historically, the folks being asked, “You will make the deadline, right?” were a little bit like the South Americans, pressured by the Spanish, about the seven cities of gold. I don’t want to make light of a historical tragedy, but the fact is, if you put someone under enough pressure, especially in a system that is failing, they will tell “management” what “management” needs to hear in order to go away. Then, when the thing is a huge mess on the floor, you just have to find someone else to blame … and on a project that large, there were plenty of people to blame.

Q Tips to make software testing more cost-effective?

One place to start is by measuring touch time, cycle time, WIP, throughput, and perhaps failure demand. Look at those numbers and ask, “Can we do better?” Make a big visible chart. If you have to start somewhere, look at batch size.

Q With the growing importance of experience design in delivering products that customers love to use, what do you see as the role of a tester going forward?

The best shops I have worked in had a variety of testers - some toolsmiths, which I think is the dominant paradigm (and a mistake), some subject matter experts (SMEs) who knew how to use the product and were customer proxies, and some test craftsfolk. You need all three. SMEs can be extremely helpful in figuring out what the product should be, while the craftsfolk can ask the right questions to help improve the prototype. In many cases, those two roles together can suggest a UI change that gets you 80% of the features for 20% of the work.

Q Can Developers become good Testers?

I like to think that I did! (Laughs) - I actually spent ten years as a programmer before moving to test full-time. My undergraduate work was in Math with a CS concentration, and I earned a Master’s in CIS at night while programming full-time. I wrote a lot of code.

I think the big difference between myself and some coding-testers is that I actually studied how the important bugs were found and when, and developed strategies to make more of that work - instead of trying to take some pre-conceived notion of test automation and make it work.

James Bach once said that he rediscovered testing for himself - that is, he figured out what it means, to him, to test. I recommend something similar - study the risks on your projects, at your organization, and figure out how to be the most valuable. For me, at the time, that meant rejecting a certain very loud opinion about test automation - certain rhetoric - in favor of evidence. Starting out as a programmer, whose entire job it was to automate things, made me biased toward that rhetoric. In my experience, the attraction of that meme - of the 100% test automation environment - was the single largest thing standing in my way of doing effective testing. I got out; I think many can. Some folks, who really just want to be programmers and view testing as a stepping-stone, will not. The good news is that those folks typically don’t stick around long in testing! (Smiles)

Q Do Testers have to be technically equipped?

As I said before, I see at least three different types of testers - the toolsmith, the SME, and the craftsperson. That said, more knowledge is more knowledge, and I also see an increasing amount of technical skill required just to operate in our society. To operate on a team that’s building software, I’d like the tester to understand flow control and variables, to be able to read code, and to be able to crank out a Perl or Ruby script if they really had to. At the very least, I expect that of the craftsperson. For the SME, testing is often a short-term or transitional role, and they may have other ways of adding value.
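For a sense of the level of scripting meant here (Matt names Perl or Ruby; this sketch uses Python as an illustration, with a hypothetical log file and error format):

```python
# The sort of throwaway script a tester might "crank out": scan a log
# for error lines and count them by message. The "app.log" path and
# the "ERROR <message>" convention are hypothetical.
import re
from collections import Counter

errors = Counter()
with open("app.log") as f:
    for line in f:
        m = re.search(r"ERROR\s+(.*)", line)
        if m:
            errors[m.group(1).strip()] += 1

# Print the ten most frequent error messages with their counts.
for message, count in errors.most_common(10):
    print(f"{count:5}  {message}")
```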

Can an SME make a career in testing without EVER writing code? Certainly. The problem these days will come when the SME wants to change companies.

Q What advice would you give to a Tester who is new to the industry?

Get to know what matters to your customer. Study your bugs. Visualize your test process. Eliminate the waste. Improve cadence and touch time. Look for risks your team is ignoring. Find ways to make your work more valuable. Meet other testers and talk about what works for you, and them, and why. Talk to programmers and customers and other roles and see the best way to improve the flow of software, not just the testing.  Recognize groupthink when it happens (it is a bad thing) and seek truth, even when it hurts. Often, when it hurts, when people say you are making a mountain out of a molehill, when the pressure is on to get back into line — that is the time you are being most effective as a tester.

Perhaps most importantly, have fun!

Thanks a lot, Matt, for your lucid, insightful responses!

Do let us know your thoughts, feedback, and suggestions... and stay tuned for another testing guru. You can also read the other interviews in this series, with James Bach, Lorinda Brandon and Markus Gärtner.

Disclaimer: The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.
