Refocus your view of testing to drive superior results
Does your organization view testing as the final hurdle to be overcome before an application goes live? Is your approach limited to detecting errors, fixing bugs and checking for known weaknesses? Is the majority of your testing practice keyboard driven? Are you spending more and more money on testing with diminishing results? If you answered "yes" to any of these questions, this webinar will illustrate a very different perspective on testing: one that boosts productivity and turbocharges performance while improving quality.
ThoughtWorks takes a very different view, with an inside-out approach to testing on all our projects. Instead of testing at the end of the development process, we integrate testing thoroughly into software development and systems management. Our own experience shows that when supported with the right processes, tools and training, integrated testing significantly speeds up the overall delivery of software. ThoughtWorks has introduced revolutionary testing techniques to enterprise IT and made significant contributions to the development of testing tools such as Selenium, CruiseControl, White, Sahi, Watir, NUnit, JUnit and many more.
Join us for an insightful webinar on testing and learn how repositioning testing at the heart of your software development approach not only builds in quality, but also creates value: it strives to prevent defects before they occur, gives developers feedback in minutes and regression test results in hours, and lets businesses adopt testing activities that are closely aligned with their goals. We are at an important juncture in the history of software development. Those who embrace this change will survive.
About the speaker
ThoughtWorks' Global Head of Testing talks about the important principles that underpin this new way of testing. He also addresses key practices that are used to improve software quality and reduce the testing effort.
Responses to audience questions during the webinar
Q: Kristan, you mentioned in your example on preventing defects in features working with developers early on how to test the feature. Do you feel at this stage creating a unit test strategy would be helpful? Also, how should QA/Dev approach unit tests?
A: When a developer starts work on a new feature, they should get together with the tester and discuss how the feature will be tested. At that point they can agree where each test is best applied: at the unit, integration or functional level. Once the feature has been created and the developers are handing over to the testers, it helps if they walk through the unit tests they have written, so the tester knows what is already covered and can offer suggestions for improvement. Working this way, the unit test strategy evolves continuously, with input from both developers and testers.
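As a sketch of what such a dev/tester agreement might produce, the pair might decide that the edge cases of a business rule are cheapest to cover at the unit level, leaving only a happy-path check for the functional suite. The discount rule and its boundaries here are purely illustrative:

```python
# Hypothetical feature agreed between developer and tester:
# members get 10% off orders of 100 or more; everyone else pays full price.

def apply_discount(total, is_member):
    """Return the price after the membership discount rule is applied."""
    if is_member and total >= 100:
        return round(total * 0.9, 2)
    return total

# Unit-level tests the pair agreed on, targeting the boundary conditions:
assert apply_discount(100, True) == 90.0      # boundary: exactly 100
assert apply_discount(99.99, True) == 99.99   # just below the boundary
assert apply_discount(100, False) == 100      # non-member pays full price
```

The functional suite would then only need one end-to-end check that a member sees a discounted total, rather than re-testing every boundary through the UI.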
Q: Automation, although good, is expensive in terms of cost and time. Where would it be better used: new features or old features? Should it be tightly coupled with unit tests?
A: Automated testing should apply to the product and be tightly integrated into the software development process. By getting everyone involved and using continuous integration for fast feedback, the expense of automation can be greatly reduced. To really get the value out of automated testing, which will let you run a regression suite in hours rather than weeks, you need to automate both new and old features. There are different approaches to retro-fitting automated tests to an existing system; the one I prefer is to apply a "tax" to each new feature that is developed. By this I mean: when a new feature is being created, identify the tests that apply to it, along with the key regression tests you would run to ensure the new feature has not broken existing features, then automate both. This targets your regression test development at the areas that provide the most value and where the team is focused. Unit tests should be considered as part of the overall automated testing approach.
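One way to make the "tax" practical is to tag each automated test, so the key regression subset can run on every commit while the full suite runs less often. Real projects would use their framework's tagging (pytest markers, JUnit categories); this is a minimal framework-free sketch with illustrative test names:

```python
# Minimal test-tagging sketch: each test registers itself with tags,
# and a runner selects a subset by tag.

TESTS = []

def tagged(*tags):
    """Decorator that registers a test function under a set of tags."""
    def register(fn):
        TESTS.append((fn, set(tags)))
        return fn
    return register

@tagged("new-feature")
def check_member_discount():
    assert 100 - 10 == 90

@tagged("new-feature", "regression")
def check_checkout_total_unchanged():
    assert sum([40, 60]) == 100

def run(tag):
    """Run only the tests carrying the given tag; return the pass count."""
    return sum(1 for fn, tags in TESTS if tag in tags and fn() is None)

print(run("regression"))   # only the key regression test runs: 1
```

On every commit the continuous integration server would run the "regression" subset for fast feedback, and the whole suite nightly.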
Q: How do you effectively scale to large teams and keep everyone on the same page? And if we are distributed across time zones, what has to happen to keep information from being dropped on the floor?
A: I assume you are asking how to scale the testing principles to a large team. This is a rather complicated topic which needs more than a paragraph or two as an answer, but I will give you a summary. An approach that I have seen be effective is to create a smaller "example" team who apply the principles and practices mentioned. They get over a lot of the initial road bumps and figure out how best to apply the practices in the organisation. To help the team get started it is good to include some "embedded experts": people from each role (dev, BA, test) who can work with the team to introduce new techniques. Once the example team is up and running, you can redistribute some of its members, who are now "embedded experts" themselves, to help another team transform, and rotate in new people to pick up the skills and techniques. Distributed teams need to communicate frequently, at least daily, and use tools that help reduce the distance and time barriers. For example, set up an always-on Skype connection between the two locations, with a screen and camera in a fixed place, so people can walk over to "talk" to someone from the other team. There are also tools like Mingle that help distributed teams manage the work they share.
Q: What is the best way to direct change and get management buy in?
A: Another big topic that needs more than a paragraph to answer. A key element of change is the desire to change. If a team or management do not have that desire, change will be extremely difficult to implement and generally gets undone when those directing it leave. One technique that I find very useful is to show by example: set up an example project, or do a site visit to another location or team that is already using the techniques and processes you are trying to introduce. Showing people how they could be working, and letting them talk to others who have been through change themselves, helps create that desire to change.
Q: Involving everyone in testing would be a problem, as it does not sit well with the separation of responsibilities. Testers do testing because they are good at it (at least they are supposed to be ;) while developers, BAs and PMs are not.
A: A team needs to trust and respect each other and the abilities each member has. When the whole team is focused on the same end goal, they will achieve a better result faster. When testing is led by a tester but owned by the team, the team works to ensure that an adequate level of quality is achieved, rather than some team members railroading others into accepting lower quality and going to production. Testers should be good at testing and lead the testing approach for the team, but they should also mentor other team members in how to perform testing tasks better. For example, working with developers on unit testing, BAs on acceptance testing and PMs on planning, scheduling and budgets will produce a better end result.
Q: I had some Internet connectivity issues early on and joined late. Is a copy of the presentation available so I can view what I missed? Thanks. Ed
A: Yes, the webinar was recorded and it is available from http://testing.thoughtworks.com
Q: How early in the process can you create automated test scripts?
A: As soon as you start to capture requirements. If you are using an automated test framework as I described during my talk, the intention of the test can be captured at that point. Tools that assist with this type of approach include Cucumber, Concordion, JBehave, SpecFlow, Twist and FitNesse, to name a few. To get a better understanding of how the test intentions may look, have a look at an article written by Alister Scott: http://testing.thoughtworks.com/articles/specification-example-love-story
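To make "capturing the intention first" concrete, here is a rough sketch of the idea: the Given/When/Then intention is written down at requirements time, and the executable binding is filled in later alongside the code. The scenario and names are illustrative, not from any specific tool:

```python
# Test intention, captured before any feature code exists, in the
# Given/When/Then style supported by tools like Cucumber or Concordion.
INTENTION = """
Given a registered customer with an empty cart
When they add a book priced at 25
Then the cart total is 25
"""

# Later, during development, each line of the intention is bound to an
# implementation step:
def run_scenario():
    cart = []                  # Given: an empty cart
    cart.append(25)            # When: add a book priced at 25
    assert sum(cart) == 25     # Then: the cart total is 25
    return sum(cart)
```

The intention is stable long before the implementation settles, which is why writing it early carries little rework risk.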
Q: Historically, if our Client Services/Product Support folks, who work closely with our clients, start working very closely with us in Software Development, they seem to lose touch with clients/business and thus lose their 'appeal', so to speak. If they don't work closely with us, however, they don't provide us sufficient/quick feedback until we are about to go to Production - too late. I'm not sure how we can make them really be part of the process, provide us feedback AND do their jobs.
A: Getting time from client-facing staff can be difficult, so try to have regular, short contact throughout software development instead of larger chunks at the end. Use techniques like showcases, where the team takes an hour a fortnight (or week) to demonstrate the work they have done and get feedback, and make the test environment available for staff to "play in" whenever they do get time. If you are able to involve client-facing staff in helping to create the test intentions for each feature (before coding starts), this can provide early feedback and buy-in, and prevent issues from occurring.
Q: You mentioned the importance of considering the quality of tests. I would be interested to hear a few quick thoughts on methods for measuring the quality of tests within a large test case repository.
A: A great way to understand the quality of the testing being done by the team is to get someone outside the team to spend a couple of hours doing exploratory testing of the system. Issues they discover that the team was not aware of give you an indication of the quality of the testing: the more issues uncovered, and the more severe those issues are, the poorer the quality of the tests that have been run. It helps if the person doing the testing has domain knowledge and is given a rough guide of areas to cover. There are other things you may want to consider, such as ease of maintenance of the test suite, test duplication and the ability to determine feature coverage, but it is most valuable to understand whether the tests are finding issues.
Q: How can we get a copy of this presentation for further review?
A: The webinar was recorded and it is available from http://testing.thoughtworks.com
Q: We do not create automated tests for first releases, except for load testing, for the simple reason that there is no regression testing. Do you agree with this approach? What is the advantage of developing automated testing other than for regression and load?
A: Testing should be tightly integrated into the software development cycle. For example, if a small amount of code is created to implement a feature, the test for that feature is also created, and both are committed to the source control repository. A continuous integration tool can then build, deploy and test the code as each small change is added. Working this way, you are creating the regression test suite from the first day and building the suite along with the code. This gives the team extremely fast feedback and helps build quality into the system. As I mentioned in the webinar, tests should be treated as an asset of the product, so code + tests = product.
Q: What are some coaching techniques that worked in driving the message across the org that testing is everyone's responsibility and that tests+code = product?
A: To spread this message across the organisation you need support from both the top and the grassroots. Then create an example team/project to show how it can work within your organisation, while using training and presentations to spread the message. Once the example team/project is up and running, run team tours, where others in the organisation see how the team works, and rotate people through the team, taking experienced members out to seed the next team/project.
Q: Is your approach any different for a brand new application vs. an existing one?
A: The approach varies slightly if you are working on an existing application that does not have a regression suite. There are different approaches to retro-fitting automated tests to an existing system; the one I prefer is to apply a "tax" to each new feature that is developed. By this I mean: when a new feature is being created, identify the tests that apply to it, along with the key regression tests you would run to ensure the new feature has not broken existing features, then automate both. This targets your regression test development at the areas that provide the most value and where the team is focused.
Q: You mentioned starting the automation scripting early in the process, even before dev has developed the feature. This seems costly for QA: these scripts seem to have a high risk of change, since features will change as dev creates them.
A: The approach suggests creating the intention of the automated test before developing the feature; implementing the automated tests is then done in line with the code creation. It is less likely that the intention of the test will change; how it is implemented may, so by delaying the test implementation you reduce the amount of rework. If you architect your automated tests well, changes to the tests also become easier to make; an example of this is the page object model.
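To illustrate the page object model mentioned above, here is a minimal sketch. `FakeDriver` stands in for a real browser driver (e.g. Selenium WebDriver) so the example is self-contained; the page, locators and method names are invented for illustration. The point of the pattern is that tests talk to `LoginPage`, so locator and flow changes are absorbed in one place instead of in every test:

```python
class FakeDriver:
    """Stand-in for a browser driver: records values typed into fields."""
    def __init__(self):
        self.fields = {}

    def type_into(self, locator, text):
        self.fields[locator] = text


class LoginPage:
    # Locators live in one place; if the UI changes, only this class changes.
    USERNAME = "id=username"
    PASSWORD = "id=password"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        # A real page object would typically return the next page object here.
        return self.driver.fields


# A test now reads as intent, not as element lookups:
driver = FakeDriver()
result = LoginPage(driver).login("alice", "s3cret")
```

If the username field's locator changes, one constant is updated and every test that logs in keeps working, which is exactly the "easier to change" property referred to in the answer.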
Q: We have 2 very large applications that have 0% unit test coverage. Our Automation Suite is the only backstop we have. The lack of unit testing is our largest hurdle from adopting true agile.
A: Unit tests are critical to a test strategy and provide fast feedback when changes are made. They also make debugging issues that arise a lot easier. Getting from 0% coverage to a level you are comfortable with will take time and is not something you can do overnight. The team should incrementally add unit tests, focusing on new code they are writing, code they are updating, and code they need to fix due to defects. Retrofitting unit tests can be difficult, as the code may not have been written for testability and may therefore require more work, but the time spent will be repaid.
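As a sketch of the "add tests where you fix defects" part of this advice: suppose a bug report shows a price parser crashing on blank input. The fix and a unit test that pins the corrected behaviour are committed together, so coverage grows exactly where the code is being touched. The function and its behaviour are hypothetical:

```python
def parse_price(text):
    """Parse a price string into a float.

    Defect fix: blank or empty input previously raised ValueError;
    it now returns 0.0.
    """
    text = text.strip()
    if not text:
        return 0.0
    return float(text)

# Retrofitted unit tests, added alongside the fix: one pins the fix,
# one pins the pre-existing behaviour so it cannot regress silently.
assert parse_price("") == 0.0
assert parse_price(" 19.99 ") == 19.99
```

Repeating this on every defect and every updated module moves a codebase away from 0% coverage without a risky big-bang retrofit.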
Q: what % of "tester" time is typical for waterfall and agile.
A: On an agile project the tester is involved from beginning to end, whereas on a waterfall project testers tend to get heavily involved only toward the end. However, in my experience the number of testers required on an agile project is far fewer than on a waterfall project, so the overall testing effort is different.
Q: If the coders write their code to test cases, what about the test cases we overlook or miss? How would you recommend covering those during code writing?
A: To understand how the application is behaving, and whether any tests have been overlooked, I like to use exploratory testing. After the developers have created the feature, testers should spend some time testing it and how it fits into the broader system. Because the automated tests around the feature cover much of the testing that would traditionally be done, namely verifying that the code does what the requirement says, the tester can focus on those hard-to-find issues. They will also be able to uncover areas that are not automated and update the automated tests if appropriate.
Q: Very interesting ideas, but automation during the early stages of development has been very expensive in time. Would you recommend that?
A: Yes, I do recommend doing automation during the early stages of development, but ensure you follow the right process to keep the time cost down. The approach I mentioned in the presentation suggests creating the intention of the automated test before developing the feature; implementing the automated tests is then done in line with the code creation. It is less likely that the intention of the test will change; how it is implemented may, so by delaying the test implementation you reduce the amount of rework. If you architect your automated tests well, changes to the tests also become easier to make; an example of this is the page object model.
Q: We are currently following an Agile development process, writing user stories in a BDD style which developers use to build the web site/features. This BDD methodology is very developer-centric, and QA is often pushed not to overdo the number of test scenarios we identify and feel are necessary to validate the quality of the product. How can QA justify larger scenario test sets as a reflection of what QA feels is OUR responsibility in terms of verifying the quality of the product, versus allowing Development to drive QA's level of testing? I recognize there needs to be balance, but we feel we are viewed as insurance for Development's efforts and not as a team with our own responsibility for the delivery of a quality product.
A: One technique you can use is to frequently perform exploratory testing on features that have just been completed. Issues identified this way help the team understand gaps in the current testing and where they may want to make changes or improvements. Personally, I find it easier to have conversations and convince team members when I have some data: instead of saying "we need to run this test in case something happens", it is better to say "we need to start including these types of tests because we have found a number of related issues".