7 Deadly Sins of Automated Software Testing
January 4, 2012
When Craig Smith and I were planning our presentation for Agile 2011, we toyed with the idea of listing seven anti-patterns for automated testing. I've taken these patterns and loosely matched them to the classic seven deadly sins. The metaphor also allowed me to include a linocut that my uncle Errol Smith made, pictured here.
Senior management often view automated testing as a silver bullet for reducing testing effort and cost and increasing delivery speed. While it is true that automated tests can provide rapid feedback on the health of a system, not all approaches to automated testing are created equal, and there are some gotchas that should be avoided.
1. Gluttony
Over-indulging on proprietary testing tools
Many commercial testing tools provide simple features for automating the capture and replay of manual test cases. While this approach seems sound, it encourages testing through the user interface and results in tests that are inherently brittle and difficult to maintain. Additionally, the cost and licensing restrictions these tools place on who can access the test cases are an overhead that tends to discourage collaboration and teamwork. Furthermore, storing test cases outside the version control system creates unnecessary complexity. As an alternative, open source test tools can usually solve most automated testing problems, and the test cases can easily be included in the version control system.
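As a sketch of that alternative, here is a test written with Python's built-in unittest module, one of many open source options. The `apply_discount` function is a hypothetical stand-in for real application logic; the point is that this test is plain source code that lives in version control beside the code it exercises, with no licence limiting who can read or change it:

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical application logic under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)


class DiscountTest(unittest.TestCase):
    """A test case kept in version control alongside the code it tests."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

A suite like this can be run by anyone on the team with `python -m unittest discover`, no tool licence required.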
2. Sloth
Too lazy to set up a CI server to execute the tests
The cost and rapid-feedback benefits of automated tests are best realised when the tests are executed regularly. If your automated tests are initiated manually rather than through a continuous integration (CI) system, there is a significant risk that they are not being run regularly and may in fact be failing. Make the effort to ensure automated tests are executed through the CI system.
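At its core, what a CI server does with a test run is an exit-code gate: run the suite on every commit and fail the build if the suite fails. The sketch below illustrates that mechanism only; the test command shown is a placeholder, and this is in no way a substitute for a real CI setup:

```python
import subprocess
import sys


def run_test_gate(test_command):
    """Run the automated test suite; True means the build may proceed.

    A CI server fails the build whenever the test process exits with a
    non-zero status, which is exactly what this check captures.
    """
    completed = subprocess.run(test_command)
    return completed.returncode == 0


# A CI job would do something like the following, where the command is
# a placeholder for your project's real test runner:
#   sys.exit(0 if run_test_gate([sys.executable, "-m", "unittest", "discover"]) else 1)
```

Any modern CI system (Jenkins, GitHub Actions, and so on) provides this gating out of the box; the value comes from wiring your test command into it so the suite runs on every change.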
3. Lust
Loving the UI so much that all tests are executed through the UI
Although automated UI tests provide a high level of confidence, they are expensive to build, slow to execute and fragile to maintain. Testing at the lowest possible level is a practice that encourages collaboration between developers and testers, increases the execution speed of the tests and reduces implementation costs. Automated unit tests should carry the majority of the testing effort, followed by integration, functional, system and acceptance tests. UI-based tests should only be used when the UI itself is being tested or there is no practical alternative.
4. Envy
Jealously creating tests and not collaborating
Test-driven development is an approach to development that is as much a design activity as it is a testing practice. The process of defining test cases (or executable specifications) is an excellent way of ensuring that there is a shared understanding between all involved as to the actual requirement being developed and tested. The practice is most often associated with unit testing but can equally be applied to other test types, including acceptance testing.
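As a small illustration of an executable specification, the hypothetical shopping-cart example below shows a test class written first, capturing behaviour the whole team agreed on before any production code existed:

```python
import unittest


class ShoppingCart:
    """Hypothetical domain object, written to satisfy the spec below."""

    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


class CartSpecification(unittest.TestCase):
    """Executable specification agreed with the team and written first."""

    def test_a_new_cart_is_empty(self):
        # Given a brand-new cart, its total is zero.
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_is_the_sum_of_item_prices(self):
        # Given a cart with two items, when we ask for the total,
        # then it is the sum of both prices.
        cart = ShoppingCart()
        cart.add("book", 20.0)
        cart.add("pen", 2.5)
        self.assertEqual(cart.total(), 22.5)
```

Because the spec is readable by testers, developers and analysts alike, arguments about "what the requirement meant" happen before the code is written, not after.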
5. Wrath
Frustration with fragile tests that break intermittently
Unreliable tests are a major cause of teams ignoring or losing confidence in automated tests. Once confidence is lost, the value initially invested in the automated tests is dramatically reduced. Fixing failing tests and resolving the issues associated with brittle tests should be a priority, in order to eliminate false positives.
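One common source of intermittent failures is a fixed sleep while a test waits for an asynchronous result: too short and the test fails on a slow machine, too long and the suite crawls. A sketch of the usual remedy is to poll the condition with a generous timeout instead:

```python
import time


def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll a condition instead of sleeping for a fixed amount of time.

    Returns True as soon as the condition holds, or False once the
    timeout expires. The timeout can be generous because a passing
    test returns the moment the condition is met.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()
```

A test would then assert `wait_until(lambda: job.is_finished())` rather than `time.sleep(2)` followed by an assertion, removing the timing guesswork that makes tests flaky.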
6. Pride
Thinking automated tests will replace all manual testing
Automated tests are not a replacement for manual exploratory testing. A mixture of testing types and levels is needed to achieve the desired quality and mitigate the risk associated with defects. The automated testing triangle, originally described by Mike Cohn, explains that the investment in tests should be focused at the unit level and then reduce up through the application layers.
7. Avarice (Greed)
Too much automated testing not matched to defined system quality
Testing effort needs to match the desired system quality; otherwise there is a risk that too much, too little or not the right things will be tested. It is important to define the required quality and then match the testing effort to suit. This can be done in collaboration with business and technical stakeholders to ensure there is a shared understanding of the risks and the potential technical-debt implications.
Automated testing is not without pitfalls, and some of them are identified here. There are of course many other anti-patterns for automated testing that justify further discussion.