Whither Test Cases?
February 26th, 2013 by Damian
I’m so sick of test cases. I wish they were dead.
Yes, that’s right. Frigging test cases. I hate their smugness, the way they sit there with their tedious screeds of text, secure in the warm glow of adoration cast upon them by everyone who doesn’t have to write the bastard things.
Oh, all right, fine – I don’t have deep philosophical objections to test cases, nor to your customers who ask for them. What I object to is the over-inflation of their use into The One True Way and, more importantly, the way they are perceived as all benefit at zero cost. If someone really wants test cases, that’s fine, but they can budget for the not-at-all-insubstantial cost of creating (let alone maintaining) them, and the testing team certainly shouldn’t be relying on them alone to test the solution.
What? Why don’t I like them? Well, there is a traditional perception of test cases being all things to all people: a guide, tutorial, help reference, reference bible, oh, and incidentally also the steps that you take to test the system. I think this is a dreadful, awful perception and want to stamp it out whenever I see it. What test cases do is show you a set of specific steps that should result in specific outcomes. That’s it. Any other value people place on them is, at best, misguided, and that information should be obtained from the actual sources:
- Introduction to the system? Training.
- Reference? Technical guides.
- System behaviour? Use cases or requirements.
- Old issues? Issue tracking or test results.
- And so on.
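To make that concrete, here is a minimal sketch (an entirely hypothetical Python example, not taken from any real project) of what a test case boils down to once you strip away all the other roles people hang on it: a few specific steps and the specific outcome they should produce. It doesn’t teach you the system, document its behaviour, or record its history, and nor should it.

```python
# Hypothetical sketch: a test case reduced to its essence -- specific steps,
# a specific expected outcome. The ShoppingCart class is a toy system under
# test invented purely for illustration.

from dataclasses import dataclass, field

@dataclass
class ShoppingCart:
    items: list = field(default_factory=list)

    def add_item(self, name: str, price_cents: int, quantity: int) -> None:
        self.items.append((name, price_cents, quantity))

    def total_cents(self) -> int:
        return sum(price * qty for _, price, qty in self.items)

def test_adding_an_item_updates_the_total():
    cart = ShoppingCart()                                      # step: start with an empty cart
    cart.add_item(name="widget", price_cents=999, quantity=2)  # step: add a known item
    assert cart.total_cents() == 1998                          # expected outcome
```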
Someone said the other day that testing is traditionally the documentation layer for every other part of the system, and I think that’s now both unacceptable and frankly kind of silly. Testers should be paid money to test, not to obediently write things down that other people are supposed to be responsible for maintaining. Once we re-establish test cases as being only a largely wasteful, exhaustively detailed list of Steps To Do Things, it becomes clear that there are other more agile, more effective, and more wide-ranging ways of performing meaningful testing that will give a much better result. Plenty of people have described such methods, but the political battle needs to be won first.
WeTest presentation: test planning in the real world
February 12th, 2013 by Damian
I recently hosted the third WeTest presentation, where we had a really jolly good discussion on test planning, the real world, and bending processes to your whims. Or, as the blurb put it:
This session will cover test planning in non-lean environments, and planning for the real world instead of planning for a process checkbox. Damian will speak about his experience with trying to perform meaningful test planning within the constraints of a rigid testing process, test plan template design and dissemination, and finding out "why you’re testing" before you get to the "what you’re testing".
I’ll put the body of my experience report below, since I was told I talk too much and therefore cut most of it out on the night. If the workshop was anything to go by, I figure there’s at least another few hours of thoughtful discourse to go on this subject (hot topics: specificity of test planning, semantics of the word strategy, and the dreaded spectre of assumptions).
Surprisingly, the usual suspects don’t have anything additional to add on their own sites, but the post-match discussion at the WeTest forum and elsewhere was robust and lively. Thanks to all the attendees and participants for making this such an entertaining and thought-provoking evening.
The WeTest workshops are only possible due to our generous sponsors, Assurity, who provide the premises, pizza, and beer. Thanks! I’ve written a report of the event for them, and you can find it over at their site.
Introduction
Over the last fifteen years, I’ve done a lot of test planning. I’ve produced test plans that were a few bullet points in an email, and I’ve – regretfully – produced test plans that were 50 pages long when they were just a template. So, some thoughts I have come to after planning approximately a million projects’ testing…
Working within rigid frameworks
There are a lot of people who do not want to think in their test planning process, and fortunately for them there are also a number of managers who believe that thought is unnecessary and wasteful. A test planning process that is rigid, locked-down, and consists largely of boilerplate is seen as desirable and efficient by these folk; it is easily controlled and easily repeated on paper. Naturally, I could not disagree more strongly with this, but I genuinely don’t believe that these people are at all interested in change. So, when your hands are tied, a lot of what you’re able to achieve will be minor victories by bending the rules, or snuck in under the radar while still fulfilling the Capital-P process.
When producing test strategies under such rigid frameworks, there are two ways of achieving a useful outcome. One option is to bootstrap the test plan: put in the bare minimum of information and scoped testing, and don’t mention anything at all outside that defined scope. This puts the onus of discovery squarely on other people – most of whom won’t read the document anyway – freeing you to develop and explore your tests around the areas you need to focus on. However, it can lead to situations in which you genuinely haven’t considered something important, which is dangerous in highly regulated or mandated environments.
So, my preferred approach is to take an explicit and exhaustive scope of work – say, gleaned from previous projects or other reference information – and include absolutely everything in the test planning process, stating whether each item is in the scope of testing or out of it. This allows you to specifically examine and dismiss the chaff while still keeping a record that it has at least been considered. Naturally, this approach can still miss things, and it makes documents tend towards being large with a low signal-to-noise ratio, but it is an effective way to manipulate a rigid process in order to achieve a thoughtful outcome.
It’s probably important to note here that I don’t personally believe you have to write actual documentation, formal or otherwise, in order to do good test planning. Let me read you a quote from Kaner and friends, from a book I can’t recommend strongly enough:
There’s a lot of testing literature that says, in essence “you can’t do testing well without a written test plan.” In our experience, the main positive effect of that advice has been better job security for the paper and toner manufacturers. There are too many badly written plans. And we’ve seen a lot of good testing done without following written plans. It’s time for better advice.
Kaner et al, Lessons Learned
They are, perhaps unsurprisingly, correct. However, for the purposes of this discussion, I’m going to assume that you, like me, are operating within rigid frameworks that don’t comprehend not having documentation; if you’re not, I’d love to hear from you later! One reasonable argument that can be made for at least using a common template as a starting point – even if you snip most of the sections out and change the others so there’s only one page of actual content – is that one objective of a rigid framework is not to frighten anyone, and one of the best ways not to do that is to provide consistent-looking documents repeatedly.
I also find that prescribed processes tend towards conflating the length of a document with its apparent usefulness. I’m sure at some point we’ve all produced 50-page test plans or 400-page test case documents, and received the customary gasps over their girth. However, if we’re trying to achieve excellence in our planning, we need to make sure the documents are useful and are actually going to be read. When you have master test plans and detailed test plans or fifty-page templates, you are heading down the path of creating documentation because you have been told to create documentation; no-one will read or care about this document until the project goes poorly, when everything you wrote will be pulled out and discussed at length. And one of the reasons the project went poorly is because no-one reviewed the test planning, apart from the poor bastard doing it.
Let’s have a William Morris line that I try to live by:
Have nothing in your houses that you do not know to be useful or believe to be beautiful.
A smaller document is a fundamentally better, more readable, and more easily reviewable document. Extra words add nothing, and remove focus from the actual meaning. Another advantage of smaller documents is that I think test planning is an inverted pyramid of importance versus detail: the important stuff is broad and without detail, holding everything else up, and the detail is added on top to strengthen it. Most details are unimportant during test planning, but every project has a handful of details that really do matter, and those change from project to project.
In a similar vein, don’t repeat yourself, and don’t repeat other people. Reference other documents for unchanging information, including defect management and acceptance criteria. Most templates I have worked with were particularly bad in this regard, which led to redundancy at best and complete conflicts of information at worst. Copy and paste works between projects with careful consideration, but if you’re copying and pasting information within documents of the same project with the same audience, there is a terrible problem somewhere. Similarly, blindly parroting a list of items that someone else came up with is not a useful way of thinking about your own test strategy.

One technique that I forced upon my poor testers was to include a project overview section in the test strategy template, and in that section I would ask them to write their understanding of the project and its aims and constraints. Although this made a lot of people dislike me, it did act as an extremely useful tool to drive out misunderstandings and information gaps early in the test process, particularly when the tester would write in plain English and produce their own diagrams. However, this sort of activity costs a day or more on every project, and is hard to justify if you’re in an environment that just expects testers to check boxes and move on.
Also, when you’ve come up with a test strategy that is different from what people may be expecting, it’s always best to fling it out against the wall as soon as possible to see what sticks. The other interested parties in your test effort will very likely have hidden expectations of what you should be including, and you either need to add that information early on or start preparing a reasonable explanation of why you won’t, which brings us neatly to…
Find out why you’re testing before deciding what you’re testing
I recently saw a testing group on LinkedIn – don’t judge me – discussing what their first step would be upon starting a new project. It’s important to note that there was no other context given – it was simply what your first action would be in some hypothetical project. The responses were varied, to say the least. One tester said, straight off the bat, that the first thing to do would be to write the test cases. Then they thought about it for a few more minutes – a good start – and came back with this list:
- Creation of test cases
- Typo
- Font fallback
- Performance test
- Page speed test
- Test in all browsers
Bear in mind that they knew not a single thing about the project and its audience, its platform, the software interaction, or even its purpose, yet they assumed that font fallback was going to be so important that it should be one of their five key testing objectives. "Big deal," I hear you think, "they just assumed it was a web app, and in the real world they’d have more knowledge." That’s fair enough. The second person to chime in was specific straight away: they asked "who is the primary stakeholder?" That, on the face of it, is a perfectly sensible question. Except again, it’s not; it’s charging straight at the problem like you have blinkers on. Who says there’s only one primary stakeholder? I bet every stakeholder would have a different opinion on that matter!
James Whittaker made this very point:
Assumptions are a very bad thing for software testers. Assumptions can reduce productivity and undermine an otherwise good project. Assumptions can even undermine a career. Good testers can never assume anything. In fact, the reason we are called testers is that we test assumptions for a living. No assumption is true until we test and verify that it is true. No assumption is false until we test that it is false.
Any tester who assumes anything about anything should consider taking up development for a career. After all, what tester hasn’t heard a developer say "Well, we assumed the user would never do that!" Assumptions must always be tested. I once heard a test consultant give the advice: "Expect the unexpected." With this I disagree; instead, expect nothing, only then will you find what you seek.
Whittaker, stickyminds.com
This is true throughout the entire test planning process. You can’t assume you know anything for certain about the internal or external expectations or outcomes unless you confirm it. In most cases, this will mean validating your thinking and plans against documents, but there is generally no substitute for opening the communication lines and going directly to the people likely to know. This is particularly important if the customer is external to your company or you don’t have experience in the problem domain.
A word of warning, though: it’s been my experience that by actively seeking out and engaging stakeholders, you increase two major political risks:
- It can increase your proposed scope of work dramatically unless you manage it carefully; people tend to seize on a tester asking what is important to test, and throw in whatever their main worry happens to be at the time, which may not be a universally acknowledged problem at all. Everyone wants to get their part of a project working beautifully and on time.
- It can make your test strategy into a pawn for project- or development-level information or initiatives. If you can’t guarantee that an added section – say, on unit testing – is going to be of high quality, useful, and maintained, then do everything in your power not to add it.
Cautions aside, collecting test requirements can help you to answer a couple of fundamental questions very early on in your test planning process. You will have to forgive me for plagiarism in the next few minutes, because frankly Kaner et al nailed it in Lessons Learned; I absorbed these lessons years ago and they have become a tremendous resource in effective planning, and gilding a lily is just senseless. So anything idiotic I say from here on is entirely my fault, and anything brilliant you can just assume comes straight from the minds of Bach, Kaner, and Pettichord.
So, fundamental questions:
- Why bother? If you are addressing a risk that matters, then you need to test it. If the risk is insignificant, then testing is too expensive – don’t include activities in your strategy unless they address a risk that matters enough to spend time testing. If you’re not testing a risk that is very visible or otherwise of note, make sure it’s explicitly descoped with a really good reason.
- Who cares? If no-one is interested in the outcome of any given testing, it serves no-one’s interest to include it.
Because of all of this, I have found that a semi-formal Test Requirements layer is immensely useful. On the face of it, this can be seen as more needless bureaucracy and just another bloody document to create and get agreed, and you can certainly approach it that way and write something like "we need to verify the quality of the release". But unless you’re working at the most cookie-cutter, CMM-5 organisation, the test requirements you uncover might surprise you, on every single project.
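To show what I mean by ‘semi-formal’, here is a hedged sketch (a hypothetical structure of my own, not anything prescribed by Kaner et al) in which every piece of proposed testing has to answer ‘why bother?’ and ‘who cares?’ before it earns a place in the strategy, and anything descoped is recorded along with the reason.

```python
# Hypothetical sketch of a lightweight test-requirements record: each entry
# must answer "why bother?" (the risk) and "who cares?" (the stakeholder),
# and explicit descoping decisions are kept rather than silently dropped.

from dataclasses import dataclass

@dataclass
class TestRequirement:
    area: str           # what part of the delivery this covers
    risk: str           # why bother? what might go wrong, and how badly
    who_cares: str      # who is affected if it does go wrong
    in_scope: bool      # an explicit decision, so descoping is recorded too
    reason: str = ""    # justification, especially for anything descoped

requirements = [
    TestRequirement(
        area="Invoice export",
        risk="Rounding errors could misstate customer charges",
        who_cares="Finance team; external customers",
        in_scope=True,
    ),
    TestRequirement(
        area="Legacy report styling",
        risk="Cosmetic only; no behavioural change in this release",
        who_cares="No identified stakeholder",
        in_scope=False,
        reason="Descoped: the risk does not matter enough to spend time on",
    ),
]
```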
Designing and disseminating test plans
So, with the testing requirements identified, you probably now have a bit of a better idea of where to begin your actual test planning.
The test strategy serves as the link between your test requirements and the actual testing. It’s a description of the decisions that you made around the motivations, thoughts, plans, focus areas, reasons, and – yes – strategies that are being used for a particular testing effort. Note the present tense there – I find that a strategy serves as a map of the testing process; initially we have the rough outlines of what we’re expecting, the approximate edges of what needs your attention, and the finer detail is added as we go along. Naturally, the map can be completely redrawn when required. Test strategies are living documents in the truest sense of the phrase: they are not complete until the software is taken from your hands and you are no longer testing any part of that delivery. Up to and including that point, anything and everything within that document is open for discussion, debate, and – most importantly – change.
A lot of process-heavy environments will mandate discrete, immutable milestones that have to be achieved and signed off before later work can begin, and one of these milestones is inevitably the signoff of the ‘final’ test strategy. This is obviously a problem, since change is your friend – expect and embrace it. A test planning process that doesn’t allow for change in just about every area is almost certainly not going to work out very well. This is because, to paraphrase the three gentlemen previously mentioned, you cannot possibly know everything at the start; if you could, you’d be a world-famous psychic and not stuck in a room with me. It is always going to be better for your test process to have a strategy that reflects reality, rather than outdated or over-optimistic outlooks. If you’re lucky enough to be able to issue updated versions without setting off alarm bells, do so. Otherwise, change it when no-one’s looking; there is no argument that can be made against this without making the rigidity of the process look foolish.
A reasonable starting point for a test strategy’s purpose is to:
- guide, plan, and direct the test team’s activities,
- specify the deliverables of the test effort,
- report the scope of testing, and
- record your motivations and reasons for your decisions.
You should tell the audience your priorities, what you’d like to do, whether there’s an area you’ve considered but dismissed, and whether you’re unsure about something. Your test strategy should explain both your decisions and the reasons they were made.
Really, there is only one reason that any of us test: something important might go wrong. Your entire test process exists to identify, investigate, and report the risks that the product may fail.
In general, there are five areas to consider when planning your strategy (this is the Satisfice Context Model):
- Product: what will be delivered, and presumably sold for money out the other end of this process? Who is it for? Is there a contract?
- Development: where is the item under test coming from? What risks and constraints are there? What interactions will you have with the development team?
- Test Team: who will actually be doing the testing? Again, what risks and constraints are there?
- Test Environment: the systems, tools, and materials to execute the testing. Do you have what you need?
- Mission: How will you be successful? Why are you doing this? This is the outcome of the previous inputs, and is strongly tied to the test requirements that you will have gathered already.
Altogether, you’ll be making choices in three rough areas:
- Strategy: what exactly are you testing? What techniques will you use to create tests? How will you recognise bugs? The test strategy specifies the relationship between the test project and the test mission.
- Logistics: how will you apply resources to fulfil the test strategy? Who will test? When? What do you need to succeed?
- Deliverables: who sees your outcomes? How will bugs be tracked? What documentation will you create? What reports will you make? When and how often?
You will be making choices about all of these things either explicitly in your test planning, or implicitly via some other means. There’s no option just to not choose things. These decisions will form your strategy, and there are many possible strategies. You’ll recall earlier that I made my testers write about the project and the product in their own terms, rather than copying what someone else thought. A similar approach is excellent for communicating your chosen strategy: essentially tell a compelling story in your own words that explains and justifies the testing that is to be done. If I can quickly read the extremely generic and simplified examples in Lessons Learned:
“We will release the product to friendly users after a brief internal review to find any truly glaring problems. The friendly users will put the product into service and tell us about any changes they’d like us to make.”
“We will define use cases in the form of sequences of user interactions with the product that represent, altogether, all the ways we expect normal people to use the product. We will augment that with stress testing and abnormal use testing (invalid data and error conditions). Our top priority is finding fundamental deviations from specified behavior, but we’re also concerned with ways in which this program might violate user expectations. Reliability is a concern, but we haven’t yet decided how best to evaluate that.”
“We will perform parallel exploratory testing and automated regression test development and execution. The exploratory testing will be risk-based, and allocated to coverage areas as needed. We’ll revisit the allocation each week. The automated regression testing will focus on validating basic functions (capability testing) to provide an early warning system about major functional failures. We will also be alert to opportunities for high-volume random testing.”
Stories like these not only clearly spell out the high-level goals and the strategies to achieve them, but can also be absorbed more readily by other audiences, especially non-technical ones. This drives discussion and acceptance, and is far more likely to find variances in expectations – in the examples I just read, I can see things that would make most rigid-process project managers or technical leads upset, which is a good outcome: you uncover the pain points in planning rather than in execution. Using this approach to paint a broad picture before hitting specifics makes it completely clear what you and your test team are intending to do by clearly communicating your emphasis. Again, your ideas are your test plan.
Speaking of your audience, your strategy, its decisions, and the language in which you communicate will be defined by how well you know that audience. Sometimes you may need to direct an internal test team, and sometimes you may need to communicate with stakeholders outside the company. There may be other audiences later on – for example, your support organisation once the project goes live. Your test strategy will certainly be read the very next time this software is changed, and one day it will be you that has to pick up someone else’s strategy and make sense of it, so be nice.
Finally, I’m going to steal from Lessons Learned again where they discuss what a good test strategy is:
- It should be specific: generic statements and boilerplates are nowhere near as good as tailored thought. Each strategy you write should be unique and reflect what’s special and important about your current test effort.
- It should be risk-focused: what matters? On top of that, why does it matter? Your test requirements will help answer these questions.
- It should be diversified: this is a bigger chunk than I’ll discuss here, but there’s a very useful heuristic called The Law of Diverse Half-Measures. It can be quickly summed up as: it’s better to do a bunch of things reasonably well than do one or two things perfectly. Different approaches, methods, and techniques will find more problems than any single way, and you should plan with that in mind.
- Lastly, it should be practical: don’t overreach. If you’re not going to do what your plan says, then don’t write it that way! A whole lot of things are easier to say than to do.
As a side note – I don’t think I should have to tell you that test strategies should also be honest. If you have pressure to lie or obfuscate in your testing or planning, I think you should find a new job.
Things I still don’t know how to solve
To finish off, I’d like to throw a few things on the table that I still don’t know how to solve adequately, and also request some experiences from people who have the great luxury of not working within rigid, prescribed test processes:
- How do you approach having both internal and external audiences for test planning? How candid are you?
- How do I convince people that getting test requirements is a good idea and not just more work?
- Perhaps on a related note – why do people not like to think?
- Do you do undocumented test strategy and/or test planning? How?
Thanks for your time.
Thou shalt perform Excellent Testing
June 30th, 2008 by Damian
This is number two in my list of commandments for testers.
It probably goes without saying that actually performing testing will be your main job function. Talking about testing, thinking about testing, planning your test process – these are all very important and very useful things to do, but the primary reason for your continuing pay packet is your raising of bugs and highlighting of issues. In other words, go forth and break software.
Thou shalt seek the better bug
So, you’ve found a problem. It’s a minor thing – perhaps a screen widget doesn’t disallow an invalid entry. But how deep does the rabbit hole go? Does it blindly store the invalid entered value? Is that same dodgy value then retrieved by another process or business function and utilised somewhere it really shouldn’t be? Is there an external symptom of this case, like a somewhat alarming error in the system log? Go the whole hog – can you make the application crash?
You are not Peter Sinclair; never take an application’s first answer. Find out how much more value you can get out of the bug before you raise it. If nothing else, it provides you with a lot of supporting information and scenarios that can be provided to the development team.
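As a hedged illustration (a toy system invented for this post, not any real application), chasing the widget example might look like the sketch below: each probe digs one layer deeper than the last, so by the time you raise the bug you know whether it stops at the screen or rots all the way through to the business reports.

```python
# A toy sketch of "seeking the better bug": a field that fails to reject an
# invalid entry, and probes that follow the bad value downstream.
# All names and behaviour here are hypothetical.

def parse_quantity(raw: str) -> int:
    # The bug under investigation: no validation, so "-3" sails straight through.
    return int(raw)

class OrderStore:
    def __init__(self):
        self.orders = []

    def save(self, quantity: int) -> None:
        self.orders.append(quantity)

def total_units_shipped(store: OrderStore) -> int:
    # A downstream business function that trusts whatever was stored.
    return sum(store.orders)

def test_invalid_entry_is_not_rejected():
    # The minor, surface-level symptom.
    assert parse_quantity("-3") == -3          # should have been rejected, but wasn't

def test_invalid_entry_is_stored_blindly():
    store = OrderStore()
    store.save(parse_quantity("-3"))
    assert store.orders == [-3]                # the dodgy value is now persisted

def test_invalid_entry_corrupts_downstream_reporting():
    store = OrderStore()
    store.save(parse_quantity("5"))
    store.save(parse_quantity("-3"))
    assert total_units_shipped(store) == 2     # the shipping report now under-counts
```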
Thou shalt make thy bug a prime target
Assuming that the organisation you work for is a rational one, a small bug ain’t gonna get fixed unless someone cares about it. It is your job to find out why someone should care, and to demonstrate the possible knock-on impacts; it’s almost always better to be able to raise a serious bug than a minor one, and a serious bug warrants far more attention.
Of course, you have a similar responsibility to reality. If your discovered issue doesn’t actually affect anything, this should be writ large in your report. Again, assuming that your organisation is rational, you aren’t paid on the severity of the issues you raise, so be sensible.
Thou shalt apply diverse half-measures
There is another important heuristic to be followed when you’re testing software: the rule of diverse half-measures. Essentially, this says it’s better to do many different kinds of testing to a pretty good level than to do one or two kinds of testing perfectly.
"This strategic principle derives from the structured complexity of software products. When you test, you are sampling a complex space. No single test technique will sample this space in a way that finds all important problems quickly. Any given test technique may find a lot of bugs at first, but the find-rate curve will flatten out. If you switch to a technique that is sensitive to a different kind of problem, your find rate may well climb again. In terms of overall bug-finding productivity, perform each technique to the point of sufficiently-diminished returns and switch to a new technique.
Diversification has another purpose that is rooted in a puzzle: How is it possible to test a product for months and ship it, only for your users to discover, on the very next day, big problems that you didn’t know about? A few things could cause this situation. A major cause is tunnel vision. It wasn’t that you didn’t test enough; it was that you didn’t perform the right kind of test. We’ve seen cases where a company ran hundreds of thousands of test cases and still missed simple obvious problems, because they ran an insufficient variety of tests." – the Bible
Even though this heuristic is obviously a deep and considered approach to scalable and effective test management, you may find it useful to glibly sum it up in a way that is more familiar to programmers: we didn’t expect the user to do that.
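To illustrate with a deliberately trivial, made-up function (and using the Hypothesis library for the generated layer), diversifying even a single function’s testing might mean pairing a few hand-picked examples with a property-based sweep of the input space; each technique is sensitive to problems the other will sail straight past.

```python
# A small sketch of diverse half-measures applied to one function:
# hand-picked example tests plus property-based testing (via Hypothesis),
# each sampling the input space differently. The function is hypothetical.

from hypothesis import given, assume, strategies as st

def clamp(value: int, low: int, high: int) -> int:
    """Keep value within the inclusive range [low, high]."""
    return max(low, min(value, high))

# Technique one: specific examples, good at pinning down known expectations.
def test_clamp_examples():
    assert clamp(5, 0, 10) == 5
    assert clamp(-1, 0, 10) == 0
    assert clamp(99, 0, 10) == 10

# Technique two: generated inputs, good at finding the cases no-one imagined.
@given(st.integers(), st.integers(), st.integers())
def test_clamp_result_is_always_within_bounds(value, low, high):
    assume(low <= high)                 # skip nonsensical ranges rather than invent behaviour
    result = clamp(value, low, high)
    assert low <= result <= high
```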