On typical, traditional, phased and gated
projects, the workflow might look something like this: develop requirements,
create a high-level test plan, create test scripts for each requirement, execute
tests, update tests, and report. A set of requirements is delivered to the
delivery team, who must figure out how to design, code, and test the final
implementation. Architects decide how to design the system, which is then
handed over to programmers to code. Testers may or may not have been involved
in reviewing the requirements, but they are expected to create a high-level
test plan following a template such as IEEE provides. The testers then create
test scripts or procedures. When the code is delivered, whether it is piecemeal
or all at once, the tests are executed and bugs are reported, fixed, and
retested. Test cases are updated. Tests may or may not be automated once
functionality is stable.
In agile, we don’t know all the details of
each feature at the beginning of the project, so we need to adapt our
approach. Using agile principles of simplicity and iterative development, we
look at levels of precision and ask what information the team and stakeholders
need to know at each level. The levels of precision that are important for test
planning are:
· Product release: Multiple teams working on a single product
· Feature team: A single team working on a product, either as the only team or as part of a larger product release
· Feature or epic: Some capability or piece of functionality that is useful to the business
· Story: A small, testable chunk of functionality, implementable within an iteration
For consistency of terms, let’s assume we
have a three-month product release with two-week iterations for each of four
feature teams. There are multiple features that each team completes in the
release cycle.
In the next sections, we’ll explore the
level of planning, types of testing, and the documentation that might be
expected at each of the four levels described above.
Product release
At the product-release level, teams should
have a good idea of the product vision. An overall test approach should take
into account dependencies between teams or between features – for example, how
to coordinate performance and load testing that individual teams might not be
able to do sufficiently in their test environments. Consider economies of scale
for tests such as interoperability, browser compatibility, or mobile devices.
Also, think about what additional testing needs to be done during the end
game – the time needed before releasing a product to production to
commercialize it. One example might be final user acceptance tests. This test
approach should cover what is important to this product release and may also
include the need for new test tools or environments. The size of the “document”
might vary depending on the number of teams involved and the complexity of the
product, but I recommend keeping it as simple and clear as you can.
Feature teams
Each feature team works with its product
owner to determine how much work they might get done during the release cycle.
Often, a release-planning session is held in which teams size the stories and
features in their product backlog. Testers help during the planning session by
asking clarifying questions that may help the team determine the “bigness” of
the story. Impacts to the system as a whole should be considered at this time.
Use an exploratory mindset to consider what performance or security concerns
there might be, or whether this story will affect other functionality. The
answers to these questions may be captured in the story, but I also like to
take this opportunity to start writing test ideas in my notebook for future
use. It doesn’t cost anything to capture these ideas and, if I don’t use them,
I can cross them out later.
At the release level, sometimes a test
document is required. Before you create one, think about who will use it and
why they want it. If you need some form of document, keep it as simple as
possible – preferably one page long. Since the functionality is not set yet,
there is no point including any details of scope unless there is a risk
involved that others need to know. If a document is not needed, I record the
testing risks and assumptions I want to monitor on a whiteboard that is visible
to the team.
Recently at a conference, a colleague, Huib
Schoots, said that instead of embedding high-level feature and story tests in a
document that agile-resistant managers demanded, he used a mind map to capture
those details. The mind map satisfied the managers, allowed him to keep to
agile principles of simplicity, and enabled him to stick to a one-page test
plan to capture the important information.
Think about developing personas and tours
for extra exploratory test ideas and coverage of the application. Another easy
tool that is effective at the release level for capturing high-level test ideas
is a test matrix. For maximum effectiveness, use collaboration to generate the
test ideas for the matrix. When tools like these are used and made visible,
they provide a different viewpoint into testing.
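As a sketch of how lightweight such a matrix can be, here is a hypothetical Python version; the feature names, test conditions, and statuses are invented for illustration. It records team-generated test ideas per feature and surfaces the gaps that still need attention:

```python
# Hypothetical test matrix: (feature, test condition) pairs collected
# collaboratively, each marked "planned" or "done". Names are invented.
matrix = {
    ("Checkout", "Firefox"): "done",
    ("Checkout", "Mobile Safari"): "planned",
    ("Search", "Firefox"): "done",
    ("Search", "Load (100 users)"): "planned",
}

def coverage_gaps(matrix):
    """Return the feature/condition pairs not yet executed."""
    return sorted(pair for pair, status in matrix.items() if status != "done")

# Make the remaining gaps visible to the whole team.
print(coverage_gaps(matrix))
```

Posting the matrix (or this summary) on a wall or wiki keeps the ideas visible without turning them into a heavyweight document.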
Features or epics
As features (some business-value
capability) are broken up into stories, we need to make sure they are testable
so that the team gets into the cadence of coding and testing each story until
it is “done”. There is a danger that teams work at too low a level and forget
about the bigger picture: the feature. Work with your team’s product owner to
create high-level acceptance tests (examples of expected behaviors and
misbehaviors) for the feature. This will help define the scope and keep the business
value visible. Mind maps are powerful tools for generating test ideas at any
level, but I find creating them for each feature the most valuable. Try
collaborating with the whole team on a testing mind map before breaking it down
into stories. It is a way to explore some of the possible issues before they
arise.
As in the release-planning session, these
test ideas can be saved for when we really need them at the story level, and
you have a simple document – maybe a picture of the whiteboard or flipchart. Again, I
often jot down additional test ideas in my notebook for later.
One idea that I have found works well for
teams is to create an extra story for each feature called “Test the Feature.”
The tasks will be mostly testing tasks, such as “Explore the Feature,” “Perform
Load Test,” etc. I also recommend including a task called “Automate the
Workflow” that can be done through the GUI when the feature is stable.
Including a story like this ensures that the feature testing is not forgotten.
Stories
Once we are working at the story level, we
start getting into more detail. For each story, we need high-level acceptance
tests – an example of expected behavior and at least one example of misbehavior –
to define the scope of the story. I encourage doing this during story readiness
sessions (pre-planning or backlog grooming). There are many variations on how
to define these tests, and it often depends on the tool you use.
Once we have these acceptance tests, we can
start the conversation either in pre-planning or in the iteration-planning
session. We get into more details, and again we can explore other scenarios to
help ensure that the team shares a common understanding. Our tests for the
story evolve from these conversations and can be expanded during the iteration.
I find it valuable to start writing my test ideas and variations during
story-readiness meetings, continue during iteration planning, and then use all
the tools in my toolbox to define other tests that will prove the story works
as expected. Once this list is “complete,” I suggest reviewing with the product
owner or another tester and then collaborating with the programmers who are
working on the story to ensure that you have the same understanding and to plan
the needed automation.
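To make this concrete, here is a minimal sketch in Python for a hypothetical "transfer funds" story; the Account class, its rules, and the test names are all invented for illustration. One test captures the expected behavior, one the misbehavior, and together they bound the scope of the story:

```python
# Hypothetical story: "A customer can transfer funds between accounts."
# The Account class and its rules are assumptions for illustration only.

class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance

    def transfer_to(self, other, amount):
        # Misbehavior boundary: overdrafts are rejected, state unchanged.
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount
        other.balance += amount

# Expected behavior: a valid transfer moves the money.
def test_transfer_moves_funds():
    src, dst = Account(100), Account(0)
    src.transfer_to(dst, 40)
    assert (src.balance, dst.balance) == (60, 40)

# Misbehavior: transferring more than the balance is rejected.
def test_transfer_rejects_overdraft():
    src, dst = Account(100), Account(0)
    try:
        src.transfer_to(dst, 500)
        assert False, "expected InsufficientFunds"
    except InsufficientFunds:
        pass
    assert src.balance == 100  # nothing was deducted

if __name__ == "__main__":
    test_transfer_moves_funds()
    test_transfer_rejects_overdraft()
    print("story acceptance tests passed")
```

Starting from just these two examples, the conversations during planning naturally add further scenarios (zero amounts, concurrent transfers, audit trails) as the team's shared understanding grows.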
Remember to write down exploratory test
ideas through all of this, so the exploratory test charters will be ready when
the code is complete.
Test results
Test results are strongly related to test
planning, especially for detailed tests. Visibility and transparency are some of
the biggest benefits in agile projects, so I suggest that test results should
be readily available all the time. Ideally, extract them from your
automation framework or continuous integration environment, and find simple ways to
keep track of your exploratory testing so you can review the results when you
need to. Another way to make results visible is to use tools like low-tech
dashboards. Spending time collating results does not add value to the customer
or the team.
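As a sketch of extracting results rather than collating them by hand, here is a hypothetical Python snippet that condenses a JUnit-style XML report (the common testsuite/testcase layout; attribute names vary by tool) into a summary ready for a low-tech dashboard; the suite and test names are invented:

```python
# Sketch under assumptions: summarize a JUnit-style XML result file from a
# CI run. The sample below stands in for a real report; real reports may
# nest suites or use different attributes depending on the tool.
import xml.etree.ElementTree as ET

SAMPLE = """
<testsuite name="story-142" tests="3">
  <testcase name="transfer moves funds"/>
  <testcase name="overdraft rejected"/>
  <testcase name="audit log written"><failure message="no log entry"/></testcase>
</testsuite>
"""

def summarize(xml_text):
    """Count cases and collect the names of any failures."""
    suite = ET.fromstring(xml_text)
    cases = suite.findall("testcase")
    failed = [c.get("name") for c in cases if c.find("failure") is not None]
    return {"total": len(cases), "failed": len(failed), "failing": failed}

print(summarize(SAMPLE))
```

A one-line summary like this per story or feature, refreshed on every CI run, keeps results visible without anyone spending time on report assembly.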
Summary
In summary, plan collaboratively, keep it
simple, work at the right level of precision so you have the right
information, and keep test results visible.