Anyone who has worked as a test manager on a project will be familiar with this situation: a test concept is needed. Now. Using a variety of time-consuming investigation techniques, you try to gather the information needed to create a test concept document. Beyond the significant effort involved, the value of this approach must also be questioned. Since Agile has no explicit design or test phase, a new method is needed that delivers results quickly. Furthermore, not only the test manager but all roles and team members involved must know, understand and be able to actively implement the contents of the test concept. This is crucial in the context of a test-first and built-in quality approach.
– Swiss Testing Day, June 21, 2022
Learn more about the Agile Testing Canvas and exchange ideas directly with its creators at our workshop. Many more highlights await you on June 21, with talks on topics such as performance engineering, automated testing, agile testing, DevOps and more. See the programme for details.
The Agile Testing Canvas
For the past several months, we have been working intensively on this topic. We gave much thought to how a contemporary test concept could be created in an agile environment and what form it should take. The first idea was to create a kind of Definition of Test, modelled on the Definition of Ready and Definition of Done. The focus was on defining the appropriate content items as well as a clear, structured form of presentation. After various considerations and several workshops, we decided that a canvas was the best solution. Not only does a canvas have a clear structure, but having the team elaborate the results is also a big advantage. The test concept is no longer a “one-person show”; it becomes a team effort. The collective approach not only increases the appreciation for testing, it also deepens the understanding of the measures defined and developed.
In my current mandate, we have used the canvas at different levels in the SAFe environment. We defined the framework at the programme level and then added detail at the team level. The resulting canvas can also be seen as a commitment to built-in quality.
The Agile Testing Canvas was designed for agile projects. However, it can – with some adaptation – also be used for hybrid and even waterfall projects. For example, instead of DoR/DoD, milestones and quality gates can be defined.
Design and Structure
The Agile Testing Canvas is divided into eight sections, with the numbering also indicating the order in which the fields should be completed:
- System under Test (SUT)
- Test pyramid
- Test matrix
- Test procedure (pipeline)
- Definition of Ready (DoR)
- Definition of Done (DoD)
- Risk-based testing
- What else?
Let us now take a closer look at each area.
1. System Under Test (SUT)
We start with the first area, the System Under Test (SUT). The test object (including its subsystems) is placed inside the system boundary, represented here as a square. Peripheral systems required for testing are shown outside the system boundary. We could further specify for each of the surrounding systems in which environments they are available, and whether there is a corresponding mock.
Note that the SUT area represents only the system to be tested, together with the peripheral systems required for testing; it is not a complete system architecture diagram.
2. The test pyramid
In the test pyramid (as per Mike Cohn), we note the relevant test levels and assign the corresponding test focus to each. We can also map existing test environments to the different levels (e.g., DEV / TEST / INT / PROD). In line with the test-first principle mentioned earlier, the test effort should be greatest at the lowest level of the pyramid. The further we move up the pyramid, the smaller the number of test scenarios. The degree of automation also decreases as we move up, in contrast to business relevance, which increases.
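The relationships described above can be sketched as a small data model. The level names, environments and figures below are illustrative assumptions, not values prescribed by the canvas:

```python
from dataclasses import dataclass

@dataclass
class PyramidLevel:
    name: str
    environment: str      # e.g. DEV / TEST / INT / PROD
    test_count: int       # number of test scenarios at this level
    automation_pct: int   # degree of automation in percent

# Illustrative figures only -- tailor these to your project.
pyramid = [
    PyramidLevel("Unit tests",        "DEV",       500, 100),
    PyramidLevel("Integration tests", "TEST",      100,  90),
    PyramidLevel("System tests",      "INT",        30,  60),
    PyramidLevel("Acceptance tests",  "PROD-like",  10,  20),
]

# Sanity check: moving up the pyramid, the number of scenarios
# and the degree of automation both decrease.
for lower, upper in zip(pyramid, pyramid[1:]):
    assert upper.test_count < lower.test_count
    assert upper.automation_pct <= lower.automation_pct
```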
3. The test matrix
The test matrix orders test activities according to different viewpoints. We consider four areas: technical, functional, user and operational views.
The first quadrant depicts the technical team view. Here we are at code level, so the tests are documented from the development perspective; this includes white-box test procedures. We aim for a high degree of automation.
In the second quadrant, we map the functional team view onto the functional aspects of the artifacts. Depending on the project structure, these can be features or stories. Initial usability tests can also be targeted here.
In the third quadrant, we look at the functional view of the product, with a focus on users. In addition to the validation of software components that have emerged from a feature or epic, UAT and usability tests across the entire product are also scheduled at this point.
The technical or operational view of the product is presented in the fourth quadrant. This is where everything that belongs to the non-functional requirements (NFR) can be found, such as load and performance tests or security tests. A reference to ISO 25010 is useful at this point.
4. The test procedure (pipeline)
After we have identified and filled in the contents of the test pyramid (field 2) and the test matrix (field 3), we bring them together in the pipeline. It is important that we place activities according to the development steps in the process rather than thinking in traditional test level terms. At this point, we also define responsibilities (at role level). Additional information, such as the necessary test infrastructure, may also be noted. In building the pipeline, we want to follow the built-in quality approach.
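As a rough illustration, such a pipeline can be noted as a simple structure mapping development steps to test activities and responsible roles. All stage names, activities and roles below are hypothetical examples, not part of the canvas itself:

```python
# A hypothetical pipeline sketch: each entry places test activities
# at a development step and names the responsible role.
pipeline = [
    {"stage": "commit",         "activities": ["unit tests", "static analysis"],
     "responsible": "developer"},
    {"stage": "build",          "activities": ["component integration tests"],
     "responsible": "developer"},
    {"stage": "deploy to TEST", "activities": ["automated regression tests"],
     "responsible": "test engineer"},
    {"stage": "deploy to INT",  "activities": ["end-to-end tests", "load tests"],
     "responsible": "test engineer / ops"},
]

# Print the pipeline as a compact overview for the canvas.
for step in pipeline:
    print(f"{step['stage']}: {', '.join(step['activities'])} "
          f"({step['responsible']})")
```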
5. Definition of Ready (DoR)
In the DoR, we define the criteria needed to make a backlog item available for implementation. Here we distinguish between the different abstraction levels of the backlog items (Epic/Feature and Story), because the requirements for the DoR differ between them. From our point of view, it makes sense to regard the DoR as a checklist and not as a rigid constraint. Otherwise, there is a risk that the team will block itself. The DoR should be reviewed at regular intervals and adjusted if necessary.
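Treating the DoR as a checklist rather than a rigid gate can be sketched as follows. The criteria and the helper function are hypothetical examples, not content prescribed by the canvas:

```python
# Hypothetical DoR checklist for a story -- the criteria are examples.
story_dor = [
    "acceptance criteria defined",
    "testable (test data identified)",
    "dependencies known",
    "estimated by the team",
]

def dor_report(story: dict) -> list:
    """Return the DoR criteria a story does not yet satisfy.

    A checklist, not a gate: the team decides how to handle open points.
    """
    return [c for c in story_dor if not story.get(c, False)]

story = {"acceptance criteria defined": True, "estimated by the team": True}
print(dor_report(story))
# → ['testable (test data identified)', 'dependencies known']
```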
6. Definition of Done (DoD)
Whereas the DoR defines the quality criteria a backlog item must meet before implementation, the DoD details the quality aspects required for a backlog item to be regarded as successfully completed. When the DoD is fulfilled, the backlog item becomes a product increment. Again, we distinguish between the different abstraction levels (Epic/Feature and Story) in the DoD.
7. Risk-based testing
Complete testing is impossible, so we need to prioritize. This is best accomplished using a risk assessment method. With the quality risk assessment matrix, we assign a risk rating to each item. The classification results from the multiplication of two values: the probability of error and the extent of damage. We can then define the scope of testing in accordance with the risk classification.
8. What else?
Obviously, we cannot cover every possible detail or concept related to quality in this canvas. We use this field to link to further details. For example, we can link here to an existing test manual or attach concepts related to test data, test automation or defect management.
The development of an agile test concept no longer needs to be a one-person show; instead, the entire team is called upon. In addition to being easy to use, the canvas can be easily adapted to a wide variety of project structures and approaches.
Want to get started now? You can download the canvas here (currently it is only available in German).