I've been doing regression testing for over 20 years, split about 50/50 between waterfall and Agile implementations. Regression testing has several defining characteristics, few of which are attractive no matter what framework you're in. It's a long, tedious, unpleasant process that no one really wants to do. And, like all such processes, it is inherently prone to error. (When was the last time you really put your best effort into something long, tedious, and unpleasant? When was the last time you realistically expected anyone else to?)
Is regression testing really a best practice, particularly in an Agile framework?
Within the Agile/Scrum framework, the nature of regression testing poses several challenges beyond the tedium and unreliability. For example:
In Scrum, nearly everything is timeboxed, from sprints down to daily standups. A regression effort has no timebox, and it disrupts the timeboxes already in place. A regression doesn't start at the beginning of a sprint or end at the conclusion of one. It starts when the code is complete, and ends when it's done…whenever that is, and whatever that means ("all tests pass" is an ideal rarely realized - bugs found during regression are often negotiated into blockers and non-blockers).
All this effectively throws dev teams unceremoniously back into waterfall processes. Sprint cadence is disrupted, task switching ensues as individuals jump between testing and bug fixing, and no one is quite sure what tomorrow will look like. Being "agile" is no longer a consideration, let alone a priority.
Regression testing is an activity that may (and is even likely to) cross several sprint boundaries, so there can be no legitimate sense of scope. We may still pretend there are sprint boundaries by holding all the retros and planning events normally associated with them, but without defined scope these have little practical meaning.
Teams want to start on new feature work as quickly as possible. When a sprint boundary is crossed and regression work remains, teams (or a subset of team members) are likely to task switch away from regression work to "work ahead" on features. They explain to each other and to their supervisors that they are making forward progress on new features that would otherwise be stalled if everyone stayed 100% focused on regression. It's a compelling argument, and one easily embraced at all levels of an organization.
However, once developers have task-switched away from regression, it can be challenging to bring their focus back to it (to investigate an issue that can't be pinned down or reproduced, for example). And that switch back is all but inevitable.
This causes significant slowdowns both in the regression and new feature work, and the entire process struggles. Commitments are missed and release plans need to be re-evaluated.
Is manual regression necessary?
In a perfect Agile framework, no. The breadth and depth of automated tests will eliminate the need for manual regression.
But what about in a legacy system, or any system where the automated testing isn't fully developed?
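One common way to start automating regression in a legacy system is characterization (or "golden master") testing: pin the system's current observed behavior in tests, so that any future change fails immediately rather than surfacing in a dedicated regression phase. A minimal sketch, where `legacy_discount` and its pinned cases are entirely hypothetical stand-ins for real legacy code:

```python
# Hypothetical legacy function whose exact rules are undocumented;
# the name and logic are illustrative, not from any real codebase.
def legacy_discount(order_total, customer_years):
    """Pretend legacy pricing logic we dare not rewrite blind."""
    rate = 0.05 if customer_years >= 5 else 0.02
    if order_total > 1000:
        rate += 0.03
    return round(order_total * (1 - rate), 2)

# Characterization tests: pin today's observed outputs so any behavior
# change is caught on every run, not during a regression phase.
PINNED_CASES = [
    ((100, 1), 98.00),
    ((100, 6), 95.00),
    ((1500, 6), 1380.00),
]

def test_legacy_behavior_is_unchanged():
    for args, expected in PINNED_CASES:
        assert legacy_discount(*args) == expected
```

The pinned outputs are recorded from the running system, not derived from a spec; the tests assert "the behavior hasn't changed," which is exactly the question a regression pass exists to answer.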
Scrum and Scaled Scrum principles offer a way through the regression problem with an artifact we are all familiar with: the Definition of Done (DoD).
With a well-crafted DoD, commonly shared and adhered to across all project teams, regression testing should never need to be done. (It might be more proper to say that regression testing is always being done, but I can't propose it like that or no one will want to be on the project!)
In a Scrum project, whether small or scaled, it's important that the DoD reflect the activities needed to create a potentially-shippable "done" increment at least once per sprint. This includes identification, creation, and execution of all necessary tests, not just within the feature itself, but also as the feature fits into the larger system. Anything the feature may potentially affect needs to be tested with it.
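One way to read "anything the feature may affect needs to be tested with it" in code: the feature's own test ships alongside tests for the invariants it could break, and the whole set runs on every change. A minimal sketch, with a hypothetical `reserve_stock` feature and inventory invariant:

```python
# Illustrative only: module, function, and invariant are hypothetical.
INVENTORY = {"widget": 10}

def reserve_stock(item, qty):
    """New feature: reserve stock for an order."""
    if INVENTORY.get(item, 0) < qty:
        raise ValueError("insufficient stock")
    INVENTORY[item] -= qty

def test_reserve_stock():
    # Test for the feature itself.
    INVENTORY["widget"] = 10
    reserve_stock("widget", 3)
    assert INVENTORY["widget"] == 7

def test_inventory_never_negative():
    # Test for an invariant elsewhere in the system that the
    # feature could plausibly break: its "blast radius."
    INVENTORY["widget"] = 2
    try:
        reserve_stock("widget", 5)
        assert False, "should have raised"
    except ValueError:
        pass
    assert INVENTORY["widget"] == 2
```

Because both kinds of test are part of the DoD and run together every sprint, the regression check happens continuously rather than as a separate phase.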
The increment is not potentially-shippable until there is confidence that the larger system has not been broken as a result of the new implementation. This implies a perfect DoD, which may not be achievable, but should be constantly reflected on and refined in cross-team retrospectives.
The result of a well-enough-crafted DoD is that at the end of any given sprint, a dedicated regression test effort can be confidently discarded as wasted effort - because it's already been done.
Without the length, tedium, unpleasantness, and errors.
Does your team forgo regression testing because it is continually testing or has extensive automated tests? If not, do you need help moving in that direction? Join the conversation below or Contact Us.