Rethinking Our Job as Software Testers: We Should Try to Test as Little as Possible

Summary:

DevOps transforms testing from finding all bugs to prioritizing critical ones. Risk assessment helps testers focus on high-impact areas. Testing can be pre-release (critical issues), post-release (resolvable issues), or out-of-scope (low-impact). Testing should align with business goals and user stories. Data science helps measure and improve software quality.

Software engineering has undergone a huge transformation with the introduction of DevOps practices. Working in shorter Agile sprints is only a part of it, and maybe the least important part.

In my opinion, the most game-changing aspect of DevOps is that our teams now have the opportunity to manage the software we develop once it is released to production and is being used by our end users.

In the past, even when we worked on web-based systems, the responsibility for developing a product was separate from the responsibility for running that same product in production. Running the software we write gives us a number of tools, opportunities, and advantages that were unavailable before.

To put it in more concrete terms, our job as testers was never to find every bug in the system. Finding important bugs before release was, without a doubt, a big part of the value we provided to our organizations. But in the age of DevOps, it is no longer the whole job. We can now extend our area of responsibility to the phases after development has been completed and continue evaluating the quality of our product while it is running and being used by our customers.

Let’s explore what this means, and how we need to rethink our job and the value we provide as Software Quality Engineers within our organizations.

Making Risk Assessment a Bigger Part of Our Job

Risk assessment helps us test smarter, instead of testing harder.

Aimlessly testing every area and component in the system is a waste of resources, and it is something we simply do not have time for when working with Agile or DevOps. It is important to remember that QA is not about catching every bug, but about finding the ones that matter to our teams and stakeholders as they make their decisions.

With risk assessment, we identify high-risk areas where potential issues can have a significant impact on our end users.

Part of our job is to prioritize testing within the stories based on a number of aspects, such as business importance, potential bug impact, and areas where severe bugs have historically been found. We do this to ensure that critical areas and features are thoroughly tested.

Risk assessment is also about planning what to test first, based on which bugs would take more time to resolve if found.

At the end of the day, risk-based testing is about answering the simple yet critical question: If I can only run X number of tests, which ones should I run?
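To make this concrete, here is a minimal sketch in Python of one way to answer that question, scoring each test by the impact of a failure and the likelihood of one, then running the top X. The test names, fields, and scores are hypothetical illustrations, not a prescribed model.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    impact: int      # 1-5: damage to users or the business if this area breaks
    likelihood: int  # 1-5: how often severe bugs have historically appeared here

    @property
    def risk(self) -> int:
        return self.impact * self.likelihood

def select_tests(tests, budget):
    # "If I can only run `budget` tests, which ones should I run?"
    return sorted(tests, key=lambda t: t.risk, reverse=True)[:budget]

suite = [
    TestCase("checkout_flow", impact=5, likelihood=4),          # risk 20
    TestCase("profile_avatar_upload", impact=2, likelihood=2),  # risk 4
    TestCase("payment_gateway", impact=5, likelihood=3),        # risk 15
]
for t in select_tests(suite, budget=2):
    print(t.name, t.risk)  # checkout_flow 20, then payment_gateway 15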

Pre-Release, Post-Release, and What Falls Out of the Testing Scope

Another important way to determine what to test is by classifying tests: what to test before the release, what to test (or observe) after the release, and where we should not invest testing effort at all, instead letting users find and report those bugs, if they exist.

Pre-release testing focuses on the things we definitely don't want our users to encounter once the product has been deployed. It should cover anything that could cause serious damage to users (e.g., data loss), anything that is traditionally harder to resolve quickly (e.g., system performance and response times), and anything that causes less functional damage but has a big impact on the business (e.g., typos on important screens or major UI issues).

Post-release testing, or production monitoring, focuses on areas that are harder or more expensive to test pre-release and whose issues we can resolve quickly by releasing patches or hotfixes. This includes areas such as complex localized environments or devices, specific data sets, custom configurations, and so on.

We should also define what falls outside the scope of our testing: areas with both a low potential for defects and a low expected severity for any bugs that do appear. QA shouldn't be the department that delays the release by testing unnecessary scenarios.
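As a rough illustration, the triage above could be sketched as a simple rule set. The criteria fields, thresholds, and area names below are hypothetical assumptions; a real team would tune them to its own product and release cadence.

from dataclasses import dataclass

@dataclass
class Area:
    name: str
    user_damage: int            # 1-5: severity if it breaks in production
    fix_turnaround_days: float  # typical time to ship a patch or hotfix
    defect_potential: int       # 1-5: how likely bugs are in this area

def triage(area: Area) -> str:
    if area.user_damage >= 4 or area.fix_turnaround_days > 2:
        return "pre-release"   # too damaging, or too slow to fix in production
    if area.defect_potential >= 3:
        return "post-release"  # observe in production; patch quickly if needed
    return "out-of-scope"      # low risk, low severity: let users report it

for a in (Area("data_migration", 5, 7, 3),
          Area("locale_formatting", 2, 0.5, 4),
          Area("legacy_help_page", 1, 0.5, 1)):
    print(a.name, "->", triage(a))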

Align Business Value Goals With User Stories

Effective software testing is closely tied to achieving business objectives. In this sense, testers need to verify that each user story developed is aligned with concrete business value goals that can be measured once the product is deployed to production.

These efforts usually start with the Product team that is in charge of defining software requirements.

Based on that, the Developers translate these specifications into functional software, while Quality Assurance verifies that the software behaves as intended.

Throughout this process, test case design and test execution must remain closely tied to both the business goals and end-user requirements.

Moreover, the testing team should ensure that, as part of the user stories, our Product teams have defined the goals they wish to achieve. These should be concrete, measurable targets for how user behavior should change as a result of the functionality developed (e.g., usage of the feature should grow by at least 3 percent).

Then, as part of our development process, we need to incorporate instrumentation tools to measure these goals. Finally, once the product is released, we measure these instrumentation metrics and compare them with the initially defined goals.
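Here is a minimal sketch of that final comparison step, using the 3 percent growth goal mentioned above; the metric names and numbers are made up for illustration.

def goal_met(baseline_usage: float, current_usage: float,
             target_growth_pct: float) -> bool:
    # Did feature usage grow by at least the target percentage?
    growth_pct = (current_usage - baseline_usage) / baseline_usage * 100
    return growth_pct >= target_growth_pct

# Goal from the user story: usage of the feature should grow by at least 3%.
baseline, current = 12_400, 13_050  # e.g., weekly active users of the feature
print(goal_met(baseline, current, target_growth_pct=3.0))  # True: ~5.2% growth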

Ours should be a closed-loop process, and we need to ensure the loop actually gets closed as part of our work.

Leveraging Data Science to Holistically Measure and Improve

In a DevOps environment, monitoring and data science can be a tester's most important tools.

As I wrote earlier, monitoring provides real-time insights into the performance and health of deployed applications, allowing testers to proactively identify and address potential issues before they escalate, and to learn about the quality of released features. Data science, in turn, empowers testers to extract actionable insights from vast datasets, uncovering hidden trends and patterns that inform strategic decision-making.
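As a simple illustration of the kind of check this enables, here is a sketch that flags a deployed feature whose error rate crosses a threshold. The counts, time window, and threshold are hypothetical, not taken from any particular monitoring tool.

def error_rate(errors: int, requests: int) -> float:
    return errors / requests if requests else 0.0

THRESHOLD = 0.01  # alert when more than 1% of requests fail

window = {"requests": 48_210, "errors": 612}  # e.g., the last 15 minutes
rate = error_rate(window["errors"], window["requests"])
if rate > THRESHOLD:
    print(f"ALERT: error rate {rate:.2%} exceeds {THRESHOLD:.0%}")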

Once features are deployed into production, our attention turns to assessing their impact on our customers' usage patterns and on our business goals.

Some organizations already have product adoption metrics in place, and they measure the ever-changing behavior of their users. But in my experience, many teams do not have these programs in place, and most of those that have them measure only a very limited set of metrics.

Testers have a crucial role here. It's about more than focusing on the technical side of things; it's also about keeping an eye on how users are actually interacting with our product.

By tracking usage data, testers can gain valuable insights into user behavior and satisfaction levels. After all, if we roll out features that nobody ends up using, it's a clear sign that we've missed the mark somewhere.

Adopting a holistic approach is the right way of doing this. By using metrics such as Net Promoter Score (NPS), conversions, and user engagement with new features, testers gain a comprehensive understanding of the software's overall impact on end-users and the business.
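Of these, NPS is straightforward to compute from survey responses on a 0-10 scale: promoters score 9 or 10, detractors 0 through 6, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch with made-up scores:

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)   # scores of 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # scores of 0 through 6
    # Passives (7-8) count toward the total but neither add nor subtract.
    return (promoters - detractors) / len(scores) * 100

print(nps([10, 9, 9, 10, 8, 7, 6, 3]))  # 4 promoters, 2 detractors -> 25.0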

This broader perspective enables testers to identify areas for improvement, driving continuous improvement of the software's quality and user experience.
