"When you are finished changing, you are finished." — Benjamin Franklin

Quick time to market is something we are all concerned about. We constantly work on breaking work into smaller chunks and pushing them into production. That is one of the primary reasons we have transitioned from Waterfall to Agile: we start with an MVP and keep pushing enhancements, treating product development as a journey rather than a destination.

With multiple, frequent releases, regression testing plays a vital role. Release after release, QA engineers, as the gatekeepers of quality, must ensure that the new functionalities/stories work as expected and that the existing ones are not impacted. In short, there should be no collateral damage.

BUT the volume of the Regression Suite keeps growing, and we cannot stretch the regression window in the same ratio.

Some simple math here:

If each SQUAD/POD delivers, on average, 80 story points per sprint and there are 4 of them, then each sprint almost 320 story points are developed and tested. The poor Regression Suite has a big inflow but hardly any outflow unless we explicitly revisit the suite on a regular basis. Now, should we do it? If yes, how should we do it? That is a separate discussion altogether; we can take it up in a separate thread later.
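The math above can be sketched in a few lines. The numbers here are assumptions for illustration (the starting suite size and the ratio of new regression cases to story points are not from the original discussion); the point is that with inflow and no outflow, the suite only ever grows.

```python
# Sketch of unchecked regression-suite growth over a year of sprints.
# All constants below are assumed for illustration only.

SQUADS = 4                 # squads/PODs, as in the example above
POINTS_PER_SQUAD = 80      # average story points per squad per sprint
CASES_PER_POINT = 1 / 10   # assumed: one new regression case per 10 story points

suite_size = 500           # assumed starting size of the suite
for sprint in range(1, 13):  # roughly one year of sprints
    inflow = int(SQUADS * POINTS_PER_SQUAD * CASES_PER_POINT)
    suite_size += inflow     # inflow only -- nothing is ever retired
    print(f"Sprint {sprint:2d}: +{inflow} cases, suite size = {suite_size}")
```

Under these assumptions the suite grows from 500 to 884 cases in a year, while the regression window stays the same.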

But there is one more aspect of the solution that often takes a back seat: prioritization of the Regression Suite. In which order must the Regression Suite be executed?


It is very important because we need to find the blockers or high-priority/high-severity bugs during the early phase of the regression cycle. I know the LAST thing we all collectively want is a production issue. But what is the SECOND-LAST thing? Finding a Sev1 issue at the eleventh hour. Prioritizing our Regression Suite solves this problem to a great extent.


So who should prioritize the Regression Suite? The QA engineers would be the obvious answer, since they are the ones who take care of it. Isn't it? Well, the answer is NO. We may not be the right people to make this judgement call. Then who is the right person? There is no single person qualified enough to make this call alone. The problem needs to be attacked from all three sides: we need a 3-dimensional approach.

  1. QA Engineers decide which test cases qualify for the Regression Suite.
  2. Developers assess the likelihood/probability of failure of each test case, since they have the best understanding of the code and know which parts are more vulnerable than others.
  3. Business Analysts/Product Owners rate the business criticality of each test case.

Based on the above inputs, we can arrive at a risk ranking of HIGH, MEDIUM, or LOW for each test case. The Regression Suite can then be executed in that order.
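The combination step can be sketched as a small function. The 1–3 scoring scale and the cut-offs between HIGH, MEDIUM, and LOW are assumptions for illustration, not part of the original process; teams usually tune these in a risk matrix of their own.

```python
# Minimal sketch of combining the two ratings into a risk rank.
# Scales and thresholds below are assumed, not prescribed.

def risk_rank(failure_likelihood: int, business_criticality: int) -> str:
    """Combine the developer's failure likelihood and the BA/PO's
    business criticality (each 1 = low .. 3 = high) into a risk rank."""
    score = failure_likelihood * business_criticality  # 1..9
    if score >= 6:
        return "HIGH"
    if score >= 3:
        return "MEDIUM"
    return "LOW"

print(risk_rank(3, 3))  # likely to fail AND business critical -> HIGH
print(risk_rank(1, 2))  # unlikely to fail, moderately critical -> LOW
```

The QA engineer's input acts as the gate before this step: only cases that qualify for the Regression Suite get ranked at all.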


Since this introduces extra work for Developers and Business Analysts, a very strong business case had to be put forth, which was difficult without actual data in hand. As we could not simulate the actual data, we built a hypothesis on the historical defect data from previous releases. We highlighted the few advantages mentioned below, which helped a lot in convincing everyone. After all, all we needed was a couple of releases to back our hypothesis with real data.


1. All the high-risk cases are executed during the early phase of the regression cycle, yielding most of the defects. This minimizes code fixes during the later phase.

2. If we find ourselves unable to execute all the cases, then together as a team we can agree on an execution methodology: for example, a session-based testing approach for Medium-risk cases and exploratory testing for Low-risk cases, or any other combination of that sort.

3. There can be multiple dry runs in the Test region for High-risk cases to find regression defects even before entering the actual regression cycle.
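Putting the pieces together, the execution order described above amounts to a sort by risk rank, with an agreed approach per bucket. The case IDs and the approach mapping here are hypothetical examples, not from the original project.

```python
# Hypothetical sketch: run HIGH-risk cases first, and apply the
# execution approach the team agreed on for each risk bucket.

ORDER = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}
APPROACH = {  # assumed mapping, per the discussion above
    "HIGH": "full scripted execution (plus dry runs in Test region)",
    "MEDIUM": "session-based testing",
    "LOW": "exploratory testing",
}

suite = [  # (test case ID, risk rank) -- illustrative data
    ("TC-101", "LOW"),
    ("TC-042", "HIGH"),
    ("TC-077", "MEDIUM"),
]

for case_id, risk in sorted(suite, key=lambda case: ORDER[case[1]]):
    print(f"{case_id}: {risk} -> {APPROACH[risk]}")
```

Because the sort key is the risk bucket, a time-boxed regression window naturally cuts off the lowest-risk cases first.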

RISK-BASED TESTING has multiple advantages if it is implemented the right way. I will take this up in detail in a separate blog. But the crux of the discussion is that we implemented it and witnessed an almost 20% reduction in regression defects.

We are now planning to implement RBT in other projects as well.

Thanks for reading.

An avid reader, explorer, full stack qa engineer...