
Rethinking QA, Part III - The Downward Spiral

By Daniel Pasco


In my last post, I talked about the horrors of bad QA. I stated that the problems were institutional and came down to how organizations answered two fundamental questions.

The problem is that the answers people come up with often look something like this:

What is QA's purpose? Verify that the app does what it's supposed to do.

How does the QA team fulfill that purpose? Test the app to verify that it meets its requirements.

Functional Verification As An Approach To Application Quality

One reasonable interpretation of the QA team's role is that they ensure that the application performs all of the functions it is supposed to. There's frequently a great deal of functionality to be implemented, and making sure that it's all working without failing covers quite a bit of ground.

Since most projects have a large set of requirements defining what the app should and should not do, verifying that those requirements are actually being met by the application seems like an organic way to do this. I'm not pooh-poohing this; it really is important, especially in terms of identifying regressions (things that used to work but no longer do, typically due to an unexpected side effect of a code change).
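As an aside, automated regression tests are one common safeguard here. The sketch below, in Swift with XCTest, shows the idea; the PriceFormatter type and the bug it pins down are hypothetical, invented purely for illustration:

```swift
import Foundation
import XCTest

// A hypothetical bit of app code standing in for the real thing.
enum PriceFormatter {
    static func string(forCents cents: Int) -> String {
        cents == 0 ? "Free" : String(format: "$%d.%02d", cents / 100, cents % 100)
    }
}

final class PriceFormatterRegressionTests: XCTestCase {
    // Pins down a past (hypothetical) bug where a zero price rendered as
    // "$0.00". If a later code change reintroduces the bug, this test
    // fails instead of an end user finding it.
    func testZeroPriceFormatsAsFree() {
        XCTAssertEqual(PriceFormatter.string(forCents: 0), "Free")
    }
}
```

Once a regression has been caught by a test like this, it tends to stay caught.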

Metrics

As a natural offshoot of this approach, we now have a convenient way to measure the progress of the QA team - the fraction of the requirements that have test cases defined - as well as the progress of the product team itself: How many of those tests can be run at all against the current build? How many of them pass?

Once all the tests pass, the app is theoretically ready to ship.
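To make that concrete, here is a minimal sketch of those three metrics in Swift. Everything here (Requirement, TestCase, and so on) is hypothetical and invented for illustration; in practice these numbers would come out of a test management tool:

```swift
// A requirement and the test cases written against it.
struct Requirement {
    let id: String
    let testCaseIDs: [String]
}

enum TestResult {
    case notRun   // cannot be executed against the current build
    case failed
    case passed
}

struct TestCase {
    let id: String
    let result: TestResult
}

// The three measurements described above, each as a fraction of the whole.
struct QAMetrics {
    let requirementsWithTests: Double   // requirements that have test cases defined
    let runnableTests: Double           // tests that can be run at all on this build
    let passingTests: Double            // runnable tests that currently pass
}

func computeMetrics(requirements: [Requirement], tests: [TestCase]) -> QAMetrics {
    let covered  = requirements.filter { !$0.testCaseIDs.isEmpty }.count
    let runnable = tests.filter { $0.result != .notRun }
    let passing  = runnable.filter { $0.result == .passed }
    return QAMetrics(
        requirementsWithTests: Double(covered) / Double(max(requirements.count, 1)),
        runnableTests: Double(runnable.count) / Double(max(tests.count, 1)),
        passingTests: Double(passing.count) / Double(max(runnable.count, 1))
    )
}
```

When passingTests reaches 1.0, the app is - by this philosophy - ready to ship.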

Bugs

Naturally, issues will be found along the way, and ultimately fixed and verified as part of this process, so QA can also be used to identify and track any defects uncovered during nominal testing.

Required Tasks and Personnel Requirements

We're basically looking at five major activities performed by the QA team under this model:

  1. Develop a test matrix that can be used to assess how much of the required functionality is in place.
  2. Develop a comprehensive test plan that covers these tests in detail.
  3. Test the app to verify that it meets all of its requirements.
  4. Identify and track any issues found during requirements verification.
  5. Measure and report current test coverage.

A manager or senior QA person can track the overall test matrix, help develop the test plan, and provide insight into how close the project is to completion.

The other points are pretty straightforward: figure out everything that the app is supposed to do and keep testing it to verify that it does those things. Open tickets on any issues you find along the way. Easy.

Since the goal of the general QA staff is simply to put the app through its paces and report any issues they find, the job is not considered particularly demanding. The basic task at hand calls for only a rudimentary knowledge of what the app is supposed to do and how to exercise its basic functionality.

This has the following consequences:

  • The job tends to be lower paying and attracts a correspondingly lower-caliber employee, one whose capabilities fall far below those of a typical member of the design or development teams.
  • Since the overall bar for sophistication in the tester and the testing tends to be low, this can (and in my experience, often does) result in subpar results and very poor bug reports.
  • Mastery of the platform on which the application runs is frequently poor.

How Well Does This Approach Work?

As the last post indicated, the answer is often "not very well."

Most QA teams I've seen do develop a test plan - but much of their time is spent elaborating that plan in detail while missing the big picture. Since much of this work is done in a comparative vacuum, there also tends to be a fairly broad divergence between the documentation the QA testers base their plans on and the actual system they are supposed to test, as requirements change over time.

If the number of passing tests becomes an important metric for QA, what is being measured is the progress of the development team in implementing the nominal functionality of the app. This often means that QA is being used as an input for project management.

The Downward Spiral

All of these factors, particularly the low capabilities of the QA staff, tend to create the perfect storm of bad QA discussed previously. What we've typically seen is a mass of unfinished test plans, a slew of very badly written bug reports, and a product that shipped anyway, with significant issues that were found by its end users.

Companies that have tried to incentivize their testers - for example, by rewarding whoever finds the most bugs - have also encountered some bizarre unintended consequences: lots of duplicates, lots of invalid bugs, the same bug filed over and over against different areas, and so on.

The bottom line is that while there may be ways of putting together a low-paid, unempowered, and under-skilled team of people to do this work effectively, in practice such successes seem very rare.

In the cases where the QA team is actually a significant drain on the developers' productivity, one might wonder whether the company would have been better off piling its QA budget on the floor and setting it on fire instead of even trying.

It Gets Worse

This approach to quality assurance is common, and often disastrous. But the worst part is that, in addition to adding considerable expense and entropy to the project, the entire philosophy is inherently flawed.

Consider the following blank rectangle. Let's say that this represents all of the defects in your application. If the entire rectangle were filled, it would mean that every bug in the application had been discovered.

[Figure: an empty rectangle representing every defect in the application]

Requirements verification, although an important part of the overall process, will only identify a subset of these issues. I refer to testing the basic functionality of an application as nominal testing (some people call it happy path testing). The problem is that this testing will only expose a subset of the actual issues in your app:

[Figure: the same rectangle, only partially filled in - the subset of defects exposed by nominal testing]

The areas that haven't been filled in represent the defects still lurking in the application - issues in how it responds to system events, in its error handling, or in other unforeseen circumstances. Nominal testing leaves a huge number of issues unexposed, and they can be devastating.

I will be diving into this in more detail in my next post: Here Be Dragons.

-Daniel

 

Daniel Pasco

Daniel is the CEO of Black Pixel. He tweets at @dlpasco.