3/29/2007

What are good goals for testing?

Sadly, software testing has not become a prestigious endeavor in our industry. In ASIC design, for example, the verification folks are often the highest paid and lead the teams. In software, test development and test execution are often seen as stepping stones to product development. As a result, test development carries a stigma: it is done as an afterthought and rarely pursued by the best engineers.

Without reasonable tests and test infrastructure, your company will spend a lot of time and effort doing manual testing at each patch or release. You may not be able to respond to customer requirements in a competitive fashion. The worst result may be inflicting poor software on your customers and having them find your bugs while you lose your reputation.

So how can you set good goals and get better results in your testing effort? Here are the basics:

  • Make sure everyone feels responsible for quality
  • Define detailed acceptance criteria that drive what you test
  • Invest in infrastructure for testing
  • Drive toward specific goals around automation and responsiveness

I have talked about many of these issues both in the book and in other blog posts, but I will summarize them here as a reminder.

Everyone needs to feel responsible for quality: from the developer, who should produce exhaustive automated functional unit tests, to the project or release owner, who should drive quality across many functional areas. Break down the organizational walls. Treat quality like performance or reliability and bring to bear the resources you need to make it happen. Don't differentiate between a development specification and a test specification -- make them one document. Send a message that there is only one goal: a quality product for your customers. Send a message that there is only one team, and everyone must work together to achieve quality.
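To make the developer's end of that concrete, here is a minimal sketch of the kind of exhaustive functional unit test a developer might own, using Python's built-in unittest. The parse_quota function under test is hypothetical, standing in for any small piece of product logic:

```python
import unittest

def parse_quota(text: str) -> int:
    """Hypothetical product function: parse a quota like '10GB' to bytes."""
    units = {"KB": 1024, "MB": 1024**2, "GB": 1024**3}
    for suffix, factor in units.items():
        if text.endswith(suffix):
            return int(text[:-len(suffix)]) * factor
    return int(text)

class TestParseQuota(unittest.TestCase):
    def test_all_units(self):
        # Exhaustive over every supported unit, not just one happy path.
        for text, expected in [("1KB", 1024), ("10MB", 10 * 1024**2),
                               ("2GB", 2 * 1024**3), ("512", 512)]:
            self.assertEqual(parse_quota(text), expected)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_quota("tenGB")

if __name__ == "__main__":
    unittest.main()
```

The point is not the function itself but the habit: the person who wrote the code also writes and automates the tests that cover every supported case.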

Don't let acceptance of a project or product mean only that it contains no critical bugs. That is a second-order measure. Define the parametrics for the product around performance, durability, interoperability, compatibility, usability, etc. Decide on them together with all of your developers and product management. Use those criteria, along with exhaustive functional unit tests, to drive the test portion of your specifications. Make sure you identify real customer scenarios as a basis for the tests. Finally, measure, measure, measure the results, both in your schedule as the tests are completed and in your integration as more of the tests are deployed. Do not hide these details behind test points. Expose them in highly visible presentations. Make sure everyone can see what is important to the team in determining progress and whether the project or product is done.
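As a sketch of what parametrics-as-acceptance-criteria can look like, here is one way to express them as data and check measured results against them. The metric names and thresholds below are hypothetical; substitute whatever your team agrees on with development and product management:

```python
# Parametric acceptance criteria expressed as data, so they can be
# agreed on up front and checked mechanically on every run.
ACCEPTANCE_CRITERIA = {
    # metric:            (threshold, comparison, unit)
    "p95_latency_ms":    (200, "<=", "ms"),
    "throughput_ops":    (5000, ">=", "ops/sec"),
    "soak_test_hours":   (72, ">=", "hours without failure"),
    "supported_locales": (12, ">=", "locales verified"),
}

def evaluate(measured: dict) -> bool:
    """Compare measured results against the agreed criteria and print
    a highly visible summary; True only if every criterion passes."""
    all_passed = True
    for metric, (threshold, op, unit) in ACCEPTANCE_CRITERIA.items():
        value = measured.get(metric)
        passed = value is not None and (
            value <= threshold if op == "<=" else value >= threshold
        )
        all_passed &= passed
        status = "PASS" if passed else "FAIL"
        print(f"{status}  {metric}: measured={value} target={op}{threshold} ({unit})")
    return all_passed

if __name__ == "__main__":
    # Hypothetical measurements from one test run.
    evaluate({"p95_latency_ms": 180, "throughput_ops": 5200,
              "soak_test_hours": 72, "supported_locales": 10})
```

Because the criteria live in one visible place rather than in someone's head, the same table can show up in your schedule tracking and your presentations.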

I hate the word framework. It is used to denote so many things that it does not communicate what work is actually being done, and testing frameworks fall into that grouping for me. So what infrastructure actually needs to be in place for people to develop automated tests? Here is a list of the basics, with a sketch tying them together after the list:

  • Tools to set up and tear down a test. These may include tools that can reboot or reinitialize test platforms and/or test drivers automatically, and tools that set up a configuration or data set. How fast they work is clearly a factor.
  • A programmatic interface to the test platform. This may be a CLI, a scripting language, or an API. It must work remotely, and it must control as much of the product as possible so you can test as much as possible. Avoid going through the GUI; test vehicles that drive the GUI are often hard to maintain.
  • A standard output format. Define what you need for summary and detailed information. If every test produces a different output format or, worse, an undecipherable one, you will spend enormous effort figuring out what is working and what is not.
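Here is a minimal sketch of how those three basics fit together, assuming a hypothetical product CLI reachable over ssh. The host name, the product-cli commands, and the JSON result fields are all illustrative, not any real product's interface:

```python
import json
import subprocess
import time

TEST_HOST = "testbed-01"  # hypothetical test platform

def run_remote(command: str) -> subprocess.CompletedProcess:
    """Drive the product through its CLI over ssh, never the GUI."""
    return subprocess.run(["ssh", TEST_HOST, command],
                          capture_output=True, text=True, timeout=300)

def setup():
    """Reinitialize the platform to a known configuration."""
    run_remote("product-cli reset --factory-defaults")  # hypothetical command
    run_remote("product-cli load-config /configs/test-baseline.cfg")

def teardown():
    """Tear the test down so the platform is clean for the next one."""
    run_remote("product-cli reset --factory-defaults")

def run_test(name: str, command: str) -> dict:
    """Run one test and emit a result record in a single standard format."""
    setup()
    start = time.time()
    try:
        proc = run_remote(command)
        status = "pass" if proc.returncode == 0 else "fail"
        detail = proc.stdout
    finally:
        teardown()
    return {
        "test": name,
        "status": status,
        "elapsed_sec": round(time.time() - start, 1),
        "detail": detail,
    }

if __name__ == "__main__":
    result = run_test("snapshot-create", "product-cli snapshot create vol0")
    print(json.dumps(result))  # one summary line any downstream tool can parse
```

The JSON record at the end is the third basic in action: every test, whatever it exercises, reports through one format that the rest of your tooling can parse.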

Obviously there will always be more items you need for a particular product or environment, but your list should always include the basics above.

The last thing you should do is set goals around automation and responsiveness. How much effort should it take to test a patch or a release? How much elapsed time should it take to test a patch or a release? By its nature, the second question affects the answer to the first (i.e., with a limited amount of time you will need to automate more, which reduces the effort). So here are my rules of thumb for elapsed time, with a small budget-check sketch after the list:

  • For a patch you need to be able to broadly but not deeply test your whole product within 12 hours.
  • For a release you need to be able to test your whole product broadly and deeply within 7 days (excluding duration tests for obvious reasons).
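Here is a minimal sketch of checking a suite's total elapsed time against those budgets. The test areas and durations are hypothetical; in practice you would pull them from your test results:

```python
PATCH_BUDGET_HOURS = 12        # broad but not deep
RELEASE_BUDGET_HOURS = 7 * 24  # broad and deep, duration tests excluded

def check_budget(suite: dict, budget_hours: float) -> None:
    """Report whether a suite fits its elapsed-time budget."""
    total = sum(suite.values())
    verdict = "fits" if total <= budget_hours else "EXCEEDS"
    print(f"suite takes {total:.1f}h, {verdict} the {budget_hours}h budget")
    if total > budget_hours:
        # The longest areas are the first candidates for more automation
        # or for running in parallel on additional test platforms.
        for area, hours in sorted(suite.items(), key=lambda kv: -kv[1])[:3]:
            print(f"  biggest cost: {area} at {hours:.1f}h")

if __name__ == "__main__":
    patch_suite = {"install": 1.0, "smoke": 2.5, "api-broad": 4.0, "upgrade": 3.0}
    check_budget(patch_suite, PATCH_BUDGET_HOURS)
```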

The patch-level test is often also used on nightly builds during a development cycle.

Obviously your goal should be to drive the effort for both kinds of test cycles as low as possible. However, there will always be a manual component to verify that GUIs look right.

If you follow the guidelines above you can set better goals around testing and achieve better results for your team and your company.

More later ...
