Determining the quality of your software can be a complex matter, yet product quality is critical to the success of your company. Creating a software product is a creative process, seemingly unfettered by procedures that could build quality in. Too often, quality is something you measure with post-production testing rather than something you plan from the start. How can early stage companies create a quality product without adding delay and an excessive burn rate?
Remember when the quality of Japanese cars was so much better than that of American cars? Ford Motor Company recognized the problem and changed their ways. I still remember their old advertising tag line, “At Ford, Quality is Job One.” Are they perfect? Of course not. But their quality has definitely improved over the years.
Is this true at companies creating a software product? Usually not. A more truthful slogan is likely to be “At [insert software company name], Getting the Damn Product to Market is Job One!” Quality Assurance, or QA, is an afterthought. Or as one cynical software manager once told me, “QA? Isn’t that what Beta customers are for!?” (More about Alpha and Beta releases later.)
Let’s look at the problem of creating a quality software product. There are several ways to define quality, and corresponding ways to test for it.
Some aspects of quality are baked in during software development and others can only be tested after the product is completed. For example, solving the right problems, basic usability and responsiveness should be addressed during design and development. Different requirements may be discovered after a release is made, but the goal is to minimize rework.
Other testing can only be completed after development. For example, environment tests on different computers and operating systems are typically done with a product close to release, or after release. Specific portability guidelines can still be followed, and small tests made, during development.
Another issue with QA is where to put it in your company organization. Should QA report to Engineering, Marketing, Operations, or someplace else? Almost everyone I know has QA report to Engineering. In the past, some organizations put it in Marketing or Customer Service as a true representative of the customer viewpoint. The danger of that approach is that Engineering no longer takes ownership for building quality in from the beginning.
Moving QA out of Engineering also creates problems at Application Service Providers (ASPs) running a web application for customers and subscribers. These companies often have a separate Operations group responsible for the web site, backups, security, and so on. Even so, the performance of the web application itself should remain Engineering’s responsibility.
In other words, the engineers that created the software should carry the pagers that squeal when the web application is spitting error messages in the middle of the night.
Yahoo does it this way. Sure, they have an Operations group that handles backups and hardware failures. But if Yahoo Classified, Yahoo Store or My Yahoo has problems, it is the engineers that take ownership and respond.
Can you use offshore outsourcing to enhance the QA process? If you are using offshore outsourcing to develop all or a part of your product, make sure they recognize quality as one of the deliverables.
Providing a specification helps. It should define your product and specify the required performance, as well as the environments where the software must run. You can also define the required approach to unit testing, coding standards, and source code documentation.
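As an illustration, the unit-testing approach in a spec usually translates directly into tests. Here is a minimal sketch in Python, assuming a hypothetical search_catalog() function and an invented two-second response requirement; the names and numbers are not from any real product.

```python
# Sketch of spec-driven unit tests. search_catalog() and the 2-second
# response limit are assumptions for illustration, not a real product's API.
import time
import unittest

from myproduct.catalog import search_catalog  # hypothetical module


class TestSearchAgainstSpec(unittest.TestCase):
    def test_returns_only_matching_items(self):
        results = search_catalog(keyword="widget")
        self.assertTrue(all("widget" in item.name.lower() for item in results))

    def test_meets_response_time_requirement(self):
        start = time.time()
        search_catalog(keyword="widget")
        # The (hypothetical) spec requires results in under 2 seconds.
        self.assertLess(time.time() - start, 2.0)


if __name__ == "__main__":
    unittest.main()
```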
QA is also a terrific use of offshore outsourcing after your product is already developed. This is especially true for complex products requiring testing of many different scenarios and conditions. A good offshore outsourcing team can set up an automated testing procedure for regression tests and testing of your product’s portability.
You should take a pragmatic approach to software QA. It is often impossible to test everything because of limits of time and budget.
I was responsible for the testing of a database product at one of my startups. We received a few bug reports of slow searches, so I decided we should test every combination of inputs to find, and then fix, the slow searches. It turned out there were literally billions of input combinations. Even running one search combination per second, it would have taken decades to do all the searches! I finally created a list of the hundred most likely search combinations to test and found that to be sufficient.
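The arithmetic is easy to sanity check. A rough sketch (the per-field option counts below are invented for illustration) shows how quickly input combinations multiply:

```python
# Back-of-the-envelope math behind "test every combination" being impossible.
# The per-field option counts are invented; the point is how fast they multiply.
from math import prod

options_per_field = [12, 8, 50, 30, 20, 15, 6, 10]   # eight search fields
combinations = prod(options_per_field)                # about 2.6 billion
years = combinations / (60 * 60 * 24 * 365)           # at one search per second
print(f"{combinations:,} combinations ~ {years:,.0f} years at one search/second")
```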
Automated testing is typically used for regression tests and sometimes for tests of the GUI. Automated user interface testing is only practical when the user interface is stable; it takes significant effort to keep test scripts up to date with a changing user interface.
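For regression tests, one common pattern is to replay saved inputs and compare the result against previously approved output. A minimal sketch, assuming a hypothetical generate_invoice() function and fixture files that exist only in this example:

```python
# Sketch of an automated regression test: replay a saved order and compare
# the result with previously approved ("golden") output. generate_invoice()
# and the fixture file names are assumptions for illustration.
import json
import unittest

from myproduct.billing import generate_invoice  # hypothetical function


class TestInvoiceRegression(unittest.TestCase):
    def test_matches_approved_output(self):
        with open("tests/fixtures/order_42.json") as f:
            order = json.load(f)
        with open("tests/fixtures/invoice_42_expected.json") as f:
            expected = json.load(f)
        # Any difference from the approved output is flagged as a regression.
        self.assertEqual(generate_invoice(order), expected)


if __name__ == "__main__":
    unittest.main()
```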
Other automated tests vary inputs randomly for Black Box tests, or use algorithms to support White Box tests. The terms Black Box and White Box testing come up often in software QA.
Black box testing focuses on the external behavior of the software product. It does not consider how the software was written or structured. It is appropriate for use case scenarios and individual feature/function testing.
For example, to use a product function like “place an order”, the user must complete a specific series of steps like: find an item, put it in the shopping cart, enter shipping and payment information, etc. You don’t care what the program is doing to complete each step; only that each step is completed successfully, with no error messages, until the end result is achieved.
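A black box test for this scenario drives only the product’s external interface. The sketch below assumes a hypothetical web storefront running locally; the base URL, paths, and form fields are invented, not a real product’s API:

```python
# Black box sketch of the "place an order" scenario, driving the product only
# through its web interface. The base URL, paths, and form fields are invented.
import unittest

import requests


class TestPlaceOrderScenario(unittest.TestCase):
    BASE = "http://localhost:8000"  # assumed test deployment of the product

    def test_order_flow_completes_without_errors(self):
        session = requests.Session()

        # Step 1: find an item.
        r = session.get(f"{self.BASE}/search", params={"q": "coffee mug"})
        self.assertEqual(r.status_code, 200)

        # Step 2: put it in the shopping cart.
        r = session.post(f"{self.BASE}/cart", data={"item_id": "mug-01", "qty": 1})
        self.assertEqual(r.status_code, 200)

        # Step 3: enter shipping and payment information.
        r = session.post(f"{self.BASE}/checkout", data={
            "ship_to": "1 Main St", "card": "4111111111111111"})
        self.assertEqual(r.status_code, 200)

        # End result: an order confirmation, with no error messages.
        self.assertIn("order confirmed", r.text.lower())


if __name__ == "__main__":
    unittest.main()
```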
When you design White Box tests, you use knowledge of the inner workings and structure of the code to design a more comprehensive test. For example, you may know that entering data in a particular field on the screen causes the product to access a specific database table. You can use facts like this to design your tests to cover access to all database tables.
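Even a simple mapping of screen fields to database tables, which only someone with knowledge of the code could write down, lets you check that a planned set of tests touches every table. The mapping below is invented for illustration:

```python
# White box sketch: knowing which screen fields touch which database tables
# lets you check that a planned test run covers every table. The mapping is
# invented for illustration.
FIELD_TO_TABLE = {
    "customer_name": "customers",
    "item_id": "inventory",
    "card_number": "payments",
    "ship_to": "shipments",
}


def untested_tables(fields_exercised_by_tests):
    """Return the database tables no planned test will ever touch."""
    covered = {FIELD_TO_TABLE[f] for f in fields_exercised_by_tests}
    return set(FIELD_TO_TABLE.values()) - covered


# The planned tests fill in three of the four fields...
print(untested_tables(["customer_name", "item_id", "card_number"]))
# ...so the shipments table would go untested: {'shipments'}
```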
Test Coverage refers to how much of your product’s source code is executed when tests are run. If a series of tests causes every line of code in your product to be executed, you have 100% test coverage. This is hard to measure in complex products without the use of software tools.
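In Python, for example, the coverage.py tool can produce that percentage. A minimal sketch, assuming the product code lives in a myproduct package and the test suite in a tests.test_all module (both names are assumptions):

```python
# Measure how much of the product's code the test suite actually executes,
# using the coverage.py tool. The myproduct and tests.test_all names are
# assumptions about how the project is laid out.
import unittest

import coverage

cov = coverage.Coverage(source=["myproduct"])  # measure only product code
cov.start()

unittest.main(module="tests.test_all", exit=False)  # run the whole test suite

cov.stop()
cov.save()
cov.report()  # prints the percentage of product lines executed by the tests
```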
If you are willing to send source code offshore, or it comes from there to begin with, then you can use offshore QA for white box testing. Also, the programming needed for regression and other automated testing is an excellent set of tasks to outsource.
Whether you use offshore outsourcing for development, QA, or both, you should use a bug tracking system accessible over the Internet. Both testers and developers should be able to submit bugs and run reports. Attaching files to bug reports is very useful for providing information like tracebacks, output files, and screen shots. (See the hint in Compile Time below for capturing screen shots.)
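Whatever tracker you pick, each bug ends up as a structured record. A sketch of the fields worth capturing, which is illustrative rather than any particular tracker’s schema:

```python
# Illustrative bug report record; not the schema of any particular tracker.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BugReport:
    title: str
    severity: str                 # e.g. one of the levels discussed below
    steps_to_reproduce: str
    reported_by: str
    assigned_to: str = ""         # left empty until triage (see below)
    attachments: List[str] = field(default_factory=list)  # tracebacks, logs, screen shots


bug = BugReport(
    title="Search hangs on an empty keyword",
    severity="major function broken",
    steps_to_reproduce="1. Open the search page. 2. Submit with no keyword.",
    reported_by="offshore-qa",
    attachments=["traceback.txt", "screenshot.png"],
)
```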
Effective bug tracking defines several severity levels for bugs, and each bug is assigned a level when it is reported; a sketch of one possible classification scheme appears after the next paragraph.
Decide ahead of time how many known bugs of each severity level will be allowed in a product release. For example, you may decide customers can tolerate no severe bugs and only a limited number of minor ones; one way to express such limits is sketched below.
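As a rough illustration only, the severity levels and limits below are assumptions rather than a recommended policy; a release gate built on them might look like this:

```python
# Illustrative severity levels and per-release limits; the names and numbers
# are assumptions, not a recommended policy.
from collections import Counter

SEVERITY_LEVELS = [
    "crash or data loss",       # most severe
    "major function broken",
    "workaround exists",
    "cosmetic",                 # least severe
]

# Maximum known, unfixed bugs of each severity allowed in a release.
RELEASE_LIMITS = {
    "crash or data loss": 0,
    "major function broken": 0,
    "workaround exists": 10,
    "cosmetic": 25,
}


def release_is_acceptable(open_bug_severities):
    """open_bug_severities: one severity string per known open bug."""
    counts = Counter(open_bug_severities)
    return all(counts[sev] <= limit for sev, limit in RELEASE_LIMITS.items())


print(release_is_acceptable(["cosmetic"] * 5 + ["workaround exists"] * 3))  # True
print(release_is_acceptable(["crash or data loss"]))                        # False
```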
I do not recommend having the bug tracking system send bug reports directly to development engineers. You should have a triage process first, to assign and send bugs. Otherwise developers get distracted by bug reports that may be less important than the feature they need to complete for the next release.
Before a release can be made, the software product should pass an Acceptance Test, usually one or two major use case scenarios that show the product’s basic functionality is intact. Engineering should run these tests, and then someone in Marketing or Customer Support should do the same to confirm the release is acceptable. That last step is especially important when you are using offshore outsourcing.
Alpha and Beta releases of your product can precede Version 1.0. There are multiple definitions for these releases, depending on your goals and customer situation. I consider an Alpha release to be a major internal release that shows product development is on track. Basic functions and use cases should be implemented, but the product is not necessarily ready for use by customers. An Alpha release can be preceded by multiple other internal releases to monitor development progress.
A Beta release can be used as the first release provided to eager and waiting customers. It should provide basic value to the customer and not just be a collection of loosely related features. A Beta has received only a limited amount of testing, and customers should be told they will probably find bugs.
Quality assurance of your software product can be critical to your success. Ensure your product works and provides a great user experience, so your customers get all the benefits you promise and say, “Wow, it doesn’t get any better than this!”