While Ford announced that “Quality is Job One,” it is Toyota and Honda that continue to build the most reliable cars year after year. Many, if not all, software companies want to build a quality product, yet few actually build products that meet both their own internal management’s quality expectations and their external customers’ satisfaction. The lack of quality is a symptom of many issues. What can be done to improve quality?
At ReplayTV, I was responsible for getting the world’s first digital video recorder (DVR) accepted for Panasonic branding, the premier brand name owned by Matsushita Electronics Inc. (MEI), which also owns JVC and Magnavox. I worked closely with the QA team from the manufacturing division that pumped out 3 million VCRs a year. In my visits to Japan, I experienced firsthand how they lived and breathed quality day in and day out. Panasonic employed several hundred engineers to build the automated VCR manufacturing line to exact specifications, and there were only a handful of people to watch over the production line in operation. With respect to ReplayTV, they put the hardware through torture tests (shock, extreme heat over extended time) and the software through thousands of test cases. The end result: my original Panasonic-branded ReplayTV is still working perfectly after almost a decade.
Quality is no accident. It requires a lot of work from the conception phase, through design and implementation, and continues through the maintenance phase. In the case of software, maintenance was, is, and will always be the longest portion of the software life cycle, unless the software is quickly withdrawn from the market. The quality problem must be attacked from all directions. Here’s a quick rundown of what I recommend as the key ingredients for a solution to the software quality challenge:
- A requirement review process that includes customers or those who are as close to customers as possible. Do this early and often as requirements change. Many of these requirements would be stated as “bugs” and viewed as quality issues from the customers’ point of view. As discussed in my previous blog, requirements management is critical to a project.
- Employ test driven development and test automation as much as it is practical to do so. Try out different approaches and measure the ROI. It’s a sliding scale and not all or nothing.
- A peer review process for design review, code review, and documentation review. Why peer? While it’s good to have a key leader to facilitate and spot-check, it’s too difficult for any one person to have the time or the detailed knowledge of the entire product. A small group of senior engineers and technical leads can be assigned for specific areas of expertise. Non-engineers can also review design docs and user docs.
- A basic set of metrics to measure quality. Some of the obvious ones include the number of bugs of a given severity (easy to collect) and the time to fix a bug (hard to collect). Whatever the metrics are, it’s important to agree up front on what’s expected, and to adjust accordingly when the actual results come in.
- Formal classes and informal on-the-job training for engineering to ensure that the skills and knowledge are shared within the organization. One common reason for poor quality is that there aren’t enough people who know the product well enough to fix problems at the root.
- Formal classes and informal on-the-job training for technical support to be able to troubleshoot the product.
- Plan a tour-of-duty for developers and QA engineers to work as technical support. While it’s often impractical to dedicate the best developers to customer support, periodic direct exposure to customers is highly valuable.
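The test-driven-development bullet above can be sketched with a tiny example. This is a minimal illustration in Python using the standard-library `unittest` module; the `parse_version` function and its requirements are hypothetical, invented purely for the sketch. The point is the workflow: each test encodes one requirement and is written first, and the implementation is then made just rich enough to pass.

```python
import unittest

# Hypothetical function under test: the tests below were written first,
# and this implementation was then grown just enough to make them pass.
def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    parts = text.strip().split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"not a valid version string: {text!r}")
    return tuple(int(p) for p in parts)

class TestParseVersion(unittest.TestCase):
    # Each test case captures one requirement, stated before coding began.
    def test_plain_version(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_whitespace_is_tolerated(self):
        self.assertEqual(parse_version(" 4.0.1\n"), (4, 0, 1))

    def test_garbage_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()
```

Run with `python -m unittest` in CI, and the same suite doubles as the automated regression net the bullet recommends.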
The list above is by no means comprehensive, but it points out that quality is a multi-pronged challenge and requires a systematic solution. Most software products don’t have to be built like a Sherman tank (or a ReplayTV 🙂 ) and they certainly aren’t.
Decide on how much quality is enough, plan for it, and be willing to pay for it.
Because quality costs.
5 thoughts on “Quality Costs”
Great post! I think the key words here (you mentioned them in the requirements review) are “Early and Often”. The chart that I love to reference to illustrate this is the Cost of Defect Prevention from the Construx group:
This clearly shows the cost to correct a defect found at different stages of the project. Anyone who’s had to fix a showstopper bug POST-launch will not need to look at the big spike on the right to remember how costly (money, time, stress) it is to repair something that could have been caught earlier.
Many people consider quality to be just another ‘feature’ to be traded off against other features for a product, which in turn are balanced with resources and time in the traditional constraint triangle. Burying quality with other features only works for a mature organization where the importance of quality is well understood and managed. For young or immature organizations, quality should be given its own leg on the triangle.
One thing to consider, and this may be controversial for some, is that you can have too much quality. Sure, for the most part, quality pays for itself, and therefore, more is better. But, there is always a break-even point where adding more quality (e.g., removing that last low-priority bug) returns less than the lost revenue for being late to market.
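The break-even point described above can be made concrete with a back-of-the-envelope calculation. All the figures here are hypothetical, chosen only to illustrate the shape of the trade-off, not drawn from any real product:

```python
def marginal_quality_payoff(defect_cost_avoided, weekly_revenue_lost, weeks_of_delay):
    """Net value of one more round of polish (hypothetical illustration).

    defect_cost_avoided -- expected support/rework cost the fix prevents
    weekly_revenue_lost -- revenue forfeited per week of delayed launch
    weeks_of_delay      -- extra schedule the fix costs
    """
    return defect_cost_avoided - weekly_revenue_lost * weeks_of_delay

# A showstopper is clearly worth a week of delay (positive payoff)...
print(marginal_quality_payoff(200_000, 50_000, 1))
# ...while chasing that last low-priority bug may not be (negative payoff).
print(marginal_quality_payoff(10_000, 50_000, 1))
```

Past the point where the payoff goes negative, “more quality” is really just lost revenue, which is exactly the break-even argument.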
Shouldn’t really be controversial… As Peter Drucker states:
“Quality in a product or service is not what the supplier puts in. It is what the customer gets out and is willing to pay for.”
It really is about the customer. I think that sometimes, as product developers, we have gotten lucky — that cool new thing just happens to resonate with enough people. “If we build it, they will come.” We can think of a few “buggy” products that still found willing (even enthusiastic) customers (thanks to those “pioneers” and “early adopters!”), can’t we? — at least until expectations and the perception of quality changes (or, we attract new and different customers).
Absolutely right in my view, Tâm! These recommendations might seem like common sense, but they are certainly NOT common knowledge or common practice. It’s been incredible to me over the years to see the foolhardiness and lack of humility with which people approach launching a product. I started my career in customer service, repairing mass spectrometers and gas chromatographs for HP, and I’ve seen customers in tears over a broken instrument. When I later worked at the factory in manufacturing engineering, you’d better believe I was passionate about quality!
And later when I led an R&D product development project I insisted that the R&D engineers spend time in manufacturing actually helping build the prototypes, which I also insisted be built by the very same people who would build the production versions. At other times I’ve had engineers do “follow me home” programs where they actually have to watch customers struggling to install, learn and use their system. They are always amazed at how challenging it is for customers. Suddenly that bug that was classified a “feature request” becomes a bug again, and they have more passion for fixing it!
But getting engineers to embrace having their work peer reviewed, to take the time to really understand the customer, or to do a couple of shifts in tech support, in manufacturing, or even on a customer visit has been met with only mild enthusiasm in most cases. After all, they have “real” work to do.
. . . unfortunately, without a solid understanding of the challenges faced by the whole process, and especially the customer, it is often “full speed ahead in the wrong direction”!
– Kimberly Wiefling, Author, Scrappy Project Management
Thanks for your comments, Kimberly. While the tour of duty is a simple concept, it’s rarely implemented because of the constant lack of resources in engineering. In the long run, it has tremendous value because one can’t really understand the challenges another person faces until he or she feels the same pain. Intellectual understanding doesn’t yield the same level of responsive action.