I’ve been busy dogfooding lately. It’s an ideal diversion for masochists. When it gets to be too much, I can always take respite in a nice horror film. Thank goodness what passes for dogfood now is a vast improvement over years past.
Dogfooding is the practice of using prerelease builds of products for your day-to-day work. It encourages teams to make products right from the start, and it provides early feedback on the products’ value and usability.
Years ago, running a dogfood build and having your machine unplugged were almost indistinguishable in terms of productivity. These days, substantial parts of dogfood builds are fully functional, while others remain unusable, unreliable, or unconscionable. This raises the question, “Why?”
Why are some portions of new builds solid, thoughtful, and engaging, while others remain flaky, unfathomable, and exasperating? How can that be? Ask managers and they’ll say, “Well, it’s tough to tell ahead of time what’s going to be good or rancid.” Sounds like they’re washing down my dogfood with male bovine dung.
Enigma? I don’t think so
Software quality is unpredictable? Don’t make me gag. Poor-quality software has all the subtlety of a neighborhood ice cream truck. You know it’s bad for you; you know it’s coming a mile away; yet you can’t resist. Managers choose to ignore the signs and buy the ice cream (poor software) because they hate disappointing the children (upper management) and can’t resist the instant gratification (“progress”).
We’ve gotten so used to poor software that many people have forgotten the early signs. Let me summarize the rest of this column and make it simple for you. Good software is solid and originated out of complete and critical customer scenarios. Bad software is buggy and originated out of someone’s behind.
Twins of evil
How do you spot bad software before it’s integrated into the main branch? First, remember there are two aspects to quality—engineering and value. Most engineers get caught up in the engineering side of quality—the bugs. However, flawlessly engineered features can be glorified crud to customers if the ideas came from the wrong place. I talk about this more in The other side of quality—Designers and architects.
We’re looking to predict both buggy code and code with questionable pedigree. Predicting buggy code is easier, so we’ll start there.
The usual suspects
In 2003, Pat Wickline studied the root causes of late-cycle bugs in Windows Server 2003. The results were similar to his 2001 study of bugs in Windows 2000 Hotfixes. Simply put, more than 90% of the bugs could have been found by design reviews, code reviews, code analysis (like PREfast), and planned testing. No one method would have found every bug, but in combination they would have caught nearly all of them.
In 2004, Nachiappan Nagappan studied measurable attributes in an engineering system that correlated well to bugs found later. Those attributes were code churn (the percentage of code lines added or changed per file) and code analysis results (the number of PREfast or PREfix defects found per line of code).
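To make those two attributes concrete, here is a minimal sketch of how they might be computed per file. The exact formulas from the study aren't spelled out in this column, so these functions are plausible stand-ins for discussion, not the study's actual definitions.

```python
# Illustrative metrics: churn and analysis-defect density per file.
# The formulas are assumptions for discussion, not the study's own.

def code_churn(lines_added: int, lines_changed: int, total_lines: int) -> float:
    """Percentage of code lines added or changed in a file."""
    if total_lines == 0:
        return 0.0
    return 100.0 * (lines_added + lines_changed) / total_lines

def defect_density(analysis_defects: int, total_lines: int) -> float:
    """Static-analysis defects (e.g., from a PREfast-style tool) per line."""
    if total_lines == 0:
        return 0.0
    return analysis_defects / total_lines

# Example: a 500-line file with 40 added and 60 changed lines,
# and 5 defects flagged by the analyzer.
print(code_churn(40, 60, 500))   # 20.0 (percent churn)
print(defect_density(5, 500))    # 0.01 (defects per line)
```

The point isn't the particular arithmetic; it's that both attributes are cheap to measure automatically at check-in time, which is what makes them useful as predictors.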
He’s updated his thinking to focus on churn and complexity measures.
If you want to prevent poorly engineered code from getting into the main branch, have your build track code churn and code analysis results. If those measures go beyond the norms for quality code, then reject the check-in. If your developers don’t like it, tell them to do more design and code reviews and write more unit tests.
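A check-in gate along those lines could be sketched as follows. The threshold values here are invented for illustration; in practice the norms would come from your own historical data on quality code.

```python
# Hypothetical check-in gate: reject a change when its metrics exceed
# the norms for quality code. Thresholds below are invented examples.

MAX_CHURN_PERCENT = 25.0     # assumed norm for churn per file
MAX_DEFECTS_PER_KLOC = 2.0   # assumed norm for analysis defects

def gate_checkin(files):
    """Return (accepted, reasons) for a proposed check-in.

    `files` is a list of dicts with per-file metrics, e.g.
    {"name": "foo.c", "churn_percent": 12.0, "defects_per_kloc": 0.5}.
    """
    reasons = []
    for f in files:
        if f["churn_percent"] > MAX_CHURN_PERCENT:
            reasons.append(
                f"{f['name']}: churn {f['churn_percent']}% "
                f"exceeds {MAX_CHURN_PERCENT}%")
        if f["defects_per_kloc"] > MAX_DEFECTS_PER_KLOC:
            reasons.append(
                f"{f['name']}: {f['defects_per_kloc']} defects/KLOC "
                f"exceeds {MAX_DEFECTS_PER_KLOC}")
    return (not reasons, reasons)

accepted, reasons = gate_checkin([
    {"name": "parser.c", "churn_percent": 12.0, "defects_per_kloc": 0.5},
    {"name": "render.c", "churn_percent": 40.0, "defects_per_kloc": 3.1},
])
print(accepted)  # False: render.c exceeds both norms
```

The gate rejects the whole check-in, not just the offending file, which is exactly the point: the developer goes back to reviews and unit tests before trying again.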
“What if there’s too much code churn, but the feature enables a complete and critical customer scenario?” you might ask. Allow me first to congratulate you on coming up with the only decent reason to not junk the code entirely. Then junk the code entirely. It’s time for a rewrite of that section. That’s the only way the feature will ever reach your engineering quality goals.
You’re gonna love it
Let’s move on to predicting questionable feature pedigree. Buggy code is easy to measure and control, though it does require management to set a bar and stick to it. The value of software is harder to measure, but in the end it requires the same thing—management must set a bar and stick to it.
How do you know if a feature or check-in will really be valued by customers? That’s easy. If it’s part of a complete and critical customer scenario, then users will love it. How do you know if a scenario is complete and critical? That’s the hard part.
Luckily, you don’t have to do that work. We pay marketing, product planning, and upper management to figure out the complete and critical customer scenarios for a release. No one feature team or product group can do it, because complete scenarios cut across product groups. Instead, engineering’s job is to tell the planners what’s possible, and then solidly implement the planned critical scenarios from end to end.
Quit fooling around
Of course, overzealous engineers of all kinds, not just PMs, will try to sneak in features that aren’t part of planned complete and critical scenarios. While doing so might relieve that engineer’s creative constipation, what comes out is predictably putrid for customers.
To trap poorly conceived features before scarce resources get wasted, you must take two steps:
1. Have a clearly documented vision or value proposition that lists the complete and critical scenarios for the release. Prototypes, personas, user experience designs, and high-level architectures also help immensely in clarifying what’s needed.
2. Convene a governing board that owns the vision or value proposition and have it review every feature. If a feature doesn’t fit a complete and critical scenario, it’s cut. Period. At the beginning of each major milestone, every GPM reviews their list of upcoming features with the governing board. While the board may not review every feature in great detail, it must still ruthlessly and relentlessly uphold the quality, value, and integrity of the release.
The best groups at Microsoft have been following this process for years.
These two steps precisely correspond to setting the bar and sticking to it. While this bar is more subjective than the engineering quality bar, both require the same disciplined commitment by management to be successful.
Quality is no accident
It’s not difficult to predict quality. In fact, it is straightforward. Yet managers at all levels rarely apply the rigor necessary to assure quality.
Maybe managers are afraid assuring quality will add too much time to the schedule. As if doing it right the first time and sticking to only the critical needs takes longer. In fact, when quality is what customers expect, then focusing on quality is always the fastest way to ship.
Maybe managers are afraid engineers won’t like assuring quality. As if engineers take no pride in their work or enjoy ambiguity and wasting their time. In fact, engineers take great pride in the quality of their work, prefer to know what’s expected, and hate wasting effort.
The truth is that quality is expected, quality is fundamental, quality is central to our success. It is because our customers say it is.
Quality is the right thing to do and the right way to do it. It is the key to our future survival and prosperity. Quality is no accident. You can predict and control it. All you need is a brain and a backbone. Get yours today.
While they both vigorously advocate quality, it’s worth noting the differences between Where’s the beef? and Bold predictions of quality. The first discusses why quality is needed and the mechanics of getting it. The second describes how to measure and refine your work to push the quality bar higher. We’ve made significant progress, but quality is an ideal that demands eternal vigilance.