Specs, by and large, are terrible. Not only PM specs, but dev and test specs too. By terrible, I mean difficult to write, difficult to use, and difficult to maintain. You know, terrible. They are also incomplete, poorly organized, and inadequately reviewed. They’ve always been this way and they aren’t getting better.
I’d love to blame PMs for this, partly because I enjoy it, but mostly because they are the leading source of awful specs. However, the facts don’t support blaming PMs. Everyone writes bad specs, not just PMs. Even PMs who occasionally write good specs mostly write poor ones. And good specs are still difficult to write and maintain, regardless of who authors them.
If PMs aren’t to blame for shoddy specs, who is? Management would be an easy target—another group I’d enjoy blaming. It’s true that some organizations, like the Office division, traditionally produce better specs than others. So clearly management has a role. However, Office has changed management many times over the years, so the cause must be deeper than the people in charge.
It’s a setup
It’s clear that the blame falls squarely on the spec process—how we write specs and the tools we use to write them. The process is cumbersome, difficult, and tedious. The templates are long, intimidating, and complex to the point of being intractable. Basically, we’ve made writing good specs as hopeless as winning a marathon in a fur coat and flip-flops.
Anal anachronistic alarmists will say, “The spec process is absurdly dysfunctional for a reason. All template elements and process steps are needed to avoid past catastrophes.” See, you never have to worry about too much bureaucracy from on high when there’s plenty down low where it counts.
Dysfunctional processes always come from the best of intentions. The trouble is that the original goal and intent was lost somewhere along the way. Revive the goal and intent, and new and better ways to achieve it will present themselves.
Eric Aside I worked in Boeing research for five years. Not all, but most of the bureaucracy there seemed to come from the top. I’ve been at Microsoft for 16 years. Not all, but much of the bureaucracy here seems to come from the bottom. We are free to act independently at the lowest levels. Sometimes that means we’re given enough rope to choke ourselves.
The goal of all PM, dev, and test specs is to communicate design and design decisions to people across time and location. We want to make that communication easy and robust, with plenty of feedback and quality checks.
In case you missed it, those were four separate requirements:
- Easy communication
- Robust communication
- Plenty of feedback
- Quality checks
Each requirement can be satisfied with a different solution. The approach, “We’ll just add more sections to the spec to cover all requirements,” is as idiotic as, “We’ll just add more methods to the class to cover all requirements.” Instead, let’s take on the requirements one at a time.
Keep it simple and easy
The spec needs to be easy to write, understand, and maintain. It should use standard notation, like UML, for diagrams and common terminology for text. It shouldn’t try to be too much or say too much.
The simpler the format the better. The generic spec template in the Engineering Excellence Handbook has 30 sections and three appendices. The superior Office spec template has 20 sections. Both are far too complex.
A spec needs to have three sections plus some metadata:
- Requirements: Why does the feature exist? (Tied to scenarios and personas.)
- Design: How does it work? (Pictures, animations, and diagrams are especially useful.)
- Issues: What were the decision points, risks, and tradeoffs? (For example, dependencies.)
- Metadata: Title, short description, author, feature team, priority, cost, and status.
That’s it. The status metadata could be a workflow or checklist, but that’s the limit of the complexity.
“But what about the threat model? What about the privacy statement? The instrumentation or the performance metrics?” I can hear you demanding. Get a grip on yourself. Those items are quality checks I’ll talk about soon. The spec structure itself is simple, with no more or less than it needs. It’s easy to write and easy to read.
Make it robust
The spec needs to be robust. It must verifiably meet all the requirements, both functional requirements and quality requirements. “How?” you ask. What do you mean, “How?!?” How would you verify the requirements in the first place? You’d write a test, right? Well, that’s how you write a robust spec. In the first section, when you list functional and quality requirements, you include the following:
| Unique ID | Priority | Functional or quality | Short description | Related scenario(s) | Test(s) that verify the requirement has been met |
| --- | --- | --- | --- | --- | --- |
If you can’t specify a test to verify a requirement, then the requirement can’t be met, so drop it. Can’t drop it? Then rewrite the requirement till it’s testable.
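To make that concrete, a vague requirement like "Search should be fast" can't be verified, but rewritten as "Search returns results within 200 ms for a 10,000-item catalog" it maps directly to a test. Here's a minimal sketch of the idea; the `search` function, the requirement ID, and the 200 ms threshold are all hypothetical illustrations:

```python
import time

# Hypothetical feature under test: a simple linear search over a catalog.
def search(catalog, term):
    return [item for item in catalog if term in item]

# REQ-042 (quality requirement): "Search returns results within 200 ms
# for a 10,000-item catalog." Because it's testable, it can stay in the spec.
def test_req_042_search_latency():
    catalog = [f"item-{i}" for i in range(10_000)]
    start = time.perf_counter()
    results = search(catalog, "item-9999")
    elapsed = time.perf_counter() - start
    assert results == ["item-9999"]
    assert elapsed < 0.2  # the 200 ms bound from the requirement

test_req_042_search_latency()
```

The point is the pairing: the requirement row in the spec names this test, and the test exists only because the requirement does.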
Eric Aside I believe there is a basic equivalence in solid designs between tests and requirements. Every requirement should have a test. Every test should stem from a requirement. This results in clear, verifiable requirements; more comprehensive tests; consistent completion criteria (all tests pass = all requirements met); and better designs because test-driven designs are naturally simpler, more cohesive, and more loosely coupled.
The more eyes that see a spec before it’s implemented, the better it will be and the less rework it will require. You want feedback to be easy to get and easy to give. At the very least, put draft specs on SharePoint, using change tracking and version control. Even better, put drafts on a wiki or a whiteboard in the main area for the feature team.
How formal does your process, feedback, and change management need to be? As I discussed in a previous column (Stop writing specs, co-located feature crews), the degree of formality necessary depends on the bandwidth and immediacy of the communication. People working on the same feature at the same time in the same shared workspace can use very informal specs and processes. People working on different features at different times in different time zones must rely on highly formal specs and processes.
Regardless, you want the spec to be fluid till the team thinks it’s ready. How will you know it’s ready? It’s ready when the spec passes inspection by the test team using the quality checks.
Check that quality is built in
Here is where our current specs go farthest off base. Instead of adding security, privacy, and a host of other issues as quality checks, groups add them as separate sections in the spec. This is a disaster, and here’s why:
- Specs become bigger and far more complicated.
- Authors must duplicate information across sections.
- Bottom sections get little attention, causing serious quality gaps.
- Designs become incomprehensible because their description is spread across multiple sections.
- Mistakes and gaps are easy to miss because the whole picture doesn’t exist in one place.
- Updates are nearly impossible because multiple sections are affected by the smallest change.
Instead, the quality checks that apply to every spec are kept in a list everyone can reference. The first few checks will be the same for every team:
- Are the requirements clear, complete, verifiable, and associated with valid scenarios?
- Does the design meet all requirements?
- Have all key design decisions been addressed and documented?
The next set of quality checks is also fairly basic:
- Have all terms been defined?
- Are security concerns addressed?
- Are privacy concerns met?
- Is the UI fully accessible?
- Is it ready for globalization and localization?
- Are response and performance expectations clear and measurable?
- Have instrumentation and programmability been specified?
- Are there issues with compatibility?
- Are failures and error handling addressed?
- Are setup and upgrade issues covered?
- Are maintenance issues addressed?
- Are backup and restore issues addressed?
- Is there sufficient documentation for support to do troubleshooting?
- Are there any potential issues that affect patching?
A team may also add quality checks for their product line or team that reflect particular quality issues they commonly face.
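The checklist-not-sections idea fits in a few lines: one shared list everyone references, per-team additions, and an inspection gate the spec must pass. This sketch is a hypothetical illustration (the check wording and the `inspect_spec` helper are mine, not a real tool):

```python
# Shared quality checks that apply to every spec, kept in one place.
COMMON_CHECKS = [
    "Requirements are clear, complete, verifiable, and tied to scenarios",
    "Design meets all requirements",
    "Key design decisions are addressed and documented",
    "Security concerns are addressed",
    "Privacy concerns are met",
]

def inspect_spec(answers, team_checks=()):
    """Return the checks a spec fails; an empty list means it passes inspection."""
    checks = list(COMMON_CHECKS) + list(team_checks)
    return [check for check in checks if not answers.get(check, False)]

# Usage: the test team records a yes/no answer per check during inspection.
answers = {check: True for check in COMMON_CHECKS}
answers["Privacy concerns are met"] = False
print(inspect_spec(answers))  # lists the failing check
```

The spec itself stays three sections; only the inspection grows when a team adds product-specific checks.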
Online materials Spec checklist (Spec checklist.doc)
The key is that the design section describes the feature completely, while the quality checks ensure nothing is missed. Yes, that means the “How” section could get pretty big to cover all the areas it needs. But those areas won’t be rehashes of the feature specialized for each quality requirement (security for the dialog, privacy for the dialog, accessibility for the dialog).
Instead, the areas will be the feature’s logical components (the API, the dialogs, the menus). Duplication is removed, each feature component is described as a whole, and all the quality requirements are incorporated into the design in context.
Eric Aside In an interesting and funny coincidence, the day after this column was published, Office simplified their spec template to a single design section and a published quality checklist. While I couldn’t claim the credit for the change, I did feel vindicated.
What’s the difference?
With all those checks and tests added, you might ask if I’ve simplified specs at all. Here are the big changes:
- The number of sections is reduced to three (Requirements, Design, and Issues).
- Designs are described completely in one section.
- All functional and quality requirements can be verified.
I’ve also talked about opportunities to make specs less formal and easier to understand.
Who’s to blame for bad specs? We all are, but bad specs are mostly the result of bad habits and poor tools. By making a few small changes and using vastly simplified templates, we can improve our specs, our cross-group communication, and our cross-discipline relations. Altogether, that can make working at Microsoft far more productive and pleasant.