When you get invited to a spec review meeting, what’s the first thought that enters your mind? My guess is, “Necessary evil,” or “Okay, time to see how bad things are.” Perhaps you just let out a resigned groan like the guy who cleans up after elephants at a circus.
How about when you attend a code review? Ever leave feeling uneasy—like you just left your home for a week, but you can’t remember if all the doors are locked and the lights are off?
In design reviews, do you just sit there while two people debate a few areas so long that your issues never get heard? Even worse, half the people haven’t read the docs in advance, which may be a good thing because the author didn’t run a spell check on his Swiss-cheese excuse of a design.
Well, sir or madam, you have a problem. Unfortunately, your team is suffering from a common condition: “Failing to Orchestrate Collective Knowledge Effectively for Design.” You’re FOCKED.
A bad combination
You get FOCKED when you don’t differentiate three activities:
· Generating ideas and solutions
· Receiving feedback on work in progress
· Assessing quality and detecting issues in completed work
Most teams combine these activities into one—a review meeting. Some might call it a clustered version of FOCKED.
Naturally, this is a bad move. By combining three different goals into the same exercise, you not only guarantee failure in every activity, but also frustrate and confuse the participants and send mixed messages to the organizer.
How do you avoid consenting to this treatment? Understand what your goals are, and choose the most appropriate and effective method to achieve them. Brainstorming, informal reviews, and inspections are the methods I suggest.
The perfect storm
When you want to generate ideas and solutions, use brainstorming. You can brainstorm in small or large groups or one-on-one in front of a whiteboard. With more than four or five people, use an affinity technique: Everyone gets a stack of Post-it notes. They write down as many ideas as they can and group the notes into common themes.
Regardless of how you brainstorm, the goal is to get as many ideas as possible. There’s no bad suggestion. The best solution may result from parts of many initially unworkable proposals.
Brainstorming is great to use early in the design and specing process, before you’ve written much down. Get together with key stakeholders and feature team members and crank out ideas.
If you can’t pick a solution, use Pugh Concept Selection to decide. This method uses a table to rate each solution against each independent requirement. Each rating is positive, negative, or zero, based on how well the solution fits that requirement compared to your default choice. Then scale each rating by the requirement’s significance. The solution with the highest total wins.
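For the arithmetically inclined, the Pugh scoring described above can be sketched in a few lines of Python. The designs, requirements, and weights below are invented purely for illustration:

```python
# Pugh Concept Selection sketch: rate each candidate design against each
# requirement relative to a default (datum) design. Ratings are +1 (better),
# 0 (same), or -1 (worse), scaled by each requirement's significance.
# All names and numbers here are made-up example data.

requirements = {"secure": 3, "fast": 2, "cheap": 1}  # weight = significance

# Ratings versus the default design, for two hypothetical alternatives
ratings = {
    "design_a": {"secure": +1, "fast": 0, "cheap": -1},
    "design_b": {"secure": 0, "fast": +1, "cheap": +1},
}

def pugh_score(rating, weights):
    """Weighted sum of ratings; the highest total wins."""
    return sum(weights[req] * r for req, r in rating.items())

scores = {name: pugh_score(r, requirements) for name, r in ratings.items()}
winner = max(scores, key=scores.get)
```

Nothing fancy, which is the point: the method’s value is in forcing every solution past every requirement, not in the arithmetic.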
Better yet, don’t pick a solution. Keep each design idea until you discover some constraint or factor that makes it infeasible. Eventually, you’ll be left with the one solution that optimally meets all your requirements. The Toyota folks call this “set-based design.”
Who’s in charge?
Be careful not to confuse brainstorming with “design by committee.” Although many people may contribute and discuss different ideas, there should be one owner of the design. The owner has the final say; her name is the one on the design.
Having a single owner gives the design clarity and consistency. It creates accountability for meeting the requirements and a champion for understanding and defending the spirit of the design choices.
In contrast, design by committee is for groups of wimps who need others to blame when their spineless consensus crud implodes under the smallest of strains.
So, what do you think?
Sometimes you just need to know if you’re headed in the right direction. Reviews of one-page specs can provide this. So can informal peer reviews through e-mail or at team meetings. Reviews of work in progress act as checkpoints against wasted effort caused by early naïve or subtle mistakes.
The problem is that people use this type of review under the guise of issue detection. Informal reviews are pathetic at issue detection compared to other methods. Yet almost all spec, design, and code reviews done in engineering groups are informal reviews.
Trying to use informal reviews for issue detection (like a quick, “Hey, have a look at this bug fix before I check it in”) sets up you, your team, and your products for failure. You get a half-hearted review by unprepared reviewers. Your team gets a constant sense of, “Are these reviews really worth it? I feel like we miss stuff.” And your products get buggy code that you still end up fixing later—hopefully before you ship.
Teams can get disenchanted or think that they have to work the reviews even harder. The shame of it is that informal reviews are wonderful for getting feedback. They were never meant for quality control or assurance. There’s no reason to drag in the whole team, berate them for not reading the docs or code, then slog through an ineffective meeting, all for an informal review.
If you need feedback, ask for feedback. Get feedback early. Make it fun and casual, then thank folks for their help. Walkthroughs (guiding a group through your design or code) and informal reviews are great ways to do this. If you need issue detection for quality control and assurance, you need to use inspections.
It’s just a formality
Inspections are for issue detection in completed work. Period. They stink for generating ideas and solutions or for giving feedback. However, if you want to find all the bugs in your designs and code before it ships, you want inspections.
Inspections list all the issues found and give you an accurate estimate of the issues not spotted. The secret is in the formality. Inspections take no more time than thorough informal reviews, but their formal procedure leaves no issues unaccounted for.
Eric Aside The particular approach to formal inspections I describe here is based on the one used by the Software Engineering Institute’s Team Software Process (TSP).
Here’s a quick summary of the procedure:
· Plan: Ensure that the work is complete, then schedule the meeting.
· Overview: Give all your inspectors (that is, reviewers) enough context to understand your work, a checklist of issue types to find, and a copy of the work itself.
· Prepare: Tell the inspectors to individually list all the issues they find (based on the issue-type checklist).
· Meet: Get inspectors together to merge issue lists, agree on duplicates, determine which issues must be fixed, and decide if the work will require a re-inspection after the issues are corrected.
· Rework: Address all the must-fix issues and as many minor issues as you choose.
· Follow-up: Learn the root cause of the issues you fixed, and update your checklists to avoid those issues in the future. Repeat the inspection process as required.
The magic of inspections is found in the checklist of issue types and in the issue-list merge. However, before we get there, the work must be ready.
Are you ready, kids?
The point of inspections is to find all the remaining issues in your completed work. If the work is incomplete and full of gaps and basic errors, the inspectors will get bogged down in triviality, lose focus on the tougher checklist issues, and hate your guts for wasting their time.
Before you schedule the meeting, make sure you have a clean spec, design document, or source code. Ensure that specs and design docs cover everything you need to have verified. Ensure that source code compiles, builds, and functions. This includes having a clean PREfast run and completing any other aspects of your group’s initial quality bar. Ideally, your team should have someone assigned to verify that the work is complete before scheduling an inspection.
A great way to ensure your work is complete is to conduct a personal inspection. Lots of developers already do this. They look through their own code or designs trying to find issues. Add a one-page checklist of your common mistakes and you’ve got a personal inspection. You’ll want your checklist to contain the most common ways you mess up. Every person’s checklist is unique because everyone makes different kinds of mistakes; for example, I’m great with resource cleanup, but I constantly switch parameters around. Update your checklist after every inspection, removing issues you no longer find and adding new issues you start to notice.
Checking it twice
After your work is complete, send it to inspectors (a few days before the scheduled meeting). Attach enough context for them to understand the work, such as a description and the checklist to use.
The team checklist should be one page, like your personal checklist. However, the team checklist should be filled with issue types that concern your team—for example, particular security, reliability, logic, or failure handling issues that your team’s software is most prone to have. Like your personal checklist, the team checklist should be regularly updated to remove issues that are no longer prevalent (the team has improved) and to add arising issues.
The inspectors then carefully and independently inspect the completed work for the issue types on the checklist, noting the type and line number for each issue. Often inspectors will find issues that aren’t on the checklist. Those issues should be noted as well. Issue types that are prominent but missing from the checklist should be researched for the best way to eliminate them in the future (possibly by updating the unit testing, automated code analysis, or inspection checklist).
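An individual inspector’s notes are just (type, line) pairs checked against the checklist. A minimal Python sketch, with an invented checklist and invented findings:

```python
# One inspector's pass: note (issue_type, line_number) for every hit,
# including issue types that aren't on the checklist.
# The checklist entries and findings below are made-up examples.

checklist = ["security", "failure handling", "logic", "naming"]

findings = [
    ("logic", 42),
    ("failure handling", 120),
    ("resource leak", 88),  # not on the checklist -- note it anyway
]

on_list = [f for f in findings if f[0] in checklist]
off_list = [f for f in findings if f[0] not in checklist]
# Off-checklist types that keep showing up are candidates for the next
# checklist revision (or for new unit tests or automated code analysis).
```

Keeping the off-checklist findings separate is what feeds the research step mentioned above.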
There are many approaches to inspecting. One effective technique is to look through the entire work for each issue type, one at a time, then move to the next issue type on the checklist. Sure, it sounds monotonous, but the lack of context switching makes it an easy and effective method. Regardless of their approach, after a little practice, inspectors get very good at catching a huge percentage of the issues.
By using the same checklists, the inspectors are ready to merge issue lists and determine how many issues they found in common.
Magical merge meeting
At the inspection meeting, the inspector who found the most issues fills out a spreadsheet, listing all his issues. Often an appointed moderator who is experienced with the process handles the data entry and keeps the meeting on track.
For each issue found, the inspectors note who else found the same issue (type and occurrence). Multiple issues can be found on the same source line. In addition to agreeing on duplicates, the inspectors also mark whether each issue must be fixed or can be left to the discretion of the author.
When the first inspector finishes his list, the next inspector follows the same procedure, skipping over issues already noted by the first inspector. This continues until all issues are captured in the spreadsheet. The spreadsheet then automatically computes statistics like defect density, yield, the total number of issues found, and the likely number of issues still lurking in the work.
How is that possible? Because everyone looked at the same work for the same issues and you captured which issues were found and missed by each person. So you can accurately estimate how many total issues were missed overall. You are no longer FOCKED.
Eric Aside The math is based on the old capture-recapture technique used to estimate the number of fish in a lake. You catch n fish at various lake locations, tag them, and release them into the lake. Later you catch another bunch of fish at the same locations and note the percentage tagged. Then you divide n by that percentage and you have a good estimate for the total number of fish. For inspections, the first inspector’s issues are considered “tagged.” The other inspectors “fish” for issues in the same design or code “lake.” The results give a simple yet accurate estimate of the total number of issues.
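To make the capture-recapture arithmetic concrete, here’s a small Python sketch of the merge and the estimate, assuming just two inspectors and invented (type, line) issue lists:

```python
# Capture-recapture estimate of total issues in a work product.
# Each issue is a (type, line) pair; both issue lists are made-up data.

inspector_1 = {("logic", 42), ("naming", 7), ("security", 88), ("logic", 120)}
inspector_2 = {("logic", 42), ("security", 88), ("style", 15)}

found = inspector_1 | inspector_2    # all distinct issues found
common = inspector_1 & inspector_2   # issues both inspectors caught

# Inspector 1's issues are the "tagged fish." Inspector 2 recaptured
# len(common) of them out of len(inspector_2) total, so divide the
# tagged count by that recapture fraction to estimate the population.
estimated_total = round(len(inspector_1) * len(inspector_2) / len(common))
yield_pct = 100 * len(found) / estimated_total
```

With these invented lists, inspector 1 tags four issues, inspector 2 recaptures two of its three, so the estimated total is six; five distinct issues were found, for a yield of about 83 percent, which would clear the 75 percent re-inspection bar mentioned below.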
Tricks of the trade
Naturally, there are lots of little tricks to help you get the most out of inspections:
· While you can do inspections with large groups, typically you only need two or three inspectors to get great results (not counting the author). Start with three to five inspectors to get the hang of it, and then drop the number as people gain skill.
· Just like informal reviews, inspections take time to do right. Plan on three to five pages per hour for docs and 200–400 lines per hour for code. These are reasonable rates. Go faster and you miss issues; go slower and your head turns to mush.
· Don’t inspect too much at once. A thousand lines of pure new code will take around three or four hours to inspect. Be kind and inspect it in chunks.
· Do not discuss fixes at the meeting. Inspections are for issue detection, not for generating ideas. Don’t let yourselves get FOCKED.
· It helps to have a moderator experienced at filling out the spreadsheet and keeping people from discussing fixes.
· The moderator can also be used to verify that work is complete and ready for inspection. Until people get the hang of it (including people new to your group), you should definitely have a moderator.
· Your team should have a quality bar for how many issues are acceptable to pass on to further testing. If the results of your inspection indicate that the number of must-fix issues remaining is too high, the authors must have their work re-inspected after fixing the issues found. A nice guideline: any yield under 75% (meaning the inspection caught fewer than three-quarters of the estimated total issues) requires re-inspection.
Getting it right
When used properly, inspections can decrease the number of issues released to customers by orders of magnitude. Teams both inside and outside Microsoft have seen these results repeatedly. The key is not to confuse your goals for orchestrating collective knowledge.
If you want to generate some ideas or solutions, run a brainstorming meeting, keeping an open mind and avoiding criticism. If you want early feedback, ask for an informal review, allowing people some flexibility and expecting that some issues will be missed. If you want to assess quality and detect issues, organize an inspection, focusing on the findings and not the fixes. Remember that you can always use brainstorming or informal reviews to help you fix the issues you find.
A little communication and clarity can go a long way. Knowing when to apply which techniques for group engagement can take the failure out of FOCKED, and we all know what a profitable strategy that can be.