Avoid these seven common pitfalls and reap the rewards of this valuable software engineering practice.
by Karl Wiegers
Reprinted from Software Development Magazine
A quarter-century ago, Michael Fagan of IBM developed the software inspection technique, a method for finding defects through manual examination of software work products (anything produced on a project, such as code, documentation, and so forth) by a group of the developer's peers. Many organizations have achieved impressive results from inspections, including Raytheon, Motorola, Bell Northern Research, Hewlett Packard, and Bull HN Information Systems. However, other organizations have had difficulty getting any kind of software review process going. Considering that effective technical reviews are one of the most powerful software quality practices available, all software groups should become skilled in their application.
In this article, I will describe seven common problems that can undermine the effectiveness of software technical reviews of any type (inspections being a specific type of formal review) and several symptoms of each problem. I'll also suggest several possible solutions that can prevent or correct each problem. By laying the foundation for effective software technical reviews and avoiding these common pitfalls, you too can reap the benefits of this valuable quality practice.
Sin 1: Participants Don't Understand the Review Process

Symptoms. Many software engineers don't instinctively know how to conduct and contribute to software reviews. Review participants may have different understandings of their roles and responsibilities, and of the activities performed during a review. Team members may not know which of their work products should be reviewed, when to review them, who should participate, and what review approach is most appropriate in each situation.
Team members may not understand the various types of reviews that can be performed. The terms "review," "inspection," and "walkthrough" are often used interchangeably, although they are not the same beast. A lack of common understanding about review practices can lead to inconsistencies in review objectives, review team size and composition, forms used, record keeping, and meeting approaches. Too much material may be scheduled for a single review because participants are not aware of realistic review rates. It may not be clear who is running a review meeting, and meetings may lose their focus, drifting from finding defects to solving problems or challenging the developer's programming style. The consequences of this confusion are typically missed defects, frustration, and an unwillingness to participate in future reviews.
Solutions. Training is the best way to ensure that your team members share a common understanding of the review process. For most teams, four to eight hours of training should be sufficient, though you may wish to obtain additional specialized training for those who will act as inspection moderators. Training can be an excellent team-building activity, as all members of the group hear the same story on some technical topic and begin with a shared understanding and vocabulary.
Your group should also adopt some written procedures for how to conduct reviews. Procedures help review participants understand their roles and activities, so they can consistently practice effective and efficient reviews.
Your peer review process should include procedures for both formal and informal reviews. Not all work products require formal inspection (though inspection is indisputably the most effective review method), so a palette of procedural options will let team members choose the most appropriate tool for each situation. Adopt standard forms for recording issues found during review meetings, and for recording summaries of the formally conducted reviews. Good resources for guidance on review procedures and forms are Software Inspection Process (McGraw-Hill, 1994) by Robert Ebenau and Susan Strauss and Handbook of Walkthroughs, Inspections, and Technical Reviews, 3rd Edition (Dorset House, 1990) by Daniel Freedman and Gerald Weinberg.
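As a purely illustrative sketch of the kind of information a standard issue-logging form might capture (the field names and example values below are my assumptions, not taken from the books cited above), a recorder's entry might hold something like this:

```python
# Hypothetical sketch of an issue record a review recorder might fill in.
# All field names and example values are illustrative assumptions;
# adapt them to whatever forms your group adopts.
from dataclasses import dataclass

@dataclass
class ReviewIssue:
    work_product: str   # item under review, e.g., a source file or spec section
    location: str       # page, section, or line reference
    description: str    # what the reviewer observed
    severity: str       # e.g., "major" or "minor"

issue = ReviewIssue(
    work_product="order_handler.py",
    location="lines 120-135",
    description="Discount calculation does not handle negative quantities",
    severity="major",
)
print(issue)
```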
Sin 2: Reviewers Critique the Producer, Not the Product

Symptoms. Initial attempts to hold reviews sometimes result in personal assaults on the skills and style of the work product author. A confrontational style of raising issues exacerbates the problem. Not surprisingly, this makes the developer feel beaten down, defensive, and resistant to legitimate suggestions that are raised or defects that are found. When developers feel personally attacked by other review participants, they will be reluctant to submit their future products for review. They may also look forward to reviewing the work of their antagonists as an opportunity for revenge.
Solutions. When helping your team begin reviews, emphasize that the correct battle lines pit the developer and his or her peers against the defects in the work product. A review is not an opportunity for reviewers to show how much smarter they are than the developer, but rather a way to use the collective wisdom, insights, and experience of a group of peers to improve the quality of the group's products. Try directing your comments and criticisms to the product itself, rather than pointing out places the author made an error. Practice stating your comments in the first person: "I don't see where these variables were initialized," not "You forgot to initialize these variables."
In an inspection, the roles of the participants are well-defined. One person, not the author, is the moderator and leads the inspection meeting. In the review courses I teach, students often ask why the author does not lead the inspection meeting. One reason is to defuse the confrontational nature of describing a defect directly to the person who is responsible for the defective work product. I have found the best results come when both reviewers and author check their egos at the door and focus on improving the quality of a work product.
Sin 3: Reviews Are Not Planned

Symptoms. On many projects, reviews do not appear in the project's work breakdown structure or schedule. If they do appear in the project plan, they may be treated as milestones rather than tasks. Because milestones take zero time by definition, the non-zero time that reviews actually consume may make the project appear to slip off schedule because of them. Another consequence of not planning reviews is that potential review participants often haven't set time aside to participate.
Solutions. A major contributor to schedule overruns is inadequate planning of the tasks that must be performed. Not including these tasks in the schedule doesn't mean you won't perform them; it simply means that when you do perform them, the project may wind up taking longer than you expect. The benefits of well-executed software technical reviews are so great that project plans should explicitly show that key work products will be examined at planned checkpoints. When reviews do cause slippage, it is more often because of the defects they uncover than because of the time the reviews themselves consume.
When planning a review, estimate the time required for individual preparation by the reviewers, the review meeting (if one is held), and likely rework. (The unceasing optimism of software developers often leads us to forget about the rework that follows most quality assurance activities.) The only way to create realistic estimates of the time needed is to keep records from your reviews of different work products. For example, you may find that your last 20 code reviews required an average of five staff-hours of individual preparation time, eight staff-hours of meeting time, and seven staff-hours of rework. Without collecting such data, your estimates will forever remain guesses, and you will have no reason to believe you can realistically estimate the review effort on your future projects.
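As a minimal sketch of how such historical records might drive an estimate (the record format, staff-hour figures, and function below are hypothetical, echoing the example above rather than any prescribed inspection procedure), the arithmetic is straightforward:

```python
# Hypothetical sketch: estimating review effort from your own historical records.
# The staff-hour figures are illustrative, not measured data.

past_code_reviews = [
    # (preparation, meeting, rework) in staff-hours for earlier code reviews
    (5.0, 8.0, 7.0),
    (4.5, 7.5, 6.5),
    (5.5, 8.5, 7.5),
]

def average_effort(history):
    """Return average preparation, meeting, and rework effort in staff-hours."""
    n = len(history)
    prep = sum(r[0] for r in history) / n
    meeting = sum(r[1] for r in history) / n
    rework = sum(r[2] for r in history) / n
    return prep, meeting, rework

prep, meeting, rework = average_effort(past_code_reviews)
print(f"Plan roughly {prep:.1f} + {meeting:.1f} + {rework:.1f} "
      f"= {prep + meeting + rework:.1f} staff-hours for the next code review")
```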
Sin 4: Review Meetings Drift into Problem Solving

Symptoms. Software developers are creative problem solvers by nature. We enjoy nothing more than sinking our cerebrums into sticky technical challenges, exploring elegant solutions to thorny problems. Unfortunately, this is not the behavior we want during a technical review. Reviews should focus on finding defects, but too often an interesting bug triggers a spirited discussion about how it ought to be fixed.
When a review meeting segues into a problem solving session, the progress of examining the product grinds to a halt. Participants who aren't equally fascinated by the problem at hand may become bored and tune out. Debates ensue as to whether a proposed bug really is a problem, or whether an objection to the developer's coding style indicates brain damage on the part of the reviewer. Then, when the reviewers realize the meeting time is almost up, they hastily regroup, quickly flip through the remaining pages, and declare the review a success. In reality, the material that is glossed over likely contains some major problems that will come back to haunt the development team in the future.
Solutions. The kind of reviews I'm discussing in this article have one primary purpose: to find defects in a software work product. Solving problems is usually a distraction that siphons valuable time away from the focus on error detection. One reason inspections are more effective than less formal reviews is that they have a moderator who controls the meeting, including detecting when problem solving is taking place and bringing the discussion back on track. Certain types of reviews, such as walkthroughs, may be intended for brainstorming, exploring design alternatives, and solving problems. This is fine, but don't confuse a walkthrough with a defect-focused review such as an inspection.
My rule of thumb is that if you can solve a problem with no more than one minute of discussion, go for it. You have the right people in the room and they're focused on the issue. But if it looks like the discussion will take longer, the moderator should remind the recorder to note the item and ask the author to pursue solutions after the meeting.
Rarely, you may encounter a show-stopper defect, one that puts the whole premise of the product being reviewed into question. Until that issue is resolved, there may be no point in completing the review. In such a case, you may choose to switch the meeting into problem-solving mode; but if you do that, don't pretend you completed the review as originally intended.
Sin 5: Reviewers Aren't Prepared

Symptoms. You arrive at work at 7:45 a.m. to find a stack of paper on your chair with a note attached: "We're reviewing this code at 8:00 a.m. in conference room B." There's no way you can properly examine the work product and associated materials in 15 minutes. If attendees at a review meeting are seeing the product for the first time, they may not understand the intent of the product or its assumptions, background, and context, let alone be able to spot subtle errors. Other symptoms of inadequate preparation are that the work product copies brought to the meeting aren't marked up with questions and comments, and some reviewers don't actively contribute to the discussion.
Solutions. Since about 75% of the defects found during inspections are located during individual preparation, the review's effectiveness is badly hampered by inadequate preparation prior to the meeting. This is why the moderator in an inspection begins the meeting by collecting the preparation times from all participants. If the moderator judges the preparation time to be inadequate (say, less than half the planned meeting time), he or she should reschedule the meeting. Make sure the reviewers receive the materials to be reviewed at least two or three days prior to the scheduled review meeting.
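As a minimal sketch of that rule of thumb (assuming preparation is judged per reviewer against half the planned meeting length; the function name and threshold parameter are illustrative, not part of any standard inspection procedure), the moderator's check might look like this:

```python
# Hypothetical sketch of the moderator's preparation check described above.
# Assumes each reviewer should have prepared at least half the planned meeting time.

def preparation_adequate(prep_hours, planned_meeting_hours, threshold=0.5):
    """Return True if every reviewer reported enough preparation time."""
    minimum = threshold * planned_meeting_hours
    return all(hours >= minimum for hours in prep_hours)

# Example: a two-hour inspection, so each reviewer should have prepared at least one hour.
reported = [1.5, 1.0, 0.25, 2.0]
if not preparation_adequate(reported, planned_meeting_hours=2.0):
    print("Preparation looks inadequate; consider rescheduling the inspection.")
```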
When reviews come along, most people don't want to interrupt their own pressing work to carefully study someone else's product. Try to internalize the fact that the time you spend reviewing a coworker's product will be repaid by the help you'll get when your own work comes up for review. Use the average collected preparation times to help reviewers plan how much time to allocate to this important stage of the review process.
Sin 6: The Wrong People Participate

Symptoms. If the participants in a review do not have appropriate skills and knowledge to find defects, their review contributions are minimal. Participants who attend the meeting only to learn may benefit, but they aren't likely to improve the quality of the product. Management participation in reviews may lead to poor results. If the team feels the manager is counting the bugs found to hold against the developer at performance appraisal time, they may hesitate to raise issues during the discussion that might make their colleague look bad.
Large review teams can also be counterproductive. I once participated in a review (ironically, of a draft peer review process) that involved 14 reviewers. A committee of 14 can't agree to leave a burning room, let alone agree on whether something is a defect and how a sentence should properly be phrased. Large review teams can generate multiple side conversations that do not contribute to the review objectives and slow the pace of progress.
Solutions. Review teams having three to seven participants are generally the most effective. The reviewers should include the author of the work product, the author of any predecessor or specification document for the work product, and anyone who will be a user of the product. For example, a design review should include (at least) the designer, the author of the requirements specification, the implementor, and whoever is responsible for integration testing. On small projects, one person may play all these roles, so ask some of your peers to represent the other perspectives. It's O.K. to include some participants who are there primarily to learn (an important side benefit of software reviews), but focus on people who will spot bugs.
I'm not dogmatic on the issue of management participation. As a group leader, I also wrote software. I needed to have it reviewed (thereby setting an example for the rest of the team), and I was able to contribute usefully to reviews of other team members' products. This is very much a cultural issue dependent upon the mutual respect and attitudes of the team members. A good rule of thumb is that only a first-line manager is permitted in a review, and then only if it is acceptable to the author. Managers should never join in the review "just to see what's going on."
Sin 7: Reviewers Focus on Style, Not Substance

Symptoms. Whenever I see a defect list containing mostly style issues, I'm nervous that substantive errors have been overlooked. When review meetings turn into debates on style and the participants get heated up over indentation, brace positioning, variable scoping, and comments, they aren't spending energy on finding logic errors and missing functionality.
Solutions. Style can be a defect if excessive complexity, obscure variable names, and coding tricks make it hard to understand and maintain the code. This is obviously a value judgment: an expert programmer can understand complex and terse programs more readily than someone with less experience. Control the style distraction by adopting coding standards and standard templates for project documents. These will help make the evaluation of style conformance more objective. Use code reformatting tools to enforce the standards so people can program the way they like, then convert the result to the established group conventions.
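To illustrate the distinction (this fragment is hypothetical and only meant to show the kind of style problem that crosses into a genuine defect), compare a terse, cryptic version with an equivalent one that is easy to review:

```python
# Hard to review: cryptic names and a "clever" one-liner obscure the intent.
def f(d, x):
    return [k for k, v in d.items() if v > x] or None

# Easier to review: identical behavior, but the names say what the code means.
def accounts_over_limit(balances, credit_limit):
    over_limit = [name for name, balance in balances.items() if balance > credit_limit]
    return over_limit or None
```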
Be flexible and realistic about style. It's fine to share your ideas and experiences on programming style during a review, but don't get hung up on minor issues, and don't impose your will on others. Programming styles haven't come to Earth from a font of universal truth, so respect individual differences when it is a matter of preference rather than clarity.
A Little Help from Your Friends

Any software engineering practice can be performed badly, thereby failing to yield the desired benefits. Technical reviews are no different. Many organizations have enjoyed excellent results from their inspection programs, with returns on investment of up to tenfold. Others, though, perceive reviews to be frustrating wastes of time, usually because of the seven deadly sins I mentioned here, not because reviews are inherently a waste of time. By staying alert to these common problems and applying the solutions I've suggested, you can help ensure your technical review activities are a stellar contributor to your software quality program.
Karl Wiegers is a software process engineer at Eastman Kodak Company and the author of the Jolt Productivity Award-winning Creating a Software Engineering Culture (Dorset House, 1996). You can reach him through Software Development magazine.
© 1998 Miller Freeman Inc.