

August 04, 2010  |  What to expect from technical review

I had an incident recently where I was having a sort of disagreement with a writer about some content in a document. The writer's trump card, so to speak, was to note that "It went through tech review!" As far as the writer was concerned, it had been reviewed, and the reviewer (or reviewers, I forget if there was more than one) had not indicated the change I was after, and that was that.

This gave me pause. We try to get a technical review of anything substantial we write, and we put a lot of stock in those reviews. Yet I still felt that whatever change I was arguing for was valid. So that got me to thinking about tech reviews and where they fit into the overall scheme of things when it comes to assessing the done-ness of a document. Conclusion: reviews are good (indeed, essential), but they're not the last word.

Why is that, tho? Well, here are some of the reasons I came up with (with help from the folks on my team) as to why a tech review might not be giving you the entire story:
  • TR is often done in a big hurry. It tends to come at a bad time for our reviewers, who have their own pressing deadlines to attend to. Anecdote: For our most recent release, our primary reviewer read something like 10 of our chapters all in one day (Father's Day, in fact). How carefully do you think he considered everything he was reading?

  • In our division, TR is mostly about code. One thing that interests most of our reviewers is code, and they'll usually read that. Descriptions? Background? Step-by-step procedures? Maybe. Even so, reviewers often do not run code or follow steps to see if they work. One of my writers is quite adamant on this point: "Assume they haven't run your code. The burden of testing code is on you."

  • Reviewers focus on what they're interested in. Not surprisingly, individual developers or testers, sometimes PMs, will zero in on the features they're most familiar with. If they don't work with something, they're unlikely to give it a thorough review, and might not even read it. (Sometimes they'll admit this, sometimes not.) If you get only one review, you need to be aware of what that reviewer's concerns are (and aren't) with respect to your document.

  • TR often isn't looking at the big picture. Most reviewers will consider the text a given and will react to specifics in it. It's a pretty rare reviewer who will contemplate the flow or order of information, or even whether a section of a document (or the document itself) should even exist. And of course, few reviewers will sit and think about what's missing in your doc. (Another way to say this is that few reviewers do what's sometimes called a developmental edit of the document they're reading.)

  • Reviewers focus on what's immediately in front of them. Somewhat related to the previous point. Unlike the writer, the reviewer probably doesn't know where any given document fits into the larger plan, and is therefore unlikely to assess the document in a bigger context. For example, a reviewer might simply assume that some concept or technique discussed in a document has been introduced somewhere else and not think to ask "Does the reader already know this?" This is a function of how tech review often occurs -- in pieces, with documentation not necessarily presented for review in the same order that the reader will ultimately see it.

  • One review is just one opinion, as of today. A flippant way to say this is that if you get one review, you get one opinion. If you get two reviews, you have three opinions, and so on. Reviewers don't necessarily agree with each other, often quite dramatically. And even a single reviewer might change their mind based on others' thoughts, new information, your passionate rebuttal, time of day, phase of moon, whatever. (Much like editors, haha.) To be clear, the opinions you're getting are from experts, and are specifically what you're asking for. Still, even for the reviewers, there's a difference between an opinion and a fact.

  • Reviewers make mistakes. Sometimes when you push back on a tech-review comment, the response is "Oops." But to know that, you'd have to push back, innit?

  • Reviewers are not (necessarily) our audience. This is a variation on Homo Logicus -- our reviewers already know tons of stuff about what we're writing, and it's difficult to imagine the state of mind of someone to whom this is all new. For example, there's something slightly absurd about a bunch of lifelong professional programmers opining about what a rank beginner will or won't understand. That's like you and me sitting around arguing about how hard a foreigner might find it to learn English. No, we writerly types are the reader advocates, and we need to take that into account when we process TR comments.

  • You have to know what you're getting from whom. If you're interested in the accuracy of your code, get a review from a tester. If you want to know whether your approach is a best practice, try grabbing a Dev lead. If you want to know whether you're messaging a feature right, grab the lead PM. Or whatever. But you don't want to get these mixed up -- don't expect a tester to tell you whether your document is positioning the product right, and I wouldn't count on a lead PM to run the steps in my procedures. (There are always exceptions, of course.) As such, you need to weight appropriately the feedback you get from different people, based on their roles, and for that matter, on what you already know about their reviewing history, the time they're able to devote, and basically all of the above. Plus ...

  • Some people are good reviewers, and others aren’t. ‘Nuff said.

The takeaway here is that you should not assume that because something has gone through tech review, it must be right. And you especially should not assume that because a reviewer said nothing about a chunk of text or code, the reviewer therefore approves of it. You're not necessarily getting approval; you're just not getting disapproval, based on whatever occurred to the reviewer off the top of their head in the small amount of time they allotted to reviewing your text.

And importantly, as one of my writers summed it up, a tech review is just one type of input. It's an essential one, but other factors go into assessing a document beyond what you get in technical review.

Coming soon (for a broad definition of "soon"): Ok, so how do you get a good tech review? If you already have thots about that, by all means, leave a comment.




Jim Glass   05 Aug 10 - 7:59 AM

Very nice Mike. I passed this on to my team.

Now how do we address the fact that most of our prose never actually gets reviewed by the technical owners? I used a tool called 'Slingshot' back in the day that tracked who actually looked at the material and sometimes provided comments. Over 90% of the material was never reviewed.

My canonical story is about my first job at Microsoft, as a contractor, where the dev, PM, and manager all did iterative "reviews" of my first few chapters covering the hard-drive device drivers. Then the week of the first release I got called in because an FTE writer read it and said it was all wrong. It turns out the dev/PM team had given me the wrong specs. The dev was my brother!

The PMs and devs don't hate us; they just don't have time to actually review our material. Until it becomes a commitment on their performance review, I don't see this changing.



 
Anonymous   05 Aug 10 - 11:17 AM

Nice piece! Another item for the list: PMs, devs, and testers don’t review documentation because by reviewing they would take partial responsibility for it, which they don’t want to do. Why should they? Their performance reviews don’t depend on the quality of the documentation.

There is not a good solution to this problem from the writers' side. The company could begin to attach some accountability for documentation to product teams' performance reviews; this would do more to improve tech review of documentation than any number of bugs written against owners of various technologies.


 
Brian   05 Aug 10 - 1:15 PM

Very interesting post, especially from a documentation point of view. Over here in traditional publishing, I find a lot of the same issues with technical reviewers, so I have a little boilerplate message telling them I want to know two things:
1. Is what's there correct? Run the code, try the steps, etc.
2. Is anything not there that should be?
It's the second one that often trips up reviewers, because they usually haven't thought of it, but it gets interesting responses once I specifically point it out.

On one point, I disagree with you -- I never expect reviewers to do a development edit, because that's my job. I've been trained to do it, I practice it, and I wouldn't expect a reviewer to look at the manuscript the same way I do. (Which should explain why I was so surprised -- but still grateful! -- when you did a development edit on my book.) Within your realm, does that fall under the responsibility of the technical editor?


 
mike   05 Aug 10 - 1:28 PM

Hi, Brian. I'm glad you included some tips for how you solicit TR feedback--a forthcoming post will be about TR strategies & tips-n-tricks.

"Developmental edit" is probably too refined a term for what I mean here. The ideal tech review would, to me, take into account not only what's on the page, but how it's on the page and what's not on the page. This can certainly overlap substantially with what the editor does, but it can also include information that the reviewer is in a better position to judge.

A relatively easy example might be an introductory topic for some new technology. Certain reviewers (the lead PM for that feature, for example) might be in the best position to comment on whether the doc hits the intended audience, where we assume too much or too little about their knowledge, whether the information as presented is likely to be sensible to that audience, and whether we're selling the feature appropriately, so to speak.

The editor probably has opinions about this sort of thing, but a lot of the final answers probably should come from that reviewer. Assuming, as I say, that the reviewer decides to read with these questions in mind.

I should note that we frequently get very good reviews; I don't mean to malign tech reviewers as a class. It's just that, you know, we can't count on getting excellent reviews every time. :-)