About

I'm Mike Pope. I live in the Seattle area. I've been a technical writer and editor for over 35 years. I'm interested in software, language, music, movies, books, motorcycles, travel, and ... well, lots of stuff.

  09:38 PM

I recently had a sort of disagreement with a writer about some content in a document. The writer's trump card, so to speak, was to note that "It went through tech review!" As far as the writer was concerned, the document had been reviewed, the reviewer (or reviewers; I forget whether there was more than one) had not flagged the change I was after, and that was that.

This gave me pause. We try to get a technical review of anything substantial we write, and we put a lot of stock in those reviews. Yet I still felt that whatever change I was arguing for was valid. So that got me to thinking about tech reviews and where they fit into the overall scheme of things when it comes to assessing the done-ness of a document. Conclusion: reviews are good (indeed, essential), but they're not the last word.

Why is that, tho? Well, here are some of the reasons I came up with (with help from the folks on my team) as to why a tech review might not be giving you the entire story:
  • TR is often done in a big hurry. It tends to come at a bad time for our reviewers, who have their own pressing deadlines to attend to. Anecdote: For our most recent release, our primary reviewer read something like 10 of our chapters all in one day (Father's Day, in fact). How carefully do you think he considered everything he was reading?

  • In our division, TR is mostly about code. One thing that interests most of our reviewers is code, and they'll usually read that. Descriptions? Background? Step-by-step procedures? Maybe. Even so, reviewers often do not run code or follow steps to see if they work. One of my writers is quite adamant on this point: "Assume they haven't run your code. The burden of testing code is on you."

  • Reviewers focus on what they're interested in. Not surprisingly, individual developers or testers, sometimes PMs, will zero in on the features they're most familiar with. If they don't work with something, they're unlikely to give it a thorough review, and might not even read it. (Sometimes they'll admit this, sometimes not.) If you get only one review, you need to be aware of what that reviewer's concerns are (and aren't) with respect to your document.

  • TR often isn't looking at the big picture. Most reviewers will consider the text a given and will react to specifics in it. It's a pretty rare reviewer who will contemplate the flow or order of information, or even whether a section of a document (or the document itself) should even exist. And of course, few reviewers will sit and think about what's missing in your doc. (Another way to say this is that few reviewers do what's sometimes called a developmental edit of the document they're reading.)

  • Reviewers focus on what's immediately in front of them. Somewhat related to the previous point. Unlike the writer, the reviewer probably doesn't know where any given document fits into the larger plan, and is therefore unlikely to assess the document in a bigger context. For example, a reviewer might simply assume that some concept or technique discussed in a document has been introduced somewhere else and not think to ask "Does the reader already know this?" This is a function of how tech review often occurs -- in pieces, with documentation not necessarily presented for review in the same order that the reader will ultimately see it.

  • One review is just one opinion, as of today. A flippant way to say this is that if you get one review, you get one opinion. If you get two reviews, you have three opinions, and so on. Reviewers don't necessarily agree with one another, and the disagreements can be dramatic. And even a single reviewer might change their mind based on others' thoughts, new information, your passionate rebuttal, time of day, phase of moon, whatever. (Much like editors, haha.) To be clear, the opinions you're getting are from experts, and are specifically what you're asking for. Still, even for the reviewers, there's a difference between an opinion and a fact.

  • Reviewers make mistakes. Sometimes when you push back on a tech-review comment, the response is "Oops." But to know that, you'd have to push back, innit?

  • Reviewers are not (necessarily) our audience. This is a variation on Homo Logicus -- our reviewers already know tons of stuff about what we're writing, and it's difficult to imagine the state of mind of someone to whom this is all new. For example, there's something slightly absurd about a bunch of lifelong professional programmers opining about what a rank beginner will or won't understand. That's like you and me sitting around arguing about how hard a foreigner might find it to learn English. No, we writerly types are the reader advocates, and we need to take that into account when we process TR comments.

  • You have to know what you're getting from whom. If you're interested in the accuracy of your code, get a review from a tester. If you want to know whether your approach is a best practice, try grabbing a Dev lead. If you want to know whether you're messaging a feature right, grab the lead PM. Or whatever. But you don't want to get these mixed up -- don't expect a tester to tell you whether your document is positioning the product right, and don't count on a lead PM to run the steps in your procedures. (There are always exceptions, of course.) As such, you need to weight appropriately the feedback you get from different people, based on their roles, and for that matter, on what you already know about their reviewing history, the time they're able to devote, and basically all of the above. Plus ...

  • Some people are good reviewers, and others aren’t. ‘Nuff said.
The takeaway here is that you should not think that because something has gone through tech review, it must be right. And you must especially not think that because a reviewer said nothing about a chunk of text or code, the reviewer must therefore approve of it. You're not necessarily getting approval; you're just not getting disapproval, based on what occurred to the reviewer off the top of their head in the small amount of time they allotted to reviewing your text.

And importantly, as one of my writers summed it up, a tech review is just one type of input. It's an essential one, but there are other factors that go into documentation review beyond what you get in technical review.

Coming soon (for a broad definition of "soon"): Ok, so how do you get a good tech review? If you already have thots about that, by all means, leave a comment.
