I'm Mike Pope. I live in the Seattle area. I've been a technical writer and editor for over 30 years. I'm interested in software, language, music, movies, books, motorcycles, travel, and ... well, lots of stuff.


  09:58 AM

Andrey Karpov, who analyzes software for defects, has identified what he calls the "last line effect." This happens when people use copy and paste to quickly create a bunch of similar lines of code. He's figured out that mistakes are most often made in the last pasted block of code. He backs up his thesis with hard numbers and with examples taken from real code. He muses:
I heard somewhere that mountain-climbers often fall off at the last few dozens of meters of ascent. Not because they are tired; they are simply too joyful about almost reaching the top—they anticipate the sweet taste of victory, get less attentive, and make some fatal mistake. I guess something similar happens to programmers.
I've certainly made this mistake while writing code. This also made me wonder how often this syndrome is evident in writing or editing. What would this look like? Obviously, there are many pitfalls associated with copying and pasting, but is there an analogue in writing and editing to the last line effect?
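To make the pattern concrete, here's a minimal, hypothetical Python sketch (my own illustration, not one of Karpov's real-world examples) of the kind of copy-paste slip he describes: four similar lines, with the mistake lurking in the last one.

```python
def bounding_box(points):
    """Return (min_x, min_y, max_x, max_y) for a list of (x, y) tuples."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Four near-identical lines, created by copy and paste...
    min_x = min(xs)
    min_y = min(ys)
    max_x = max(xs)
    max_y = max(xs)   # Bug: the last pasted line still says xs, not ys
    return (min_x, min_y, max_x, max_y)
```

The fix is of course `max_y = max(ys)`; the point is how easily the eye slides past the error when the lines all look alike.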


  06:41 PM

As a technical writer, you will frequently find it useful to be on the consuming end of information and to take some lessons away from that experience. I had such an experience today.

I was working with some internal tools and couldn't get things working. I pinged the experts, and one of them sent me back instructions that ran something like this (altered to be suitable for public consumption):

1. Save the attached configuration file.
2. Overwrite the current config file in the X folder.
3. Try the process again.

I saved the file and overwrote the config file. No luck, so I contacted the expert again. I got this response:

Please follow the instructions exactly.

So, I tried again. Still no luck, so I sent a dense response detailing what I'd tried and where it had failed. The expert, I must say, became a little impatient.

Long story short, the process I was trying has two configuration files in two places. I had overwritten the wrong one. The part of step 2 that said "in the X folder" had somehow not sunk in, probably because I was looking right at an actual configuration file and had no reason to imagine that there were two of them. So the "in the X folder" qualification hadn't really registered.

It's arguable of course that it's my own damned fault for not reading the instructions carefully enough. But I've done (and indeed, still do) technical support, and I generally don't blame customers for not being careful enough in reading the docs. If instructions aren't working for people, I try to take away a message that the instructions aren't clear enough. I certainly don't respond to confused customers with the message that they're just not reading instructions carefully enough—especially the instructions that obviously don't seem to be working.

But those are after-the-fact issues. What would have prevented the confusion in the first place is for the expert/writer to have anticipated a possible problem, along these lines:

2. Overwrite the current config file in the X folder. (Note that there are two config files—make sure you overwrite the one in folder X.)

That is, it helps tremendously if the writer can anticipate trouble spots and steer the reader around them. Of course, it would be best if processes didn't have such trouble spots—why are there two files with the same name in different folders?—but there are many cases where things are just going to be a bit confusing, oh well.

It would have saved an hour or more, not to mention aggravation on both sides, if this small issue had been anticipated.

So look out for where users might be confused when reading docs. And if readers tell you that they're confused with your existing docs, take that as your problem, not theirs.



  11:13 PM

I just installed Word 2013 and was disappointed to note that some of the long-standing keyboard shortcuts no longer work. For example, I've been using Alt+V,A for years (decades?) to invoke an ancient menu command to toggle between hiding and showing revision marks. Even when they introduced the ribbon and the menus went away, a lot of those old menu-command shortcuts still worked. And some still do; but this particular one no longer does, darn it.

I spent a little while trying to map keystrokes to the show-revision and hide-revision commands in the Review tab. Either I'm not finding them or (as I believe) there's no longer a single command to toggle show/hide of rev marks in the way I've come to rely on.

So, macro time. Using the macro recorder and some editing, I created the following macro and then mapped Alt+V,A to it. Macros are stored in Normal.dotm, so as long as that remains available I should be good. (Right?) However, I'll have to update Normal.dotm on each machine on which I install Word 2013.

Perhaps there's an easier mapping for this functionality. If this macro thing doesn't work out, I'll investigate further.
Sub ShowOrHideShowRevisions()
    If ActiveWindow.View.RevisionsFilter.Markup = wdRevisionsMarkupNone Then
        ' Revisions are currently hidden, so show them
        With ActiveWindow.View.RevisionsFilter
            .Markup = wdRevisionsMarkupAll
            .View = wdRevisionsViewFinal
        End With
    Else
        ' Revisions are currently showing, so hide them
        With ActiveWindow.View.RevisionsFilter
            .Markup = wdRevisionsMarkupNone
            .View = wdRevisionsViewOriginal
        End With
    End If
End Sub



  06:17 AM

A challenge: you have a conference room and 60 minutes to teach a group of engineers to become better tech writers. What do you tell them?



  11:36 AM

Everyone knows about a herd of cows and a clutter of cats and a murder of crows, right? These are called collective nouns or terms of venery. (The latter, more interesting, term refers to hunting, should you be wondering.) Many such terms are listed here, here, and on Melanie Spiller's site.

For fun the other day, we came up with terms of venery for the many species that can be found in the world of IT. Herewith our list. Can you come up with more?

A compilation of programmers
A unit of testers
A click of QA engineers
A spec of program managers
A package of builders
A deployment of SysOps -or- A distribution of SysOps
A bundle of network engineers
A row of DBAs
An interface of UX designers
A lab of usability testers
A snarl of IT admins
A triage of Helpdesk engineers
A pixel of graphic artists -or- A sketch of graphic artists
A meeting of managers
A retreat of general managers
A scribble of writers -or- A sheaf of writers
A revue of editors (haha) -or- A scrabble of editors
A project of interns
An oversight of auditors
A tweet of tech evangelists
A quarrel of patent lawyers

Contributors: me, David Huntsperger, Peter Delaney, Scott Kralik



  09:02 AM

Imagine that you're a music company in about 1984. For many decades you've been selling vinyl records, and then along comes this newfangled "compact disc" business. It's obvious to your company that this is the future, and your audiophile customers are all excited. But your everyday customers are confused: are you going to stop making records? Are they supposed to replace their enormous record collections with CDs? And what about the whole ecosystem that's grown up around records: record stores, stereo manufacturers, even furniture makers ... what do you tell them?

I've lived through similar scenarios in the software industry multiple times: the company devises a new technology—not just an update to your already successful releases, but a new approach. As with the record company, tho, it's rarely easy to simply pull the plug on your old stuff, since many of your customers are heavily invested in your old technology.

If you're the documentation person under these circumstances, you have a tricky job. If the new technology is sufficiently different, you can create a brand-new documentation set from scratch for the new technology. (The documentation sets for record players and CD players have very little shared information.)

But it's not always that clean a break. Consider a database product where the new technology is an innovative search syntax. Everything else about the database (storage, backup, etc.) is the same; you just have a new way for users to craft their queries. Moreover, the old query syntax still works.

Too often, what ends up happening is that writers add a section to the existing documentation that describes the new technology. This "solves" the problem. Hey, now we have two technologies! We've documented both of them!

But what do your users actually need?
  • All users need to understand that there are two technologies, and why, and how users should choose between them. In your compare-n-contrast, you have to be careful not to trash-talk your old technology (in spite of what your engineers and early adopters probably think); a few years ago, you spent a lot of effort to convince your users how great that technology was.

  • Existing users need to understand what the new technology means for them. Do they have to upgrade? What does it mean for their existing investment? How long can they continue to use the old technology?

  • New users (probably) need to be directed to the new technology. They also need to understand that there's an existing body of knowledge about the old technology (for example, documentation and articles and books and forums) that could mislead them if they're not aware of the different versions.
You can accomplish this easily—well, "easily"—in some sort of introduction or overview. But you also have to think about how to help users who drop into your documentation from unexpected places—say, from a web search. Your existing documentation is of course all about the old technology. The descriptions are about the old stuff; if there are examples or illustrations, they're probably about the old stuff. Existing customers will probably continue to use the old technology and will still need documentation for it, so you can't just rip out the old stuff and replace it with new docs.

You might consider reviewing every page of your existing documentation where the old technology is featured (for example, every page that shows query syntax). Then you have to ask whether you replace the existing examples with new ones, or whether you add corresponding examples of the new syntax. In the latter case, how much explanation do you need in order to make sure readers understand that there are two syntaxes?

As I say, I've lived through this. As of last year, the technology I worked with (ASP.NET) had three distinct approaches to creating websites. We had a heck of a time even crafting the message of how to select between them.

And the idea of visiting each page (page design, database access, deployment, etc.) and updating it for all three technologies—or creating technology-specific versions of each of these stories—was a challenge indeed. (They've since added fourth and fifth technologies.)

The evolution of a product is of course exciting for users, who get new and improved technology to work with. But unless a new technology represents a completely clean break with the old, and unless you can create separate, standalone doc sets for each technology, in some ways the documenter's job can actually be harder than it is for the engineers.



  10:31 AM

The title of this entry does not, as far as I know, reflect an actual book title. But based on something I saw today, maybe it could. Here's an article I saw today on the ArsTechnica site:

Keep it secret, keep it safe: A beginner's guide to Web safety

I was initially interested, because although I am more-or-less conversant with the basics of safe browsing—using wifi safely at a coffee shop, for example—there are certainly other people in our household who might value some tips "for beginners" about how to use the web safely.

Then I actually read the article. Here are a couple of examples of advice for those beginners:
Clicking the browser's padlock icon while visiting Facebook, for example, gives us the most relevant information about the certificate and its encryption algorithms: the certificate has been signed by VeriSign and the connection uses TLS 1.1 with 128-bit RC4 encryption.


If you want to roll your own [VPN] server, you can use free software like OpenVPN (or, for Mac users, the VPN server included in the $20 OS X Server package).
Frankly, I'm not really sure how grateful my wife would be to learn these things.

Obviously, the issue has to do with the term "beginner." It's not actually clear to me who exactly the author had in mind as a beginner, but it's not my wife, or my kids, or a bunch of other people who are perhaps not quite ready to examine the certificate chain for the current session.

Scene 2. The other day I was working on a programming problem and someone handed me a working example in the programming language named Python. I don't, er, speak Python, so I had to set up my computer with the requisite tools. In the process of looking for instructions about this, I ran across an article that included the following gems:
You want to use Python on a Windows 7 machine but you don't know what you're doing. What you do know is that in order to go anywhere and do anything you've got to install packages. Or maybe you don't even know that yet.
The good news is: it's easy.
There is no bad news.
See all that stuff flying by? Forget about it.
I was more than willing to overlook the perhaps too-flippant tone because the article in effect carried out its promise to document the process for (real) beginners.

So. If I see a title that involves the phrase "for beginners," I have a specific idea of what the reader is expected to know (or not know). Perhaps the author of the ArsTechnica article knows something about the audience for articles in that publication such that when he writes "for beginners," he actually means "really technical, but new to this thing." That's quite legitimate, if sometimes a little misleading. (One of the problems I had in finding information about Python "for beginners" is that the assumed starting point for most of the information I found was someone who already knew programming, operating systems (often Linux), tools and technologies (.tar), etc.)

As with any piece of technical writing, you need to have a clear sense when you start of who you're talking to. For a lot of writing, it's not a bad idea to actually lay this out at the beginning of your piece. And if you're going to use a term like "beginners," it seems like you have more obligation than usual to actually indicate what you mean by that.


  11:57 AM

We got a customer comment the other day observing that we had a contradiction in our documentation. In one topic, we note that the maximum size of a particular document type is 128K. In another topic, we note that the maximum is between 2K and 10K (dependent on some technical details).

We investigated. The results were a little surprising: seemingly paradoxically, both topics were technically correct. The 128K limit pertains to a transport limit — it's the largest document that will be accepted for upload. The 2K-10K limit is a business rule that is invoked later when the document is being saved.

It's like a 10-ton truck trundling down a road. Maybe the weight limit on the road is 50 tons. However, if the road crosses a bridge with a weight limit of two tons, that's the effective limit for the whole road.
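The two layers can be sketched in a few lines of Python. The names and numbers here are illustrative, not the product's actual API or limits; the point is that each layer enforces its own maximum, and the limit the customer actually experiences is the smallest one anywhere along the path.

```python
TRANSPORT_LIMIT = 128 * 1024   # largest document the transport will accept
BUSINESS_LIMIT = 10 * 1024     # limit applied later, when the document is saved

def upload(doc: bytes) -> None:
    """First gate: the transport-layer size check."""
    if len(doc) > TRANSPORT_LIMIT:
        raise ValueError("rejected at upload: exceeds transport limit")

def save(doc: bytes) -> None:
    """Second gate: the business-rule size check."""
    if len(doc) > BUSINESS_LIMIT:
        raise ValueError("rejected at save: exceeds business-rule limit")

def effective_limit() -> int:
    # The limit the customer actually hits is the smallest one on the
    # whole path -- the two-ton bridge on the fifty-ton road.
    return min(TRANSPORT_LIMIT, BUSINESS_LIMIT)
```

A 50K document sails through `upload` but fails in `save`, which is exactly why both documented numbers were "correct" and the customer was still confused.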

We contemplated various ways to fix this problem. A complicating factor was that the text about the 128K limit was generated into the documentation automatically. (By a Javadoc-like process, if you're curious.) The particular conundrum was how to explain, yet dismiss, the 128K limit in a way that made sense to the customer, since for the most part there was no practical circumstance under which the clearly documented 128K limit actually came into play.[1]

A lesson (or maybe just observation) is how hard it is to write documentation in a holistic way. It's quite possible that the two topics were created by different writers at different times. Each topic is, as noted, "correct" in a narrow way. It's a real challenge to try to understand the overall customer experience. This is especially true for API/reference documentation, which is focused on a very tiny slice of the whole — a little like writing a dictionary definition and trying to anticipate all the contexts in which people might use a word.

It's a hard problem, but it's one worth trying to solve. In the end, the customer doesn't really care that the 128K limit is "technically correct" or that the topics were written in different contexts, blah-blah. The end result, as we experienced ourselves, is that the customer is confused. And whatever the difficulties of coordinating far-flung pieces of documentation, surely documentation that leaves a customer with worse information than they started with is a big incentive to try.

[1] The limit was set high for a legitimate reason; the only thing I'll say about that is that the 2K to 10K limits are set by business rules, not physical constraints.



  10:47 PM

At work the other day I was working with a list of our products, and I found I kept hunting around in the list for a specific one. Here's how the list was arranged (I left a few out for brevity):

Amazon CloudFront
Amazon CloudWatch
Amazon DynamoDB
Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Elastic MapReduce
Amazon Glacier
Amazon Relational Database Service (Amazon RDS)
Amazon Route 53
Amazon Simple Email Service (Amazon SES)
Amazon Simple Storage Service (Amazon S3)
Amazon Virtual Private Cloud (Amazon VPC)
Amazon Web Services Account Billing Information
Auto Scaling
AWS CloudFormation
AWS Elastic Beanstalk
AWS Identity and Access Management (IAM)
AWS Storage Gateway
AWS Support
Elastic Load Balancing

It's a bit more obvious here than it was in the document I was updating, but you can see that the products are arranged in strict alphabetic order. (You might wonder, as I did, why sometimes it's "Amazon" this and other times it's "AWS" that, but what you see here are the official product names, and there's no messing with that.)

Still, and in spite of this perfectly logical order, "Elastic Load Balancing" at the end felt like it had been tacked on as an afterthought. Likewise "Auto Scaling" felt out of place, and seeing Amazon CloudWatch separated from AWS CloudFormation was odd.

Putting things in alphabetical order has a number of recognized challenges. You need to decide whether you're going to sort case sensitively; how to accommodate spaces and punctuation; how to handle acronyms and initialisms; and so on. (You can explore some of these under Special Cases in the Wikipedia article on Alphabetical Order, or if you happen to have a copy of the Chicago Manual of Style (16th ed), refer to 16.56ff.)

None of the special-case handling, however, addressed the particular situation of our list, which was this: from the perspective of the user looking for a product, the "Amazon" or "AWS" portion of the name is essentially invisible. Users know these products as CloudFront and Glacier and Auto Scaling. (Or in some cases, the products are best known by their initials, like S3 and IAM.)
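One way to implement that kind of ordering is a sort key that skips the branding prefix. This Python sketch is my own assumption about the approach, not the actual code behind the published page:

```python
def sort_key(name: str) -> str:
    """Sort key that ignores the Amazon/AWS branding prefix users skip over."""
    # Longest prefix first, so "Amazon Web Services" isn't caught by "Amazon"
    for prefix in ("Amazon Web Services ", "Amazon ", "AWS "):
        if name.startswith(prefix):
            return name[len(prefix):].lower()
    return name.lower()

products = [
    "Amazon CloudFront",
    "AWS CloudFormation",
    "Elastic Load Balancing",
    "Auto Scaling",
    "Amazon Elastic Compute Cloud (Amazon EC2)",
]

for name in sorted(products, key=sort_key):
    print(name)
```

With this key, Auto Scaling and Elastic Load Balancing land where a user scanning for the product name would expect them, and CloudFormation sits next to CloudFront despite their different prefixes.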

So we've taken a stab at alphabetizing the list in what might be called "user-oriented name order." You can see the result in the published page. I'm actually curious how people like this and whether they'd agree that the order we've come up with makes more sense.



  10:42 AM

The legitimacy of try and in the sense of try to has been debated for a long time, but it's an established usage in informal English:

I'm going to try and be there at five o'clock.
Please try and understand my point of view.

(For a good summary, including OED cites, N-gram stats, corpus search results, and a blessing from Fowler, see the blog The Writing Resource.)

Objections to try and sometimes seem a little forced; for example, Grammar Girl posits an argument from logic: "If you use and, you are separating trying and calling. You're describing two things: trying and calling." She goes on to say that try-and versus try-to may be more of a pet peeve with her.

And yet. I ran across an interesting example today of try and where I had to read the sentence a number of times before I got it:
If you try and lose then it isn't your fault. But if you don't try and we lose, then it's all your fault.
This is from Orson Scott Card's book Ender's Game.

The intent, as I eventually deduced, was "If you try and [you] lose ...". For my first several attempts to read the sentence, I kept parsing it as "If you try to lose ...", which didn't completely make sense. But first readings are stubborn. In other words, the intent is per Grammar Girl's logical parsing (two actions), but I was not reading it that way.

I think some punctuation here might have helped — a comma after try. Or an extra you inserted after try and.

Speaking of try and lose, here's The Most Interesting Man in the World on this topic:

[source: memegenerator]
