On Writing, Tech, and Other Loquacities

The collected works of Lana Brindley: writer, speaker, blogger



OpenStack Newton Summit – Docs Wrapup

Everything is bigger in Texas, including the conferences!


This Summit was a homecoming of sorts. OpenStack started in Austin with 750 people, and returned six years and twelve conferences later with 7500 people. Even the baristas in the downtown coffee shops noticed us the second time around.

For documentation, this conference was bigger than usual as well. We had a total of eight sessions, in addition to the contributor meetup on the last day, which is more docs sessions than we have ever had before.

And we had a lot to talk about! The biggest thing on our minds was the future of the OpenStack Installation Guide. The Big Tent has changed the way that projects go about joining the OpenStack ecosystem, and with the Foundation having an increased focus on ensuring new projects have sufficient documentation, we needed to change our approach to documenting the installation of an OpenStack cloud. There is no ‘right’ way to install a cloud any more, and there is certainly no ‘right’ set of components you should be installing when you do it. But with a small documentation team, and a seemingly endless parade of new components requiring documentation, we were faced with a big technical challenge, where everyone had some kind of skin in the game.

Despite some differences of opinion, the session itself was extremely productive, and we came away with a solid set of deliverables for Newton. First of all, we’re going to create the infrastructure to allow projects to write their own installation docs in their repos, and then publish them seamlessly to the docs.openstack.org front page. This means that projects have responsibility for their own docs, but the docs team will provide assistance in the form of templates and infrastructure support to ensure that all projects are treated as first class citizens. Secondly, the existing Installation Guide will change focus to become more of an installation tutorial, giving people a highly opinionated and completely manual installation method to learn the ropes, but not to install a production cloud. Thanks to the OpenStack User Survey, we can safely say that most production clouds are installed using some kind of automated tool, so manual installation instructions are useful as a training tool, but not in a real-world scenario.

With the big question more or less settled, we got on to the fairly long laundry list of other things that needed to be done, which all ended up focusing mostly on streamlining some of our processes, being clearer about the way we operate, consolidating guides that had (for obscure historical reasons) been in their own repos into the main one again, and general editing and tidying up. A full list of the goals can be seen here: Newton Docs Deliverables. And, for historical interest, here’s the whiteboard from the Summit session:

[Image: whiteboard from the Summit docs session]

During the Mitaka release, docs had a focus on Manageability, aiming to work more effectively and efficiently, with a focus on collaboration. For Newton, while manageability themes are still very much present, the focus is more on Scalability, and making our documentation efforts scale out to represent a much greater proportion of products, contributors, operators, and users. From empowering projects to write their own documentation with our support, to making our processes simpler to find and understand, to ensuring our documentation is as accurate, up-to-date, and effective as possible, it’s going to be an exciting cycle for docs!

I leave you with one of my favourite Texan big things: a bathtub margarita!

[Image: bathtub margarita]


Content as a driver of change – Then & Now

Humans have always written things down.

Those of you reading this post, with your laptops, and mobile phones, and iPads, and vanity email accounts, and your single sourced, content-reuse, DITA-compatible Docbook XML toolchains, with all your fancy Javascript elements and mind-boggling CSS overlays. You are just the latest in a long line of human beings who have been doing the same thing for millennia. Albeit with different tools.

[Image: panel of figures with boomerangs, Aboriginal rock art]

The original owners of the land we are standing on today are the Wurundjeri people. Australian indigenous art is the oldest unbroken tradition of art in the world. These weren't just the pre-history version of hanging a Monet print on your loungeroom wall. Indigenous art exists on all manner of things: paintings on leaves, wood and rock carvings, sculptures, and of course cave drawings. This art gave early Australians a way to record the things that mattered most to them in their lives: many works depict scenes of hunts or special ceremonies, and in the case of Australian art, many include megafauna and other extinct species, and even the arrival of European ships. More than a record of events, though, they were probably also a method of teaching. Each indigenous tribe had its own mythology (collectively known in English as ‘the Dreaming’), which used stories to convey morals or other educational information. Most children who grew up in Australia would be familiar with the Dreamtime story about Tiddalik the Frog, a fable about greed and about finding humour in bad situations. Indigenous art and the stories that lie behind it are really just an early technical manual for life itself, especially in a world where living for any length of time could be quite difficult.

[Image: illustration from Vitruvius' De architectura]

Who here remembers the story of Archimedes and his bath? It's a demonstration of how Archimedes used water displacement to measure the density of an object (in this case, the king's crown). Of course, the bit we all remember of the story is that Archimedes, having made his discovery in the bath, went running naked through the streets of Syracuse, crying “Eureka! I have found it!”. This story comes to us from one of the oldest surviving technical manuals in existence, the “De architectura” by Marcus Vitruvius Pollio, which was published in around 15 BC. Of course, the Ancient Greeks and Romans were well known for their literature, their scholars, their philosophers, and perhaps above all, their library. The Royal Library of Alexandria in Egypt was the largest repository of knowledge in the world between the 3rd century BC and 30 BC. The famous fire that destroyed it was probably set by Julius Caesar himself in 48 BC, but the library continued in some capacity until the Roman Emperor Aurelian destroyed what remained in about 270 AD. This was of course a massive blow to literature, but it was also an incredible loss of technical data. Thankfully, the Ancients managed to keep going even after the library was destroyed, and we now have surviving copies of wonderful pieces like Pliny's Naturalis Historia, which is essentially the world's very first natural history encyclopaedia, and which set the stage for many more technical manuals to come.

[Image: Gutenberg Bible, Lenox copy, New York Public Library]

Jumping over to Europe, Gutenberg did his thing with the printing press in the mid 1400s, but printed books were still a terrifically rare and expensive thing until well into the 1500s and 1600s. Up until that period, if you were a fairly ordinary person in a fairly ordinary European town, you were probably aware of the existence of almost exactly one book: the bible that your local clergy had sitting on a plinth in your church. You probably couldn't read yourself, or if you could, probably not well enough to read and understand a book written predominantly in a particularly stuffy version of Latin, and even if you could read that well, you wouldn't be allowed to touch it. No, the bible was the word of God, and as such could only be read and interpreted by men of the cloth. They didn't really want people going off and reading the Bible on their own and drawing their own conclusions about things. Of course, this got really interesting once the Reformation got underway in the mid 1500s, and people started to read the Bible for themselves. In fact, for a little while there in England, Henry VIII decided that ordinary folk (and all women) were banned from reading the Bible. All this running around reading things and learning by everyday people was just a little too much for him to bear, especially when they started disagreeing with him.

[Image: Marianne Stokes, The Frog Prince]

Still in Europe, with better access to mass printing, publishing written versions of early verbal history became the thing to do. We all know the Brothers Grimm were writing fairy tales in German in the early 1800s, but they certainly weren't the first to try and document the oral history of early Europeans. Charles Perrault is considered the original author of many of the Disney favourites, including Cinderella, Little Red Riding Hood, and Sleeping Beauty, and he was writing in French over a century before the Grimms, in the late 1600s. But even he was just writing down stories he'd heard from others. My favourite version of Cinderella comes from Giambattista Basile, published in Neapolitan in 1634, some years after he died. These stories, gruesome as they were before Disney got a hold of them, were intended in many cases to be fables for children, with a moral story, but were also used as cautionary tales for adults. Basile's version of Cinderella warns husbands of the horrors of not being picky enough about a second or third wife, gives a general warning to the household about choosing housekeeping staff carefully, a warning to parents about treating children fairly, and a warning to young women about being proud. And that's before we get to the bit Disney likes: “if you're a good person, good things will happen to you”. Some versions of the story also slam home the opposing moral: “if you're a bad person, bad things will happen to you”, with the step-sisters either mutilating their own feet to fit the slipper, having their eyes pecked out by birds at Cinderella's wedding, or some equally terrible combination. As for other horrifying fairy tales, anyone who has read anything by Hans Christian Andersen will know that they often got worse before they got better. There's a reason Disney never took on “The Little Match Girl”. For a long time, what we now know as fairy tales were the easiest and most entertaining way for a largely illiterate population to record and share moral stories and warnings.

[Image: geisha and maiko]

A ribbon that runs through all of these is the idea of the master and apprentice. These types of relationships began in Europe in the 1300s, and were a way for a tradesperson to get cheap labour, while a young apprentice got a bed to sleep on, food to eat, and the hope of a trade later on. This system was used throughout England and Europe for all skilled trades: from seamstresses and blacksmiths, to knights with their squires. However, the general principles of apprenticeships exist throughout the world, with one of the earliest examples being the idea of a maiko, or trainee geisha. Geisha have existed in Japan since around 700, and still take in maiko to this day. While this isn't written knowledge, it is an important footnote when we're discussing the history of content, as this was the main way that specialised technical knowledge was handed down.

[Image: James Campbell, News from My Lad]

Of course, a young apprentice, wishing to remember all the things they had learned, might be inclined to write them down. By the time the Industrial Revolution was in full swing, paper and books had become affordable, schooling was more available to children throughout Europe, and literacy was becoming much more widespread, especially among those bright young apprentices who left home to seek their fortunes. And while young people have written home to their families since ancient times, letter writing really hit its stride around the turn of the century, when it became not just a way to record their days and connect with their families, but also a way to explore political and religious matters, and to express emotions: poison pen letters, love letters, and obituaries are all well represented. Another form of writing, rather more like the manuals we know today, is the recipe book. Many household cooks would enshrine their recipes in writing, to be handed down to the next generation. I regularly bake a family choc chip biscuit recipe that has been handed down mother to daughter for at least five generations, and possibly quite a few more than that.

But enough history. The older writers in the audience will probably remember most of these more recent forms of technical communication. Some of the more unfortunate among you may still be working with some of them. In that case, I’m sorry.

[Image: a shelf of O'Reilly books]

Printed books are pretty much all of our yesterdays. In some ways, it still feels as though you're not a REAL writer until you've got your name on the outside of an actual book, made out of dead tree, and sent from some printer. I chose a picture of O'Reilly books on purpose, as OpenStack released yet another of our manuals as an O'Reilly dead tree version last year, although we have no immediate plans to repeat that in a hurry. Personally, I'm part of the problem here. I love having dead tree reference books, especially for things like Style Guides, which are somehow easier to have sitting on my desk as I write, rather than relying on an internet search (which can, for me at least, be very distracting. Hello, Twitter!). As for writing them, though? No, I love the idea of being able to catch and fix errors even after publication. Nevertheless, printed books, especially technical manuals, are our history, our present and, to some extent at least, probably also our future.


A close cousin of the printed manual, whitepapers are caught somewhere between marketing material and technical documentation. In digital form, they are probably not going to go away any time soon, but the printed whitepaper has almost certainly been confined to the recycling bin these days. My very first piece of technical writing was a white paper. I had a Marketing undergraduate degree and half an MBA, so it was a fairly logical piece of work for me to be doing at the time. I enjoyed it immensely, and immediately set out to become the whitepaper expert, intending to build a career around it. Thank goodness I discovered technical manuals in the meantime, and was saved from a life of writing whitepapers!

[Image: screenshot of online documentation with a table of contents]

And, finally in the ‘recent’ category, I have a screenshot from my very own project. This is, for all intents and purposes, an online version of a printed ‘book’. It has a table of contents down the side, divided into chapters and sections, and it’s designed to be read from beginning to end: simple concepts at the beginning, more complicated procedures as you move through, with reference information (tables of data, contact details, and a glossary) at the end.

These have all been great methods of getting information out there, but they are all destined to become as archaic as the fairy tales and the cave paintings we discussed earlier. Let's take a look at the things we're doing a little differently today that will drive the way we revolutionise and improve content management in the future.

[Image: webcast]

First of all, I want to briefly touch on MOOCs. These are the future of face-to-face training courses. MOOCs not only allow people all around the world to study when and where they choose, but they also allow institutions to create online tools that mimic real-world scenarios, and allow students to learn real skills in a safe environment. This is especially great for the tech industry, where students can work on realistic IT setups that they might not be able to recreate in their own environments, but it also works well for teaching other knowledge work skills such as customer service and financial skills.

[Image: DITA finch]

The main thing that, I think, changed the way we looked at the information we were creating was DITA. Of course, DITA isn't new. It was named in 2001, and formalised in 2005, but various groups have been working on data mapping and the like since the 60s and 70s, and it became especially popular in the 90s, with the publication of JoAnn Hackos's book ‘Managing Your Documentation Project’ (and later ‘Information Development’), a book probably most of us have on our shelves, and to which I (at least) still refer regularly. DITA was really the first formal, open standard that let us consistently and accurately categorise data into formal types. And it was simple enough that we could all use it, remember it, and above all teach it to others easily. Even if you're not using a specific DITA tool, the general principles of DITA–splitting content into one of only three data types–could be used to underpin any tooling system.
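
To make those three types concrete, here is a minimal sketch of what they can look like in DITA markup. The element names are standard DITA; the subject matter (a backup tool) is invented purely for illustration, and each topic would normally live in its own file:

  <!-- Concept: answers "What do I need to know?" -->
  <concept id="backup_overview">
    <title>About backups</title>
    <conbody>
      <p>A backup is a copy of your data, stored somewhere safe.</p>
    </conbody>
  </concept>

  <!-- Task: answers "What do I need to do?" -->
  <task id="running_a_backup">
    <title>Running a backup</title>
    <taskbody>
      <steps>
        <step><cmd>Open the backup tool.</cmd></step>
        <step><cmd>Click <uicontrol>Back up now</uicontrol>.</cmd></step>
      </steps>
    </taskbody>
  </task>

  <!-- Reference: the supporting detail -->
  <reference id="backup_options">
    <title>Backup options</title>
    <refbody>
      <section><p>Full backups copy everything; incremental backups copy only what has changed.</p></section>
    </refbody>
  </reference>

Whatever toolchain eventually renders them, each topic stands on its own and can be slotted into any deliverable that needs it.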

Of course, the main driving principle behind DITA (besides the categorisation) is content reuse and single sourcing. This is another key component of how we're changing the way we look at content. It's not about a beginning and an end any more. With this idea, we walked away from the age-old notion of delivering a story, and moved towards this critical period of considering what information is required where, and when. This was important mostly because we were actually starting to consider how people consume information and learn difficult concepts. We no longer assumed that information we gave to people in the beginning of a book stuck with them as they moved through the rest of the content. Sometimes, learners needed to go over information again and again before they actually learned it and could apply that information to later, more complicated, tasks. And, being the inherently lazy writers that we are, we didn't want to retype that every time. So single sourcing and content reuse were naturally very easy for us to adopt.

And that leads me to perhaps my favourite topic right now: every page is page one. This is a model designed by Mark Baker, and while his model is certainly not the only one out there, it is one of the best developed. The general idea behind this is that no piece of content is more or less important than any other. It's not quite DITA, in that a ‘page’ in EPPO terms is much bigger than a ‘topic’ in DITA terms. The best example comes from Baker himself, where he refers to a recipe. A recipe contains, in DITA terms, a concept (some information that describes what you're actually creating, maybe some background on where the recipe has come from, and the types of ingredients that you need), followed by a procedure (the actual steps of the recipe), and finished with reference information (serving suggestions, maybe information on converting measurements, or ingredient substitutions). In EPPO, the entire recipe is the ‘page’: it contains everything you need to be able to perform the task, including all that concept and reference info. One of the best ways to think about EPPO is in terms of a Wikipedia page: there are links to further information if you need it (and I'm sure all of us here have gotten sidetracked by clicking those links in a Wikipedia article!), but the page contains all the specific information about a particular topic. There is no beginning to Wikipedia, and there is most certainly no end.

[Image: reading on a Kindle]

So this leads me to the big question: what does the future hold for content? I think there are a few main themes we can tease out of our little journey through documentation:
  • The internet is making things possible that never were before.
  • Control over content is shifting from those producing it to those consuming it.
  • Consumers are used to being able to search vast resources for content, and filtering those results themselves. They don't want us to tell them what they need to know.

Since well before the birth of Christ, in one form or another, we’ve been writing stories. Now the internet allows people to create their own stories, not just have one told to them. In many ways, this shows a maturation in human development: we’re no longer willing to receive whatever is fed to us, we want to create our own realities, and we have the tools to be able to do that.

But that is a massive challenge–and (I would argue) an opportunity–for technical writers. We get to break new ground, and thankfully we've been working on the building blocks of this type of communication for a few decades now. The challenge now is to start delivering documentation in a completely new way, without leaving our organisations, our management, or our more stubborn clients behind. Nobody said breaking new ground would not require effort, or determination. As we shed old ideas, old processes, old technologies, and old systems, there will be people who decry change, and impede our progress. But even if you only manage to implement a small piece of your grand vision, even if all you ever get to do is plant a seed of an idea in someone's head that maybe–just maybe–there's a different way to do things, then you have succeeded. After all, every one of the pieces of content I have mentioned here had its detractors, from everyday ‘concerned citizens’, right up to royalty, and the literati.


I mentioned Archimedes earlier, but now I would like to pick a different quote of his: give me a lever and a firm place to stand, and I shall move the world.

Right now, it seems to me that where we could go next is almost infinite. People have always created and consumed content. As long as we continue to put the information out there, and give people the tools to find it, they will continue to do so. We are not at the end of a journey, nor at the beginning of one. We are merely at a step along a very long road. Let's find out where it leads us.


References: https://docs.google.com/document/d/10meNxWpeiyYprcQjOFIBZJlm3xEiWgarN5GVk1uhJOM/edit?usp=sharing

This was originally presented as a keynote at the Australian Technical Writers’ Association Conference in Melbourne on 23 October, 2015. No video recording exists of this talk.


Writing Effective Procedures

Writing procedures can be much more difficult than you'd think. We see procedures everywhere, so it's natural to think that we should be able to write one without too much trouble. For that reason, I wanted to take you through some terrible real-life procedures. This is at least partly so we can all have a chuckle at other people's mistakes, and feel a little bit better about ourselves. But it's also because it's a lot easier to find examples of bad procedures than good ones.

With that end in mind, I went through my junk drawer, and pulled out one or two manuals that I had lying around, and I’m going to use them as examples of what not to do as we go along.

The first thing you need to look at is whether you’re documenting a process or a procedure. It’s easy to use these terms interchangeably, but they actually mean different things. The main thing to remember is that a process can contain many procedures. A process gives an overview of tasks: you might need to install the package, configure the package, and then use the package. Overall, that’s a process. Each of those things, though, is a procedure. Procedures are instructions for doing something.

[Image: numbered instructions from a hand-held game console manual]

Here's an example from the manual of a certain hand-held computer game. As you can see, the instructions for using the stylus are … step 5? Every procedure in this book is numbered. What's happened here is that each procedure in the process has been numbered, rather than each step in a procedure.

So the next thing to worry about is whether you should be using bullets or numbers. This one is a really simple test: is the order important? If the order is important, use numbers. If it’s not, use bullets. Oddly, though, we get this one wrong all the time …

[Image: numbered safety and setup instructions]

These ones should all be bullets. You don’t need to operate the product from a power source before you remove the unit from the packaging.

[Image: page from a line trimmer manual]

Let's try this one together: most of these should be numbered; the text even tells us that. The ones on the left under “Cutting Tips” are bullets: the order isn't important, it's just a list of tips. What about at the top under “Starting and Stopping the Trimmer”? This one probably doesn't matter. I'd be inclined to use numbers, though, mostly because you can't stop the trimmer unless you've already started it.

[Image: manual page mixing bullets, numbers, and lettered lists]

And just another one, because it's so easy: the bullets in red are fine, but then we go to numbers in the purple, and then for a little variety we throw in some upper-case letters in green. Bullets would have been fine for all of these.

So the next thing to worry about is whether you're describing a concept or a task. A concept is a description: it answers the question “What do I need to know?”. A task is an action: it answers the question “What do I need to do?”. As writers, it's much easier for us to think about things rather than tasks. Users think about tasks, though, not things. Remember the old adage about not needing a drill, but a hole? That's the essence of this point.

[Image: manual page mixing concepts and tasks]

This one just has so much wrong with it that it's hard to know where to start. Considering we're talking about concepts and tasks, though, let's start with pulling those out. I've marked the concepts in blue, and the tasks in purple. To add insult to injury, we also have numbers where we should have bullets (in red), because this really is such a hodge-podge of information that there's no way the order is important. Just to round things off, we also have a typo, and a vaguely insulting term about our children (in yellow).

But looking at that brings me nicely to the next point, which is about the level of detail. Make sure you don't suddenly change depth in the middle of your procedure. If you find yourself doing this, you might actually need to write more than one procedure, or consider whether you're actually writing a process. This one is best explained by example:

[Image: instructions from a boot manual]

This certainly isn't the worst example I could have picked, but it's interesting all the same: a few of the steps here go into detail about some extra function that your product may or may not have (in yellow), while others are as simple as “open the velcro strap” (in blue). We also have process/procedure issues here, with procedures being numbered in order, and steps getting lowercase letters (in red). This is further confused by the photo references typed in red, and by both angle brackets *and* square brackets being used. We also have a few stray bullets in one step. And having said all that, I'll remind you that this is for a pair of boots. Admittedly, slightly more complicated boots than you're wearing today, probably, but they're just boots in the end. Also, I'm more than a little disturbed by the idea of “closure and locking of the foot” (in green).

Everyone knows what anthropomorphism is, right? Would someone like to explain it? Yep, it's applying human qualities to non-human things or animals. We do this a lot, especially to animals, but we also tend to do it to computers.

I went online to find these ones, since I didn’t have any good examples in my stack of manuals. It seems to be something we do almost exclusively to computers rather than appliances, but we *really* do it a lot.

[Image: anthropomorphism example from a software manual]

I’ll give you a pro tip: computers don’t actually *think*. They might display things, they might take a while to process commands, but they definitely do not think.

Have to say, though, that going through manuals looking for anthropomorphism does make this one sound slightly creepier than the author intended …

[Image: another anthropomorphism example from a manual]

Which brings me to one of my favourite words, and it should be one of your favourites too: parallelism. When you’re writing fiction, you don’t want every paragraph or sentence to start with “Then”. When you’re writing procedures, though, it’s a good thing to have each step start with “click” or “type” or something like that. When you mix it up, it might sound more interesting, but it just becomes confusing. When faced with two statements that seem to be saying different things, users often think you want them to be doing something different. Every step should start with an action, and the same action should use the same verb. Use “click” for a mouse click, “type” for typing on the keyboard, “press” for a hardware button, etc.

[Image: instructions from a power tool manual]

This manual almost gets it completely right. Three procedures here all need to start with the same three steps. But in one procedure, they write it using different terms. Is “tilting the motor head back” a different action to “raising the motor head”?

So, finally some takeaways:

The main elements of a procedure are (sketched in Docbook after the list):

  • Main heading (‘ing’ verb)
  • Concept
  • Before you begin
  • Warnings
  • Procedure sub-heading (infinitive ‘to’ verb)
  • Numbered steps
  • Reference info
  • Related topics
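
If you happen to write in Docbook (as we do), that skeleton might look something like the following. The element names are standard Docbook; the product, IDs, and wording are invented purely for illustration:

  <section id="configuring-the-widget">
    <title>Configuring the Widget</title>                     <!-- main heading: 'ing' verb -->
    <para>The widget controls how output is formatted.</para> <!-- concept -->
    <para>Before you begin, back up your existing configuration.</para>
    <warning>
      <para>Changing these settings restarts the service.</para>
    </warning>
    <procedure>
      <title>To configure the widget</title>                  <!-- sub-heading: infinitive 'to' verb -->
      <step><para>Open the configuration file.</para></step>
      <step><para>Set the output format.</para></step>
      <step><para>Save the file and restart the service.</para></step>
    </procedure>
    <para>For the full list of options, see <xref linkend="widget-reference"/>.</para> <!-- reference info and related topics -->
  </section>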

And the things you really need to remember when writing:

  • Mouse or keyboard, GUI or CLI? Stick to it!
  • Verb (or location) first
  • Active voice
  • Give instructions, not suggestions
  • Complete sentences
  • Plain English

I’ve also created a handout with these for you to print and hang up somewhere, which you can download here.


This article was originally given as a public tech talk at Red Hat Brisbane, in September 2012.


Open Source Documentation in Four Easy Steps (and one slightly more difficult one)

At Red Hat, we have a content services department that is about sixty people strong. Even though the department is pretty big these days, back when I started with the company, we were still trying to work out the best way to run a successful enterprise-level documentation team. What that means is that I have been involved in some of the big discussions we have had over time about what processes we needed to put in place to produce the massive amounts of documentation we required as our product offerings grew. As a department, we grew very big, very fast, and our processes needed to be flexible enough to accommodate the large number of new hires we had, and still have, coming in, but robust enough to be valuable and reliable. They also needed to fit in well with the engineering practices in place in the company, and the tools that our development teams use and are familiar with. Of course, the other really important factor was that we had to be open. We wanted to use completely open tools to produce our docs, but we also needed to be able to work with community teams, such as the Fedora group.

Like many documentation groups, at Red Hat we use a five-phase waterfall model to produce documentation. It's based on the ever-popular JoAnn Hackos method: starting with planning, then the content specification, then writing and editing, translation and production, and finally a retrospective review. At the moment, we're at a place where our development teams are increasingly using Agile-style development models to produce software, and that means the pressure has been on us to develop in a less rigid way than the old waterfall model has been allowing us to do. Also, it's no secret that the online world is changing, and people now expect to be able to interact with information at a much deeper level than ever before. They don't want to be presented with static, hard-copy books any more. They want dynamic, interactive, usable, and above all useful documentation.

In order to be able to work out what kind of model we needed to use, we needed to go back to basics. All technology is about solving problems. Back when we were sitting around in caves, we had a problem: there was all this food running around outside, but we didn’t have a way to get it to stop running around, so we invented a club and solved the problem. Since then, we’ve used technology to solve all sorts of problems: horses were sometimes problematic to control, and they didn’t go very fast, so we invented cars. The hard wheels used on early cars weren’t very comfortable, and when they broke they really broke, so we invented pneumatic tyres. We also had problems being able to see in the dark so we invented electric light, being able to go to the toilet when it was raining or cold so we invented indoor plumbing, being able to send messages to people on the other side of the country so we invented the telephone, or on the other side of the world so we invented email.

Even the really technological things that we find ourselves documenting now are all solutions to problems. One of the first things you need to be aware of when you're writing documentation is what problem your users have. If you can't describe the problem in one or two sentences, then you don't understand it well enough, and you need to keep researching. Because if you push on regardless, all you're going to end up with is hollow marketing spin. That's how we end up with documentation that talks about “leveraging synergies”: words that sound great, but have no meaning.

So at Red Hat we came up with a fairly simple model, and that is that documentation can be boiled down to three things:

  • Describing the problem
  • Solving the problem
  • Giving any additional information

Anyone who has done any work with DITA would understand that what I’m really talking about here is:

  • Concept
  • Task
  • Reference

 

So we've more or less said that DITA is where we need to go next. But we didn't want to completely restructure the tools we were using. We have a fairly large investment of people and training in our tools. The main tool we use is Publican, which was developed by an engineer in our Brisbane office. It uses Docbook XML and gives us a command line interface that we can use to create new blank books and apply corporate formatting, and it integrates into our internal packaging system so we can create all the different formats for our books: HTML, PDF, and ePub on the website, as well as RPM packages and man pages to ship with the software. In short, we combine Publican with SVN to give us a complete CMS.

We looked at DITA and DITA-OT, the DITA Open Toolkit. We realized two things: first of all, it would take a significant amount of work for us to bring an open DITA toolchain to the level of maturity and system integration of our existing Docbook toolchain. Secondly, we wouldn't get the really significant benefits of topic-based authoring without a Component Content Management System – a CMS that manages content at a very granular level. Putting those two things together made it clear that if we changed to DITA all in one hit, it would take us significant time and energy just to get back to where we already were with a mature, complete, open source toolchain. So we decided to take an evolutionary, rather than revolutionary, approach. It's a much more open source approach: re-purpose something that you already have, add a script here, a small command-line tool there, release early, release often, and let the user community guide the development, rather than trying to design and implement some grand system in a distant (and expensive) future.

What we needed was something that worked in a similar manner to DITA, gave us content re-use and all that good stuff, but that would work with our existing Docbook XML and Publican tools. The first thing we did was to start creating topics in Docbook, using Docbook syntax, and a command line tool that we called the “Topic Tool”. This was a really simple command line tool that allowed us to write XML snippets (or ‘topics’), and save them in SVN. We used an extensible template model, where the topic tool retrieves a Docbook template from a central repository to match the topic type you specify. That way we can create new topic types, and even modify the Docbook syntax of existing topic types, without changing the tool on users’ machines. That was an important decision, and a major part of the evolutionary “Release, Review, Refine” approach we wanted to use. Over time we did change the Docbook syntax of the basic topic types and create new topic types, validating the open source maxim “plan to throw the first one away”.
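
To give you a feel for it, here's a hypothetical example of the kind of snippet the Topic Tool manages. The real templates carry more metadata and have changed over time, but the idea is a small, self-contained Docbook fragment with a single subject (the file name and content here are invented):

  <!-- topics/attach_storage_domain.xml: one subject, one topic -->
  <section id="topic-attach-storage-domain">
    <title>Attaching a Storage Domain</title>
    <procedure>
      <step><para>Click the <guilabel>Storage</guilabel> tab.</para></step>
      <step><para>Select the storage domain to attach.</para></step>
      <step><para>Click <guibutton>Attach</guibutton>.</para></step>
    </procedure>
  </section>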

The basic workflow with the Topic Tool is like this: you tell the tool which topic type you want, and it will then download the template and prefill some information for you. You can then edit the topic in a text editor, and import it back into the repository. It's then possible to view your topic from the repo directly, which means anyone can now see it and use it. You can then include those snippets in any book you want using an xi:include, build the book as normal with Publican, and voila! you have a book with content reuse. So that was pretty awesome, and if you read any of our Virtualisation documentation you'll probably not know it, but that's all based on topics and maintained using the topic tool.
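
In Docbook terms, pulling a topic into a book is just an XInclude. A chapter file might look something like this (the file names are hypothetical; the first is the snippet from above):

  <!-- A chapter in the book: pulls in shared topic snippets by reference -->
  <chapter xmlns:xi="http://www.w3.org/2001/XInclude" id="chap-storage">
    <title>Storage</title>
    <xi:include href="topics/attach_storage_domain.xml"/>
    <xi:include href="topics/detach_storage_domain.xml"/>
  </chapter>

Publican then builds the book as usual (something along the lines of publican build --formats=html,pdf --langs=en-US), and the same topic file can be pulled into as many books as need it without ever being copied.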

Of course, once we got to about 300 topics in the topic tool, we started to notice that we had another problem: we were having trouble locating topics within the repository. This made us realise that what we needed was a better way to organise it all, so we wrapped a neat interface around the Topic Tool using OpenGrok. OpenGrok is designed for software engineers to search source code repositories, so it worked well for what we were trying to do. This is where the open source ecosystem came into its own all over again – there are a million off-the-shelf components and projects that you can choose from to build your own system. In the end we had a web-based search tool that was pretty basic, but did the job.

Content reuse is an obvious application of topic-based authoring, but by this stage, we'd started to realise something even more exciting. Our definition of a topic is a unit of information with a single subject – that means that it talks about one thing, and one thing only – and that has a single information role: that is, it's a concept, a task, or a reference. If we gave three topics – a concept, a task, and a reference – to a robot, along with a rule describing the “explain, answer, extra info” pattern, and some kind of graphical template, that robot could assemble those topics into meaningful and useful output for an end user. What we wanted to do was to automate this process.

When humans assemble content into a book, they are making decisions. What aspects of the information are their decisions based on, and what rules are they consciously or unconsciously using? That was what we wanted to create: a system that would allow us to store metadata about a topic, and use rules to automate assembly on a scale that we just couldn't achieve by hand-coding.

So we developed a system that we call Skynet, which allows us to dynamically sort and locate topics. Select the topics you want, and Skynet will download the code that presents those topics in a consumable way. Of course, we started dreaming big after all this. We've started thinking about moving away from the documentation-as-a-book paradigm, and started considering “Documentation 2.0”. Why not include comment fields on our documentation, which would allow our reviewers – quality engineers, subject matter experts, editors, and the like – to make comments directly in the book rather than creating a separate list? And why not offer that functionality to our users as well? What if we had the equivalent of a Facebook ‘like’ button? Users could ‘like’ sections that they found useful, or leave comments saying “when I tried to follow these instructions, X happened” or “this seems to be missing a step” or the like. If we break away from the book model, we start to be able to think about documentation as something that our users can interact with. We could have popular topics bubble up to the top of a list, or divide books into audiences, and present the information for each audience differently, giving them a tab to click to see the information in various ways. We could implement something similar to Amazon's “customers who bought this also bought that”, and present similar topics to our readers. Using single-sourced content, and content reuse, through a system like Skynet, is going to allow us to move into these more innovative delivery methods.

The team working on the Skynet project has 110% discoverability as one of its goals; to quote the team leader: “the documentation finds you”. In other words, when you're working on something, and you get stuck, the documentation is there at a click or a glance, ready for you to interact with it. Of course, I'm sure some of you are saying “Help” right now, and yes, I agree with you. That is something else we're talking about, and something that Skynet will enable us to do. Skynet pushes out XML now, and of course there's plenty we can do with that as it is, but we can also extend it to push out all manner of things, including Mallard for Gnome Help.

So let's take this conversation back to processes. All this dreaming is fantastic, but at some point we still have to actually do the hard work. Without a solid process, and a great set of standards, we're not going to be able to get there. We're doing a lot of internal testing, and we're dipping our toes in the water with the topic tool and with Skynet. So far, we've been able to slip these in to our existing standards, but that's not going to last for long. With a paradigm shift as big as this, everything is going to have to change, and that includes the way we go about producing our documentation. We need to be organised, we need to make sure what we do is repeatable, and we need to maintain our high standards of quality and accuracy in our documentation. Most of all, though, we need to maintain and even increase our focus on the customer. These changes come about not because we got bored with doing things the old way, but because we believe it's a better way to serve our audience. Never, ever forget who you're writing for: it's those poor sods out there with the problems they're trying to solve. Our goal is to give them the tools they need to solve them.

So, to recap:

One of the main things that we have learned is that process is king. If you don't have a solid process for producing documentation, then you're going to find yourself floundering at every point along the way. You're going to end up with documentation that doesn't cover what it needs to cover, isn't accurate or well-written, and doesn't get out on time. Without a plan for how you're going to tackle the project from end to end, you're not going to succeed. It's that simple.

The second thing is about tools. You need to decide ahead of time what tools you are going to need during the project, and make sure you have them ready and up and running before you start. It’s horrible to get halfway through writing and find out that one of your writers doesn’t understand how to use a semi-colon. It’s even worse if you get halfway through and realise that one of your writers doesn’t understand Docbook XML, or whatever authoring tool you’re using.

While we're talking about tools, it's important to keep it open everywhere you can. This can seem counter-intuitive to those of you who have worked in big companies, but being open doesn't mean giving away business secrets, or exposing your competitive advantage. I think Red Hat, of all companies, really proves that openness can co-exist with secure business practices.

Part of keeping it open is about keeping it real. The people behind your processes, the people doing the actual work day in and day out: they're real people, with real lives, and real families. You need to be able to work with people, and ensure that the loss of one person isn't going to make the whole project tumble. The other thing you need to remember is that your readers are real people as well: you need to make sure that you're giving them something useful, something that they will get value out of.

And finally, I want to remind you about reviews. We all understand the importance of reviewing our writing for correctness, and reviewing our projects to make sure we can learn from our mistakes. You need to extend reviews to the documentation process itself, as well. Never be afraid to change things around. Just because it worked last time doesn’t mean it’s going to work next time. And just because it’s worked in the past, doesn’t mean it’s the best way to do it in the future.


This post was originally a talk given at the Open Help Conference in Cincinnati Ohio, on 5 June 2011.

The slide deck is available on Slideshare: Open Source Documentation in Four Easy Steps (and one slightly more difficult one)

You can also download this article as a PDF file: FourEasySteps



Keeping It Stupidly Simple

Everyone has heard the old adage about the “KISS Principle: Keep It Simple, Stupid”. Easy to say, easy to remember, but often hard to do. At least, hard to do well. When we simplify our language, it often comes across as patronising, dumbed-down, or just plain rude. So how should Stupid keep it simple, without making it stupidly simple?

Consider the sentence:

“Insert the writable media into the optical disk drive.”

It’s not horribly bad as it stands, but it could be made simpler. Here’s one version:

“Open the disk drawer by pressing the button on the front of the drawer. Place the CD into the tray with the label facing upwards. Close the drawer by pressing the button again. Do not force the drawer closed.”

Well, it's simpler. We've lost some of the more easily-confused terms such as “writable media” and “optical disk drive”, replacing them with more common and regular words. We've given more specific instructions about the actual process of performing the task, which can help with understanding, and also give users more information about troubleshooting. This would be great for a manual that is introducing people to computers for the first time.

But what if I were to tell you that this instruction is to go into a Developer's Guide, that is, a book read and used by software developers? All of a sudden, the new version of this sentence has become horribly patronising. It is safe to assume that a software developer has opened a disk drawer once or twice before, and probably doesn't need to be given explicit instructions about where to find the button. They probably also understand the terms “writable media” and “optical disk drive”. So we're back to where we started from. How do we simplify the sentence for this audience without speaking down to them?

Think about what the sentence is trying to convey. How would you explain this to someone who is sitting across the table from you? Imagine you have a friend who is a software developer. You go around to their house, and they ask you a question about this product you’re working on the manual for. How would you explain it to them? If they said “what do I do now?” would you respond by handing them a CD and saying “Insert the writable media into the optical disk drive”? Probably not. I can just about guarantee that you would say something more like this:

“Put the CD into the disk drive.”

So there’s your answer. It’s not patronising, it’s not too complicated. It uses terms that everyone is familiar with, and isn’t couched in lengthy words and stuffy language. It gives all the information the user needs, and isn’t drowning in information we can safely assume they already know.

The problem, of course, is that keeping it simple is not always simple. Corporate language is increasingly creeping into the everyday, and keeping it out of technical documentation is becoming more and more difficult. Of course, if the product you are documenting is called a “Synergy Manipulation Process Leveraging Suite” there's not much you can do about that. You can, however, ensure that you give information about the product in plain language. Explain what it does (other than leverage synergies!), and explain how to use it. Try standing up and reading your text out loud. Try explaining the processes and concepts to a friend and take note of the language you use. Look at each individual word and think “is there a simpler word that I can use here?”. Keep your sentences short and to the point. Avoid repetition unless it is absolutely necessary.

Just yesterday, to give a real-world example, I saw a blog post titled “Marketing Leaders Should Help Create the Next Generation of Australian Multi-Channel Retail”. Now, I don't even know what that means (and surely it needs another noun on the end … “retail what“?). I clicked on the link, and read the first sentence, trying to work out if it was something I might be interested in, and saw whole sentences full of nothing but corporate-speak. Needless to say, I didn't read any more. And therein lies a valuable lesson – write for your audience, but never write for the sake of putting words on paper. Even if your audience is a group of corporate types in suits, who live and breathe corporate-speak, don't write an empty document, filled with empty words. Make sure you have something to say, and then say it as simply and as accurately as possible.

The pictured quotes on this page have come courtesy of Andrew Davidson’s wonderful Corporate Gibberish Generator


This blog post has been cross-posted to Professional Open Source Documentation


FOSS Training

I was privileged enough to be able to attend linux.conf.au in Wellington in January. While there, I caught Bob Edwards’ and Andrew Tridgell’s talk on “Teaching FOSS at Universities” (video of which can be found here). It intrigued me.

Open source software development is very different to developing software in a more traditional, closed source environment. The aim of the course is to teach students how to go about working within the open source community. It covers the practical aspects of checking out code from a repository, submitting patches, and undergoing code approvals and reviews. It also looks at some of the less tangible aspects, like what’s accepted and expected within the community, the motivation behind project development, and governance. The course also goes into some detail about documentation.

Documentation for open source projects is not quite the known quantity that it can be in many proprietary software environments. I once had a developer I was working with describe it as “we live in the Wild West out here”, and – at least to an extent – he had a good point. While writing for an open source project may not be as wild and exciting as that sentence makes it sound, it can sometimes be unpredictable and, at times, incredibly frustrating. Frequently, a book has been written and reviewed in preparation for a release, only for the writer to find at the last minute that a feature has been pulled from the version, a component has suddenly been renamed, or the graphical interface has had some kind of redesign. All of these things happen to open source writers on a regular basis, and frequently the only solution is to pull an all-nighter, get the changes in, and have the document released on schedule. And that's only if you were lucky enough to find out about the change with enough time to spare before release!

So how does a writer plan for and write a documentation suite when there's so much unknown in a project? The answer is – perhaps ironically – to plan ahead. You can't plan for every contingency, nor should you. But if you have a plan of any description, you're going to be better off when things start to go wrong. Pin down the details as best you can as far ahead as possible. But don't leave it there; continue to review and adapt your plan. Keep your ear to the ground, and constantly tweak your schedule and your book to suit. If something comes up in a mailing list about a feature you've never heard of, don't be afraid to ask the question – “Does this need to be documented? Will it be in the next version? Where can I get more source information?”. Another trick is to make sure you build ‘wiggle room’ into your schedule, in case you suddenly discover a new chapter that needs adding, or a whole section that needs to be changed. If you're consistently a few days or a week ahead of schedule, then even a substantial change should not throw you too far off balance.

Just like ballet dancers, technical writers need to be disciplined, structured, and organised. But you also need to have grace, poise, tact, and – most importantly – flexibility.

Thanks to Bob and Tridge, I'll be lecturing the 2010 FOSS course students at the Australian National University later this week. I'll also be contributing to the textbook that is being developed for the course. True to form, it is being built by and for the open source community, using open source tools (including Publican, which has been developed in-house by some of my esteemed colleagues). Watch this space for more information.

Cross-posted to FOSS Docs

Creating technical documentation in five easy steps

Writing a book is an adventure. To begin with it is a toy and amusement. Then it becomes a mistress, then it becomes a master, then it becomes a tyrant. The last phase is that just as you are about to be reconciled to your servitude, you kill the monster and fling him out to the public.

–Winston Churchill

[Image: absinthe]

Step 1: Planning – who is the audience? What are the book’s goals?

Step 2: Content – what are the chapters about? Where will you get the information?

Step 3: Writing – first draft, review, second draft …

Step 4: Internationalisation/Localisation – will the book be translated? Into what languages?

Step 5: Review – what worked? What didn’t? How will the book be maintained?

This is a very distilled version of JoAnn Hackos' method. It all seems very easy, doesn't it? It's a fairly logical progression through the steps. Writing in general is often considered an art, a talent (you either have it or you don't), a skill, and somewhat mysterious and unique to a small portion of the population. In fact, writing is something that many people can do, and a lot of people can do well. Where it gets difficult is the same place where any task worth doing gets difficult – sticking with it. Writing is not something you can start on Monday, and have a completed book by lunchtime on Thursday. This goes for technical writing as much as any other style, and it's where the apparent ‘magic’ comes in. Some people have the ability to sit in a small room on their own for weeks at a time, taking in and distilling technical minutiae by day, and sipping absinthe by night until – like a miracle – they give birth to a brand new shiny technical manual. And some people … well, some people just don't. Which is not terrifically surprising, on the face of it.

The idea of writing a book is romanticised in our culture. Everyone ‘has a book in them’; we’re all trying to write the ‘great American/Australian/British/$NATIONALITY novel’; one day, I’ll ‘be the next Hemingway/Dickens/Crichton/$AUTHOR’. How many people have started on the path, only to find – days, weeks, months, or years later – that it has been consigned to the desk drawer, and forgotten? This all leads us to believe, however subliminally, that writing a book is hard. It takes a long time, it is terrifically difficult, and only a bare few make it out the other side. It makes us feel better about the unfinished manuscript in the bottom drawer.

Which leads us to realise why so many versions of the writing ‘process’ exist. If you google for it, you will be spoiled for choice in the methods available. It's a way of breaking down the mammoth task of creating a book into small, manageable, easy-to-chew lumps. Somehow, five (or six, or seven, depending on the method you choose) small steps aren't half as scary to tackle as one big one: “Write a book”.

When it comes to technical writing, though, the process has more purpose. Technical documents are very rarely produced in isolation. The book could be part of a suite of documents for one product – the Installation Guide, the User Guide, the Reference Guide; it could be a guide for a product that forms part of a complete solution – the front-end tool, the back-end database, and the libraries; it could simply be a book produced by a large technical company that produces a large range of products. Whatever other books or products complement the work in progress, there needs to be a consistent approach, a ‘look and feel’ that creates a brand around the product. By following a standard process for each and every book written, that brand is more easily created and maintained, even by many authors, all working on individual projects, and in their own unique ways – absinthe or no absinthe.

Cross-posted at Foss Docs


Crafting beautiful technical documentation

Writing gives you the illusion of control, and then you realize it’s just an illusion, that people are going to bring their own stuff into it.

– David Sedaris

Technical writing is a strange breed. When you write fiction or poetry or a screenplay, it's a release, it's a way of expressing what is inside yourself, and allowing your imagination to creep into those little crevices in your brain, and poke about to see what squirms. Writing technical documentation is almost entirely the opposite. It's about getting into the heads of your readers, finding out what makes them tick, how they work, and then presenting them with the information in a way that will make them go “Aha!”. It's about taking source documentation that would make your eyelashes curl, and crafting it – shaping it, massaging it, chewing it up and spitting it out – into something that not only makes sense, but is useful, intuitive, and – dare I say it – beautiful.

Beautiful technical documentation? Why yes. I think so. Bad technical writing is hard to use, hard to understand, and makes it hard to find what you want. Good technical documentation is intuitive, easy to navigate, and aesthetically pleasing. Good technical documentation is beautiful.

The question, then, is how to create beautiful technical documentation, and how to know when that's what you've got. While it would seem easy to tell when you haven't got it, it is not always as simple as it might sound. The problem is the same one a lot of artists and craftsmen complain of – getting too close to the subject matter. One of the reasons that engineers cannot generally create effective documentation is that they get too close to the nuts and bolts of the thing. They spend so much time looking at the engine of the beast that they become unable to describe what colour the paint job is. That is where the documentation team step in – we bring fresh eyes to the project, and are able to look at it from the top down. We can describe what it looks like, what it does, and how to do it, without having to explain how that happens. But once you've been working on that single document for months, you've been through revisions, and revisions of revisions, you've been bombarded with information from the technical team, you've had requests for more detail, more depth, and more minutiae … then how do you tell if it is any good? Your advantage – your fresh pair of eyes, your ability to see the big picture, and your talent for information organisation – is no longer whole. Now you are the one who is too close to the project.

A writer of fiction would tell you this: put the book down, step away from the desk. Leave it for a week or two, a month or two. And then tackle it with fresh eyes. A technical writer would scoff – who has time for all that? This book needs to be released next Wednesday!

Often, the solution is to hand it to someone else – a fellow writer – for review and comment. But what about when that option isn’t available either? Every writer has their own method of handling this. What I do is this: I put it down, not for long, but for an afternoon, or overnight. And I write something else. Something completely different. A blog post, for example, or a chapter of a novel, or a short story. Anything that has absolutely nothing in common with the piece you’re working on. Ensure the voice that you are writing in changes, the topic changes, the emotion changes. Then, make yourself a cup of tea, and pick the book back up again. But don’t start at the beginning. Read it backwards. Read each page, on its own, in reverse order. I even read the paragraphs in reverse order. Start at the last one, and work your way back to the beginning of the book. You’re checking for typos, for sentence structure, for punctuation, grammar, and all that good stuff. By reading it out of order, you’re less likely to drift off and start thinking about something else. You’re more likely to read what’s there, rather than what you think is there.

Then find a blank piece of paper. Put yourself in the mind of your customer: What do they need to know? What are they trying to achieve? Why do they have your book? The answers will be myriad – but list the obvious ones out. You need to think about what your customer knows, and what your customer doesn’t know – that gap is where your book fits.

Once you're thinking like a customer, pin that list up somewhere you can see it, go back again, and read the book in order. If you're able, read it aloud; it helps to catch odd phrasing. This time, you need to be looking for flow. Make sure each paragraph flows into the next, that each section flows into the next, that each chapter flows into the next. Check that you're introducing concepts in order from the top down – start with the big things, and then explain the detail as you go on. Cut out anything that doesn't fit. Don't be afraid to cut and paste paragraphs, to taste-test them in a new arrangement.

And the whole time – there’s only one thing you should be thinking about – your customer. If the customer perceives value in your documentation, if your book bridges that gap between what the customer knows, and what they need to know – then they will see the beauty in it.

Cross-posted from Foss Docs
