On Writing, Tech, and Other Loquacities

The collected works of Lana Brindley: writer, speaker, blogger



OpenStack Newton Summit – Docs Wrapup

Everything is bigger in Texas: including the conferences!


This Summit was a homecoming of sorts. OpenStack started in Austin with 750 people, and returned six years and twelve conferences later with 7500 people. Even the baristas in the downtown coffee shops noticed us the second time around.

For documentation, this conference was bigger than usual as well. We had a total of eight sessions, in addition to the contributor meetup on the last day, which is more docs sessions than we have ever had before.

And we had a lot to talk about! The biggest thing on our minds was the future of the OpenStack Installation Guide. The Big Tent has changed the way projects join the OpenStack ecosystem, and with the Foundation putting an increased focus on ensuring new projects have sufficient documentation, we needed to change our approach to documenting the installation of an OpenStack cloud. There is no ‘right’ way to install a cloud any more, and there is certainly no ‘right’ set of components you should be installing when you do it. With a small documentation team and a seemingly endless parade of new components requiring documentation, we were faced with a big technical challenge, and everyone had some kind of skin in the game.

Despite some differences of opinion, the session itself was extremely productive, and we came away with a solid set of deliverables for Newton. First, we’re going to create the infrastructure to allow projects to write their own installation docs in their repos, and publish them seamlessly to the docs.openstack.org front page. This means that projects take responsibility for their own docs, while the docs team provides templates and infrastructure support to ensure that all projects are treated as first-class citizens. Secondly, the existing Installation Guide will be refocused as an installation tutorial: a highly opinionated, completely manual installation method for learning the ropes, not for standing up a production cloud. Thanks to the OpenStack User Survey, we can safely say that most production clouds are installed using some kind of automated tool, so manual installation instructions are useful as a training aid, but not in a real-world scenario.
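To give a feel for the first of those deliverables: a project publishing its own installation guide would carry a small Sphinx configuration in its repo, which the shared infrastructure builds and publishes. A minimal sketch, assuming the usual OpenStack Sphinx tooling (the project name and exact settings here are illustrative, not the final template):

    # doc/source/conf.py -- minimal Sphinx configuration for an
    # in-repo installation guide (illustrative only)
    project = 'example-project Installation Guide'

    # openstackdocstheme provides the docs.openstack.org look and feel
    extensions = ['openstackdocstheme']
    html_theme = 'openstackdocs'

    # the document Sphinx starts building from
    master_doc = 'index'

The point is that the per-project part stays tiny; the templates and the publishing job are the docs team’s contribution.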

With the big question more or less settled, we got on to the fairly long laundry list of other things that needed doing. These mostly came down to streamlining some of our processes, being clearer about the way we operate, consolidating guides that had (for obscure historical reasons) been living in their own repos back into the main one, and general editing and tidying up. A full list of the goals can be seen here: Newton Docs Deliverables. And, for historical interest, here’s the whiteboard from the Summit session:

[Photo: the whiteboard from the docs session]

During the Mitaka release, docs had a focus on Manageability, aiming to work more effectively and efficiently, with a focus on collaboration. For Newton, while manageability themes are still very much present, the focus is more on Scalability, and making our documentation efforts scale out to represent a much greater proportion of products, contributors, operators, and users. From empowering projects to write their own documentation with our support, to making our processes simpler to find and understand, to ensuring our documentation is as accurate, up-to-date, and effective as possible, it’s going to be an exciting cycle for docs!

I leave you with one of my favourite Texan big things: a bathtub margarita!

[Photo: a bathtub margarita]




linux.conf.au 2015 – Documentation Miniconf

Day 1 is drawing to a close at linux.conf.au 2015 and we’ve just wrapped the documentation miniconf. There was an interesting mix of talks today, and as the first documentation miniconf at an LCA, it’s given me some great ideas for growing the miniconf in future years.

As for me, after doing the Agile Documentation Lego talk at LCA in Perth in 2014, I felt I needed to give a good follow-up show, this time focusing on Every Page is Page One. To do this, I devised a game based on the children’s book “We’re Going on a Bear Hunt”, using Play-Doh to make it a little more hands-on.





The only writing sample you need is your resume

I came across this article recently, which states “code is the new resume”. It asserts that people seeking coding jobs should be contributing to open source and using those contributions as proof that they have the skills for the job they’re seeking. I wholly agree, but it got me thinking about the equivalent for writers.

When prospective tech writers come to me for advice on where to start, I have always pointed them towards an open source project that I think would meet their skills and interests. I usually have three main reasons for this. I want them to get experience working with processes within a docs team (particularly a mature docs team that functions well, such as LibreOffice or OpenStack). I want to give them an opportunity to get familiar with the tools and programs used by highly technical writing projects (things like Docbook XML and Publican, rather than proprietary tools like Madcap or Adobe). And, perhaps most importantly, I want to give them a chance to write things that they can share with prospective employers.

But contributing to open source docs, while beneficial in career terms, rarely ends up being something you can confidently wave in front of an employer. Rarely, if ever, will you get the chance to create a new document from scratch, something you can call truly yours. And even if you get that chance, rarely will it remain the piece of art you crafted for very long. Open source software moves quickly, and by the time you publish your meticulously researched and effectively written document, there will be a team of hungry writers circling, ready to rip into your virgin words and tear them apart. Within months, your perfect book could be an almost unrecognisable crime against information architecture, full of passive voice and typoed commands, with a title that no longer reflects its content. Certainly not something you want your name anywhere near.

Herein lies the tech writer’s dilemma: when asked for writing samples, what do you do? You don’t want to admit to authorship of something that (through no fault of your own) makes you want to quit the industry, but you also don’t want to say that you’ve been contributing to a project for months and have nothing to show for it. My answer: make your resume your writing sample. You won’t always get away with it, because some employers will ask for writing samples as a matter of course (at which point, having kept a tech writing blog for a while can be very handy. Just sayin’), but having plenty of prose in your resume and making sure it shows off your skills will do wonders for proving you can do the job.

There are no rules saying you need to deliver a two-page resume, developed using a standard Word template, to apply for a job. Designers have been handing in creative resumes for decades, and we can take a leaf out of their book. Offering something different, something that screams “I’m a writer, and I’m damn good at what I do”, is going to make any recruiter or hiring manager stop and look. Remember how many resumes these guys see. Offering a bog-standard resume means that yours will get thrown away at the first typo.


First of all, do your research. If you can, find out what writing tool your prospective employer uses, and use it. If you don’t have that one in your repertoire, then use the closest thing you can. When I applied for my first job that used Docbook XML, I delivered my resume in LaTeX (complete with “Typeset in LaTeX and TeX” in a footnote at the bottom of each page; nothing like rubbing it right in). I later found out that the hiring manager ran around the office showing it off to all his existing writers, pointing excitedly to the footnote. Once I’d learned Docbook XML, subsequent applications got that instead. If the company you want to work for uses Word, then deliver a beautifully formatted Word resume (and don’t forget to use styles!). By the same token, be aware of internal culture, and the fact that people get very passionate about their tools. Never deliver a resume built in Word with a .docx extension to an open source company if you don’t want to be teased about it forever after (assuming you get the job despite it, of course).

And, perhaps most importantly, don’t just deliver a series of dot points. This is your chance to prove you can write. Include a fairly long prose introduction, but don’t waffle. Be clear about your goals, the job you’re after, and any relevant work you’ve done previously. If you can, do some research into the company you’re looking to join, and tailor this part to the role you want. Mention how your experience meets their demands, not as a canned response to selection criteria, but as someone who has gone looking for core values and culture clues, and is addressing the human beings who work within that group. Write directly, succinctly, but passionately. Don’t use words too big for the subject (with apologies for paraphrasing C.S. Lewis), and keep your language casual but never sloppy. Get your writer friends to proofread it until you are confident it is perfect. Feel free to email me with your text and I’ll also help.

Don’t make recruiters ask for writing samples. Get creative, use your skills to your advantage, and don’t be afraid to have some fun with it. If you have your own stories of resumes (good or bad), or hiring, please share!




OpenStack at linux.conf.au

linux.conf.au this year is being held on the beautiful University of Western Australia campus, in Perth.


About 500 geeks have descended on the campus, with talks being organised into six topic streams across three days. The conference is entirely volunteer-run, with financial and community support from Linux Australia. The conference is designed by and for developers, with an emphasis on deeply technical talks. Of course, it’s not all code, though. An important aspect of open source technology is the community, and usually one track at linux.conf.au is dedicated to community matters such as diversity, governance, and documentation.


Every year, the conference schedule reveals a popular trend in open source tech. This year, that trend has definitely been OpenStack, with many talks discussing various aspects of the project, including continuous integration, bare metal provisioning, and deployment. Some of the highlights (to watch video of these talks, see the links at the end of this post):

Continuous Integration for your database migrations by Michael Still

Rapid OpenStack Deployment for Novices and Experts Alike by Florian Haas

How OpenStack Improves Code Quality with Project Gating and Zuul by James Blair

Diskimage-builder: deep dive into a ‘machine compiler’ by Robert Collins

Processing Continuous Integration Log Events for Great Good by Clark Boylan

Provisioning Bare Metal with OpenStack by Devananda van der Veen

One of the most important and interesting aspects of OpenStack is its ability to link in with other projects to extend and expand its use. The conference has been a great opportunity for developers to work out how they can use OpenStack to help their own projects. Rackspace helped encourage this by offering OpenStack developers at the conference free access to the Rackspace cloud.

Of course, sometimes it’s not just the formal talks at a conference that are the best part. Sometimes it’s all about meeting a new person who might be able to help you out with that sticky problem, or catching up with old friends and having the opportunity to just geek out for a little while over a nice meal and maybe a drink or two.


Watch the talks mentioned in this blog post:




linux.conf.au and the OpenStack Miniconference

linux.conf.au doesn’t start until Wednesday, but that didn’t stop an eager group of Rackspace engineers rocking up on Sunday night.
We didn’t just sit around in empty conference rooms, though. Michael Still (a Nova core who manages a team of OpenStack engineers in Australia) ran an OpenStack miniconference, an informal, day-long session held before the formal conference opening. We heard from people from all over the OpenStack universe, including HP, Catalyst, Canonical, and IBM, and the topics of conversation touched on all aspects of the project. For a full list of the speakers and topics, head along to the miniconf site.
The general idea of this informal start to the conference is to give like-minded people a chance to get together and discuss the things that are important to them before the formal conference sessions begin. The miniconfs also act as an incubator of sorts, fostering and encouraging people who are interested in a topic to dip their toes further into the water. The OpenStack miniconf in particular gave people an opportunity to learn about OpenStack development, to understand the state of the nation of OpenStack and what they might be able to help with, and to put a bunch of knowledgeable people in front of them to answer questions.
[Photo: Michael talking to Anita Kuno, an OpenStack developer from HP, who discussed support services for OpenStack developers during the miniconf]
Of course, the other really awesome thing about the miniconf is that it gave OpenStack developers a chance to get together in person. People who have chatted and argued on a mailing list or in code reviews during the year can often find common ground over a quick chat between sessions, or during a Q&A.
And now that the miniconf is done, it’s time to get the conference started! Stay tuned for another post about linux.conf.au.




OpenStack Contributions

Like most people I learn best by doing, so one of the first tasks I set myself when I started working on OpenStack documentation was to get a docs patch in as quickly as possible. In the end, this turned out to be a copyedit on the introduction to the High Availability guide, and it happened on day two, right before I did my induction in Sydney on day three. I had no idea this was a big deal until I got to the Sydney office to find myself being lauded, which was somewhat amusing, and slightly embarrassing.

[Screenshot: my first commit, as recorded on Stackalytics]

The main upside of this is that I am now officially an OpenStack ATC (Active Technical Contributor). The next step from here is to keep making patches, of course, along with code reviews and the like, and hopefully to keep being useful to the community that way.

One of the more important things about coming to a new project is working out the workflows. All the formal stuff is documented, of course, but sometimes it’s the common knowledge things that are hardest to pick up; the bits that ‘everyone knows’. I’m still feeling a little daunted by code reviews (what exactly constitutes an acceptable patch? To what extent should I be testing the proposed content?), but after discussions with the core docs contributors I’m starting to get a handle on those. This week has been spent largely as a sponge: just trying to soak everything up. After a day or two off over the weekend, hopefully my brain has made sense of most of the things I’ve learned so far, and next week will be a week of action!




Linux.conf.au 2014 – Perth, WA

Possibly my favourite conference, linux.conf.au is coming to sunny Perth in January 2014. I’ll be returning to the Haecksen miniconf driver’s seat (check out haecksen.net for more info and the Call for Proposals), and also will be giving a talk myself, called There and Back Again: An Unexpected Journey in Agile Documentation. This is a talk I’ve given a few times already, including at OSDC 2013, so I’m really looking forward to sharing it with the linux.conf.au audience. That, and I’ve never been to Perth before, so yay!





The Mechanical Turk and OSDC

OSDC 2011 kicked off today at the ANU. I tripped along to the OSIA (Open Source Industry Australia) miniconf, which Red Hat sponsored, and was mightily pleased to hear Stuart Gutherie reference one of my favourite pieces of historical weirdness: the mechanical turk. I couldn’t help but hold forth, and eventually gave an impromptu lightning talk on the subject.

The mechanical turk existed in the late 1700s, and was billed as an “automaton chess player”. In reality, it was a wooden box in which an accomplished chess player could sit and manipulate the chess board on top, while a wooden “turk” appeared to move the pieces mechanically. It was invented by Wolfgang von Kempelen, a Hungarian who also invented a “speaking machine”: a speech synthesiser that was actually quite important to the early development of phonetics as an area of study, but not half as famous as his great hoax, the mechanical turk.

What makes the mechanical turk so interesting is the lengths von Kempelen went to to persuade his audience that the turk was, indeed, an automaton. He would show the audience first one empty cabinet, then another, and then a third cabinet containing a complicated-looking system of levers and pulleys. The cabinet was designed so that, throughout these proceedings, the human operator could easily slip from one side to the other on a sliding seat, staying hidden from view.

The trick was not exposed until the mid-1820s, despite some very public appearances. The mechanical turk won chess games against such prominent figures as Napoleon Bonaparte and Benjamin Franklin before being found out, although no one recorded the names of the chess players working the turk from underneath.

How this becomes relevant to the IT industry is every bit as fascinating as the history of the machine. In the days of the turk, playing chess was something no machine was capable of doing: it required a level of computation that only humans could achieve. We have since invented computers that can play chess (and even appear on game shows) with ease, but there are still tasks that we can’t replace with a small shell script. Many of these tasks are frustratingly simple, but require a human brain to parse the data. Writing tasks often fit into this category: things like writing short captions for images, changing the style used in a document, or changing text from British English to American English spelling.

Within Red Hat, we spend a lot of our time working on errata text. After a program has been released it is fairly normal to find bugs. When the developers go through and fix a bunch of those bugs, they will send out an update termed an “errata release”. For each bug fixed, the technical writers need to document it, which means writing four sentences: one sentence each about the cause of the bug, the consequence of the bug, the fix that was applied, and the result of the fix. This is, naturally, quite tedious and boring. It’s natural for us to want to automate this process, but unfortunately it’s a job that requires a human brain.

So we did the next best thing: we created a mechanical turk. We call it “The Turkinator”, and it’s currently available for Fedora errata releases. Basically, you choose whether you want to write a Cause, a Consequence, a Fix, or a Result; you’re given a bug to read; you submit your sentence and that’s it. In this way, we automate the task of writing errata text by breaking the big task down into little tiny pieces and asking humans to perform the work of a shell script.
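The flow is simple enough to sketch in a few lines of Python (the Turkinator itself isn’t public, so the names and data structures here are all hypothetical):

    import random

    # the four sentence types that make up one piece of errata text
    FIELDS = ('cause', 'consequence', 'fix', 'result')

    # a stand-in for the real bug tracker: each bug collects one
    # sentence per field, each potentially from a different person
    bugs = {
        1234: {'summary': 'daemon crashes when the config file is empty',
               'text': {}},
    }

    def next_task(field):
        """Hand a contributor a bug that still needs this sentence type."""
        todo = [b for b, data in bugs.items() if field not in data['text']]
        return random.choice(todo) if todo else None

    def submit(bug_id, field, sentence):
        """Record one sentence: a complete micro-contribution."""
        bugs[bug_id]['text'][field] = sentence

    def errata_text(bug_id):
        """Assemble the four sentences once they all exist."""
        text = bugs[bug_id]['text']
        if all(f in text for f in FIELDS):
            return ' '.join(text[f] for f in FIELDS)
        return None

Each call to submit() is a complete contribution in itself, which is what makes the under-a-minute patch possible.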

This model has an extra added bonus in the open source space, though, and that is what we like to call “micro-contributions”. Anyone who has contributed — or thought about contributing — to an open source project would understand how daunting that first contribution can be. By creating the possibility of micro-contributions, potential contributors can have their first (and second, and third …) patch completed in under a minute. Instant contributors for the project, and instant contributions for the development team.




linux.conf.au 2012 Call for presentations!

linux.conf.au is the biggest Linux and open source conference in the southern hemisphere, and rightfully so! I was a speaker at the conference last year in Brisbane (the video is on my videos page) and had a great time.

This year it’s being held in Ballarat, Victoria, and I must say I’m quite looking forward to finding out what a regional LCA is like. Anyway, the CFP is open, I’ll be submitting again, and I suggest you do too. More details on the LCA website.




Open Source Documentation in Four Easy Steps (and one slightly more difficult one)

At Red Hat, we have a content services department that is about sixty people strong. Even though the department is pretty big these days, back when I started with the company we were still trying to work out the best way to run a successful enterprise-level documentation team. That means I have been involved in some of the big discussions we have had over time about what processes we needed in order to produce the massive amounts of documentation required as our product offerings grew. As a department, we grew very big, very fast, and our processes needed to be flexible enough to accommodate the large number of new hires we had, and still have, coming in, but robust enough to be valuable and reliable. They also needed to fit in well with the engineering practices in place in the company, and with the tools that our development teams use and are familiar with. Of course, the other really important factor was that we had to be open. We wanted to use completely open tools to produce our docs, but we also needed to be able to work with community teams, such as the Fedora group.

Like many documentation groups, at Red Hat we use a five-phase waterfall model to produce documentation. It’s based on the ever-popular JoAnn Hackos method: planning, then the content specification, then writing and editing, translation and production, and finally a retrospective review. At the moment, our development teams are increasingly using Agile-style development models to produce software, and that means the pressure has been on us to develop documentation in a less rigid way than the old waterfall model allows. Also, it’s no secret that the online world is changing, and people now expect to be able to interact with information at a much deeper level than ever before. They don’t want to be presented with static, hard-copy books any more. They want dynamic, interactive, usable, and above all useful documentation.

In order to be able to work out what kind of model we needed to use, we needed to go back to basics. All technology is about solving problems. Back when we were sitting around in caves, we had a problem: there was all this food running around outside, but we didn’t have a way to get it to stop running around, so we invented a club and solved the problem. Since then, we’ve used technology to solve all sorts of problems: horses were sometimes problematic to control, and they didn’t go very fast, so we invented cars. The hard wheels used on early cars weren’t very comfortable, and when they broke they really broke, so we invented pneumatic tyres. We also had problems being able to see in the dark so we invented electric light, being able to go to the toilet when it was raining or cold so we invented indoor plumbing, being able to send messages to people on the other side of the country so we invented the telephone, or on the other side of the world so we invented email.

Even the really technological things that we find ourselves documenting now are all solutions to problems. One of the first things you need to be aware of when you’re writing documentation is what problem your users have. If you can’t describe the problem in one or two sentences, then you don’t understand it well enough, and you need to keep researching. Because if you push on regardless, all you’re going to end up with is hollow marketing spin. That’s how we end up with documentation that talks about “leveraging synergies”: words that sound great, but have no meaning.

So at Red Hat we came up with a fairly simple model, which is that documentation boils down to three things:

  • Describing the problem
  • Solving the problem
  • Giving any additional information

Anyone who has done any work with DITA would understand that what I’m really talking about here is:

  • Concept
  • Task
  • Reference
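To make the mapping concrete, here’s one way you might model it in code (a sketch of the idea, not our actual tooling):

    from dataclasses import dataclass
    from enum import Enum

    class Role(Enum):
        """Each topic has exactly one information role."""
        CONCEPT = 'concept'      # describes the problem
        TASK = 'task'            # solves the problem
        REFERENCE = 'reference'  # gives any additional information

    @dataclass
    class Topic:
        subject: str  # the single subject the topic covers
        role: Role
        body: str     # the content itself, e.g. a Docbook snippet

A topic that can’t state its subject and its role that cleanly probably isn’t a topic yet.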


So we’ve more or less said that DITA is where we need to go next. But we didn’t want to completely restructure the tools we were using, because we have a fairly large people investment in them. The main tool we use is Publican, which was developed by an engineer in our Brisbane office. It uses Docbook XML and gives us a command line interface that we can use to create new blank books and apply corporate formatting, and it integrates with our internal packaging system so we can publish our books in all the different formats: HTML, PDF, and ePUB on the website, as well as RPM packages and man pages to ship alongside the software. In short, combining Publican with SVN gives us a complete CMS.

We looked at DITA and DITA-OT, the DITA Open Toolkit, and realised two things. First, it would take a significant amount of work to bring an open DITA toolchain to the level of maturity and system integration of our existing Docbook toolchain. Secondly, we wouldn’t get the really significant benefits of topic-based authoring without a Component Content Management System: a CMS that manages content at a very granular level. Putting those two things together made it clear that if we changed to DITA in one hit, it would take significant time and energy just to get back to where we already were with a mature, complete, open source tool chain. So we decided to take an evolutionary, rather than revolutionary, approach. It’s a much more open source approach: re-purpose something you already have, add a script here and a small command-line tool there, release early, release often, and let the user community guide the development, rather than trying to design and implement some grand system in a distant (and expensive) future.

What we needed was something that worked in a similar manner to DITA, gave us content re-use and all that good stuff, but worked with our existing Docbook XML and Publican tools. The first thing we did was start creating topics in Docbook syntax, using a simple command line tool we called the “Topic Tool”, which let us write XML snippets (or ‘topics’) and save them in SVN. We used an extensible template model, where the topic tool retrieves a Docbook template from a central repository to match the topic type you specify. That way we can create new topic types, and even modify the Docbook syntax of existing topic types, without changing the tool on users’ machines. That was an important decision, and a major part of the evolutionary “Release, Review, Refine” approach we wanted to use. Over time we did change the Docbook syntax of the basic topic types and create new ones, validating the open source maxim “plan to throw the first one away”.

The basic workflow with the Topic Tool is like this: you tell the tool which topic type you want, and it downloads the template and prefills some information for you. You then edit the topic in a text editor and import it back into the repository. From there the topic can be viewed directly in the repo, which means anyone can see it and use it. You can then pull those snippets into any book using an xi:include, build the book as normal with Publican, and voila! a book with content reuse. If you read any of our Virtualisation documentation you’ll probably never know it, but it’s all based on topics and maintained using the topic tool.
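In outline, the tool itself needs to do very little. A sketch, assuming an HTTP-accessible template repository (the URL and the placeholder convention are invented for illustration):

    import urllib.request

    # hypothetical central template repository
    TEMPLATE_REPO = 'https://templates.example.com/topics'

    def new_topic(topic_type, topic_id):
        """Fetch the Docbook template for a topic type and prefill it.

        Because templates live centrally rather than inside the tool,
        new topic types (and syntax changes to existing ones) never
        require an update on each writer's machine.
        """
        url = '%s/%s.xml' % (TEMPLATE_REPO, topic_type)
        template = urllib.request.urlopen(url).read().decode('utf-8')
        with open('%s.xml' % topic_id, 'w') as f:
            f.write(template.replace('@TOPIC_ID@', topic_id))
        # the writer edits the file, then imports it back into SVN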

Of course, once we got to about 300 topics in the topic tool, we noticed we had another problem: we were having trouble locating topics within the repository. What we needed was a better way to organise it all, so we wrapped a neat interface around the Topic Tool using OpenGrok. OpenGrok is designed to let software engineers search source code repositories, so it worked well for what we were trying to do. This is where the open source ecosystem came into its own all over again: there are a million off-the-shelf components and projects you can choose from to build your own system. In the end we had a web-based search tool that was pretty basic, but did the job.

Content reuse is an obvious application of topic-based authoring, but by this stage we’d started to realise something even more exciting. Our definition of a topic is a unit of information with a single subject – it talks about one thing, and one thing only – and a single information role: it’s a concept, a task, or a reference. If we gave three topics – a concept, a task, and a reference – to a robot, along with a rule describing the “explain, answer, extra info” pattern and some kind of graphical template, that robot could assemble those topics into meaningful and useful output for an end user. What we wanted to do was automate this process.

When humans assemble content into a book, they are making decisions. What aspects of the information are those decisions based on, and what rules are they consciously or unconsciously using? That was what we wanted to create: a system that would let us store metadata about a topic, and use rules to automate assembly on a scale that we just couldn’t manage by hand-coding.
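As a toy illustration of what rules over metadata means (the field names and the subject are invented for the example):

    # each topic carries metadata alongside its content
    topics = [
        {'subject': 'live migration', 'role': 'concept',   'body': '...'},
        {'subject': 'live migration', 'role': 'task',      'body': '...'},
        {'subject': 'live migration', 'role': 'reference', 'body': '...'},
    ]

    # the "explain, answer, extra info" pattern, written down as a rule
    PATTERN = ['concept', 'task', 'reference']

    def assemble(subject):
        """Select every topic on one subject, ordered by the pattern.

        This is the decision a human editor makes when assembling a
        book, made explicit so a machine can repeat it at scale.
        """
        selected = [t for t in topics if t['subject'] == subject]
        selected.sort(key=lambda t: PATTERN.index(t['role']))
        return '\n'.join(t['body'] for t in selected)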

So we developed a system that we call Skynet, which allows us to dynamically sort and locate topics. Select the topics you want, and Skynet will download the code that presents those topics in a consumable way.

Of course, we started dreaming big after all this. We’ve started thinking about moving away from the documentation-as-a-book paradigm, and started considering “Documentation 2.0”. Why not include comment fields on our documentation, allowing our reviewers – quality engineers, subject matter experts, editors, and the like – to make comments directly in the book rather than creating a separate list? And why not offer that functionality to our users as well? What if we had the equivalent of a Facebook ‘like’ button? Users could ‘like’ sections they found useful, or leave comments saying “when I tried to follow these instructions, X happened” or “this seems to be missing a step”. If we break away from the book model, we can start to think about documentation as something our users can interact with. We could have popular topics bubble up to the top of a list, or divide books into audiences and present the information for each audience differently, giving readers a tab to click to see the information in various ways. We could implement something similar to Amazon’s “customers who bought this also bought” and present similar topics to our readers. Using single-sourced content and content reuse, through a system like Skynet, is going to let us move into these more innovative delivery methods.

The team working on the Skynet project has “110% discoverability” as one of its goals; to quote the team leader: “the documentation finds you”. In other words, when you’re working on something and you get stuck, the documentation is there at a click or a glance, ready for you to interact with. Of course, I’m sure some of you are saying “Help” right now, and yes, I agree with you. That is something else we’re talking about, and something Skynet will enable us to do. Skynet pushes out XML now, and there’s plenty we can do with that as it is, but we can also extend it to push out all manner of things, including Mallard for Gnome Help.

So let’s take this conversation back to processes. All this dreaming is fantastic, but at some point we still have to do the hard work. Without a solid process and a great set of standards, we’re not going to get there. We’re doing a lot of internal testing, and we’re dipping our toes in the water with the topic tool and with Skynet. So far we’ve been able to slip these into our existing standards, but that’s not going to last for long. With a paradigm shift as big as this, everything is going to have to change, and that includes the way we go about producing our documentation. We need to be organised, we need to make sure what we do is repeatable, and we need to maintain our high standards of quality and accuracy. Most of all, though, we need to maintain, and even increase, our focus on the customer. These changes come about not because we got bored with doing things the old way, but because we believe this is a better way to serve our audience. Never, ever forget who you’re writing for: it’s those poor sods out there with problems they’re trying to solve. Our goal is to give them the tools they need to solve them.

So, to recap:

One of the main things we have learned is that process is king. If you don’t have a solid process for producing documentation, you’re going to find yourself floundering at every point along the way. You’ll end up with documentation that doesn’t cover what it needs to cover, isn’t accurate or well written, and doesn’t get out on time. Without a plan for how you’re going to tackle the project from end to end, you’re not going to succeed. It’s that simple.

The second thing is about tools. You need to decide ahead of time what tools you are going to need during the project, and make sure you have them ready and up and running before you start. It’s horrible to get halfway through writing and find out that one of your writers doesn’t understand how to use a semi-colon. It’s even worse if you get halfway through and realise that one of your writers doesn’t understand Docbook XML, or whatever authoring tool you’re using.

While we’re talking about tools, it’s important to keep things open everywhere you can. This can seem counter-intuitive to those of you who have worked in big companies, but being open doesn’t mean giving away business secrets, or exposing your competitive advantage. I think Red Hat, of all companies, really proves that openness can co-exist with secure business practices.

Part of keeping it open is about keeping it real. The people behind your processes, the people doing the actual work day in and day out, are real people, with real lives and real families. You need to be able to work with them, and ensure that the loss of one person isn’t going to make the whole project tumble. Remember, too, that your readers are real people as well: make sure you’re giving them something useful, something they will get value out of.

And finally, I want to remind you about reviews. We all understand the importance of reviewing our writing for correctness, and reviewing our projects to make sure we can learn from our mistakes. You need to extend reviews to the documentation process itself, as well. Never be afraid to change things around. Just because it worked last time doesn’t mean it’s going to work next time. And just because it’s worked in the past, doesn’t mean it’s the best way to do it in the future.


This post was originally a talk given at the Open Help Conference in Cincinnati, Ohio, on 5 June 2011.

The slide deck is available on Slideshare: Open Source Documentation in Four Easy Steps (and one slightly more difficult one)

You can also download this article as a PDF file: FourEasySteps

