On Writing, Tech, and Other Loquacities

The collected works of Lana Brindley: writer, speaker, blogger



Facebook, Dynamite, Uber, Bombs, and You.

This is the transcript of a talk I gave at WriteTheDocs Australia, in Melbourne, on 15 November, 2018. A video is also available; see the Videos page for a link.


This little story starts with the American son of German migrants, Herman Hollerith. He was born in 1860, got a degree in engineering, and then went to work at the US Census Bureau in 1879. At that time, the census was just a headcount; they didn’t collect any real data on the population, simply because they didn’t have the ability to process that information. As it was, they only ran a census every ten years, and it took them several years to process all the information. This meant that the big concern of the department was that, before too long, it was going to take longer than ten years to do the calculation, meaning the next census would have started before the last one was complete.

These days, we call that overwhelming technical debt.

So young Master Hollerith was a bit of a bright spark, said “there ought to be a machine for doing the purely mechanical work of tabulating”, and set out to build one. By 1884, he had a prototype, and the US Census Bureau used the machines for the 1890 census.

Herman Hollerith, Bright Spark.

The data was first encoded on a punch card with the pantograph, then operators would load the card into the tabulator. Pins would drop down onto the card, and where there was a hole the pin would drop through and land in some mercury, which completed an electrical circuit and advanced one of the dials on the machine. Each dial on the machine represented one characteristic: age, gender, state, and so on. Then the operator would record the data, remove the card, and put it into a sorter drawer. Then they could reset the tabulator, ready for the next one. According to the US Census history site, operators could process 80 cards a minute this way.

Because Hollerith was a bit of a smarty pants, he didn’t sell the machines to the government, he leased them, and after a bit he managed to lease them to all sorts of government departments all around the world, which kept the money flowing in between censuses. To handle this, he created a company called, creatively, the Hollerith Electric Tabulating System.

Over the next few years, Hollerith’s company found other companies that were making “machines” like employee punch clocks and weighing systems, and they merged into a partnership called the Computing-Tabulating-Recording Company (CTR). That company then became International Business Machines in 1924, and the companies were finally all merged in 1933.

Anyway, there is a whole other talk on IBM corporate history, but the short version is that Thomas J Watson became IBM chairman in a ridiculously dodgy deal in about 1914 WHILE HE WAS IN JAIL, and remained so until he died in 1956. Because the guy was so dodgy, he didn’t like writing things down. I mention that, because it becomes important later in the story.

IBM Dehomag Hollerith D11 Tabulator

These are some early IBM machines made by the German IBM subsidiary called Dehomag. These machines were called the Dehomag Hollerith D11 tabulator and sorter. They were originally used primarily in banks to process account transactions, calculate interest, and – most interesting to the government of the day – cross reference bank account numbers to census data.

This was in about 1935.

IBM Dehomag Hollerith D11 Sorter

You know when you see pictures of people who have been in the concentration camps, and they have a number tattooed on their arm? Those numbers connected the human being to the punch card for a Hollerith machine. So here’s a fact: In Holland, they had extensive Hollerith machine infrastructure in the years before the war, and 73% of Dutch Jews were killed by the end of it. In France, they had very little Hollerith infrastructure, and what they had was disorganised. Only 25% of French Jews were killed by the end of the war. In short, without this technology, the Holocaust would still have happened, but it wouldn’t have been so well organised, so well planned, and so well executed.

Of course, I’m sure you all know about the Nuremberg Trials that happened after the war ended. We remember those for the Nazis who got sentenced for war crimes in the main trial, but there were 12 more trials between 1946 and 1949, which covered 177 people: physicians, judges, military personnel, civil servants and also industrialists.

Gustav Krupp and his friend Adolf Hitler

People like Gustav Krupp, who provided Panzer tanks to the Nazis. Interestingly, it didn’t slow that company down much: you probably know them as ThyssenKrupp now. They make elevators. Anyway, you would expect that senior officials in IBM would have ended up before the Nuremberg trials too, but joke’s on you!

The Nuremberg Trials were pretty unique, in that they had to be conducted in four languages, more or less simultaneously, which had never been done before. Guess who provided the computing power for that? Got it in one! Interestingly, for completely unrelated reasons, I was in Nuremberg a couple of months ago, and took the opportunity to go to the Documentation Center and Nazi Party Rally Grounds. It was a fascinating tour, but much to my dismay, IBM was not mentioned once.

Hopefully what you’ve picked up so far is that Hollerith was a brilliant young man who solved a very difficult problem. It’s not his fault that the technology he developed was used by Hitler to murder people. It’s also not his fault that not only was no one held responsible for that, but also that we seem to have collectively forgotten about it. The point I’d like to make, here, is that throughout history technology has been used in some pretty horrible ways. So let’s look at some more historical “oh no” moments …

Alfred Nobel, Repentant “Merchant of Death”

Alfred Nobel, you might recognise his name. He was the guy who invented dynamite, but he also owned a large weapons manufacturing plant. Anyway, Alfred’s brother died in Cannes, and a French newspaper got confused and wrote an obituary for Alfred instead. It was pretty nasty stuff, titled “The merchant of death is dead”, and included this wonderful line: “Dr. Alfred Nobel … became rich by finding ways to kill more people faster than ever before”. So Alfred read this, had what is technically known as an oh no moment, and completely secretly went on to establish the Nobel Prize with his personal fortune. Hilariously, he decided not to tell anyone about it, so they all got a nice surprise after he died for real and they read his will.

Otto Hahn got the Nobel Prize for Chemistry in 1944 for working out Nuclear Fission. The prize really should have been given to him along with his two colleagues Lise Meitner and Fritz Strassmann, but the other two had had to leave Germany in a bit of a hurry a few years earlier. Later on, of course, that technology was used to bomb Nagasaki and Hiroshima, and the entire world had an oh no moment. Now we have ethics committees in Chemistry.

Eugenics was the practice of sterilising portions of the population in order to stop them breeding. Hitler was obviously a big fan, but before WWII it was a bit of a big deal in the US. 33 US states had sterilisation programs in place against mentally ill people, disabled people, alcoholics, people living in poverty, and people deemed to be promiscuous. Some reports say around 65,000 Americans were legally sterilised during the first half of the 20th century. That was a bit of an oh no moment for Biology. The World Health Organisation (WHO) was created in 1948, as the first specialised agency of the UN. Its mission was multi-faceted, but I’ll draw your attention to this bit: “to address the underlying social and economic determinants of health through policies and programmes that enhance health equity and integrate pro-poor, gender-responsive, and human rights-based approaches”.

Basically, don’t let our government kill you and tell you it’s good for your health.

In a similar vein, some great drug failures like Thalidomide made better oversight of medicines seem like a good idea. Thalidomide was sold over the counter from 1957, and was recommended to pregnant women for relieving the symptoms of morning sickness; unfortunately, it caused horrific birth defects. Only 40% of children born with defects survived, and those who did were often missing limbs. Oh no. Court cases were brought against the manufacturer all over the world, including a class action in Australia as recently as 2012. The US Food and Drug Administration was created in 1927, but the thalidomide cases significantly strengthened its abilities, and a whole bunch of other laws were introduced around the world to address drug testing.

We didn’t let too many bridges collapse before we decided that civil engineering could use some regulation. In America, the National Society of Professional Engineers was established in 1932, which adopted a formal code of ethics in 1964.

And it only took one plane flying into a building to establish that maybe letting people take box cutters on planes was less than sensible.

Hopefully you can see where I’m going here.

People very rarely come up with new ideas, new inventions, amazing new discoveries, with the intention of killing or hurting people. It’s the unintended consequences that cause the problems. But it’s also those unintended consequences – the “oh no” moments – that lead to improvements in the way we handle things. We end up with better laws, better regulations, and our society improves as a result.

Ethics committees and government oversight departments and legal rulings don’t stop bad things happening entirely. But they can certainly help prevent them, and at least they give us some kind of recourse if the worst happens.

Now, I want you to consider these two cases: Volkswagen was caught out having written software code that allowed their cars to cheat emissions tests.

Uber also developed software (called ‘Greyball’) which allowed them to evade law enforcement officials trying to crack down on ride-sharing.

The difference is that Volkswagen software engineers went to jail, and Uber software engineers didn’t. Why? Because one is a car company, and one is a software company.

Startups especially like to use the phrase “move fast and break stuff”. In IT, we talk about “innovation” a lot, and “thinking outside the box”. I’m sure we all know a project manager who has encouraged us to “challenge paradigms” or “think different”. This is all great, and I’m not suggesting we should stop building new things, or thinking up interesting ways to tackle problems! But what happens when we step back from what we’ve created, and go … oh. No.

Really, in my opinion, IT should have had its oh no moment when IBM provided the computing power that made the Holocaust possible; but not only did that go unpunished, we’ve also largely forgotten about it over the intervening 80 years or so. There’s never really been a public reckoning. So, now that we’ve looked at some examples of oh no moments leading to real change, let’s look at some aspects of IT innovation that haven’t …

The development of the world wide web in the 90s was obviously very optimistic, and I’m not sure we can blame anyone for failing to see 4Chan coming. But we can probably blame some of the social media sites for failing to see the dens of iniquity they have become, and we can certainly blame most of them for failing to do anything about it once it happened. Twitter’s response to the incredible amount of white supremacism has been at best ineffective and, at worst, non-existent.

I find self-driving cars a particularly thorny problem. On the one hand, there are huge benefits to the technology. Consider not only the implications for the environment and the convenience it can add to our lives, but also the added mobility and independence it would give people with disabilities: this technology could add so much to our lives and to our society. But we haven’t fully thought through the impacts yet. Most of the accidents related to self-driving features so far have happened because humans became too reliant on the tech, doing stupid stuff like watching movies, reading, or napping, instead of acting as a last-resort safeguard. What happens when we rely on the tech so much that we stop looking before we cross the road, because we “just know” the cars will stop for us? I have self-driving features in my car, and it makes stupid mistakes ALL THE TIME. The tech is not advanced enough for us to rely on it – driving is, after all, a life or death proposition every time you get on the road. But I also don’t think it should be hidden away until it’s perfect, because how else do you learn what “perfect” is? This is a tricky one.

I looked into the killer robots thing a few months ago, and that’s another tricky one, because the technology being developed for fully autonomous weapons systems is also used for things like the afore-mentioned self-driving cars, aeroplane technology, medicine and surgery applications, and even peace-keeping operations, like dropping aid packages into war zones. In one case in South Korea, a university went into partnership with an arms manufacturer to develop autonomous weapons, and what ended up stopping them was a bunch of universities signing an open letter (which was initiated by an Australian academic, incidentally), threatening to boycott the university involved. The South Korean government wasn’t intending to step in, and the UN didn’t step in. Without an ethics organisation, what other recourse is there to stop things? As it is, we don’t really know that the research has been stopped. The chancellor of the university wrote a lovely letter saying that, but the weapons organisation funding it could be quite happily moving along, and I wouldn’t be at all surprised if they had waved large amounts of money at academics to bring them into the project. That’s all speculation of course, but that’s really my point: there’s no oversight, no regulations, no repercussions, but there is a hell of a lot of money.

Here’s one more for you, that was reported in the Economist a couple of months ago, and has been picking up pace in the mainstream media recently: Xinjiang is a province in north-west China, largely occupied by the Uighur people. So, the Uighur are the largest Muslim group in China. In Hotan, a city in Xinjiang, there is a police station every 300m or so. If you don’t think that already sounds like a police state, wait for this bit: every citizen has an identity card, and at checkpoints around the province, police will scan people’s cards, take photographs and fingerprints, and perform an iris scan, and people are told to unlock and hand over any smartphones, which are put into a cradle and the data downloaded to be analysed later on. That’s not just for people they’re suspicious of; it’s for everybody. There are four or five checkpoints every kilometre, with citizens moving through them many times a day. The roads are lined with poles holding cameras which watch pedestrians, but also perform pattern matching between number plates on cars and the faces of the people driving. And if you’re Uighur and you have committed even a relatively minor infraction, then you get sent to one of the hundreds of “re-education” camps in the province, which don’t officially exist. No one really knows how many people are locked up, but the estimate in the Economist article was 140,000 people in Hotan alone. The ABC in recent reporting says 2 million. So, locking up minority groups for no reason is by no means a new thing, but the way technology is being put to use in this case certainly is. I bet the people who invented facial recognition have had several oh no moments thanks, at least in part, to China.

In that same vein, you might also have heard of Palantir. They’re the company that makes Minority Report-style predictions about crime in an area. It was originally developed for the Pentagon to identify terrorists in Iraq, but that technology has now been imported to downtown Los Angeles, where it’s being used to lock up brown people *before* they commit a crime. So it’s not just China.

So, the increasing mainstream media awareness of personal data and the nefarious purposes it can be put to has been heartening recently, but I’m not sure that Cambridge Analytica and Facebook are enough to be considered an oh no moment that will actually change anything in our industry. I think it might be starting, though.

So, what does all this have to do with documentation?

You might be aware that, after quality assurance, the group that finds the most bugs in software is the documentation team. We are often put in the position of poking at products we don’t yet fully understand, in order to work out how to use them. It is the writer’s job to come at products like a clueless user: poke things, bend them, use them in ways they weren’t designed to be used.

I say we should expand that thinking just a tiny bit: how could I use this product to do harm? How could I use it to discover things about people I really shouldn’t be discovering? Can I use this social product to stalk my ex? What about someone who said something nasty about me online, can they find out anything important about me? Can I use this platform, this API, this plugin, this app, this feature to do something that, as reasonable moral human beings, we feel a little uncomfortable about? It’s also important to think about using it in conjunction with other tech. Recognising someone’s face is one thing, but when you combine that with GPS locations, government databases, and purchase history, you have a completely different problem. And also an answer to why I stopped using supermarket loyalty cards. Only last week I received an email confirming my booking for a hotel I hadn’t heard of. Curious, I clicked the link to ‘edit my booking’ and discovered that, while I couldn’t see the whole credit card number, I could have adjusted the dates of the booking, upgraded the room, or purchased additional services. All because someone mistyped their email in a form.

If I can give you one piece of advice, it’s don’t read your marketing department’s hype. Or if you do, don’t believe it. Nuclear fission has saved millions of lives through cancer treatment, provided light and power to billions, and made surgery and even vegetables safe through irradiation. That’s what the marketing department wants you to know. But it also made nuclear war a very real threat, and the marketing department is unlikely to mention that.

So, question things. Raise bugs. 

Talk about it with your development team, and your manager. Until software engineering has a real, honest to god, oh no moment, and an ethics board with actual legal teeth is born, you — the tech writer — are at the forefront between technology that helps, and technology that can hurt.


Content as a driver of change – Then & Now

Humans have always written things down.

Those of you reading this post, with your laptops, and mobile phones, and iPads, and vanity email accounts, and your single sourced, content-reuse, DITA-compatible Docbook XML toolchains, with all your fancy Javascript elements and mind-boggling CSS overlays. You are just the latest in a long line of human beings who have been doing the same thing for millennia. Albeit with different tools.

Panel of simple figures with boomerangs (Google Art Project)

The original owners of the land we are standing on today are the Wurundjeri people. Australian indigenous art is the oldest unbroken tradition of art in the world. These weren’t just the pre-history version of hanging a Monet print on your loungeroom wall. Indigenous art exists on all manner of things: paintings on leaves, wood and rock carvings, sculptures, and of course cave drawings. This art gave early Australians a way to record the things that mattered most to them in their lives: they often involve scenes of hunts or special ceremonies. In the case of Australian art, many include megafauna and other extinct species, and even the arrival of European ships. More than a record of events, though, they were probably also a method of teaching. Each indigenous tribe had its own mythology (collectively known in English as ‘the Dreaming’), which used stories to convey morals or other educational information. Most children who grew up in Australia would be familiar with the Dreamtime story about Tiddilik the Frog, a fable about greed and about finding humour in bad situations. Indigenous art and the stories that lie behind them are really just an early technical manual for life itself, especially in a world where living for any length of time could be quite difficult.

De architectura

Who here remembers the story of Archimedes and his bath? It’s a demonstration of how Archimedes used water displacement to measure the density of an object (in this case, the king’s crown). Of course, the bit we all remember of the story, though, is that Archimedes, having made his discovery in the bath, went running naked through the streets of Syracuse, crying “Eureka! I have found it!”. This story comes to us from one of the oldest surviving technical manuals in existence, the “De architectura” by Marcus Vitruvius Pollio, which was published in around 15BC. Of course, the Ancient Greeks & Romans were well known for their literature, their scholars, their philosophers, and perhaps above all, their library. The Royal Library of Alexandria in Egypt was the largest repository of knowledge in the world between the 3rd century BC and 30BC. The famous fire that destroyed it was probably set by Julius Caesar himself in 48BC, but the library continued in some capacity until the Roman Emperor Aurelian destroyed what remained in about 270AD. This was of course a massive blow to literature, but it was also an incredible loss of technical data. Thankfully, the Ancients managed to keep going even after the library was destroyed, and we now have surviving copies of wonderful pieces like Pliny’s Naturalis Historia, which is essentially the world’s very first Natural History encyclopaedia, and which set the stage for many more technical manuals to come.

Gutenberg Bible, Lenox Copy, New York Public Library, 2009

Jumping over to Europe, Gutenberg did his thing with the printing press in the mid 1400s, but printed books were still a terrifically rare and expensive thing until well into the 1500s and 1600s. Up until that period, if you were a fairly ordinary person in a fairly ordinary European town, you were probably aware of the existence of almost exactly one book: the bible that your local clergy had sitting on a plinth in your church. You probably couldn’t read yourself, or if you could, probably not well enough to read and understand a book written predominantly in a particularly stuffy version of Latin, and even if you could read that well, you wouldn’t be allowed to touch it. No, the bible was the word of God, and as such could only be read and interpreted by men of the cloth. They didn’t really want people going off and reading the Bible on their own and drawing their own conclusions about things. Of course, this got really interesting once the Reformation started to get underway in the mid 1500s, and people started to read the Bible for themselves. In fact, for a little while there in England, Henry VIII decided that ordinary folk (and all women) were banned from reading the Bible. All this running around reading things and learning by everyday people was just a little too much for him to bear, especially when they started disagreeing with him.

Marianne Stokes (1855–1927), ‘The Frog Prince’

Still in Europe, with better access to mass printing, publishing written versions of early verbal history became the thing to do. We all know the Brothers Grimm were writing fairy tales in German in the early 1800s, but they certainly weren’t the first to try and document the oral history of early Europeans. Charles Perrault is considered the original author of many of the Disney favourites, including Cinderella, Little Red Riding Hood, and Sleeping Beauty, and he was writing in French over a century before the Grimms, in the late 1600s. But even he was just writing down stories he’d heard from others. My favourite version of Cinderella comes from Giambattista Basile, published in Neapolitan in 1634, some years after he died. These stories, gruesome as they were before Disney got a hold of them, were intended in many cases to be fables for children, with a moral story, but were also used as cautionary tales for adults. In Basile’s version of Cinderella, a husband is warned of the horrors of not being too picky about your second or third wife, there’s a general warning to the household about choosing your housekeeping staff carefully, a warning to parents about treating children fairly, and a warning to young women about being proud. And that’s before we get to the bit Disney likes: “if you’re a good person, good things will happen to you”. Some versions of the story also slam home the opposing moral: “if you’re a bad person, bad things will happen to you”, with both the step-sisters either mutilating their own feet to fit the slipper, having their eyes pecked out by birds at Cinderella’s wedding, or some equally terrible combination. As for other horrifying fairy tales, anyone who has read anything by Hans Christian Andersen will know that they often got worse before they got better. There’s a reason Disney never took on “The Little Match Girl”. For a long time, what we now know as fairy tales were the easiest and most entertaining way for a largely illiterate population to record and share moral stories and warnings.

Geisha and maiko

A ribbon that runs through all of these is the idea of the master and apprentice. These types of relationships began in Europe in the 1300s, and were a way for a tradesperson to get cheap labour, while a young apprentice got a bed to sleep on, food to eat, and the hope of a trade later on. This system was used throughout England and Europe for all skilled trades: from seamstresses and blacksmiths, to Knights with their squires. However, the general principles of apprenticeships exist throughout the world, with one of the earliest examples being the idea of a Maiko, or a trainee Geisha. Geisha have existed in Japan since around 700, and still take in Maiko to this day. While this isn’t written knowledge, it is an important footnote when we’re discussing the history of content, as this was the main way that specialised technical knowledge was handed down.

James Campbell, ‘News from My Lad’

Of course, a young apprentice, wishing to remember all the things they had learned, might be inclined to write them down. By the time the Industrial Revolution was in full swing, paper and books had become affordable, schooling was more available to children throughout Europe, and literacy was becoming much more widespread, especially among those bright young apprentices who left home to seek their fortunes. And while young people have written home to their families since ancient times, letter writing really hit its stride around the turn of the century, when it became not just a way to record their days and connect with their families, but also a way to explore political and religious matters and to express emotions: poison pen letters, love letters, and obituaries are all well represented. Another form of writing more like the manuals we know today, of course, is the recipe book. Many household cooks would enshrine their recipes in writing, to be handed down to the next generation. I regularly bake a family choc chip biscuit recipe that has been handed down mother to daughter for at least five generations, and possibly quite a few more than that.

But enough history. The older writers in the audience will probably remember most of these more recent forms of technical communication. Some of the more unfortunate among you may still be working with some of them. In that case, I’m sorry.

O’Reilly books

Printed books are pretty much all of our yesterdays. In some ways, it still feels as though you’re not a REAL writer until you’ve got your name on the outside of an actual book, made out of dead tree, and sent from some printer. I chose a picture of O’Reilly books on purpose, as OpenStack released yet another of our manuals as an O’Reilly dead-tree version last year, although we have no immediate plans to repeat that in a hurry. Personally, I’m part of the problem here. I love having dead-tree reference books, especially for things like Style Guides, which are somehow easier to have sitting on my desk as I write, rather than relying on an internet search (which can, for me at least, be very distracting. Hello, Twitter!). As for writing them, though? No, I love the idea of being able to catch and fix errors even after publication. Nevertheless, printed books, especially technical manuals, are our history, our present and, to some extent at least, probably also our future.


A close cousin of the printed manual, whitepapers are caught somewhere between marketing material and technical documentation. In digital form, they are probably not going to go away any time soon, but the printed whitepaper has almost certainly been confined to the recycling bin these days. My very first piece of technical writing was a white paper. I had a Marketing undergraduate degree and half an MBA, so it was a fairly logical piece of work for me to be doing at the time. I enjoyed it immensely, and immediately set out to become the whitepaper expert, intending to build a career around it. Thank goodness I discovered technical manuals in the meantime, and was saved from a life of writing whitepapers!

Online books

And, finally in the ‘recent’ category, I have a screenshot from my very own project. This is, for all intents and purposes, an online version of a printed ‘book’. It has a table of contents down the side, divided into chapters and sections, and it’s designed to be read from beginning to end: simple concepts at the beginning, more complicated procedures as you move through, with reference information (tables of data, contact details, and a glossary) at the end.

These have all been great methods of getting information out there, but they are all destined to become as archaic as the fairy tales and the cave paintings we discussed earlier. Let’s take a look at those things we’re doing a little differently today, that will drive the way we revolutionise and improve content management in the future.

Webcast

First of all, I want to briefly touch on MOOCs. These are the future of face-to-face training courses. MOOCs not only allow people all around the world to study when and where they choose, but they also allow institutions to create online tools that mimic real-world scenarios, and allow students to learn real skills in a safe environment. This is great especially for the tech industry, where students can work on realistic IT setups that they might not be able to recreate in their own environments, but it also works well for teaching other knowledge work skills such as customer service and financial skills.

The DITA finch

The main thing that, I think, changed the way we looked at the information we were creating was DITA. Of course, DITA isn’t new. It was named in 2001, and formalised in 2005, but various groups have been working on data mapping and the like since the 60s and 70s, and it became especially popular in the 90s, with the publication of JoAnn Hackos’s book ‘Managing Your Documentation Project’ (and later ‘Information Development’), a book probably most of us have on our shelves, and to which I (at least) still refer regularly. DITA was really the first formal, open standard that let us consistently and accurately categorise data into formal types. And it was simple enough that we could all use it, remember it, and above all teach it to others easily. Even if you’re not using a specific DITA tool, the general principles of DITA – splitting content into one of only three data types – could be used to underpin any tooling system.

Of course, the main driving principle behind DITA (besides the categorisation) is content reuse and single sourcing. This is another key component of how we’re changing the way we look at content. It’s not about a beginning and an end any more. With this idea, we walked away from the age-old idea of delivering a story, and moved towards this critical period of considering what information is required where, and when. This was important mostly because we were actually starting to consider how people consume information and learn difficult concepts. We no longer assumed that information we gave to people in the beginning of a book stuck with them as they moved through the rest of the content. Sometimes, learners needed to go over information again and again before they actually learned it and could apply that information to later, more complicated, tasks. And, being the inherently lazy writers that we are, we didn’t want to retype that every time. So single sourcing and content reuse were naturally very easy for us to adopt.

And that leads me to perhaps my favourite topic right now: every page is page one. This is a model designed by Mark Baker, and while his model is certainly not the only one out there, it’s one of the best developed. The general idea behind this is that no piece of content is more or less important than any other. It’s not quite DITA, in that a ‘page’ in EPPO terms is much bigger than a ‘topic’ in DITA terms. The best example comes from Baker himself, where he refers to a recipe. A recipe contains, in DITA terms, a concept (some information about the recipe, that describes what you’re actually creating, and maybe some background, where the recipe has come from, and the types of ingredients that you need), followed by a procedure (the actual steps of the recipe), and finished with reference information (serving suggestions, maybe information on converting measurements, or ingredient substitutions). In EPPO, the entire recipe is the ‘page’: it contains everything you need to be able to perform the task, including all that concept and reference info. One of the best ways to think about EPPO is in terms of a Wikipedia page: there are links to further information if you need it (and I’m sure all of us here have gotten sidetracked by clicking those links in a Wikipedia article!), but that page contains all the specific information about a particular topic. There is no beginning to Wikipedia, and there is most certainly no end.
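
To make the distinction a little more concrete, here is a very rough sketch of how Baker’s recipe example might be typed as DITA topics. The recipe itself is invented, and this is just standard DITA markup used as an illustration, not anything from Baker’s own material:

    <!-- concept: what the dish is -->
    <concept id="pavlova-about">
      <title>About pavlova</title>
      <conbody><p>A meringue-based dessert, best made a day ahead.</p></conbody>
    </concept>

    <!-- task: how to make it -->
    <task id="pavlova-make">
      <title>Making the meringue</title>
      <taskbody>
        <steps>
          <step><cmd>Whisk the egg whites to stiff peaks, then fold in the sugar.</cmd></step>
          <step><cmd>Bake at 120 degrees for around 90 minutes.</cmd></step>
        </steps>
      </taskbody>
    </task>

    <!-- reference: substitutions and serving suggestions -->
    <reference id="pavlova-substitutions">
      <title>Substitutions</title>
      <refbody><section><p>Caster sugar can stand in for icing sugar.</p></section></refbody>
    </reference>

In DITA terms those are three separate topics; in EPPO terms, the single ‘page’ is all three presented together, the way the Wikipedia analogy suggests.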

Mrs Duffee reading her Kindle

So this leads me to the big question: what does the future hold for content? I think there are a few main themes we can tease out of our little journey through documentation:
  • The internet is making things possible that never were before.
  • Control over content is shifting from those producing it, to those consuming it.
  • Consumers are used to being able to search vast resources for content, and filtering those results themselves. They don’t want us to tell them what they need to know.

Since well before the birth of Christ, in one form or another, we’ve been writing stories. Now the internet allows people to create their own stories, not just have one told to them. In many ways, this shows a maturation in human development: we’re no longer willing to receive whatever is fed to us, we want to create our own realities, and we have the tools to be able to do that.

But that is a massive challenge–and (I would argue) an opportunity–for technical writers. We get to break new ground, and thankfully we’ve been working on the building blocks of this type of communication for a few decades now. The challenge now is to start delivering documentation in a completely new way, without leaving our organisations, our management, or our more stubborn clients behind. Nobody said breaking new ground would not require effort, or determination. As we shed old ideas, old processes, old technologies, and old systems, there will be people who decry change, and impede our progress. But even if you only manage to implement a small piece of your grand vision, even if all you ever get to do is plant a seed of an idea in someone’s head that maybe–just maybe–there’s a different way to do things, then you have succeeded. After all, every one of the pieces of content I have mentioned here had its detractors, from everyday ‘concerned citizens’, right up to royalty, and the literati.


I mentioned Archimedes earlier, but now I would like to pick a different quote of his: give me a lever and a firm place to stand, and I shall move the world.

Right now it seems to me, that where we could go next is almost infinite. People have always created and consumed content. As long as we continue to put the information out there, and give people the tools to find it, they will continue to do so. We are not at the end of a journey, nor at the beginning of one. We are merely at a step along a very long road. Let’s find out where it leads us.


References: https://docs.google.com/document/d/10meNxWpeiyYprcQjOFIBZJlm3xEiWgarN5GVk1uhJOM/edit?usp=sharing

This was originally presented as a keynote at the Australian Technical Writers’ Association Conference in Melbourne on 23 October, 2015. No video recording exists of this talk.


linux.conf.au 2015 – Documentation Miniconf

Day 1 is drawing to a close at linux.conf.au 2015 and we’ve just wrapped the documentation miniconf. There was an interesting mix of talks today, and as the first documentation miniconf at an LCA, it’s given me some great ideas for growing the miniconf in future years.

As for me, after doing the Agile Documentation Lego talk at LCA in Perth in 2014, I felt I needed to give a good follow-up show, this time focusing on Every Page is Page One. To do this, I devised a game based on the children’s book “We’re Going on a Bear Hunt”, and used Play-Doh to make it a little more hands-on.



Linux.conf.au 2014 – Perth, WA

Possibly my favourite conference, linux.conf.au is coming to sunny Perth in January 2014. I’ll be returning to the Haecksen miniconf driver’s seat (check out haecksen.net for more info and the Call for Proposals), and also will be giving a talk myself, called There and Back Again: An Unexpected Journey in Agile Documentation. This is a talk I’ve given a few times already, including at OSDC 2013, so I’m really looking forward to sharing it with the linux.conf.au audience. That, and I’ve never been to Perth before, so yay!

linux.conf.au 2014 - Perth WA


Writing Effective Procedures

Writing procedures can be much more difficult than you’d think. We see procedures everywhere, so it’s natural to think that we should be able to write one without too much trouble. For that reason, I wanted to take you through some terrible real-life procedures. This is at least partly so we can all have a chuckle at other peoples’ mistakes, and feel a little bit better about ourselves. But it’s also because it’s a lot easier to find examples of bad procedures than good ones.

With that end in mind, I went through my junk drawer, and pulled out one or two manuals that I had lying around, and I’m going to use them as examples of what not to do as we go along.

The first thing you need to look at is whether you’re documenting a process or a procedure. It’s easy to use these terms interchangeably, but they actually mean different things. The main thing to remember is that a process can contain many procedures. A process gives an overview of tasks: you might need to install the package, configure the package, and then use the package. Overall, that’s a process. Each of those things, though, is a procedure. Procedures are instructions for doing something.

WriteProcedures

Here’s an example of a certain hand-held computer game. As you can see, the instructions for using the stylus are … step 5? Every procedure in this book is numbered. What’s happened here is each procedure in a process has been numbered, rather than each step in a procedure.

So the next thing to worry about is whether you should be using bullets or numbers. This one is a really simple test: is the order important? If the order is important, use numbers. If it’s not, use bullets. Oddly, though, we get this one wrong all the time …

WriteProcedures1

These ones should all be bullets. You don’t need to operate the product from a power source before you remove the unit from the packaging.

WriteProcedures3

Let’s try this one together: most of these should be numbered, and the text even tells us that. The ones on the left under “Cutting Tips” are bullets, because the order isn’t important: it’s a list of tips. What about at the top, under “Starting and Stopping the Trimmer”? This one probably doesn’t matter, but I’d be inclined to use numbers, mostly because you can’t stop the trimmer unless you’ve already started it.

WriteProcedures4

And just another one, because it’s so easy: the bullets in red are fine, but then we go to numbers in the purple, and then for a little variety we throw in some upper-case letters in green. Bullets would have been fine for all of these.

So the next thing to worry about is whether you’re describing a concept or a task. A concept is a description, it answers the question “What do I need to know?”. A task is an action, it answers the question “What do I need to do?”. As writers, it’s much easier for us to think about things rather than tasks. Users think about tasks, though, not things. Remember the old adage about not needing a drill, but a hole? That’s the essence of this point.

WriteProcedures5

This one just has so much wrong with it that it’s hard to know where to start. Considering we’re talking about concepts and tasks, though, let’s start with pulling those out. I’ve marked the concepts in blue, and the tasks in purple. To add insult to injury, we also have numbers where we should have bullets (in red), because this really is such a hodge-podge of information that there’s no way the order is important. Just to round things off, we also have a typo, and a vaguely insulting term about our children (in yellow).

But looking at that brings me nicely to the next point, which is about the level of detail. Make sure you don’t suddenly change depth in the middle of your procedure. If you find yourself doing this, you might actually need to do more than one procedure, or consider whether you’re actually writing a process. This one is best explained by example:

WriteProcedures6

This certainly isn’t the worst example I could have picked, but it’s interesting all the same: a few of the steps here go into detail about some extra function that your product may or may not have (in yellow), while others are as simple as “open the velcro strap” (in blue). We also have process/procedure issues here, with procedures being numbered in order, and steps getting lowercase letters (in red). This is further confused by the photo references typed in red, and by both angle brackets *and* square brackets being used. We also have a few stray bullets in one step. And having said all that, I’ll remind you that this is for a pair of boots. Admittedly, slightly more complicated boots than you’re wearing today, probably, but they’re just boots in the end. Also, I’m more than a little disturbed by the idea of “closure and locking of the foot” (in green).

Everyone knows what anthropomorphism is, right? Someone like to explain it? Yep, it’s applying human qualities to non-human things or animals. We do this a lot, especially to animals, but we also tend to do it to computers a lot.

I went online to find these ones, since I didn’t have any good examples in my stack of manuals. It seems to be something we do almost exclusively to computers rather than appliances, but we *really* do it a lot.

WriteProcedures7

I’ll give you a pro tip: computers don’t actually *think*. They might display things, they might take a while to process commands, but they definitely do not think.

Have to say, though, that going through manuals looking for anthropomorphism does make this one sound slightly creepier than the author intended …

WriteProcedures8

Which brings me to one of my favourite words, and it should be one of your favourites too: parallelism. When you’re writing fiction, you don’t want every paragraph or sentence to start with “Then”. When you’re writing procedures, though, it’s a good thing to have each step start with “click” or “type” or something like that. When you mix it up, it might sound more interesting, but it just becomes confusing. When faced with two statements that seem to be saying different things, users often think you want them to be doing something different. Every step should start with an action, and the same action should use the same verb. Use “click” for a mouse click, “type” for typing on the keyboard, “press” for a hardware button, etc.

WriteProcedures9

This manual almost gets it completely right. Three procedures here all need to start with the same three steps. But in one procedure, they write it using different terms. Is “tilting the motor head back” a different action to “raising the motor head”?

So, finally some takeaways:

The main elements of a procedure are:

  • Main heading (‘ing’ verb)
  • Concept
  • Before you begin
  • Warnings
  • Procedure sub-heading (infinitive ‘to’ verb)
  • Numbered steps
  • Reference info
  • Related topics

And the things you really need to remember when writing:

  • Mouse or keyboard, GUI or CLI? Stick to it!
  • Verb (or location) first
  • Active voice
  • Give instructions, not suggestions
  • Complete sentences
  • Plain English

I’ve also created a handout with these for you to print and hang up somewhere, which you can download here.


This article was originally given as a public tech talk at Red Hat Brisbane, in September 2012.


The Mechanical Turk and OSDC

OSDC 2011 kicked off today at the ANU. I tripped along to the OSIA (Open Source Industry Australia) miniconf, which Red Hat sponsored, and was mightily pleased to hear Stuart Gutherie reference one of my favourite pieces of historical weirdness: the mechanical turk. I couldn’t help but hold forth, and eventually gave an impromptu lightning talk on the subject.

The mechanical turk existed in the late 1700s, and was billed as an “automaton chess player”. In reality, it was a wooden box in which an accomplished chess player could sit and manipulate the chess board on top, while a wooden “turk” would appear to move the pieces mechanically. It was invented by Wolfgang von Kempelen, a Hungarian, who also invented a “speaking machine”, a speech synthesiser, which was actually quite important to the early development of phonetics as an area of study, but not half as famous as his great hoax, the mechanical turk.

What makes the mechanical turk so interesting is the lengths von Kempelen went to to persuade his audience that the turk was, indeed, an automaton. He would show the audience first one empty cabinet, then another, and then in the third cabinet would be a complicated-looking system of levers and pulleys. The cabinet was designed so that, throughout this proceeding, the human operator could easily slip from one side to the other on a sliding seat to hide from view.

The trick was not exposed until the mid-1820s, despite some very public appearances. The mechanical turk won chess games against such prominent figures as Napoleon Bonaparte and Benjamin Franklin before being found out, although no one has recorded the names of the chess players working the turk from underneath.

How this becomes relevant to the IT industry is equally as fascinating as the history of the machine. In the days of the turk, playing chess was something that no machine was capable of doing. It required a level of computation that only humans were capable of achieving. We have now invented computers that can play chess (and even appear on game shows) with ease, but there are still tasks that we can’t replace with a small shell script. Many of these tasks are frustratingly simple, but require a human brain to parse the data. Writing tasks often fit into this category: things like writing short captions for images, changing the style used in a document, or changing text from British English to American English spelling.

Within Red Hat, we spend a lot of our time working on errata text. After a program has been released it is fairly normal to find bugs. When the developers go through and fix a bunch of those bugs, they will send out an update termed an “errata release”. For each bug fixed, the technical writers need to document it, which means writing four sentences: one sentence each about the cause of the bug, the consequence of the bug, the fix that was applied, and the result of the fix. This is, naturally, quite tedious and boring. It’s natural for us to want to automate this process, but unfortunately it’s a job that requires a human brain.
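
To make that concrete, here is a purely invented example (not a real erratum) of what those four sentences might look like for a hypothetical crash bug:

    Cause: The scheduler freed the job queue before its worker threads had exited.
    Consequence: The service would occasionally crash during shutdown.
    Fix: The job queue is now freed only after all worker threads have terminated.
    Result: The service shuts down cleanly.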

So we did the next best thing: we created a mechanical turk. We call it “The Turkinator”, and it’s currently available for Fedora errata releases. Basically, you choose whether you want to write a Cause, a Consequence, a Fix, or a Result; you’re given a bug to read; you submit your sentence and that’s it. In this way, we automate the task of writing errata text by breaking the big task down into little tiny pieces and asking humans to perform the work of a shell script.

This model has an extra added bonus in the open source space, though, and that is what we like to call “micro-contributions”. Anyone who has contributed — or thought about contributing — to an open source project would understand how daunting that first contribution can be. By creating the possibility of micro-contributions, potential contributors can have their first (and second, and third …) patch completed in under a minute. Instant contributors for the project, and instant contributions for the development team.


linux.conf.au 2012 Call for presentations!

linux.conf.au is the biggest Linux and open source conference in the southern hemisphere, and rightfully so! I was a speaker at the conference last year in Brisbane (the video is on my videos page) and had a great time.

This year it’s being held in Ballarat, Victoria, and I must say I’m quite looking forward to finding out what a regional LCA is like. Anyway, the CFP is open, I’ll be submitting again, and I suggest you do too. More details on the LCA website.


Open Source Documentation in Four Easy Steps (and one slightly more difficult one)

At Red Hat, we have a content services department that is about sixty people strong. Even though the department is pretty big these days, back when I started with the company, we were still trying to work out the best way to run a successful enterprise-level documentation team. What that means is that I have been involved in some of the big discussions that we have had over time about what processes we needed to get in place in order to allow us to produce the massive amounts of documentation we required as our product offerings grew. As a department, we grew very big, very fast, and our processes needed to be flexible enough to accommodate the large number of new hires we had, and still have, coming in, but robust enough to be valuable and reliable. They also need to fit in well with the engineering practices in place in the company, and the tools that our development teams use and are familiar with. Of course the other really important factor was that we had to be open. We wanted to use completely open tools to produce our docs, but we also needed to be able to work with community teams, such as the Fedora group.

Like many documentation groups, at Red Hat we use a five-phase waterfall model to produce documentation. It’s based on the ever-popular JoAnn Hackos method: starting with planning, the content specification, then writing and editing, translation and production, and then a retrospective review. At Red Hat at the moment we’re at a place where our development teams are increasingly using Agile-style development models to produce software, and that means the pressure has been on us to develop in a less rigid way than the old waterfall model has been allowing us to do. Also, it’s no secret that the online world is changing, and people now expect to be able to interact with information at a much deeper level than ever before. They don’t want to be presented with static, hard-copy books any more. They want dynamic, interactive, usable, and above all useful documentation.

In order to be able to work out what kind of model we needed to use, we needed to go back to basics. All technology is about solving problems. Back when we were sitting around in caves, we had a problem: there was all this food running around outside, but we didn’t have a way to get it to stop running around, so we invented a club and solved the problem. Since then, we’ve used technology to solve all sorts of problems: horses were sometimes problematic to control, and they didn’t go very fast, so we invented cars. The hard wheels used on early cars weren’t very comfortable, and when they broke they really broke, so we invented pneumatic tyres. We also had problems being able to see in the dark so we invented electric light, being able to go to the toilet when it was raining or cold so we invented indoor plumbing, being able to send messages to people on the other side of the country so we invented the telephone, or on the other side of the world so we invented email.

Even these really technological things that we find ourselves documenting now, are all solutions to problems. One of the first things you need to be aware of when you’re writing documentation is what problem your users have. If you can’t describe the problem in one or two sentences, then you don’t understand it well enough, and you need to keep researching. Because if you keep going, all you’re going to end up with is hollow marketing spin. That’s how we end up with documentation that talks about “leveraging synergies”: words that sound great, but have no meaning.

So at Red Hat we came up with a fairly simple model, and that is that documentation needs to be able to be boiled down to three things:

  • Describing the problem
  • Solving the problem
  • Giving any additional information

Anyone who has done any work with DITA would understand that what I’m really talking about here is:

  • Concept
  • Task
  • Reference

 

So we’ve more or less said that DITA is where we need to go next. But we didn’t want to completely restructure the tools we were using. We have a fairly large people investment in our tools. The main tool we use is Publican, which was developed by an engineer in our Brisbane office. It uses Docbook XML and gives us a command line interface that we can use to create new blank books and apply corporate formatting, and it integrates into our internal packaging system so we can create all these different formats for our books – HTML, PDF, and ePUB on the website – and we can also create RPM packages and man pages to package in with software. In short, we combine Publican with SVN to give us a complete CMS.

We looked at DITA and DITA-OT, the DITA Open Toolkit. We realized two things: first of all, it would take a significant amount of work for us to bring an open DITA toolchain to the level of maturity and system integration of our existing Docbook toolchain. Secondly, we wouldn’t get the really significant benefits of topic-based authoring without a Component Content Management System – a CMS that manages content at a very granular level. Putting those two things together made it clear that if we changed to DITA all in one hit, it would take us significant time and energy just to get back to where we already were with a mature open source complete tool chain. So we decided to take an evolutionary, rather than revolutionary approach. It’s a much more open source approach: to re-purpose something that you already have, add a script here, a small command-line tool there, release early, release often, and let the user community guide the development, rather than trying to design and implement some grand system in a distant (and expensive) future.

What we needed was something that worked in a similar manner to DITA, gave us content re-use and all that good stuff, but that would work with our existing Docbook XML and Publican tools. The first thing we did was to start creating topics in Docbook, using Docbook syntax, and a command line tool that we called the “Topic Tool”. This was a really simple command line tool that allowed us to write XML snippets (or ‘topics’), and save them in SVN. We used an extensible template model, where the topic tool retrieves a Docbook template from a central repository to match the topic type you specify. That way we can create new topic types, and even modify the Docbook syntax of existing topic types, without changing the tool on users’ machines. That was an important decision, and a major part of the evolutionary “Release, Review, Refine” approach we wanted to use. Over time we did change the Docbook syntax of the basic topic types and create new topic types, validating the open source maxim “plan to throw the first one away”.

The basic workflow with the Topic Tool is like this: you tell the tool which topic type you want, and it downloads the template and prefills some information for you. You then edit the topic in a text editor, and import it back into the repository. It’s then possible to view your topic directly from the repo, which means anyone can see it and use it. We then include those snippets in any book we want using an xi:include, build the book as normal with Publican, and voila! we have a book with content reuse. So that was pretty awesome, and if you read any of our Virtualisation documentation you probably won’t be able to tell, but it’s all based on topics and maintained using the Topic Tool.
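The inclusion itself is standard XInclude, so a chapter in a Publican book ends up looking something like this sketch (the file names here are hypothetical):

    <chapter id="chap-Getting_Started">
      <title>Getting Started</title>
      <xi:include href="topics/concept-What_Is_Virtualization.xml"
                  xmlns:xi="http://www.w3.org/2001/XInclude" />
      <xi:include href="topics/task-Create_A_Virtual_Machine.xml"
                  xmlns:xi="http://www.w3.org/2001/XInclude" />
    </chapter>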

Of course, once we got to about 300 topics in the Topic Tool, we started to notice another problem: we were having trouble locating topics within the repository. This made us realise that what we needed was a better way to organise it all, so we wrapped a neat interface around the Topic Tool using OpenGrok. OpenGrok is designed for software engineers to search source code repositories, so it worked well for what we were trying to do. This is where the open source ecosystem came into its own all over again – there are a million off-the-shelf components and projects that you can choose from to build your own system. In the end we had a web-based search tool that was pretty basic, but did the job.

Content reuse is an obvious application of topic-based authoring, but by this stage we’d started to realise something even more exciting. Our definition of a topic is a unit of information with a single subject – that means it talks about one thing, and one thing only – and with a single information role: that is, it’s a concept, a task, or a reference. If we gave three topics – a concept, a task, and a reference – to a robot, along with a rule describing the “explain, answer, extra info” pattern and some kind of graphical template, that robot could assemble those topics into meaningful and useful output for an end user. What we wanted to do was to automate this process.

When humans assemble content into a book, they are making decisions. What aspects of the information are those decisions based on, and what rules are they consciously or unconsciously applying? That was what we wanted to create: a system that would allow us to store metadata about a topic, and use rules to automate assembly on a scale we just couldn’t manage by hand-coding.
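To make that concrete, imagine each topic carrying a small block of metadata. The element names below are invented purely for illustration, not the format we actually settled on:

    <topic id="task-Create_A_Virtual_Machine">
      <subject>virtual machine creation</subject>
      <role>task</role>
      <product>Red Hat Enterprise Virtualization</product>
      <audience>administrator</audience>
    </topic>

With metadata like that, a rule as simple as “for each subject, show the concept, then the task, then the reference” is enough for a machine to assemble a sensible page.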

So we developed a system that we call Skynet, which allows us to dynamically sort and locate topics. Select the topics you want, and Skynet lets you download the code that presents those topics in a consumable way. Of course, we started dreaming big after all this. We’ve started thinking about moving away from the documentation-as-a-book paradigm, and started considering “Documentation 2.0”. Why not include comment fields in our documentation that allow our reviewers – quality engineers, subject matter experts, editors, and the like – to make comments directly in the book rather than creating a separate list? And why not offer that functionality to our users as well? What if we had the equivalent of a Facebook ‘like’ button? Users could ‘like’ sections that they found useful, or leave comments saying “when I tried to follow these instructions, X happened” or “this seems to be missing a step”. If we break away from the book model, we can start to think about documentation as something our users interact with. We could have popular topics bubble up to the top of a list, or divide books by audience and present the information for each audience differently, giving readers a tab to click to see the information in various ways. We could implement something similar to Amazon’s “customers who bought this also bought” and present similar topics to our readers. Single-sourced content and content reuse, delivered through a system like Skynet, is what is going to allow us to move into these more innovative delivery methods.

The team working on the Skynet project have 110% discoverability as one of their goals; to quote the team leader, “the documentation finds you”. In other words, when you’re working on something and you get stuck, the documentation is there at a click or a glance, ready for you to interact with it. Of course, I’m sure some of you are saying “Help” right now, and yes, I agree with you. That is something else we’re talking about, and something that Skynet will enable us to do. Skynet pushes out XML now, and of course there’s plenty we can do with that as it is, but we can also extend it to push out all manner of things, including Mallard for Gnome Help.

So let’s take this conversation back to processes. All this dreaming is fantastic, but at some point we still have to actually do the hard work. Without a solid process, and a great set of standards, we’re not going to be able to get there. We’re doing a lot of internal testing, and we’re dipping our toes in the water with the Topic Tool and with Skynet. So far, we’ve been able to slip these into our existing standards, but that’s not going to last for long. With a paradigm shift as big as this, everything is going to have to change, and that includes the way we go about producing our documentation. We need to be organised, we need to make sure what we do is repeatable, and we need to maintain our high standards of quality and accuracy in our documentation. Most of all, though, we need to maintain and even increase our focus on the customer. These changes come about not because we got bored with doing things the old way, but because we believe it’s a better way to serve our audience. Never, ever forget who you’re writing for: it’s those poor sods out there with problems they’re trying to solve. Our goal is to give them the tools they need to solve them.

So, to recap:

One of the main things that we have learned is that process is king. If you don’t have a solid process for producing documentation, then you’re going to find yourself floundering at every point along the way. You’re going to end up with documentation that doesn’t cover what it needs to cover, isn’t accurate or well-written, and doesn’t get out on time. Without a plan for how you’re going to tackle the project from end to end, you’re not going to succeed. It’s that simple.

The second thing is about tools. You need to decide ahead of time what tools you’re going to need during the project, make sure they’re ready and up and running before you start, and make sure your writers know how to use them. It’s horrible to get halfway through writing and find out that one of your writers doesn’t understand how to use a semicolon. It’s even worse to get halfway through and realise that one of your writers doesn’t understand Docbook XML, or whatever authoring tool you’re using.

While we’re talking about tools, it’s important to keep things open everywhere you can. This can seem counter-intuitive to those of you who have worked in big companies, but being open doesn’t mean giving away business secrets, or exposing your competitive advantage. I think Red Hat, of all companies, really proves that openness can co-exist with secure business practices.

Part of keeping it open is about keeping it real. The people behind your processes, the people doing the actual work day in and day out, are real people, with real lives and real families. You need to be able to work with them, and ensure that the loss of one person isn’t going to make the whole project tumble. The other thing to remember is that your readers are real people as well: you need to make sure you’re giving them something useful, something they will get real value out of.

And finally, I want to remind you about reviews. We all understand the importance of reviewing our writing for correctness, and reviewing our projects to make sure we can learn from our mistakes. You need to extend reviews to the documentation process itself as well. Never be afraid to change things around: just because something worked last time doesn’t mean it’s the best way to do it next time.


This post was originally a talk given at the Open Help Conference in Cincinnati, Ohio, on 5 June 2011.

The slide deck is available on Slideshare: Open Source Documentation in Four Easy Steps (and one slightly more difficult one)

You can also download this article as a PDF file: FourEasySteps





Open Source Developer’s Conference 2011 – Call for Papers!

The Open Source Developer’s Conference (OSDC to its friends) is being held in Canberra in November this year, which is a little bit exciting, and they have just opened their call for papers.

This year, for the first time, they’re also asking for miniconf proposals. I would love to do a whole miniconf on open source documentation, but I’m not sure I have that kind of stamina. Of course, if you’re interested in helping me out, let me know!

I spoke at OSDC last year, when it was in Melbourne, and the footage is on my videos page. I thoroughly enjoyed the experience, so it will be interesting to see what kind of event Canberra can put on this year.




The Grass is Greener on the Open Side

Now, I know what you’re thinking. You’re thinking, “Oh my, here we go again. Another open source advocate banging on about freedom”. Well, yeah, I have to admit there’s at least a little bit of truth in that. Open source advocates do like to talk about morals, and they do like to say things about open source being ‘good for society’ and how it’s the ‘way of the future’. Most of all, though, open source advocates like to bang on a lot about ‘freedom’. But I’m not your average open source advocate. Every tech writer has their favourite program to use, and in many cases you don’t get a choice about which one that is. I’m not going to tell you that you shouldn’t be using those programs, and I’m not going to tell you to go to your boss and tell them that you’re not going to be using those freedom-hating platforms any more. It’s just not practical. I will tell you to use whatever works for you. If there is a program you use that ticks all your boxes – one that does absolutely everything you need it to do – then by all means go ahead and use it. All I want you to do is to be aware of the alternatives, and to understand the differences between them. That way, you’re making an informed choice about the software you use, and the way you interact with technology.

Freedom, and how it relates to beer

So, I said I wasn’t going to bang on about freedom, but I do need to mention it, if only to straighten out some of the terms I’ll be using. Freedom gets mentioned a lot when discussing open source software, and thanks to Wikileaks there are a lot of nonsense phrases doing the rounds right now like “information wants to be free”. I would like to explain what we actually mean when we talk about open source software being ‘free’. As I’m sure you’re all painfully aware, English can be a bit confusing sometimes, and we quite frequently come across words that have two or three different meanings, depending on the context they’re used in. The English word ‘free’ is a perfect example. We can steal two words from the Romance languages to describe the different ways we use ‘free’ in English. The first, a word I’m sure you all know and love, is ‘gratis’. The word ‘gratis’ means free of charge, or without cost. The other word is ‘libre’, which means the state of being free, or of having liberty. There’s a much easier way to illustrate this concept, though.

We all know that the best beer is the beer you don’t have to pay for. That is, it’s beer that is ‘gratis’, or free of charge. We can refer to software as being ‘free as in beer’ when we mean that it doesn’t cost any money to use. This is the type of ‘free’ that is being used when we discuss freeware, which you’ve probably all come across before. Freeware is free as in beer, but it has a catch: you still need to read and agree to the end user license agreement in order to use it, and you won’t be allowed to change the way the program works, or create any add-ons or extras, such as documentation or translations. In many cases, freeware can only be installed on personal networks, not business ones, and there are quite often restrictions on sharing the program too.

When we talk about the ‘libre’ sense of free, we say ‘free as in freedom’ or ‘free as in speech’. Essentially, when we talk about freedom with open source software, this is the freedom we mean. It’s the freedom to see the nuts and bolts of the software you’re using, the freedom to make changes and share them with your friends, the freedom to take the code and use it in your own project, and the freedom to suggest and submit changes to the code itself, or the stuff that wraps around the program, like the documentation. It is also possible to have software that is free as in freedom, but not free as in beer.

Have you got a licence for that thing?

So before we move on, there’s one other term I’d like to straighten out: pirates. My entire network at home was set up using free software – that’s free as in beer: it didn’t cost me a cent. However, I’m not a pirate (and that’s not just because I don’t have a wooden leg and a parrot). Every piece of software I use in my home network is open source and was obtained perfectly legally.

This is because the free-as-in-freedom and the free-as-in-beer are written into the license agreement for the software I use. You’re probably familiar with the End User License Agreement (EULA). That’s the bit you have to agree to when you install closed-source software. It’s usually a big long chunk of text, all written in legalese, and we all ignore it and hit “I agree” to continue. Open source software doesn’t use a EULA, but it does have a license. The license works in more or less the same way as a EULA, except instead of saying “You may not sell, license or distribute copies of the software” it says something more like “you can use this software free of charge, as long as you keep it that way”. In other words, if I took the software and locked it down – say, sold copies to my friends without passing on those same freedoms – I would be breaking the agreement, in the same way that giving away copies of Microsoft Office for free would be breaking the EULA. There are lots of different open source licenses, but they all work in much the same way, with only minor differences between them. The main one is the GNU General Public License, usually just referred to as the GPL. The main restriction in the GPL is that whatever you do with the code, a copy of the GPL needs to travel with it. And that’s really as scary as it gets. I could go on at length about licensing, but there’s probably another whole article in there, so let’s move along.

Decision time!

Say you’re in the market for a new bit of software. Often your purchase decision will come down to features, and if one option has the features you need and the other one doesn’t, then by all means go ahead and install the software that has all the bells and whistles you require. Provided you agree to the terms of the license or the EULA, and you pay whatever is requested, there’s no problem. But what about when the features are equal, and the differences come down to licenses and cost? Most of the big name software will cost you money in some form or another for the full version. After you’ve paid your money the product is yours though, right? Wrong. The software company can decide to change the software whenever they want. You would have all seen this happen on your Windows machines. You agreed to a EULA when you installed it, but Windows gets new updates every other day. Got any idea what’s in those updates? No, nor do most people. We need to trust that the big software companies are going to do right by us, and in most cases that’s not difficult to do. They’re big companies with millions of users, and if they tried anything nasty, we’d probably know about it. So that’s a risk most of us are perfectly willing to take.

So you’ve paid your money, you’ve got your software, and you’ve been happily working away with it for a while. Then someone sends you a file that you can’t open, because it was created with a newer version of the software. All of a sudden, you realise that your old version doesn’t have the features the new version has. So what do you do? You have to upgrade. Which costs more again. Once again, you click through the EULA, agree to it, pay your money, and you’re off again. You’re probably very familiar with this process.

Now, what happens if you don’t like something about the program, or if you discover a bug – something that doesn’t work properly? For the most part, you probably shrug it off. There’s not much you can do about it. What about the documentation? We’ve all come across laughably bad documentation; hopefully you weren’t the author of too much of it. What happens when the documentation for your piece of software doesn’t describe things properly? Or doesn’t include information you need? Have you ever thought “Gee, I could write that so much better”? If you’re multi-lingual, have you ever wished that a company would provide documentation in a different language? Have you ever wanted to write a guide covering a situation you deal with every day, one you think others would find useful too? You can’t do any of that with closed source software, because the EULA specifically forbids it. The only way to make those changes would be to go and get a job at Adobe or Microsoft. And while I’m sure we’d all love to land a position like that, unfortunately for most of us it’s not horribly likely.

So let’s look at the alternative. Most open source software will cost you nothing: it’s free as in beer as well as free as in freedom. You go to the website, pick the version you want, and you’re off. If you need help, you can check out the embedded help, or the official documentation on the site, just like with any other program. And if you don’t like either of those options, there are heaps of other places to get help: wikis, forums, chat channels, numerous blogs and websites. You could also go to an open-source manual website, like flossmanuals.net, and find out if someone else has written a full guide. And what about problems or bugs? The first thing to do is search the web; it’s possible that someone else has come across the issue and has already found a solution. If not, then get in contact with the developers – all open source projects will have a number of ways to do this – and let them know about it. They’ll probably ask you for some more details about the problem so that they can get it fixed, and then they’ll go right ahead and fix it for you.

The other fun part is if you think you have something to add. If you’re a programmer, and you’d like to write a new feature or fix a bug, you can do that. If you’re a writer, and you want to improve the documentation, or you want to do a translation, then you can do that too. In those cases, you will usually be welcomed with open arms and given everything you need to get started. And that’s because open source software is not developed by a group of paid engineers in an office block, but by people like you and me. It’s created by a community, and anyone who wants to be a part of that community and work on improving the software they’re using will always be welcome. You don’t have to contribute to a project, of course, if you don’t want to. You can also just download and use it, just as you would with any other software.

Arguments, naturally

Of course there are arguments both for and against open and closed-source software, and most of them, on both sides, are reasonably valid. One of the main ones is that closed source software is usually more stable than open source, because the vendor has a room full of developers who are paid to fix bugs and write features. This is interesting, because in some cases it’s true. But to be fair, you can’t compare the stability of Word against the stability of a project that was created by two guys in their garage for their three friends to use. If you want to compare them fairly, compare the stability of Microsoft Word to the stability of Open Office, which is an open source project run by Oracle, is supported by IBM amongst others, and has been around for over 10 years. Neither of these projects is likely to go away any time soon. The other side of the coin is the small software development shops. Anyone can produce software and sell it, and if those little shops go bust you end up with an unsupported product. That doesn’t change whether it’s open or closed source. The difference is that with open source, because the code is available to anyone who wants to look at it, there’s at least a chance that someone at some point will pick up the code and have another bash at it. That’s never going to happen with closed source projects, simply because the licensing doesn’t allow it.

I’m still not sold

The good news is that you don’t need to be totally sold on either open or closed source software. You don’t have to go totally one way or the other. Because open source software is free to use, and easy to get, you can go ahead and download any number of programs, just to see if you like them. And if you feel like making a contribution to the program, or joining the community around your favourite program, then go ahead and do that too. The great part is that you can install and use open source software anywhere you want, and use closed source software in exactly the same way. They will happily co-exist on the same system.

It’s not just about the freedom

I said right at the beginning that a lot of open source advocates bang on a lot about freedom, and I guess that in a lot of ways they’re right. Freedom really is a big part of open source, and explains a lot of why it’s awesome. But I think there’s something more to it. It’s not just about the freedom; it has a lot more to do with the community. Whenever you get a group of people together with a common goal in mind, they can achieve just about anything. When the goal of that group is freedom, then I think the world can really only become a better place because of it.

—————————————————–

This article was originally a web seminar for the Society of Technical Communicators. Since then, I have also presented it for the Canberra Society of Editors. It was longer as a speech, but had fewer funny bits.

—————————————————–

A shortened version of this article was published in Words: A Quarterly Bulletin for Technical Writers and Communicators. Volume 3, Issue 2: May 2011

—————————————————–
