On Writing, Tech, and Other Loquacities

The collected works of Lana Brindley: writer, speaker, blogger

Facebook, Dynamite, Uber, Bombs, and You.


This is the transcript of a talk I gave at WriteTheDocs Australia, in Melbourne, on 15 November 2018. A video is also available; see the Videos page for a link.


This little story starts with the American son of German migrants, Herman Hollerith. He was born in 1860, got a degree in engineering, and then went to work at the US Census Bureau in 1879. At that time, the census was just a headcount; they didn’t collect any real data on the population, simply because they didn’t have the ability to process that information. As it was, they only ran a census every ten years, and it took them several years to process all the information. The big concern of the department was that, before too long, the processing would take longer than ten years, meaning the next census would start before the last one was complete.

These days, we call that overwhelming technical debt.

So young Master Hollerith, being a bit of a bright spark, said “there ought to be a machine for doing the purely mechanical work of tabulating” and set out to build one. By 1884, he had a prototype, and the US Census Bureau used the machines for the 1890 census.

Herman Hollerith, Bright Spark.

The data was first encoded on a punch card with a pantograph, then an operator would load the card into the tabulator. Pins would drop down onto the card, and where there was a hole the pin would pass through and land in a cup of mercury, which completed an electrical circuit and advanced one of the dials on the machine. Each dial on the machine represented one characteristic: age, gender, state, and so on. The operator would then record the data, remove the card, and put it into a sorter drawer, before resetting the tabulator ready for the next one. According to the US Census history site, operators could process 80 cards a minute this way.
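For the programmers in the room, here’s roughly what that process looks like in modern terms: a minimal Python sketch of the tabulation logic. The traits and card values here are invented for illustration; the real 1890 card format was considerably more involved.

```python
from collections import Counter

# Each "card" is a dict of punched traits. On a real card these were
# hole positions; a pin falling through a hole into mercury closed a
# circuit and advanced the matching dial.
cards = [
    {"gender": "F", "age_band": "20-29", "state": "NY"},
    {"gender": "M", "age_band": "30-39", "state": "OH"},
    {"gender": "F", "age_band": "20-29", "state": "NY"},
]

# One Counter per trait plays the role of the tabulator's dials.
dials = {trait: Counter() for trait in ("gender", "age_band", "state")}

for card in cards:                    # operator loads a card
    for trait, value in card.items():
        dials[trait][value] += 1      # circuit closes, dial advances

# At the end of the run, the operator records the dial readings.
for trait, counter in dials.items():
    print(trait, dict(counter))
```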

Because Hollerith was a bit of a smarty pants, he didn’t sell the machines to the government, he leased them, and after a bit he managed to lease them to all sorts of government departments, all around the world, which kept the money flowing in between censuses. To handle this, he created a company called, creatively, the Hollerith Electric Tabulating System.

Over the next few years, Hollerith’s company joined up with other companies that were making “machines”, like employee punch clocks and weighing systems, and in 1911 they merged into a company called the Computing-Tabulating-Recording Company (CTR). That company became International Business Machines in 1924, and the constituent companies were finally all merged in 1933.

Anyway, there is a whole other talk on IBM corporate history, but the short version is that Thomas J Watson took charge of the company in a ridiculously dodgy deal in about 1914 WHILE HE WAS APPEALING A JAIL SENTENCE, and remained in charge until he died in 1956. Because the guy was so dodgy, he didn’t like writing things down. I mention that because it becomes important later in the story.

IBM Dehomag Hollerith D11 Tabulator

These are some early IBM machines made by the German IBM subsidiary, Dehomag: the Dehomag Hollerith D11 tabulator and sorter. They were originally used primarily in banks to process account transactions, calculate interest, and, of most interest to the government of the day, cross-reference bank account numbers with census data.

This was in about 1935.

IBM Dehomag Hollerith D11 Sorter

You know when you see pictures of people who have been in the concentration camps, and they have a number tattooed on their arm? Those numbers connected the human being to the punch card for a Hollerith machine. So here’s a fact: In Holland, they had extensive Hollerith machine infrastructure in the years before the war, and 73% of Dutch Jews were killed by the end of it. In France, they had very little Hollerith infrastructure, and what they had was disorganised. Only 25% of French Jews were killed by the end of the war. In short, without this technology, the Holocaust would still have happened, but it wouldn’t have been so well organised, so well planned, and so well executed.

Of course, I’m sure you all know about the Nuremberg Trials that happened after the war ended. We remember them for the Nazis who were sentenced for war crimes in the main trial, but there were 12 more trials between 1946 and 1949, covering 177 people: physicians, judges, military personnel, civil servants, and also industrialists.

Gustav Krupp and his friend Adolf Hitler

People like Gustav Krupp, whose company provided Panzer tanks to the Nazis. Interestingly, it didn’t slow that company down much: you probably know them as ThyssenKrupp now. They make elevators. Anyway, you would expect that senior officials at IBM would have ended up before the Nuremberg trials too, but joke’s on you!

The Nuremberg Trials were unique in that they had to be conducted in four languages, more or less simultaneously, which had never been done before. Guess who provided the computing power for that? Got it in one! Interestingly, for completely unrelated reasons, I was in Nuremberg a couple of months ago, and took the opportunity to visit the Documentation Center and Nazi Party Rally Grounds. It was a fascinating tour but, much to my dismay, IBM was not mentioned once.

Hopefully what you’ve picked up so far is that Hollerith was a brilliant young man who solved a very difficult problem. It’s not his fault that the technology he developed was used by Hitler to murder people. It’s also not his fault that not only was no one held responsible for that, but also that we seem to have collectively forgotten about it. The point I’d like to make, here, is that throughout history technology has been used in some pretty horrible ways. So let’s look at some more historical “oh no” moments …

Alfred Nobel, Repentant “Merchant of Death”

Alfred Nobel, you might recognise his name. He was the guy who invented dynamite, but he also owned a large weapons manufacturing plant. Anyway, Alfred’s brother died in Cannes, and a French newspaper got confused and wrote an obituary for Alfred instead. It was pretty nasty stuff, titled “The merchant of death is dead”, and containing this wonderful line: “Dr. Alfred Nobel … became rich by finding ways to kill more people faster than ever before”. So Alfred read this, had what is technically known as an oh no moment, and completely secretly went on to establish the Nobel Prize with his personal fortune. Hilariously, he decided not to tell anyone about it, so they all got a nice surprise after he died for real and they read his will.

Otto Hahn got the Nobel Prize in Chemistry in 1944 for working out nuclear fission. The prize really should have gone to him along with his two colleagues, Lise Meitner and Fritz Strassmann, but Meitner had had to leave Germany in a bit of a hurry a few years earlier. Later on, of course, that technology was used to bomb Hiroshima and Nagasaki, and the entire world had an oh no moment. Now we have ethics committees in chemistry.

Eugenics was the practice of sterilising portions of the population in order to stop them breeding. Hitler was obviously a big fan, but even before WWII it was a bit of a big deal in the US: 33 US states had sterilisation programs targeting mentally ill people, disabled people, alcoholics, people living in poverty, and people deemed to be promiscuous. Some reports say around 65,000 Americans were legally sterilised during the first half of the 20th century. That was a bit of an oh no moment for biology. The World Health Organisation (WHO) was created in 1948 as the first specialised agency of the UN. Its mission was multi-faceted, but I’ll draw your attention to this bit: “to address the underlying social and economic determinants of health through policies and programmes that enhance health equity and integrate pro-poor, gender-responsive, and human rights-based approaches”.

Basically: don’t let your government kill you and tell you it’s good for your health.

In a similar vein, some great drug failures like thalidomide made better oversight of medicines seem like a good idea. Thalidomide was sold over the counter from 1957, and was recommended to pregnant women for relieving the symptoms of morning sickness; unfortunately, it caused horrific birth defects. Only 40% of children born with defects survived, and those who did were left with missing limbs. Oh no. Court cases were brought against the manufacturer all over the world, including a class action in Australia as recently as 2012. The US Food and Drug Administration was created in 1927, but the thalidomide cases significantly strengthened its powers, and a whole bunch of other laws were introduced around the world to address drug testing.

We didn’t let too many bridges collapse before we decided that civil engineering could use some regulation. In America, the National Society of Professional Engineers was established in 1932, which adopted a formal code of ethics in 1964.

And it only took one plane flying into a building to establish that maybe letting people take box cutters on planes was less than sensible.

Hopefully you can see where I’m going here.

People very rarely come up with new ideas, new inventions, amazing new discoveries, with the intention of killing or hurting people. It’s the unintended consequences that cause the problems. But it’s also those unintended consequences – the “oh no” moments – that lead to improvements in the way we handle things. We end up with better laws, better regulations, and our society improves as a result.

Ethics committees and government oversight departments and legal rulings don’t stop all bad things from happening. But they can certainly help prevent some of them, and at least they give us some kind of recourse if the worst happens.

Now, I want you to consider these two cases: Volkswagen was caught out having written software that allowed their cars to cheat emissions tests.

Uber also developed software (called ‘Greyball’) which allowed them to evade law enforcement officials trying to crack down on ride-sharing.

The difference is that Volkswagen software engineers went to jail, and Uber software engineers didn’t. Why? Because one is a car company, and one is a software company.

Startups especially like to use the phrase “move fast and break things”. In IT, we talk about “innovation” a lot, and “thinking outside the box”. I’m sure we all know a project manager who has encouraged us to “challenge paradigms” or “think different”. This is all great, and I’m not suggesting we should stop building new things, or stop thinking up interesting ways to tackle problems! But what happens when we step back from what we’ve created, and go … oh. No.

Really, in my opinion, IT should have had its oh no moment when IBM provided the computing power that made the Holocaust possible. But not only did that go unpunished, we’ve largely forgotten about it over the intervening 80 years or so, so there’s never really been a public reckoning. Now that we’ve looked at some examples of oh no moments leading to real change, let’s look at some aspects of IT innovation that haven’t …

The development of the world wide web in the 90s was obviously very optimistic, and I’m not sure we can blame anyone for failing to see 4chan coming. But we can probably blame some of the social media sites for failing to see the dens of iniquity they would become, and we can certainly blame most of them for failing to do anything about it once it happened. Twitter’s response to the incredible amount of white supremacism on its platform has been at best ineffective and, at worst, non-existent.

I find self-driving cars a particularly thorny problem. On the one hand, there are huge benefits to the technology: consider not only the environmental implications and the convenience it can add to our lives, but also the mobility and independence it would give people with disabilities. This technology could add so much to our lives and to our society. But we haven’t fully thought through the impacts yet. Most of the accidents related to self-driving features so far have happened because humans became too reliant on the tech, doing stupid stuff like watching movies, reading, or napping, instead of acting as a last-resort safeguard. What happens when we rely on the tech so much that we stop looking before we cross the road, because we “just know” the cars will stop for us? I have self-driving features in my car, and it makes stupid mistakes ALL THE TIME. The tech is not advanced enough for us to rely on it; driving is, after all, a life or death proposition every time you get on the road. But I also don’t think it should be hidden away until it’s perfect, because how else do you learn what “perfect” is? This is a tricky one.

I looked into the killer robots thing a few months ago, and that’s another tricky one, because the technology being developed for fully autonomous weapons systems is also used for things like the afore-mentioned self-driving cars, aeroplane technology, medicine and surgery applications, and even peace-keeping operations, like dropping aid packages into war zones. In one case in South Korea, a university went into partnership with an arms manufacturer to develop autonomous weapons, and what ended up stopping them was a bunch of academics signing an open letter (initiated by an Australian academic, incidentally) threatening to boycott the university involved. The South Korean government wasn’t intending to step in, and the UN didn’t step in. Without an ethics organisation, what other recourse is there to stop things? As it is, we don’t really know that the research has been stopped. The chancellor of the university wrote a lovely letter saying that it has, but the weapons organisation funding it could be quite happily moving along, and I wouldn’t be at all surprised if they had waved large amounts of money at academics to bring them into the project. That’s all speculation, of course, but that’s really my point: there’s no oversight, no regulations, no repercussions, but there is a hell of a lot of money.

Here’s one more for you, which was reported in the Economist a couple of months ago and has been picking up pace in the mainstream media recently. Xinjiang is a region in north-west China, largely occupied by the Uighur people; the Uighur are the largest Muslim group in China. In Hotan, a city in Xinjiang, there is a police station every 300m or so. If you don’t think that already sounds like a police state, wait for this bit: every citizen has an identity card, and at checkpoints around the province, police will scan people’s cards, take photographs and fingerprints, and perform an iris scan, and people are required to unlock and hand over any smartphones, which are put into a cradle and the data downloaded to be analysed later on. That’s not just for people they’re suspicious of, that’s for everybody. There are four or five checkpoints every kilometre, with citizens moving through them many times a day. The roads are lined with poles holding cameras which watch pedestrians, but also perform pattern matching between the number plates on cars and the faces of the people driving them.

And if you’re Uighur and you have committed even a relatively minor infraction, you get sent to one of hundreds of “re-education” camps in the province, which don’t officially exist. No one really knows how many people are locked up, but the estimate in the Economist article was 140,000 people in Hotan alone. The ABC, in recent reporting, says 2 million. Locking up minority groups for no reason is by no means a new thing, but the way technology is being put to use in this case certainly is. I bet the people who invented facial recognition have had several oh no moments thanks, at least in part, to China.

In that same vein, you might also have heard of Palantir. They’re the company that makes Minority Report-style predictions about crime in an area. The technology was originally developed for the Pentagon to identify terrorists in Iraq, but it has now been brought home to downtown Los Angeles, where it’s being used to lock up brown people *before* they commit a crime. So it’s not just China.

So, while the increasing mainstream media awareness of personal data, and the nefarious purposes it can be put to, has been heartening recently, I’m not sure that Cambridge Analytica and Facebook are enough to be considered an oh no moment that will actually change anything in our industry. But I think it might be starting.

So, what does all this have to do with documentation?

You might be aware that, after quality assurance, the group that finds the most bugs in software is the documentation team. We are often put in the position of poking at products we don’t yet fully understand, in order to work out how to use them. It is the writer’s job to come at products like a clueless user: poke things, bend them, use them in ways they weren’t designed for.

I say we should expand that thinking just a tiny bit: how could I use this product to do harm? How could I use it to discover things about people I really shouldn’t be discovering? Can I use this social product to stalk my ex? What about someone who said something nasty about me online: can they find out anything important about me? Can I use this platform, this API, this plugin, this app, this feature to do something that, as reasonable, moral human beings, we feel a little uncomfortable about?

It’s also important to think about using it in conjunction with other tech. Recognising someone’s face is one thing, but when you combine that with GPS locations, government databases, and purchase history, you have a completely different problem. That’s also the answer to why I stopped using supermarket loyalty cards.

Only last week I received an email confirming my booking at a hotel I’d never heard of. Curious, I clicked the link to ‘edit my booking’ and discovered that, while I couldn’t see the whole credit card number, I could have adjusted the dates of the booking, upgraded the room, or purchased additional services. All because someone mistyped their email address in a form.

If I can give you one piece of advice, it’s this: don’t read your marketing department’s hype. Or if you do, don’t believe it. Nuclear fission has saved millions of lives through cancer treatment, provided light and power to billions, and made surgery and even vegetables safe through irradiation. That’s what the marketing department wants you to know. But it also made nuclear war a real threat, and the marketing department is unlikely to mention that.

So, question things. Raise bugs. 

Talk about it with your development team, and your manager. Until software engineering has a real, honest-to-god oh no moment, and an ethics board with actual legal teeth is born, you, the tech writer, are standing on the front line between technology that helps and technology that can hurt.
