API the Docs London, 20 June 2017

Note: I first wrote this post largely on the train back from London after the conference (which perhaps explains why it’s not the best blog post ever) – I’ve now added an update at the end with links to videos of the talks.

This one-day conference was hosted and sponsored by Pronovix, and held at the Trampery in the Old St (“Silicon Roundabout”) area of Shoreditch, London (the home of the hipster – unless the trendy people have all moved on to somewhere like Dalston, which is entirely possible… the venue did have 1970s-style furnishing though – you’d expect nothing less, of course).

I took the day off and got the earliest affordable train down to London from Sheffield, using up the rail vouchers I had been sent as an apology for the many delays on the Sheffield to Manchester line last year, when I used to commute to Manchester by train (I’m currently commuting up the M1 by car to Leeds most days, so it made a nice change to be back on a train).

The day was divided into several talks, and between them short breaks to mingle and chat. There was also an “unconference” session where people broke into separate discussion groups to talk about a set of suggested topics of interest (I went to the API Specification discussion, hoping to hear something useful about Swagger/OpenAPI).

 

The talks

The list of talks was, I think, intended to strike a balance between more technical discussions of particular aspects of API documentation, and more general talks about best practices and good approaches to organising documentation (which would apply whatever tool-chain or approach was being taken).

I missed the first two due to getting the 07.46 from Sheffield and arriving at the venue at 10.45, rather than taking out a second mortgage for a train at 06.30 or a hotel the night before, but the remaining talks were as follows:

  • Rosalie Marshall at GDS (Government Digital Service, the UK government agency attempting to bring public sector IT into the 21st century) talked about their experience over the last couple of years with building both an API documentation team and a tool-chain, and the challenges of getting buy-in across their (obviously quite large) organisation. They seem to have done a good job of getting a developer portal up and running, which is being rolled out gradually across the organisation (rather than being imposed from above). They began by seeing what (if anything) was already being used across the various departments, and also put effort into finding out what departments thought their problems were, what they would like to achieve, and so on. It looks like they have settled on something built using markdown in git, following a “docs as code” approach. All of this sounds excellent as far as I am concerned!
  • Daniel Beck – this talk was about what to communicate to customers when deprecating an API. This was all sensible advice, but aimed as much at marketing or product owners as at technical writers – essentially based around how to give customers bad news and help them adapt, and as much to do with killing off a service as with the more typical (to me) scenario of making sure the product road map is communicated to customers, and that deprecation of functionality is communicated well in advance via docs and release notes.
  • Jennifer Riggins – “How can API docs be agile?”. This was a really nice talk (I think previously given at an API Days conference). Some good insights into whether docs should be in the definition of done (only if you get the help you need to deliver – don’t be the one that makes a sprint fail its goals too often if you want to stay popular). Lots of sensible advice about docs backlogs, prioritising (RICE), getting help from devs and acting as “curator”, “pair writing” with a dev, automating as much as possible, remembering principles like DRY, and so on. A nice talk, and much of the info can be found on http://apidocswriter.com.
  • Daniele Procida – “What no-one tells you about documentation”. Daniele (who is a man, from Switzerland) gave a nice talk arguing that all documentation can be split into four separate groups, and that the key to success is being clear about what is required for each group. This makes things easier for both the reader and the technical writer. At first this seemed obvious and not especially interesting, but the more he talked the more it seemed like very good common sense distilled to its essence. Often we naturally break things up the way he suggests anyway, but he argues that problems occur when we don’t, so it’s a sort of design pattern that, once you are aware of it, becomes very powerful when you remember to apply it. There is more detail of his argument at www.divio.com/blog/documentation, which is well worth a read. In short, he identifies that all documentation should be one of:
    1. Tutorials
    2. How-to
    3. Reference
    4. “Discussions” (the more narrative explanation and “why” stuff – perhaps a better term for this is needed?)
    He explained (quite convincingly at the time at least!) that keeping the distinction clear while writing is a useful and powerful tool. A clear distinction helps the reader too – and you often see the same distinction used; it’s just that calling it out as a design pattern helps us see what we are doing, and where not to go wrong. Also, much of the advice he offers about what to include in each type of documentation, and how to go about writing it, is very good advice in my opinion. He is, by the way, a huge advocate of Sphinx and reStructuredText.
  • Andrew Johnston of Spotify (from Canada) talked about producing documentation for GraphQL APIs. GraphQL could well be the next big thing. He talked about how they’ve been using it, how he has tried to ensure decent docs coverage, the challenges involved, and so on. GraphQL is a bit different to REST / SOAP and could change how things are done quite a bit. It involves documenting each edge and node of the graph that the web queries traverse, using some tooling (he went through the tooling from Facebook). A good heads-up if this ever catches on (which it may well do, quite soon, of course).
  • Ben Hall – “the art of documentation, and readme.md”. This covered similar ground to Daniele’s talk, in a different way. Good, useful, non-techie advice (from a non-writer, about what devs expect). Ben was keen to stress the importance, when using GitHub, of a very good GitHub readme.md. This is good advice, and combined with Daniele’s talk, you should know what goes in the readme! (It’s important to have the “why” stuff there – readers need to know what the project is for.) He’s also involved with an online teaching resource for developers called Katacoda. Oh, and he gave a shout-out to using Grammarly as a linter. He also had nice examples of good API docs sites he had come across – comparing, for example, Clearbit (good) with Docker (bad – you’d never know what Docker actually was from the docs, which assume you know all about it already). That said, Clearbit (an API that aggregates info about a company or individual) looks kind of scary. Stay off the internet, folks – the marketeers know too much about you already.

Hopefully the Pronovix site will later include details (possibly videos) of the talks, including the two I missed.

 

The “unconference” sessions

Before lunch we had an “unconference” session where people broke out into separate discussion groups on topics of interest. I was quite interested in the static site generator session, but instead chose to go to the one about API specification tools / markdown extensions. More details below, but five of us had an interesting chat about the use of Swagger (one other person was, like me, currently generating Swagger output from source code comments via Swashbuckle).

We discussed the lack of JSON object documentation, but the other people using Swagger were editing the Swagger spec (YAML/JSON) directly, which we weren’t (we were both working in source code comments instead), so we didn’t resolve that.
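Incidentally, once you have a generated spec loaded as data, this sort of gap is easy to spot mechanically. Here’s a minimal Python sketch of the idea, run against a tiny hand-written Swagger 2.0 fragment (the `Widget` definition is purely hypothetical, not any real spec we discussed):

```python
# Sketch: flag schema properties in a Swagger 2.0 spec that have no
# "description" -- i.e. the undocumented JSON object fields.

def undocumented_properties(spec):
    """Return 'Definition.property' names whose description is missing or empty."""
    missing = []
    for name, schema in spec.get("definitions", {}).items():
        for prop, details in schema.get("properties", {}).items():
            if not details.get("description", "").strip():
                missing.append(f"{name}.{prop}")
    return missing

# A tiny, purely hypothetical spec fragment for illustration:
spec = {
    "definitions": {
        "Widget": {
            "properties": {
                "id": {"type": "integer", "description": "Unique widget id."},
                "payload": {"type": "object"},  # no description -- gets flagged
            }
        }
    }
}

print(undocumented_properties(spec))  # -> ['Widget.payload']
```

In a real setup you would load the YAML or JSON spec from disk first; a check like this could run in CI so undocumented payload fields fail the build rather than quietly shipping.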

Somebody did mention that they took the Swagger specification, read it into Asciidoctor, and could therefore build out a single site with the more general “how to” documentation (in .adoc) alongside the API reference material. I haven’t yet attempted this myself (the Swagger specification we are currently generating where I work is pretty large, due to the sheer number of APIs and the size of the JSON payloads, which are unwieldy to say the least at present) but it’s a useful tip.

Somebody from Nexmo had also done nice things to pull Swagger into their own system, though they’d done quite a lot more, which they were happy (keen, even) to demo. They’ve built a nice Ruby-based system with markdown extensions that build out nice docs. Apparently they have open-sourced this work, so it could well be worth checking out (I will post a link when I find it).

On the other hand, the other people in this particular session were writing Swagger specs by hand as a way to document APIs which were already defined/built elsewhere – though in one case they did generate client SDK code from the Swagger spec. Interestingly, one person was not aware that, in addition to this, you can use the Swagger spec to generate the server-side code that defines the endpoints (nor had they heard of generating the Swagger spec from the server code – I hadn’t either, until recently).

So out of five users of Swagger present, there were at least three different use-cases, and we could identify a further use-case which none of us presently took advantage of (generating server code from the Swagger spec).

This was all very interesting, though I admit I was a little disappointed not to come away with any insight into how to address my current concerns with Swagger, namely the lack of suitable description of payload items when generating Swagger output from C# comments. I’m suffering from a slight trough of disillusionment, it seems (I’m in good company as Tom Johnson had similar concerns according to his blog).

I didn’t get the chance to hear much of what was discussed at the other sessions, but hopefully some notes will appear on Pronovix’s website/blog at some stage (I may update this blog with links or notes when I can). One suggestion for future events like this would be to have someone write up some quick notes on a whiteboard or poster-sized sheet, which could then be available for people to look at and discuss in the breaks between sessions.

 

Summary

The day was well organised by the sponsors, and it was definitely worth the cost of my train ticket (the event itself was free). Apart from the useful talks, the main thing that I found useful was the chance to meet and talk to other people, whether developers or full-time technical writers, who are involved in documenting APIs – most other meetups for technical writers (with the exception of the Write the Docs groups) seem to be more biased toward documenting GUIs with Word and Framemaker, rather than docs-as-code and API documentation.

Oh, and there were free laptop stickers!


At last, I can hold my head up high in London’s Shoreditch / Manchester’s Northern Quarter…

 

Update (22 July)

Since writing this post a month or so ago, slides and videos of the presentations have been uploaded to the Pronovix site here:

https://pronovix.com/api-docs-london-2017

I’ve now had the chance to watch the videos for the two talks that I missed:

  • Jessica Parsons – “The best of both worlds: a Git-based CMS for static sites”. Jessica is from Netlify, a static site hosting service (which I’ve used before: they provide, or used to provide, free cloud hosting for static web sites, which I used to temporarily host mkdocs-generated API documentation at my previous contract, as a proof of concept before I got agreement to set up our own webspace). Jessica’s talk goes through the advantages of using static site generators to build simple websites (ideal for documentation) rather than using CMS (database-driven) systems or Help Authoring Tools. This fits with the “docs-as-code” approach, with version control being done by Git or Subversion rather than by a database back end, and of course hosting with a service like Netlify allows build hooks for GitHub, and so on. Jessica gives a quick overview of several static site generators (Jekyll, Hugo, Sphinx, mkdocs, GitBook, Slate) as well as explaining the general concept. I was especially interested to hear about Slate – I hadn’t realised that it produced output inspired by the Stripe API documentation (which is something everyone wants to emulate), so that’s something I will be checking out. Slides can be found here: http://slides.com/verythorough/best-of-both#/
  • Jaroslaw Machaň – “Opening a door to a sleeping castle”. Jaroslaw is from Ceska Bank in the Czech Republic, and talked about the API platform they built to support internet banking. Not surprisingly, a good developer portal was vital.

These were both useful and interesting talks, so it’s a shame I missed them because of trains etc. – many thanks to the people at Pronovix for uploading the videos and slides.

 

 

 

My favourite (technical) technical writing resources

There are a number of really useful blogs and sites around these days which will be of interest to the more technical type of technical writer – so I thought I would create and maintain my own list of links to my favourites.

The list is not meant to be exhaustive, and doesn’t include many standard tomes (Chicago Manual of Style, etc.) – it’s really just a list of things I really like and which relate to how I see the field developing. I’ll update this page as I come across other sites I like.

  • I’d rather be writing  Tom Johnson’s excellent blog contains many articles and short courses, as well as many podcasts (I’ve found that listening to the podcasts while commuting is a good way to keep up with technical writing trends). Tom’s based in Silicon Valley and has worked for several of the big players. His podcasts include recordings of some Bay Area meet-ups, which provide interesting insights.
  • hack.write()  Another interesting blog, this one tends to focus on developer documentation, using “documentation as code” approaches. It’s currently quite a new blog with only a few pages, but seems to be updated very regularly (at least for now).
  • Write the Docs (WTD)  The Write the Docs group is a really good way to keep up with what’s going on. The WTD slack channel is full of some very knowledgeable folk, with a lot of useful discussion. There are conferences and meetups. I find WTD to be more interesting than other groups like STC (and it’s free and open to anyone). Again, there is a slight slant with WTD toward developer docs, API docs, docs-as-code, static site generators (especially Sphinx and to some extent Jekyll), open source, use of GitHub, and so on.
  • Modern Technical Writing  I think everyone involved in software documentation (especially developer facing documentation) should read this short book. It’s only a couple of quid for the e-book on Amazon. I wrote a review here.
  • Elegant Documentation Principles  Nice GitHub project (no code, just the readme file containing the text you see when you visit the repo) with some principles for good tech writing. I like how the author has taken some principles from software development and attempted to come up with similar ones for good technical writing.
  • Every Page is Page One  Mark Baker’s longstanding blog (named after his book) is strong on topic based authoring / single sourcing, but also other issues, including lightweight markup.
  • StaticGen  This site is a nice resource (provided by Netlify, a hosting service) giving a fairly comprehensive list of the static site generator tools that are available. It’s built from a public GitHub project so you can suggest changes if you know of something missing. There’s a link to a matching list of headless-CMS systems too.
  • Beautiful Docs  This is another GitHub readme, with a list of nice examples of great documentation, a list of useful tools, and a list of some other resources about tech writing. The project owner accepts contributions, so you can fork the project and send a pull request with your suggestions for any other good examples.
  • On docs, learning to code, and life  Jennifer Rondeau’s blog has a few nice entries about API docs writing and related issues. I especially like the rant about why technical writers should stop talking about “Subject Matter Experts” and become part of the team, and her suggestions on learning API technology (don’t rely on reading about how to document APIs, instead learn about APIs). Other topics include problems with Swagger, why API docs are important, and so on. Worth a read.
  • AgileDocumentation  This blog by Rob Woodgate, a UK-based technical writer, has some good thoughts on how tech writing (and tech writers) fit in with the Agile process. For example, he considers the question of whether documentation should be in the definition of done (his answer: it depends). Much of this blog is relevant to anyone writing software documentation in an organisation using an Agile development process, whether or not the tech writer is a “technical technical writer” producing developer-facing docs. However, it seems to me that one of the features of Agile (tech writers are part of the team, not separate) is to an extent a reflection of (or perhaps a driver of) the “documentation as code” movement (tech writers using the same tools and processes as software developers, because code and docs are both software).
  • Documentation as code  Because this term has become quite trendy just recently, I thought I’d look up where it came from, and I came across this slideshow by Evan Goer from 2013 (Evan’s a programmer with an interest in documentation). The term seems to have been picked up by others (though I am prepared to believe that he got it from someone else originally). Anyway, what it means, simply, is for tech writers to use the tools and processes that software developers already have, rather than re-invent wheels. Or, as Evan puts it: “The radical notion that documentation is part of your project, just like your source code and build scripts and unit tests and everything else”. Most obviously, docs should be in version control, just like other assets (I’ve been doing this for twenty years, so no argument from me).
  • Google Developer Documentation Style Guide – Google recently made their style guide available to the public, and it contains a lot of sensible advice, including the caveat that it is only for guidance. Obviously the first rule to break is the one about using American spelling…

“Modern Technical Writing” by Andrew Etter – a breath of fresh air

Recently I read a review (on Tom Johnson’s excellent blog about technical writing) of a short book called “Modern Technical Writing”, by Andrew Etter. It’s available on Amazon as a download for Kindle for £2.69 / $3.56 (or free if you have Amazon Kindle Unlimited).

Based on Tom’s review I downloaded the book, and I’m glad I did – I thoroughly recommend it. It’s refreshingly short – more of a pamphlet or extended essay than a typical text book, and all the better for that (I read it in full during one 45-minute train journey home from work).

I guess I like the book partly because it says a lot of things about technical writing that I’ve thought for many years, in particular:

  • know your audience
  • understand the topic
  • use simple text markup, not heavyweight WYSIWYG, and don’t bother with complex DITA type projects that will never work either
  • put the source in source control (Etter is a fan of Git)
  • build static websites
  • wikis are rubbish

Now, the first two are, you would think at least, uncontroversial, but I loved Etter’s quick romp through this area, especially his description of technical writers who obsess about the Chicago Manual of Style and so on:

“…their impression of the job was that technical writers interviewed engineers, took copious notes, wrote drafts in Adobe Framemaker, and waited several hours for some arcane process to build these drafts into printable books… None of them seemed to give any thought to their reader’s needs, preferring their own criteria for what constituted a job well done. They used phrases like ‘clear, concise, correct and complete’ and avoided words like ‘searchable’, ‘scannable’, ‘attractive’, and most egregiously, ‘useful’. … they were products of a dysfunctional profession, …judged far too long on meaningless criteria by managers [who] would rather produce minimal value from a lot of work than tremendous value from far less work.”

Ouch. I had flashbacks to an unhappy time working for a large company that did everything in Microsoft Word, and used technical writers merely to proofread and copy-edit those Word docs (to arbitrary time scales) before they were submitted to “some arcane process” (which took longer than the time allotted to actually write the stuff) that destroyed all the formatting the line managers obsessed about, and then uploaded the content in mangled HTML form to a website.

I’ve never liked WYSIWYG, hate MS Word with a passion for anything other than writing a letter, and could never see what on earth Framemaker fans were so keen on. Instead, I used LaTeX in the 90s (and into the 00s), and kept the source files (plain text with readable markup) in version control, using whatever version control tools the software engineers around me were using (so initially RCS, then Perforce or Subversion). I was deemed to be a bit odd. I used other things, of course, when duty called, but my favourite Help Authoring Tool was Help and Manual, partly because it is more lightweight than Flare, and partly because its source is well-formed XML that you can edit safely when you need to (and you can play around with the CSS that drives the styles too).

But then a few years ago I noticed something happening in discussions in places like techwr-l. Suddenly everyone was talking about how to get their Framemaker files into Subversion. Then lots of people were acting all interested in XML and DITA – which is, at least, plain text markup. XML is really, really bad of course, and not meant for humans, but still.

And now, just as programmers have tended to move from XML to JSON for data purposes, it seems technical writers are coming round to lightweight, readable, simple markup like markdown or reStructuredText. And putting the stuff in Git. Because it’s software, just like the rest of the code, and there’s no excuse to use different tools or re-invent wheels.

I still think there’s scope for some form of markup that’s as easy to work with as markdown, but which provides semantic tagging while still being simple syntactically. But until that comes along, I’m with Etter – my favoured approach for getting stuff done would be markdown or rst, stored in git or svn, and built into a static website using one of the many open source tools that do that. The only problem, really, is that there are so many different tools – Etter’s book gives a quick overview of a few.

So, in short – if you’re a technical writer, it’s well worth reading this book. Even if you don’t agree with all of it, there’s probably food for thought, and you won’t have invested too much time or cash.

Or, wait for me to publish my own personal manifesto for technical writing, which I’ve had at the back of my head for a few years now – though I may not get round to that, as I’m too busy playing around with static site generators. And that Etter bloke stole my thunder, anyway.

PS. Tom Johnson’s review (much better than mine, as is his blog) is here:

http://idratherbewriting.com/2016/07/26/modern-technical-writing-review/