Note: I first wrote this post largely on the train back from London after the conference (which perhaps explains why it’s not the best blog post ever) – I’ve now added an update at the end with links to videos of the talks.
This one-day conference was hosted / sponsored by Pronovix, and held at the Trampery in the Old St (“Silicon Roundabout”) area of Shoreditch, London (the home of the hipster – unless the trendy people have moved on somewhere else now, like Dalston, which is entirely possible… the venue did have 1970s-style furnishing, though – you’d expect nothing less, of course).
I took the day off and got the earliest affordable train down to London from Sheffield, using up the rail vouchers I was sent as an apology for the many delays on the Sheffield to Manchester line last year, when I used to commute to Manchester by train (I’m currently commuting up the M1 by car to Leeds most days, so it made a nice change to be back on a train).
The day was divided into several talks, and between them short breaks to mingle and chat. There was also an “unconference” session where people broke into separate discussion groups to talk about a set of suggested topics of interest (I went to the API Specification discussion, hoping to hear something useful about Swagger/OpenAPI).
The talks
The list of talks was, I think, designed to give a balance between more technical discussions of particular aspects of API documentation, and more general talks about best practices and good approaches to organising documentation (which would apply whatever tool-chain or approach was being taken).
I missed the first two due to getting the 07.46 from Sheffield and arriving at the venue at 10.45, rather than taking out a second mortgage for a train at 06.30 or a hotel the night before, but the remaining talks were as follows:
- Rosalie Marshall at GDS (Government Digital Service, the UK government agency attempting to bring public sector IT into the 21st century) talked about their experience over the last couple of years with building both an API documentation team and a tool-chain, and the challenges of getting buy-in across their (obviously quite large) organisation. They seem to have done a good job of getting a developer portal up and running, which is being rolled out gradually across the organisation (rather than being imposed from above). They began by seeing what (if anything) was already being used across various departments, and also put effort into finding out what departments thought their problems were, what they would like to achieve, etc. It looks like they have settled on something built out using markdown in git, following a “docs as code” approach. All of this sounds excellent as far as I am concerned!
- Daniel Beck – this talk was about what to communicate to customers when deprecating an API. This was all sensible advice, but as much aimed at marketing or product owners as technical writers – essentially based around how to give customers bad news and help them adapt, and as much to do with killing off a service as with the more typical (to me) scenario of making sure the product road map is communicated to customers, and that deprecation of functionality is communicated well in advance via docs and release notes.
- Jennifer Riggins – “How can API docs be agile?”. This was a really nice talk (I think previously given at an API Days conference), with a good summary of whether docs should be in the definition of done (only if you get the help you need to deliver – don’t be the one that makes a sprint fail its goals too often if you want to stay popular). Lots of sensible advice about docs backlogs, prioritising (RICE), getting help from devs and acting as “curator”, “pair writing” with a dev, automating as much as possible, remembering principles like DRY, and so on. A nice talk, and much of the info can be found on http://apidocswriter.com.
- Daniele Procida – “What no-one tells you about documentation”. Daniele (who is a man, from Switzerland) gave a nice talk arguing that all documentation can be split into four separate groups, and that the key to success is being clear about what is required for each group. This makes things easier for both the reader and the technical writer. At first this seemed uninteresting / obvious, but the more he talked, the more it seemed like very good common sense distilled to its essence. Often we naturally break things up the way he suggests anyway, but he argues that problems occur when we don’t, so it’s a sort of design pattern that, once you are aware of it, becomes very powerful when you remember to apply it. There is more detail of his argument at www.divio.com/blog/documentation, which is well worth a read. In short, he identifies that all documentation should be one of:
1. Tutorials
2. How-to
3. Reference
4. “Discussions” (the more narrative explanation and “why” stuff – perhaps a better term for this is needed?)

He explained (quite convincingly, at the time at least!) that keeping the distinction clear while writing is a useful, powerful tool. A clear distinction helps the reader too – and you often see the same distinction used anyway; calling it out as a design pattern just helps us see what we are doing, and where not to go wrong. Also, much of the advice he offers about what to include in each level / type of documentation, and how to go about writing it, is very good advice in my opinion. He is, by the way, a huge advocate of Sphinx and reStructuredText.
- Andrew Johnston of Spotify (from Canada) talked about producing documentation for GraphQL APIs. GraphQL could well be the next big thing. He talked about how they’ve been using it, how he has tried to ensure decent docs coverage, the challenges involved, etc. GraphQL is a bit different to REST / SOAP and could change how things are done quite a bit. It involves documenting each edge and node of the graph that the web queries traverse, using some tooling (he went through the tooling from Facebook). A good heads-up if this ever catches on (which it may well do, quite soon, of course).
- Ben Hall – “the art of documentation, and readme.md”. This covered similar ground to the talk from Daniele, in a different way. Good, useful non-techie advice (from a non-writer, about what devs expect). Ben was keen to stress the importance, when using GitHub, of a very good GitHub readme.md. This is good advice, and combined with Daniele’s talk, you should know what goes in the readme! (It’s important to have the “why” stuff there – readers need to know what the project is for.) He’s also involved with an online teaching resource for developers called Katacoda. Oh, and he gave a shout-out for using Grammarly as a linter. He also had nice examples of good API docs sites he had come across – he compared, for example, Clearbit (good) with Docker (bad – you’d never know what Docker actually is from the docs; they assume you know all about it already). That said, Clearbit (an API that aggregates info about a company or individual) looks kind of scary. Stay off the internet, folks – the marketeers know too much about you already.
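The GraphQL talk above described documenting each node and edge of the graph. As a rough illustration of what that can look like in practice, here is a minimal sketch that turns a tiny GraphQL-style schema (hand-written as plain Python dicts – real tooling would introspect the running server instead) into Markdown reference entries. All the type and field names below are invented for the example:

```python
# Sketch only: a tiny GraphQL-style schema represented as plain dicts.
# Each type is a "node"; each field is an "edge" to document.
SCHEMA = {
    "Track": {
        "description": "A single playable track.",
        "fields": {
            "title": ("String", "The track's display title."),
            "durationMs": ("Int", "Playback length in milliseconds."),
            "artist": ("Artist", "The primary artist for this track."),
        },
    },
    "Artist": {
        "description": "A performer or band.",
        "fields": {
            "name": ("String", "The artist's public name."),
        },
    },
}

def schema_to_markdown(schema):
    """Emit one Markdown section per type, with a table row per field."""
    lines = []
    for type_name, info in schema.items():
        lines.append(f"## {type_name}")
        lines.append(info["description"])
        lines.append("| Field | Type | Description |")
        lines.append("| --- | --- | --- |")
        for field, (ftype, desc) in info["fields"].items():
            lines.append(f"| {field} | {ftype} | {desc} |")
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(schema_to_markdown(SCHEMA))
```

The point is only that every type and every field needs its own description – the Facebook tooling he mentioned (GraphiQL and friends) surfaces exactly this kind of per-field documentation from the schema itself.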
Hopefully the Pronovix site will later include details (possibly videos) of the talks, including the two I missed.
The “unconference” sessions
Before lunch we had an “unconference” session, where we broke out into separate discussion groups about topics of interest. I was quite interested in the static site generator group, but instead chose to go to the session about API specification tools / markdown extensions. More details below, but five of us had an interesting chat about the use of Swagger (one other person was, like me, currently generating Swagger output from source code comments via Swashbuckle).
We discussed the lack of documentation for JSON payload objects, but the other people using Swagger were editing the Swagger spec (YAML/JSON) directly, which we weren’t (we were both editing source code comments directly), so we didn’t resolve that.
Somebody did mention that they took the Swagger specification, read it into Asciidoctor, and could therefore build out a single site with the more general “how to” documentation (in AsciiDoc) alongside the API reference material. I haven’t yet attempted this myself (the Swagger specification we are generating where I work is currently pretty large, due to the sheer number of APIs and also the size of the JSON payloads, which are unwieldy to say the least at present), but it’s a useful tip.
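The attendee didn’t name the exact tool they used (swagger2markup is one existing option), but the basic idea can be sketched in a few lines: read the Swagger (OpenAPI 2.0) spec and emit AsciiDoc sections that Asciidoctor can build into the same site as the hand-written pages. The tiny spec below is invented for illustration:

```python
import json

# A minimal, invented Swagger 2.0 spec standing in for a real one
# (which would normally be loaded from a file or a /swagger endpoint).
SPEC_JSON = """
{
  "swagger": "2.0",
  "info": {"title": "Demo API", "version": "1.0"},
  "paths": {
    "/widgets": {
      "get": {"summary": "List all widgets."},
      "post": {"summary": "Create a widget."}
    }
  }
}
"""

def spec_to_asciidoc(spec):
    """Emit one AsciiDoc section per path, one bullet per HTTP method."""
    lines = [f"= {spec['info']['title']} reference", ""]
    for path, methods in sorted(spec["paths"].items()):
        lines.append(f"== {path}")
        for method, op in sorted(methods.items()):
            lines.append(f"* `{method.upper()}` -- {op.get('summary', '')}")
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(spec_to_asciidoc(json.loads(SPEC_JSON)))
```

The generated `.adoc` output can then sit in the same Asciidoctor source tree as the narrative docs, which is exactly the single-site result described in the session.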
Somebody from Nexmo had also done nice things to pull Swagger into their own system, though they’d gone quite a lot further, and were happy (keen, even) to demo it. They’ve built a nice Ruby-based system with markdown extensions that builds out nice docs. Apparently they have open-sourced this work, so it could well be worth checking out (I will post a link when I find it).
On the other hand, the other people in this particular session were writing Swagger specs by hand as a way to document APIs that were already defined/built elsewhere – though in one case they did generate client SDK code from the Swagger spec. Interestingly, one person was not aware that, in addition to this, you can use the Swagger spec to generate the server-side code that defines the endpoints (nor had they heard of generating the Swagger spec from the server code – I hadn’t either, until recently).
So out of the five users of Swagger present, there were at least three different use-cases, and we could identify a further use-case that none of us presently took advantage of (generating server code from the Swagger spec).
This was all very interesting, though I admit I was a little disappointed not to come away with any insight into how to address my current concerns with Swagger, namely the lack of suitable descriptions of payload items when generating Swagger output from C# comments. I’m suffering from a slight trough of disillusionment, it seems (I’m in good company, as Tom Johnson had similar concerns according to his blog).
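One workaround I’ve been considering for exactly this concern is a small lint script: walk the generated Swagger spec and report payload properties that have no `description`, so at least the gaps are visible. The spec fragment below is invented; a real one would come from the Swashbuckle-generated JSON:

```python
import json

# Invented fragment of a generated Swagger spec, for illustration only.
SPEC = json.loads("""
{
  "definitions": {
    "Widget": {
      "properties": {
        "id": {"type": "integer", "description": "Unique widget id."},
        "colour": {"type": "string"}
      }
    }
  }
}
""")

def undocumented_properties(spec):
    """Return (definition, property) pairs lacking a description."""
    missing = []
    for def_name, definition in spec.get("definitions", {}).items():
        for prop, details in definition.get("properties", {}).items():
            if not details.get("description"):
                missing.append((def_name, prop))
    return missing

if __name__ == "__main__":
    for def_name, prop in undocumented_properties(SPEC):
        print(f"{def_name}.{prop}: missing description")
```

It doesn’t fix the underlying problem of getting the descriptions out of the C# comments in the first place, but run in a build step it would stop undocumented payload items slipping through unnoticed.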
I didn’t get a chance to hear much of what was discussed in the other sessions, but hopefully some notes will appear on Pronovix’s website/blog at some stage (I may update this blog with links or notes when I can). One suggestion for future events like this would be to have someone write up some quick notes on a whiteboard or poster-sized sheet, which could be available for people to look at or discuss in the breaks between sessions.
Summary
The day was well organised by the sponsors, and it was definitely worth the cost of my train ticket (the event itself was free). Apart from the useful talks, the main thing I found valuable was the chance to meet and talk to other people, whether developers or full-time technical writers, who are involved in documenting APIs – most other meetups for technical writers (with the exception of the Write the Docs groups) seem to be more biased toward documenting GUIs with Word and FrameMaker, rather than docs-as-code and API documentation.
Oh, and there were free laptop stickers:

At last, I can hold my head up high in London’s Shoreditch / Manchester’s Northern Quarter…
Update (22 July)
Since writing this post a month or so ago, slides and videos of the presentations have been uploaded to the Pronovix site here:
https://pronovix.com/api-docs-london-2017
I’ve now had a chance to watch the videos for the two talks that I missed:
- Jessica Parsons – “The best of both worlds: a Git-based CMS for static sites”. Jessica is from Netlify, a static site hosting service (which I’ve used before: they provide, or used to provide, free cloud hosting for static websites, which I used at my previous contract to temporarily host MkDocs-generated API documentation as a proof of concept, before I got agreement to set up our own webspace). Jessica’s talk goes through the advantages of using static site generators to build simple websites (ideal for documentation), rather than using CMS (database-driven) systems or Help Authoring Tools. This fits with the “docs-as-code” approach, with version control done by Git or Subversion rather than by a database back end; hosting with a service like Netlify also allows build hooks for GitHub, and so on. Jessica gives a quick overview of several static site generators (Jekyll, Hugo, Sphinx, MkDocs, GitBook, Slate), as well as explaining the general concept. I was especially interested to hear about Slate – I hadn’t realised that it produced output inspired by the Stripe API documentation (which is something everyone wants to emulate), so that’s something I will be checking out. Slides can be found here: http://slides.com/verythorough/best-of-both#/
- Jaroslaw Machaň – “Opening a door to a sleeping castle”. Jaroslaw is from Ceska Bank in the Czech Republic, and talked about the API platform they built to support internet banking. Not surprisingly, a good developer portal was vital.
These were both useful and interesting talks, so it’s a shame I missed them because of trains etc. – many thanks to the people at Pronovix for uploading the videos and slides.