Category Archives: Data Visualisation

dc.js + crossfilter.js + d3.js = huh? Part II

As I began my first explorations of dc.js and crossfilter I was more than a little baffled by why both libraries were needed, then realised that dc.js has native support for crossfilter. Doh!

I then found this great Hacker News discussion about how dc.js, crossfilter.js and d3.js relate to each other. Below are a few quotes but you really should read the whole thing.

Love the paintbrush description of D3. I only realised this after spending an awfully long time coding a bar chart in D3… Ooops! But I was learning D3 at my kitchen table – that’s my excuse and I am sticking to it.

“dc.js is the ‘glue’ that holds d3 and crossfilter together. So I can create a crossfilter, generate multiple dimensions, group those dimensions, then render multiple charts.”

“D3 is like a paintbrush — you can make anything with it if you’re DaVinci, but it’s a very low-level tool so you need to be a master if you want to make anything that’s not my drippy kindergarten giraffe drawing.”

“The benefits of crossfilter or dc.js over plain d3.js is the layer of abstraction making it easier to use.”

“Crossfilter seems really cool – but since it’s another library, what is it that Dc is offering?”

“dc.js sits on top of D3 and provides glue between multiple D3 charts and crossfilter”

“dc.js marries crossfilter.js with d3.js — that’s it in a nutshell.”

Does any of that make any sense?

No? Well it should come as no surprise to anyone who has recently learned D3 that the best explanation comes from the great D3 Noob resource.

This is the D3 Noob explanation, which I think is the best explanation of the dc.js + crossfilter.js + d3.js relationship that I have read:

“…crossfilter isn’t a library that’s designed to draw graphs. It’s designed to manipulate data. D3.js is a library that’s designed to manipulate graphical objects (and more) on a web page. The two of them will work really well together, but the barrier to getting data onto a web page can be slightly daunting because the combination of two non-trivial technologies can be difficult to achieve.

This is where dc.js comes in. It was developed by Nick Qi Zhu and the first version was released on the 7th of July 2012.

Dc.js is designed to be an enabler for both libraries. Taking the power of crossfilter’s data manipulation capabilities and integrating the graphical capabilities of d3.js.”

Better? Good. Next thing is to have a look at this Bare bones structure for a dc.js and crossfilter page then read this excellent explanation of Crossfilter, dc.js and d3.js for Data Discovery and then read this Introduction to dc.js.

There are times when I wonder how I learned coding before the existence of the interwebs and people who share their knowledge so freely.

Then I remember it was my friends on my Artificial Intelligence course who helped me get my head around Prolog’s tail recursion.


Mixing emotion and statistics to tell the data story

It is rare to get a press release that has everything needed to tell a story, let alone a data story.

A few weeks back I noticed an interesting data story from Barnardo’s UK in a local newspaper in sarf lunnun (or South London if you don’t speak the East End patois).

Getting in touch with Dan at Barnardo’s, I soon had a press release with real facts in it, a cracking model-released image and some interesting data.

Even data references! How unusual is that?

Below is the result of my work.

[Image: Barnardo’s graphic]

The image was a little too tight for my liking, so I had to use Content-Aware Fill in Photoshop and more than a little retouching to give myself a few more valuable pixels on the right-hand side.

At the same time I wanted the typography and data to intrude onto the boy’s image.

I tried various styles of bar chart, most of them very slight variations on the finished result, the main alternative being a yellow keyline around the ’81 families in Tower Hamlets’ bar.

Barnardo’s liked what I had done and it helped to get the reality of life behind statistics out there.

Here is the story on my hyperlocal site Love Wapping.


Data scrubbing always precedes data visualisation (sigh)

How best to analyse and visualise the 1,378 A4 pages of court case transcript?

I know the story within these 1,378 pages better than most. My task is to extract the story and turn it into something that explains the story to others.

I consider myself lucky that I have the transcript as it is a précis of a protracted and involved tale. OK, it’s a long précis but it is still a précis.

Each of the 30 days of the court case gives me a base structure as a timeline. The witness testimony gives another level of granularity. The names, places and organisations that each witness mentions provide a way to create links (‘edges’) between the different actors (‘nodes’).

One way to visualise this might be a co-occurrence matrix along the lines of the classic ‘Les Mis’ example, or some form of hierarchical edge bundling or chord diagram. The data will decide that, not me.
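As a rough sketch of how those witness-mention edges might be counted up before any visualisation choice is made — with entirely made-up witness and entity names, since the real transcript data isn’t shown here — something like this in Python:

```python
from itertools import combinations
from collections import Counter

# Hypothetical data: for each witness, the entities (people, places,
# organisations) mentioned in their testimony. All names are invented.
mentions = {
    "Witness A": ["Acme Ltd", "J. Smith", "Wapping"],
    "Witness B": ["J. Smith", "Wapping"],
    "Witness C": ["Acme Ltd", "J. Smith"],
}

# Count how often each pair of entities co-occurs in one testimony.
# Each entity is a node; each counted pair becomes a weighted edge,
# which is exactly the shape a co-occurrence matrix or chord diagram wants.
cooccurrence = Counter()
for entities in mentions.values():
    for a, b in combinations(sorted(set(entities)), 2):
        cooccurrence[(a, b)] += 1

for (a, b), weight in sorted(cooccurrence.items()):
    print(a, "--", b, "weight:", weight)
```

The same node/edge counts could then be serialised to the JSON shape D3 force layouts and matrix examples expect.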

I have already undertaken some basic semi-automatic data scrubbing of the original Word documents: converting them into text files, creating a PDF of each, and then combining the resulting PDFs into a single compound document. Huh? Why bother doing that?

The utility of entity analysis in DocumentCloud is clear.

I have been exploring the potential benefits of DocumentCloud. If you give DocumentCloud a single PDF it will treat each page within the PDF as a separate document and generate timelines for all people, places and other entities using the OpenCalais system.

Default hierarchical view of documents in Overview

I discovered DocumentCloud through using the Knight Foundation’s Overview system. You can automatically import documents from DocumentCloud into Overview and so take advantage of the benefits of both. So I urge you to get accounts sorted with both and see what you can do. The staff at Overview and DocumentCloud are super helpful.

Before I uploaded the transcript into these systems I spent some time looking at tagging options and desperately trying to remember the SGML project I undertook a couple of decades back – yes before I even knew HTML existed. Fortunately DocumentCloud does much of the tagging legwork for you so no need to hack another SGML editor.

But I keep coming back to the quality of the data, which is what any data-driven journalism project relies on. Good old-fashioned clean data.

Sure, the results given by Overview and DocumentCloud are better than if I had uploaded Word files (eek!). But looking at the resulting text I know it could be better still, to the point where it could serve as a data structure foundation on which to build useful computational structures.

Possibly because I know the real story currently hidden in the text, I am anxious to generate the best possible narrative.

I make no apologies for thinking about any dataset as being one big list. That’s what an A.I. degree does for you. And I can see the one big list being nicely parsed and imported into MySQL – then it can be used for all sorts of things.

Patience has never been a virtue of mine and so I have spent several days trying to dodge the reality that before the data can become useful I really need to fire up Python and get the transcript text 100% clean and then properly structured.

First off just a simple lexical analysis. Down the road possibly some limited semantic analysis. But the reality is that any elegant data visualisation always needs a clean dataset.

So that’s me in Python land for a couple of weeks. I know I still have it easy as much of the transcript is of the form:

19 Q: Did you do this naughty thing?

20 A: No I did not do that naughty thing, honest.

21 Q: Are you sure?

22 A: Oh yes, on my life guvnor!

Stripping the line numbers is hardly a big issue, although I am in no way a regex expert. I do have a book on that very subject that I got for Christmas, though.
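A minimal Python sketch of that line-number stripping — using a few invented lines in the same numbered Q/A shape as the excerpt above — needs only one modest regex:

```python
import re

# Sample text in the same shape as the transcript excerpt above.
raw = """19 Q: Did you do this naughty thing?
20 A: No I did not do that naughty thing, honest.
21 Q: Are you sure?
22 A: Oh yes, on my life guvnor!"""

# Strip the leading line number, then capture the speaker (Q or A)
# and the spoken text that follows the colon.
line_re = re.compile(r"^\s*\d+\s+(Q|A):\s*(.*)$")

turns = []
for line in raw.splitlines():
    m = line_re.match(line)
    if m:
        turns.append((m.group(1), m.group(2)))

for speaker, text in turns:
    print(speaker, text)
```

Real transcript pages would no doubt need extra cases (continuation lines, stage directions, page headers), but this is the core of the clean-up step.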

But I want to be hacking D3 loveliness right now!

So the workflow at the moment looks like it will be the usual:

Python + Regex -> MySQL -> D3 -> End product(s).
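As a very rough sketch of the middle of that pipeline — using Python’s built-in sqlite3 as a stand-in for MySQL so the example is self-contained, and made-up sample rows — the database-to-JSON step that feeds D3 might look like:

```python
import json
import sqlite3

# Cleaned question/answer turns (hypothetical sample data,
# as produced by the regex clean-up stage).
turns = [
    (19, "Q", "Did you do this naughty thing?"),
    (20, "A", "No I did not do that naughty thing, honest."),
]

# sqlite3 stands in for MySQL here; the SQL itself would be much the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE turns (line INTEGER, speaker TEXT, text TEXT)")
conn.executemany("INSERT INTO turns VALUES (?, ?, ?)", turns)

# Export as JSON, the usual way to get data from a database into D3.
rows = conn.execute(
    "SELECT line, speaker, text FROM turns ORDER BY line"
).fetchall()
payload = json.dumps(
    [{"line": l, "speaker": s, "text": t} for l, s, t in rows]
)
print(payload)
```

On the D3 side the resulting file is then just a `d3.json()` call away.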

[Note: I am now using a very nice clean and simple product called iA Writer for all my writing needs. iA Writer just lets you write stuff with no Mr. Paperclip nonsense and it is also a great way to get up to speed with markdown. Highly recommended.]