Part everything, all User Experience

Found this in a job description for a very interesting UX role which astounded me with its clarity:

“You might describe yourself as an Experience Architect, Interaction Designer or User Researcher. Ideally, you’re part-interaction designer, part-information architect, part-usability expert, part-business analyst – and all user experience.”

Crikey! This has been written by someone who actually knows what UX is about.

And that is very rare indeed.

Part-interaction designer, part-information architect, part-usability expert, part-business analyst, all user experience – that’s me. With a bit of data viz thrown in, of course.

The number of junk job descriptions for graphic designers posing as UX peeps that I get sent is ridiculous.

So when I read this it was very refreshing indeed because it describes what I do and how I do it precisely.

Not quite as refreshing as three weeks on an idyllic Caribbean island but close enough.

I will not be apologising for ripping this off and copying and pasting it straight into my CV.

You never know, I might even get some decent UX work out of it.

Now that would be odd.

dc.js + crossfilter.js + d3.js = huh? Part II

As I began my first explorations of dc.js and crossfilter, I was more than a little baffled by the need for both – and then I realised that dc.js has native support for crossfilter. Doh!

I then found this great Hacker News discussion about how dc.js, crossfilter.js and d3.js relate to each other. Below are a few quotes but you really should read the whole thing.

Love the paintbrush description of D3. I only realised this after spending an awfully long time coding a bar chart in D3… Oops! But I was learning D3 at my kitchen table – that’s my excuse and I’m sticking to it.

“dc.js is the ‘glue’ that holds d3 and crossfilter together. So I can create a crossfilter, generate multiple dimensions, group those dimensions, then render multiple charts.”

“D3 is like a paintbrush — you can make anything with it if you’re DaVinci, but it’s a very low-level tool so you need to be a master if you want to make anything that’s not my drippy kindergarten giraffe drawing.”

“The benefits of crossfilter or dc.js over plain d3.js is the layer of abstraction making it easier to use.”

“Crossfilter seems really cool – but since it’s another library, what is it that Dc is offering?”

“dc.js sits on top of D3 and provides the glue between multiple D3 charts and crossfilter.”

“dc.js marries crossfilter.js with d3.js — that’s it in a nutshell.”

Does any of that make any sense?

No? Well it should come as no surprise to anyone who has recently learned D3 that the best explanation comes from the great D3 Noob resource.

This is the D3 Noob explanation, which I think is the best explanation of the dc.js + crossfilter.js + d3.js thing that I have read:

“…crossfilter isn’t a library that’s designed to draw graphs. It’s designed to manipulate data. D3.js is a library that’s designed to manipulate graphical objects (and more) on a web page. The two of them will work really well together, but the barrier to getting data onto a web page can be slightly daunting because the combination of two non-trivial technologies can be difficult to achieve.

This is where dc.js comes in. It was developed by Nick Qi Zhu and the first version was released on the 7th of July 2012.

Dc.js is designed to be an enabler for both libraries. Taking the power of crossfilter’s data manipulation capabilities and integrating the graphical capabilities of d3.js.”
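That division of labour can be sketched without either library. The snippet below mimics, in plain JavaScript, what a crossfilter dimension-plus-group does to the data before dc.js hands the totals to d3 for drawing. The records, field names and numbers are made up purely for illustration – this is the idea, not the crossfilter API.

```javascript
// Hypothetical records, standing in for rows you might load with d3.csv().
const records = [
  { region: "North", sales: 10 },
  { region: "South", sales: 25 },
  { region: "North", sales: 5 },
  { region: "South", sales: 15 },
];

// A crossfilter-style "dimension": pick the field to filter and group on.
const byRegion = (r) => r.region;

// A "group": reduce the (filtered) records to a total per key.
// This per-key summary is roughly what a dc.js bar chart would then draw.
function groupSum(rows, keyFn, valueFn) {
  const totals = new Map();
  for (const row of rows) {
    const key = keyFn(row);
    totals.set(key, (totals.get(key) || 0) + valueFn(row));
  }
  return totals;
}

// A "filter": keep only the bigger sales, then regroup - the re-rendering
// step that dc.js triggers automatically when a chart's filter changes.
const filtered = records.filter((r) => r.sales >= 10);
const totals = groupSum(filtered, byRegion, (r) => r.sales);

console.log(totals.get("North")); // 10
console.log(totals.get("South")); // 40
```

Crossfilter does all of this far faster and across multiple linked dimensions at once; dc.js just spares you from wiring each group into a d3 chart by hand.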

Better? Good. Next thing is to have a look at this Bare bones structure for a dc.js and crossfilter page then read this excellent explanation of Crossfilter, dc.js and d3.js for Data Discovery and then read this Introduction to dc.js.

There are times when I wonder how I learned coding before the existence of the interwebs and people who share their knowledge so freely.

Then I remember it was my friends on my Artificial Intelligence course who helped me get my head around Prolog’s tail recursion.


Another data driven journalism project starts

Tired and generally worn out. But I have another data journalism project that I need to get done that might crack open all sorts of things.

To be honest it is more a continuation of previous work that I have been undertaking over the last 18 months or so. Might lead somewhere, might not.

Either way I need to do some basic data analysis to see if what I have been told is true.

If it is then BINGO. Maybe even double BINGO.

With all this talk of data driven this and data driven the other it’s sometimes easy to forget that the best information is almost always what someone tells you.

The funny thing is that what might seem a minor occurrence to them could be a significant lead for investigative work.

I have had the Excel files sitting on my hard drive for a couple of weeks but life has been a bit busy.

First thing – convert the Excel files with multiple worksheets into individual CSV files. I use the Zamar file conversion service to do this.

Not any more I don’t – two Excel files I uploaded came back empty. Oops. Probably a fault in the files rather than the Zamar service, but I am not taking any chances at the moment.

This Excel VBA macro seems to do the job. Code below:

Sub ExportSheetsToCSV()
    Dim wSheet As Worksheet
    Dim csvFile As String
    ' Save every worksheet in the workbook as its own CSV file
    For Each wSheet In Worksheets
        csvFile = CurDir & "\" & wSheet.Name & ".csv"
        ' Activate the sheet first: the xlCSV format only saves the active sheet
        wSheet.Activate
        ActiveWorkbook.SaveAs Filename:=csvFile, _
            FileFormat:=xlCSV, CreateBackup:=False
        ActiveWorkbook.Saved = True
    Next wSheet
End Sub

Next? Oh joy. Data scrubbing.

I have tried to document a data journalism story before and failed miserably. This time I hope to be able to keep a little diary.

Hyperlocal, data driven journalism, data visualisation, UXD.
