Analysing Audience Reaction To The PASS Summit 2015 Keynote

If my post from a few weeks back asking for help with my session at PASS piqued your interest, I’m pleased to say that you can now watch the recording of my session “Analysing Audience Reaction To The PASS Summit 2015 Keynote” online here:

Despite having only 24 hours to prepare all my demos after collecting the data, I think it went pretty well. The data from Bing Pulse was particularly interesting: I had around 30-35 people providing regular feedback throughout the keynote and there were some obvious peaks and troughs, as you can see from this graph:


Audience satisfaction was a little flat for the first half hour and then rose noticeably when the content became more technical and the demos started. Both the SQL Server R Services and Stretch Database demos were the high points as far as my panel were concerned; the lowest point came when there was mild outrage at the use of Profiler in a demo.

Thanks once again to everyone who helped me out!

Loading Twitter Archive Data In Power Query

If you’re a Twitter user (I’m @Technitrain if you aren’t following me already) you may be aware that you can download your entire Twitter history – all of the data from all of your tweets – as a series of .js files. All the details on how to do this are here:

Since Power Query can work with JSON data I thought it would be easy to use it to analyse my Twitter archive… but while it’s possible, it’s not entirely straightforward. The problem seems to be that Power Query doesn’t like the first line in each .js file that Twitter gives you: it’s a JavaScript variable assignment rather than valid JSON, so Json.Document() can’t parse the file as-is. Removing that first line isn’t too difficult but it requires some M code, so here’s a function (I’ve called it GetTwitterArchiveFile) that handles the problem. You give it the binary data from the file and it returns a table containing all the data from that file:

(TwitterFile as binary)=>
let
    //Read data from file and interpret as Lines
    Source = Lines.FromBinary(TwitterFile),
    //Remove the first line
    RemoveFirstRow = List.Skip(Source,1),
    //Convert back to Binary
    ConvertBackToBinary = Lines.ToBinary(RemoveFirstRow),
    //Now convert to JSON
    ConvertToJSON = Json.Document(ConvertBackToBinary),
    //Flatten to a table
    ConvertToTable = Table.FromList(ConvertToJSON, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    //Expand the first set of columns
    ExpandColumns = Table.ExpandRecordColumn(ConvertToTable, "Column1", {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"}, {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"})
in
    ExpandColumns
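
Before you go on to combine multiple files, you can sanity-check this function by calling it on a single file. Here’s a quick test query – the file name 2015_11.js below is just a hypothetical example, so substitute one of the .js files from your own download:

let
    //Hypothetical path to a single .js file from the unzipped Twitter download
    Source = File.Contents("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets\2015_11.js"),
    //Pass the binary contents of the file to the function
    Output = GetTwitterArchiveFile(Source)
in
    Output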

Here’s an example of how to use the above function: it’s another function (called GetFullTwitterArchive) which, when you pass it the path of your tweets folder (this will be wherever you unzipped the download that you get from Twitter), returns the combined contents of all of the .js files in that folder by calling GetTwitterArchiveFile() for each one:

(TweetsFolderPath as text) =>
let
    //Connect to Tweets folder
    Source = Folder.Files(TweetsFolderPath),
    //Remove everything but Content column
    #"Removed Other Columns" = Table.SelectColumns(Source,{"Content"}),
    //Use Custom Column to call GetTwitterArchiveFile for each .js file in the folder
    #"Added Custom" = Table.AddColumn(#"Removed Other Columns", "Custom", each GetTwitterArchiveFile([Content])),
    //Remove the Content column
    #"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Content"}),
    //Expand all columns in the Custom column
    #"Expanded Custom" = Table.ExpandTableColumn(#"Removed Columns", "Custom", {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"}, {"source", "entities", "geo", "id_str", "text", "id", "created_at", "user", "retweeted_status", "in_reply_to_status_id_str", "in_reply_to_user_id", "in_reply_to_status_id", "in_reply_to_screen_name", "in_reply_to_user_id_str"})
in
    #"Expanded Custom"

You can invoke this function in a query like this:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets")
in
    Source

This gives you the following output:


As you can see, there are plenty of other columns that can themselves be expanded, but this is a good starting point for any analysis.
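
For example, here’s how you could expand just the screen_name field from the user column – a quick sketch, on the assumption that Twitter’s JSON stores each tweet’s author as a record with a screen_name field:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets"),
    //Expand only the screen_name field from the user record column
    #"Expanded User" = Table.ExpandRecordColumn(Source, "user", {"screen_name"}, {"user.screen_name"})
in
    #"Expanded User"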

There’s nothing really ground-breaking in what I’ve done here – it’s a fairly standard example of how you can use Power Query functions to combine data from multiple files, very similar to this example of combining data from multiple Excel files.

There’s absolutely loads of interesting data that you can play with here, but to start with here’s a query that finds the top 10 people I have replied to on Twitter:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets"),
    //Keep only the in_reply_to_screen_name column
    #"Removed Other Columns" = Table.SelectColumns(Source,{"in_reply_to_screen_name"}),
    //Count the number of tweets for each screen name
    #"Grouped Rows" = Table.Group(#"Removed Other Columns", {"in_reply_to_screen_name"}, {{"Count", each Table.RowCount(_), type number}}),
    //Remove the row counting tweets that aren't replies
    #"Filtered Rows" = Table.SelectRows(#"Grouped Rows", each ([in_reply_to_screen_name] <> null)),
    //Sort in descending order by Count
    #"Sorted Rows" = Table.Sort(#"Filtered Rows",{{"Count", Order.Descending}}),
    //Keep the top 10 rows
    #"Kept First Rows" = Table.FirstN(#"Sorted Rows",10)
in
    #"Kept First Rows"

Here’s the output as a table and chart:



Looks like Jamie Thomson is the winner by a wide margin!
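
As another quick example – just a sketch, and one that assumes created_at uses Twitter’s usual format of something like "Mon Nov 09 14:00:00 +0000 2015", where the last four characters are the year – here’s a query that counts tweets per year:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets"),
    //created_at ends with the four-digit year, so take the last four characters
    #"Added Year" = Table.AddColumn(Source, "Year", each Text.End([created_at], 4)),
    //Count the number of tweets in each year
    #"Grouped Rows" = Table.Group(#"Added Year", {"Year"}, {{"Tweets", each Table.RowCount(_), type number}}),
    //Sort the years in ascending order
    #"Sorted Rows" = Table.Sort(#"Grouped Rows",{{"Year", Order.Ascending}})
in
    #"Sorted Rows"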

Here’s an example of a NodeXL graph I built from this data, showing the relationships between users who I have mentioned together in a single tweet:


You can download the sample workbook for this post, containing all the functions (but not my data) here.
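
Incidentally, if you wanted to build an edge list like the one behind the NodeXL graph above in Power Query itself, here’s a rough sketch of one way to do it. It’s only a sketch: it naively splits tweet text on spaces to find @mentions (so it won’t cope with punctuation immediately after a handle), and it uses the same hypothetical folder path as before:

let
    Source = GetFullTwitterArchive("C:\Users\Chris\Downloads\TwitterArchive\data\js\tweets"),
    //Naively find @mentions by splitting tweet text on spaces
    #"Added Mentions" = Table.AddColumn(Source, "Mentions", each List.Select(Text.Split([text], " "), each Text.StartsWith(_, "@"))),
    //Keep only tweets that mention at least two people
    #"Filtered Rows" = Table.SelectRows(#"Added Mentions", each List.Count([Mentions]) >= 2),
    //Turn each tweet into the list of all pairs of co-mentioned handles
    #"Added Edges" = Table.AddColumn(#"Filtered Rows", "Edges", each
        let
            m = [Mentions]
        in
            List.Combine(List.Transform({0..List.Count(m) - 2}, (i) =>
                List.Transform({i + 1..List.Count(m) - 1}, (j) =>
                    [Vertex1 = m{i}, Vertex2 = m{j}])))),
    //Expand the pairs into one row per edge
    #"Removed Other Columns" = Table.SelectColumns(#"Added Edges", {"Edges"}),
    #"Expanded Edges" = Table.ExpandListColumn(#"Removed Other Columns", "Edges"),
    #"Edge Table" = Table.ExpandRecordColumn(#"Expanded Edges", "Edges", {"Vertex1", "Vertex2"})
in
    #"Edge Table"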

Analysing #SQLPASS Tweets using NodeXL

I’ve got a large backlog of serious technical blog posts to write but today, since I’m still recovering from my trip to the PASS Summit in Seattle last week, I couldn’t resist going back to my favourite data visualisation tool NodeXL and having some fun with it instead. Anyone who saw the keynotes last week will know that the future of BI is all about analysing data from Twitter – forget about that dull old sales or financial data you used to use on your BI project – and so, inspired by Sam Vanga’s blog post from today on the same topic, I decided to take a look at some Twitter data myself.

In NodeXL I imported 1757 tweets from 515 different people that included the #sqlpass hashtag from the 8th of November, when Twitter activity at the conference was at its peak (I couldn’t import any more than that – I assume Twitter imposes a limit on the number of search results it returns). In basic terms, when NodeXL imports data from Twitter, each Twitter handle becomes a point on a graph, and a line is drawn between two Twitter handles when they appear in a tweet together. I won’t bother going into any detail about how I built my graph because analysing the results is much more interesting, so I’ll just say that after playing around with the clustering, layout and grouping options, here’s what I came up with:


It looks very pretty from this distance but it’s not very useful if you can’t read the names, so I saved a much larger .png version of this image here for you to download and explore, and if you’ve got NodeXL you can download the original workbook here (don’t bother trying to open it in the Excel Web App). It’s fascinating to look at – even though the data comes from a very restricted time period, the cliques in the SQL Server world emerge quite clearly. For example, here’s the group that the clustering algorithm has put me in (I’m @Technitrain), which is at the bottom of the graph on the left-hand side:


There’s a very strong UK/SQLBits presence there (@timk_adatis and @allansqlis for example), but also a strong BI presence as well with @marcorus and @markgstacey, which is pretty much what you’d expect. There are several other small groups like this, plus a large number of unconnected people in groups on their own in the bottom right-hand corner of the graph, but on the top left-hand side there’s a monster group containing a lot of well-known SQL Server personalities. Jen Stirrup (@jenstirrup) is right in the centre of it, partly because she’s one of the SQL Server Twitter royalty and partly because of her well-deserved PASSion award that day. Highlighting in red just the tweets that involved her shows at the very highest level how well-connected she is:


Keeping Jen selected and zooming in shows the people clustered together with Jen a bit better:


Selecting not only Jen’s tweets but also the tweets of the people who tweeted to her and also to each other (which is one of many useful features in NodeXL), highlights just how close the members of this group are:


This is clearly where the popular kids hang out…

Anyway, I hope this gives you an idea of the kind of thing that’s possible with NodeXL and Twitter data and inspires you to go and try it yourself. Hell, NodeXL is so much fun it might prove to the DBA crowd that BI doesn’t need to be boring!