Saturday, November 16, 2019

TC19 Wrap up

So that’s a wrap, TC19 has finished! What did I get up to, and what were my thoughts on the conference?



For me, the conference broke down into the following session types:

Keynotes
Hands on training
Certification
Other demonstration sessions

So here are my thoughts on the above.

Keynotes

Opening Keynote

Wow, just wow! It always surprises me how many Tableau users come to Conference, and the sheer size of the venues. This year Conference was at the Mandalay Bay hotel in Las Vegas, in the arena that normally hosts concerts (I think the capacity was 7,000). And the opening of the keynote lived up to all the theatre of a concert!



The first part of the keynote was delivered by Adam Selipsky and focused on building a data culture. He drew an analogy between building a data culture and the code-breaking effort of WWII, and highlighted how important diversity is to a data culture by focusing on some of the key women in that code-breaking effort, as told in the book Code Girls by Liza Mundy (who was also present at the opening keynote and received a standing ovation).
This focus on data culture drew out the elements of proficiency, inclusion, agility and community, which led into the introduction of Tableau Blueprint, Tableau's framework and set of resources for building out a data culture in your organisation.


I’m keen to look at the resources available, as part of my role over the last 2+ years has been in tech-enabling my team and building an internal community. So hopefully it will validate the approach we’ve taken and introduce some resources to help us further on this journey.

There was then a fireside chat with Marc Benioff, the Salesforce CEO. When the conference was held in Berlin earlier this year, Salesforce had just announced their intention to acquire Tableau; now, with the deal finalised, there was a lot more available around the strategy behind the acquisition, although it was still a very high-level set of initial thoughts. It will be interesting to see if more is said at Salesforce's Dreamforce event next week.


The final part of the opening keynote focused on some of the new features which have either just been released or are likely to be part of the next couple of releases.

Key updates from this that were exciting for me were:

 - Prep Builder moving to the browser: this will mean that data preparation, visualisation and publication can be done end to end in the cloud. This will be important, along with the AWS partnership, in helping to scale deployments and potentially (and this is me speculating) in adding different tiers to the subscription model, where an online Prep and Desktop with a reduced feature set could be offered at a lower price point.

 - Ask Data and Explain Data: I've put these two together as I see them working together to further democratise data analysis, with both being go-to tools to assist with initial data discovery and to help consumers of dashboards interact with the content and delve deeper into the insights.

- Data Model: Like the above, this will help democratise data analysis. Users often get confused about how data should be related and about joining data at different levels of aggregation. These are things that experienced 'data people' will be familiar with and know how to deal with, such as the different types of joins and the use of LOD expressions to roll granular records up to a more aggregate level. I first saw this last year at the New Orleans conference and was initially disappointed: while I totally get the benefit of making the analysis easier for people who aren't yet 'data people', I thought it hid what was going on under the hood and could make people feel like they didn't need to understand important data engineering concepts. Having now seen it a second time, I'm much more open to the Data Model features, as they can act as a stepping stone by taking away some of the initial complexity; as users become more familiar with data engineering, they can then look under the hood and make changes if required. (The sketch below shows the join-granularity problem the Data Model is designed to smooth over.)

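To make that join-granularity problem concrete, here's a minimal pandas sketch. The tables and column names are made up for illustration, and it is only an analogue of what Tableau's relationships and LOD expressions handle for you:

import pandas as pd

# Made-up data: granular daily orders and monthly targets.
orders = pd.DataFrame({
    "month": ["2019-10", "2019-10", "2019-11"],
    "order_id": [1, 2, 3],
    "sales": [100.0, 150.0, 200.0],
})
targets = pd.DataFrame({
    "month": ["2019-10", "2019-11"],
    "target": [220.0, 180.0],
})

# Naive join at different levels of detail: the monthly target is repeated
# on every order row, so summing afterwards double-counts the target.
naive = orders.merge(targets, on="month")
print(naive.groupby("month")[["sales", "target"]].sum())

# The 'data person' fix: roll the granular rows up to the join level first,
# which is the job an LOD expression (or the new Data Model) does for you.
monthly_sales = orders.groupby("month", as_index=False)["sales"].sum()
print(monthly_sales.merge(targets, on="month"))
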
All of this feature content was just a teaser for the second day's Devs on Stage keynote.

Devs on Stage

As a precursor to the Devs on Stage session, I bumped into Andrew Beers (Tableau's CTO) on the morning 5km run. At the time I didn't know he was the CTO and just thought he was a product developer, but that ignorance was probably a good thing, as we chatted about a broad range of topics instead of focusing on the roadmap, which was covered a few hours later in the keynote.

Here's the full list of what was covered in the keynote, with me expanding on the areas I'm excited about or think could have been implemented differently.

Visualisations & Web
 - Mark animations
Animations have been a missing feature in Tableau for a while now, and are one of the reasons people love D3 so much. As Hans Rosling showed in his presentations (see his TED Talk), motion is a very powerful pre-attentive attribute when showing change over time or changes in position. Having had a play with mark animations in the alpha release, it looks like Tableau's implementation will make the presentation of dashboards very engaging.

 - Pages play button on the web
 - PDF subscriptions
 - Extract refresh based on schedules
 - Public - web editing
 - Public - Web Explore
I can see this being an easily misused feature. It's great that you can open up another person's workbook to get under the hood and understand how it was built, but now that you can also edit on the web and save it to your profile (yes, it automatically attributes it to the original author), how many duplicate versions are we going to see of people's IronViz submissions or weekly challenges like Makeover Monday? Plagiarism of visualisations on Public has already been widely discussed on social media, including this post by Hesham (an IronViz winner).


 - Tooltip editing in the browser

Prep Builder 
 - List view
 - Fixed LOD
 - Rank 
 - Reusable steps
This will allow steps that are common across your flows to be saved to Server and reused. Coming from a background with a lot of Alteryx experience, where this concept is a standard macro and common steps built from a number of tools can be collapsed down into a single tool in the workflow, I was a bit disappointed to see that the result in Prep is to insert all of the steps that make up the saved flow, so it doesn't save any screen real estate.

 - Incremental refresh

Data Modelling
I've already covered my thoughts on data modelling above, and I would encourage you to re-watch the keynote for this section, as it is much easier to see a visual demonstration of the Data Model than to read the functionality written out here.

 - Relationships
 - Automatic LOD
 - Automatic Join
 - Multi-Fact Schema Support

Server administration
 - Custom welcome message
 - Custom login message
 - Login based licences
The change here means that when you change the persona of a user on Server, they will be allocated the corresponding licence; so if you change someone to Creator, they will be given a Creator licence, allowing them to use Desktop to build their own dashboards. What wasn't clear from the keynote, however, was how this works in practice with licence agreements. For example, when converting someone to a Creator, does it allocate a licence from an asset pool managed on Server, and if there are no available Creator licences, will it go off and buy one? Equally, if a user moves from Creator to Viewer, does their key get handed back to be reallocated to another user? If anyone has the answer to this then let me know and I'll update the blog.

 - Resource monitoring tool
 - Content migration tool

Analytics
 - Set control 
The introduction of Set Actions last year enabled users to use interactivity on the dashboard to update the members of a set, which is very useful functionality that Bethany Lyons covered in great detail at last year's conference. But what if you don't want another sheet in your dashboard just to change the members of the set interactively, and would rather have an interface that looks like a multi-select filter? That is what Set Control enables. It makes for a simpler interface, but I still love the interactive option.

 - Buffer calculations
I think this has to be my favourite new feature, and I ended up chatting a lot to the developers in the Data Village about additional functionality it would be good to see around buffers. Following the introduction of spatial calculations in 2019.1(?) with MakePoint() and Distance(), Buffer() now enables you to create a trade area around a point and use an intersect join to spatially match two datasets (customers and stores being the common example) based on the radius of the buffer. This is something I've done a lot in Alteryx using the Trade Area and Spatial Match tools, but doing it in Tableau makes it easy to change the radius of the buffer and see what a change in radius does to the number of customers in overlapping trade areas (there's a sketch of the same idea after this list).

Source: https://twitter.com/hyounpark/status/1195038757054906368?s=21

 - Dynamic parameters
I think this was the feature that got the most cheers from the crowd, and it has been one of the most requested features on the community forums. Now it has finally arrived, although they didn't say which version it will be in. For those that don't know: at the moment, when you create a list parameter you can populate the list from the values of a field in the data source. However, that list then becomes static and doesn't get updated when there is a new extract of data (imagine a list of dates). With Dynamic Parameters the list of values in the parameter control is no longer static, so it will save having to recreate the parameter list and republish the workbook whenever there are new values in the list.

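Coming back to the buffer calculations for a moment, here's a rough sketch of the trade-area-and-intersect idea in Python with geopandas. It is an analogue of MakePoint(), Buffer() and an intersect join rather than Tableau's own implementation; the coordinates, store names and 2 km radius are all made up, and a recent geopandas release is assumed:

import geopandas as gpd
from shapely.geometry import Point

# Made-up customer and store locations in a metric CRS (British National Grid),
# so buffer distances are in metres rather than degrees.
customers = gpd.GeoDataFrame(
    {"customer_id": [1, 2, 3]},
    geometry=[Point(530000, 180000), Point(530900, 180400), Point(535000, 182000)],
    crs="EPSG:27700",
)
stores = gpd.GeoDataFrame(
    {"store": ["Store A", "Store B"]},
    geometry=[Point(530500, 180200), Point(534500, 181800)],
    crs="EPSG:27700",
)

# Create a 2 km trade area around each store (the Buffer() step)...
trade_areas = stores.copy()
trade_areas["geometry"] = trade_areas.geometry.buffer(2000)

# ...then spatially match customers to the trade areas they fall inside
# (the intersect join). Re-run with a different radius to see how the
# customer counts per trade area change.
matched = gpd.sjoin(customers, trade_areas, how="inner", predicate="intersects")
print(matched.groupby("store")["customer_id"].count())
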
Hands on Training

One of the key reasons I managed to get my employer to support my attendance at conference was the ability to hand-pick training suited to my interests and skill set, which wouldn't be possible with an off-the-shelf training course. So the majority of the sessions I chose were hands-on training sessions, where you have a laptop in the room and follow along with an instructor through advanced/Jedi-level material. I picked sessions on using Python inside Tableau, the Metadata API, Jedi calcs and advanced table calculations.

The way these sessions are set up, you have access to a virtual environment where all of the required content is available to you (such as Anaconda already being installed for the Python session), plus a set of instructions to follow along with. You also get to take the data and instructions away with you, so post-conference you can reflect on what you did and practise, instead of leaving behind a saved workbook with all your advanced calculations and only memories of how you did it.

Another benefit of these sessions is that you can do things outside your current knowledge and then gain an understanding of how they can be applied. Before the conference I knew nothing about the APIs available or what you would use them for, but having now attended a hands-on session on the Metadata API, where I used the GraphiQL interface to write GraphQL queries, I have an appreciation of what is possible and will be looking at how it can be applied at work.
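
As a flavour of what those queries look like, here's a minimal sketch of calling the Metadata API from Python. The server URL, site and credentials are placeholders, the REST API version number depends on your server release, and the exact fields available vary by version, so treat it as illustrative rather than a copy-and-paste recipe:

import requests

SERVER = "https://tableau.example.com"   # placeholder server
SITE_CONTENT_URL = ""                    # "" is the Default site
USERNAME = "my_user"                     # placeholder credentials
PASSWORD = "my_password"

# 1. Sign in via the REST API to get an auth token.
signin = requests.post(
    f"{SERVER}/api/3.6/auth/signin",
    json={"credentials": {"name": USERNAME, "password": PASSWORD,
                          "site": {"contentUrl": SITE_CONTENT_URL}}},
    headers={"Accept": "application/json"},
)
signin.raise_for_status()
token = signin.json()["credentials"]["token"]

# 2. Send a GraphQL query to the Metadata API endpoint, the same sort of
#    query you would try out interactively in GraphiQL.
query = "{ workbooks { name projectName } }"
resp = requests.post(
    f"{SERVER}/api/metadata/graphql",
    json={"query": query},
    headers={"X-Tableau-Auth": token},
)
resp.raise_for_status()
for wb in resp.json()["data"]["workbooks"]:
    print(wb["projectName"], "/", wb["name"])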


Certification

I’m not going to provide an overview of the DCP exam as there are already a lot of blog posts covering this, such as this one by Mark Edwards.

Another selling point in my business case for going to Conference was that I could take my Desktop Certified Professional exam there (at a $100 discount!). As the DCP exam is three hours long, I preferred to do it at conference rather than at home: you don't have the set-up time with the online proctor, and for me it feels like a more formal exam experience than sitting on my sofa at home (which is what I had to do for my associate exam, due to WiFi issues during the set-up).

One consideration if you decide to take your exams at conference is that you sit the exam in a room with around 200 people doing a variety of exams, so there is a flow of people in and out as they finish before you. I didn't find this too distracting: I took the exam at the earliest time available, and it seems other people also just wanted to get it out of the way, so the room filled up at 9am, which minimised the disruption.

I am now on the 3 week wait for results...

Other sessions

Hackathon Demos

After my DCP exam I attended the demos, where just over 20 people presented what they had built during the six-hour hack earlier that day.

I found this to be a really fun session to see the variety of hacks the teams produced. 

If I get to go to conference again next year I would love to take part in it, as one of my post-conference promises to myself is to dig into the APIs that are available and start learning about them, so the Hack would be a good opportunity to put this to the test.


Explain Data

One of the tips I received before going to Conference last year was "Make sure you go to one of Bethany Lyons' sessions", and I want to pass that tip on to everyone reading this blog, as her session on set actions was the best session I went to last year. So this year, as soon as the conference app came out with the session list, I made sure I tracked down what Bethany was covering and attended one of her sessions.

I haven't used Explain Data yet, but I had seen what it could do from following attendees' tweets from TC Europe, so I was keen to get a demonstration. In this session Bethany took us through how it can be used and, just as importantly, what Explain Data is not. Like the data modelling tools, it is there to help guide you: Explain Data can guide you along your data exploration journey and help you understand trends and outliers in your data. However, she made very clear that just because a data point is an outlier it shouldn't simply be excluded; it might be that the data needs normalising, after which the point is no longer an outlier. And just because Explain Data does some statistics in the background doesn't mean you should take the results at face value, as correlation doesn't mean causation. So I see a feature that will be helpful, but one to be used with caution, making sure you can at least recall your high-school stats course when using it.


Beyond Design - Secret for Creating Engaging and Effective Information Experiences 

This session was led by two Zen Masters, Michael Cisneros and Lilach Manheim, so it was delivered by genuine masters of their field.
This was a very engaging presentation which looked to answer the community question of 'how do I make effective visualisations?' by walking through a set of example visualisations, many of which were familiar to me as they have been shared on Tableau Public, along with examples not built in Tableau.

The session drew out the design elements you need to consider when building effective dashboards. I really enjoy these sessions, as they reinforce the elements of visualisation design that you may have self-taught through reading books and blog posts, but delivered by experts in their field.

Two minute tips and tricks 

This was always going to be a hard slot to fill, coming in the afternoon at the end of conference and after Data Night Out! The session was led by Jeffrey Shaffer (co-author of The Big Book of Dashboards) and Luke Stanke (co-contributor to Workout Wednesday), both Zen Masters, so it was sure to be full of tips and tricks.

Unfortunately I didn't get any photos from this session, as it required focus to keep up with how quickly they went through the tips, so it will be a workbook I'll be sure to download from Tableau Public next week (and hopefully there will be a recording to watch it again).

Tips included building a Top 10 and Bottom 10 view on one sheet, which makes comparing the results much easier without needing to change the axis when putting it on a dashboard; a KPI dashboard; building out a calendar using the INDEX() function to fill in missing dates; and date calculations to get the last whole month, followed by relative calculations such as month-over-month and year-over-year change (a rough sketch of those relative calculations is below).
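
Since I didn't capture the exact calculations, here's a rough pandas sketch of the "last whole month" and relative change idea, using made-up daily sales data (the session itself did this with Tableau date functions and table calculations rather than code):

import pandas as pd

# Made-up daily sales spanning a couple of years.
dates = pd.date_range("2018-01-01", "2019-11-16", freq="D")
sales = pd.DataFrame({"date": dates, "sales": range(len(dates))})

# Roll daily records up to calendar months.
monthly = sales.set_index("date")["sales"].resample("MS").sum()

# "Last whole month": drop the current, partial month before comparing.
today = pd.Timestamp("2019-11-16")
last_whole_month = (today.to_period("M") - 1).to_timestamp()
monthly = monthly.loc[:last_whole_month]

# Relative calculations: month-over-month and year-over-year change.
comparison = pd.DataFrame({
    "sales": monthly,
    "mom_change": monthly.pct_change(1),    # vs the previous month
    "yoy_change": monthly.pct_change(12),   # vs the same month last year
})
print(comparison.tail())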

Labs feedback session

The way the schedule works out at conference means you can end up with some good gaps to explore the Data Village. There's so much going on there: it's the place to get food, meet vendors, catch breakout sessions and just have a bit of a chill out. But for me the highlight was the dev areas, where you can play with the new features, speak to the teams behind them and take part in interactive feedback sessions, like the one I joined exploring the potential of a chatbot for exploring data. I was keen to take part to see where the devs thought the product could go. In my role it's important to know your audience, and when looking at financial information different stakeholders want to consume it in very different ways, so a chatbot could have a use case there. Plus, one of the learnings from the Effective Dashboards session was that people understand text, so not everything needs to be presented as a chart.

The session was based on the Titanic passenger dataset, so you could ask questions via a Slack interface such as "How many passengers were on board?", "What was the distribution of passenger ages?", and "Did age increase your chance of survival?". The results I got from my questions were mixed: for example, receiving a chart when I would just expect a number, or a bar chart when expecting a scatterplot.

The session was a 'Wizard of Oz' experiment, a common method in the Lean methodology. This meant Tableau hadn't built a platform that could do the NLP, query the data and present the results back. What they actually had was someone behind a screen using Tableau Desktop to surface the results and share them back on screen. The technique is widely used; one example is a betting company testing whether a card account service would be beneficial for its premier customers by, with minimal set-up and outlay, providing a pre-paid card (think Revolut) whose balance a team adjusts manually (with a slight delay) to test the market for the product. So it will be interesting to see whether (and how) Tableau develops this further, as there are lots of considerations to factor in, such as not knowing the field names and being able to encode what the natural-language query actually represents. That's something you can see developing anyway with the Ask Data product.

Summary

So that's a wrap of my conference, which was a very valuable experience. The next US conference is going to be back in Las Vegas next year (I was hoping for somewhere closer to the East Coast, just so the time difference wasn't as big and the jet lag more manageable). But I'm excited to get back to my team and share my learnings with them.

