Becoming Data Informed

One of the biggest challenges for companies as they shift to being more data informed is the fear of losing control. How can we ensure that teams stay aligned to company objectives when they have the authority and responsibility to react to customer data, pivoting where necessary? Spotify famously popularised a way of organising product teams (tribes, chapters and guilds), so they encountered this challenge a few years ago. The DIBBs model was their approach to keeping all teams aligned to the customer strategy while preserving the independence those teams need.

Henrik Kniberg, who was an Agile/Lean coach at Spotify, gave a talk back in 2016 about how Spotify uses DIBBs to guide their autonomous teams. The talk covers the theory of the process but doesn't give many of the practical details.

We recently implemented DIBBs at UXDX to inform our product direction, and there was quite a bit of trial and error along the way. In this article I'll share the steps we went through, and the benefits we gained, in following the DIBBs model.

What is DIBBs?

DIBBs is a framework for helping teams derive actionable intelligence from their data. The steps, sketched in code after this list, are:

1) Analysing the available Data (D)

2) Translating the data into Insights (I)

3) Formulating Beliefs (B)

4) Placing Bets (B)
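
To make the chain from data to bets concrete, here is a minimal sketch in Python. It is purely illustrative - the types and field names are ours, not anything from Spotify - but it captures the property that matters: each layer traces back to the one before it.

    from dataclasses import dataclass

    @dataclass
    class Insight:
        summary: str                # what the data is telling us
        supporting_data: list[str]  # pointers to the raw data behind it

    @dataclass
    class Belief:
        statement: str              # an assumption we now hold
        based_on: list[Insight]     # the insights this belief rests on

    @dataclass
    class Bet:
        description: str            # the change we are going to try
        tests: list[Belief]         # the beliefs this bet acts on

    skills_gap = Insight("Talks don't build skills", ["post-event feedback"])
    belief = Belief("A 30 min talk can't teach a new skill", [skills_gap])
    bet = Bet("Run more hands-on workshops", [belief])

If a bet can't be traced back to a belief, or a belief back to an insight, you are guessing rather than being data informed.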

Simple, right? Like everything, putting theory into practice is where the difficulty lies. So, in this article I will go through the use case for our product development conference, UXDX. In particular, we will focus on attendee feedback - what they liked and what they disliked - and how using the DIBBs framework led to a complete turnaround in the improvements the team were planning to implement. While an event is technically a service, we don't see much of a difference between it and a product from a review and improvement perspective, so while reading, think about how you can apply the examples to your own situation.

1) Data

Fortunately, we have lots of data. Over 5,000 people have attended a UXDX event, and all of them were asked about their goals before the event and for feedback afterwards. Collating this qualitative information was a challenge: to derive insights we first needed a consistent way of categorising and assessing the data.

This took a lot of time, and we made a lot of costly mistakes, but through trial and error we agreed on a set of standardised category types. Luckily we had picked a good tool, Airtable, for categorising the data, which made changes trivial, saved a lot of time and protected our sanity.
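
As an illustration, a first pass at this kind of categorisation can be automated with simple keyword matching before a human reviews the edge cases. This sketch uses made-up category keywords - our real categories lived in Airtable and took several iterations to get right:

    # Hypothetical keyword map; the real categories came out of trial and error.
    CATEGORY_KEYWORDS = {
        "talks": ["speaker", "talk", "presentation", "slides"],
        "networking": ["network", "meet", "people", "connections"],
        "skills": ["learn", "technique", "hands-on", "workshop"],
        "venue": ["venue", "food", "seating", "audio"],
    }

    def categorise(feedback: str) -> list[str]:
        """Return every category whose keywords appear in the feedback text."""
        text = feedback.lower()
        matches = [category for category, words in CATEGORY_KEYWORDS.items()
                   if any(word in text for word in words)]
        return matches or ["uncategorised"]

    print(categorise("Great speakers, but I wanted more hands-on techniques"))
    # ['talks', 'skills']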

2) Insights

To start at a really high level, we first looked at our Net Promoter Score (NPS). NPS can be dangerous as a standalone metric, since it is easy to manipulate or misinterpret, but we still believed it was a good starting point for seeing where we needed to focus:

  • 42% of people rated the event 9 or 10 out of 10
  • 16% rated it from 1 to 6 out of 10

On a positive note, we seemed to be resonating with a large portion of our audience. But with 16% of people in the lower category we knew we needed to make some improvements.
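
For reference, the standard NPS calculation subtracts the percentage of detractors (scores of 6 or below) from the percentage of promoters (9s and 10s). A quick sketch, using an illustrative distribution that matches the figures above:

    def nps(scores: list[int]) -> float:
        """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    # 42% promoters, 42% passives, 16% detractors, as above:
    sample = [10] * 42 + [8] * 42 + [5] * 16
    print(nps(sample))  # 26.0

On our figures that works out to an NPS of about 26: respectable, but the 16% detractor figure was the number we cared about.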

We needed to dig deeper into each of these cohorts to understand why different attendees were having such different experiences. As I mentioned earlier, we ask people to share their goals before an event as well as feedback after it. We were originally worried that, if people weren't satisfied, they would prefer to stay anonymous (email is optional on the feedback form, but we incentivise people to enter it through a draw for free tickets). Fortunately, 75% of people who gave negative feedback did enter their email, which allowed us to tie their feedback back to the goals they had when they signed up for the conference. Not surprisingly, people leaving positive feedback had a much higher email entry rate, at 98%.
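
Mechanically, this join is simple once both datasets share an email column. A minimal pandas sketch with hypothetical column names (our data actually lived in Airtable, but the principle is identical):

    import pandas as pd

    # Hypothetical exports: pre-event goals and post-event feedback.
    goals = pd.DataFrame({
        "email": ["a@example.com", "b@example.com"],
        "goal": ["Network with my peers", "Learn advanced interviewing techniques"],
    })
    feedback = pd.DataFrame({
        "email": ["a@example.com", "b@example.com"],
        "rating": [10, 4],
        "comment": ["Loved it", "Too shallow for what I needed"],
    })

    # An inner join on email ties each rating back to the attendee's original goal.
    joined = goals.merge(feedback, on="email", how="inner")
    print(joined[["goal", "rating"]])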

When people gave positive reviews, their goals were:

  • Be inspired / Be exposed to new ideas
  • Learn how others are solving the same problems that we are having
  • Network with my peers
  • Learn how to better align UX and Dev / build cross functional teams
  • Get exposed to ideas outside of my current area of focus
  • Find out the latest trends in Product / UX / Dev

When we looked into the original goals of attendees who rated the event poorly, a different pattern emerged. Their goals were much more specific and focused:

  • Learn advanced Customer Interviewing techniques
  • How to embed OKRs in a scaling startup
  • Learn about customer research techniques, specifically in a B2B context
  • Multivariate testing tools and techniques at scale

Now some real insights were starting to emerge. The conference serves the needs of people looking for knowledge, inspiration and networking, particularly in the area of cross-team collaboration. However, when goals centre on learning particular skills, people feel that the conference does not deliver.

3) Beliefs

The next step was to form some assumptions, or beliefs, as to why the conference, in its current structure, was not helping people increase their skills. For context, in 2019 the conference had two stages with 40 talks across the two days, plus five 3-hour workshops. After looking at our insights, we came up with the following beliefs:

  1. The 30 min case study talk format delivers inspiration and insight into how teams are solving problems like those faced by our audience
  2. The 30 min case study talk format does not improve skills as it does not go into enough detail for people to be able to pick up the new processes / tools / methodologies without further training

These beliefs may seem very straightforward, but as you will see below, they proved quite valuable in terms of choosing which Bets to pursue.

4) Bets

To figure out how to solve the problems, we gathered the full team, who had also been involved in the insight identification process, to ideate on the best solutions.

The First Bet: Longer Sessions

The first solution that came to mind, and the one we were pursuing before the DIBBs exercise, was to make the sessions longer so speakers could go into more detail. This would be a big change: we would need to cut the number of talks by 33% to accommodate the longer slots. Given the risk, we went back to the data and realised that more people valued the short talk length and the variety of talks than were asking for more detail. Had we followed through on this bet, it would have made the event worse for the majority of attendees.

The Second Bet: Better Communication

Since we had a difference in expectations between the groups, we mulled over the idea of communicating more clearly about the styles of talks and the use cases they serve. This would probably have improved the metric, but our mission is to help teams shift from projects to products, and learning new skills is a critical part of that journey. Updating the communication to call out that we don't serve particular use cases would conflict with our mission, so we rejected it.

The Third Bet: Label Talks

Every person has different areas of specialisation, knowledge and experience, which means two people can watch the same talk and form very different opinions on its content and quality. Our third bet was to label talks with an audience seniority level, so people know whether a talk is introductory or aimed at those who already have a base level of knowledge. But if we go back to our beliefs, labelling won't solve the problem that you can't go hands-on enough in a 30-minute talk to improve someone's skills. We still liked this bet, and it has a low cost of implementation, so we decided to pursue it anyway.

The Fourth Bet: Make Talks More Detailed

Our fourth idea was to keep the talks the same length but see if we could make them more detailed. We have access to five years' worth of ratings on talks, as well as all of the videos, and there is a clear alignment between more detail-oriented talks and higher satisfaction ratings. We also reviewed the poorest-performing videos to identify the mistakes that speakers commonly make. Using this data we have started to create a guide for speakers on what good looks like and which mistakes to avoid. Again, this is something we are going to adopt, and we expect it to improve the satisfaction ratings of the talks, but, looking back at our beliefs, it still doesn't solve our problem around new skill adoption.
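
That kind of check is easy to reproduce: tag each historical talk with how detailed it was, then compare average ratings per tag. A sketch with made-up labels and numbers:

    import pandas as pd

    # Hypothetical talk ratings, tagged with a detail level during review.
    talks = pd.DataFrame({
        "detail_level": ["high", "high", "medium", "low", "low", "low"],
        "avg_rating":   [4.6,    4.4,    3.9,      3.2,   3.5,   3.0],
    })

    # Average satisfaction per detail level; a clear gradient supports the belief.
    print(talks.groupby("detail_level")["avg_rating"].mean().sort_values(ascending=False))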

The Fifth Bet: More Workshops

In 2019 we ran five workshops on topics aligned to the agenda. Workshops are where people can go into the details of particular topics, but from an organisational perspective they are a challenge to plan and execute. Do you run an additional workshop day? We receive a lot of feedback that people can't take that many days out of the office. Do you run them in parallel with the talks? Then people have to choose between the talks and the workshops. Our bet is to increase the number of workshops to 20 in 2020 and run them in parallel with the talks. This keeps the ticket cost down for attendees and limits the number of days out of the office. And for the FOMO crowd, we will share the videos of all the talks with attendees.

What did we learn?

Our gut reaction was to respond literally to the feedback people gave: make the sessions longer so that speakers could go deeper into a topic. That isn't being data informed - it's being data-reactionary.

Had we not clearly stated our beliefs, we might have ended up with "good" ideas that improve the product but don't solve the underlying problem.

Using the DIBBs framework, we avoided making a bad decision, we generated more ideas by involving the full team, and the team now has much more ownership of the problem space and the solutions. The framework gave management the comfort to delegate decision making to teams without the fear that the teams would head off in the wrong direction. Overall, there was better alignment between the customer, the team and the business. It worked well for us, so give it a try and see how it works for you.

Rory Madden

Founder, UXDX

I hate "it depends"! Organisations are complex, but if you resort to "it depends" it means you haven't explained something properly or you don't understand it. Having run UXDX for over 6 years, I am using the knowledge from hundreds of case studies to create the UXDX model - an opinionated, principle-driven model that will help organisations change their ways of working without "it depends".
