How To Engineer Luck With Data
We all know the ones that got lucky.
Luck has been the necessary ingredient in every business success story I've ever heard. Apart from hard work, it's the 'lucky break' that has separated the winners from the losers. There's some inflexion point that happens and the market just "gets it".
I've also noticed that the cultural trope of the successful CEO is still alive and well: it's the CEO who 'just knows' or has a 'gut feel' about which direction to take, what to do next, who to hire and fire. It always makes me think of Mad Men's Don Draper.
But the reality is that these 'gut led' CEOs aren't all that successful.
The documented success rate of CEOs in general is no better than 50/50 (and some estimate a failure rate as high as 75%), but the trope persists.
As much as we enjoyed watching Don Draper do his thing, the reality is that a failed CEO affects a lot of people. Failed CEOs cost shareholders money and employees their jobs. Some claim they have just been woefully unlucky, but could they have increased their odds of succeeding?
Data rules. Long live the data-driven CEO.
Recently, in this Age of Data, the 'intuitive CEO' is being replaced with a new and far more reliable trope: the data-driven CEO.
The CEO who comes up with great ideas and knows how to test them, who separates the data signal from the noise, and who looks for input, feedback, and numbers before jumping head first into anything.
In the Age of Data luck can be engineered.
There are three steps you'll need to get right before you can expect data to start blowing a tailwind into your business: CENTRALISE IT. SHARE IT. USE IT.
It's simple but not easy. Here's how to do it.
1. Centralise your data to monetise it by getting rid of pesky (and pervasive) data silos.
Data collection is becoming mainstream. It's one of the reasons we are creating data at such an incredible rate. It's also why 'big data' has become as much of a problem as it is a solution and why you can drown in your new 'data lake': too much data is almost as bad as no data at all.
Collecting and tracking absolutely every data point sounds like a hedge, but it often leads to data paralysis: a state in which you can no longer see the wood for the trees.
It isn't data availability, or a lack of talent to analyse and interpret the data, that's holding us back. According to HBR, "the biggest obstacle to using advanced data analysis is plain old access to the data." That means only one thing: the dreaded silo.
Silos occur for many reasons, some more benign than others.
For example, rapid company growth can lead to replacement systems not incorporating unstructured information from the old systems. Less pleasantly, internal company politicking can keep data locked up in one department and away from someone who needs it. To engineer your luck with data, whatever the reasons for the silos, you must break them down.
You can do this by centralising your data in a warehouse like Amazon's Redshift or Google's BigQuery and connecting your analytics platforms via an ETL tool like Stitch or Fivetran. Alternatively, you can integrate a product like Segment or mParticle to centralise your analytics and connect other platforms. In fact, Segment recently announced that it can now track a user across all integrations without third-party cookies.
Both a warehouse and third party solution have benefits and drawbacks.
- Redshift/BigQuery + ETL (Extract, Transform, Load) get the data in, but don't help you get the data out. That means selecting another vendor that provides a business intelligence or data visualisation service to help you leverage all of the cross-platform data in the warehouse. So, while it's the most flexible, the warehouse solution may also get quite expensive for the most cost conscious among us.
- Segment and mParticle are likely to be more cost effective (but far from cheap), but they do have limitations in terms of integrations. Also, you can't query the data in raw form the way you can when it's indexed in a query-friendly warehouse.
If neither of the above solutions work, get creative, get manual or get help and, most importantly, choose your vendors carefully. If you control or influence what other third party platforms your company buys, always ask the question: "How does this play with the solutions we already have in place?"
When you centralise all of your data - Google Analytics, web analytics, email marketing, heat maps, A/B testing, social analytics, mobile analytics, CRM (and legacy CRM), and ad serving analytics - you will be one step closer to luck.
And one step ahead of the vast majority of your competitors.
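To make the payoff of centralisation concrete, here's a minimal sketch in Python. SQLite stands in for a warehouse like Redshift or BigQuery, and the `email_events` and `crm_deals` tables (and their columns) are hypothetical examples, not a prescribed schema. The point is that once two formerly siloed datasets share one store, a single join answers a cross-silo question.

```python
import sqlite3

# SQLite stands in for the central warehouse; the tables below are
# hypothetical email-marketing and CRM exports landed by an ETL tool.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE email_events (user_id TEXT, opened INTEGER);
    CREATE TABLE crm_deals    (user_id TEXT, deal_value REAL);
    INSERT INTO email_events VALUES ('u1', 1), ('u2', 0), ('u3', 1);
    INSERT INTO crm_deals    VALUES ('u1', 5000.0), ('u3', 12000.0);
""")

# Deal value attributable to users who opened the campaign email -
# a one-line join here, and near-impossible while the two datasets
# live in separate silos.
row = db.execute("""
    SELECT SUM(d.deal_value)
    FROM crm_deals d
    JOIN email_events e ON e.user_id = d.user_id
    WHERE e.opened = 1
""").fetchone()
print(row[0])
```

The same shape of query works whether the warehouse is SQLite on a laptop or BigQuery at scale; what changes your luck is that the join is possible at all.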
2. Share data with your team. Distribute key metrics to those who need them to be even more excellent at what they do.
On the subject of competitors: many organisations still believe that starting with Analytics 101 with the aim of outrunning the competition is enough. It may not be. A recent MIT Sloan and SAS Analytics Survey found that "competitive advantage with analytics is waning. The percentage of companies that report obtaining a competitive advantage with analytics has declined significantly over the past two years."
This is both good news and bad news.
It's good news because it means that the bar is rising. The more companies excel at analytics, the better and more personalised the customer experience, the faster great companies grow, and the more efficiencies are realised.
It's bad news because for most organisations, it means that being at the start line or even jumping the first hurdle is no longer enough.
To be better than the rest, you need to make sure that after data is centralised, it's effectively distributed. That means figuring out who in your organisation needs to know, what they need to know, and how frequently they need to know it - and making sure they get it.
From marketing to product to strategy to technology to growth, every team will have their own metrics to monitor and improve on. Do you know what they are and how to get them to each team?
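One way to make that routing explicit is a simple mapping from teams to the metrics they've asked to monitor. The team names and metric names below are illustrative assumptions, not a recommended taxonomy; the idea is just that each team receives its own slice of a metrics snapshot rather than the whole lake.

```python
# Hypothetical mapping of teams to the metrics they monitor.
TEAM_METRICS = {
    "marketing": ["cac", "email_open_rate"],
    "product":   ["dau", "retention_d7"],
    "growth":    ["signup_conversion", "cac"],
}

def report_for(team, snapshot):
    """Return only the metrics a given team has asked to monitor."""
    wanted = TEAM_METRICS.get(team, [])
    return {name: snapshot[name] for name in wanted if name in snapshot}

# A made-up daily snapshot pulled from the centralised warehouse.
snapshot = {"cac": 42.0, "email_open_rate": 0.31,
            "dau": 12000, "retention_d7": 0.18,
            "signup_conversion": 0.05}

print(report_for("product", snapshot))
```

Whether this lives in a scheduled script, a BI dashboard, or a Slack bot matters less than the discipline of deciding, per team, what they need and how often.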
Once your teams are getting the data they need to excel at their jobs, you're another step closer to engineering luck with data.
One remains - and it's not as obvious as it sounds.
3. Use the data. Learn to read the signals, design thoughtful experiments to test your ideas, and iterate.
This may seem like one of the most obvious things you will read today but it's surprising how much strategic head space and discipline it takes to actually put the insights you glean from your data to good use.
To do it right, you must think like a scientist. We've all been through the science curriculum in high school and remember how to conduct an experiment.
You start with a hypothesis, which you frame as the aim of an experiment. You then write up a method (controlling for variables, keeping a control version) to test that aim. Then you collect and analyse the results and write your conclusion.
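Those steps can be sketched for the most common business experiment: an A/B test. Here is a minimal, stdlib-only two-proportion z-test, with made-up example counts - the hypothesis is that variant B converts better than control A, the method is the split test, and the p-value is the analysis that drives the conclusion.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z, one-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value via the normal CDF (expressed with erf).
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Made-up counts: 120/2400 conversions on control A, 156/2400 on variant B.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 2), round(p, 4))
```

A small p-value supports shipping B; otherwise you keep the control, record what you learned, and design the next experiment.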
Easily done. The only challenge is that translating this approach to a day job isn't intuitive, but it's necessary if you're reading signals from data and coming up with theories about how to do more of what's working and less of what's not.
The best thing is that this approach can be applied in every department - from data-driven marketing to product development to sales to growth to development - and will significantly decrease the chances of failure. Listening to data, learning from it, and iterating your product or service on what you learn is the final - and critical - piece in the puzzle.
And there you have it.
When it comes to data monetisation: CENTRALISE IT. SHARE IT. USE IT. That's the maxim of engineering your luck with data. It's not a guarantee (because, well, there aren't any) but it's as close to one as we're going to get.
About the author: Alexandra is the Head of Growth at DataMuse, a data and growth consultancy that specialises in monetising data and growing data ROI by mining data for insights with impact, developing and implementing robust data strategies, and centralising data silos. The founding team have 25 years combined experience in strategy, growth, and data analytics. Alex holds a BA from Yale University and an MBA from the University of Cambridge.