We’ve all heard that data is the new [insert-valuable-commodity-here], especially in marketing. Yet when it comes to actually analysing that data, it can be all too easy to be seduced by the first idea that comes along. We know as well as anyone that time is short, and the board are keenly waiting to hear about your new initiative, not to mention the projected returns it will deliver.
For marketers working with data, it can often feel like there is pressure to deliver data-driven innovation at the same pace as every other department and team. Weekly sales reports, monthly targets, and daily trading updates are all part of the drumbeat that runs through every retail operation, and your desk is no different. But data science isn’t like all those other departments the board is used to dealing with.
Big data might be the hot new thing, but rushing into building a narrative, taking a run at the testing, and simply hoping that it all comes out in the wash is not the way to surprise and delight. Several months down the line you want to be able to put forward a confident set of predictions on future behaviour, based on past testing. With that in mind, we’ve put together three key concepts that it is vital for the board to get their heads around:
- Repeat your tests
Your new social targeting based on enriched audience data worked a treat, and you’ve had a 22% uplift in sales. But before you decide which fancy PowerPoint transition to use to announce it at the team meeting… run it again. And one more time for luck. If you are still getting results around that 20% mark, then you have a result worth bringing along to that meeting.
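One quick sanity check before the repeat runs is a simple significance test. The numbers below are entirely hypothetical, and this is only a sketch of a two-proportion z-test; the point is that a 22% relative uplift on a single run can still sit inside normal noise.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: a 22% relative uplift (10.0% -> 12.2% conversion)
# on 1,000 visitors per arm.
z = two_proportion_z(100, 1000, 122, 1000)
print(round(z, 2))  # below 1.96, so not significant at the 95% level
```

On these made-up figures the z statistic comes out below the 1.96 threshold, which is exactly why "run it again" is the right instinct before the team meeting.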
- Confounding variables
Multiple changes at once can muddy the waters. Did that new 10% off voucher go live at the same time as the web team reduced the number of questions on the cart page? If four things happened together (not to mention the seasonal ebb and flow), you’ll need to re-run the test to work out which was the main driver of the sales uplift.
You’re not alone here. This is one of the key issues in statistics, and one of the main sources of error, especially in very complex problems like medicine and marketing. It’s an area where machine learning, given enough data, can help to isolate patterns and symmetries and identify the root cause of a change in performance.
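If you can arrange it, running simultaneous changes as a factorial test makes them much easier to untangle. A minimal sketch with invented average order values, assuming the voucher and the shorter cart page were toggled independently in a 2x2 design:

```python
# Hypothetical average order values from a 2x2 test in which the
# voucher and the shorter cart page were toggled independently.
# Keys are (voucher_on, short_cart_on).
sales = {
    (0, 0): 100.0,  # neither change
    (1, 0): 103.0,  # voucher only
    (0, 1): 109.0,  # shorter cart page only
    (1, 1): 112.0,  # both changes live
}

# Main effect of each change: the average difference when it is on vs off.
voucher_effect = ((sales[(1, 0)] - sales[(0, 0)])
                  + (sales[(1, 1)] - sales[(0, 1)])) / 2
cart_effect = ((sales[(0, 1)] - sales[(0, 0)])
               + (sales[(1, 1)] - sales[(1, 0)])) / 2
print(voucher_effect, cart_effect)  # 3.0 9.0
```

On these made-up numbers the shorter cart page, not the voucher, drove most of the uplift; a single combined before/after comparison would never have told you that.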
- Twyman’s law
Any figure that looks unusual or interesting is probably wrong. This is a big one for the Vuzo team, however much we might want to pinpoint it as the hot new trend! We find a suspiciously high number of people apparently born on 1st Jan 1980: in reality, visitors who resent filling in unnecessary DOB fields and leave the default in place. There are plenty of glitches like Potwin’s Farm to mess up your geo-targeting. And that’s before you run into Simpson’s Paradox, or a hole in your Google Analytics visitor stats from the clocks going forward an hour.
Identifying systematic errors is not something humans are terribly good at, which is why tools like Bayesian statistics and machine learning can help decide what is really a glitch and what is simply something we don’t like the look of. (Incidentally, one of the main sources of distortion in statistics is dropping data points because you think they are outliers.)
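A first pass at spotting that kind of glitch can be entirely mechanical rather than a matter of taste. This is only an illustration with invented data: flag any value whose count towers over the typical count, instead of eyeballing the data and deleting "outliers" by hand.

```python
from collections import Counter

def suspicious_values(values, ratio=10):
    """Flag values whose count is far above the typical (median) count,
    which usually signals a form default or placeholder, not real data."""
    counts = Counter(values)
    typical = sorted(counts.values())[len(counts) // 2]  # median count
    return [v for v, c in counts.items() if c >= ratio * typical]

# Hypothetical DOB field: most dates appear a couple of times,
# but the form default "1980-01-01" shows up fifty times.
dobs = ["1980-01-01"] * 50 + ["1987-03-14", "1992-07-02", "1975-11-30"] * 2
print(suspicious_values(dobs))  # ['1980-01-01']
```

A flag like this only tells you where to look; deciding whether the spike is a glitch or a genuine pattern is still a judgement call, ideally made before anyone deletes a single row.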
So, once you have got buy-in from the board, they will be more likely to be understanding about why results might be taking a little longer, and probably very grateful to know things are being done properly!