
How To Determine Whether Your Marketing Is Working

All retail marketers try to measure the impact of their strategies and tactics. They have been asked to prove return on investment for too long to survive without an answer. Yet many marketing teams use the wrong approach. In fact, the most common methods for determining which marketing is working don’t actually measure ROI at all.

Forrester found that marketers spent approximately $1.09 billion in 2017 to measure their impact. In many large companies, that investment goes to multitouch attribution. Smaller companies rely on attribution rules, such as last-click attribution or weighted attribution. Attribution models have different reasons for their allure, but they have one thing in common: they don’t actually measure the incremental sales driven by marketing.

What’s Wrong With Attribution

Every attribution model seeks to help marketers determine which tactics drive revenue. Unfortunately, multitouch attribution and all rules-based attribution models have several flaws.


For starters, these models cannot measure incrementality. In other words, they can’t measure the difference between running your advertising and doing nothing at all. This is one of the reasons multitouch attribution is considered to be in the “trough of disillusionment.” It can’t deliver the most foundational thing that all marketers are being pressed to prove. Without measuring incrementality, you cannot measure the ROI of your marketing investment.

Furthermore, multitouch attribution needs to capture every customer interaction and use that data to judge each touchpoint’s influence on purchase decisions. But capturing every touchpoint is impossible. Too many customers engage with brands in untrackable ways, like offline conversations. Customers also have existing brand affinity that could drive purchases on its own.

Lastly, the complex methodologies underpinning multitouch attribution are various forms of forecasting. They use the past to predict the future. But past results do not guarantee that trends will continue. Different creative, competitive activity and changing media habits are just three of the reasons multitouch attribution is a poor fit for evaluating the performance of your current campaigns.

To drive more sales, retail marketers must use a different method.

The Science-Based Approach To Measuring Incrementality

The only way to measure incrementality is to compare a test group to a control group. That’s why pharmaceutical companies are required by the FDA to conduct double-blind studies. By comparing a group using a new drug (aka the exposed group) to a group taking a placebo (aka the control group), you know whether the drug is actually working.

Historically, this test vs. control approach was part of most marketers’ tool kit via test markets. For example, a retailer would choose one or more representative metro areas in which to launch a new product line extension. They would then compare their overall brand sales in the test markets to other markets where they had not launched the new product. Test markets are seldom used these days because they are unacceptably slow and give the competition too much insight into future plans.

Today, retail marketers can accurately measure incremental sales without having to launch a test market. By tracking ad exposures at an individual level and connecting those exposures to an ID graph, they can find out who has seen an ad and who has not. This tracking happens all day, every day in the digital marketing ecosystem. What’s relatively new is the ability to compare sales among the exposed group to sales among the control (unexposed) group. The result is an accurate and fast measurement of incremental sales.
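As a rough illustration of that exposed-versus-control comparison, the sketch below computes a sales lift from per-customer spend joined to exposure flags. The data, column names and numbers are invented for illustration and are not any particular vendor’s methodology.

```python
import numpy as np
import pandas as pd

# Illustrative data: per-customer sales joined to ad-exposure flags
# (in practice this join would come from exposure logs and an ID graph).
rng = np.random.default_rng(42)
n = 100_000
customers = pd.DataFrame({
    "exposed": rng.random(n) < 0.6,                     # saw the ad
    "sales": rng.gamma(shape=2.0, scale=25.0, size=n),  # spend per customer
})
# Pretend the ad nudged exposed customers' spend upward slightly.
customers.loc[customers["exposed"], "sales"] *= 1.03

exposed = customers.loc[customers["exposed"], "sales"]
control = customers.loc[~customers["exposed"], "sales"]

lift_per_customer = exposed.mean() - control.mean()
lift_pct = lift_per_customer / control.mean() * 100
incremental_sales = lift_per_customer * len(exposed)  # scale to the exposed group

print(f"Exposed avg spend:  ${exposed.mean():.2f}")
print(f"Control avg spend:  ${control.mean():.2f}")
print(f"Lift per customer:  ${lift_per_customer:.2f} ({lift_pct:.1f}%)")
print(f"Estimated incremental sales: ${incremental_sales:,.0f}")
```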

Advice For Making The Switch

Retail marketers seeking to improve the accuracy of their incrementality testing should follow three scientific rules:

1. Test for statistical significance.

Not all results are relevant. Test for statistical significance to determine whether the results of your marketing reflect real differences between the exposed and control groups or whether random variation (aka noise) accounts for them. Just because a coin lands on heads in six of the first 10 flips does not mean the odds favor heads in future flips.
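A common way to run that check is a two-sample significance test on the exposed and control groups. The sketch below uses Welch’s t-test from SciPy on made-up spend data; it is one reasonable approach under those assumptions, not necessarily the test any particular team uses.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative per-customer spend for control and exposed groups.
control = rng.gamma(shape=2.0, scale=25.0, size=50_000)
exposed = rng.gamma(shape=2.0, scale=25.0, size=50_000) * 1.02  # small true lift

# Welch's t-test: does the observed difference exceed what noise could explain?
t_stat, p_value = stats.ttest_ind(exposed, control, equal_var=False)

alpha = 0.05
print(f"Observed lift: {exposed.mean() / control.mean() - 1:.2%}")
print(f"p-value: {p_value:.4f}")
if p_value < alpha:
    print("Statistically significant: unlikely to be noise alone.")
else:
    print("Not significant: the difference could plausibly be noise.")
```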

My company, Commerce Signals, worked with a large retailer to measure the sales lift driven by 10 different digital marketing tactics. Three of the tactics showed statistical significance, so the retailer reinvested in those approaches to more than double the incremental in-store sales that were generated by this campaign.

2. Use large sample sizes.

Large samples have big benefits beyond making statistically significant results more likely. The bigger the sample size, the more reliably you can drill into the results to see the incremental sales impact of varied creative, promotions, audience targets, demand-side platforms and more.

Let’s use the coin flip example again. The more times you flip a fair coin, the less likely it is that the share of heads will stray far from 50%. In a marketing context, this means that with larger sample sizes, you are less likely to get results that merely reflect randomness.
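That coin-flip intuition is easy to simulate. The illustrative sketch below estimates how often a fair coin shows 60% or more heads purely by chance at different sample sizes, the analogue of a spurious "lift" in a small test; the numbers are purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# For each sample size, estimate how often a fair coin shows >= 60% heads
# purely by chance -- the analogue of a spurious "lift" in a small test.
for n_flips in (10, 100, 1_000, 10_000):
    trials = rng.binomial(n=n_flips, p=0.5, size=100_000)
    prob_extreme = np.mean(trials / n_flips >= 0.60)
    print(f"{n_flips:>6} flips: P(>=60% heads) ~ {prob_extreme:.3f}")

# Roughly: ~0.38 at 10 flips, ~0.03 at 100, essentially 0 at 10,000 --
# larger samples make chance-driven extremes vanishingly unlikely.
```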

3. Compare test and control groups before new campaigns.

Just like the test market approach, you should compare the sales of your test and control groups in a time period before your marketing starts. Essentially, this shows you whether you have a “good” control group or whether your interpretation needs to be adjusted.

For example, if your exposed group bought 5% more than the control before your marketing started, you’d want to see it buy at least 6% more during and after your ads aired before declaring a positive sales lift, since only the increase beyond that pre-existing 5% gap can be credited to the ads.
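One simple way to apply that pre-period check is a difference-in-differences style adjustment: subtract the gap that existed before the campaign from the gap observed during it. The numbers below are made up to mirror the 5%/6% example.

```python
# Illustrative group averages (spend per customer); numbers are made up.
pre_exposed,  pre_control  = 105.0, 100.0   # exposed bought 5% more before ads
post_exposed, post_control = 106.0, 100.0   # exposed bought 6% more during/after

pre_gap  = pre_exposed / pre_control - 1    # 5% pre-existing difference
post_gap = post_exposed / post_control - 1  # 6% difference during the campaign

# Only the change in the gap is credited to the marketing.
adjusted_lift = post_gap - pre_gap
print(f"Pre-period gap: {pre_gap:.1%}")
print(f"Campaign gap:   {post_gap:.1%}")
print(f"Adjusted lift:  {adjusted_lift:.1%}")  # ~1% attributable to the ads
```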

Measure sales. Shift spend. Grow faster. And look good while doing it.

Nick Mangiapane is Chief Marketing Officer of Commerce Signals, a data insights platform that helps marketers make better decisions in near-real time. Mangiapane is a pragmatic consumer marketer with leadership experience from Procter & Gamble, Newell Rubbermaid and Ingersoll Rand in addition to Commerce Signals. Mangiapane is an alum of Boston College and Cornell’s business school.
