
How to Test Your Paid Media For Optimal Results, Part 3

Testing is the best way to continually improve your brand’s paid media performance.

With the right mindset and framework, you can iterate and optimize your marketing channels to become brand growth powerhouses.

If you’re just joining us in this series, be sure to check out part one and part two for answers to key questions about our company’s approach to testing.

Today we’re learning from Steve Geick, Team Supervisor (read his past articles here!), and Lauren Nadan, VP of Performance Marketing (read her past writings here!).

Here are more strategies and tactics you can implement for optimal results.

# # #

Can you describe your testing framework?

I was first inspired by a testing framework that colleagues of mine from other companies introduced in an online growth hacking community. Sean Ellis posted it to his group, and I’ve since adapted it both for startups I’ve worked at and for Metric Digital’s clients.

One of the biggest advantages of using a testing framework appears when you have a number of stakeholders with various sets of ideas. Startups, for example, often have teams that include marketers, engineers, and designers, all of whom have great ideas to grow the business. What’s often missing is a framework to organize and prioritize those ideas, track the results of the tests, and log the learnings.

What's the structure for that?

Trello boards are one helpful format for organizing our testing framework, but companies should feel free to use whatever platform best suits their needs.

Ideation is first. This is where you bank all ideas for the different types of tests you’d like to try on accounts. Keep it short, specific, and simple. Only one or two lines, e.g., “Test Google Discovery Ads,” “RLSA Campaigns,” “Custom in-market audience on broad keywords with target CPA.”

Prioritization comes next. Assign each idea a score from one to ten based on three factors, also known as ICE: Impact (how much the test could move the needle), Confidence (how likely it is to work), and Ease (how easy it is to implement).
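The prioritization step above can be sketched in a few lines of code. This is a minimal illustration; the idea names and scores are hypothetical, and the simple average is just one common way to combine ICE factors:

```python
# Hypothetical ICE prioritization: score each test idea 1-10 on
# Impact, Confidence, and Ease, then rank by the average.
ideas = [
    {"name": "Test Google Discovery Ads", "impact": 8, "confidence": 6, "ease": 7},
    {"name": "RLSA Campaigns", "impact": 6, "confidence": 8, "ease": 9},
    {"name": "In-market audience on broad keywords", "impact": 9, "confidence": 5, "ease": 4},
]

def ice_score(idea):
    """Average of the three ICE factors (each scored 1-10)."""
    return (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

# Highest-priority ideas first
for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f'{idea["name"]}: {ice_score(idea):.1f}')
```

Ranking by a combined score keeps the backlog ordered, so the next test to run is always the top card on the board.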

Testing is straightforward. It’s where you list whatever tests are in progress.

Analysis is where you put your insights and learnings. It’s helpful to include graphs, metrics, snapshots, observations, and other key feedback from your experiments. As the final step, this stage fuels new ideas that lead you back to the ideation phase, and the cycle starts all over again.

How important is statistical significance in digital marketing testing?

It depends on your goals. Your burden of proof will depend on whether you’re aiming for efficiency with metrics like ROAS or CPA, or whether you’re focused on scaling. What’s important to remember is whether the channel is new for the brand. If so, set expectations a little lower right out of the gate, trusting that performance will likely improve over time once you make optimizations.
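For readers who do want a concrete significance check, here is an illustrative sketch of a two-proportion z-test on conversion rates, using only the Python standard library. The numbers are made up, and this is one common approach, not necessarily the interviewees’ method; it assumes reasonably large samples:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 120 conversions on 4,000 clicks vs. 150 on 4,000
z, p = two_proportion_z_test(120, 4000, 150, 4000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below your chosen threshold (0.05 is conventional) suggests the difference between variants is unlikely to be noise; in the example above, the difference falls just short of that bar.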

How much should brands spend on testing?

This number will vary by platform and tactic. Brands have to decide what outcome they want and how much they want to get out of their tests. How many conversions does our ecommerce brand want, commensurate with the context of our ad spend? There’s no universal answer.

You also don’t want to allocate too much budget to testing, because ultimately you want to optimize into high performance and carve out a smaller test budget, since those initial tests might not work. But every brand is different: some companies have no budget for testing new channels, others do. Most clients do try to diversify their channel mix beyond just Facebook and Google (read our latest article about expanding your approach to customer acquisition). A good rule of thumb is that eighty percent of your budget should go to what you know is working.

Another factor in determining testing budget is where your company is along its growth trajectory. We work with many newly launched brands in their growth stage; for them, to an extent, everything is a test. More established brands might fall closer to the twenty percent testing figure implied above.
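The eighty/twenty guideline above can be expressed as a toy budget allocation. The channel names, even splits, and ratio are assumptions for illustration only, a starting point rather than a prescription:

```python
# Toy sketch of the eighty/twenty guideline: ~80% of spend on proven
# channels, the remaining ~20% split across test channels.
def allocate_budget(total, proven, tests, test_share=0.20):
    """Split `total` spend evenly within the proven and test groups."""
    plan = {}
    for channel in proven:
        plan[channel] = total * (1 - test_share) / len(proven)
    for channel in tests:
        plan[channel] = total * test_share / len(tests)
    return plan

plan = allocate_budget(10_000, ["Facebook", "Google"], ["TikTok", "Pinterest"])
print(plan)  # {'Facebook': 4000.0, 'Google': 4000.0, 'TikTok': 1000.0, 'Pinterest': 1000.0}
```

A growth-stage brand might push `test_share` much higher; a mature brand might hold it near twenty percent, as described above.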

Imagine a brand comes to you and says, “We just spent five hundred bucks on ads and nothing works!” What do you say?

Sometimes a brand will throw a lot at the digital marketing wall, much of which won’t work. That’s understandably frustrating, and there are several ways to prevent it from happening too often. The key is to align on expectations. To test properly, you need the proper inputs: good ad creative, the right quantity of ad creative, enough budget to garner meaningful results, and so on.

How do you teach the principles of testing to your team internally?

The nature of testing in digital marketing has changed drastically in the past few years. For one, it’s no longer manual. It’s not simply setting up manual A/B tests, making manual analyses and optimizations, and picking winners. Machine learning rules the day: you feed the algorithm options, allow it to optimize in a way humans can’t, and then consistently funnel spend into those results.

Another point is that brands need to be broader with their hypotheses. There are no more preconceived notions about what’s going to work. Platforms are expert at finding who the customer is and optimizing spend into that audience, while still putting some adjacent spend toward people who might not fit that narrow customer profile but still add value. If we cut them out completely, we’re leaving money on the table. Our recommendation to clients: let the algorithm stretch and deliver to as many types of consumers as possible.

Our agency’s book of business spans multiple industries, verticals, company sizes, etc. Are there any testing misconceptions across the board?

Companies might believe they always have to be super clear on what’s worked and not worked for their brand. But because so much of digital marketing is machine learning, that is going to change rapidly.

You can’t just go hard after your one audience. It doesn’t really work that way. What’s happening is, you’re finding that person and then expanding out proportionally around them in a bullseye formation, accessing audiences similar to that person. Besides, what works and what doesn’t changes a lot more rapidly than it did in the past. There’s a desire to log your winner and loser, from an ad creative perspective, and then stick with that. But in two weeks, it might not be the winner anymore. Only by testing continually, every week, will you know which audience is the winner.

# # #

Thanks Steve and Lauren for your insights!

We challenge your brand to use their ideas to iterate and optimize your marketing channels to become brand growth powerhouses.

Get our tips straight to your inbox, and start driving revenue today.

Thanks!

Scott Ginsberg
Head of Content, Metric Digital

The Metric Digital Blog: A Blog on All Things Digital Marketing