Key Points

  • Testing is a cornerstone component of any ad yield management strategy.
  • Choosing which tests to run, deciding how to run them, and analyzing the results is as much an art as a science.
  • While nothing beats a wealth of yield ops experience in getting this right, there are a few best practices that will help get you started.

The best practices for running ad yield optimization tests fall into four categories, each tied to a stage of the testing process:

  1. Start by choosing which tests to run.
  2. Run the tests that have been chosen.
  3. Review the test results to determine winners.
  4. Manage ongoing optimizations over time.

Download the Test Manager Template

Before jumping into the best practices, grab your copy of our Yield Test Manager Template to help manage your testing strategy from brainstorm to test completion.

Choosing Which Tests to Run

The first step in your yield testing journey is deciding which tests to run.

Best Practice: Get All the Minds into One Room

Great yield optimization ideas start with a meeting of the minds. Typically, the best way to build a backlog of yield tests is to hold a brainstorming session with your team members.

Get your yield optimization professionals in a room at least once a month and have them each present ideas to be torn apart by the rest of the group. At the end of the session, you’ll walk out with a list of potential tests that survived the slaughter, which you can work on implementing over the following month.

Running Tests

Having a list of ideas is great, but actually running those tests is where the rubber meets the road. Below you’ll find some best practices for ensuring your tests run as smoothly as possible and generate results you can trust:

Best Practice: Have a Control to Compare Against

To determine whether a test is ultimately a “winner”, you first need a baseline to compare the results against, so you can tell whether the change did, in fact, increase revenue.

The best approach is a true A/B test, where the control and test conditions run simultaneously with traffic split between them. Whenever possible, run true A/B tests.

Unfortunately, this isn’t always possible. Depending on your set-up, you may not be able to run a pure A/B test. In that case, you’ll have to run a set-time-period test in which all of your traffic runs through the test condition. You’ll then need to do a pre/post analysis: review how your revenue metrics behaved during the test and compare them to those metrics’ trend lines before (and after) the test period.

If you have to run a pre/post analysis, make sure you are acutely aware of any seasonality issues that might affect your results, and account for them carefully before claiming causation (rather than simple correlation) for your test.
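
To make that pre/post comparison concrete, here’s a minimal sketch of a same-weekday comparison, which is one simple way to control for day-of-week seasonality. It assumes you can export daily metrics to a CSV; the file name, the date and rpm columns, and the test dates are all hypothetical placeholders you’d swap for your own reporting.

# Minimal pre/post comparison sketch. Assumes a CSV export of daily yield
# metrics with hypothetical "date" and "rpm" columns; swap in your own report.
import pandas as pd

TEST_START = "2024-03-01"   # hypothetical date the test condition went live
TEST_END = "2024-03-14"     # hypothetical date the test condition was reverted

df = pd.read_csv("daily_rpm.csv", parse_dates=["date"])
df["weekday"] = df["date"].dt.day_name()

pre = df[df["date"] < TEST_START]
during = df[(df["date"] >= TEST_START) & (df["date"] <= TEST_END)]

# Compare same weekdays to control for day-of-week seasonality
# (weekend and weekday traffic often behave very differently).
comparison = pd.DataFrame({
    "pre_rpm": pre.groupby("weekday")["rpm"].mean(),
    "test_rpm": during.groupby("weekday")["rpm"].mean(),
})
comparison["lift_pct"] = (
    (comparison["test_rpm"] - comparison["pre_rpm"]) / comparison["pre_rpm"] * 100
)
print(comparison.round(2))

Comparing Mondays to Mondays (and so on) won’t catch every seasonality effect, like holidays or big traffic spikes, but it’s a reasonable first pass.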

Best Practice: Run Tests Long Enough to Gain Statistical Significance

The length of your tests will be determined by the volume of traffic that runs through them. If you have enough traffic running through both the control condition and the test condition, you may be able to gain confidence in the results in a matter of hours.

If you want to nerd out on exactly how to determine statistical significance, this is a great resource.
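
If you’d rather start with something simpler than a full statistics refresher, here’s a minimal sketch of a significance check using a Welch’s t-test on per-session revenue. The file names and the revenue column are hypothetical, and the 5% threshold is a common convention, not a hard rule.

# Quick significance check sketch: Welch's t-test on per-session revenue.
# Assumes you can export per-session revenue for each arm of the test;
# the file names and "revenue" column are hypothetical.
import pandas as pd
from scipy import stats

control = pd.read_csv("control_sessions.csv")["revenue"]
variant = pd.read_csv("variant_sessions.csv")["revenue"]

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
lift_pct = (variant.mean() - control.mean()) / control.mean() * 100

print(f"Observed lift: {lift_pct:+.2f}%")
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant at the 95% confidence level.")
else:
    print("Not significant yet; keep the test running or collect more traffic.")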

Best Practice: Make Sure Tests Don’t Interfere with Each Other

If your system is built to run multiple A/B tests concurrently, you are free to run many at once. Just keep in mind that each condition still needs enough traffic to produce statistically significant results. The more tests you run concurrently, the less traffic flows through each one, and the longer it will take to reach statistical significance.

In our experience, most mid-size publishers are simply not set up to run concurrent A/B tests. If this is the case for you, you’ll need to run a single test at a time.
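
To see why that dilution matters, here’s a back-of-the-envelope sketch of how required test duration stretches as traffic is split across concurrent tests. Every input below (daily sessions, per-session revenue, variance, minimum detectable lift) is a hypothetical example number, and the sample-size formula is a standard normal approximation rather than anything specific to your stack.

# Back-of-the-envelope sketch: how concurrent tests dilute traffic and stretch
# out test duration. Every input here is a hypothetical example number.
from scipy.stats import norm

daily_sessions = 200_000        # total sessions per day across the site
rev_per_session = 0.004         # a $4.00 session RPM, expressed per session
rev_std_dev = 0.02              # per-session revenue is highly skewed
min_detectable_lift = 0.03      # want to detect a 3% relative lift
alpha, power = 0.05, 0.80

# Standard two-sample sample-size approximation (normal approximation).
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
effect = min_detectable_lift * rev_per_session
sessions_per_arm = 2 * (z * rev_std_dev / effect) ** 2

for concurrent_tests in (1, 2, 4):
    # Each test only sees its share of traffic, split across two arms.
    arm_sessions_per_day = daily_sessions / concurrent_tests / 2
    days = sessions_per_arm / arm_sessions_per_day
    print(f"{concurrent_tests} concurrent test(s): ~{days:.1f} days to significance")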

Best Practice: Make Sure You Monitor

Once you start a test, monitor the results while it is running rather than waiting until completion to review them. You’ll want to know immediately if there is a very big swing in revenue, either up or down.

This helps you catch tests that hurt revenue in a big way right away, and it can also alert you that something might simply be broken.
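
What that monitoring might look like in its simplest form: a guardrail check that compares in-flight RPM against a trailing baseline and shouts when the swing is too big. The baseline, threshold, and example reading below are all hypothetical; in practice you’d wire this to your reporting and alerting.

# Simple guardrail sketch: flag a big revenue swing while a test is in flight.
# The baseline, threshold, and example reading are hypothetical placeholders.
BASELINE_RPM = 4.10        # trailing average for the same hours, pre-test
SWING_THRESHOLD = 0.20     # alert on any move bigger than +/- 20%

def check_guardrail(current_rpm: float) -> None:
    """Print an alert if in-test RPM swings far from the pre-test baseline."""
    change = (current_rpm - BASELINE_RPM) / BASELINE_RPM
    if abs(change) > SWING_THRESHOLD:
        direction = "up" if change > 0 else "down"
        print(f"ALERT: RPM is {direction} {abs(change):.0%} vs. baseline; "
              "pause and investigate (big win, big loss, or something broken).")
    else:
        print(f"RPM within normal range ({change:+.1%} vs. baseline).")

check_guardrail(current_rpm=2.95)   # example reading from your reporting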

Reviewing Test Results

Now that your test has run and enough traffic has passed through it to reach statistical significance, it’s time to review the results!

Best Practice: Don’t be Surprised by Low Success Rates

Yield operations is all about finding the hidden gems. The majority of tests you run won’t bear any fruit, but the ones that do can have a huge impact on your revenue. Don’t be surprised if as few as 10% of your tests are considered “winners”.

Best Practice: Implement Winners into Your Standard Processes

When running tests, you’ll find that tests which show positive results can be grouped into two categories:

  1. Yes or No Tests: Some tests, such as setting bidder order, generate a “yes” or “no” style answer (e.g., does putting Bidder X at the top of the bidder order consistently improve revenue?). These can be rolled into your standard process quickly: simply change your set-up once you see consistent performance improvements across one or more tests.
  2. Lever Identification Tests: Other tests help you identify that a specific lever is a good way to influence revenue (e.g., does changing the timeout window you allow each bidder to respond with a bid influence revenue?). In this case, you may find that changing that number does, in fact, move revenue. Your job is then to keep testing different values for that lever until you find the one that maximizes revenue, and only then implement it as standard practice (see the sketch after this list). This process can take a long time, and machine learning and AI algorithms that can work it out for you dramatically shorten the learning curve.
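
Here’s a minimal sketch of what reviewing a lever identification test might look like, using the bidder timeout example. The timeout values and RPM figures are purely hypothetical, and in practice each value would need to reach statistical significance on its own before you compare them.

# Sketch of reviewing a "lever identification" test: a sweep of bidder timeout
# values against the session RPM each one produced. The figures are
# hypothetical; each value would be its own statistically significant test.
timeout_results = {
    # timeout_ms: observed session RPM
    800:  3.85,
    1000: 4.10,
    1500: 4.32,
    2000: 4.28,   # returns flatten out as extra latency starts to hurt
    3000: 4.05,
}

best_timeout = max(timeout_results, key=timeout_results.get)
print(f"Best-performing timeout so far: {best_timeout} ms "
      f"(RPM ${timeout_results[best_timeout]:.2f})")
print("Next step: test values around the current best before locking it in "
      "as standard practice.")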

Ongoing Optimization

Even though tests are run for a specific amount of time, your overall ad yield optimization strategy is always a work in progress.

Best Practice: NOTHING Is Set-It-and-Forget-It

Keep a watchful eye on your performance at all times to catch any dips. Because of the complexity of the ad tech ecosystem, and the ever-changing pipes that make up the ad supply chain, something that has historically generated great revenue for you can suddenly start costing you revenue instead.

You should have a solid process in place for evaluating performance and deciding whether a setting needs to change, even if that setting produced a positive test result in the past.
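
One lightweight way to put that process into practice: compare this week’s performance for a previously shipped winner against its trailing average and flag meaningful dips. The numbers and the 5% dip threshold below are hypothetical placeholders.

# Ongoing "nothing is set-it-and-forget-it" sketch: compare this week's RPM for
# a previously shipped winner against its trailing four-week average.
# All numbers are hypothetical; wire this to your actual reporting export.
trailing_weeks_rpm = [4.30, 4.25, 4.40, 4.35]   # last four full weeks
current_week_rpm = 3.90

baseline = sum(trailing_weeks_rpm) / len(trailing_weeks_rpm)
change = (current_week_rpm - baseline) / baseline

if change < -0.05:   # more than a 5% dip vs. the trailing baseline
    print(f"Dip detected ({change:+.1%}): re-evaluate the setting, even though "
          "it was a test winner in the past.")
else:
    print(f"Performance holding ({change:+.1%} vs. trailing 4-week average).")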

Best Practice: Re-Test Ideas Regularly

A negative test result doesn’t necessarily mean that you should throw an idea out the window. If the idea made it through the brainstorm gauntlet we described in the “Choosing Which Tests to Run” section, that means a bunch of smart people saw logical reasons why it might generate revenue.

If you ran the test but didn’t see the revenue gains you were hoping for, review your data to see if you can clearly articulate why the test didn’t generate additional revenue. Often, yield teams find they can’t directly pinpoint why a test didn’t bear fruit.

If you can’t pinpoint a reason, put the test on a list for trying again in the future.

The Easy Button

Looking for the easy way out? We don’t blame you. Luckily, we’ve built a team with both the depth and breadth of expertise needed to generate test ideas, run those tests, and identify the winners that will continually increase your ad revenue.

Our Yield Ops team keeps a 24/7 watch on the performance of all the websites and apps in the Playwire network, ensuring that your revenue is always maximized. And, if that wasn’t enough, they are constantly working in concert with our AI and machine learning algorithms to find unique opportunities to optimize and increase your revenue faster than any human can do alone.

Apply now to learn more about the revenue you could be earning.
