Cutting Through the Ratings Fog Using ACR

Competition for every ad dollar has reached historic levels, which is ratcheting up pressure on measurement firms like legacy leader Nielsen.

The point is not to bury Nielsen in blame, as some are inclined to do these days. In truth, the company deserves some praise for recent improvements to its viewing insights. The additions of Hulu and YouTube measurement and of total audience measurement (TAM), for instance, are steps in the right direction. And, criticisms aside, Nielsen is the currency and will remain so for some time to come, like it or not.

Not everyone sees such harmony, however. At the Advertising Research Foundation’s Audience Measurement Conference, Sequent Partners recently presented an extensive analysis finding that third-party modeling built on Nielsen data yields inaccurate or weak numbers that undercount the ROI of TV ad spending by as much as 20%.*

The biggest cause of this distortion is that Nielsen data does not represent exact minute-by-minute ratings, but instead breaks viewing into quarter-hour estimates. Guidance that undercounts TV by as much as 20% could end up giving major brands the equivalent of the digital world’s “last-click attribution” analysis, overweighting the most recent point of contact with a viewer/consumer.

One response offered by some—notably comScore—to Nielsen’s flawed, panel-based approach is set-top box data. Because set-top numbers omit cord-cutters and do not encompass over-the-air or over-the-top viewing, this isn’t a workable solution, either.

In a recent post in B&C, Nielsen executives accounted for the gaps in set-top box data and suggested, “To overcome the bias, coverage gaps and inaccuracies of big data, set-top box data must be cleaned up.” The best disinfectant, they argued, is panel-based measurement.

Confused yet? No wonder media buyers feel overwhelmed.

Given television’s dominance and scale, surely this $70 billion-plus marketplace offers some way of clearing the fog shrouding the measurement landscape? It does: the television itself.

TVs are getting smarter all the time. And more consumers are enabling smart devices just as automatically as they used to press the “on” button on the remote. The rise of smart TVs—now tens of millions of households with an adoption curve that keeps getting steeper—has revolutionized programming and content discovery. For advertisers, it has also been a major milestone.

There is simply no purer supply of data than that which comes directly from televisions themselves. The booming smart TV sector (in which Inscape's parent company, Vizio, is a major player; see http://inscape.tv/) can gather viewing behavior as it happens, whether through a cable box, over-the-top or over the air. That data therefore requires dramatically less modeling than other sources, which means it is more immediately actionable and offers a clearer picture of actual viewing.

TV-level data delivers a more consistent supply of viewing information within hours, not days. Perhaps just as importantly, it offers unique visibility into small and mid-sized markets, where reliable numbers are otherwise hard to establish. Nielsen acknowledged its need to evolve in these smaller markets when it announced it is adding 15,000 new people meters, replacing paper diaries in more than 100 local markets.

When you are trying to assess viewership of CBS in New York City, not much modeling is required. Assessing Showtime viewership in Yellow Springs, Ohio, does require extensive modeling. And as the Sequent study showed, third-party modeling based on Nielsen can often lead to inaccuracies that harm the overall marketplace. Buyers and sellers looking for near real-time, quick-turnaround insights that capture the complete 2017 viewing experience should pay closer attention to data gathered from the glass of televisions.

When you have millions of points of validation coming from millions of homes scattered across all DMAs, the modeling is minimal and the market becomes more transparent.

*After this article was published, Nielsen asked that we caveat the Sequent Partners finding as follows: "These results were rare and occurred in fewer than 10% of cases in the analysis." Be that as it may, 10% still leaves something to be desired, and only underscores how far legacy systems remain from full accuracy and the opportunity ACR provides for closing the gap.