Compliance. A simple concept with all sorts of implications for PPM. Compliance is a measure of the cooperation of a PPM panelist once they have agreed to carry the meter. Unless the meter registers eight hours of “activity” in a day, the person’s listening is not counted. There’s a motion detector in the meter hooked to an LED. When the meter is active, the light is green. After twenty minutes of inactivity the light starts flashing, and if there hasn’t been any activity within another ten minutes, the meter stops registering.
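The meter logic described above can be sketched as a simple timer. This is only an illustration of the behavior as described in this post; the thresholds (twenty minutes to flashing, ten more to stop, eight hours to credit a day) come from the description above, and the function names are hypothetical.

```python
# Thresholds as described above (minutes / hours).
GREEN_WINDOW_MIN = 20      # minutes of inactivity before the light starts flashing
FLASH_WINDOW_MIN = 10      # further minutes of inactivity before the meter stops
CREDIT_THRESHOLD_HRS = 8   # hours of activity needed for the day to count

def meter_state(minutes_since_motion: int) -> str:
    """Return the meter's indicator state given minutes since the last motion."""
    if minutes_since_motion < GREEN_WINDOW_MIN:
        return "green"       # actively registering
    elif minutes_since_motion < GREEN_WINDOW_MIN + FLASH_WINDOW_MIN:
        return "flashing"    # warning: about to stop registering
    else:
        return "stopped"     # no longer registering

def day_counts(active_hours: float) -> bool:
    """A panelist's listening is credited only if the meter logged
    at least eight hours of activity that day."""
    return active_hours >= CREDIT_THRESHOLD_HRS
```

So a panelist who sets the meter down for half an hour stops being measured, and a day with only seven hours of activity contributes nothing at all.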
If a person agrees to carry the thing, isn’t she going to do it? What’s the impact on ratings if someone agrees to participate and then doesn’t? It turns out that the question has been studied quite a bit over the years. And by Arbitron.
In April 1982 Arbitron set out to understand what happens when the standard diary is extended to two and four weeks. Will people still fill out the longer diary? What happens to ratings?
Arbitron’s study along with more recent research shows that the longer a person fills out a diary, the more stations they listen to, but the less radio listening they record. Sound familiar? Higher cume and lower TSL. Just like PPM.
The problem is not the methodology. The problem is that people grow tired of being asked to do the same thing over and over. You can get a person to accept a four-week diary. They’ll even end up writing down more stations, but over time they become less cooperative. They fill out the diary less often, so their reported listening goes down.
PPM participants are part of a panel that is asked to continue for up to two years. For two years they have to remember to take the meter out of the charger each morning, strap it on for at least eight hours a day, and then dock it each night. Every day. Filling out a four-week diary sounds pretty easy by comparison, but diary compliance starts falling off even after two weeks.
Arbitron has been somewhat circumspect about panel fatigue. Like most aspects of PPM, compliance is shrouded in acronyms, indexes, and percentages of percentages, but the numbers indicate the challenge of compliance.
Last month’s client update included several tables that shed light on the matter. For example, Houston had a 6+ in-tab of 2,147 panelists. However, the average daily in-tab was 1,535 panelists. In other words, on an average day, almost one-third of the panelists who ultimately ended up in the month’s report were non-compliant.
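A quick back-of-the-envelope check of those Houston figures (both numbers are quoted from the client update above):

```python
# Houston figures from last month's client update.
monthly_intab = 2147    # panelists in the month's 6+ in-tab
avg_daily_intab = 1535  # average daily in-tab

# Share of monthly panelists missing from an average day's in-tab.
noncompliant_share = (monthly_intab - avg_daily_intab) / monthly_intab
print(f"{noncompliant_share:.1%}")  # about 28.5% — "almost one-third"
```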
Each day, about a third of the people who agreed to carry the meter didn’t show up in the data, for one reason or another. That might mean the panelist left it at home. It might mean the meter was broken, or it might mean the meter was in a woman’s purse and never “heard” the station she listened to at work all day.
Based on Arbitron’s own research, we can speculate that the longer one is in the panel, the greater the chance that compliance falls off and that the panelist is making a less conscientious effort to participate.
We’ve noted that as the roll-out has continued, the newest markets seem to begin with listening levels not far below diary estimates, but that over time listening declines. The evidence is only circumstantial, but it does suggest that panel fatigue may be an issue that should be investigated.
Arbitron has the evidence. It can look at listening by length of participation to see if listening declines over time. If long-time panelists comply less and listen to less radio, we’ll know that falling listening levels are a result of panelist fatigue.
These are our questions:
All good questions. We did get answers to some of these questions after the Philadelphia test, but it would be helpful to be able to monitor compliance trends in every market.
Long ago, Arbitron rejected partially filled-out diaries, but in recent years it began accepting them. We may be seeing a similar slipping of standards with PPM.
Posted by: Richard Harker | June 15, 2009 at 02:31 PM
Good questions. Another might be "What is the distribution of in-tab days across the weekly in-tab sample?" In other words, how many panelists are in-tab for all seven days, six out of seven, five out of seven, and so on? I'd bet that very few are in-tab seven days a week, and those that are, are pretty skewed demographically.
What about those with only nine or ten hours of 'green light' time? Do we assume they are sleeping the rest of the day?
Producing accurate ratings from a data set with so many holes is more of an art than a science.
Posted by: John W | June 14, 2009 at 12:44 PM