Arbitron announced today the formation of a task force of radio and advertising industry leaders to develop an ongoing measure of "affinity" that is designed to reflect the advertising value of the unique relationship listeners have with their stations. The mission of this leadership team is to design a relevant, timely and accessible metric that captures audience involvement and has a lasting, balanced impact on radio planning and buying.
The press release notes, “the group is moving aggressively to articulate the parameters that will help define, architect and introduce this significant metric across the industry as soon as possible.”
So the first task of the task force will be to figure out what Affinity is. We in radio know there is a special bond between listener and radio station, and we know that it is important to ratings success, but what exactly is Affinity? And how do you measure it?
Defining Affinity in a way that lets us actually measure it is easier than you might think. We’ve been doing it for years.
According to the dictionary, affinity is, “A natural attraction, liking, or feeling of kinship.” That’s a pretty good working definition for radio Affinity: A natural attraction or liking of a radio station. We know that listeners feel a special kinship towards great radio stations, particularly successful morning shows.
So it turns out the task force can just use the dictionary definition. Now to measuring it. Arbitron is looking for a metric. In other words, they are looking for a number that indicates the level of Affinity.
We at Harker Research have found that there is a strong correlation between high Time Spent Listening (TSL) and affinity. By correlation we mean that the two tend to move in the same direction. As one goes up, the other one does too. That stands to reason.
If a listener has a natural attraction towards a radio station, she probably listens to it a great deal. She listens less to a station that she feels less strongly about.
That means TSL is a pretty good measure of station Affinity when looking at individual listeners. It falters as a metric when we start looking at groups of listeners instead of individuals: TSL varies by demographic and format, so high TSL in one format might be only moderate TSL in another. That makes raw TSL a poor choice for a single, cross-format metric.
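To make the cross-format problem concrete, here is a small sketch in Python. The station names, formats, and TSL norms are entirely hypothetical; the point is only that the same raw TSL figure can be above average in one format and below average in another, so raw TSL can't rank stations industry-wide.

```python
# Hypothetical illustration: the same raw weekly TSL (hours) reads
# differently against different format norms, so raw TSL can't serve
# as a single industry-wide metric.

# Assumed average weekly TSL by format -- illustrative numbers only.
format_avg_tsl = {"News/Talk": 9.0, "CHR": 4.5}

stations = [
    {"name": "WAAA", "format": "News/Talk", "tsl": 6.0},
    {"name": "WBBB", "format": "CHR", "tsl": 6.0},
]

for s in stations:
    # TSL indexed to the format norm: >1.0 means above average for the format.
    index = s["tsl"] / format_avg_tsl[s["format"]]
    print(f'{s["name"]} ({s["format"]}): TSL index {index:.2f}')
```

Both stations post 6.0 hours of TSL, yet one is well below its format's norm and the other well above it.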
Another possibility is to use the old P-1 idea. Stations are ranked according to how much a person listens to them. The station a person listens to most is her P-1 station. The second most listened to station is her P-2, and so on. PD Advantage uses this information to generate a report showing the proportion of P-1s in a station’s audience. The theory is that a station with a high proportion of P-1s must be a station with strong Affinity.
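The P-1 bookkeeping described above can be sketched in a few lines. The diary data below is invented for illustration; the logic simply ranks each listener's stations by hours listened and then reports, for each station, what proportion of its cume names it as their P-1.

```python
from collections import defaultdict

# Hypothetical diary data: weekly hours each listener gave each station.
listening = {
    "Ann":  {"WAAA": 8.0, "WBBB": 2.0},
    "Bob":  {"WAAA": 1.0, "WBBB": 5.0, "WCCC": 3.0},
    "Cara": {"WAAA": 4.0, "WCCC": 4.5},
}

p1_counts = defaultdict(int)   # listeners for whom the station is P-1
cume = defaultdict(int)        # listeners who tuned the station at all

for person, hours in listening.items():
    # Rank this person's stations by hours listened: P-1 first, then P-2...
    ranked = sorted(hours, key=hours.get, reverse=True)
    p1_counts[ranked[0]] += 1
    for station in hours:
        cume[station] += 1

for station in sorted(cume):
    share = p1_counts[station] / cume[station]
    print(f"{station}: {p1_counts[station]} P-1 of {cume[station]} listeners ({share:.0%})")
```

This is the same arithmetic behind a PD Advantage-style P-1 proportion report, stripped to its essentials.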
Like TSL, P-1s have problems as a measure of Affinity. The “P” in P-1 stands for preference, and the assumption is that “most listened to” is the same as “preferred.” The problem is that it isn’t always: a station with a high proportion of P-1s can have strong Affinity, but that isn’t true in every case.
Furthermore, like TSL, the proportion of P-1s for a station varies by format and by the demo the station appeals to. We can compare stations within a format, but we can’t compare stations across formats, and cross-format comparison is the goal of the metric.
Complicating matters for Arbitron is the transition to PPM. PPM measures exposure, not listenership. If a PPM panel member is exposed to encoded audio, then the “listening” counts even if the panelist isn’t really listening.
The impact of PPM has been to atomize radio listening. If PPM is to be believed, very few people actually listen to radio for any length of time. Over a week’s time they might be exposed to a half dozen different radio stations, but exposure might be as little as a single quarter-hour for most of them. (At Harker Research we call this drive-by listening.)
This atomization of radio listening as measured by PPM has made obsolete most of the methods we use to analyze Affinity in diary markets. While upwards of 60% of a diary keeper’s TSL might go to a single station, PPM panelists are exposed to most stations for about the same amount of time. With PPM, there’s much less difference in TSL between one’s P-1 and one’s P-3.
Atomization may be the reason Arbitron decided to invent a new metric. The new metric is meant to give broadcasters the kind of information we once got from diaries.
A PPM skeptic might point out that the reason Arbitron wants to start measuring Affinity is that Arbitron essentially threw out Affinity when it switched to PPM. That might be true, but the reality is that for the top 50 markets, there is no turning back. If we can add an Affinity metric to PPM, then it at least compensates for some of the weaknesses of PPM.
So the question is really what we can add to PPM to make it more like the diaries. The answer is that we can start actually asking people what they listen to. In our research, it turns out that simply asking a listener what her favorite radio station is measures Affinity better than far more complicated, higher-tech methods.
People know what their favorite station is. They can readily tell us what their favorite station is, and the answer applies equally across demos and formats. This decidedly low-tech solution solves all the problems of ranking by TSL or by P-1 proportions.
It has the additional advantage of not being unduly influenced by heavy users of radio. Station TSL can be dramatically increased by just a few heavy listeners. (That’s why TSL can vary so much from book to book despite cume being pretty steady. Just a couple of diaries can swing TSL quite a bit.)
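A quick sketch, with invented diary numbers, shows how just a couple of heavy listeners can swing TSL while cume stays put:

```python
# Illustrative only: two heavy diaries swing station TSL dramatically
# while cume (the count of listeners) is unchanged.

# Weekly hours from ten hypothetical diary keepers.
book1 = [2, 3, 2, 4, 3, 2, 3, 2, 3, 2]       # no heavy listeners
book2 = [2, 3, 2, 4, 3, 2, 3, 2, 30, 25]     # two heavy listeners

for name, book in [("Book 1", book1), ("Book 2", book2)]:
    tsl = sum(book) / len(book)   # average hours per listener
    print(f"{name}: cume {len(book)}, TSL {tsl:.1f} hours")
```

Same ten diaries, same cume, but TSL roughly triples because two respondents logged heavy listening. A favorite-station measure gives each of those ten people exactly one vote.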
Here’s what Arbitron should do. Each week, panelists email them the name of their favorite station. Arbitron then ranks stations by the number of favorite votes each station gets. It can then develop metrics that relate “favorite station status” to the measures Arbitron currently publishes.
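The tallying step is trivial, which is part of the appeal. A minimal sketch, with hypothetical ballots and station call letters:

```python
from collections import Counter

# Hypothetical weekly "favorite station" ballots from panelists.
ballots = ["WAAA", "WBBB", "WAAA", "WCCC", "WAAA", "WBBB", "WAAA"]

votes = Counter(ballots)
# Rank stations by favorite votes -- one person, one vote.
for rank, (station, n) in enumerate(votes.most_common(), start=1):
    print(f"{rank}. {station}: {n} votes")
```

Unlike TSL or P-1 proportions, the resulting ranking reads the same way across demos and formats: more favorite votes means more listeners who name the station as the one they feel attached to.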
We can then again calculate such useful metrics as turnover and recycling. These calculations will benefit programmers as well as sales.
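For readers outside research departments, here is the back-of-envelope arithmetic behind those two diary-era metrics, using the conventional definitions (all audience numbers below are hypothetical):

```python
# Two classic diary-era metrics, conventional definitions,
# hypothetical audience estimates.

cume = 120_000   # distinct weekly listeners
aqh = 8_000      # average quarter-hour audience

# Turnover: how many times the AQH audience "turns over" within the cume.
# Lower turnover implies longer, more loyal listening spans.
turnover = cume / aqh
print(f"Turnover ratio: {turnover:.0f}")                    # 15

# Recycling: share of one daypart's audience heard again in another daypart.
morning_cume = 60_000
morning_and_afternoon = 21_000
recycling = morning_and_afternoon / morning_cume
print(f"Morning-to-afternoon recycling: {recycling:.0%}")   # 35%
```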
The good news for programmers is that an Affinity metric will get Arbitron back to rewarding compelling radio. PPM ratings are driven by cume, and cume is driven by many factors, like external marketing, that are beyond programmers’ control. In contrast, Affinity is driven by the quality of the product. A good product creates loyalty and the desire to tune back to the station frequently.
We wish the task force well in its quest to develop an Affinity metric. Large committees tend to over-complicate processes that are in reality quite simple. We hope that isn’t the case here.