How important is “On Demand” in our cultural evolution? Americans are no longer satisfied with Freedom of Choice. In a digital world, we want Freedom of Choice On Demand. Consumers now expect to do what they want, when they want.
During the NCAA Basketball Tournament, we didn’t have to settle for the game CBS chose to broadcast. We could go online and watch whatever game we chose. And with time-shifting digital technology, we could watch when we were ready, on demand. According to CBS, viewers did both of these in record numbers.
In my house we watch FlashForward, but not during Thursday prime time when ABC serves it up. We go online to ABC.com later and enjoy it on demand, with fewer commercials.
In today’s On Demand world, can you imagine an airline refusing to take online reservations 24/7? The days of a travel agency or any other business dictating the customers’ timetable are in the rear-view mirror. We are rapidly becoming an On Demand society.
As we have documented in the Kelly Music Research blog over the past several months, online research has been embraced by many other industries and we are finding digital On Demand testing can work for our industry too. Radio consultant and thought leader John Parikhal commented on our blog, “In hundreds of music tests, I have seen few statistically valid differences between properly screened online and phone studies.” Pollack Media’s Pat Welsh agrees that effective online research is “all about sample recruitment and integrity, not about the means of conducting the test.”
Traditionally, Callout and other telephone research happens between the hours of 5pm and 9pm local time. As Kelly Music Research has begun permitting our telephone research panelists to migrate to our 24/7 On Demand digital music testing platforms, it is clear the market is ready for it. Telephone research panelists are giving us their e-mail addresses without hesitation so they can participate in our music surveys when it is convenient for them. Radio listeners are happy to share their opinions about music, but not while they are sitting at the dinner table.
Prior to our 24/7 digital research option, KMR used to get callbacks to our office during the day from listeners who would say things like, “You called me last night to do a music survey and I could not do it then, but I can do it now.” Unfortunately, unless our call center was open and staffed, we would miss these willing respondents. It became increasingly frustrating for us as we wondered how much of the typical Radio listening audience was missed while we were confined to 5p to 9p telephone surveys.
Since our digital platform launched, Kelly Music Research has been monitoring “Start Times” of our online callout and library surveys. For example, we typically give digital callout respondents a window of about 48 hours to complete a survey. We are seeing that after invitation, only about 25% start their online music survey between 5pm and 9pm local time. The remaining 75% start their surveys at other times of day or night. Seventy-five percent! This is a huge number of listeners who may work second or third shift, may not be available between 5p and 9p for some reason, or simply prefer not to be interrupted in the evening just because “Kelly Music Research” shows up on their Caller ID. With digital research On Demand 24/7, we are reaching new radio listeners who previously went untapped.
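For readers who want to track the same metric on their own platform, here is a minimal sketch of how a “Start Times” breakdown like ours can be computed. The timestamps below are made up for illustration; the point is simply bucketing survey start times by local hour against the traditional 5pm-to-9pm callout window.

```python
from datetime import datetime

# Hypothetical survey start timestamps, already in each respondent's local time.
start_times = [
    datetime(2010, 3, 1, 7, 42),   # early morning
    datetime(2010, 3, 1, 13, 5),   # midday
    datetime(2010, 3, 1, 18, 30),  # inside the 5p-9p callout window
    datetime(2010, 3, 1, 23, 10),  # late night
]

# Count starts that fall inside the traditional 5pm-9pm window.
in_window = sum(1 for t in start_times if 17 <= t.hour < 21)
share = 100.0 * in_window / len(start_times)

print(f"{share:.0f}% of surveys started between 5pm and 9pm local time")
print(f"{100 - share:.0f}% started at other times of day or night")
```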
With the introduction of our online digital option, our song survey participation rates have increased. Now that our research panelists are not restricted to doing surveys on the phone, they are more receptive to our invitations. Even better, with our On Demand model, we are now attracting new radio listeners who could not or would not pick up the phone for a telephone survey between 5pm and 9pm.
The On Demand digital model is clearly more inclusive. Of course, maintaining strict research control measures, panel management and verification procedures is critical to the success of online surveys. However, flexibility in an On Demand culture is no longer optional. If we restrict our music surveys to 5pm to 9pm, we are going to exclude a large segment of the Radio audience. Likewise, if CBS had not let me choose the game, they would have lost me. If ABC only let me watch FlashForward on Thursdays at 8pm, I’d never see it. And if an airline restricted me to booking travel 9a to 5p, I’d rarely fly them. Would you?
Monday, April 5, 2010
Monday, March 8, 2010
The Final Results – Digital Research Delivers
Part Five – The Secret Sauce in Digital Callout
It’s time to bust a myth. Even though Radio has embraced the power of computers for every other aspect of its business, when it comes to music testing, many broadcasters are still clinging to old, pre-digital methods. Favoring telephone over computer models, many Radio programmers and consultants think that it is not possible to get reliable music research through online computer surveys. We have found this notion to be FALSE.
Conducted properly, online music research on a computer can be every bit as accurate and reliable as telephone callout or auditorium testing, which uses dials, pencil and paper or other accepted scoring devices. Further, utilizing digital platforms for music testing is more cost efficient. This much is crystal clear:
The Secret Sauce in online music testing is in the SAMPLE PANEL and PROCEDURE,
NOT THE SURVEY APPARATUS - i.e. phone vs. computer.
During the past several months Kelly Music Research has conducted nationwide testing of a new Digital Callout program for online hook research with comparisons to Telephone Callout and other Online Testing models. In all, we compiled and reviewed over 5,000 online test cases. We tested many tactics and approaches to the different facets of music testing including recruiting, screening, scoring options, survey presentation, premiums, respondent verification and data analysis. Some proved worthy, others did not. All were enlightening.
CONCLUSION: PANEL MANAGEMENT AND PROCEDURE ARE CRITICAL
Random recruiting using a variety of outreach efforts combined with strict screening criteria, unbiased surveys and verification procedures yielded excellent results in our online Digital Callout research testing. Survey design has the most influence over survey results. Some key findings:

Key Finding #1 – Apparatus has no significant impact on scores
Our Digital Callout panel of online respondents was drawn primarily from the hundreds of thousands of Telephone Callout participants in the Kelly Music Research database. Instead of rating songs on the telephone, respondents rated songs on their computer. We found the test scores collected on the computer to be highly consistent with those collected on the phone. In other words, panel members rated songs the same way – regardless of whether the respondent was rating it on a phone or on a computer.
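As an illustration of what “highly consistent” can mean in practice, here is a minimal sketch of one way to compare scores across the two apparatuses. The song scores below are hypothetical, and this is not a description of KMR’s actual analysis; it simply pairs each song’s mean phone score with its mean computer score and checks how closely the two modes track.

```python
from math import sqrt

# Hypothetical mean scores (1-5 scale) for the same songs, by apparatus.
phone_scores    = [4.1, 3.2, 2.5, 3.8, 1.9, 4.4]
computer_scores = [4.0, 3.3, 2.6, 3.7, 2.1, 4.3]

n = len(phone_scores)
mean_p = sum(phone_scores) / n
mean_c = sum(computer_scores) / n

# Pearson correlation: near 1.0 means both modes rank songs the same way.
cov   = sum((p - mean_p) * (c - mean_c) for p, c in zip(phone_scores, computer_scores))
var_p = sum((p - mean_p) ** 2 for p in phone_scores)
var_c = sum((c - mean_c) ** 2 for c in computer_scores)
r = cov / sqrt(var_p * var_c)

# Mean absolute difference: near 0 means neither mode inflates scores.
mad = sum(abs(p - c) for p, c in zip(phone_scores, computer_scores)) / n

print(f"correlation across modes: {r:.3f}")
print(f"mean absolute score difference: {mad:.2f}")
```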
Key Finding #2 – Normal Fan (aka Passive Listener) participation is essential
Early in our research (see Part 2) we found that 96% of Radio listeners are Normal, 4% are Extreme. Many station databases are overpopulated with Extreme Fans, which skews station database research findings. To accurately represent the whole audience, other forms of outreach, such as random landline and cell phone recruiting, are necessary to include representation from the 96% Normal Fan/Passive Listener base.
Key Finding #3 – Respondents are biased by Artist Name, Song Titles
Online surveys that display Artist Name and Song Title often produce test results that are not consistent with telephone callout and our unaided Digital Callout online model. Callout and unaided online surveys focus listener reaction on the audio. Artist and Title aided online surveys often distract listener reaction from the audio hook and draw a biased perceptual reaction to the artist names or song titles.
Key Finding #4 – Digital Callout is significantly more cost efficient
The improved cost efficiency of a well-executed digital callout or library research program cannot be ignored. Our best-testing digital callout models included broader recruitment and cash incentive premiums paid to survey participants. Even with a cash premium factored in, overall costs were about 20% lower than traditional callout due to improved incidence and reduced call center labor cost.
Any successful music testing strategy should meet several important objectives, including:
Accurate representation of your existing and potential audience
Statistically reliable data from a controlled sample group
Based on our extensive research of the research, here’s our list of suggested DO’s and DON’T’s when it comes to online music testing:

Do Cast a wide net – Use traditional random telephone and cell phone outreach with other online and offline recruiting methods to build your listener panel.
Do Control the panel – Know at least the age, gender, ethnicity, geography and music preferences of all panelists.
Do Manage the panel – Only invite panelists who meet your screening criteria.
Do Balance the sample – Set quotas to create a well-balanced control group (see the sketch after this list).
Do Verify the data – Insert control mechanisms to verify respondent identity.
Do Pay incentives – Listeners appreciate the thank-you, and response rates improve.
Don’t Open the floodgates – You can’t control the sample group if you use everyone who opts in to your panel.
Don’t Display Artist Name & Title – Test the song, not the popularity or image of the artist.
Don’t Make it a game – Keep it research, not a promotion.
Don’t Offer prizes – Sweepstakes make research vulnerable to manipulation. And Normal Fans don’t respond well to contests.
Don’t Use all surveys – Clean the data before you tabulate results. Scrutinize every survey and toss out all suspect cases.
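As promised above, here is a minimal sketch of quota-based sample balancing. The cell definitions and targets are hypothetical; the idea is simply to cap how many completed surveys each demographic cell contributes so no group is overrepresented in the tabulation.

```python
# Hypothetical quota targets per demographic cell (gender, age bracket).
quotas = {
    ("F", "18-24"): 25, ("F", "25-34"): 25,
    ("M", "18-24"): 25, ("M", "25-34"): 25,
}
accepted = {cell: 0 for cell in quotas}

def admit(respondent):
    """Accept a completed survey only if its demographic cell is under quota."""
    cell = (respondent["gender"], respondent["age_bracket"])
    if cell in quotas and accepted[cell] < quotas[cell]:
        accepted[cell] += 1
        return True
    return False  # cell full or out of scope: exclude from tabulation

# Example: an incoming respondent.
print(admit({"gender": "F", "age_bracket": "18-24"}))  # True until that cell fills
```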
Short-cutting any of the above will cheapen the process and your results. You will get what you pay for. Online computer research does cost less. But done right, it is not free.
Thursday, January 28, 2010
Callout and Web database Music Surveys
Part Four – Online Surveys Must Cast a Wider Net and Pay Incentives
When we look outside the radio industry for other businesses using online consumer surveys to guide strategy, there is no shortage of confidence in the online survey approach. Wherever we looked – Food, Cosmetics, Travel, Clothing, Pharmaceuticals, Political – it seems everyone else is doing online research and is supremely confident in the results. So what are they doing that Radio is not? Two answers stand out: broader panel recruitment and paid incentives.
In the past few weeks we have looked at many examples of online research conducted for other industries targeting consumers and professionals. Several incorporate intricate multi-media in their surveys. A large supermarket chain uses virtual store shelves and a shopping cart. Coors Light showed a storyboard for a potential TV spot. Levi’s used photo images to test preferences for pocket designs and preferred fading of blue jean colors. Southwest Airlines tracked brand association to different traveler benefits. GlaxoSmithKline probed doctors on heart disease. If you think it’s hard to find radio listeners to do a music test, imagine how hard it is to get highly paid, busy cardiologists to sit through an hour-long pharmaceutical survey. But these surveys are all done online, and the confidence level in the results at these major corporations is extremely high.
In a nationwide telephone callout survey of 362 music radio P1’s aged 18 to 54 conducted by Kelly Music Research, only 9% of respondents report being regular participants in online Radio surveys. However, 46% report participating in online surveys for other industries. When we asked our telephone callout panel if they would participate in online Radio surveys that pay an incentive, the number who said “Yes” jumped to 72%.
To get more qualified respondents and reliable data from online research, we need to cast a wider net. Broader recruitment strategies for online surveys, such as telephone outreach, are necessary. Incentivizing panelists with a paid premium will further increase online research response rates. Unfortunately, offering “a prize” that a participant “might win” is not enough incentive for most people.
As radio stations populated their e-mail databases, many hoped it would lead to a cheap way of getting reliable research. Cheap yes, reliable no. Most radio station databases were built around contests and promotions. As mentioned earlier in our study, radio station databases attract a disproportionate number of Extreme Fans and Contest Players. Many in station databases are not in the defined target age groups and don’t even listen to the station.
Our study shows that a very small percentage of listeners in station databases clear the screening criteria for a standard radio music survey. We screened several client stations’ listener databases for potential inclusion in a music test. These website database members were offered a cash premium to participate in an independent online music survey; no prizes or sweepstakes were offered. Overall, about 3% of all database members expressed interest. However, when screened for age, gender, ethnicity, and music preference, the number dropped to about 1%. And after filtering the list for client station P1’s and P2’s, the qualified percentage dropped even lower.
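To make the funnel concrete, here is a tiny sketch using the 3% and 1% rates above. The database size is hypothetical; the arithmetic shows why even a large station database yields a thin qualified sample.

```python
# Hypothetical database size; the 3% and 1% rates are from the study above.
database_size = 50_000

interested = int(database_size * 0.03)  # expressed interest in the survey
qualified  = int(database_size * 0.01)  # cleared the demographic/music screener

print(f"{interested} interested, {qualified} qualified before the P1/P2 filter")
# Even a 50,000-member database yields only ~500 screened respondents,
# and fewer still once non-P1/P2 listeners are filtered out.
```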
In major markets and small markets, we’ve seen stations offer database members big prizes and substantial incentives to participate in music surveys. Some of these have attracted several hundred respondents. However, when the stations drilled the data down to their actual target demographic, the sample became too small to be reliable.
We believe music testing can include radio station database members, but should not rely on them exclusively. Taking a cue from other industries, online Radio research needs to be more inclusive by casting a wider net. And we need to reward participants for sharing their honest opinions with us.
Tuesday, January 12, 2010
The Differences between Callout and Web database Music Surveys
Part Three – The Prize Pig Factor
As we study the factors that lead to different music research results generated by random Callout samples and online station web database samples, we are taking a closer look at the listeners who make up both types of panels. So far, we have seen that Callout sample panels attract participation by more Passive listeners or Normal Fans. Station website databases are more likely to be populated by the vocal minority Actives or Extreme Fans. In our “research of the research” we found another group that is capable of corrupting research drawn from surveys using radio station databases – “Prize Pigs.” It is quite likely that Prize Pigs are in your station’s database more than once. A Prize Pig is probably not even a listener of your station except for the contests. And it is likely that the Prize Pigs in your database are also in the database of every other station in the market.

Radio stations using contests as a point of entry and motivation to build the station’s e-mail database should beware. Stations relying on that same database for market research should reconsider. And particularly risky is mixing both – offering the chance to win a very cool prize for participating in a survey.
Our nationwide Callout sample of over 300 music radio station P1 listeners aged 18 to 54 in the top 25 markets reveals that only 8% of listeners have “High Interest” in radio station contests. The majority of listeners we reached in our Callout survey don’t participate in contests because they believe they’d never win. However, Prize Pigs are drawn to the challenge and the opportunity to get something for nothing.
A Radio consultant once told us a story about contests. Around 1980 a Los Angeles radio station gave away a Ferrari sports car in a major on-air contest. One lucky listener would be randomly drawn. Spreading the promotion over several months, the station had hundreds of thousands of listeners fantasizing about winning this very expensive performance car. Finally, a winner was selected – a very lucky and excited guy from L.A. The next year, the radio station decided to do the exact same promotion a second time. Another Ferrari. Same contest. Same buildup. Same excitement. And then the winner was drawn – same guy. The same guy had won two Ferraris in two consecutive contests. How could this happen? It turns out he was a full-time contest player. He knew how to stack the deck in his favor. Today, there are many more like him.
They call themselves Contesters, Gamers or Players. Radio uses less flattering names – Prize Pigs or worse. They try to win anything your station is giving away. And our examination of Callout and Station Web-based panels suggests that Prize Pigs may be polluting radio station databases across the country. In fact, this small but aggressive contest cult is more pervasive than we knew. A recent Google search for “secrets to winning radio station contests” surfaced more than 480,000 results. There are Contest Clubs – people who compete with each other tallying winnings. There are tips and suggestions for increasing the odds of winning. There are stories of people who play contests as a full-time source of income. Kathy, one Contester, reports winning at least twice a month – merchandise, tickets, vacations – she hasn’t paid for a trip in years. Winnings that Prize Pigs don’t use themselves, they can sell on eBay or StubHub for cash. Here is one example of a website that offers tips to Prize Pigs who want to win radio contests: http://contests.about.com/od/improveyourodds/tp/radiocontests.htm
Another site offers this piece of advice for increasing your chances of winning station database radio giveaways: “Submit the maximum that you can. If there is no maximum, flood the station with your entries.” If your station’s e-mail database is an entry point for giveaways, chances are you’ve been flooded. Between AOL, Gmail, Yahoo, Hotmail and all the others, one individual has unlimited ability to create as many e-mail addresses as he or she would like, then seed your database with all their aliases.
One Contester claims to be in radio station databases with over 100 different e-mail addresses. This greatly improves his chance of winning e-mail entry contests. And if the station sends an online music survey invitation with a chance to win an iPod or something even better, this one Prize Pig will get 100 survey invitations and complete 100 different music surveys for 100 chances to win. And he may not even cume the station. Since these Prize Pigs may not even listen to your station or format, this can explain survey results that don’t look right. Radio stations report to us that when they do online surveys to station databases with a hot prize incentive, they get many more completed surveys. “The bigger the prize, the larger the sample,” says one PD. But is it more people? Or is it just a few Prize Pigs completing multiple surveys?
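This is not a procedure from our study, but as a minimal illustration of one simple defense, here is a sketch of normalizing e-mail addresses before counting panelists. It catches only the crudest aliases (Gmail dots and plus-tags, case differences); a determined Contester with 100 genuinely distinct mailboxes would still require real identity verification to detect.

```python
def normalize_email(address: str) -> str:
    """Collapse common alias tricks so one person counts once."""
    local, _, domain = address.strip().lower().partition("@")
    local = local.split("+", 1)[0]        # drop plus-tags: pat+kxyz@ -> pat@
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")    # Gmail ignores dots in the local part
    return f"{local}@{domain}"

# Hypothetical database entries seeded by one person.
entries = ["Pat.Smith@gmail.com", "patsmith+contest@gmail.com", "patsmith@gmail.com"]
unique = {normalize_email(e) for e in entries}
print(len(entries), "entries,", len(unique), "unique panelist(s)")  # 3 entries, 1 unique
```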
When Prize Pigs who are not listeners participate in online music surveys, they will try to make themselves look legitimate. As the Contester websites point out, radio stations want to award prizes to listeners. So Prize Pigs pretend to be real listeners. Even if they don’t recognize a single song in a survey, the Gamer will still score the songs familiar. They typically give higher scores to songs by artists whose names they recognize. In station database surveys which display artist names, it is common to see brand new releases by established, big-name artists score very familiar and very positive after getting very little radio airplay. Prize Pigs trying to pass themselves off as real listeners will also give lower scores to songs by artists who’ve gotten negative press or may have lost their cool, hip image. Recent examples of brand new releases from artists with highly recognizable names include AC/DC and Miley Cyrus. While unaided Callout was showing typically low Familiarity scores for their new songs, station database online tests with artist names on display showed instant Familiarity for both. In terms of Appeal scores, station database samples scored the new AC/DC song overwhelmingly positive and the new Miley Cyrus song overwhelmingly negative. In both cases, the respondents seemed to be scoring the artist name, regardless of what the song sounded like. Results like these can be indicative of non-listeners participating in online database surveys.
It’s important to remember that our findings do NOT suggest that Radio contests have negative appeal. In fact, 79% of our sample report having at least some interest in contests, but they are not Players. They enjoy the entertainment value of trivia questions, radio games and contests in the same way TV game show viewers enjoy watching the contest and playing along in their heads.
Critical to the success of the research process is controlling sampling on the front end to ensure the integrity of the data on the back end. Panels built on individuals who opt in to the database for reasons such as contesting relinquish control and weaken the statistical reliability of the sample and ultimately the final data.
Next time, online surveys that Normal Radio Fans will do.
Monday, January 4, 2010
Callout & Web Surveys - Part 2 - Normal Fans and Extreme Fans
Exploring the Differences between Callout and Web-based Music Surveys
Part Two – Normal Fans and Extreme Fans
For insight as to why music research results can be so different between Callout and station Website based hook tests, Kelly Music Research is conducting a series of nationwide tests focusing on different components of each. We recently completed a telephone perceptual survey of over 300 random music radio station P1 listeners aged 18 to 54 in the top 25 markets, including landline and cell phone households. These are typical Callout research respondents who participate in music surveys over the telephone. We asked these listeners a series of questions about radio listening and their communication with radio stations.
At least part of the reason music research results differ between radio stations’ Website based surveys and Callout surveys can be traced to the types of listeners who make up each panel sample. Callout panels have more “Normal Fans,” while Web based panels have more “Extreme Fans.” And in most radio audiences, our research shows that Normal Fans outnumber Extreme Fans by about 25 to 1.
In Kelly Music Research’s national telephone survey, only 4% of music station P1’s exhibit Extreme Fan behavior; 96% of those polled can be described as Normal Fans.
Normal Fans are P1’s, often referred to as “Passive P1’s”. Your radio station is their first choice. Yours is their “go-to” station for their favorite music. Normal Fans can be very heavy users of your station, but they will tune away if you play, say or do something that they don’t like. However, Normal Fans are not complainers. They use Radio like they use electricity or any other utility. They turn it on, they change stations and they turn it off without giving it any more thought. Normal Fans account for the lion’s share of every station’s audience and ratings. Normal Fans are the silent majority.
Extreme Fans are different. Also known as “Actives,” Extreme Fans have a strong desire to become more involved with radio stations. Reasons for that desire vary. For example, Extreme Fans might have a higher than average passion for music; these are often called “Purists”. Or they may have an infatuation with air personalities or core artists. Some Extreme Fans have a fascination with the show biz nature of Radio and want to be a part of it. Extreme Fan behavior includes heightened interest in radio station contests, texting jocks and making requests, following personalities on Twitter and Facebook, showing up at promotions and voicing their opinion about programming. Whatever their motivation, Radio’s Extreme Fans feel a much higher emotional connection to, and even worship of, radio stations than Normal Fans do. Extreme Fans are the vocal minority.
Other industries have Normal and Extreme Fans too. Take the Sports and Movie industries as examples. Professional sports franchises have Normal Fans who watch games on TV and occasionally attend home games. Extreme Fans, however, go to all the home games as well as some away games. Extreme Fans comment in player and coach blogs, argue game strategy on call-in shows, know all the players’ first and last names, where each went to college, and whether players are married and have kids. Extreme Fans paint their faces with team colors, make signs and show up at the airport wearing team uniforms to see their players off and welcome them home from road games. Similarly, in the Movie industry, Normal Fans will go to the theater to see the Star Trek movie. Extreme Fans dress up like the characters and attend Star Trek conventions.
Extreme Fans are often characterized as “groupies” or “obsessive” by Normal Fans. Like Sports and Movies, Radio should not ignore Extreme Fans; however, catering to Extreme Fans can alienate the much larger base of Normal Fans. And it is dangerous to think that Extreme Fan opinion and behavior is representative of Normal Fans.
In a simple example, if a DJ says that he or she likes a particular song or artist, that opinion can influence the way a listener scores a song being tested. Normal and Extreme fans often respond differently in music testing. Careful not to be negative about anything the station does, Extremes tend to score everything in degrees of positive and can be biased if they are also a fan of the artist. In Callout surveys, if a Normal Fan doesn’t like a song, he or she tends to say so honestly. In comparative surveys, Callout respondents are not as easily influenced by outside factors such as loyalty to air personalities, stations or artists. There is reduced likelihood for Extreme Fan bias in Callout because respondents are not informed which radio station is fielding the survey and artist names are not revealed.
It is important to note that not all radio station Website visitors are Extreme Fans. Normal Fans visit station Websites too. The #1 reason Normal Fans visit music radio station websites is to listen live to an internet stream. Normal Fans may also glance at pictures, check out weather, traffic or a podcast. But for the most part, Normal Fans don’t participate in radio contests and don’t go out of their way to join advisory panels or voice their opinions.
Over the years, technology has changed our communication apparatus, but the behavior patterns of Normal and Extreme Fans have not. Prior to cell phones and computers, Extreme Fans jammed request lines, attended remotes and wrote letters to DJ’s or PD’s. Meanwhile, the bulk of the audience, the Normal Fans, listened quietly. Today, Normal Fans still listen quietly. And Extreme Fans text, upload, e-mail, follow on Twitter and join station databases. The key to successful research is achieving the proper balance of Normal and Extreme Fan opinion.
Next time, the “Prize Pig” factor.