Monday, March 8, 2010

The Final Results - Digital Research Delivers

Part Five – The Secret Sauce in Digital Callout

It’s time to bust a myth. Even though Radio has embraced the power of computers for every other aspect of its business, when it comes to music testing, many broadcasters are still clinging to old, pre-digital methods. Favoring telephone over computer models, many Radio programmers and consultants think that it is not possible to get reliable music research through online computer surveys. We have found this notion to be FALSE.

Conducted properly, online music research on a computer can be every bit as accurate and reliable as telephone callout or auditorium testing, which uses dials, pencil and paper, or other accepted scoring devices. Further, utilizing digital platforms for music testing is more cost-efficient. This much is crystal clear:

The Secret Sauce in online music testing is in the SAMPLE PANEL and PROCEDURE,
NOT THE SURVEY APPARATUS - i.e. phone vs. computer.

During the past several months Kelly Music Research has conducted nationwide testing of a new Digital Callout program for online hook research with comparisons to Telephone Callout and other Online Testing models. In all, we compiled and reviewed over 5,000 online test cases. We tested many tactics and approaches to the different facets of music testing including recruiting, screening, scoring options, survey presentation, premiums, respondent verification and data analysis. Some proved worthy, others did not. All were enlightening.

Random recruiting using a variety of outreach efforts combined with strict screening criteria, unbiased surveys and verification procedures yielded excellent results in our online Digital Callout research testing. Survey design has the most influence over survey results. Some key findings:

Key Finding #1 – Apparatus has no significant impact on scores
Our Digital Callout panel of online respondents was drawn primarily from the hundreds of thousands of Telephone Callout participants in the Kelly Music Research database. Instead of rating songs on the telephone, respondents rated songs on their computer. We found the test scores collected on the computer to be highly consistent with those collected on the phone. In other words, panel members rated songs the same way – regardless of whether the respondent was rating it on a phone or on a computer.

Key Finding #2 – Normal Fan (aka Passive Listener) participation is essential
Early in our research (see Part 2) we found that 96% of Radio listeners are Normal, 4% are Extreme. Many station databases are overpopulated with Extreme Fans, which skews research findings drawn from those databases. To accurately represent the whole audience, other forms of outreach such as random landline and cell phone recruiting are necessary to include representation from the 96% Normal Fan/Passive Listener base.

Key Finding #3 – Respondents are biased by Artist Name, Song Titles
Online surveys that display Artist Name and Song Title often produce test results that are not consistent with telephone callout and our unaided Digital Callout online model. Callout and unaided online surveys focus listener reaction on the audio. Artist and Title aided online surveys often distract listener reaction from the audio hook and draw a biased perceptual reaction to the artist names or song titles.

Key Finding #4 – Digital Callout is significantly more cost efficient
The improved cost efficiency of a well-executed digital callout or library research program cannot be ignored. Our best-testing digital callout models included broader recruitment and cash incentive premiums paid to survey participants. Even with a cash premium factored in, overall costs were about 20% lower than traditional callout due to improved incidence and reduced call-center labor costs.
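To make the cost mechanics concrete, here is a hypothetical back-of-the-envelope sketch. Every figure below (contact counts, incidence rates, incentive and labor costs) is invented for illustration and is not from the Kelly Music Research study; the point is only to show how better incidence and lower per-complete labor can outweigh a cash premium.

```python
# Hypothetical cost-per-complete comparison. All numbers are invented
# assumptions for this sketch; actual costs vary by market and vendor.

def cost_per_complete(outreach_cost, contacts, incidence, incentive, labor_per_complete):
    """Total program cost divided by the number of completed surveys."""
    completes = contacts * incidence
    total = outreach_cost + completes * (incentive + labor_per_complete)
    return total / completes

# Traditional telephone callout: no cash premium, but low incidence
# and heavy live-interviewer labor per completed survey.
phone = cost_per_complete(outreach_cost=500, contacts=2000, incidence=0.05,
                          incentive=0.0, labor_per_complete=12.0)

# Digital callout: cash premium paid to each respondent, offset by
# better incidence and far less call-center labor.
digital = cost_per_complete(outreach_cost=500, contacts=2000, incidence=0.10,
                            incentive=6.0, labor_per_complete=5.0)

savings = 1 - digital / phone
print(round(phone, 2), round(digital, 2), round(savings, 2))
```

With these invented inputs, the digital model comes out roughly 20% cheaper per completed survey, in line with the finding above, even though it pays a premium the phone model does not.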

Any successful music testing strategy should include several important objectives, including:
Accurate representation of your existing and potential audience
Statistically reliable data from a controlled sample group
Based on our extensive research of the research, here’s our list of suggested DOs and DON’Ts when it comes to online music testing:

Do Cast a wide net – Use traditional random telephone and cell phone outreach with other online and offline recruiting methods to build your listener panel.
Do Control the panel – Know at least the age, gender, ethnicity, geography and music preferences of all panelists.
Do Manage the panel – Only invite panelists who meet your screening criteria.
Do Balance the sample – Set quotas to create a well balanced control group.
Do Verify the data – Insert control mechanisms to verify respondent identity.
Do Pay incentives – Listeners appreciate the thank you and response rates improve.
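For readers who run their own panels, the screening and quota steps above can be sketched in a few lines of code. This is a minimal illustration, not the Kelly Music Research system; the field names (age, gender, format_pref) and the quota scheme are assumptions chosen for the example.

```python
# Minimal sketch of panel screening and sample balancing.
# Field names and criteria are invented for illustration.

def meets_screen(panelist, target_format, age_range):
    """'Manage the panel': only invite panelists who meet screening criteria."""
    return (panelist["format_pref"] == target_format
            and age_range[0] <= panelist["age"] <= age_range[1])

def balance_sample(panelists, quota_per_cell):
    """'Balance the sample': fill per-gender quota cells so no segment dominates."""
    filled, sample = {}, []
    for p in panelists:
        cell = p["gender"]
        if filled.get(cell, 0) < quota_per_cell:
            filled[cell] = filled.get(cell, 0) + 1
            sample.append(p)
    return sample

panel = [
    {"age": 34, "gender": "F", "format_pref": "AC"},
    {"age": 51, "gender": "M", "format_pref": "AC"},
    {"age": 19, "gender": "F", "format_pref": "Rock"},  # fails the screen
    {"age": 29, "gender": "F", "format_pref": "AC"},
    {"age": 40, "gender": "M", "format_pref": "AC"},
]

screened = [p for p in panel if meets_screen(p, "AC", (25, 54))]
sample = balance_sample(screened, quota_per_cell=1)
print(len(screened), len(sample))
```

The same pattern extends to ethnicity, geography, and music-preference cells; the key design choice is that quotas are enforced before fielding, not patched after the fact.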

Don’t Open the floodgates – You can’t control the sample group if you use everyone who opts in to your panel.
Don’t Display Artist Name & Title – Test the song, not the popularity or image of the artist.
Don’t Make it a game – Keep it research, not a promotion.
Don’t Offer prizes – A sweepstakes makes research vulnerable to manipulation, and Normal Fans don’t respond well to contests.
Don’t Use all surveys - Clean the data before you tabulate results. Scrutinize every survey and toss out all suspect cases.
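The final "Don't" — clean the data before tabulating — can also be sketched. The two checks below (speeders who finish implausibly fast, and straight-liners who give every hook the same score) are common survey-hygiene heuristics, not a description of Kelly Music Research's actual cleaning rules; thresholds and field names are invented.

```python
# Hedged sketch of survey cleaning: flag and toss suspect cases
# before tabulation. Thresholds and field names are assumptions.

def is_suspect(survey, min_seconds=60):
    """Flag speeders (too fast) and straight-liners (one rating for every hook)."""
    too_fast = survey["duration_sec"] < min_seconds
    straight_line = len(set(survey["ratings"])) == 1
    return too_fast or straight_line

surveys = [
    {"duration_sec": 240, "ratings": [4, 5, 2, 3, 5]},
    {"duration_sec": 35,  "ratings": [4, 3, 5, 2, 4]},  # speeder
    {"duration_sec": 300, "ratings": [5, 5, 5, 5, 5]},  # straight-liner
    {"duration_sec": 180, "ratings": [1, 4, 3, 5, 2]},
]

clean = [s for s in surveys if not is_suspect(s)]
print(len(clean))
```

Here two of the four cases are tossed before tabulation; in practice a real program would layer on identity verification and duplicate detection as well.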

Short-cutting any of the above will cheapen the process and your results. You will get what you pay for. Online computer research does cost less. But done right, it is not free.