Concept Testing – Ensuring Success through Quality

Published Dec 01, 2020

In September, for the launch of our Toluna Start platform, we created a range of end-to-end automated solutions, including one for concept testing. These surveys can be set up, customised, and launched within minutes, but as well as offering ease and speed, they provide a baseline level of quality that users can rely on. Here are our best practice tips for any concept testing survey.

Are you reaching the right people?

The audience for your concept test will vary from survey to survey, although concept testing typically focuses on category buyers/considerers and includes different brand/usage levels within the market. For concepts with broad potential appeal the audience can be quite wide; in other cases it is relatively niche. For this flexibility of reach, you need a sufficiently large, representative panel with high-quality standards for recruitment, vetting, and management.

You also benefit if the panel has already profiled its members on a variety of category behaviours and attitudes, as these don't need to be asked again in the survey, making the survey more efficient because respondents can focus on the core questions. The ability, as in Toluna's platform, to create your own custom targets and interlock them with standard demographics and profiling questions as part of the targeting process adds another layer of customisation, relevancy, and quality.
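To make the interlocking idea concrete, here is a minimal Python sketch of how a custom target and standard demographics expand into interlocked quota cells. The dimensions, levels, and quota figures are illustrative assumptions, not an actual Toluna Start configuration.

```python
from itertools import product

# Hypothetical targeting dimensions: a custom category target
# interlocked with standard demographics. Names, levels, and the
# total are illustrative only.
dimensions = {
    "gender": ["female", "male"],
    "age_band": ["18-34", "35-54", "55+"],
    "buyer_frequency": ["frequent", "occasional"],  # custom target
}
total_completes = 600

# Interlocking takes the cross-product of all levels, so every
# combination becomes its own quota cell with an explicit target.
cells = list(product(*dimensions.values()))
per_cell = total_completes // len(cells)  # 600 / 12 = 50 per cell

for cell in cells:
    print(dict(zip(dimensions.keys(), cell)), "target:", per_cell)
```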

Is your stimulus up to scratch?

Concept testing can include image, text, and/or video stimulus, and these should be of as high a resolution as possible; you don't want the quality of the stimulus to negatively impact respondents' ability to rate the concepts fairly.

Given the range of format options and device types the stimulus needs to display on, it is also important to verify that the stimulus can be seen and heard properly, so building in a quality check for this is a key element. We run a technical quality check at the start of the survey, before revealing the survey stimulus, and anyone who cannot answer it adequately is screened out of the survey.
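As an illustration of this kind of check, here is a minimal Python sketch of a pre-survey media screener: a neutral test clip is shown, and anyone who cannot identify it is routed out before the concept stimulus. The question wording, answer options, and correct answer are hypothetical examples, not our actual survey content.

```python
# Minimal sketch of a pre-survey media check; all text is hypothetical.
TEST_QUESTION = "Which word appeared in the short test video?"
OPTIONS = ["sunrise", "harbour", "meadow", "I could not see or hear the video"]
CORRECT_ANSWER = "sunrise"

def passes_media_check(answer: str) -> bool:
    """True only if the respondent correctly identified the test clip."""
    return answer.strip().lower() == CORRECT_ANSWER

def route_respondent(answer: str) -> str:
    # Respondents who fail the check never see the concept stimulus.
    return "continue" if passes_media_check(answer) else "screen out"

print(route_respondent("sunrise"))  # continue
print(route_respondent("harbour"))  # screen out
```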

Which metrics or attributes matter most?

We recognise that clients have their own preferences for survey content and that metrics need to fit the specific objectives of each test. Our automated solution contains a full metric set with the ability to switch the relevant options on or off each time. However, some metrics are consistently important. Our core recommendation is to include purchase intent, likeability, likes/dislikes, distinctiveness, and brand fit, since strong results on these indicate a higher chance of a winning concept: one that people like or love, stands out from the rest, aligns with the brand, and drives higher purchase potential. We advise linking concepts with the brand where relevant; you don't want a concept to detract from the brand, and you want the test to be as realistic as possible.

Adding a price enables a more realistic assessment by the respondent; you can also test concepts both priced and unpriced to see what impact the price has. You can then bolt on tools such as a text highlighter or heatmap to dig into the specific detail of the concepts you are testing: which parts work, which don't, and which are confusing.

How do you manage bias and consistency?

Within concept testing, there are typically two approaches: sequential monadic or monadic.

The former means that one respondent tests multiple concepts in the same survey, whereas the latter ensures they see only one. We enable both, as we appreciate the need for flexibility and pragmatism, especially in making budgets stretch further, but in terms of managing bias, monadic wins hands down: it ensures a respondent isn't influenced by the other options, giving a 'clean read'. A monadic approach is also better when the concepts are similar to one another, and it better reflects a real-life purchase choice.
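For illustration, here is a minimal Python sketch of the two assignment approaches. The concept names are placeholders, and this is only a sketch of the routing logic, not our production implementation.

```python
import random

concepts = ["Concept A", "Concept B", "Concept C"]  # placeholders

def monadic() -> list[str]:
    """Each respondent rates exactly one randomly chosen concept,
    giving a 'clean read' uninfluenced by the other options."""
    return [random.choice(concepts)]

def sequential_monadic() -> list[str]:
    """Each respondent rates every concept one at a time, in a
    randomised order so no concept always benefits from being first."""
    order = concepts.copy()
    random.shuffle(order)
    return order
```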

Once you have captured top-of-mind feedback on the concept being tested, we suggest letting the respondent view the stimulus again whenever they need to, so that they understand what they are rating and can rate it as accurately as possible.

This is especially important for complex or challenging concepts and those involving a lot of text such as in financial services proposition testing.

There is a lot of debate about survey answer scales, but whichever option you choose, a key requirement is to keep it consistent throughout the survey. This makes the survey easier for respondents to follow, ensures that your results, including top boxes, can be interpreted consistently, and enables you to create and match results against norms (we have over 10,000 in our global database). For the same reasons, the order in which the metrics are asked also needs to be kept the same within a survey and across surveys.
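To show why a consistent scale matters for comparability, here is a minimal Python sketch of a top-2-box calculation on an illustrative five-point purchase-intent scale. The scale wording and response figures are assumptions made for the example.

```python
# Illustrative 5-point purchase-intent scale; using the same scale
# (and metric order) for every concept keeps top-box scores
# comparable and matchable against norms.
SCALE = ["definitely would not buy", "probably would not buy",
         "might or might not buy", "probably would buy",
         "definitely would buy"]
TOP_BOX = set(SCALE[-2:])  # top-2-box: 'probably' + 'definitely'

def top_two_box(responses: list[str]) -> float:
    """Share of respondents answering in the top two scale points."""
    return sum(r in TOP_BOX for r in responses) / len(responses)

responses = (["definitely would buy"] * 24 + ["probably would buy"] * 31
             + ["might or might not buy"] * 25
             + ["probably would not buy"] * 20)
print(f"Top-2-box: {top_two_box(responses):.0%}")  # Top-2-box: 55%
```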

Earlier we talked about reaching the right target audience, but another aspect is ensuring that audience is evenly represented across each of the concepts being tested. If your survey reaches females of different age groups who are a mix of frequent and less frequent buyers in your category, and one concept is rated by 70% frequent buyers while another is rated by only 30%, that sample bias may skew the results, making it difficult, or perhaps impossible, to reach an accurate decision.

Our R&D team has spent a lot of effort developing an algorithm that ensures the mix of respondents you select is balanced by concept, in either a monadic or sequential monadic approach. This quota balancing considers demographics, profiling, and any custom targets you create.
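The details of that algorithm are beyond the scope of this post, but the underlying principle can be sketched with a simple least-fill rule: route each incoming respondent to the concept that is currently furthest behind for that respondent's profile. The Python below illustrates the principle only; it is not our production algorithm.

```python
from collections import defaultdict

concepts = ["A", "B", "C"]  # placeholder concept labels
cell_counts: dict[tuple, int] = defaultdict(int)

def assign_concept(profile: tuple) -> str:
    """profile is e.g. ('female', '18-34', 'frequent').
    Pick the concept with the fewest respondents of this profile so
    far, so every concept ends up with the same respondent mix."""
    concept = min(concepts, key=lambda c: cell_counts[(c, profile)])
    cell_counts[(concept, profile)] += 1
    return concept

# Example: ten frequent 18-34 female buyers arrive in sequence and
# are spread evenly across the three concepts.
profile = ("female", "18-34", "frequent")
for _ in range(10):
    assign_concept(profile)
print({c: cell_counts[(c, profile)] for c in concepts})
# {'A': 4, 'B': 3, 'C': 3}
```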

Identifying a winner and making quality decisions

The combination of quota balancing, relevant and consistently designed metrics and survey content, and the right audience reach ensures you have good-quality results from which to identify a winner or winners. All you need then is a real-time, easy-to-interpret insight dashboard showing a side-by-side comparison of concepts across all KPIs with significance testing applied, plus the ability to delve more deeply into different audience groups, all of which you'll find in our integrated and automated solution. You can then conduct online qualitative discussions to build a richer understanding of why some concepts worked better than others and/or how to develop the winning ones further.
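As an example of the kind of significance testing such a dashboard applies, here is a minimal Python sketch of a pooled two-proportion z-test comparing top-box scores between two concepts. The sample figures are invented for illustration, and the dashboard's exact statistical method may differ.

```python
from math import sqrt, erf

def two_prop_z_test(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two top-box
    proportions (e.g. purchase intent for Concept A vs Concept B),
    using the pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative figures: 55% vs 44% top-2-box on n=200 per concept.
p = two_prop_z_test(110, 200, 88, 200)
print(f"p-value: {p:.3f}")  # roughly 0.028, significant at 95%
```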
