Published Nov 13, 2020
Debbie Senior, VP, Product Automation
In line with the growing use of technology platforms and automation for online research, there is a lot of discussion about quality. Much of the focus is on panel quality and the removal of poor-quality respondents during or after fieldwork, but very little is said about the quality of the survey inputs or the survey design itself. Yet design has a key role to play, especially in light of the greater democratisation of research and increasing levels of self-serve/DIY.
Participants are just like you and me – they lead busy lives, they are constantly connected, and they notice when someone is inconsiderate of their time or when a survey is boring. They take surveys for many reasons – monetary reward, interest in the topic, the ability to influence decision making, or just for fun. If a survey doesn’t follow a logical thread and isn’t engaging, they either drop out or, in the worst case, stay in and provide inaccurate data.
It is a key reason why Toluna Start, the world’s first end-to-end consumer intelligence platform, offers a variety of automated solution templates. Built on methodologies validated through years of research expertise, these templates let any user create and launch their own concept, pack, or ad pre- or post-test survey and trust the content quality. This includes the relevance of the metrics/questions that form the design, the way they are worded and the order in which they are asked, consistency in answer scales, translation accuracy, red herrings, and ensuring respondents can see/hear the stimulus properly. All of our surveys are device agnostic and mobile optimised.
In turn, high-quality inputs translate into trusted insights and more accurate decision making with less risk. A well-designed survey also creates a better experience for your audience of interest, stronger engagement with the survey and richer feedback, and optimises your budget due to higher completion rates.
There is definitely a skill to creating high-quality surveys. But if a template isn’t available, you don’t have an experienced researcher to assist, and you need to create your own survey, here are a few tips that might help.
Be clear about your survey objectives and provide a logical flow.
At the outset, you should be clear about the objectives of your survey and the business questions it needs to answer. These objectives inform the data you need to collect and the questions you need to ask to provide that data – plus any other context you need to add extra interpretation to the insights. This will help you to focus the survey on content that is truly essential.
The introduction to your survey matters: it needs to be engaging – explaining the purpose in a welcoming, friendly way – but should also reflect best practice by stating the survey length, confidentiality, and the voluntary nature of participation. At the close of the survey, it is equally important to signpost the end clearly and allow final comments on the topic.
Mix it up… don’t ask the same question twice.
Many surveys contain a mix of closed and open questions. The latter are good for building early rapport or at the end to make sure nothing is missed. Additionally, it allows you to explore what participants really think. But too many can result in poor quality answers or non-completion of surveys. If you have too many then it might be time for an additional qual phase – now very easily done online.
Don’t ask people for a long-winded answer. Or if you do, don’t expect a response.
Closed questions should take less time than open-ended questions, should be easier to answer, and make the survey more straightforward.
Provide logical answer choices, but don’t overwhelm the respondent.
Aim for no more than 12 answer options unless they are one- or two-word options.
Make keywords prominent using bold/underline/font colours. Offer an ‘other (please specify)’ or ‘none of these’ at the end of the list, or a ‘prefer not to answer’ for more sensitive topics. If more than one answer could apply, enable multi-choice. Don’t use jargon, technical language, abbreviations, or complex words (‘chronological’ is known by only one-fifth of the UK population). And don’t expect the respondent to rely too heavily on memory.
Make the survey visual and add interest to the questions you ask.
Mix question styles to avoid too much repetition – for example, the Toluna Start platform enables drag-and-drop ranking questions, card sorting exercises, battle questions, and heat/click maps to mix in with typical single/multi-choice options. Don’t allow overlapping or ambiguous answer codes. While grid questions are efficient, they can be monotonous and less friendly for mobile users. In Toluna surveys, grids are automatically transformed into an accordion display on mobile devices.
If you ask sensitive questions then be sensitive.
If the survey contains sensitive or embarrassing topics like money, voting, religion, or sexual activity, it can lead to refusals to continue or inaccurate feedback. Other topics like driving under the speed limit, recycling, giving to charity, or alcohol consumption can be susceptible to social desirability or prestige bias where participants choose a response they think is viewed more favourably by society than the one that actually applies to them. In these cases, the use of indirect questioning like ‘other people have told us …’ produces a better result.
You should also consider order effects. For example, brand awareness should always be asked before usage questions and before any concept/ad stimulus or logos are shown – otherwise you prime respondents.
Randomisation is often used to reduce bias, for example, randomising the order in which brands appear when they are being compared. There are some exceptions. Brands in a long list might be easier to find if they are in alphabetical order, and sometimes a certain concept or design should be seen first to establish sufficient knowledge (e.g. a standard model before a more advanced one).
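If you are scripting a survey programmatically, the two rules above – randomise answer order to reduce bias, but keep anchor options like ‘other (please specify)’ and ‘none of these’ pinned to the end, and leave long lists alphabetical – can be sketched in a few lines. This is purely illustrative and not part of any survey platform’s API; the function names and option labels are our own assumptions.

```python
import random

# Hypothetical anchor options that should never be shuffled; survey
# platforms typically pin these to the end of the list.
ANCHORS = ("Other (please specify)", "None of these")

def randomise_options(options, anchors=ANCHORS, seed=None):
    """Shuffle the substantive options, keeping anchors last in order."""
    rng = random.Random(seed)  # seed only for reproducible examples
    body = [o for o in options if o not in anchors]
    tail = [o for o in options if o in anchors]
    rng.shuffle(body)
    return body + tail

def alphabetical_options(options, anchors=ANCHORS):
    """The exception: long brand lists are easier to scan A-Z."""
    body = sorted(o for o in options if o not in anchors)
    return body + [o for o in options if o in anchors]

# Usage: each respondent sees the brands in a different order,
# but the anchor options always appear at the end.
brands = ["Brand A", "Brand B", "Brand C",
          "Other (please specify)", "None of these"]
shown = randomise_options(brands)
```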
There are many different opinions about response scales and the right type to use when measuring attitudes, preferences, likelihood to buy, or satisfaction.
A simple rule is to keep them as consistent as possible – e.g. 5, 7, or 10 points, whether labelled or not. Consider the order of the scale points and the balance of positive to negative options, both to avoid confusing the participant and to make the data easier for you to interpret.
Keep it relevant. Ensure you’re asking questions that matter to you.
There is also a lot of debate about survey length. Keeping a survey short and to the point is good practice, but if the survey needs to run 30 minutes or more, respondents will still engage and the survey will still produce valid data – provided the experience is good and the survey is well designed. If the survey is in multiple languages, also be mindful that translated questionnaires can take 5-20% longer to administer than an identical questionnaire in English.
And remember that the survey is a form of conversation.
It should have a logical flow that is easy to understand, builds rapport, and takes participants on a journey. Put interesting, key questions upfront to gain early engagement, leave the more difficult or sensitive questions until later to avoid creating barriers, and move from general topics through to more specific modules. You can also ask participants what they liked or disliked about the survey to help you improve next time.
Finally, just before you launch, review your survey and re-check that it gives you the information you want and meets your objectives, that its purpose is clear, and that every question is necessary. Put yourself in the shoes of the participant – would you complete it?