Incentivised surveys attract more than just honest respondents – they also attract fraudsters and inattentive users trying to game the system. To separate genuine respondents from fraudsters, we need to make it difficult for dishonest respondents to slip through (without punishing genuine participants along the way).
Protecting our surveys through in-survey checks is an essential part of a holistic quality assurance system. Surveys without quality checks are essentially fraud magnets, as they make it extremely easy for dishonest respondents to make it through and collect their incentive.
In a random selection of 61 surveys submitted to Toluna for scripting in August 2025, we found that 64% of surveys did not have any quality checks included in their questionnaire design. This highlights the need for further education on fraud prevention in the industry.
Let’s look at three core principles to designing surveys that are both fraud-resistant and human-friendly.
1. Make it difficult for fraudsters, easy for real respondents
The most effective way to fraud-proof a survey is to weave detection into the design, not bolt it on after the fact. When quality checks are subtle and embedded, they don’t interrupt the experience for real participants, but they trip up fraudsters quickly.
Recommendations include:
Avoid obvious screening questions. Fraudsters recognise typical filters (e.g. “Did you rent a car in the last 7 days?”) and can answer strategically. Instead, embed screeners in neutral groupings, e.g. “Which of the following activities have you done in the last 7 days?” with a mix of targets and decoys. In the same dataset analysed above, the Toluna team found that almost 20% of surveys had obvious or leading screening questions.
- Why it’s important: Fraudsters want to qualify for as many surveys as possible, so an easy screening section is like an open invitation. If they can’t figure out how to qualify for your survey and keep getting screened out, they have to waste several survey attempts finding a way in. In the meantime, honest respondents can fill your survey and take away opportunities from fraudsters.
Use fake or unlikely options. For example, adding a made-up brand to a list of real car brands, or adding ‘space travel’ to a list of hobbies.
- Why it’s important: Fraudsters don’t care about the content of what they’re answering. Because they don’t know the topic, it’s harder for them to spot impossible or illogical answer options, which makes them more likely to fall into these traps (a code sketch at the end of this section shows how such traps can be checked automatically).
Overstatement traps work well too. Include one or two rare or unlikely items in multi-choice lists. If someone claims they’ve bought a DVD, car insurance, and chocolates all in the past seven days, they may be over-claiming.
- Why it’s important: Fraudsters have learned over the years that if they select a lot of options in a multi-select question – especially within the screening section – they have a higher likelihood of staying in the survey. Therefore, they often tend to select lots of options in these questions.
Include open ends. The balance is delicate here, as too many open ends can annoy genuine respondents; however, open ends create an extra layer of complexity for fraudsters.
- Why it’s important: Fraudsters have to program their bots to give good open ends that are, at the same time, not repetitive. This creates an extra hurdle: some will abandon the survey, while others put little effort into their answers, making them easy to catch with automated open-end checks.
Leverage automation to remove poor responses in real time.
- Why it’s important: Fraudsters know that many checks happen after survey completion. By the time they’re identified as low quality, they’ve already received their incentive (which is often automatically awarded after completing a survey) and cashed out. This makes surveys without real-time terminations even more appealing for targeted fraud attacks.
These checks are unintrusive to honest users but stop bad actors before the data is compromised or rewards are issued.
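To make a few of these recommendations concrete, here is a minimal sketch of how trap options, over-claiming, and repetitive open ends might be checked in real time on the survey back end. Everything here is an illustrative assumption: the option codes, thresholds, and response format are invented for the example, not Toluna’s actual implementation.

```python
# Illustrative real-time trap checks for a multi-select screener and an
# open-end question. All codes and thresholds are hypothetical examples.

FAKE_OPTIONS = {"brand_zentara"}      # made-up brand planted among real ones
UNLIKELY_OPTIONS = {"space_travel"}   # rare/implausible decoy activities

def check_screener(selected: set, total_options: int) -> bool:
    """Return True if the respondent should be terminated."""
    # Selecting a fabricated brand is a hard fail: nobody genuinely bought it.
    if selected & FAKE_OPTIONS:
        return True
    # Decoy picks, or ticking nearly every option, suggest over-claiming.
    if selected & UNLIKELY_OPTIONS or len(selected) / total_options > 0.8:
        return True
    return False

def check_open_end(answer: str, previous_answers: list) -> bool:
    """Flag empty, ultra-short, or copy-pasted repetitive open ends."""
    text = answer.strip().lower()
    return len(text) < 5 or text in (p.strip().lower() for p in previous_answers)

# A respondent who ticks the decoy brand is screened out immediately,
# before any incentive is issued.
print(check_screener({"brand_a", "brand_zentara"}, total_options=10))  # True
```

Terminating on these flags during the survey, rather than after fieldwork, is what keeps incentives out of fraudsters’ hands.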
2. Basic quality checks have their place, but smart ones are the future
Yes, speeding flags, straight-lining, and attention-check questions still help. But they likely catch more real respondents than fraudsters: tired ones, distracted ones, perfectly genuine people. Fraudsters? They know these checks inside out and have adapted their algorithms and tools to circumvent them.
Evolve your defences with smarter techniques:
- Logic checks: These flag answer combinations that are highly unlikely or contradictory. For example: claiming to be 18 years old with five children, or saying you buy a brand while showing no awareness of it. (A sketch just after this list shows what such rules can look like.)
- Behavioural signals: Keep an eye on how respondents interact with the survey, not just what they answer. Do they use copy & paste? Is the typing speed too fast? Are the mouse movements human? All this behavioural metadata from within the survey gives further insight into the nature of a survey’s respondent.
- Aggregated data checks: It is sometimes overlooked, but checking the data at an aggregate level is just as important as looking at individual respondents. If KPIs look very unexpected, the survey may have been compromised by fraud that isn’t visible at the individual-respondent level. In such cases, looking at the overall data patterns can make it clear that something is off (a sketch at the end of this section illustrates one such check). Questioning our data is key to achieving good quality results.
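As an illustration of the first two ideas, here is a minimal, hypothetical sketch of rule-based logic checks and behavioural flags. The field names, rules, thresholds, and metadata format are assumptions made for the example; real rules would be tailored to the questionnaire.

```python
# Hypothetical logic checks and behavioural flags. Field names, rules, and
# thresholds are illustrative; real rules depend on the questionnaire.

def logic_flags(resp: dict) -> list:
    """Flag answer combinations that are contradictory or highly unlikely."""
    flags = []
    # An 18-year-old claiming five children is demographically implausible.
    if resp.get("age", 99) <= 18 and resp.get("num_children", 0) >= 5:
        flags.append("age_vs_children")
    # Claiming to buy a brand while being unaware of it is contradictory.
    if resp.get("buys_brand_x") and not resp.get("aware_of_brand_x"):
        flags.append("purchase_without_awareness")
    return flags

def behaviour_flags(meta: dict) -> list:
    """Flag suspicious interaction metadata captured during the survey."""
    flags = []
    if meta.get("paste_events", 0) > 0:        # pasted open-end text
        flags.append("pasted_open_end")
    if meta.get("chars_per_second", 0) > 15:   # implausibly fast typing
        flags.append("superhuman_typing")
    return flags

resp = {"age": 18, "num_children": 5, "buys_brand_x": True, "aware_of_brand_x": False}
meta = {"paste_events": 2, "chars_per_second": 22}
print(logic_flags(resp) + behaviour_flags(meta))
```

Individually, each of these is a weak signal; requiring several flags before terminating keeps false positives on genuine respondents low.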
The trick here is to continuously evolve our in-survey checks alongside fraudsters’ tricks. Traditional quality checks still have their place, but fraudsters develop their technology on an ongoing basis, and we need to do the same to keep up with them.
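As for the aggregate-level check mentioned above, here is a minimal sketch, assuming historical waves of the same KPI are available as a benchmark. The data and the three-sigma cutoff are illustrative assumptions, not a prescribed method.

```python
# Hypothetical aggregate check: flag a survey whose KPI deviates sharply
# from historical waves. The data and z-score cutoff are illustrative.

from statistics import mean, stdev

def kpi_looks_anomalous(current: float, past_waves: list, z_cutoff: float = 3.0) -> bool:
    """True if the KPI sits more than z_cutoff standard deviations from the norm."""
    mu, sigma = mean(past_waves), stdev(past_waves)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_cutoff

# Brand awareness has historically sat around 40%; this wave reports 78%.
history = [0.41, 0.39, 0.42, 0.40, 0.38]
print(kpi_looks_anomalous(0.78, history))  # True: worth investigating for fraud
```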
3. Common pitfalls to avoid
With all the quality checks, data points, and talk about fraud, it’s easy to start seeing patterns or issues that may not be there. Even well-meaning quality control can backfire. Here’s what to watch out for:
Over-correcting: Using too many trap questions, harsh speed limits, or too many ‘one strike and you’re out’ rules can remove real respondents. This not only results in potentially biased data, but also in a bad respondent experience, as respondents may feel they’re being unfairly removed. That makes them less likely to participate in research in the future, and this in turn creates more opportunities for fraudsters.
Assuming perfect grammar means AI: ChatGPT tends to use hyphens and semicolons, yes, but so do some humans. Just because an answer has perfect punctuation and spelling doesn’t mean it’s bot-generated. But it’s definitely worth combining this signal with other data points!
Ignoring cultural, generational, and device nuance: Some regions may prefer shorter responses; others value detail. Language structure, device habits, and even survey etiquette vary market to market. This is especially important to remember when running multi-country studies.
Assuming automation is everything: Automated real-time checks are critical, but human review still plays a role, especially with open ends, contradictions, and context-specific fraud indicators. A healthy combination of artificial and human intelligence is the key to achieving high data quality.
Remember: balance is everything
To fraud-proof a survey, think like a fraudster, then design like a researcher. The best systems are layered, adaptive, and invisible to real users. Start with thoughtful questionnaire design, mix in smart quality logic, and stay agile. And always ask: Does this feel fair to a genuine respondent?
Because at the end of the day, we’re not just filtering out fraud. We’re protecting the voices we do want to hear.
Toluna’s QSphere brings these principles to life, using AI and layered logic to keep surveys high-quality and make sure real voices are heard. Learn more about how QSphere safeguards survey integrity here.