Real Respondents Matter

Sampling matters because real respondents matter.

With AI bots infiltrating many of the industry’s big research panels, sampling real respondents has become critical. Harvesting survey panelists off the internet with little more than an email verification has always been suspect; GenAI has made it untenable.

There is a science to survey research – probability sampling. Probability sampling is the bedrock of reliable, accurate and projectable survey data.

With a probability sample, respondents are real. They are selected from a well-defined universe with a probability that is known in advance. No other method does this.
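
To illustrate what “known in advance” means, here is a minimal sketch assuming the simplest case: a simple random sample drawn from a fully enumerated frame. The frame, sample size, and names below are hypothetical, not Modus’s actual procedure.

```python
import random

def simple_random_sample(frame: list, n: int, seed: int = 42) -> list:
    """Draw n units from a fully enumerated frame. Every unit's chance of
    selection is known before the draw: n / len(frame)."""
    rng = random.Random(seed)
    return rng.sample(frame, n)

frame = [f"member_{i}" for i in range(10_000)]  # hypothetical sampling frame
sample = simple_random_sample(frame, 500)
print(f"Known selection probability: {500 / len(frame):.2%}")  # 5.00%
```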

Known Accuracy

A probability sample provides the critical data point for knowing the accuracy of your results: the chance of a respondent being selected for your sample. From that, you know the sampling error and can confidently cite the margin of error.

This avoids the embarrassment of qualifying your results with now-common disclaimers such as “for comparison, a probability sample … would have a margin of error of …”
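
As a rough illustration only (not Modus’s exact methodology), the standard 95% margin of error for a simple random sample can be computed directly from the sample size. The sample size and proportion below are hypothetical.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a simple random sample of size n,
    using the most conservative proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical probability sample of 1,000 respondents:
print(f"+/- {100 * margin_of_error(1000):.1f} percentage points")  # +/- 3.1
```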

You do not have to take our word for this. It is a key finding from the report of the American Association for Public Opinion Research (AAPOR) Task Force on Non-probability Sampling.

AAPOR is one of the few survey research organizations in the world that maintains integrity and ongoing research around proper survey methodology, including explicit standards around reporting non-probability samples.

“AAPOR has long maintained reporting margin of sampling error with opt-in or self-identified samples is misleading.”

Known Population

We know the population from which we select our samples.

Knowing the universe, and making sure it is the group you are interested in, is critical to projecting survey results onto that population. With unscientific sampling you simply cannot do this (although many claim they can). As AAPOR stated in its groundbreaking report:

“The dramatic rise in the use of opt-in panels has been premised on a willingness to accept overwhelming coverage and selection error.”
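
To make the projection point concrete, here is a minimal sketch assuming a universe of known size and a weighted sample estimate; both figures are hypothetical. Only with a defined frame like this is such a projection defensible.

```python
# Illustration only: projecting a sample estimate onto a known universe.
# The frame size and survey result below are hypothetical.
N = 2_000_000   # size of the well-defined universe, known in advance
p_hat = 0.42    # weighted sample proportion answering "yes"

projected_total = p_hat * N
print(f"Projected number of 'yes' answers in the population: {projected_total:,.0f}")  # 840,000
```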

Reliability

Unscientific sampling generates respondents who are motivated to earn modest financial rewards, so they complete a lot of surveys. Such respondents will misrepresent themselves to qualify for surveys and are deeply affected by the learning effects of completing so many of them.

Real respondents are mostly motivated by the opportunity to share their opinions. Without direct financial incentives (which Modus does not offer to its panel members), respondents lack the motivation to misrepresent themselves. That’s what makes them real.

Modus Research chooses all of its panel members randomly; respondents can’t join our panels without being invited via random probability telephone calls. They are real respondents.

A groundbreaking study conducted by the Market Research and Intelligence Association (MRIA) revealed that members of opt-in panels:

  • Are overwhelmingly motivated by financial reward
  • Belong to many panels and complete many surveys
  • Complete surveys at an alarming frequency and often speed through them

Unscientific opt-in panels are, in short, replete with professional (fake) respondents. Such respondents will often misrepresent themselves to earn modest financial rewards. Data from such panels are not reliable and should not be used to make important decisions.

With a Modus probability panel you get real respondents.