The X factor in phone surveys

Last Friday night I got a call from the Minnesota Department of Health, asking for 12 to 14 minutes of my time to take a survey. This post is not about how 12 to 14 minutes turned into 19 minutes, even though I answered “no” to most of the questions that would’ve elicited follow-up (e.g., no, I don’t have diabetes; no, I am not a full- or part-time caregiver, etc.). Instead, what really stood out to me during this phone survey was the interviewer.

About three-quarters of the way through the survey, the interviewer seemed to relax. It likely would’ve been an imperceptible shift to someone who didn’t know how strictly interviewers are instructed to stick to the script.

At one point the interviewer asked me something about the percentage of my phone calls that are taken on my mobile phone versus my landline. I didn’t quite understand the question so I asked him to repeat it and I then paraphrased to see if I understood what the survey was asking. The interviewer told me my interpretation was correct. I answered and then he said, “I think they’re trying to prove that cell phones cause cancer or something.”

I’d be willing to bet that wasn’t in the script. And I don’t think he’s supposed to speculate about the end goal of the survey questions, much less share that speculation with the respondent.

Most of the discourse in the industry surrounding phone vs. Web is about the cost difference. It’s more expensive to have a real person literally talk respondents through a survey and ensure that each question is understood and thoughtfully considered. But I don’t hear much about the gamble researchers take in inviting in the human X factor. Whether the interviewer reads too slowly or too quickly, is chatty or robotic, doesn’t involving another person add another variable that can’t be controlled for when analyzing the results? Do the benefits of having a real person conduct the interview outweigh the cons?


One Response to The X factor in phone surveys

  1. Doug says:

    Adding a human into the mix will create more variance in responses. If the goal of the survey is to gather accurate and informative responses from surveyees, having a human read the questions will both guide and distort interpretation for the surveyee. I say there aren’t enough pros to outweigh the cons. I’m a firm believer in “if the survey is done well, you shouldn’t need a human to read the questions.” Great thought-provoking blog, Emily.