7.2.1: Interpreting Survey Data
Communicators face a formidable task in evaluating and interpreting information generated by polls and surveys. To avoid the common pitfalls and mistakes that arise when using survey data, you must ask important questions about how the survey was conducted, how the respondents were selected, and how the questions were worded and ordered.
Evaluating the results of a survey involves both statistical analysis and interpretive skill. The person who conducts and analyzes the survey data has great sway in determining how the data get interpreted. It is up to you to carefully critique any information based on a survey and apply stringent standards of evaluation. When you are considering using survey information, you should have the answers to the following questions:
1. Who sponsored or paid for this survey, and who conducted it? Serious bias can enter into a survey design if the sponsoring agency or the firm conducting the survey has a particular ax to grind.
2. Who was interviewed? What population was sampled?
3. How were people selected for the interviews? In other words, was it a probability or a non-probability sample? If a non-probability sample was used, the results cannot be generalized to a larger population.
4. How many people were interviewed? What was the size of the sample? Were the results categorized into specific sub-samples or subgroups, and if so, what was the size of those smaller groups?
5. If a probability sample was used, what was the range of sampling error and the level of confidence for the total sample? What were those figures for any sub-samples?
6. How were the interviews conducted? Were they telephone, face-to-face or self-administered (mail or online) interviews? Were the interviewers trained personnel or volunteers? Were they supervised or working on their own?
7. What were the actual questions that were asked? What kind of response choices did respondents have (open-ended responses or pre-set choices)? Experts know that when you ask questions about something potentially awkward or embarrassing, people over-report socially desirable behavior and under-report behavior that is considered antisocial. If the survey asks about topics such as sexuality or illegal behavior such as drug use, the results must be interpreted and reported with a great deal of caution.
8. What was the wording of questions? Were there biased, loaded, double-barreled, leading or ambiguous questions? Even individual words can influence results. Did the question ask about taxes or revenues? Welfare or assistance to the poor? Universal health insurance or managed health care?
9. When were the interviews conducted? The results of a survey are good only for the time at which the questions were asked; surveys have no predictive value and should not be used to forecast outcomes. For instance, a poll taken during a three-day window just before an election may show candidate A holding a wide margin of support over candidate B, but a last-minute revelation about candidate A can change the atmospherics around the election, influence the outcome, and render the poll data irrelevant. Outside events always hold the potential to affect the results of any survey; the findings are good for just one point in time.
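Question 5's concern about sampling error and sub-samples can be made concrete with a short sketch. Assuming a simple random sample and the standard textbook formula for the margin of error of a proportion (an assumption on our part; real polls often use weighting and more complex designs that widen the true error), the margin shrinks as sample size grows, and subgroups carved out of a survey carry much wider margins than the full sample:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate sampling error for a simple random sample.

    n: sample size
    p: assumed proportion (0.5 gives the widest, most conservative margin)
    z: z-score for the confidence level (1.96 corresponds to roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents at the 95% confidence level:
print(round(margin_of_error(1000) * 100, 1))  # about 3.1 percentage points

# A subgroup of only 150 of those respondents:
print(round(margin_of_error(150) * 100, 1))   # about 8.0 percentage points
```

This is why the checklist asks separately about error figures for sub-samples: a finding that is solid for the whole sample may be little better than noise for a small subgroup.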