11.3 Types of surveys

Learning Objectives

  • Define cross-sectional surveys, provide an example of a cross-sectional survey, and outline some of the drawbacks of cross-sectional research
  • Describe the three types of longitudinal surveys
  • Describe retrospective surveys and identify their strengths and weaknesses
  • Discuss the benefits and drawbacks of the various methods of administering surveys

 

There is immense variety within the realm of survey research methods. This variety comes both in terms of time—when or with what frequency a survey is administered—and in terms of administration—how a survey is delivered to respondents. In this section, we’ll look at what types of surveys exist when it comes to both time and administration.

Time

In terms of time, there are two main types of surveys: cross-sectional and longitudinal. Cross-sectional surveys are administered only one time. They provide researchers with a snapshot in time and an idea of how things are for respondents at the specific time the survey is administered.

An example of a cross-sectional survey comes from Aniko Kezdy and colleagues’ study (Kezdy, Martos, Boland, & Horvath-Szabo, 2011) [1] of the association between religious attitudes, religious beliefs, and mental health among students in Hungary. These researchers administered a single, one-time-only, cross-sectional survey to a convenience sample of 403 high school and college students. The survey focused on how religious attitudes impact various aspects of respondents’ lives and health. From the analysis of their cross-sectional data, the researchers found that anxiety and depression were highest among those who had both strong religious beliefs and some doubts about religion.

Another recent example of cross-sectional survey research can be seen in Bateman and colleagues’ study (Bateman, Pike, & Butler, 2011) [2] of how the perceived publicness of social networking sites influences self-disclosure among users. These researchers administered an online survey to undergraduate and graduate business students. They found that even though revealing information about oneself is viewed as key to realizing many of the benefits of social networking sites, respondents were less willing to disclose information about themselves as their perceptions of a social networking site’s publicness rose. That is, there was a negative relationship between the perceived publicness of a social networking site and plans to self-disclose on the site.

 

[Image: a cartoon of a person thinking with a crowd standing behind them]

Cross-sectional surveys can be problematic because the events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain static. Because many of these phenomena change over time, generalizing from a cross-sectional survey can be tricky. Perhaps you can say something about the way things were at the moment you administered your survey, but it is difficult to know whether things remained that way for long afterward. Think, for example, about how Americans might have responded to a survey asking for their opinions on terrorism on September 10, 2001. Now imagine how responses to the same set of questions might differ on September 12, 2001. This is not to undermine the many important uses of cross-sectional survey research; however, researchers must be mindful that they have captured a snapshot of life as it was at the time the cross-sectional survey was administered.

One way to overcome this sometimes-problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys enable a researcher to make observations over some extended period of time. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We’ll discuss all three types here, along with retrospective surveys, which fall somewhere in between cross-sectional and longitudinal surveys.

The first type of longitudinal survey is called a trend survey. The main focus of a trend survey is, perhaps not surprisingly, trends. Researchers conducting trend surveys are interested in how people in a specific group change over time. Each time researchers gather data, they survey different people from the identified group because they are interested in the trends of the whole group, rather than changes in specific individuals. Let’s look at an example.

The Monitoring the Future Study (http://www.monitoringthefuture.org/) is a trend study that describes the substance use of high school students in the United States. It’s conducted annually by the National Institute on Drug Abuse (NIDA). Each year, NIDA distributes surveys to students in high schools around the country to understand how substance use and abuse in that population changes over time. Perhaps surprisingly, fewer high school students reported using alcohol in the past month than at any point over the previous 20 years. Recent data also reflected increased use of e-cigarettes and the popularity of e-cigarettes without nicotine over those with nicotine. These data provide insight for targeting substance abuse prevention programs toward the current issues facing the high school population.

Unlike trend surveys, panel surveys require that the same people participate in the survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year for 5 years in a row. Keeping track of where respondents live, when they move, and when they die takes resources that researchers often don’t have. When researchers do have the resources to carry out a panel survey, however, the results can be quite powerful. The Youth Development Study (YDS), administered by researchers at the University of Minnesota, offers an excellent example of a panel study. You can read more about the Youth Development Study at its website: https://cla.umn.edu/sociology/graduate/collaboration-opportunities/youth-development-study.

Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003). [3] Contrary to popular beliefs about the impact of work on adolescents’ school performance and transition to adulthood, work increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people.

Another type of longitudinal survey is a cohort survey. In a cohort survey, the participants have a defining characteristic that the researcher is interested in studying. The same people don’t necessarily participate from year to year, but all participants must meet whatever categorical criteria fulfill the researcher’s primary interest. Common cohorts that researchers study include people of particular generations or people born around the same time period, graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific historical experience in common.

An example of this sort of research can be seen in Christine Percheski’s work (2008) [4] on cohort differences in women’s employment. Percheski compared women’s employment rates across seven different generational cohorts, from Progressives born between 1906 and 1915 to Generation Xers born between 1966 and 1975. Among other patterns, she found that professional women’s labor force participation had increased across all cohorts. She also found that professional women with young children from Generation X had higher labor force participation rates than similar women from previous generations, concluding that mothers do not appear to be opting out of the workforce as some journalists have speculated (Belkin, 2003). [5]

All three types of longitudinal surveys share the strength that they permit a researcher to make observations over time. Whether a major world event takes place or participants simply age, researchers can capture any subsequent changes in the phenomenon or behavior of interest. Table 11.1 summarizes these three types of longitudinal surveys, and a short code sketch after the table illustrates how their sampling strategies differ.

Table 11.1 Types of longitudinal surveys
Sample type | Description
Trend | Researcher examines changes in trends over time; the same people do not necessarily participate in the survey more than once.
Panel | Researcher surveys the exact same sample several times over a period of time.
Cohort | Researcher identifies a defining characteristic and then regularly surveys people who have that characteristic.

Finally, retrospective surveys are similar to other longitudinal studies in that they deal with changes over time, but like a cross-sectional study, they are administered only once. In a retrospective survey, participants are asked to report events from the past. By having respondents report past behaviors, beliefs, or experiences, researchers are able to gather longitudinal-like data without actually incurring the time or expense of a longitudinal survey. Of course, this benefit must be weighed against the possibility that people’s recollections of their pasts may be faulty. Imagine that you are participating in a survey that asks you to respond to questions about last Valentine’s Day. As last Valentine’s Day can’t have been more than 12 months ago, there is a good chance you can provide an accurate response. Now let’s imagine that the researcher wants to know how last Valentine’s Day compares to previous Valentine’s Days, so the survey asks you to report on the preceding six Valentine’s Days. How likely is it that you will remember? Will your responses be as accurate as they might have been if you had been asked each year over the past six years, rather than asked to report on all of those years today?

In sum, when or with what frequency a survey is administered determines whether your survey is cross-sectional or longitudinal. While longitudinal surveys are certainly preferable because they can track changes over time, the time and cost required to administer a longitudinal survey can be prohibitive. As you may have guessed, the issues of time described here are not unique to survey research. Other methods of data collection can be cross-sectional or longitudinal depending on the research design, but we’ve placed our discussion of these terms here because they are most often used by survey researchers to describe the type of survey administered. Next, we will examine how surveys are administered.

Administration

Surveys vary not only in terms of when they are administered but also in terms of how they are administered. One common way to administer surveys is in the form of self-administered questionnaires. This means that a research participant is given a set of questions, in writing, to which they are asked to respond. Self-administered questionnaires can be delivered in hard copy format via mail or electronically online. We’ll consider both modes of delivery here.

Hard copy self-administered questionnaires may be delivered to participants in person or by postal mail. It is common for researchers to administer surveys in large social science classes, so perhaps you have taken a survey that was given to you in person on campus. If you are ever asked to complete a survey in a similar setting, it might be interesting to note how your perspective on it is shaped by what you learn about survey research methods in this chapter.

Researchers may also deliver surveys in person by going door-to-door. They may ask people to fill them out right away or arrange to return and pick up completed surveys. Though the advent of online survey tools has made door-to-door surveys less common, I still see an occasional survey researcher at my door, especially around election time. This mode of gathering data is apparently still used by political campaign workers, at least in some areas of the country.

If you are unable to personally visit each member of your sample to deliver a survey, you might consider sending your survey through the mail. Though this mode of delivery is not ideal, sometimes it is the only available or the most practical option. It can be difficult to convince people to take the time to complete and return a mailed survey; imagine how much less likely you’d be to return a survey when there is no researcher waiting at your door to take it from you.

Survey researchers who deliver their surveys via postal mail often provide some advance notice to respondents about the survey to get people thinking and preparing to complete it. They may also follow up with their sample a few weeks after their survey has been sent out. This can be done not only to remind those who have not yet completed the survey to please do so but also to thank those who have already returned the survey. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys’ return rates (Babbie, 2010). [6] Other helpful ways to increase response rates include creating an attractive and professional survey, offering monetary incentives, and providing a pre-addressed, stamped return envelope.

 

[Image: a laptop with an arm holding a magnifying glass coming out of it]

Earlier, I mentioned online delivery as another way to administer a survey. This delivery mechanism is becoming increasingly common because it is easy to use, relatively cheap, and may be quicker than knocking on doors or waiting for mailed surveys to be returned. To deliver a survey online, a researcher may subscribe to a service that offers online delivery or use some delivery mechanism that is available for free. SurveyMonkey offers both free and paid online survey services (https://www.surveymonkey.com). In addition to the advantages of online administration itself, SurveyMonkey and similar services can export your results in formats readable by data analysis programs such as SPSS. This saves you (the researcher) the step of manually entering data into your analysis program, as you would if you administered your survey in hard copy format.
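
To see why that export step matters, here is a minimal sketch in Python using the pandas library of how a downloaded survey file can be loaded directly into an analysis environment. The file name survey_export.csv and the column names age and q1 are hypothetical stand-ins for whatever your survey tool actually produces.

# A minimal sketch of loading an exported survey file for analysis.
# "survey_export.csv", "age", and "q1" are hypothetical names; substitute
# whatever your survey service actually exports.
import pandas as pd

responses = pd.read_csv("survey_export.csv")  # one row per respondent

print(responses.shape)                 # number of respondents and items
print(responses["age"].describe())     # summary statistics for a numeric item
print(responses["q1"].value_counts())  # frequency of each answer option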

Many of the suggestions provided earlier to improve the response rate of hard copy questionnaires also apply to online questionnaires. While the incentives that can be provided online differ from those that can be given in person or by mail, online survey researchers can still offer completion incentives to their respondents. Over the years, I’ve taken numerous online surveys. Often, they did not come with any incentive other than the joy of knowing that I’d helped a fellow social scientist do their job. However, some surveys have their perks. One survey offered a coupon code for $30 off any order at a major online retailer, and another offered entry into a lottery with other study participants to win a larger gift, such as a $50 gift card or an iPad.

Unfortunately, online surveys may not be accessible to individuals with limited, unreliable, or no access to the internet or individuals with limited computer skills. If those issues are common in your target population, online surveys may not work as well for your research study. While online surveys may be faster and cheaper than mailed surveys, mailed surveys are more likely to reach your entire sample but also more likely to be lost and not returned. The best choice of delivery mechanism depends on numerous factors, including your resources, the resources of your study participants, and the time you have available to distribute surveys and wait for responses. Understanding the characteristics of your study’s population is key to identifying the appropriate mechanism for delivering your survey.

Sometimes surveys are administered by having a researcher verbally pose questions to respondents rather than having respondents read the questions on their own. Researchers using phone or in-person surveys rely on an interview schedule, which contains the list of questions and answer options that the researcher will read to respondents. Presenting both the questions and the answer options consistently is very important with an interview schedule. By presenting each question and answer option in exactly the same manner to each interviewee, the researcher minimizes the potential for the ‘interviewer effect,’ which encompasses any changes in interviewee responses based on how or when the researcher presents question-and-answer options. Additionally, in-person surveys may be recorded, and researchers can typically take notes without distracting the interviewee because of the closed-ended nature of survey questions.

 

[Image: two girls in a field holding phones and laughing]

Surveys administered with an interview schedule, also known as quantitative interviews, include both phone surveys and in-person surveys. Phone surveys are often conducted by political polling firms to understand how the electorate feels about certain candidates or policies. In both cases, researchers verbally pose questions to participants. As someone who has poor research karma, I often decline to participate in phone studies when I am called. It is easy and even socially acceptable to abruptly hang up on an unwanted caller. Additionally, a distracted participant who is cooking dinner, tending to troublesome children, or driving may not provide accurate answers to your questions. Phone surveys make it difficult to control the environment in which a person answers your survey. Another challenge comes from the increasing number of people who only have cell phones and do not use landlines (Pew Research, n.d.). [7] Unlike landlines, cell phone numbers are portable across carriers, are associated with individuals rather than households, and keep the same area code when people move to a new geographical area. Computer-assisted telephone interviewing (CATI) software has also been developed to assist quantitative survey researchers. It allows the interviewer to enter responses directly into a computer as they are provided, saving hours of time that would otherwise be spent entering data into an analysis program by hand.
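
At its core, a CATI program does two things: it presents every question and its answer options in a fixed order, and it writes each response straight into a data file as it is given. The sketch below, in Python with a hypothetical two-item interview schedule, illustrates that core idea only; real CATI packages add features such as call scheduling, skip logic, and response validation.

# A minimal sketch of the core of computer-assisted telephone interviewing.
# The two-item interview schedule below is hypothetical and purely illustrative.
import csv

schedule = [
    ("In the past month, how often did you exercise?",
     ["Never", "1-3 times", "4-10 times", "More than 10 times"]),
    ("How would you rate your overall health?",
     ["Poor", "Fair", "Good", "Excellent"]),
]

def conduct_interview():
    """Present each question and its answer options in a fixed order,
    recording the interviewee's chosen option number for each item."""
    answers = []
    for question, options in schedule:
        print(question)
        for number, option in enumerate(options, start=1):
            print(f"  {number}. {option}")
        answers.append(input("Enter option number: "))
    return answers

# Responses go directly into a data file as they are provided, which is
# what eliminates the separate hand-entry step mentioned above.
with open("responses.csv", "a", newline="") as f:
    csv.writer(f).writerow(conduct_interview())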

Quantitative interviews must also be administered in such a way that the researcher asks the same question the same way each time. While questions on hard copy questionnaires may create an impression based on the way they are presented, having a person administer questions introduces a slew of additional variables that might influence a respondent. Even a slight shift in emphasis on a word may bias the respondent to answer differently. As I’ve mentioned earlier, consistency is key with quantitative data collection—and human beings are not necessarily known for their consistency. Quantitative interviews can also help reduce a respondent’s confusion. If a respondent is unsure about the meaning of a question or answer option on a self-administered questionnaire, they probably won’t have the opportunity to get clarification from the researcher. An interview, on the other hand, gives the researcher an opportunity to clarify or explain any items that may be confusing. If a participant asks for clarification, the researcher must use pre-determined responses to make sure each quantitative interview is exactly the same as the others.

In-person surveys are conducted in much the same way as phone surveys, but the researcher must also account for non-verbal expressions and behaviors. One noteworthy benefit of in-person surveys is that they are more difficult to say “no” to because the participant is already sitting across from the researcher. Participants are less likely to decline an in-person survey than to “delete” an emailed online survey or “hang up” during a phone survey. However, in-person surveys are also much more time consuming and expensive than mailed questionnaires. Thus, quantitative researchers may opt for self-administered questionnaires over in-person surveys on the grounds that they can reach a large sample at a much lower cost than if they interacted personally with each and every respondent.

 

Key Takeaways

  • Time is a factor in determining the type of survey that a researcher administers. Cross-sectional surveys are administered at one point in time, and longitudinal surveys are administered over time.
  • Retrospective surveys offer some of the benefits of longitudinal research but also come with their own drawbacks.
  • Self-administered questionnaires come in both hard-copy form and digital form. Participants may receive a hard-copy questionnaire in-person or via postal mail, or they may receive a digital questionnaire online.
  • Interview schedules can be used for in-person surveys or phone surveys.
  • Each method of survey administration comes with benefits and drawbacks.

 

Glossary

Cohort survey– describes how people with a defining characteristic change over time

Cross-sectional surveys– surveys that are administered at just one point in time

Interview schedules– the list of questions and answer options that a researcher reads verbally to respondents in a phone or in-person survey

Longitudinal surveys– surveys in which a researcher plans to make observations over an extended time

Panel survey– describes how people in a specific group change over time; the researcher surveys the same people each time the survey is administered

Retrospective surveys– describe changes over time but are administered only once

Self-administered questionnaires– a research participant is given a set of questions, in writing, to which they are asked to respond

Trend survey– describes how people in a specific group change over time; the researcher surveys different people from the identified group each time the survey is administered

 

Image attributions

company social networks by Hurca CC-0

posts submit searching by mohamed_hassan CC-0

talk telephone by MelanieSchwolert CC-0

 


  1. Kezdy, A., Martos, T., Boland, V., & Horvath-Szabo, K. (2011). Religious doubts and mental health in adolescence and young adulthood: The association with religious attitudes. Journal of Adolescence, 34, 39–47.
  2. Bateman, P. J., Pike, J. C., & Butler, B. S. (2011). To disclose or not: Publicness in social networking sites. Information Technology & People, 24, 78–100.
  3. Mortimer, J. T. (2003). Working and growing up in America. Cambridge, MA: Harvard University Press.
  4. Percheski, C. (2008). Opting out? Cohort differences in professional women’s employment rates from 1960 to 2005. American Sociological Review, 73, 497–517.
  5. Belkin, L. (2003, October 26). The opt-out revolution. New York Times, pp. 42–47, 58, 85–86.
  6. Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth.
  7. Pew Research. (n.d.). Sampling. Retrieved from http://www.pewresearch.org/methodology/u-s-survey-research/sampling/

License


Scientific Inquiry in Social Work Copyright © 2018 by Matthew DeCarlo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
