- Identify three questions you should ask about samples when reading research results
- Describe how bias impacts sampling
We read and hear about research results so often that we might overlook questioning where the research participants came from and how they were identified for inclusion. It is easy to focus solely on findings when we are busy and when the most interesting information is in a study’s conclusions rather than its procedures. Now that you have some familiarity with the variety of procedures for selecting study participants, you are equipped to ask some very important questions about the findings you read, and you are ready to be a more responsible consumer of research.
Who sampled, how, and for what purpose?
Have you ever been a participant in someone’s research? If you have ever taken an introductory psychology or sociology class at a large university, that’s probably a silly question to ask. Social science researchers on college campuses have access to a bunch of (presumably) willing and able human guinea pigs, but that luxury comes at the cost of sample representativeness. One study of top academic journals in psychology found that over two-thirds (68%) of the participants in studies published by those journals came from samples drawn in the United States (Arnett, 2008). Further, the study found that two-thirds of the work derived from US samples published in the Journal of Personality and Social Psychology was based on samples made up entirely of American undergraduates taking psychology courses.
These findings certainly raise the question: What do we actually learn from social scientific studies, and about whom do we learn it? That is exactly the concern raised by Joseph Henrich and colleagues (Henrich, Heine, & Norenzayan, 2010), authors of the article “The Weirdest People in the World?” In their piece, Henrich and colleagues point out that behavioral scientists commonly make sweeping claims about human nature based on samples drawn only from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies. These claims are often based on even narrower samples, as is the case with many studies relying on samples drawn from college classrooms. As it turns out, many robust findings about human behaviors such as fairness, cooperation, visual perception, and trust are based on studies that excluded participants from outside the United States and sometimes excluded anyone outside the college classroom (Begley, 2010). This certainly raises questions about what we know about human behavior in general, as opposed to the behavior of US residents or US undergraduates. Of course, not all research findings are based on samples of WEIRD folks like college students, but we should always pay attention to the population a study’s sample is drawn from and the claims the study makes about populations.
In the preceding discussion, the concern is with researchers making claims about populations other than those from which their samples were drawn. A related, but slightly different, potential concern is sampling bias. Bias in sampling occurs when the elements selected for inclusion in a study do not represent the larger population from which they were drawn. For example, if you were to sample people walking into the social work building on campus during each weekday, your sample would include too many social work majors and not enough non-social work majors. Furthermore, you would completely exclude graduate students whose classes are at night. Bias may be introduced by the sampling method used or due to conscious or unconscious bias introduced by the researcher (Rubin & Babbie, 2017).  A researcher might select people who “look like good research participants,” and thereby transfer their unconscious biases to their sample.
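The distortion introduced by a biased selection mechanism can be illustrated with a short simulation. This is a hypothetical sketch: the group sizes, recruitment rates, and opinion scores below are invented for illustration, not drawn from any real study.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 students, 5% of whom are social work
# majors. Suppose majors rate "support for expanding social services"
# higher on average (scores are invented for illustration).
population = (
    [{"major": "social work", "support": random.gauss(8, 1)} for _ in range(500)]
    + [{"major": "other", "support": random.gauss(5, 1)} for _ in range(9500)]
)

def mean_support(sample):
    """Average support score for a list of respondents."""
    return sum(p["support"] for p in sample) / len(sample)

# Simple random sample: every student has an equal chance of selection.
random_sample = random.sample(population, 200)

# Biased sample: recruiting outside the social work building
# over-selects majors (here, 70% of the 200 people recruited).
majors = [p for p in population if p["major"] == "social work"]
others = [p for p in population if p["major"] == "other"]
biased_sample = random.sample(majors, 140) + random.sample(others, 60)

print(f"True population mean: {mean_support(population):.2f}")
print(f"Random sample mean:   {mean_support(random_sample):.2f}")
print(f"Biased sample mean:   {mean_support(biased_sample):.2f}")
```

Because the biased sample over-represents a group whose scores differ systematically from the rest of the population, its estimate lands well above the true population mean, while the random sample stays close to it.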
A sample may be representative in all respects that a researcher thinks are relevant, but there may be relevant aspects that didn’t occur to the researcher when they were drawing their sample. For example, you might not think that a person’s phone would have much to do with their voting preferences. However, had the pollsters making predictions about the 2008 presidential election results not been careful to include both cell phone-only and landline households in their surveys, it is possible that their predictions would have underestimated Barack Obama’s lead over John McCain, because Obama was much more popular among cell-only users than McCain (Keeter, Dimock, & Christian, 2008).
So how do we know when we can count on results that are being reported to us? While there might not be any magic or always-true rules we can apply, there are a couple of things we can keep in mind as we read the claims researchers make about their findings.
First, remember that sample quality is determined only by the sample actually obtained, not by the sampling method itself. A researcher may set out to administer a survey to a representative sample by correctly employing a random selection technique, but if they only receive a handful of responses, then they will have to be very careful about the claims they can make about their survey findings.
Another thing to keep in mind is that researchers may want to talk about implications of their findings as though they apply to some group other than the population that was sampled. Though this tendency is usually quite innocent, it is very tempting to talk about findings this way. As consumers of those findings, it is our responsibility to be attentive to this sort of (likely unintentional) bait and switch.
Finally, remember that samples that allow for comparisons of theoretically important concepts or variables are better than samples that do not allow for such comparisons. For example, studies that utilize nonrepresentative samples can compare relevant aspects of our social processes and help us learn about the strengths of our social theories. If you’ll recall from Chapter 7, this is known as theory-testing.
At their core, questions about sample quality should address who has been sampled, how they were sampled, and for what purpose they were sampled. Being able to answer those questions will help you better understand, and more responsibly read, research results.
- Sometimes researchers may make claims about populations other than those from whom their samples were drawn; other times they may make claims about a population based on a sample that is not representative. As consumers of research, we should be attentive to both possibilities.
- A researcher’s findings do not have to be generalizable to be valuable. Samples that allow for comparisons of theoretically important concepts or variables may yield findings that contribute to our social theories and our understandings of social processes.
Bias – in sampling, when the elements selected for inclusion in a study do not represent the larger population from which they were drawn, due to the sampling method or the thought processes of the researcher
- Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63, 602–614. ↵
- Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–135. ↵
- Newsweek magazine published an interesting story about Henrich and his colleagues’ study: Begley, S. (2010). What’s really human? The trouble with student guinea pigs. Retrieved from http://www.newsweek.com/2010/07/23/what-s-really- human.html ↵
- Rubin, A., & Babbie, E. R. (2017). Research methods for social work (9th edition). Boston, MA: Cengage. ↵
- Keeter, S., Dimock, M., & Christian, L. (2008). Calling cell phones in ’08 pre-election polls. The Pew Research Center for the People and the Press. Retrieved from http://people-press.org/files/legacy-pdf/cell-phone-commentary.pdf ↵