12.1 Experimental design: What is it and when should it be used?

Learning Objectives

  • Define experiment
  • Identify the core features of true experimental designs
  • Describe the difference between an experimental group and a control group
  • Identify and describe the various types of true experimental designs

 

Experiments are an excellent data collection strategy for social workers wishing to observe the effects of a clinical intervention or social welfare program. Understanding what experiments are and how they are conducted is useful for all social scientists, whether they plan to use this methodology themselves or simply want to understand the findings of experimental studies. An experiment is a method of data collection designed to test hypotheses under controlled conditions. Students in my research methods classes often use the term experiment to describe all kinds of research projects, but in social scientific research, the term has a unique meaning and should not be used to describe all research methodologies.

 

[Image: cartoon of a stopwatch and a pencil marking a checkbox on a clipboard]

Experiments have a long and important history in social science. Behaviorists such as John Watson, B. F. Skinner, Ivan Pavlov, and Albert Bandura used experimental designs to demonstrate the various types of conditioning. Using strictly controlled environments, behaviorists were able to isolate a single stimulus as the cause of measurable differences in behavior or physiological responses. The foundations of social learning theory and behavior modification are found in experimental research projects. Moreover, behaviorist experiments moved psychology and social science away from the abstract world of Freudian analysis and towards empirical inquiry, grounded in real-world observations and objectively defined variables. Experiments are used at all levels of social work inquiry, including agency-based experiments that test therapeutic interventions and policy experiments that test new programs.

Several kinds of experimental designs exist. In general, designs that are true experiments contain three key features: independent and dependent variables, pretesting and posttesting, and experimental and control groups. In a true experiment, the effect of an intervention is tested by comparing two groups. One group is exposed to the intervention (the experimental group, also known as the treatment group) and the other is not exposed to the intervention (the control group).

In some cases, it may be unethical to withhold treatment from a control group within an experiment. If you recruited two groups of people with severe addiction and only provided treatment to one group, the other group would likely suffer. For these cases, researchers use a comparison group that receives “treatment as usual,” but experimenters must clearly define what this means. For example, standard substance abuse recovery treatment involves attending twelve-step programs like Alcoholics Anonymous or Narcotics Anonymous meetings. A substance abuse researcher conducting an experiment may use twelve-step programs in their comparison group and use their experimental intervention in the experimental group. The results would show whether the experimental intervention worked better than treatment as usual, which is useful information. However, using a comparison group is a deviation from true experimental design and is more associated with quasi-experimental designs.

Importantly, participants in a true experiment need to be randomly assigned to either the control or experimental groups. Random assignment uses a random process, like a random number generator, to assign participants into experimental and control groups. Random assignment is important in experimental research because it helps to ensure that the experimental group and control group are comparable, and that any pre-existing differences between the two groups are attributable to chance rather than to systematic bias. We will address more of the logic behind random assignment in the next section.
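For readers comfortable with a little code, the random assignment procedure can be sketched as a short Python function. This is only an illustration; the function name and participant IDs are hypothetical.

```python
import random

def randomly_assign(participants, seed=None):
    # Shuffle a copy of the participant list, then split it in half:
    # the first half becomes the experimental group, the second the control group.
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical example with six participant IDs
experimental, control = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"], seed=1)
```

Because the split is driven entirely by the shuffle, each participant has an equal chance of landing in either group, which is the defining feature of random assignment.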

In an experiment, the independent variable is the intervention being tested. In social work, this could include a therapeutic technique, a prevention program, or access to some service or support. Social science research may have a stimulus rather than an intervention as the independent variable, but this is less common in social work research. For example, a researcher may provoke a response by using an electric shock or a reading about death.

The dependent variable is usually the intended effect of the researcher’s intervention. If the researcher is testing a new therapy for individuals with binge eating disorder, their dependent variable may be the number of binge eating episodes a participant reports. The researcher likely expects their intervention to decrease the number of binge eating episodes reported by participants. Thus, they must measure the number of episodes that occurred before the intervention (the pretest) and after the intervention (the posttest).

Let’s put these concepts in chronological order to see how an experiment runs from start to finish. Once you’ve collected your sample, you’ll need to randomly assign your participants to the experimental group and control group. Then, you will give both groups your pretest, which measures your dependent variable, to see what your participants are like before you start your intervention. Next, you will provide your intervention, or independent variable, to your experimental group. Keep in mind that many interventions take a few weeks or months to complete, particularly therapeutic treatments. Finally, you will administer your posttest to both groups to observe any changes in your dependent variable. Together, this is known as the classic experimental design and is the simplest type of true experimental design. All of the designs we review in this section are variations on this approach. Figure 12.1 visually represents these steps.
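The chronological steps above can also be sketched in code. The following Python function is a simplified illustration, not a real study: participant IDs, scores, and the treatment function are all hypothetical, and the "measure" is just a number standing in for the dependent variable.

```python
import random

def run_classic_experiment(scores, treat, seed=0):
    # scores: dict mapping participant ID -> pretest measure of the dependent variable
    # treat:  the intervention (independent variable), applied to a score
    rng = random.Random(seed)
    ids = list(scores)
    rng.shuffle(ids)                         # step 1: random assignment
    mid = len(ids) // 2
    experimental, control = ids[:mid], ids[mid:]

    pretest = dict(scores)                   # step 2: pretest both groups
    for i in experimental:                   # step 3: intervene on the
        scores[i] = treat(scores[i])         #         experimental group only
    posttest = dict(scores)                  # step 4: posttest both groups
    return experimental, control, pretest, posttest

# Hypothetical example: weekly binge eating episodes; the treatment halves them
scores = {f"P{n}": 10 for n in range(1, 7)}
exp, ctrl, pre, post = run_classic_experiment(scores, lambda s: s // 2)
```

Comparing pretest and posttest scores across the two groups is what lets the researcher attribute any change in the dependent variable to the intervention.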

 

Figure 12.1 Steps in classic experimental design

An interesting example of experimental research can be found in Shannon K. McCoy and Brenda Major’s (2003) [1] study of people’s perceptions of prejudice. In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Then, participants in the experimental group were asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Clearly, their independent variables were not interventions or treatments for depression, but were stimuli designed to elicit changes in people’s depression levels. Upon measuring depression scores during the posttest period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their same racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.

In addition to classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, 2010; Campbell & Stanley, 1963). [2] The posttest-only control group design is almost the same as classic experimental design, except it does not use a pretest. Researchers who use posttest-only designs want to eliminate testing effects, in which a participant’s scores on a measure change because they have already been exposed to it. If you took multiple SAT or ACT practice exams before taking the final one whose scores were sent to colleges, you took advantage of testing effects to get a better score. Considering the previous example on racism and depression, participants who are given a pretest about depression before being exposed to the stimulus would likely assume that the study is designed to address depression. That knowledge can cause them to answer differently on the posttest than they otherwise would. Please do not assume that your participants are oblivious. More likely than not, your participants are actively trying to figure out what your study is about.

In theory, if the control and experimental groups have been randomly determined and are therefore comparable, then a pretest is not needed. However, most researchers prefer to use pretests so they may assess change over time within both the experimental and control groups. Researchers who want to account for testing effects and additionally gather pretest data can use a Solomon four-group design. In the Solomon four-group design, the researcher uses four groups. Two groups are treated as they would be in a classic experiment—pretest, experimental group intervention, and posttest. The other two groups do not receive the pretest, though one receives the intervention. All groups are given the posttest. Table 12.1 illustrates the features of each of the four groups in the Solomon four-group design. By having one set of experimental and control groups that complete the pretest (Groups 1 and 2) and another set that does not complete the pretest (Groups 3 and 4), researchers using the Solomon four-group design can account for testing effects in their analysis.

Table 12.1 Solomon four-group design
          Pretest   Stimulus   Posttest
Group 1      X         X          X
Group 2      X                    X
Group 3                X          X
Group 4                           X
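The same schedule can be written out as a small data structure, which makes it easy to verify which components each group receives. This is a hypothetical sketch; the group labels follow Table 12.1.

```python
# Each group's schedule in the Solomon four-group design (True = component given)
SOLOMON_DESIGN = {
    "Group 1": {"pretest": True,  "stimulus": True,  "posttest": True},  # experimental, pretested
    "Group 2": {"pretest": True,  "stimulus": False, "posttest": True},  # control, pretested
    "Group 3": {"pretest": False, "stimulus": True,  "posttest": True},  # experimental, no pretest
    "Group 4": {"pretest": False, "stimulus": False, "posttest": True},  # control, no pretest
}

# Every group is posttested; exactly half are pretested and half receive the stimulus
pretested = [g for g, plan in SOLOMON_DESIGN.items() if plan["pretest"]]
```

Comparing the posttest scores of Groups 1 and 3 (or 2 and 4), which differ only in whether a pretest was given, is what allows the researcher to isolate testing effects.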

Solomon four-group designs are challenging to implement because they are time-consuming and resource-intensive. Researchers must recruit enough participants to create four groups and implement interventions in two of them. Overall, true experimental designs are sometimes difficult to implement in a real-world practice environment. Additionally, it may be impossible to withhold treatment from a control group or to randomly assign participants in a study. In these cases, pre-experimental and quasi-experimental designs can be used; however, their lesser rigor relative to true experimental designs leaves their conclusions more open to critique.

 

Key Takeaways

  • True experimental designs require random assignment.
  • Experimental groups receive the intervention; control groups do not.
  • The basic components of a true experiment include a pretest, posttest, control group, and experimental group.
  • Testing effects may cause researchers to use variations on the classic experimental design.

 

Glossary

Classic experimental design – uses random assignment, an experimental group, a control group, pretesting, and posttesting

Comparison group – a group in quasi-experimental designs that receives “treatment as usual” instead of no treatment

Control group – the group in an experiment that does not receive the intervention

Experiment – a method of data collection designed to test hypotheses under controlled conditions

Experimental group – the group in an experiment that receives the intervention

Posttest – a measurement taken after the intervention

Posttest-only control group design – a type of experimental design that uses random assignment, an experimental group, a control group, and a posttest, but does not use a pretest

Pretest – a measurement taken prior to the intervention

Random assignment – using a random process to assign people into experimental and control groups

Solomon four-group design – uses random assignment, two experimental and two control groups, pretests for half of the groups, and posttests for all

Testing effects – when a participant’s scores on a measure change because they have already been exposed to it

True experiments – a group of experimental designs that contain independent and dependent variables, pretesting and posttesting, and experimental and control groups

 

Image attributions

exam scientific experiment by mohamed_hassan CC-0

 


  1. McCoy, S. K., & Major, B. (2003). Group identification moderates emotional response to perceived prejudice. Personality and Social Psychology Bulletin, 29, 1005–1017.
  2. Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth; Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.
