
In the fourth piece of this series on research study designs, we look at interventional studies (clinical trials). These studies differ from observational studies in that the investigator decides whether or not a participant will receive the exposure (or intervention). In this article, we describe the key features and types of interventional studies.

Keywords: Experimental design, randomized controlled trial, research design

In previous articles in this series, we introduced the concept of study designs[1] and have described in detail the observational study designs – descriptive[2] as well as analytical.[3] In this and another future piece, we will discuss the interventional study designs.

In observational studies, a researcher merely documents the presence of exposure(s) and outcome(s) as they occur, without trying to alter the course of natural events. By contrast, in interventional studies, the researcher actively interferes with nature – by performing an intervention in some or all study participants – to determine the effect of exposure to the intervention on the natural course of events. An example would be a study in which the investigator randomly assigns the participants to receive either aspirin or a placebo for a specific duration to determine whether the drug has an effect on the future risk of developing cerebrovascular events. In this example, aspirin (the “intervention”) is the “exposure,” and the risk of cerebrovascular events is the “outcome.” Interventional studies in humans are also commonly referred to as “trials.”

Interventional studies, by their very design, are prospective. This sometimes leads to confusion between interventional and prospective cohort study designs. For instance, the study design in the above example appears analogous to that of a prospective cohort study in which people attending a wellness clinic are asked whether they take aspirin regularly and then followed for a few years for occurrence of cerebrovascular events. The basic difference is that in the interventional study, it is the investigators who assign each person to take or not to take aspirin, whereas in the cohort study, this is determined by an extraneous factor.

Interventional studies can be divided broadly into two main types: (i) “controlled clinical trials” (or simply “clinical trials” or “trials”), in which individuals are assigned to one of two or more competing interventions, and (ii) “community trials” (or field trials), in which entire groups, e.g., villages, neighbourhoods, schools or districts, are assigned to different interventions.

The interventions can be quite varied; examples include administration of a drug or vaccine or dietary supplement, performance of a diagnostic or therapeutic procedure, and introduction of an educational tool. Depending on whether the intervention is aimed at preventing the occurrence of a disease (e.g., administration of a vaccine, boiling of water, distribution of condoms or of an educational pamphlet) or at providing relief to or curing patients with a disease (e.g., antiretroviral drugs in HIV-infected persons), a trial may also be referred to as “preventive trial” or “therapeutic trial”.

Several variations of interventional study designs with varying complexity are possible, and each of these is described below. Of these, the most commonly used and possibly the strongest design is a randomized controlled trial (RCT).

In an RCT, a group of participants fulfilling certain inclusion and exclusion criteria is "randomly" assigned to two separate groups, each receiving a different intervention. Random assignment implies that each participant has an equal chance of being allocated to either group.

The use of randomization is a major distinguishing feature and strength of this study design. A well-implemented randomization procedure is expected to result in two groups that are comparable overall, with respect to both measured and unmeasured factors. Thus, theoretically, the two groups differ only in the intervention received, and any difference in outcomes between them can be attributed to the effect of the intervention.
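
To make the idea concrete, the sketch below shows one way simple 1:1 randomization could be carried out; it is purely illustrative (the participant identifiers, function name, and fixed seed are hypothetical and not taken from any particular trial).

```python
import random

def randomize_two_arms(participant_ids, seed=2024):
    """Illustrative simple randomization: every participant has an
    equal chance of ending up in the intervention or the control arm."""
    rng = random.Random(seed)        # fixed seed only so the example is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)                 # random ordering of the eligible participants
    half = len(ids) // 2
    return {
        "intervention": ids[:half],  # first half of the shuffled list
        "control": ids[half:],       # remaining participants
    }

# Hypothetical example with eight participants identified only by study number
print(randomize_two_arms([f"P{i:02d}" for i in range(1, 9)]))
```

In practice, trials use more elaborate schemes (e.g., block or stratified randomization), but the underlying principle – allocation by chance alone – is the same.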

The term "controlled" refers to the presence of a concurrent control or comparator group. These studies have two or more groups – treatment and control. The control group receives either no intervention, an intervention that resembles the test intervention in some ways but lacks its activity (e.g., a placebo or sham procedure; such trials are also referred to as "placebo-controlled" or "sham-controlled"), or another active treatment (e.g., the current standard of care). The outcomes are then compared between the intervention and the comparator groups.

If an effort is made to ensure that other factors are similar across groups, then the availability of data from the comparator group allows a stronger inference about the effect of the intervention being tested than is possible in studies that lack a control group.

Some additional methodological features are often added to this study design to further improve the validity of a trial. These include allocation concealment, blinding, intention-to-treat analysis, measurement of compliance, minimizing the dropouts, and ensuring appropriate sample size. These will be discussed in the next piece.

In a nonrandomized (quasi-experimental) trial, participants are assigned to the different intervention arms without following a "random" procedure. For instance, the assignment may be based on the investigator's convenience or on whether the participant can afford a particular drug. Although such a design can suggest a possible relationship between the intervention and the outcome, it is susceptible to bias – the patients in the two groups may be dissimilar – and hence the validity of the results obtained is low.

When a new intervention, e.g., a new drug, becomes available, it is possible for a researcher to assign a group of persons to receive it and to compare their outcomes with those of a similar group of persons followed up in the past without this treatment ("historical controls"). This design carries a high risk of bias, e.g., through differences in the severity of disease or other factors between the two groups, or through improvement over time in the available supportive care.

In a before-after (pre-post) study, a variable of interest is measured in the same participants before and after an intervention. Examples include measurement of glycated hemoglobin in a group of persons before and after administration of a new drug (given in a particular dose schedule, with the measurement taken at a defined time in relation to it), or the number of traffic accident deaths in a city before and after implementation of a policy of mandatory helmet use for two-wheeler drivers.

Such studies have a single arm and lack a comparator arm. The only basis for drawing a conclusion from these studies is the temporal relationship of the measurements to the intervention. However, the observed change may instead be related to other events that occurred around the same time as the intervention, e.g., a change in diet or the implementation of alcohol use restrictions, respectively, in the above examples. The change may also represent natural variation (e.g., diurnal or seasonal) in the variable of interest or a change in the instrument used to measure it. Thus, the outcomes observed in such studies cannot be reliably attributed to the specific intervention, making this a weaker design than an RCT.

Some believe that the before-after design is comparable to observational design and that only studies with a “comparator” group, as discussed above, are truly interventional studies.

If two (or more) interventions are available for a particular disease condition, the relevant question is not only whether each intervention is efficacious on its own but also whether a combination of the two is more efficacious than either alone. A factorial design addresses such questions.

The simplest factorial design is the 2 × 2 factorial design. Consider two interventions, A and B. Participants are randomly allocated to one of four combinations of these interventions – A alone, B alone, both A and B, or neither A nor B (control). This design allows (i) comparison of each intervention with the control group, (ii) comparison of the two interventions with each other, and (iii) investigation of possible interaction between the two treatments (i.e., whether the effect of the combination differs from the sum of the effects of A and B given separately). As an example, in a recent study, infants in South India receiving a rotavirus vaccine were randomly assigned to receive both a zinc supplement and a probiotic, only the probiotic (with zinc placebo), only the zinc supplement (with probiotic placebo), or neither (probiotic placebo and zinc placebo).[4] Neither zinc nor probiotic alone led to any change in the immunogenicity of the vaccine, but the group receiving the zinc-probiotic combination showed a modest improvement.

This design allows the study of two interventions in the same trial without an undue increase in the required number of participants, as well as the study of the interaction between the two treatments.
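
The interaction examined in point (iii) above can be written as a simple worked relation; this generic formulation is given only for illustration and is not taken from the cited trial.

```latex
% Interaction in a 2 x 2 factorial design (illustrative notation):
% mu_0, mu_A, mu_B, mu_AB = mean outcome in the control, A-only,
% B-only, and combination arms, respectively.
\[
(\mu_{AB} - \mu_{0}) \;=\; (\mu_{A} - \mu_{0}) + (\mu_{B} - \mu_{0}) + \gamma
\]
% gamma = 0  : the two interventions act additively (no interaction)
% gamma != 0 : the effect of the combination differs from the sum of the
%              individual effects, i.e., the treatments interact.
```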

This is a special type of interventional study design in which study participants intentionally "cross over" to the other intervention arm. Each participant first receives one intervention (usually assigned by random allocation, as described above). At the end of this first intervention, each participant is switched over to the other intervention. Most often, the two interventions are separated by a washout period, intended to eliminate the effect of the first intervention and to allow each participant to return to the baseline state. For example, in a recent study, obese participants underwent two 5-day inpatient stays separated by a 1-month washout period; during each stay, they consumed either a smoothie containing 48 g of walnuts or a macronutrient-matched placebo smoothie without nuts, and underwent measurement of several blood analytes, hemodynamics, and gut microbiota.[5]

This design has the advantages of (i) each participant serving as his/her own control, thereby reducing the effect of interindividual variability, and (ii) needing fewer participants than a parallel-arm RCT. However, this design can be used only for disease conditions which are stable and cannot be cured, and where interventions provide only transient relief. For instance, this design would be highly useful for comparing the effect of two anti-inflammatory drugs on symptoms in patients with long-standing rheumatoid arthritis.
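
As a purely illustrative sketch (the sequence labels and participant identifiers below are hypothetical, not from the cited walnut study), the allocation step of a two-period crossover trial might look as follows: each participant is randomized to the order in which the two interventions are received, with a washout period separating the two treatment periods.

```python
import random

# Two possible sequences in a two-period crossover design:
# receive intervention A first then B, or B first then A.
SEQUENCES = [("A", "B"), ("B", "A")]

def assign_crossover(participant_ids, seed=7):
    """Illustrative randomization of participants to a treatment sequence."""
    rng = random.Random(seed)
    schedule = {}
    for pid in participant_ids:
        first, second = rng.choice(SEQUENCES)
        schedule[pid] = [
            ("period 1", first),
            ("washout", "no intervention"),  # time to return to the baseline state
            ("period 2", second),
        ]
    return schedule

for pid, plan in assign_crossover(["P01", "P02", "P03", "P04"]).items():
    print(pid, plan)
```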

Sometimes, an intervention cannot be easily administered to individuals but can be applied to groups. In such cases, a cluster-randomized trial can be conducted by assigning "clusters" – logical groupings of participants – to receive or not receive the intervention.

As an example, a study in Greece looked at the effect of providing meals in schools on household food security.[6] The 51 schools in this study were randomly allocated to provide, or not provide, a healthy daily meal to students; schools in both groups provided an educational intervention.

However, such studies need a somewhat larger sample size than individually randomized studies, as well as the use of special statistical tools for data analysis.
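
The increase in sample size is commonly expressed through the "design effect" – a standard result from the cluster-trial literature, quoted here only as a pointer; the symbols below are generic and not taken from the cited study.

```latex
% Design effect (variance inflation) for a cluster-randomized trial:
%   m   = average number of participants per cluster
%   rho = intracluster correlation coefficient (ICC)
\[
\mathrm{DE} = 1 + (m - 1)\,\rho
\]
% The sample size of an equivalent individually randomized trial is
% multiplied by DE; even a small rho inflates the requirement
% substantially when clusters are large.
```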

There are no conflicts of interest.

1. Ranganathan P, Aggarwal R. Study designs: Part 1 – An overview and classification. Perspect Clin Res. 2018;9:184–6.

2. Aggarwal R, Ranganathan P. Study designs: Part 2 – Descriptive studies. Perspect Clin Res. 2019;10:34–6.

3. Ranganathan P, Aggarwal R. Study designs: Part 3 – Analytical observational studies. Perspect Clin Res. 2019;10:91–4.

4. Lazarus RP, John J, Shanmugasundaram E, Rajan AK, Thiagarajan S, Giri S, et al. The effect of probiotics and zinc supplementation on the immune response to oral rotavirus vaccine: A randomized, factorial design, placebo-controlled study among Indian infants. Vaccine. 2018;36:273–9.

5. Tuccinardi D, Farr OM, Upadhyay J, Oussaada SM, Klapa MI, Candela M, et al. Mechanisms underlying the cardiometabolic protective effect of walnut consumption in obese subjects: A cross-over, randomized, double-blinded, controlled inpatient physiology study. Diabetes Obes Metab. 2019 [In press]. doi: 10.1111/dom.13773.

6. Dalma A, Petralias A, Tsiampalis T, Nikolakopoulos S, Veloudaki A, Kastorini CM, et al. Effectiveness of a school food aid programme in improving household food insecurity: A cluster randomized trial. Eur J Public Health. 2019; pii: ckz091.