
The functional analysis procedures described in the seminal Iwata et al. (1982/1994) study are prominent in the applied behavior analytic literature, having been replicated hundreds of times over the past 30 years (Beavers, Iwata, & Lerman, 2013; Hanley, Iwata, & McCord, 2003). However, the extent to which particular components of this functional analysis model have become more or less prominent over time is not clear from these literature reviews. We therefore conducted a review of the functional analysis literature published between 1965 and 2016 to determine trends in the usage of particular components over time and to determine whether the published literature reflects a standardization of the manner in which functional analyses of problem behavior are conducted. Furthermore, we discuss whether this standardization of a functional analysis model is currently necessary.

Keywords: Functional analysis, Problem behavior, Procedural components, Standard

Despite being preceded by various functional analyses of problem behavior (e.g., Carr, Newsom, & Binkoff, 1976; Lovaas & Simmons, 1969; Lovaas, Freitag, Gold, & Kassorla, 1965; Pinkston, Reese, LeBlanc, & Baer, 1973; Sailor, Guess, Rutherford, & Baer, 1968; Thomas, Becker, & Armstrong, 1968), the procedures used by Iwata et al. (1982/1994a) have become nearly synonymous with the term functional analysis (e.g., Cooper, Heron, & Heward, 2007; Fisher, Piazza, & Roane, 2011). Iwata et al. distinguished among three general categories of reinforcement likely to be maintaining the self-injurious behavior (SIB) of 9 participants admitted to an inpatient unit: social positive reinforcement in the form of statements of concern and disapproval, social negative reinforcement in the form of escape from academic work, and automatic reinforcement when the behavior persisted in the absence of socially mediated consequences. The effect of each putative reinforcer was evaluated independently in isolated test conditions (attention, demand, and alone, respectively), and the rate of SIB across those test conditions was compared to an omnibus control condition simulating a play context with leisure items. These procedures were conducted uniformly across all 9 participants, creating a technology that was easily replicable and that would later be applied to the assessment of a wide range of problem behavior other than SIB.

Standardization refers to the process by which a set of evidence-based procedures or practices is disseminated and adopted by others within the field (e.g., applied researchers, clinicians). The standardization of behavioral procedures, such as functional assessment, has been deemed by many as a necessary goal for the field of behavior analysis (Austin & Carr, 2000; Bergan & Kratochwill, 1990). However, it seems the reliance on single-subject research within the field has led to an overly broad scope of our behavioral technology (Smith, 2013). In other words, a board certified behavior analyst (BCBA) working with a child with autism is not guided by a designated set of widely accepted practices, which means similar cases may be treated differently depending on the BCBA and their training. Standardizing specific behavioral assessments or interventions can reduce that broad scope to a set of clear and easily consumable treatment packages.

This is not to say that standardization is without limitations. For example, deciding which procedures to standardize could be difficult because much of the available empirical evidence in the literature may be subject to publication bias and thus may not represent the outcomes to be expected in practice. Furthermore, overly standardizing procedures could result in a “cookbook” approach in which clinicians learn how to follow a treatment manual but are unable to problem-solve when they encounter unique cases or when difficulties arise with the manualized process. Thus, the goal of standardization is not to prohibit clinical autonomy but to create a technological program of evidence-based procedures of practical value (Azrin, 1977).

Although Iwata et al. (1982/1994a) is the most common functional analysis format cited in the published literature (Beavers et al., 2013; Hanley, Iwata, & McCord, 2003), it is difficult to determine which of the original component procedures of the analysis are being replicated, and therefore standardized. Jessel, Hanley, and Ghaemmaghami (2016) identified four components that many published functional analyses shared with the Iwata et al. procedures. These four components were (1) multiple test conditions (e.g., attention, escape, alone), (2) uniformly applied procedures in each condition (e.g., a mild verbal reprimand delivered for each instance of problem behavior in the attention condition), (3) isolated reinforcement contingencies (i.e., each suspected reinforcer was arranged in separate test conditions), and (4) a play or omnibus control condition attempting to control for all assessed contingencies in the distinct test conditions.

These common procedural components were defined by Jessel et al. (2016) as mutually exclusive and juxtaposed with symmetrically opposing functional analysis procedures. For example, Falcomata, Roane, Muething, Stephenson, and Ing (2012) included multiple test conditions (component 1) in the initial functional analysis, whereby the results of the attention, ignore, escape, and tangible conditions were each compared to those of a toy play control condition. However, the results were inconclusive, and the multiple test conditions were replaced with a single test condition in which the participants were interrupted and blocked from gaining access to preferred leisure activities. During the test condition, the participants regained uninterrupted access to the leisure items for 30 s only following problem behavior. This single test condition was compared to a control condition of uninterrupted access to the leisure items for the entire session.

Fisher, Greer, Romani, Zangrillo, and Owen (2016) conducted two functional analysis formats that differed in the uniformity (component 2) of the procedures. The first analysis included the same three test conditions (i.e., attention, tangible, escape) and a toy play control across participants, and the form of attention (verbal reprimands) was identical for every participant and matched that of Iwata et al. (1982/1994a). The second functional analysis included a single test condition with a contingency unique to each of the five participants because it was informed by an open-ended interview and structured observation.

Although Iwata et al. (1982/1994a) originally evaluated sensitivity to negative and positive reinforcement in isolated conditions (component 3), Slaton, Hanley, and Raftery (2017) designed functional analyses with a test condition consisting of a unique synthesized contingency for each of nine participants. As an example, the general contingencies combined into the test condition of Diego’s functional analysis included escape from work, access to attention, and access to tangibles. This created one synthesized contingency emulating the specific problematic context, identified through open-ended interviews, in which Diego escaped from handwriting work to a calming area with toys and stories read to him by caregivers.

Lastly, the play control (component 4) often includes leisure items unrelated to the contingencies represented in the test conditions—these are usually absent from the other test conditions (McCord & Neef, 2005)—and includes a qualitatively dissimilar form of attention (i.e., reprimands during the test and general praise during the control). An analysis with a component symmetrically opposed to the play control would include a matched control. A matched control includes a pair of conditions in which the only manipulation is the manner in which reinforcers are provided and withheld (Thompson & Iwata, 2005). That is, only the contingent nature of the reinforcement is altered across test and control conditions; all else remains the same.

In addition to these four components, the functional analysis was designed to analyze minimal response classes, targeting only the dangerous and severe problem behavior of interest (e.g., severe self-injurious behavior or severe aggression). Some have argued that aggregating multiple topographies of problem behavior into a single functional analysis may result in an inflation in multiply controlled outcomes (Beavers et al., 2013) or may mask functions when low-rate problem behavior is combined with high-rate problem behavior (Asmus, Franzese, Conroy, & Dozier, 2003). Because the original functional analysis format included only dangerous behavior (Iwata et al., 1982/1994a, b) and minimizing response classes has historically been recommended as best practice (Hanley et al., 2003), targeting a limited number of dangerous problem behaviors could be considered another component of the Iwata et al. procedures.

The prevalence of procedural components from Iwata et al. (1982/1994a) was not reported in previously conducted reviews (Beavers et al., 2013; Hanley et al., 2003). Although Jessel et al. (2016) described these components as being integral to the frequently replicated procedures of Iwata et al. (1982/1994a), the extent to which these components have become more or less prominent over time is not clear. A better understanding of these trends is important for determining whether a standardization of functional analysis procedures has occurred. The purpose of this study, therefore, was to evaluate the representation of these five components in the published functional analysis literature and to begin to understand the extent to which the Iwata et al. procedures have become the conventional approach to functional analysis of problem behavior. The advantages of standardizing general behavioral practices and specific procedures rely on the premise that the general practice or specific procedures are effective. We discuss the obtained outcomes in the context of recent research evaluating the relative effectiveness of different functional analysis formats and consider further research required to determine whether standardization of a general approach or specific procedures is warranted.

Articles included in the current review were obtained from the reference lists provided by Hanley et al. (2003) and Beavers et al. (2013). Hanley et al. contained analyses from 1965 to 2000, whereas Beavers et al. contained analyses from 2001 to 2012. In addition, we collected articles from 2013 to 2016 using similar search methods and meeting the same inclusion criteria hereafter described.

To obtain the articles from 2013 to 2016, the first and third authors independently conducted a search of the published literature using PsycINFO and ERIC with the search terms functional analysis and behavioral assessment. Had an article been found by one author and not the other, the two would have discussed the variability in findings and modified the search methods accordingly. However, the same articles were found by both authors. In addition, the first author conducted a more thorough hand search of the issues of the Journal of Applied Behavior Analysis, Behavioral Interventions, and Behavior Modification.

Articles were included, using the original inclusion criteria (Beavers et al., 2013; Hanley et al., 2003), if a pretreatment assessment was conducted with the manipulation of environmental variables and direct observation/measurement of target problem behavior. Additional criteria were added for the purposes of the current review: The article had to include graphic representation of session means or report the number of sessions for each condition, analyses needed to include programmed changes to consequences for problem behavior across conditions (i.e., no antecedent-only analyses), and the analyses needed to include some form of a control condition. We excluded analyses with only a manipulation of antecedent variables from this review because we were interested in determining functional analysis formats attempting to identify the reinforcing contingencies for problem behavior (see Hanley et al., 2003, for further discussion). The published functional analyses that met the inclusion criteria, whether or not they resulted in differentiated outcomes, were reviewed to determine the presence or absence of the five components of the Iwata et al. (1982/1994a) functional analysis.

Each individual functional analysis was evaluated for the presence or absence of the five components (i.e., these components were binary, and an analysis could either have the component or the alternative). In some cases, multiple applications of the functional analysis could have been conducted for a single participant. Each application was considered separately and evaluated as its own distinct functional analysis. An application was defined as a single implementation of functional analysis procedures without any modifications to the procedures, therapists, or materials that required a reanalysis of functional control.

The first component, multiple test conditions, refers to the inclusion of more than one test condition per analysis. A functional analysis with multiple test conditions can often be identified when problem behavior’s sensitivity to multiple sources of reinforcement are evaluated separately (e.g., attention, escape, tangible, automatic). The alternative is to include a single test condition per analysis.

The second component is the use of uniform test conditions whereby the procedures are a replication of Iwata et al. (1982/1994a) and are not informed by individualized details of the participants. An analysis with uniform test conditions will include an attention condition with nonspecific reprimands and an escape condition with academic materials, as identified by Iwata et al. Studies with uniform test conditions can often be identified when one description of the functional analysis procedures is used for all participants. This is juxtaposed with the inclusion of unique test conditions explicitly informed by open-ended interviews and observations. Studies that used closed-ended interviews did not meet the criterion for including unique contingencies because the information obtained from closed-ended interviews is typically used to replace, rather than inform, a functional analysis (Hanley, 2012). In addition, many studies in which the conditions were pre-arranged to evaluate general classes of reinforcement were not considered informed even when interviews or observations were reported to have been conducted, because procedures different from those described by Iwata et al. (1982/1994a) were not described. In other words, unique test conditions use the information from an interview/observation to develop the contingency, whereas uniform test conditions, at most, may use information from an interview/observation to identify materials to be included in pre-determined general contingencies.

The third component consists of isolated test conditions in which consequences serving as reinforcers for problem behavior are evaluated independently, with different forms of social positive reinforcement tested in separate conditions (i.e., the attention and tangible conditions) and negative reinforcement tested in another distinct condition (i.e., the escape-from-demands condition). The alternative would be to synthesize the contingencies into a single test condition (e.g., escape from demands to attention and tangibles).

The fourth component refers to the use of a play control condition as a comparison for all of the test conditions; this is opposed to a matched control condition in which the only difference between the test and control conditions is the presence or absence of the reinforcement contingency. Matched controls include studies in which the same reinforcers provided contingently in the test condition are presented noncontingently during the control (e.g., Jessel et al., 2016) or are withheld altogether (Hanley, Iwata, & Thompson, 2001).

The fifth component is the targeting of only dangerous behavior in the response class. Beavers et al. (2013) reported that in the majority of functional analysis applications the therapist measured multiple topographies of problem behavior; however, a distinction was not made between the inclusion of dangerous and non-dangerous topographies. Dangerous behaviors were considered responses that could cause harm to oneself, others, or physical property (e.g., hitting, kicking, scratching, pulling hair, throwing objects, tearing objects). A functional analysis not including this component would target multiple topographies of non-dangerous (e.g., screaming, crying, yelling) and dangerous problem behavior. An additional criterion was added specifically for this component: Studies not measuring any form of dangerous behavior (e.g., those targeting only stereotypy or compliance) were excluded. These studies were not considered in the evaluation of the current component because they constituted a test of the generality of the functional analysis procedures to other topographies of behavior rather than a contribution to the understanding of dangerous problem behavior.

The first author coded all articles reviewed. To categorize the specific components of an application, the first author examined the graphic representation of the results of the functional analyses in the figures. For three of the five components (multiple test conditions, isolated test conditions, play control), enough information could often be extracted from the figure alone; however, the first author also crosschecked the information obtained from the figures against that described in the text of the method section. For the remaining two components (uniform test conditions, only dangerous behavior), the first author found the necessary information in the method section of the articles reviewed.

Research assistants independently coded a randomly selected portion of the analyses. Articles were selected from the Hanley et al. (2003) reference list, the Beavers et al. (2013) reference list, and the list of recent analyses conducted between 2013 and 2016 (available on request from the first author). The secondary coder reviewed 32%, 38%, and 41% of the Hanley et al., Beavers et al., and 2013–2016 reference lists, respectively. We calculated interobserver agreement (IOA) using a point-by-point method. An agreement was defined as the primary and secondary coders recording identical values for a component; a disagreement was defined as the recorded values differing. For example, if both coders scored a single functional analysis from an article as including isolated test conditions, this was considered an agreement; if one coder scored the functional analysis as having isolated test conditions and the other scored synthesized test conditions, this was considered a disagreement. We divided the number of agreements by the total number of values coded and multiplied the quotient by 100 to obtain a percentage. The inter-coder agreement for the five components was 99%, 98%, 98%, 98%, and 95% for multiple test conditions, uniform test conditions, isolated test conditions, play control inclusion, and dangerous behavior only, respectively.
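The point-by-point IOA calculation described above can be sketched in a few lines of code. The component labels and example codes below are illustrative only (not data from the review); the arithmetic follows the description in the text: agreements divided by total values coded, multiplied by 100.

```python
# Point-by-point interobserver agreement (IOA) for binary component codes.
# Each functional analysis application is coded as a mapping from the five
# components to True/False. Labels and example codes are hypothetical.

COMPONENTS = [
    "multiple_test_conditions",
    "uniform_test_conditions",
    "isolated_test_conditions",
    "play_control",
    "dangerous_behavior_only",
]

def point_by_point_ioa(primary, secondary):
    """Percentage of coded values on which the two coders agree."""
    agreements = 0
    total = 0
    for p, s in zip(primary, secondary):
        for component in COMPONENTS:
            total += 1
            if p[component] == s[component]:
                agreements += 1
    return 100 * agreements / total

# Hypothetical example: two analyses coded by two coders, who disagree on
# one of the ten values (the isolated-test-conditions code of analysis 2).
coder_a = [
    dict.fromkeys(COMPONENTS, True),
    dict.fromkeys(COMPONENTS, False),
]
coder_b = [
    dict.fromkeys(COMPONENTS, True),
    {**dict.fromkeys(COMPONENTS, False), "isolated_test_conditions": True},
]
print(point_by_point_ioa(coder_a, coder_b))  # 90.0
```

Scoring each of the five components separately, rather than the analysis as a whole, is what allows the per-component agreement figures (99%, 98%, etc.) reported above.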

We collected 461 studies for this review. Forty-eight studies were removed because the functional analyses were not included in the study or were not represented in graphical form. Thirty-eight studies were removed because the studies included antecedent-only analyses (i.e., without a manipulation of consequences). Last, 10 studies were removed because no control condition was included (2), the article was a summary or review (5), or only group data were presented (3). The data below represent a review of 365 studies with a total of 1148 distinct functional analyses.
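As a quick arithmetic check, the exclusion counts reported above account exactly for the final sample; the figures are taken from the text, and the variable names are ours.

```python
# Study counts reported in the review, checked against the final sample size.
total_collected = 461
removed_missing_or_ungraphed = 48  # functional analysis absent or not graphed
removed_antecedent_only = 38       # no manipulation of consequences
removed_other = 2 + 5 + 3          # no control condition, summary/review, group data only

remaining = (total_collected - removed_missing_or_ungraphed
             - removed_antecedent_only - removed_other)
print(remaining)  # 365
```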

Figure 1 represents the number of analyses with some combination of the five components from 1965 to 2016. For example, an analysis defined as having two components could include multiple test conditions and uniform test conditions, or isolated contingencies and a play control, or any other mix of two components. Figure 2 represents the percentage of functional analyses in 5-year bins that included each component. The years in both figures were organized into three distinct periods relative to the initial publication of Iwata et al. in 1982 and the republication of the article in 1994. The results are described below within these three time frames.

Prior to the original publication of the Iwata et al. study in 1982, relatively few functional analyses had been conducted (n = 14). In addition, the functional analyses that were conducted included at most three components: three analyses included three components (3 of 14; 21%), seven included two components (7 of 14; 50%), two included only one component (14%), and the remaining two included none of the components (14%). Few of the functional analyses included multiple test conditions (4 of 14; 29%), and none included uniform test conditions or a play control. Lastly, many of the analyses isolated contingencies (12 of 14; 86%) and measured only dangerous problem behavior (12 of 14; 86%).

During the period in which Iwata et al. (1982/1994a) was originally published and reprinted in the Journal of Applied Behavior Analysis, there was a sharp increase in the number of functional analyses being conducted (n = 171), with many including all five components (80 of 171; 47%). More functional analyses were also being conducted with four components (34 of 171; 20%) and three components (28 of 171; 16%). Very few functional analyses were being conducted with two components (19 of 171; 11%) or one component (8 of 171; 5%). Additionally, there was no increase in the number of functional analyses conducted with zero components (2 of 171; 1%) during this period, even though there was a greater than tenfold increase in the number of functional analyses published. The specific component used most often was isolated test conditions (163 of 171; 95%), followed by multiple test conditions (148 of 171; 87%), play control (127 of 171; 74%), uniform test conditions (123 of 171; 72%), and the inclusion of only dangerous problem behavior (104 of 171; 61%).

The most functional analyses were conducted during the period following 1994 (n = 963). Many of the functional analyses in the years during and immediately following 1994 included all five of the core components (178 of 352; 51%); however, there was a decreasing trend in the number of analyses with five components (305 of 963; 32%) and an increasing trend in the number of analyses with four components (426 of 963; 44%) starting in the year 2001. The number of functional analyses conducted with zero (23 of 963; 2%), one (89 of 963; 9%), two (49 of 963; 5%), and three components (71 of 963; 7%) remained relatively stable and low across the years. However, the final year reviewed (2016) reflects a more diverse representation of analyses with one (30 of 49; 61%) or zero (12 of 49; 24%) components during the post-1994 period. This shift in the use of certain procedural components reflects a number of publications explicitly relying on the interview-informed synthesized contingency analysis (IISCA; Hanley, Jin, Vanselow, & Hanratty, 2014) and the multiple replications of this analysis published in the following years (Ghaemmaghami, Hanley, & Jessel, 2015; Ghaemmaghami, Hanley, Jin, & Vanselow, 2016; Jessel et al., 2016; Santiago, Hanley, Moore, & Jin, 2016), a pattern that seems to continue beyond the 2016 limit of this review (e.g., Beaulieu, Van Nostrand, Williams, & Herscovitch, 2018; Boyle et al., 2019; Herman, Healy, & Lydon, 2018; Jessel, Ingvarsson, Kirk, Whipple, & Metras, 2018a; Jessel et al., 2018b; Jessel, Metras, Hanley, Jessel, & Ingvarsson, 2019; Rose & Beaulieu, 2019; Slaton et al., 2017; Strand & Eldevik, 2018; Taylor, Phillips, & Gertzog, 2018). That the IISCA may have influenced this shift in procedural components is important because the IISCA format has symmetrically opposing procedures that do not incorporate any of the Iwata et al. (1982/1994a) components that are the focus of this review (see Jessel et al., 2016, for more information).

Similar to the period prior to 1994, the specific component included the most in the functional analyses was isolated test conditions (882 of 963; 92%). This was followed by multiple test conditions (817 of 963; 85%), play control (766 of 963; 80%), uniform test conditions (753 of 963; 78%), and inclusion of only dangerous problem behavior (411 of 963; 43%). However, in the final five years (2012–2016) we observed a slight shift away from the core components towards analyses with single test conditions (62 of 238; 26%), synthesized contingencies (68 of 238; 29%), informed test conditions (80 of 238; 34%), matched control conditions (96 of 238; 40%), and the targeting of non-dangerous behavior (148 of 238; 62%).

With over 50% of all functional analyses being conducted using some or all of the components of the Iwata et al. (1982/1994a) procedures for more than 30 years, we found that the published literature supports the notion that a specific format had indeed become the standard. Although a fair number of analyses deviated from the procedures described by Iwata et al. in the 20 years following its publication, there were very few exceptions to the Iwata et al. procedural components in the more recent 10 years between 2002 and 2011. The prevalent components were multiple, uniform, and isolated contingencies analyzed against an omnibus control condition. This trend between 1982 and 2012, and the adoption of the components in the overall majority of applications, does indeed suggest that the Iwata et al. (1982/1994a) procedures had become the standard functional analysis. In fact, several authors have referred to the Iwata et al. (1982/1994a) procedures as the standard or standardized format without formal recognition of its widespread adoption within the published literature (e.g., Bloom, Iwata, Fritz, Roscoe, & Carreau, 2011; Hagopian, Rooker, Jessel, & DeLeon, 2013; Schlichenmeyer, Roscoe, Wheeler, & Dube, 2013).

Behavior analysts conducting research on the assessment and treatment of problem behavior appear to have long relied on functional analyses with multiple test conditions, uniform test conditions, isolated contingencies, and play or omnibus control conditions. However, it seems researchers have become less inclined to continue to measure and reinforce only dangerous problem behavior, in spite of previous calls to the contrary (e.g., Hanley et al., 2003). It is possible that including non-dangerous behavior created a safer analysis for both the analyst and the participant, and that practical utility took precedence over analytic precision. In other words, behavior analysts may have been able to minimize unsafe situations by including other behaviors in the contingency class, such as attempts to engage in dangerous behavior (e.g., Tarbox, Wallace, Tarbox, Landaburu, & Williams, 2004), mild behavior assumed to be functionally related (e.g., Peck et al., 1996), and precursors to problem behavior (e.g., Smith & Churchill, 2002). Moreover, opening up the contingency class may not necessarily have led to an imprecise understanding of problem behavior, given that studies evaluating response class membership have shown that dangerous and non-dangerous behaviors reported to co-occur are maintained by the same reinforcer (Borrero & Borrero, 2008; Fritz, Iwata, Hammond, & Bloom, 2013; Herscovitch, Roscoe, Libby, Bourret, & Ahearn, 2009; Langdon, Carr, & Owen-DeSchryver, 2008; Najdowski, Wallace, Ellsworth, MacAleese, & Cleveland, 2008; Smith & Churchill, 2002).

While some researchers have modified the measurement system or the number or duration of observations, they have maintained the components of the Iwata et al. procedures. For example, Thomason-Sassi, Iwata, Neidert, and Roscoe (2011) attempted to reduce exposure to a context likely to evoke problem behavior by terminating sessions following a single instance. This functional analysis was termed the latency-based format because the duration from the start of the session to the first instance of problem behavior, rather than the typical rate of problem behavior, was used as an index of response strength. In all other respects, this functional analysis was identical to the standard format and included all five components. Similarly, Sigafoos and Saggers (1995) reduced observations to the measurement of a single instance of problem behavior in what has been termed the trial-based format. The trial-based format retained the components of the standard functional analysis with the exception of the play control. The brief functional analysis format, developed by Northup et al. (1991), reduced the analysis duration while maintaining the measurement of rate of problem behavior by conducting a minimal number of sessions (i.e., one or two sessions). It is interesting to note that the original procedures of the brief format included a matched control, whereby the reinforcers from the test condition with the highest rates of problem behavior were presented contingent on appropriate communication in a contingency reversal phase. As the prominence of the standard format continued to grow in the following years, the matched control was exchanged for a toy play control, and all the components of the brief format became identical to those of the standard (Kahng & Iwata, 1999; Vollmer, Marcus, Ringdahl, & Roane, 1995).

The most recent 5-year period (2012–2016) of functional analysis publications shows a return to analyses that differ markedly from the standard functional analysis procedures. Many more analyses published in these years included single test conditions, were informed by open-ended interviews or observations, synthesized contingencies in an attempt to emulate the manner in which the contingencies are reported or observed to occur in typical contexts experienced by the participants, relied on matched control conditions, and included more open contingency classes involving both dangerous and non-dangerous topographies of problem behavior.

Although it seems many researchers are beginning to return to individualizing functional analysis procedures, there is currently a single group that opposes the shift away from the standard format. Fisher et al. (2016) conducted two functional analyses, with and without synthesized contingencies, for 5 individuals who exhibited problem behavior. They found that the results of the two analyses did not correspond in four out of five cases. This led Fisher et al. to assume that the analysis with synthesized contingencies falsely identified reinforcers influencing the participants’ problem behavior and to conclude that the components of the standard should not be modified in such a manner. Of course, the relative efficacy of one assessment over the other cannot be inferred from a lack of correspondence between them. Therefore, a comparative study without treatment evaluations is not particularly helpful because we are unable to determine whether the isolated or synthesized contingencies would have informed better treatment outcomes. That is, an assessment should be evaluated based on its usefulness in contributing to a beneficial treatment, which has been termed treatment validity or utility (Hayes, Nelson, & Jarrett, 1987; Shapiro & Kratochwill, 2000).

Slaton and Hanley (2018) conducted a quantitative review of 55 studies to determine if combining contingencies during the functional analysis and treatment of problem behavior would in fact negatively impact treatment outcomes. Interestingly, Slaton and Hanley found that synthesized contingencies were necessary for producing interpretable functional analyses and efficacious treatments in 80% of the cases that evaluated both. Furthermore, treatments including synthesized variables overall were far more likely to produce larger reductions in problem behavior.

The studies reviewed by Slaton and Hanley (2018) support the notion that synthesized contingencies hold advantages over isolated contingencies. However, those studies were published intermittently over the last 20 years, and the recent shift in functional analysis methods may be attributable, at least in part, to multiple calls by Hanley (2010, 2012) for a movement away from standard analyses.

The specific recommendations from Hanley (2012) were that (a) all functional analysis test conditions be informed to some extent by open-ended interviews and observation (i.e., analyses should not involve uniform test conditions of only the generic functions), (b) test–control analyses, in which the only difference between conditions is the presence or absence of a contingency, be designed from those assessments (i.e., analyses need not involve multiple test conditions or an omnibus control condition; see Hanley et al., 2001, for an early example), (c) all forms of problem behavior that are reported to co-occur be included in the contingency class (i.e., analyses should not withhold reinforcement until only dangerous behavior occurs), and (d) this personalized analysis be conducted in the absence of a standard analysis (i.e., one need not fail first with a standard analysis to justify a personalized one). Hanley et al. (2014) followed up with the suggestion that when reinforcement contingencies are reported to co-occur (e.g., a teacher reports that a child is provided a break to a relaxing area with an iPad following problem behavior), they be evaluated together as part of a synthesized contingency.

It is important to note that, although Hanley and his colleagues may have contributed a large portion of the publications indicating a shift in functional analysis methods, there are many examples of (a) deviations from the standard components prior to the influence of Hanley et al. (2014) (e.g., Bowman, Fisher, Thompson, & Piazza, 1997; Sarno et al., 2011), (b) evaluations of the Hanley et al. procedures conducted by independent researchers (e.g., Boyle et al., 2019; Fisher et al., 2016; Strohmeier, Murphy, & O’Connor, 2016), and (c) international replications from three countries outside of the United States (e.g., Herman et al., 2018; Strand & Eldevik, 2018; Taylor et al., 2018).

Although recent adoptions of certain procedures in functional assessment methodology have deviated from the traditional procedures of Iwata et al. (1982/1994a), this does not discount the notable contributions and decades of historical influence of the seminal publication. The hundreds of systematic replications in the published literature have been evaluated across various settings (e.g., inpatient unit, school, clinic, home), topographies of problem behavior (e.g., aggression, self-injury, property destruction), and participant characteristics (e.g., children, adults, individuals with and without developmental disabilities), with 94% of the analyses successfully identifying behavioral functions (Beavers et al., 2013; Hanley et al., 2003). Furthermore, multiple meta-analyses have found that behavioral treatments preceded by functional analyses are more likely to result in greater reductions in problem behavior (e.g., Campbell, 2003; Heyvaert, Saenen, Campbell, Maes, & Onghena, 2014). It is evident from this collection of single-subject studies that the Iwata et al. procedures are often efficacious for identifying the main effects of isolated social reinforcement contingencies. However, the question is whether the standardization of the functional analysis has led to an effective model that (a) consistently identifies the environmental variables maintaining problem behavior in the relevant and typical contexts in which those behaviors were originally reported to occur and (b) informs treatments that produce long-term, socially meaningful outcomes in those relevant contexts.

One consideration is that reviews reporting a high probability of obtaining differentiated results during a functional analysis (Beavers et al., 2013; Hanley et al., 2003) may be influenced by a common publication bias: positive (differentiated) results are more likely to be published than negative (undifferentiated) results (i.e., the “file drawer problem”; Rosenthal, 1979). The exceptionally high level of differentiation reported in these reviews is therefore unlikely to be representative of what practitioners who conduct functional analyses experience. Second, the majority of the studies included in the meta-analyses of treatment outcomes (Campbell, 2003; Heyvaert et al., 2014) were conducted by experts, in highly controlled settings, for brief periods of time, often without reaching practical schedules of reinforcement; these meta-analyses are, in addition, susceptible to the same publication bias. Overall, the countless systematic replications of the standard functional analysis provide evidence that positive results are attainable; however, this does not imply that positive results are probable. Whether positive results are probable can be evaluated through reviews of consecutive controlled case series (e.g., Hagopian, Fisher, Sullivan, Acquisto, & LeBlanc, 1998; Rooker, Jessel, Kurtz, & Hagopian, 2013).

A consecutive controlled case series design includes all participants who meet inclusion criteria, in the order in which they are admitted to a clinical program. This eliminates the file drawer effect because all outcomes are reported, and it minimizes potential selection bias because all participants who experienced the assessment or treatment under evaluation are included. Hagopian et al. (2013) reported 176 consecutive applications of the standard functional analysis format conducted with individuals admitted to an inpatient unit. Hagopian et al. evaluated four general categories of modifications (i.e., antecedent, consequent, design, and combination) and found that only 47% of the initial analyses were differentiated; the remaining 94 analyses required secondary, and sometimes tertiary, modifications. These results were replicated in a second consecutive controlled case series conducted in a private day school for children with autism (Slaton et al., 2017). The first nine participants referred for the assessment and treatment of problem behavior served as participants, and the standard functional analysis resulted in differentiated outcomes in only four of the nine cases (44%). Slaton et al. (2017) also compared the results of the standard functional analysis format to those of the interview-informed synthesized contingency analysis (IISCA) format, whose procedures stand in nearly point-by-point opposition to the standard components. Differentiated results of the IISCA were obtained in all nine consecutive cases.

Jessel, Ingvarsson, Metras, Kirk, and Whipple (2018a) implemented the IISCA and subsequent treatment in a consecutive controlled case series conducted in an outpatient clinic. Twenty-five patients of the clinic were provided assessment and treatment services for their problem behavior in the order in which they were admitted. The IISCAs for all 25 participants were differentiated, and a 99% reduction in problem behavior was obtained during the treatment evaluation. In addition, Jessel et al. thinned reinforcement to practical schedules, and the entire assessment and treatment process was socially validated by the caregivers. The increasing use of the IISCA in recent years provides evidence that positive results are attainable with this format (see also Herman et al., 2018; Strand & Eldevik, 2018); in addition, the consecutive controlled case series support the notion that socially valid outcomes of treatments designed from the IISCA are also highly probable. More research comparing functional analyses along the component continuum described in this paper is clearly needed, but, in light of these recent findings, reconsideration of the Iwata et al. (1982/1994a) set of procedures as the standard is certainly warranted.

Our intent is not to supplant one set of specific functional analysis procedures with another as the standard. On the contrary, we believe that the power of functional analysis methodology is derived from the highly flexible and individualized approach characteristic of the field of applied behavior analysis (Smith, 2013), and these characteristics should be retained when analyzing the variables influencing problem behavior. For example, a practitioner may be interested in isolating the value of possible reinforcers to evaluate the functional relevance of multiple communication responses taught in a single context (Boyle et al., 2019). In other cases, the practitioner may want to identify a synthesis of reinforcers to create a highly motivating environment in which to teach and strengthen the core sentence structure of an omnibus mand (Ghaemmaghami, Hanley, Jessel, & Landa, 2018). The level of isolation or synthesis should ultimately depend on the specifics of the individual’s therapeutic needs and not on a standard, arbitrary rule.

It may be best to standardize the general approach of individualized analyses informed by an open-ended interview and observation, rather than any specific set of procedures. Of course, the appropriate level of standardization cannot be directly identified from this review. There are certainly many permutations that need to be understood before a general functional analysis model can be successfully applied to all children who engage in problem behavior and all adults with intellectual disabilities who engage in severe problem behavior. For instance, procedural details such as running the analysis in a fairly typical context and extending reinforcement intervals (thereby stretching the time between presentations of the establishing operation) may be necessary for children with language who are reactive to more regimented analytic procedures (e.g., 30-s reinforcement intervals repeatedly applied in a session room).

Our review was limited in scope in that we identified only the prevalence of core functional analysis components and their alternatives across time in the published literature. We did not evaluate the speed with which different functional analyses yielded a differentiated outcome or the extent of experimental control afforded by each type of analysis. Our study also did not reveal the prevalence of these standard components when functional analysis has been applied in past or current practice outside of highly controlled research contexts. Future research should evaluate the relative efficiency, analytic control, and practical utility of functional analyses that differ in the number of standard components they include.

For now, practitioners can relieve themselves of the burden of committing to a specific set of procedures when conducting a functional analysis. We have shown that a standard functional analysis format may indeed exist in the research literature; however, we have also called into question whether those specific procedures should remain common practice among those working with individuals who exhibit problem behavior. Our current recommendation to practitioners is to choose the functional analysis format that can (a) quickly and safely identify the environmental variables influencing problem behavior and (b) inform effective action on the part of those involved.

Conflict of Interest

Joshua Jessel declares that he has no conflict of interest. Gregory P. Hanley declares that he has no conflict of interest. Mahshid Ghaemmaghami declares that she has no conflict of interest.

Informed Consent

For this type of study, formal consent is not required.

Ethical Approval

This article does not contain any studies with human participants performed by any of the authors.

1. General classes of reinforcement or general contingencies refer to the four broad categories of reinforcement (i.e., attention, escape, tangible, automatic) identified by Iwata et al. (1982/1994a).

2. The one component retained in some cases when the IISCA was conducted was the measurement of only dangerous behavior. However, the inclusion of non-dangerous behavior in the response class has more recently become a defining feature of the IISCA (Slaton et al., 2017).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Asmus JM, Franzese JC, Conroy MA, Dozier CL. Clarifying functional analysis outcomes for disruptive behaviors by controlling consequence delivery for stereotypy. School Psychology Review. 2003;32:624–630.
  • Austin J, Carr JE, editors. Handbook of applied behavior analysis. Reno: Context Press; 2000.
  • Azrin NH. A strategy for applied research: Learning based but outcome oriented. American Psychologist. 1977;32:140–149. doi: 10.1037/0003-066X.32.2.140.
  • Beaulieu L, Van Nostrand ME, Williams AL, Herscovitch B. Incorporating interview-informed functional analyses into practice. Behavior Analysis in Practice. 2018;11:385–389. doi: 10.1007/s40617-018-0247-7.
  • Beavers GA, Iwata BA, Lerman DC. Thirty years of research on the functional analysis of problem behavior. Journal of Applied Behavior Analysis. 2013;46:1–21. doi: 10.1002/jaba.30.
  • Bergan JR, Kratochwill TR. Behavioral consultation and therapy. New York: Plenum Press; 1990.
  • Bloom SE, Iwata BA, Fritz JN, Roscoe EM, Carreau AB. Classroom application of a trial-based functional analysis. Journal of Applied Behavior Analysis. 2011;44:19–31. doi: 10.1901/jaba.2011.44-19.
  • Borrero CSW, Borrero JC. Descriptive and experimental analyses of potential precursors to problem behavior. Journal of Applied Behavior Analysis. 2008;41:83–96. doi: 10.1901/jaba.2008.41-83.
  • Bowman LG, Fisher WW, Thompson RH, Piazza CC. On the relation of mands and the function of destructive behavior. Journal of Applied Behavior Analysis. 1997;30:251–265. doi: 10.1901/jaba.1997.30-251.
  • Boyle MA, Stamper SM, Donaldson EA, Curtis KS, Forck KL, Shrimplin MA, et al. Functional communication training for multiple reinforcers: An evaluation of isolated control following a synthesized context. Behavior Analysis in Practice. 2019; advance online publication. doi: 10.1007/s40617-018-00320-7.
  • Campbell JM. Efficacy of behavioral interventions for reducing problem behavior in persons with autism: A quantitative synthesis of single-subject research. Research in Developmental Disabilities. 2003;24:120–138. doi: 10.1016/S0891-4222(03)00014-3.
  • Carr EG, Newsom CD, Binkoff JA. Stimulus control of self-destructive behavior in a psychotic child. Journal of Abnormal Child Psychology. 1976;4:139–153. doi: 10.1007/BF00916518.
  • Cooper JO, Heron TE, Heward WL. Applied behavior analysis. 2nd ed. Upper Saddle River: Prentice Hall; 2007.
  • Falcomata TS, Roane HS, Muething CS, Stephenson KG, Ing AD. Functional communication training and chained schedules of reinforcement to treat challenging behavior maintained by terminations of activity interruptions. Behavior Modification. 2012;36:630–649. doi: 10.1177/0145445511433821.
  • Fisher WW, Greer BD, Romani PW, Zangrillo AN, Owen TM. Comparisons of synthesized and individual reinforcement contingencies during functional analysis. Journal of Applied Behavior Analysis. 2016;49:596–616. doi: 10.1002/jaba.314.
  • Fisher WW, Piazza CC, Roane HS, editors. Handbook of applied behavior analysis. New York: Guilford Press; 2011.
  • Fritz JN, Iwata BA, Hammond JL, Bloom SE. Experimental analysis of precursors to severe problem behavior. Journal of Applied Behavior Analysis. 2013;46:101–129. doi: 10.1002/jaba.27.
  • Ghaemmaghami M, Hanley GP, Jessel J. Contingencies promote delay tolerance. Journal of Applied Behavior Analysis. 2016;49:548–575. doi: 10.1002/jaba.333.
  • Ghaemmaghami M, Hanley GP, Jessel J, Landa R. Shaping complex functional communication responses. Journal of Applied Behavior Analysis. 2018;51:502–520. doi: 10.1002/jaba.468.
  • Ghaemmaghami M, Hanley GP, Jin SC, Vanselow NR. Affirming control by multiple reinforcers via progressive treatment analysis. Behavioral Interventions. 2016;31:70–86. doi: 10.1002/bin.1425.
  • Hagopian LP, Fisher WW, Sullivan MT, Acquisto J, LeBlanc LA. Effectiveness of functional communication training with and without extinction and punishment: A summary of 21 inpatient cases. Journal of Applied Behavior Analysis. 1998;31:211–235. doi: 10.1901/jaba.1998.31-211.
  • Hagopian LP, Rooker GW, Jessel J, DeLeon IG. Initial functional analysis outcomes and modifications in pursuit of differentiation: A summary of 176 inpatient cases. Journal of Applied Behavior Analysis. 2013;46:88–100. doi: 10.1002/jaba.25.
  • Hanley GP. Prevention and treatment of severe problem behavior. In: Mayville E, Mulick J, editors. Behavioral foundations of autism intervention. New York: Sloan Publishing; 2010.
  • Hanley GP. Functional assessment of problem behavior: Dispelling myths, overcoming implementation obstacles, and developing new lore. Behavior Analysis in Practice. 2012;5:54–72. doi: 10.1007/BF03391818.
  • Hanley GP, Iwata BA, McCord BE. Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis. 2003;36:147–185. doi: 10.1901/jaba.2003.36-147.
  • Hanley GP, Iwata BA, Thompson RH. Reinforcement schedule thinning following treatment with functional communication training. Journal of Applied Behavior Analysis. 2001;34:17–37. doi: 10.1901/jaba.2001.34-17.
  • Hanley GP, Jin CS, Vanselow NR, Hanratty LA. Producing meaningful improvements in problem behavior of children with autism via synthesized analyses and treatments. Journal of Applied Behavior Analysis. 2014;47:16–36. doi: 10.1002/jaba.106.
  • Hayes SC, Nelson RO, Jarrett RB. The treatment utility of assessment: A functional approach to evaluating assessment quality. American Psychologist. 1987;42:963–974. doi: 10.1037/0003-066X.42.11.963.
  • Herman C, Healy O, Lydon S. An interview-informed synthesized contingency analysis to inform the treatment of challenging behavior in a young child with autism. Developmental Neurorehabilitation. 2018;21:202–207. doi: 10.1080/17518423.2018.1437839.
  • Herscovitch B, Roscoe EM, Libby ME, Bourret JC, Ahearn WH. A procedure for identifying precursors to problem behavior. Journal of Applied Behavior Analysis. 2009;42:697–702. doi: 10.1901/jaba.2009.42-697.
  • Heyvaert M, Saenen L, Campbell JM, Maes B, Onghena P. Efficacy of behavioral interventions for reducing problem behavior in persons with autism: An updated quantitative synthesis of single-subject research. Research in Developmental Disabilities. 2014;35:2463–2476. doi: 10.1016/j.ridd.2014.06.017.
  • Iwata BA, Dorsey MF, Slifer KJ, Bauman KE, Richman GS. Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis. 1994a;27:197–209. doi: 10.1901/jaba.1994.27-197. (Reprinted from Analysis and Intervention in Developmental Disabilities. 1982;2:3–20).
  • Iwata BA, Pace GM, Dorsey MF, Zarcone JR, Vollmer TR, Smith RG, Willis KD. The functions of self-injurious behavior: An experimental-epidemiological analysis. Journal of Applied Behavior Analysis. 1994b;27:215–240. doi: 10.1901/jaba.1994.27-215.
  • Jessel J, Hanley GP, Ghaemmaghami M. Interview-informed synthesized contingency analyses: Thirty replications and reanalysis. Journal of Applied Behavior Analysis. 2016;49:576–595. doi: 10.1002/jaba.316.
  • Jessel J, Ingvarsson ET, Metras R, Kirk H, Whipple R. Achieving socially significant reductions in problem behavior following the interview-informed synthesized contingency analysis: A summary of 25 outpatient applications. Journal of Applied Behavior Analysis. 2018a;51:130–157. doi: 10.1002/jaba.436.
  • Jessel J, Ingvarsson ET, Metras R, Whipple R, Kirk H, Solsbery L. Treatment of elopement following a latency-based interview-informed, synthesized contingency analysis. Behavioral Interventions. 2018b. doi: 10.1002/bin.1525.
  • Jessel J, Metras R, Hanley GP, Jessel C, Ingvarsson ET. Evaluating the boundaries of analytic efficiency and control: A consecutive controlled case series of 26 functional analyses. Journal of Applied Behavior Analysis. 2019; advance online publication. doi: 10.1002/jaba.544.
  • Kahng SW, Iwata BA. Correspondence between outcomes of brief and extended functional analyses. Journal of Applied Behavior Analysis. 1999;32:149–159. doi: 10.1901/jaba.1999.32-149.
  • Langdon NA, Carr EG, Owen-DeSchryver J. Functional analysis of precursors for serious problem behavior and related intervention. Behavior Modification. 2008;32:804–827. doi: 10.1177/0145445508317943.
  • Lovaas OI, Freitag G, Gold VJ, Kassorla IC. Experimental studies in childhood schizophrenia: Analysis of self-destructive behavior. Journal of Experimental Child Psychology. 1965;2:67–84. doi: 10.1016/0022-0965(65)90016-0.
  • Lovaas OI, Simmons JQ. Manipulation of self-destruction in three retarded children. Journal of Applied Behavior Analysis. 1969;2:143–157. doi: 10.1901/jaba.1969.2-143.
  • McCord BE, Neef NA. Leisure items as controls in the attention condition of functional analyses. Journal of Applied Behavior Analysis. 2005;38:417–426. doi: 10.1901/jaba.2005.116-04.
  • Najdowski AC, Wallace MD, Ellsworth CL, MacAleese AN, Cleveland JM. Functional analyses and treatment of precursor behavior. Journal of Applied Behavior Analysis. 2008;41:97–105. doi: 10.1901/jaba.2008.41-97.
  • Northup J, Wacker D, Sasso G, Steege M, Cigrand K, Cook J, DeRaad A. A brief functional analysis of aggressive and alternative behavior in an outclinic setting. Journal of Applied Behavior Analysis. 1991;24:509–522. doi: 10.1901/jaba.1991.24-509.
  • Peck SM, Wacker DP, Berg WK, Cooper LJ, Brown KA, Richman D, et al. Choice-making treatment of young children’s severe behavior problems. Journal of Applied Behavior Analysis. 1996;29:263–290. doi: 10.1901/jaba.1996.29-263.
  • Pinkston EM, Reese NM, LeBlanc JM, Baer DM. Independent control of a preschool child's aggression and peer interaction by contingent teacher attention. Journal of Applied Behavior Analysis. 1973;6:115–124. doi: 10.1901/jaba.1973.6-115.
  • Rooker GW, Jessel J, Kurtz PF, Hagopian LP. Functional communication training with and without alternative reinforcement and punishment: An analysis of 58 applications. Journal of Applied Behavior Analysis. 2013;46:708–722. doi: 10.1002/jaba.76.
  • Rose JC, Beaulieu L. Assessing the generality and durability of interview-informed functional analyses and treatment. Journal of Applied Behavior Analysis. 2019;52:271–285. doi: 10.1002/jaba.504.
  • Rosenthal R. The file drawer problem and tolerance for null results. Psychological Bulletin. 1979;86:638–641. doi: 10.1037/0033-2909.86.3.638.
  • Sailor W, Guess D, Rutherford G, Baer DM. Control of tantrum behavior by operant techniques during experimental verbal training. Journal of Applied Behavior Analysis. 1968;1:237–243. doi: 10.1901/jaba.1968.1-237.
  • Santiago JL, Hanley GP, Moore K, Jin CS. The generality of interview-informed functional analyses: Systematic replications in school and home. Journal of Autism and Developmental Disorders. 2016;46:797–811. doi: 10.1007/s10803-015-2617-0.
  • Sarno JM, Sterling HE, Mueller MM, Dufrene B, Tingstrom DH, Olmi DJ. Escape-to-attention as a potential variable for maintaining problem behavior in the school setting. School Psychology Review. 2011;40:57–71.
  • Schlichenmeyer KJ, Roscoe EM, Wheeler EE, Dube WV. Idiosyncratic variables that affect functional analysis outcomes: A review (2001–2010). Journal of Applied Behavior Analysis. 2013;46:339–348. doi: 10.1002/jaba.12.
  • Shapiro ES, Kratochwill TR. Behavioral assessment in schools. New York: The Guilford Press; 2000.
  • Sigafoos J, Saggers E. A discrete-trial approach to the functional analysis of aggressive behaviour in two boys with autism. Australia & New Zealand Journal of Developmental Disabilities. 1995;20:287–297. doi: 10.1080/07263869500035621.
  • Slaton JD, Hanley GP. On the nature and scope of synthesis in functional analysis and treatment of problem behavior. Journal of Applied Behavior Analysis. 2018;51:943–973. doi: 10.1002/jaba.498.
  • Slaton JD, Hanley GP, Raftery KJ. Interview-informed functional analyses: A comparison of synthesized and isolated components. Journal of Applied Behavior Analysis. 2017;50:252–277. doi: 10.1002/jaba.384.
  • Smith RG, Churchill RM. Identification of environmental determinants of behavior disorders through functional analysis of precursor behaviors. Journal of Applied Behavior Analysis. 2002;35:125–136. doi: 10.1901/jaba.2002.35-125.
  • Smith T. What is evidence-based behavior analysis? The Behavior Analyst. 2013;36:7–33. doi: 10.1007/BF03392290.
  • Strand RCW, Eldevik S. Improvements in problem behavior in a child with autism spectrum diagnosis through synthesized analysis and treatment: A replication in an EIBI home program. Behavioral Interventions. 2018. doi: 10.1002/bin.1505.
  • Strohmeier CW, Murphy A, O’Connor JT. Parent-informed test-control functional analysis and treatment of problem behavior related to combined establishing operations. Developmental Neurorehabilitation. 2016;4:247–252. doi: 10.3109/17518423.2015.1133723.
  • Tarbox J, Wallace MD, Tarbox RSF, Landaburu HJ, Williams L. Functional analysis and treatment of low-rate problem behavior in individuals with developmental disabilities. Behavioral Interventions. 2004;19:73–90. doi: 10.1002/bin.156.
  • Taylor SA, Phillips KJ, Gertzog MG. Use of synthesized analysis and informed treatment to promote school reintegration. Behavioral Interventions. 2018;33:364–379. doi: 10.1002/bin.1640.
  • Thomas DR, Becker WC, Armstrong M. Production and elimination of disruptive classroom behavior by systematically varying teacher's behavior. Journal of Applied Behavior Analysis. 1968;1:35–45.
  • Thomason-Sassi JL, Iwata BA, Neidert PL, Roscoe EM. Response latency as an index of response strength during functional analyses of problem behavior. Journal of Applied Behavior Analysis. 2011;44:51–67. doi: 10.1901/jaba.2011.44-51.
  • Thompson RH, Iwata BA. A review of reinforcement control procedures. Journal of Applied Behavior Analysis. 2005;38:257–278. doi: 10.1901/jaba.2005.176-03.
  • Vollmer TR, Marcus BA, Ringdahl JE, Roane HS. Progressing from brief assessments to extended experimental analyses in the evaluation of aberrant behavior. Journal of Applied Behavior Analysis. 1995;28:561–576. doi: 10.1901/jaba.1995.28-561.