Social Validity in the Late 20th and Early 21st Century

The Heart of ABA: The Evolution of Social-Validity Reporting in JABA Part 3 of 3

An extension of Kennedy’s work was published seven years later, in 1999, by James Carr and colleagues. Their paper assessed the frequency of social-validity measures reported in the first 31 years of JABA. They analyzed differences in social-validity reporting trends between experiments conducted in highly controlled analog settings and those conducted in more naturalistic, dynamic settings. The reason for doing so harkens back to Kennedy’s comments on the difference between basic and applied research.

It seems that the initial increase in social-validity reporting was unfortunately short-lived. 

Carr et al. (1999) discovered a stagnation in the reporting of social-validity measures in JABA. While reporting of these measures was virtually non-existent up until the late ’70s, noticeable spikes occurred in the years following both Wolf’s and Kazdin’s articles on the importance of social validity in applied research. Despite these spikes, the prevalence of social-validity reporting did not persist: fewer than 13% of the articles reviewed reported an assessment of social validity. This lack of reporting may be detrimental to applied interventions for several reasons. Without the feedback that social-validity measures (when done properly) provide, the practitioner may be less able to predict treatment rejection either before or after treatment. Also, if social-validity measures are not reported frequently, it will be difficult (if not impossible) to develop and improve these measures (Carr et al., 1999).

Four possible reasons were noted by Carr and colleagues that may contribute to the scarcity of social-validity reporting in JABA:

  1. It is not an editorial requirement.
    • Authors are not required by journal editors to include social-validity measures in their studies.
  2. Many social-validity measures are not valid.
    • They do not measure what they are meant to measure (i.e., the true opinions of consumers).
  3. The editorial process is itself an informal assessment.
    • Studies that are not deemed to be of social importance are not accepted for publication. 
  4. There is a prevalence of experimental over applied research.
    • Some deem social validity unnecessary if the focus of the study is anything other than direct treatment implementation.

Reason number 4 seems to persist as an issue for the field of Behavior Analysis. When is it appropriate to include social-validity assessments, and when is it not? Clearly, studies involving direct implementation of interventions for an individual or group should include a thorough assessment of the opinions of consumers, even if that assessment is subjective. Would it not also be valuable to include these measures for studies that expand or develop basic and advanced experimental knowledge? An important dimension of behavior analysis is that it be effective; this means the results of a study should be socially important to the individual (Baer et al., 1968). How can we determine whether an intervention is socially important for an individual if we do not ask?

We will now move on to take a look at social-validity reporting in the 21st century.

In 2018, Julia Ferguson and colleagues published an extension of the work done by both Kennedy and Carr et al., tracking social-validity reporting in the flagship Journal of Applied Behavior Analysis from 1999 to 2016. Their review found that, even in the new century, not much had changed; only 12% of the articles included in the review reported taking social-validity data. Additionally, only 4% of the articles that did not report social validity (88% of the articles in the review) suggested that future researchers assess it. Ferguson and colleagues comment on the four reasons proposed by Carr et al. for this deficit and propose an additional reason.

  1. First, they address the fact that social-validity reporting is not an editorial requirement. They recommend that editorial boards include “formal recommendations and guidelines for when to include measures of social validity” (Ferguson et al., 2019). These guidelines can help indicate when it is appropriate for researchers and practitioners to include social-validity data in their studies. 
  2. Second, it is likely that social-validity assessments do not actually measure what they intend to measure. For this, they recommend more objective methods for assessing social validity. They provide an example of a study where participants were simultaneously exposed to two different intervention procedures to determine their preference for one over the other.
  3. Third, the peer-review process is itself an informal assessment of social validity as only those studies deemed, by reviewers, to have social importance are published in the Journal. However, these editors and reviewers are not direct consumers of these interventions. Ferguson et al. recommend that input from the consumers of behavior interventions be sought out in order to increase the likelihood of acceptability of treatment procedures.
  4. Fourth, most of the articles in JABA focus on basic procedures and developing interventions. Another critical dimension of ABA is that it be applied to issues of social importance. Are these interventions not meant to eventually be used in applied settings? Would it not be valuable to include an assessment of social validity early in the development process of new behavior interventions? Ferguson and colleagues (along with Carr and his colleagues) seem to think that social validity should be assessed even in basic research. 
  5. Finally, Ferguson et al. suggest an additional reason for the low reporting, which has to do with the emphasis on tight methodological control in the science of behavior analysis; objective data are a critical factor for demonstrating experimental control. However, just as Wolf suggested in 1978, subjective evaluations should supplement objective data, not replace them. Ferguson et al. recommend surveying current behavior analysts about their perspectives on collecting subjective data. This might elucidate potential indicators for when practitioners believe it is appropriate to include a social-validity assessment.

Let us recall Baer’s statement to Wolf on the purpose of the Journal of Applied Behavior Analysis: “It is for the publication of applications of the analysis of behavior to problems of social importance.” In 48 years of publications in JABA, from 1968 to 2016, the field of Applied Behavior Analysis has yet to find its heart. It does, however, appear that signs of a pulse emanate from the aforementioned reviews and reports on social validity. These founders and pioneers of the science of behavior believe that social validity will be more beneficial than detrimental to the rigorous methodological process that is behavior analysis. Taking account of how those who experience our interventions feel about them may further validate the work being done in analyzing and changing meaningful human behavior.

References

Carr, J. E., Austin, J. L., Britton, L. N., Kellum, K. K., & Bailey, J. S. (1999). An assessment of social-validity trends in applied behavior analysis. Behavioral Interventions, 14(4), 223–231.

Ferguson, J. L., Cihon, J. H., Leaf, J. B., Van Meter, S. M., McEachin, J., & Leaf, R. (2019). Assessment of social-validity trends in the Journal of Applied Behavior Analysis. European Journal of Behavior Analysis, 20(1), 146–157. doi:10.1080/15021149.2018.1534771

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11(2), 203–214. doi:10.1901/jaba.1978.11-203
