The Heart of ABA: The Evolution of Social-Validity Reporting in JABA Part 2 of 3
Missed part one? Find it here.
In 1991, Schwartz and Baer published a paper discussing the state of social-validity measurement in the field of ABA. They noted an increase in the use of social-validity measures in studies published in JABA but warned against inaccurate measures. There seemed to be an imbalance favoring positive over negative opinions of procedures: researchers and journal editors alike tended to report and publish those experiments with positive social-validity results. If the participants report a positive experience, it must mean they had a positive experience. Right?
Schwartz and Baer (1991) argue that social validity cannot be truly valid without an assessment of social invalidity. Social invalidity is not simply the inverse of social validity; it is instead a measure of individuals who not only disapprove of the components of an intervention but will also do something about it. This "something" is not always a blatant, wholehearted rejection of an intervention. It may be subtle and covert, but it is just as detrimental to the intervention's effectiveness. Predicting social validity and invalidity will require assessments that allow consumers to report honestly, not only what they like but also (and just as important) what they dislike about a procedure (Schwartz & Baer, 1991). False praise should not be sought, implicitly or explicitly; instead, discontented consumers should be encouraged to report their opinions to program personnel as soon as any issues arise.
Much of the criticism surrounding social validity is that, as a subjective evaluation, it will replace the objective evaluation of behavior interventions. This criticism may be a direct result of the positive-reporting tendencies of the literature. After all, an instructor has (sometimes implicitly) some desire for their intervention to be approved by the people experiencing it. In the same vein, a participant receiving an intervention may not want to upset the instructor by saying they dislike it (especially if they have just started). Various attempts at measuring social validity seem to have created confusion about its purpose. The main goal should be to obtain accurate information about the preferences of the direct consumers of an intervention and whether they truly like it (or not). For this reason, social-validity measures in research should include even the most overwhelmingly negative opinions. This will, in turn, allow a more systematic and rigorous method of identifying indicators of social invalidity.
The increase in reports of social-validity measures has propagated a need to track these reports and identify potential trends.
In 1992, Craig Kennedy, from the University of Hawaii, presented a review of trends in social-validity reporting in JABA and other journals. He examined the prevalence of these measures, as well as the type of measure reported, whether subjective evaluations or normative comparisons. According to his findings, reports of social validity rose from none at the journal's inception to around 20% of published studies. The most common type of reporting was the subjective evaluation of client opinions. He noted that one reason reporting sat at only 20%, a seemingly low number, is that many of the studies published at the time were basic research rather than applied research. Applied research is focused on making meaningful changes for individuals in their everyday lives. Social-validity measures are indicators of the opinions of those who may benefit from behavior interventions, be it the clients themselves or those around them. Kennedy (1992) argues that it is important to distinguish between studies that need social validity and studies that do not; this distinction may also shed light on which measures are most appropriate for assessing social validity.
Part three will be posted next week! Make sure to check back so you don’t miss it!