What Are Empirically Supported Treatments and How Do They Relate to EBP?
In 1993, the American Psychological Association Task Force on the Promotion and Dissemination of Psychological Procedures delineated three categories for classifying the empirical support for psychological treatments. (The "promotion" role may well be worth noting!)
The Task Force proposed the following three (or four) categories reflecting the empirical support found, or not found, for a given psychological treatment.
1) Well-established treatments, the report said, require two studies using between-group designs, done by different researchers, that demonstrate the superiority of the treatment to placebo or a different treatment OR equivalence to another established empirically supported treatment. That is, two experimental studies demonstrating superiority to no treatment or to alternative treatments, OR equivalence to an empirically supported treatment. The treatments employed must be manualized to permit replication of the treatments in other settings.
2) Probably efficacious treatments require at least two studies with superior outcomes compared to untreated control groups, two studies completed by the same researchers meeting the criteria for a well-established treatment, or a series of single-case or single-system withdrawal design studies (which are the single-system design equivalent of an experimental design - the multiple replications indicate some likelihood that the treatment caused the changes noted, in varied settings and with treatment delivered by different providers).
3) Experimental treatments are newly developed treatments awaiting study that do not yet meet criteria for inclusion in the well-established or probably efficacious categories.
4?) All other treatments lack empirical validation, though this may simply be a matter of not enough research having been done or the lack of a treatment manual.
APA's ESTs differ from the criteria used in EBP in that they require manualized treatments and at least two experiments (or several single-subject withdrawal designs). EBP also privileges experimental research results in its hierarchy of research review, but does not require any specific number of studies of any specific research design for a systematic review. Of course, where good-quality research is limited or lacking, the summary results of an EBP review will likely report that the evidence is of unknown quality. Most EBP reviews do not include or highlight single-system or single-case research designs (based on replication logic), but instead look for large-scale studies using sampling logic. EBP reviews also do not require manualized treatments.
In short, APA's ESTs are based on empirical support from two or more experimental studies and discount all research using other research designs, as well as any non-manualized treatments. (One report in the New York Times notes that this report and similar efforts generated a rather deep divide in psychology. Critics noted that the requirements for ESTs were very much linked to behavioral or cognitive-behavioral theories. A number of psychologists took the position that the EST movement was heavily ideological. Some also note that such research does not answer the question of how the treatment works - which they view as a requirement of a scientific understanding of treatment. Instead, superior results are honored without understanding - which may have practical utility but is not scientific.)
EBP allows for wider inclusion of varied types of research than does the EST approach. More studies are generally sought in an EBP systematic review before efficacy is claimed. Systematic reviews used in EBP (by the Cochrane Collaboration and the Campbell Collaboration, and potentially some qualitative research syntheses) use wider and perhaps better-documented review criteria.
Note that it is possible for researchers or administrators to claim a treatment is empirically supported using these standards even if a larger, more inclusive systematic review questions the efficacy of the treatment.
Note, too, that it may be useful for program evaluation for researchers, administrators and practitioners to evaluate their work using experimental designs to begin to establish the efficacy of their work - even if it is only in a specific city or state program serving what is arguably a very specific and likely non-replicable population (such as people referred by courts for services based on local legislation). Doing such work - and using "empirically supported" logic and language - may be helpful for promoting and marketing a program or treatment. Yet it may also be confusing to practitioners and consumers (despite good intentions).
text copyright by J. Drisko page begun 6/6/08