Evidence-based practice (EBP) might best be viewed as an ideology or a "public idea" in Robert Reich's (1988) terminology. The movement began with the work of Scottish physician Archie Cochrane, who sought to identify "treatments that work" using the results of experimental research. Cochrane was equally interested in identifying and ending treatments that do harm or are not effective. In practice, the idea was to supplement professional decision making with the latest research knowledge. [Some critics argue that "replace" professional decision making might be the more accurate phrase.] The goal was to enhance the scientific basis of professional practice in several disciplines - medicine, nursing, psychology, social work, etc. In turn, educational efforts in these disciplines could be oriented to provide beginning professionals with effective tools and a model for the continuing improvement and renewal of their professional practices.
EBP as described here is not the same as an empirically supported treatment (EST). ESTs are variously defined, but are basically treatments or services that have been empirically studied and found to be helpful or effective in one or more settings. The difference is that most ESTs have not been studied in multiple contexts; they may not have been replicated (tested more than once) to ensure that the result will be the same or similar in another setting. The efforts of the Cochrane Collaboration include setting rigorous standards both for examining the methods by which outcome research is done and for reporting research results. These standards are meant to ensure transparency in findings (so that others can know exactly how conclusions were drawn) and to avoid exaggerating the results of a single test of any treatment.
Why EBP? Some Say Any Other Approach Is Unethical
Some advocates argue that treating anyone with methods of unknown efficacy is unethical. That is, if we know that a given medicine, substance abuse program, or treatment for attachment problems works better than another, we have an ethical obligation to use it in order to best serve clients or patients. This argument is hard to challenge - at least in an ideal world. Given strong and unambiguous research evidence that is clearly applicable to a given practice situation, and consistent with the client's world view and values, using the EBP "best treatment" is the best way to go.
Policy and Funding Issues
In social work and psychology, advocates have also argued that only interventions with demonstrated efficacy should be supported financially. Such an argument links demonstrations of efficacy with the funding structure of the current managed care environment. It may be seen as either a way to best use limited dollars or yet another method to curtail funding for costly services. Without provision of adequate funds to do thorough research on the great variety of treatments in use, the requirement of proven efficacy may be used as a tool to limit treatment services.
Assessing EBP -- Some Key Issues
In psychology, the initial unveiling of "empirically validated treatments" by an American Psychological Association Task Force brought forth both interest and criticism. It also brought out differences regarding interpretations of the existing research literature and the merits of certain research methods. One key concern was over-reliance on randomized controlled trials (RCTs). An RCT is an experiment in which participants are randomly assigned to either a treatment or a control group. Ideally, neither the participants nor the treating clinicians know which group is which. After a course of treatment (or control), improvement is determined by comparing pre-treatment status with post-treatment status. If the treated group improves significantly more than the controls, we can say the treatment caused the change and that the treatment works (better than no treatment). In another form of RCT, the best known treatment is compared to a new treatment using random assignment. If the new treatment produces better results than the standard treatment, it is viewed as empirically supported and "more efficacious."
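To make the RCT logic described above concrete, here is a minimal simulation sketch. All numbers, variable names, and effect sizes below are hypothetical illustrations, not drawn from any actual study: simulated participants are randomly assigned to two groups, and improvement is measured as pre-treatment score minus post-treatment score.

```python
import random
import statistics

random.seed(42)

# Hypothetical symptom severity scores: higher = more severe.
# Simulate 100 participants with baseline severity around 50.
participants = [random.gauss(50, 10) for _ in range(100)]

# Random assignment -- the defining feature of an RCT.
random.shuffle(participants)
treatment_pre = participants[:50]
control_pre = participants[50:]

# Assumed effects: treatment reduces severity by ~8 points on average;
# controls drift down ~2 points (spontaneous improvement / placebo).
treatment_post = [s - random.gauss(8, 4) for s in treatment_pre]
control_post = [s - random.gauss(2, 4) for s in control_pre]

# Improvement = pre minus post (positive = got better).
treat_change = [pre - post for pre, post in zip(treatment_pre, treatment_post)]
ctrl_change = [pre - post for pre, post in zip(control_pre, control_post)]

print(f"Mean improvement, treatment: {statistics.mean(treat_change):.1f}")
print(f"Mean improvement, control:   {statistics.mean(ctrl_change):.1f}")
```

Because assignment is random, pre-existing differences between the groups tend to cancel out, so a reliably larger mean improvement in the treated group can be attributed to the treatment itself.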
Some practitioners argued that RCTs do not always reflect "real world" conditions well, so the results of such studies may not match what is found in actual clinics. The core concern is that RCTs often use carefully assessed participants who have only a single disorder and often have relatively strong social supports. Real world clinics are rarely able to undertake similarly detailed assessments and, even if they could, would often have to treat people with co-existing (comorbid) conditions, less persistence, fewer social supports, and lower motivation to be in treatment. Thus carefully run RCTs reflect laboratory conditions rather than real world conditions. The distinction is known as "efficacy" versus "effectiveness." Laboratory RCTs produce knowledge about the "efficacy" of a treatment - that it works under ideal conditions. Experimental studies done under the less carefully controlled conditions of real world clinics are known as "effectiveness" studies.
It should be noted that most researchers undertaking RCTs assume that the problems or disorders they are studying are completely and adequately defined. In mental health, the definition of problems most often follows the American Psychiatric Association Diagnostic and Statistical Manual of Mental Disorders (DSM) or the World Health Organization's ICD Manual. These definitions vary in clarity and in their own empirical validation.
Social workers adopt a world view that suggests problems are best understood by viewing "persons in situations." That is, both external environmental and social factors as well as internal health and psychological factors will be important in understanding the whole person. This perspective is partially incorporated in the DSM's Axes IV and V, but in a summary form.
Simply put, EBP generally applies operational definitions of problems in RCT reviews of treatment effects. This is consistent with the medical model of research and with general practice in psychology and social work research. The potential limitation is that such definitions locate the target problem within the individual and (mostly) ignore social circumstances, whether supportive or oppressive. This may represent a limited definition of the target problem or a flaw in conceptualization.
In much organic medical treatment, causes or etiologies can be more clearly identified than is possible (or at least currently possible) in the world of mental health and social problems. Thus applying an outcome model that assumes a single, clearly identified "cause" producing the presenting symptoms may, or may not, be optimal. Further, different "doses" of treatment may be identifiable for organic medical conditions but are less clear cut in the functional, mental health, and social world. Both conceptual and operational diagnoses in mental health pose challenges, and multiple, comorbid disorders are commonplace -- making real world practice quite different from tightly controlled and extensively tested experimental studies. (This ties back to the efficacy versus effectiveness issue described above.)
Some argue that treatment effects are due more to "common factors" shared by therapies than to specific treatment techniques. The level of client motivation, the strength and quality of the therapeutic relationship, a shared vision of what treatment will include, a shared sense of hope or expectancy of improvement, and even placebo effects are elements of treatment common across differences in theory and technique -- especially in psychotherapy and social services. RCTs are often designed to test differences of technique, but ignore or limit the role of common factors.
Several meta-analytic studies of psychotherapy for adults demonstrate empirically that several quite different types of therapy for depression and anxiety are effective, producing roughly equivalent change. This equivalence suggests that common factors, rather than specific treatment techniques, generate much of the improvement (at least for these disorders).
On the other hand, Reid (1997) did a meta-analysis of social work interventions for several quite different problems (mental retardation, smoking cessation, substance abuse, etc.). He found many types of treatments were helpful, but behavioral and cognitive approaches appeared to work better than the other techniques. Note, however, that the study compares "apples and oranges," aggregating dissimilar problems.
The common factors versus specific techniques question is as yet unresolved. Some weigh it heavily; others believe it is not particularly important.
Since most quantitative experimental studies are based on group means, we know that "on average" treatments generate a certain effect. This is valuable information. Yet it does not help the clinician distinguish which specific client will respond like the average participant and which may differ. With medication, some people respond to a smaller than average dose; others need more than the average to be helped. We might assume the same is true in mental health: some people respond with less effort (or are better able to use opportunities and resources) while others will need much more help (or are less able to use their resources and opportunities) to improve. Thus the clinician is left to think critically and fit aggregate treatment results to the specific, unique reality of a given client.
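A toy numerical illustration of this point, using invented data rather than any real study results: two groups of clients can show the identical average improvement while individual responses differ drastically, so the group mean alone cannot tell a clinician how a particular client will fare.

```python
import statistics

# Hypothetical individual improvement scores for two groups of six clients.
uniform_responders = [5.0, 5.0, 5.0, 5.0, 5.0, 5.0]   # everyone improves moderately
mixed_responders = [15.0, 14.0, 1.0, 0.0, 0.0, 0.0]   # a few improve greatly, most not at all

print(statistics.mean(uniform_responders))   # 5.0
print(statistics.mean(mixed_responders))     # 5.0 -- identical group means
print(statistics.stdev(uniform_responders))  # 0.0
print(statistics.stdev(mixed_responders))    # ≈ 7.4 -- very different spread
```

An outcome study reporting only the mean would describe these two patterns of response identically, which is exactly why aggregate results must be fitted critically to the individual client.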
We must also assume that clinicians vary in their ability to deliver any given treatment. Referral may be appropriate when the best practice is a treatment in which the clinician is not fully trained.
We can also assume that there is variation in effectiveness even among well trained clinicians. Unlike responses to medication, outcomes in mental health treatment appear heavily influenced by relationship and expectancy factors.
The Client's Views About the Treatment
In a profession that supports autonomous decision making by the client or client system, clinical social workers must ask clients for their views of the treatment that EBP suggests is most likely to be effective. If the client has concerns about that treatment, these views must be honored. Eileen Gambrill has developed this idea in several published articles.
Critical thinking and efforts to find knowledge are needed, along with efforts to individualize treatment to the person and environment (including culture) of the client.
Practice Wisdom and What Professionals Can Do
It would seem wise to allow professionals to use their knowledge, skills, training, and experience to determine whether the best available research knowledge fits the circumstances at hand. This takes wisdom and judgment; it is not automatic. Of course, some supporters of EBP believe the purpose of EBP is in large part to limit such application of practice wisdom.
It may also be the case that the treatment research shows is most likely to help, based on others' experiences, is not something the professional is trained to provide or comfortable providing. Referrals may be made in such instances.
Racial, Ethnic and Social Diversity
Many scholars note that there is very little research on services and treatments for populations of color, immigrant populations (who may have culturally different ideas about mental health and its treatment), class differences in treatment effectiveness, differences in sexual orientation, and sometimes gender differences. Research on children, teens, and the elderly is also often minimal. EBP, like much of medicine, assumes that people are people and that treatments are universally effective. This may often be so for organic disorders, but it is less certain for socially complex concerns such as mental disorders. Research on the effectiveness of many treatments with diverse populations is lacking. This is a major shortcoming of EBP at this time.
Reich, R. (1988). The power of public ideas. Cambridge, MA: Ballinger Publishing.
Text copyright J. Drisko. Page begun 3/11/04; last update 08/4/12.