Key Findings
- Evidence-based practice is encouraged in most service delivery settings, yet a substantial body of research indicates that service providers often resist such practices or show limited adherence.
- Provider resistance to the uptake of evidence-based treatments and programs is well-documented in several fields, including nursing, dentistry, counseling, and other mental health services.
- Providers may resist implementation because of their varied responsibilities in school settings or because program requirements do not match the resources and time actually available to implement the intervention.
- Although research in this area is in its infancy, training and monitoring efforts throughout several stages can help to reduce resistance and allow researchers to move toward client-centered approaches grounded in the evidence base.
Evidence-based practice approaches are often encouraged and touted as an integral part of best practices for service delivery (Eddy, 2005; Kazdin, 2008). However, even when specific treatments and programs are considered evidence-based and recommended by the scientific community, service providers in many intervention settings resist implementing them (Lilienfeld et al., 2013). Such resistance is well-documented in several fields, including nursing, dentistry, counseling, and other mental health services (Lilienfeld et al., 2013), especially when the treatment or program is a marked change from standard practice. In fact, service provider resistance is almost ubiquitous when new evidence-based programs are implemented. This resistance presents challenges for study design and data collection for researchers and evaluators examining the implementation of evidence-based practices. This brief discusses reasons for service provider resistance so that evaluators can better plan for it, and recommends ways to address resistance during different stages of an evaluation, from planning through initial training to analysis and dissemination. Recommendations for future research are discussed in the context of attending to resistance and moving toward client-centered approaches grounded in the evidence base. Although the target audience of this research brief is primarily evaluators and our focus is on school-based mental health service delivery, most of our recommendations are applicable to other service delivery settings.
Service provider resistance to new evidence-based programming is the rule rather than the exception (see Lilienfeld et al., 2013, for a comprehensive review of the empirical and conceptual literature on provider resistance in clinical settings). New programming can be especially challenging for service providers in system-level implementation structures, such as mental health provision in schools and other clinical and medical settings (Lever et al., 2013). Providers can resist implementation because of their varied responsibilities in school settings (e.g., testing and academic counseling in addition to mental health service provision) or because of a mismatch between program requirements and the resources and time actually available to implement the intervention. Estimates of provider resistance vary and often are reported in terms of low fidelity to the evidence-based intervention protocol, a finding that is particularly striking in school settings, where resistance often is not recognized until implementation or fidelity challenges arise (Rojas-Andrade & Bahamondes, 2018). Resistance to evidence-based approaches is not simply stubbornness or inflexibility on the part of providers. Reasons for resistance are varied but shed light on how evaluators can address provider resistance, particularly in the planning and training stages.
Several factors are associated with service provider resistance in school settings (see Box 1). One school-specific factor is the need for low-burden interventions that are easy to integrate into school schedules. Although some evidence-based school programs are developed with these constraints in mind, integration into already-busy school schedules is a continual challenge. Thus, for schools, resistance might always be an issue in implementation, given the competing demands of delivering academic interventions beyond the curriculum. Mental health service providers working in schools also have multiple roles (e.g., school psychologists conducting psychoeducational assessments in addition to providing mental health services), increasing their burden and their time constraints (Fazel et al., 2014; K. E. Hoagwood et al., 2007). Limited available incentives can amplify that burden. Appreciating these multiple responsibilities early in the planning process can be useful for understanding when and how to recommend implementation and how to address the logistics of implementation. Bringing providers into the decision-making process whenever possible can also promote buy-in.
Box 1. Why Providers Resist Evidence-Based Practices
- Challenges adapting to new ways of providing services
- Increased burden and limited incentives for change
- New practice is incompatible with provider priorities
- Providers' relative disconnect from research
Implementation of school-based mental health programs often does not include sufficient integration and engagement of service providers (teachers, counselors, and other support staff [e.g., social workers, psychologists, nurses]). Service provider resistance can be exacerbated when providers feel that their voices are not included in the decision-making process. Frustrations often arise because requirements to implement programs conflict with other professional demands, because service providers disagree about components of the new intervention and whether students will be receptive to it, or because they dispute its appropriateness for the problems faced by the students they serve (Fazel et al., 2014). Evaluators are often called on to design and develop data collection and program evaluation tools. Capturing data about resistance and bringing information from providers into the assessment process is critical to determine whether resistance is affecting implementation.
Merging Clinical Expertise with Evidence-Based Practice
Clinical expertise is a critical but often under-implemented component of best practices in the delivery of evidence-based mental health practice in schools (Hasnain-Wynia, 2006; Straus et al., 2018). In fact, the essence of evidence-based practice includes “the integration of research with clinical expertise in the context of the client’s characteristics, culture and preferences” (Thomason, 2010). Provider training should acknowledge and incorporate clinical expertise to more effectively implement intervention protocols. Available research suggests that service provider resistance is related to the restrictions of manualized interventions; providers may feel that a manualized intervention protocol lacks sufficient or necessary details for addressing student issues they encounter, such as student reluctance to engage with service providers or to follow through on protocol assignments (Kratochwill, 2007; Masia-Warner et al., 2006; Rathvon, 2008). In these instances and within the parameters of the protocol, training should include information on how service providers can address these issues. Training should also incorporate flexibility in the use of clinical expertise in the context of intervention fidelity. Lilienfeld et al. (2013) recommend building partnerships early between researchers/evaluators and community practitioners to ensure that neither researchers nor practitioners ignore the importance of the other when crafting interventions and implementation plans.
Researchers and federal funding agencies are beginning to incorporate recipient and service provider perspectives and to include clinical expertise in evaluations. Agencies such as the National Institute of Justice and the Patient-Centered Outcomes Research Institute increasingly require patient/client and provider perspectives on program implementation and outcomes as part of research grant submissions (Sheridan et al., 2017; Slutsky et al., 2014). Clinical expertise encompasses providers’ experience working with their populations and brings in recipient perspectives (provider, client, or both). To the extent possible, the protocol should be flexible and modular to allow for contextual variations and youth (client) receptivity (Chorpita et al., 2006; K. Hoagwood & Johnson, 2003). Contextual variations might be related to school- or clinic-specific regulations and practices that may set restrictions on implementation. Evidence-based practitioners need to understand how to merge the science that underlies school-based mental health with the art of adapting approaches to recipients and contexts (Barrera et al., 2013).
When programs are being incorporated into a school system as part of an evaluation, working with school mental health service providers and allied personnel is critical, particularly when intervention participants are randomly assigned to treatment conditions. Mental health service providers in schools assigned to experimental conditions—particularly when providers are required to deviate significantly from standard care—are likely to show the most resistance and hence pose the greatest risk to the evaluation (Lilienfeld et al., 2013). Here is where service provider buy-in is essential. There are several ways of increasing buy-in, and these considerations should be reflected in proposals, training, and work plans. Building buy-in for implementation is key but is rarely fully incorporated into evidence-based implementation (Kazdin, 2008).
Recommendations
Training
Obtaining buy-in from providers involves considering how ready and receptive providers are initially, not only as individuals but also as members of the organization. Brief instruments can assess this information; for example, the Evidence-Based Practice Attitude Scale (EBPAS) is widely used and captures provider attitudes toward the adoption of evidence-based practices. Resistance and related constructs can be measured and monitored using this instrument; readers are encouraged to see Aarons (2005) for additional details on this and similar instruments. Evaluators are encouraged to assess readiness and receptivity, incorporate the results into training, and monitor them throughout evaluations. Training materials should also consider the knowledge and expertise providers already have and how that expertise can be incorporated into the program being implemented (see Box 2). This topic should be discussed in the context of evidence-based practices and the parameters providers should try to follow to ensure fidelity (Bambara et al., 2012). Unfortunately, research on how to plan for provider resistance during evaluation is in its infancy, and recommendations on how to mitigate resistance in the training phase are limited. During implementation and evaluation, careful tracking of dosage and adherence with appropriate implementation fidelity tools will improve the evaluation of how resistance affected implementation.
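As a concrete illustration of monitoring readiness, the following sketch summarizes responses from a hypothetical EBPAS-style readiness survey by provider and subscale and flags providers who may need follow-up during training. The file name, column names, subscale labels, and cutoff are placeholders, not the published scoring key.

```python
# Minimal sketch: summarizing an EBPAS-style readiness survey per provider.
# Hypothetical long-format file: one row per provider per item, with columns
# provider_id, subscale (e.g., "openness"), item, response (0-4 scale).
import pandas as pd

responses = pd.read_csv("readiness_survey.csv")  # placeholder file name

# Mean response per provider and subscale
subscale_scores = (
    responses.groupby(["provider_id", "subscale"])["response"]
    .mean()
    .unstack("subscale")
)

# Flag providers whose openness to new practices falls below an illustrative
# cutoff so trainers can follow up before implementation begins.
LOW_OPENNESS_CUTOFF = 2.0  # placeholder threshold, not a published norm
needs_follow_up = subscale_scores[subscale_scores["openness"] < LOW_OPENNESS_CUTOFF]
print(needs_follow_up)
```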
Box 2. Recommendations During the Proposal Planning and Training Phases
- Gather initial insights on specific reasons for provider resistance to targeted strategies and use these insights to inform training
- Forge closer alliances between research-oriented and practice-oriented providers
- Remind practitioners that evidence-based practice
  - is not a recipe or cookbook,
  - does not eliminate clinical judgment or reasoning,
  - does not ignore provider or student preferences,
  - is not rigid or unchangeable,
  - respects clinical expertise, and
  - is not focused only on randomized controlled trials.
- Closely measure implementation fidelity using the following:
  - Measures that ask directly about resistance (such as EBPAS)
  - Measures of dosage per provider
  - Measures of adherence to treatment protocols per provider
- Attend to known barriers:
  - Training: not just a “how to” but, more importantly, a “how to” that is specific to where providers work and whom they work with
  - Time (for ongoing training and implementation)
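To make the dosage and adherence measures listed in Box 2 concrete, here is a minimal sketch that computes per-provider dosage (proportion of planned sessions delivered) and adherence (proportion of required protocol components delivered) from a hypothetical session log; the file and column names are illustrative, not part of any published fidelity tool.

```python
# Minimal sketch: per-provider dosage and adherence from a hypothetical session log.
# Expected columns: provider_id, sessions_planned, session_delivered (0/1),
# components_delivered, components_required. All names are illustrative.
import pandas as pd

log = pd.read_csv("session_log.csv")  # placeholder file name

fidelity = log.groupby("provider_id").agg(
    sessions_delivered=("session_delivered", "sum"),
    sessions_planned=("sessions_planned", "first"),
    components_delivered=("components_delivered", "sum"),
    components_required=("components_required", "sum"),
)

# Dosage: proportion of planned sessions each provider actually delivered.
fidelity["dosage"] = fidelity["sessions_delivered"] / fidelity["sessions_planned"]
# Adherence: proportion of required protocol components delivered across sessions.
fidelity["adherence"] = fidelity["components_delivered"] / fidelity["components_required"]

print(fidelity[["dosage", "adherence"]].round(2))
```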
Recommendations Around Design and Data Analytic Issues: When Resistance Affects Implementation
As noted previously, providers’ resistance to the uptake of evidence-based practices and students’ resistance to engaging in treatment are related. Resistance can also affect the analysis of data from randomized, evidence-based treatment evaluations. Resistance at the provider level falls into at least two types: poor implementation fidelity (e.g., “true” poor intervention delivery or adaptations of the intervention that were not sanctioned by the protocol; Rojas-Andrade & Bahamondes, 2018) and explicit switching into and out of randomized treatment arms. The choice among the several options available to evaluators in this situation depends on whether the evaluator is interested in making inferences about variation in treatment effectiveness across optimal and suboptimal implementation (e.g., treatment-by-implementation interaction models) or in the average treatment effect across levels of fidelity or compliance with assigned treatment conditions (e.g., complier-average causal effect [CACE] models; Lochman et al., 2006). In such cases, measures of treatment dosage are particularly important and can be statistically controlled for.
CACE analysis (e.g., Connell, 2009; Toth et al., 2013) can also model resistance to treatment at the student level. Alternatively, it can model explicit adaptations made to match student needs during treatment, based on the incorporation of clinical expertise in what is needed beyond the study protocol. To account for these resistance-related differences, strong measures of treatment adherence for each provider must be collected from the beginning of implementation.
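As a simple illustration of the CACE logic, the sketch below computes a Wald-style estimate: the intent-to-treat effect on the outcome divided by the effect of assignment on treatment receipt. This holds only under the standard instrumental-variable assumptions (random assignment, exclusion restriction, monotonicity); the data file and column names are hypothetical, and a full analysis would also model clustering of students within providers and report uncertainty.

```python
# Minimal sketch: Wald-style complier-average causal effect (CACE) estimate.
# Hypothetical student-level file with columns: assigned (0/1 randomized arm),
# received (0/1 actually received the intervention), outcome (continuous).
import pandas as pd

df = pd.read_csv("trial_data.csv")  # placeholder file name

treated = df[df["assigned"] == 1]
control = df[df["assigned"] == 0]

# Intent-to-treat effect of assignment on the outcome
itt_outcome = treated["outcome"].mean() - control["outcome"].mean()

# Effect of assignment on actually receiving the intervention
# (reflects provider/student non-compliance, one face of resistance)
itt_receipt = treated["received"].mean() - control["received"].mean()

cace = itt_outcome / itt_receipt
print(f"ITT: {itt_outcome:.3f}  compliance gap: {itt_receipt:.3f}  CACE: {cace:.3f}")
```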
Efforts to Reconcile Evidence-Based Practice with Practitioner- and Patient-Centered Approaches: How to Bridge the Chasm
Language about evidence-based practice can blur priorities. For example, the terms “evidence-based treatments” and “evidence-based practices” are often used interchangeably. Evidence-based treatments are specific interventions or techniques (e.g., Coping Power for externalizing problems) that often are manualized and prescriptive. Evidence-based practice is a much broader term that includes specific approaches informed not only by evidence-based treatments but also by evidence from other interventions, clinical expertise, and the specific needs of the population the provider will be working with. The latter elements are often not emphasized enough (APA Presidential Task Force on Evidence-Based Practice, 2006; Kazdin, 2008; Straus et al., 2018). Thus, evaluators should be clear on this point when thinking about the interventions they will be evaluating. For example, if a treatment or program is already evidence-based, evaluators should consider implementation adaptations within the context they are working in. Relatedly, a good understanding of existing barriers is an important starting point for considering adaptations to evidence-based treatments. The adaptations to be evaluated should then, by definition, be perceived by practitioners as less rigid, especially if practitioners play an active role in the decision-making process, while keeping to the tenets of evidence-based practice (Fazel et al., 2014; Kratochwill, 2007; Straus et al., 2018).
Future Research: Steps Toward Rapprochement
Future research should prioritize capturing additional data about potential and actual provider resistance throughout the research and/or evaluation process (before and during the study, respectively). Understanding and accounting for provider resistance should be an explicit evaluation aim. Challenges related to provider resistance and its impact on implementation should be discussed in dissemination products, a step that is only rarely taken. A growing body of research documents the positive impact of treatment-as-usual and standard-care approaches (Kazdin, 2013, 2015). As discussed above, provider resistance is often related to provider beliefs that the current approach may already be superior to a new approach that has an evidence base. Additional research is needed to examine and dismantle the components of current practices and standard care that contribute to positive outcomes (e.g., nonspecific treatment factors). Evidence-based implementation manuals, guidelines, and training protocols should more carefully consider how those components fit within the realm of expertise under the umbrella of evidence-based practices (Kazdin, 2015).
Relatedly, the next generation of providers is graduating with substantially more training in evidence-based practices than previous generations had. Future research should also assess and consider resistance that arises when providers prefer to implement components or variants of a different evidence-based practice. Adaptations can be made systematically, but this requires planning and input from service providers for the greatest likelihood of success. Available methods range from adaptations of the protocols themselves (the dynamic adaptation process) to more sophisticated designs and methods for systematic randomization of intervention adaptations (Aarons et al., 2012; Murphy, 2005; Nahum-Shani et al., 2012). The CACE methods mentioned above are useful for evaluators who need to adapt designs when implementation does not go as planned. However, if evidence-based interventions are expected to need individual-level, session-by-session adaptations, adaptive intervention approaches are available that allow for planned flexibility; see Murphy (2005) for more information about these approaches.
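The sketch below illustrates the planned-flexibility idea behind adaptive intervention designs with a simple two-stage decision rule: early responders continue the first-stage protocol, while non-responders are re-randomized between two augmentation options. The cutoff, option names, and response measure are hypothetical and are not drawn from any of the cited protocols.

```python
# Minimal sketch: a two-stage adaptive intervention decision rule.
# Cutoff, option names, and response measure are hypothetical placeholders.
import random

RESPONSE_CUTOFF = 0.5  # placeholder improvement score defining "early responder"

def second_stage_plan(early_improvement: float, rng: random.Random) -> str:
    """Return the second-stage option for one student."""
    if early_improvement >= RESPONSE_CUTOFF:
        # Early responders continue with the standard first-stage protocol.
        return "continue_standard_protocol"
    # Non-responders are re-randomized between two augmentation options,
    # which is what allows the design to compare adaptation strategies.
    return rng.choice(["add_booster_sessions", "add_parent_component"])

rng = random.Random(2024)
for student, improvement in [("s01", 0.7), ("s02", 0.2), ("s03", 0.4)]:
    print(student, second_stage_plan(improvement, rng))
```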
Research in this area is in its infancy, and thus our recommendations are limited. Our goal is to bring these issues to the fore so that researchers and evaluators attend to service provider resistance in their work. We opted to focus on evaluators given the unique opportunities they often have to inform evaluation and research questions, data collection protocols, and other aspects of how implementation is evaluated. Therefore, when appropriate, evaluators can focus attention on the influence and impact that service provider resistance may have on implementation. As much as possible, we recommend that evaluators work with service providers early on to gain a good understanding of any barriers and reasons for resistance so that providers’ voices and perspectives are heard. Examples are available in the literature (Nahum-Shani et al., 2012). Evidence-based practice has been described as a three-legged stool that incorporates (1) treatments and programs with the strongest evidence base, (2) clinical expertise, and (3) consideration of the patients or clients being served. The second and third components receive less attention and are often where service provider voices are heard (or not heard). In this brief, we have offered recommendations for how evaluators can help restore some balance by attending to provider resistance. Given the almost ubiquitous presence of provider resistance, continued attention and research in this area will be critical.
Acknowledgments
This project was supported by Award No. 2015-CK-BX-0010, awarded by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice. The opinions, findings, and conclusions or recommendations expressed in this publication/program/exhibition are those of the authors and do not necessarily reflect those of the Department of Justice. This work has been possible through collaboration with and inspiration from numerous service providers, patients, students, and staff. We thank them.