Evidence-Based Practice in Paramedicine
The need for evidence-based practice in paramedicine
An introduction to evidence-based practice for paramedics
The following paper will explain why researchers and health bureaucrats have enthusiastically embraced evidence-based practice (EBP), while critics have argued that the evidence on which the EBP philosophy rests has inherent weaknesses that make it capable of harm as well as good. Evidence-based practice will first be defined, followed by an explanation of its principles and processes. The proponents' case for EBP will then be set out by discussing the four main factors driving it, followed by the critics' arguments against it. Lastly, an objective view of both the strengths and weaknesses of EBP will be presented.
Evidence-based practice defined
According to Sackett, Richardson, Rosenberg and Haynes, EBP is 'the conscientious, explicit and judicious use of current best evidence in making decisions about the care of your client' (1997, p. 2). EBP integrates individual clinical expertise with the best available external clinical evidence from systematic research; the evidence, by itself, cannot make a decision for you, but it can help support the patient care process. The initial definition of EBP arose within the context of medicine, where it was well recognised that 'many treatments do not work as hoped' (Doust and Del Mar 2004, pp. 474-5). EBP is a total process beginning with knowing what clinical questions to ask, how to find the best practice, and how to critically appraise the evidence for validity and applicability to the particular care situation. The best evidence must then be applied by a clinician with the expertise to consider the patient's unique values and needs. The final aspect of the process is evaluation of the effectiveness of care and the continual improvement of the process (Sackett et al. 1997, p. 3).
Principles and processes of evidence-based practice explained
According to Sackett et al., the principles of EBP are based on a 'process of lifelong, self-directed learning in which caring for our own patients creates the need for clinically important information about diagnosis, prognosis, therapy and other clinical and health care issues' (1997, p. 3). By basing their clinical practice on the conscientious and judicious use of best evidence, practitioners may enhance their clinical capabilities and ultimately the health outcomes of their patients.
According to Cook, Jaeschke and Guyatt, the 'five steps of EBP were first described in 1992 and most steps have now been subjected to trials of teaching effectiveness' (1992, pp. 275-82). The process of EBP includes the following steps:
1. converting information needs into answerable questions;
2. tracking down, with maximum efficiency, the best evidence with which to answer them (whether from clinical examination, diagnostic laboratory, research evidence or other sources);
3. critically appraising that evidence for its validity and usefulness;
4. applying the results of this appraisal in our clinical practice; and
5. evaluating our performance
(Sackett et al. 1997, p. 3).
This five-step model forms the basis for both clinical practice and teaching EBP, and as Rosenberg and Donald have observed, 'an immediate attraction of evidence-based medicine is that it integrates medical education with clinical practice' (1995, pp. 1122-3). Good practice, including effective clinical decision-making (step 4 of the EBP process), requires both explicit research evidence and non-research knowledge (tacit knowledge or accumulated wisdom). Clinical decision-making is the 'end point of a process that includes clinical reasoning, problem solving, and awareness of patient and health care context' (Maudsley 2000, p. 63).
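To make step 2 of this process concrete, the short sketch below (in Python) counts the PubMed records matching a clinical search term via the National Library of Medicine's public E-utilities endpoint. It is a minimal illustration only: the example query about prehospital airway management, and the comparison of an unfiltered search against one restricted to randomised controlled trials, are illustrative choices rather than a prescribed EBP workflow.

import json
import urllib.parse
import urllib.request

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a search term."""
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
           + urllib.parse.urlencode({"db": "pubmed", "term": term,
                                     "retmode": "json"}))
    with urllib.request.urlopen(url) as response:
        result = json.load(response)
    return int(result["esearchresult"]["count"])

# An unfiltered search illustrates information overload; adding a
# publication-type filter narrows the results towards appraisable evidence.
print(pubmed_count("prehospital airway management"))
print(pubmed_count("prehospital airway management AND randomized controlled trial[pt]"))

The gap between the two counts suggests why efficient searching and filtering by study design are treated as core EBP skills: the raw literature is far larger than any clinician can appraise unaided.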
Since its introduction around 1990, EBP has broadened to reflect the benefits of entire health care teams and organisations adopting a shared evidence-based approach. This emphasises the fact that evidence-based practitioners may share more attitudes in common with evidence-based practitioners from other disciplines than with colleagues from their own profession who do not embrace the evidence-based paradigm (American Medical Association 1992, p. 4).
The need for EBP
Proponents of EBP argue the need for it on four main points: 'the research-practice gap, poor quality of most research, information overload and practice which is not evidence-based' (Trinder 2000, pp. 3-4).
During the last century there has been an exponential growth of research and knowledge (Humphreys and McCutcheon 1985, p. 18). The growth of health care information has been particularly rapid in diagnostic and therapeutic technologies, with the sheer volume of medical papers published doubling every 10 to 15 years (Hook 1999, p. 3) and electronic access to full-text articles and journals available since 1998 (Delamothe and Smith 1998, pp. 1109-10). Despite all this research, it is still argued that many medical practitioners exhibit a research-practice gap: they utilise and draw upon research findings only to a limited extent to determine or guide their actions. Instead, they rely on indicators such as 'prior knowledge, prejudice, outcomes of previous cases, fads or fashions, and advice from senior colleagues' (Trinder 2000, p. 4). With this expansion of information, our knowledge should be greater and our practice should be more effective; unfortunately, this is too often not the case (Walker, Grimshaw, Johnston, Pitts, Steen and Eccles 2003, p. 19). This recognised gap between best evidence and practice is one of the driving forces behind the development of EBP.
Furthermore, many of those who do utilise research findings note that most research is methodologically weak and of generally poor quality. For example, many studies have not utilised the 'gold standard of research, such as a well conducted randomized controlled trial (RCT)' (Trinder 2000, p. 4).
Those who do find research papers often face information overload, given the sheer number of papers available. According to Hook, the volume of medical papers published doubles every 10 to 15 years (1999, p. 3). Distinguishing between rigorous, useful research and poor or unreliable research has therefore become much more difficult for clinicians and practitioners (Trinder 2000, p. 4).
Lastly, many practitioners are utilising techniques that are not based on evidence. Together, these factors result in the continued use of medical interventions that have 'been shown to be ineffective, harmful, slow or limited adoption of interventions which have been proven to be effective or more effective, and there continue to be variances in practice' (Trinder 2000, p. 4).
The critics' arguments against EBP
Critics have argued against EBP on the basis of many common misperceptions of it, as well as several genuine failings. These primarily include the arguments that many doctors were already doing these things; that good evidence is often deficient in many areas; that lack of evidence and lack of benefit are not the same; that the more data are pooled and aggregated, the greater the difficulty in comparing the patients in the studies with the patients presenting; that EBP is a covert method of rationing resources, is overly simplistic and often restrains professionals; and that many clinicians lack the time and resources to practise EBP and require new skills to do so (Guyatt, Cairns and Churchill 1992, p. 268; Trinder 2000, p. 2; Straus and McAlister 2000, pp. 837-9). Furthermore, those who agree that 'EBP makes good sense in theory, have quite appropriately demanded evidence for whether it improves patient outcomes' (Miles, Bentley, Polychronis and Grey 1997, pp. 83-5). However, the ethical and moral implications of such a randomised controlled trial, which would involve withholding evidence-based care from patients, may never be appropriately justifiable.
In developing EBP, some have argued that the new paradigm is sometimes misinterpreted. For example, many have taken EBP's recognition of the limitations of intuition, experience and understanding of pathophysiology in permitting strong inferences as a rejection of these routes to knowledge altogether.
A common misperception among critics, and argument against EBP, is that it ignores the clinical experience and clinical intuition of the practitioner or clinician. On the contrary, it is important to expose learners to exceptional clinicians who have a gift for intuitive diagnosis, a talent for precise observation, and excellent judgement in making difficult management decisions. Untested signs and symptoms should not be rejected out of hand; they may prove extremely useful, and ultimately be proved valid through rigorous testing. The more experienced clinicians can dissect the process they use in diagnosis and clearly present it to learners, the greater the benefit. Similarly, the gain for students will be greatest when clues to optimal diagnosis and treatment are culled from the barrage of clinical information in a systematic and reproducible fashion (Craig, Irwig and Stockler 2001, pp. 1-3).
Institutional experience can also provide important insights. Diagnostic tests may differ in their accuracy depending on the skill of the practitioner. A local expert in, for instance, diagnostic ultrasound may produce far better results than the average reported in the published literature. The effectiveness of, and complications associated with, therapeutic interventions, particularly surgical procedures, may also differ across institutions. When optimal care is taken both to record observations reproducibly and to avoid bias, clinical and institutional experience evolves into the systematic search for knowledge that forms the core of evidence-based medicine (Straus and McAlister 2000, p. 839).
Another argument is that evidence-based medicine leaves no place for the understanding of basic investigation and pathophysiology. In fact, the dearth of adequate evidence often demands that clinical problem-solving rely on an understanding of underlying pathophysiology, and a good understanding of pathophysiology is necessary for interpreting clinical observations and for the appropriate interpretation of evidence. However, numerous studies have 'demonstrated the potential fallibility of extrapolating directly from the bench to the bedside without the intervening step of proving the assumptions to be valid in human subjects' (Echt, Leibson, Mitchell, Peters and Obias 1991, p. 781).
Some critics have argued that EBP ignores standard aspects of clinical training, such as the physical examination. On the contrary, a careful history and physical examination provide much, and often the best, evidence for diagnosis and direct treatment decisions. The clinical teacher of EBP must therefore give considerable attention to teaching the methods of history-taking and clinical examination, with particular attention to which items have demonstrated validity and to strategies that enhance observer agreement (Echt et al. 1991, pp. 781-2).
Large randomised controlled trials are extraordinarily useful for examining discrete interventions for carefully defined medical conditions. The more complex the patient population, the conditions and the intervention, the more difficult it is to separate the treatment effect from random variation. Because of this, a number of studies obtain non-significant results, either because there is insufficient statistical power to show a difference, or because the groups are not well enough 'controlled' (Straus and McAlister 2000, p. 839).
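The power problem can be made concrete with the standard normal-approximation sample-size formula for comparing two proportions. The sketch below (in Python) is a minimal illustration using hypothetical event rates, not a substitute for formal trial design.

import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients needed per arm to detect event rates p1 vs p2
    with a two-sided test (normal approximation for two proportions)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # threshold for statistical significance
    z_beta = z(power)            # desired power to detect the difference
    p_bar = (p1 + p2) / 2        # pooled event rate under the null hypothesis
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical event rates: a large treatment effect needs few patients
# per arm, while a small effect needs roughly twenty times as many.
print(n_per_group(0.30, 0.10))   # about 62 per arm
print(n_per_group(0.30, 0.25))   # about 1251 per arm

Because the required sample size grows with the inverse square of the effect size, the modest effects typical of complex interventions demand very large trials, and underpowered studies of them will routinely return non-significant results.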
Furthermore, critics may argue that EBP has been practised mostly where the intervention tested is a drug. Applying its methods to other forms of treatment may be harder, particularly those requiring the active participation of the patient, because blinding is more difficult (Stephenson and Imrie 1998, p. 1). The types of trials considered the 'gold standard' (i.e. randomised double-blind placebo-controlled trials) are very expensive, and thus funding sources play a role in what gets investigated. For example, the government funds a large number of preventive medicine studies that endeavour to improve public health as a whole, while pharmaceutical companies fund studies intended to demonstrate the efficacy and safety of particular drugs, so long as the outcomes are in their favour (Coats 2004, pp. 2-3). Furthermore, 'determining feasibility and relevance to the real world is often difficult' (Stephenson and Imrie 1998, p. 2).
One of the fears surrounding EBP is that purchasers and managers will seize on it in order to cut the costs of health care. This would not only be a misuse of EBP but would also reflect a fundamental misunderstanding of its financial consequences: doctors practising EBP will identify and apply the most efficacious interventions to maximise the quality and quantity of life for individual patients, and this may raise rather than lower the cost of their care (Straus and McAlister 2000, p. 839).
Many of the studies published in medical journals may not be representative of all the studies completed on a given topic, published and unpublished (i.e. publication bias), or may be misleading due to conflicts of interest; therefore the array of evidence available on particular therapies may not be well represented in the literature.
Strengths of evidence-based practice
The many strengths of EBP include: finding better procedures, stopping negative procedures, learning from other people's mistakes, providing a basis for clinical judgement, legal protection, the best utilisation of resources and, ultimately, best clinical practice (Straus and McAlister 2000, pp. 837-40; Trinder 2000, p. 2).
By utilising the evidence to provide the best practice possible, a clinician or practitioner is capable of reducing, if not removing, the possible harms of treatment. According to Trinder, 'EBP remains firmly committed to the modernist promise that risk can be assessed and controlled by expert knowledge and that potential harm of interventions can be minimised and the potential benefits maximised' (2000, pp. 7-8).
EBP is an approach that 'promotes the collection, interpretation, and integration of client-reported, clinician-observed, and research-derived evidence' (McKibbon, Wilczynski, Hayward, Walker-Dilks and Haynes 1995, p. 4). It does not dictate your clinical decisions, but may be utilised to ensure that, through the 'conscientious, explicit and judicious use of current best evidence' (Sackett et al. 1997, p. 2), you develop the most effective and efficient treatment decision for your patient.
EBP may help identify procedures that are not cost-effective and so may be dropped. This is not to say that it will drop procedures or change a patient's management plan on the basis of economic constraints, but that it will look for more effective methods of providing treatment (Straus and McAlister 2000, p. 839). Likewise, EBP may help identify new procedures and justify their cost. According to Trinder, 'supporters and advocates of EBP claim that the approach results in the best practice and the best use of resources' (2000, p. 2).
EBP may become the common language through which different health care disciplines communicate, such as the medical, physiotherapy, nursing and paramedical disciplines (Sackett et al. 1997, p. 17). Furthermore, EBP principles do not change from undergraduate to postgraduate education, and hence are well suited to the lifelong process of study associated with any professional clinician or practitioner (Echt et al. 1991, pp. 781-2).
Weaknesses of evidence-based practice
The weaknesses of EBP include: the limitations of the samples used in research; the continuing need to make clinical judgements (EBP is only a guideline); the fact that information develops rapidly and beyond any one person's capacity to absorb it, which makes protocols useful; the new skills it requires of clinicians; the fact that it may raise, and not necessarily lower, the cost of health care; the fact that it cannot replace experience; and the paucity of proof that EBP actually works (Straus and McAlister 2000, p. 838; Trinder 2000, p. 2).
EBP requires new skills of the clinician, including efficient literature searching and the application of formal rules of evidence in evaluating the clinical literature, such as the five steps in the process of EBP described by Sackett et al. (1997, p. 3).
EBP is also not ivory-tower or armchair medicine, but a way of staying on top of a busy professional life. It is not, however, an alternative to experience.
EBP is not cook-book medicine imposed from above and slavishly followed, but an active process that integrates the doctor's own expertise, the external evidence and the patient's preferences. Clinical guidelines are similarly subject to this flexible approach. External clinical evidence can inform, but never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence is relevant to the patient at all (Howitz 1996, p. 320).
EBP is not necessarily a cost-cutting exercise, but a method of looking for the most effective ways to improve the quality and quantity of patients' lives. This may in fact raise, not lower, the cost of care (Straus and McAlister 2000, p. 839).
Because EBP requires a bottom-up approach that integrates the best external evidence with individual clinical expertise and patient choice, it cannot result in slavish, cook-book approaches to individual patient care. External clinical evidence can inform, but can never replace, individual clinical expertise, and it is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision (Coats 2004, pp. 4-5). Similarly, any external guideline must be integrated with individual clinical expertise in deciding whether and how it matches the patient's clinical state, predicament and preferences, and thus whether it should be applied.
EBP involves tracking down the best external evidence with which to answer our clinical questions. To find out about the accuracy of a diagnostic test, 'a clinician needs to find proper cross-sectional studies of patients clinically suspected of harbouring the relevant disorder, not a randomised trial' (Oxman, Sackett and Guyatt 1993, p. 2). For a question about prognosis, a clinician needs proper follow-up studies of patients assembled at a uniform, early point in the clinical course of their disease. Sometimes the evidence required will come from the basic sciences, such as genetics, immunology and basic pathophysiology. It is when asking questions about therapy that one should try to 'avoid the non-experimental approaches, since these routinely lead to false-positive conclusions about efficacy' (Coats 2004, pp. 2-3).
Because the randomised trial, and especially the systematic review of several randomised trials, is so much more likely to inform us and so much less likely to mislead us, it has become the 'gold standard' for judging whether a treatment does more good than harm. However, according to Straus and McAlister, even though 'randomised clinical trials are considered to be the "gold standard" for establishing the effects of an intervention, they are not necessarily the best sources for answering questions about diagnosis, prognosis or harm' (2000, p. 839). Furthermore, some questions about therapy that would ordinarily require randomised trials may not need them, for example where an intervention succeeds in an otherwise fatal condition, or where patients cannot wait for the trials to be conducted. And if no randomised trial has been carried out for our patient's predicament, a clinician should follow the trail to the next best external evidence and work from there.
Despite its ancient origins, evidence-based medicine remains a relatively young discipline whose positive impacts are only beginning to be validated, and it will continue to evolve (Oxman et al. 1993, p. 2). This evolution will be enhanced as various undergraduate, postgraduate and continuing medical education programmes adopt and adapt it to their learners' needs. These programmes, and their evaluation, will provide further information and understanding about what EBP is, and what it is not. As yet, however, there is 'no good evidence to suggest that EBP actually works' (Trinder 2000, p. 4).
Critical appraisal of clinical practice involves additional time and effort, and may be perceived as wasteful; however, this time and effort may be reduced by clinicians developing effective searching skills and simple guidelines for assessing the validity of research papers. In addition, it should be emphasised that critical appraisal, as a strategy for solving clinical problems, is most appropriate when the problems are common in one's own practice (Oxman et al. 1993, p. 3).
Conclusion
This paper has explained why researchers and health bureaucrats have enthusiastically embraced evidence-based practice, while critics have argued that the evidence on which the EBP philosophy rests has inherent weaknesses that make it capable of harm as well as good. EBP was defined as 'the conscientious, explicit and judicious use of current best evidence in making decisions about the care of your client' (Sackett et al. 1997, p. 2), and its principles and processes were explained. The proponents' case for EBP was then set out by discussing the four main factors driving it, followed by the critics' arguments against it. Lastly, an objective view of both the strengths and weaknesses of EBP was presented. In doing so, it was determined that, in theory, EBP is an excellent idea, and there is a definite need for it within the medical context as well as in many other professions; however, while it is still early in its practice, there is no substantial evidence to suggest that EBP actually works.
References:
American Medical Association 1992, Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine, Centre for Health Evidence, http://www.cche.net/usersguides/ebm.asp#Paradigm, last updated 11/7/2005, viewed 16/9/2005.
Coats, V. 2004, 'Randomised controlled trials — almost the best available evidence for practice', Journal of Diabetes Medicine.
Cook, D., Jaeschke, R. and Guyatt, G. 1992, 'Critical appraisal of therapeutic interventions in the intensive care unit: human monoclonal antibody treatment in sepsis', Journal Club of the Hamilton Regional Critical Care Group, Journal of Intensive Care Medicine, vol. 7, pp. 275-82.
Craig, C., Irwig, L. and Stockler, M. 2001, 'Evidence-based medicine: useful tools for decision making', Medical Journal of Australia.
Delamothe, T. and Smith, R. 1998, 'The BMJ's website scales up evidence based practice', British Medical Journal.
Doust, J. and Del Mar, B. 2004, 'Why do doctors use treatments that do not work?', British Medical Journal.
Echt, D., Leibson, P., Mitchell, R., Peters, B. and Obias, R. 1991, 'Mortality and morbidity in patients receiving encainide and flecainide or placebo', New England Journal of Medicine.
Guyatt, G., Cairns, J., Churchill, D. and the Evidence-Based Medicine Working Group 1992, 'Evidence-based medicine: a new approach to teaching the practice of medicine', JAMA.
Hook, O. 1999, 'Scientific communications: history, electronic journals and impact factors', British Medical Journal.
Howitz, R. 1996, 'The dark side of evidence based medicine', Journal of Emergency Medicine.
Humphreys, L. and McCutcheon, E. 1985, 'Growth patterns in the National Library of Medicine's serials collection and in Index Medicus journals, 1966-1985', Bulletin of the Medical Library Association.
Maudsley, S. 2000, 'Science, critical thinking and competence for tomorrow's doctors: a review of terms and concepts', Journal of Medicine Education.
Miles, A., Bentley, P., Polychronis, A. and Grey, J. 1997, 'Evidence-based medicine: why all the fuss? This is why', Journal of Evaluation in Clinical Practice.
Oxman, A., Sackett, D. and Guyatt, G. 1993, 'EBM: how to get started', JAMA.
Rosenberg, W. and Donald, A. 1995, 'Evidence based medicine: an approach to clinical problem-solving', British Medical Journal.
Sackett, L., Richardson, S., Rosenberg, W. and Haynes, B. 1997, Evidence-based Medicine: How to Practice and Teach EBM, Churchill Livingstone, New York.
Sackett, L., Rosenberg, W., Gray, J., Haynes, B. and Richardson, W. 1996, 'Evidence based medicine: what it is and what it isn't', British Medical Journal.
Straus, E. and McAlister, A. 2000, 'Evidence-based medicine: a commentary on common criticisms', Canadian Medical Association Journal.
Stephenson, J. and Imrie, J. 1998, 'Why do we need randomised controlled trials to assess behavioural interventions?', British Medical Journal.
Trinder, L. 2000, 'Introduction: the context of evidence-based practice', in Evidence-based Practice: A Critical Appraisal, Blackwell Science, Oxford.