Implications of Practice Variability.

Clinicians strive to practice evidence-based medicine. The difficulty is that little routine care has actually been validated in robust clinical trials. Consequently, many generally accepted clinical approaches are neither supported nor refuted by available research, having instead developed piecemeal from incremental improvements and clinician experience. Perhaps consequently, clinical practices quite reasonably vary considerably among clinicians within institutions, and even more across institutions and around the world. Insufficient knowledge is hardly limited to anesthetic management; it extends to surgical practice and all other areas of medicine. Even across major variations in practice, there is little convincing evidence that one approach is preferable to another. Consider, for example, the limited supporting evidence for (or compelling evidence against) stress testing or tomographic angiography, volatile anesthetic toxicity in neonates, neuraxial versus general anesthesia, intravenous versus volatile anesthesia, supplemental oxygen for prevention of surgical site infection, and targeted temperature management for nearly any indication except neonatal hypoxia. Even less evidence supports more subtle practice differences such as the amount and type of intravenous fluid, intraoperative tidal volume, and positive end-expiratory pressure.

Given various approaches to a clinical problem, trials should be able to identify the best one relatively easily. In fact, it has not been easy. Most major trials show that primary outcomes are similar with each tested treatment, an observation that applies to drugs, devices, clinical approaches, and health system modifications. For example, an analysis of trials funded by the National Heart, Lung, and Blood Institute showed that only 16% of the large (and expensive) trials with substantive clinical outcomes demonstrated meaningful treatment effects. Recent perioperative examples include major superiority trials of nitrous oxide, clonidine, aspirin, short red cell storage, steroids for cardiac surgery, regional analgesia for cancer recurrence, intensive care unit checklists and goal-setting, and levosimendan.

Robust trials showing comparable effects of various treatments are valuable, especially if one treatment is easier to implement, less toxic, or less expensive than the alternative. Still, it is disconcerting that so many large trials (e.g., more than 1,000 patients) fail to demonstrate strong evidence for a difference in treatments when differences were expected based on preclinical or other data, especially since such trials are typically based on compelling mechanisms, strong animal data, and supportive meta-analyses of small trials. A reasonable question is why well-designed and well-conducted major trials with statistically robust results so often demonstrate that primary results are similar with experimental and reference interventions.

Keywords: practice variability; medicine; practice; treatment; implications practice; evidence

Journal Title: Anesthesiology
Year Published: 2020
