Are We Doing a Good Job? In Praise of Program Evaluation

Regular readers of our journal may have noticed that we are introducing a new feature. Each article must now include a “clinical implications” statement in the abstract and again in the discussion. Soon every article should clearly indicate “so what does this mean for the clinician in the field?”

Most of the papers in our second issue of 2017 focus on program evaluation. I am very enthusiastic about program evaluation. Why? Well, for starters, the doing of clinical work is demanding enough; it takes extra energy to go the extra step of program evaluation. Yet systematically evaluating our programs forces us to characterize what we are doing and to investigate the extent and limits of our effectiveness. It is critical to join the knowledge we gain from program evaluation in the “real world” setting with controlled trials to arrive at a fully formed picture of good clinical care. Program evaluation maximizes external validity, typically examining utility, feasibility, and effectiveness. Clinical research maximizes internal validity, using tight controls to increase accuracy and precision. Program evaluation not only tells us “does this work?” but also “who does it work for?” and “how can we improve it next time?” When we publish program evaluations, we serve the community by being brave enough to open our work to scrutiny by others, and generous enough to share our ideas to help others.

As I understand it, in advancing evidence-based care we start with evaluation of a new program, move to clinical trials, and then shift to dissemination and implementation. However, I find dissemination and implementation frameworks useful for thinking about program evaluation on the front end as well. There are many frameworks available; one with freely available resources is the “RE-AIM” framework at http://re-aim.org. Its checklists, measures, and question sets will help you identify variables to measure that might not have occurred to you.

In this issue of our journal we have evaluations of programs that are clinical in nature, as well as those focusing on mentorship of professionals and education of community members. First up is Knight and Alarie’s evaluation of a geriatric mental health day treatment service (Knight & Alarie, 2017). Several things distinguish their evaluation, including its large size (N = 255), its combination of pre-post measurement with focus groups, and its (somewhat rare) focus on psychiatric day treatment for older adults. I found their reflection on the challenges of “aftercare,” that is, their clients’ desire to retain some connection after discharge, to be interesting and important. Next we have Emery-Tiburcio’s group presenting results from “BRIGHTEN” (Bridging Resources of a Geriatric Health Team via Electronic Networking) (Emery-Tiburcio et al., 2017). They demonstrate the ability to enroll participants in the program, and the program’s equal effectiveness across racial/ethnic groups and educational levels. Their innovative model combines treatment by linguistically and culturally sensitive staff with email consultation with an interdisciplinary team. We are also pleased to share a program evaluation of a peer mentorship initiative with psychologists in home-based primary care teams (Terry, Gordon, Steadman-Wood, & Karel, 2017). The authors describe the content of mentorship contacts (e.g., discussions of challenges with clinical care, professional communication, and work-life balance), as well as participants’ high satisfaction with the program.

Supporting new professionals working with older adults is one important piece of the puzzle of improving care overall. We have two clinical comments that serve as small-scale evaluations. Barrera and colleagues


Journal Title: Clinical Gerontologist
Year Published: 2017
