Background
E-learning has taken a firm place in postgraduate medical education. Whereas 10 years ago it was promising, it now has a definite niche and is clearly here to stay. However, evaluating the effect of postgraduate medical e-learning (PGMeL) and improving upon it can be complicated. While the learning aims of e-learning are commonly evaluated, no instruments exist to evaluate the instructional design of PGMeL. Such an evaluation instrument can be developed by following the Association for Medical Education in Europe (AMEE) 7-step process. The first 5 steps of this process were previously completed through literature reviews, a focus group discussion, and an international Delphi study.

Objective
This study continues with steps 6 and 7 and answers the research question: Is a content-validated PGMeL evaluation survey useful, understandable, and of added value for creators of e-learning?

Methods
The study comprised five phases: creating a survey from 37 items (phase A); testing readability and question interpretation (phase B); adjusting, rewriting, and translating the survey (phase C); gathering completed surveys from three PGMeL modules (phase D); and holding focus group discussions with the e-learning authors (phase E). Phase E was carried out by presenting the evaluation results from phase D, followed by a group discussion. Four groups participated in the study: groups A and B were experienced end users of PGMeL and took part in phase B; group C consisted of users who undertook the e-learning modules and were asked to complete the survey in phase D; and group D comprised the authors of those e-learning modules.

Results
From a list of 36 items, we developed the postgraduate Medical E-Learning Evaluation Survey (MEES). Seven residents participated in the phase B group discussion: 4 items were interpreted differently, 3 had poor readability, and 2 were duplicates. The flagged items were rewritten and, after adjustment, were understood correctly. The MEES was translated into Dutch and pilot-tested again; all items were clear and understood correctly. The version used for the evaluation contained 3 positive domains (motivation, learning enhancers, and real-world translation) and 2 negative domains (barriers and learning discouragers), comprising 36 items across those domains, 5 Likert-type questions scored from 1 to 10, and 5 open questions inviting participants’ own comments on each domain. Three e-learning modules were evaluated from July to November 2018, yielding a total of 158 responses from a Dutch module, a European OB/GYN (obstetrics and gynecology) module, and a surgical module offered worldwide. Finally, 3 focus group discussions took place with a total of 10 participants: usefulness was much appreciated, understandability was good, and added value was high. Four items needed additional explanation by the authors, and a Creators’ Manual was written at their request.

Conclusions
The MEES is the first survey to evaluate the instructional design of PGMeL and was constructed following all 7 steps of the AMEE development process. This study completes the design of the survey and demonstrates its usefulness and added value to authors, delivering a final, publicly available survey together with a Creators’ Manual. We briefly discuss the number of responses needed and conclude that more is better; in the end, however, one has to work with what is available. The next steps are to test whether improvement can be measured using the MEES and to continue working on end-user understandability across different languages and cultural groups.
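The abstract specifies the MEES structure concretely: 5 domains (3 positive, 2 negative), a 1-to-10 scale question per domain, and an open comment question per domain. As an illustration only, the following Python sketch shows one possible way to represent such responses and aggregate per-domain scores. All names here (DomainResponse, summarize) are hypothetical, and the one-scale-question-per-domain reading is an assumption; the published survey and its Creators’ Manual define the actual items and scoring.

```python
from dataclasses import dataclass
from statistics import mean

# Domain names as listed in the abstract; item wording is not reproduced here.
POSITIVE_DOMAINS = ["motivation", "learning enhancers", "real-world translation"]
NEGATIVE_DOMAINS = ["barriers", "learning discouragers"]
ALL_DOMAINS = POSITIVE_DOMAINS + NEGATIVE_DOMAINS

@dataclass
class DomainResponse:
    """One respondent's answer for one MEES domain (hypothetical model)."""
    domain: str
    scale_score: int   # 1-10 rating, as described in the abstract
    comment: str = ""  # free-text answer to the domain's open question

    def __post_init__(self) -> None:
        if self.domain not in ALL_DOMAINS:
            raise ValueError(f"unknown domain: {self.domain!r}")
        if not 1 <= self.scale_score <= 10:
            raise ValueError(f"scale_score must be 1-10, got {self.scale_score}")

def summarize(responses: list[DomainResponse]) -> dict[str, float]:
    """Average the 1-10 scale scores per domain across all respondents."""
    by_domain: dict[str, list[int]] = {}
    for r in responses:
        by_domain.setdefault(r.domain, []).append(r.scale_score)
    return {domain: round(mean(scores), 2) for domain, scores in by_domain.items()}

# Example: three responses covering two domains.
responses = [
    DomainResponse("motivation", 8, "Clear cases kept me engaged."),
    DomainResponse("motivation", 6),
    DomainResponse("barriers", 3, "Videos loaded slowly."),
]
print(summarize(responses))  # {'motivation': 7.0, 'barriers': 3.0}
```

Note that scores in the two negative domains (barriers and learning discouragers) would presumably be interpreted inversely from the positive domains; how the published MEES handles this is a matter for its Creators’ Manual, not this sketch.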