With rapid developments in big data technology and the prevalence of large-scale datasets from diverse sources, the healthcare predictive analytics (HPA) field is witnessing a dramatic surge in interest. In healthcare, it is not only important to provide accurate predictions, but also critical to provide reliable explanations of the predictions made by the underlying black-box models. Such explanations can play a crucial role not only in supporting clinical decision-making but also in facilitating user engagement and patient safety. If users and decision-makers do not have faith in the HPA model, it is highly likely that they will reject its use. Furthermore, it is extremely risky to blindly accept and apply the results derived from black-box models, which might lead to undesirable consequences or life-threatening outcomes in high-stakes domains such as healthcare. As machine learning and artificial intelligence systems become more capable and ubiquitous, explainable artificial intelligence and machine learning interpretability are garnering significant attention among practitioners and researchers. The introduction of regulations such as the General Data Protection Regulation (GDPR) has amplified the need to ensure the human interpretability of prediction models. In this talk, I will discuss methods and applications for developing local as well as global explanations from machine learning models, and the value they can provide for healthcare prediction.
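To make the local/global distinction concrete, here is a minimal, illustrative sketch for a tabular prediction task. The abstract does not name specific techniques, so this sketch uses permutation importance as a stand-in for a global explanation and a simple occlusion-style attribution as a stand-in for a local one; the dataset, model, and all names below are assumptions for illustration, not the speaker's actual methods.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for a clinical tabular dataset and a black-box model.
data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Global explanation: which features the model relies on overall,
# estimated by permutation importance on held-out data.
glob = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print("Global (top 5 features):")
for i in np.argsort(glob.importances_mean)[::-1][:5]:
    print(f"  {feature_names[i]}: {glob.importances_mean[i]:.3f}")

# Local explanation: for one patient, a crude occlusion-style attribution,
# i.e. how much the predicted risk changes when each feature is replaced
# by its training-set mean.
patient = X_te[0:1].copy()
base_risk = model.predict_proba(patient)[0, 1]
means = X_tr.mean(axis=0)
deltas = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    perturbed = patient.copy()
    perturbed[0, j] = means[j]
    deltas[j] = base_risk - model.predict_proba(perturbed)[0, 1]
print(f"Local (patient 0, predicted risk {base_risk:.3f}):")
for j in np.argsort(np.abs(deltas))[::-1][:5]:
    print(f"  {feature_names[j]}: {deltas[j]:+.3f}")

The global summary answers "what does the model rely on overall?", while the per-patient deltas answer the clinician-facing question "why this prediction for this patient?"; methods such as LIME and SHAP formalize the local side more rigorously than the occlusion heuristic used here.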