Automated interpretation of cardiac images has the potential to change clinical practice in many ways. For example, it could enable non-experts in primary care and rural settings to assess cardiac function over time. In this paper, we tested the research hypothesis that recent developments in computer vision make it possible to build a fully automated, scalable pipeline for echocardiogram interpretation, covering every step from view identification and Medical Image Segmentation (MIS) to structure and function quantification and Fetal Cardiac RhabDomyoma (FCRD) detection. Although rare overall, FCRDs are the most frequent type of Fetal Cardiac Tumor (FCT). In fetal cardiology, imaging, particularly echocardiography, has proven helpful for diagnosing and monitoring fetuses with a compromised circulatory system. However, the severe shortage of qualified and experienced sonographers makes diagnosing Cardiac RhabDomyomas (CRDs) very challenging. Accurate prenatal segmentation of the Fetal Cardiac (FC) structures to identify structural defects is critical for reducing illness among newborns. To automate cardiac-chamber segmentation for CRD detection, we propose a novel Attention-Residual Network-based V-Net architecture (ARVNet). In this study, examinations were performed on Fetal Rhabdomyomas noted in the Right Ventricle (FRRV), the Left Ventricle (FRLV), the Right Atrium (FRRA), the Left Atrium (FRLA), and the Tricuspid Valve (FRTV); images without a rhabdomyoma are labeled Normal Condition (NC). All images were acquired at Selvam Hospital in Melapalayam, Tirunelveli, Tamil Nadu, India. Even with a relatively small dataset, the proposed technique achieves high CRD detection performance, as evidenced by the results.
The results show that the proposed model segmented all views accurately, with a specificity of 99.7% and a Dice similarity coefficient of 99.8%, and detected CRDs with a mean accuracy of approximately 99.85%.
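The specificity and Dice similarity figures reported above follow standard definitions for binary segmentation masks. As a minimal sketch (these are the textbook metric formulas, not the authors' evaluation code), both can be computed from a predicted mask and a ground-truth mask as follows:

```python
# Standard segmentation metrics on flat binary masks (lists of 0/1).
# Note: illustrative reimplementation of the definitions only; the paper's
# own evaluation pipeline is not described in the abstract.

def dice_coefficient(pred, truth):
    """Dice similarity: 2|P ∩ T| / (|P| + |T|) for binary masks."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

def specificity(pred, truth):
    """True-negative rate: fraction of background pixels correctly labeled."""
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    negatives = sum(1 for t in truth if t == 0)
    return tn / negatives if negatives else 1.0

# Toy masks, flattened from a segmentation map.
pred  = [0, 1, 1, 0, 1, 0]
truth = [0, 1, 1, 1, 1, 0]
print(round(dice_coefficient(pred, truth), 3))  # 0.857
print(round(specificity(pred, truth), 3))       # 1.0
```

In practice the masks would be flattened per-view network outputs, and the reported numbers would be averaged over the test set.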