Supplemental Digital Content is Available in the Text.

Deep learning models can detect and locate retinal breaks in ultrawidefield fundus images. Using YOLO v3 architecture-based transfer learning, our model performed well in both per-image classification and per-object detection.

Purpose: We aimed to develop a deep learning model for detecting and localizing retinal breaks in ultrawidefield fundus (UWF) images.

Methods: We retrospectively enrolled treatment-naive patients who were diagnosed with a retinal break or rhegmatogenous retinal detachment and had UWF images. The model was developed via transfer learning on a YOLO v3 architecture backbone. Performance was evaluated by per-image classification and per-object detection.

Results: Overall, 4,505 UWF images from 940 patients were used in the current study; of these, 306 UWF images from 84 patients were included in the test set. In per-object detection, the average precision of the object detection model considering every retinal break was 0.840. At the best threshold, the overall precision, recall, and F1 score were 0.6800, 0.9189, and 0.7816, respectively. In per-image classification, the model achieved an area under the receiver operating characteristic curve of 0.957 on the test set, with overall accuracy, sensitivity, and specificity of 0.9085, 0.8966, and 0.9158, respectively.

Conclusion: The UWF image-based deep learning model evaluated in the current study performed well in diagnosing and locating retinal breaks.
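The per-object F1 score reported above follows directly from the stated precision and recall, since F1 is their harmonic mean. A minimal sketch checking this relationship (the values are taken from the abstract; the function name is our own, not from the study's code):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported per-object detection metrics at the best threshold
precision = 0.6800
recall = 0.9189

print(round(f1_score(precision, recall), 4))  # → 0.7816, matching the reported F1
```

This confirms the reported metrics are internally consistent.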