Current medical imaging increasingly relies on 3D volumetric data, making it difficult for radiologists to thoroughly search all regions of the volume. In some applications (e.g., Digital Breast Tomosynthesis), the volumetric data is typically paired with a synthesized 2D image (2D-S) generated from the corresponding 3D volume. We investigate how this image pairing affects the search for spatially large and small signals. Observers searched for these signals in 3D volumes, in 2D-S images, and while viewing both. We hypothesize that lower spatial acuity in the observers' visual periphery hinders the search for the small signals in the 3D images, but that the inclusion of the 2D-S guides eye movements to suspicious locations, improving the observer's ability to find the signals in 3D. Behavioral results show that the 2D-S, used as an adjunct to the volumetric data, improves the localization and detection of the small (but not the large) signal compared to 3D alone, with a concomitant reduction in search errors. To understand this process at a computational level, we implement a Foveated Search Model (FSM) that executes human eye movements and then processes points in the image with varying spatial detail based on their eccentricity from fixations. The FSM predicts human performance for both signals and captures the reduction in search errors when the 2D-S supplements the 3D search. Our experimental and modeling results delineate the utility of the 2D-S in 3D search: it reduces the detrimental impact of low-resolution peripheral processing by guiding attention to regions of interest, effectively reducing errors.
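The core mechanism the FSM relies on, the falloff of spatial resolution with eccentricity from fixation, can be illustrated with a minimal sketch. The functional form and the parameter values (`r0`, `e2`, the signal frequencies) below are illustrative assumptions, not the paper's fitted model; they show only why a small, high-frequency signal becomes undetectable in the periphery while a large, low-frequency one does not.

```python
def effective_resolution(eccentricity_deg, r0=48.0, e2=2.5):
    """Spatial detail available at a given eccentricity (cycles/deg).

    r0: assumed foveal resolution limit; e2: assumed eccentricity at
    which resolution halves. Both are illustrative values, not the
    FSM's actual parameters.
    """
    return r0 / (1.0 + eccentricity_deg / e2)

def detectable(signal_freq, eccentricity_deg):
    """A signal is resolvable only if the local resolution limit
    exceeds its dominant spatial frequency (a simplification of
    eccentricity-dependent processing)."""
    return effective_resolution(eccentricity_deg) >= signal_freq

# A small (high-frequency) signal is resolvable near fixation
# but not 10 degrees into the periphery:
small_at_fovea = detectable(20.0, 0.5)      # True
small_in_periphery = detectable(20.0, 10.0)  # False

# A large (low-frequency) signal survives peripheral viewing:
large_in_periphery = detectable(4.0, 10.0)   # True
```

Under this toy account, guiding fixations toward suspicious locations (as the 2D-S does) matters precisely because the small signal is only detectable within a narrow window around fixation.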