Whole slide image (WSI) analysis is increasingly being adopted as an important tool in modern pathology. Recent deep learning-based methods have achieved state-of-the-art performance on WSI analysis tasks such as WSI classification, segmentation, and retrieval. However, WSI analysis requires significant computational resources and time because of the large dimensions of WSIs. Most existing approaches require exhaustive decompression of the entire image, which limits their practical use, especially in deep learning-based workflows. In this paper, we present computation-efficient, compression-domain WSI classification workflows that can be applied to state-of-the-art WSI classification models. The approaches leverage the pyramidal magnification structure of WSI files and compression-domain features available from the raw code stream. The methods assign different decompression depths to WSI patches based on features obtained directly from compressed or partially decompressed patches. Patches at the low-magnification level are screened by attention-based clustering, so that high-magnification patches at different locations are assigned different decompression depths. A finer-grained selection based on compression-domain features from the file code stream then determines the subset of high-magnification patches that undergo full decompression. The resulting patches are fed to a downstream attention network for final classification. Computational efficiency is achieved by reducing unnecessary access to the high-magnification level and avoiding expensive full decompression. With fewer decompressed patches, the time and memory costs of downstream training and inference are also reduced significantly. Our approach achieves a 7.2× overall speedup and reduces memory cost by 1.1 orders of magnitude, while model accuracy remains comparable to the original workflow.
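The sketch below illustrates the two-stage screening idea described in the abstract: a coarse pass over low-magnification patches followed by a finer, compression-domain pass that decides which high-magnification patches are fully decompressed. It is an illustrative approximation only, not the paper's implementation: the function name, the ratio parameters, the use of simple top-k thresholding in place of attention-based clustering, and the use of compressed byte size as the compression-domain feature are all assumptions made for the example.

import numpy as np

def select_patches_for_full_decompression(
    low_mag_attention,      # (n_patches,) attention scores from a low-magnification pass
    codestream_byte_sizes,  # (n_patches,) compressed code-stream size of each high-mag patch
    attention_keep_ratio=0.3,
    codestream_keep_ratio=0.5,
):
    """Two-stage screening sketch.

    Stage 1 keeps a fraction of patches by low-magnification attention
    (a stand-in for the paper's attention-based clustering). Stage 2 keeps
    a fraction of those by a compression-domain feature, here the compressed
    byte size, used as a cheap proxy for tissue/texture content. Only the
    final subset would be fully decompressed.
    """
    n = len(low_mag_attention)

    # Stage 1: coarse selection at low magnification.
    k1 = max(1, int(n * attention_keep_ratio))
    coarse_idx = np.argsort(low_mag_attention)[::-1][:k1]

    # Stage 2: finer selection among coarse candidates using the
    # compression-domain feature (larger code stream ~ more texture).
    k2 = max(1, int(len(coarse_idx) * codestream_keep_ratio))
    fine_order = np.argsort(codestream_byte_sizes[coarse_idx])[::-1][:k2]
    full_decompress_idx = coarse_idx[fine_order]

    # Remaining coarse candidates could be assigned a shallower
    # decompression depth; all other patches stay compressed.
    partial_idx = np.setdiff1d(coarse_idx, full_decompress_idx)
    return full_decompress_idx, partial_idx

# Example usage with synthetic scores and code-stream sizes:
rng = np.random.default_rng(0)
attention = rng.random(1000)
byte_sizes = rng.integers(2_000, 40_000, size=1000)
full_idx, partial_idx = select_patches_for_full_decompression(attention, byte_sizes)

In this toy setting, only the patches in full_idx would pay the cost of full decompression, which is the mechanism behind the reported speedup and memory savings.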