Self-supervised methods are gaining increasing attention, especially in the medical domain, where labeled data are scarce. They provide results on par with or superior to their fully supervised competitors, yet it remains unclear how the information encoded by the two approaches differs. This work introduces a novel comparison framework for explaining differences between supervised and self-supervised models in terms of visual characteristics important to the human perceptual system. We apply this framework to models trained for Gleason score prediction and conclude that self-supervised methods are more biased toward contrast and texture transformations than their supervised counterparts, while supervised methods encode more information about shape.
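The abstract does not detail how the framework quantifies such biases, but a common way to probe them is to measure how much a frozen encoder's representation shifts under a given perturbation. Below is a minimal sketch of that idea, not the authors' implementation; the encoder names and the specific perturbations are assumptions for illustration.

```python
# Sketch: probe a frozen encoder's sensitivity to contrast-style vs.
# shape-related perturbations via the shift in its embeddings.
# All model names below are hypothetical placeholders.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def embedding_shift(encoder, images, transform):
    """Mean cosine distance between embeddings of original and perturbed images."""
    encoder.eval()
    with torch.no_grad():
        z_orig = encoder(images)            # (N, D) embeddings
        z_pert = encoder(transform(images)) # embeddings after perturbation
    return (1 - F.cosine_similarity(z_orig, z_pert, dim=1)).mean().item()

# Example perturbations: a contrast change vs. a small affine (shape-affecting) change.
contrast_jitter = T.ColorJitter(contrast=0.5)
small_affine = T.RandomAffine(degrees=10, scale=(0.9, 1.1))

# `supervised_model` and `ssl_model` are assumed frozen feature extractors
# returning (N, D) tensors; `batch` is an (N, C, H, W) tensor of image patches.
# for name, model in [("supervised", supervised_model), ("self-supervised", ssl_model)]:
#     print(name,
#           "contrast shift:", embedding_shift(model, batch, contrast_jitter),
#           "affine shift:", embedding_shift(model, batch, small_affine))
```

Under this kind of probe, a larger shift for a given perturbation suggests the representation relies more heavily on the corresponding visual cue.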