For automatic disease-severity-level estimation, a large-scale medical image dataset with level annotations is generally necessary. However, attaching absolute-level annotations (such as levels 0, 1, and 3) is very costly and can even be inaccurate because of level ambiguity. In this study, we experimentally demonstrated that using a ranking function for level estimation can relax this difficulty. We propose a multi-task learning method for automatically estimating disease-severity levels that combines learning to rank with regression. The ranking function of the proposed method can be trained from relative-level annotations together with a small number of absolute-level annotations. For a relative-level annotation, an annotator only needs to specify that one image shows a higher disease level than another, which is much easier than absolute-level annotation. The proposed method enables disease-severity classification by calibrating, through regression, the ranking function learned from relative-level annotations. The effectiveness of the method was demonstrated in a large-scale experiment on ulcerative colitis severity estimation with colonoscopy images.
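The sketch below illustrates, under stated assumptions, how such a multi-task objective could be set up: a shared scorer is trained with a pairwise ranking loss on relative-level image pairs and, jointly, a regression loss on a few absolutely-labeled images that calibrates the scores to severity levels. This is a minimal illustration only; the network architecture, the margin, the loss weight `lam`, and all names are hypothetical and are not taken from the paper.

```python
# Minimal multi-task sketch (hypothetical, not the authors' implementation):
# a shared scorer f(x) is trained with (1) a pairwise ranking loss on
# relative-level pairs and (2) a regression loss on a small set of
# absolute-level images that calibrates the scores to severity levels.
import torch
import torch.nn as nn

class SeverityScorer(nn.Module):
    """Tiny CNN mapping an image to a scalar severity score (illustrative)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

scorer = SeverityScorer()
rank_loss = nn.MarginRankingLoss(margin=1.0)   # relative-level pairs
reg_loss = nn.MSELoss()                        # few absolute-level labels
opt = torch.optim.Adam(scorer.parameters(), lr=1e-4)
lam = 0.1  # hypothetical weight balancing the two tasks

# Dummy batch: (x_hi, x_lo) are pairs where x_hi has the higher severity;
# (x_abs, y_abs) is a small batch with absolute severity levels (e.g., 0-3).
x_hi, x_lo = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
x_abs, y_abs = torch.randn(4, 3, 64, 64), torch.tensor([0., 1., 2., 3.])

s_hi, s_lo = scorer(x_hi), scorer(x_lo)
target = torch.ones_like(s_hi)                 # "first input ranks higher"
loss = rank_loss(s_hi, s_lo, target) + lam * reg_loss(scorer(x_abs), y_abs)

opt.zero_grad()
loss.backward()
opt.step()
```

In this reading, the regression term plays the calibration role described in the abstract: the pairwise loss only constrains the ordering of scores, while the small number of absolutely-labeled images anchors the score scale so that thresholding (or rounding) the calibrated score yields a severity class.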