In recent years, there have been many suggestions for modifying the well-known U-Net architecture to improve its performance. The central motivation of this work is to provide a fair comparison of U-Net and five of its extensions under identical conditions, in order to disentangle the influence of model architecture, model training, and parameter settings on the performance of a trained model. For this purpose, each of these six segmentation architectures is trained on the same nine data sets. The data sets are selected to cover various imaging modalities (X-rays, computed tomography, magnetic resonance imaging), single- and multi-class segmentation problems, and single- and multi-modal inputs. During training, it is ensured that the data preprocessing, the split into training, validation, and testing subsets, the optimizer, the learning rate schedule, the architecture depth, the loss function, the supervision, and the inference procedure are exactly the same for all compared architectures. Performance is evaluated in terms of Dice coefficient, surface Dice coefficient, average surface distance, Hausdorff distance, and training and prediction time. The main contribution of this experimental study is demonstrating that the architecture variants do not improve inference quality relative to the basic U-Net architecture, while their resource demands rise.
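As a point of reference for the primary evaluation metric named above, the following is a minimal sketch of how the Dice coefficient is commonly computed for binary segmentation masks; the function and variable names are illustrative and not taken from the paper's codebase.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient 2*|A ∩ B| / (|A| + |B|) for two binary masks.

    `eps` avoids division by zero when both masks are empty (illustrative choice).
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example with two overlapping 2x3 masks:
a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3 + 3) ≈ 0.667
```

The surface-based metrics (surface Dice, average surface distance, Hausdorff distance) are computed analogously on the mask boundaries rather than on the full voxel sets.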