Least mean square error reconstruction for the self-organizing network (Lmser) was proposed in 1991, featuring a bidirectional architecture with several built-in natures. In this paper, we develop Lmser into a CNN-based Lmser (CLmser), highlighted by new findings on the strengths of two major built-in natures of Lmser, namely duality in connection weights (DCW) and duality in paired neurons (DPN). Experimental results on several real benchmark datasets show that DCW and DPN offer relative strengths in different respects. While both DCW and DPN improve the generalization ability of the reconstruction model on small-scale datasets and alleviate the vanishing-gradient problem, DPN plays the main role. Moreover, DPN forms shortcut links from the encoder to the decoder that pass detailed information, so it enhances image reconstruction performance and enables CLmser to outperform some recent state-of-the-art methods in image inpainting with irregularly and densely distributed point-shaped masks.
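To make the two built-in natures concrete, below is a minimal sketch in PyTorch of a fully connected Lmser-style autoencoder: DCW is realized by letting the decoder reuse the transposed encoder weights, and DPN by adding each encoder activation to its paired decoder layer as a shortcut. This is an illustrative sketch under our own assumptions (the class name TinyLmser, the layer sizes, and the choice of activations are hypothetical), not the authors' CLmser, which uses convolutional layers.

    # Sketch of Lmser's two built-in natures (assumed fully connected layers, not the CNN-based CLmser):
    #   DCW: decoder shares the encoder weights via transpose (tied weights).
    #   DPN: each encoder activation is added to the paired decoder layer (shortcut link).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyLmser(nn.Module):
        def __init__(self, dims=(784, 256, 64)):
            super().__init__()
            # One weight matrix per encoder layer; the decoder reuses its transpose (DCW).
            self.weights = nn.ParameterList(
                [nn.Parameter(torch.randn(d_out, d_in) * 0.01)
                 for d_in, d_out in zip(dims[:-1], dims[1:])]
            )

        def forward(self, x):
            # Encoder pass: keep every activation for its paired decoder neuron (DPN).
            acts = [x]
            h = x
            for w in self.weights:
                h = torch.relu(F.linear(h, w))
                acts.append(h)
            # Decoder pass: transposed encoder weights (DCW) plus encoder shortcuts (DPN).
            for i in range(len(self.weights) - 1, -1, -1):
                h = F.linear(h, self.weights[i].t())      # DCW: map back with W^T
                if i > 0:
                    h = torch.relu(h + acts[i])           # DPN: add paired encoder activation
                else:
                    h = torch.sigmoid(h)                  # reconstruction in [0, 1]
            return h

    # Usage: least mean square error reconstruction of a batch of flattened 28x28 images.
    x = torch.rand(8, 784)
    recon = TinyLmser()(x)
    loss = F.mse_loss(recon, x)

In this sketch the shortcut additions are what would carry detailed information from encoder to decoder, which is the mechanism the abstract credits for the gains in reconstruction and inpainting.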