This research analyzes the affinities between two well-known learning schemes that apply randomization in the training process: Extreme Learning Machines (ELMs) and the framework of learning with similarity functions. These paradigms share a common approach to inductive learning, combining an explicit data remapping with a linear separator, but they adopt different strategies in the design of the mapping layer. This paper shows that the theory of learning with similarity functions motivates a novel reinterpretation of ELM, leading to a common framework. This in turn allows one to improve ELM's strategy for setting the neurons' parameters. Experimental results confirm that the new approach can improve on the standard strategy in terms of the trade-off between classification accuracy and the dimensionality of the remapped space.
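The abstract does not spell out the two mapping layers, so the following is a minimal sketch of the shared "remap, then linearly separate" structure under common assumptions: an ELM-style layer uses random input weights with a nonlinear activation, while a similarity-function layer uses similarities to randomly chosen landmark points; in both cases the linear readout is fitted in closed form. All names (`W`, `b`, `landmarks`, the Gaussian similarity) are illustrative choices, not the paper's specific method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])

D = 50  # dimensionality of the remapped space

# --- ELM-style mapping: random, untrained hidden neurons ---
W = rng.normal(size=(5, D))   # random input weights (fixed)
b = rng.normal(size=D)        # random biases (fixed)
H = np.tanh(X @ W + b)        # explicit nonlinear remapping

# Linear separator fitted in closed form via the pseudo-inverse
beta_elm = np.linalg.pinv(H) @ y

# --- Similarity-function mapping: features are similarities to landmarks ---
landmarks = X[rng.choice(len(X), size=D, replace=False)]
dists = np.linalg.norm(X[:, None, :] - landmarks[None, :, :], axis=2)
S = np.exp(-dists ** 2)       # Gaussian similarity to each landmark

beta_sim = np.linalg.pinv(S) @ y
```

In both branches only the final linear weights are learned; the paper's contribution concerns how the fixed mapping layer (the analogue of `W`, `b` or of the landmark choice) is set up, which this sketch leaves entirely random.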
               