Data from real-world regression problems are quite often contaminated with outliers. To handle such undesirable samples efficiently, robust parameter estimation methods have been incorporated into randomized neural network (RNN) models, usually replacing the ordinary least squares (OLS) method. Despite recent successful applications to outlier-contaminated scenarios, significant issues remain unaddressed in the design of reliable outlier-robust RNN models for regression tasks. For example, the number of hidden neurons directly affects the norm of the estimated output weights, since the OLS solution relies on a potentially ill-conditioned hidden-layer output matrix. Another design concern is the high sensitivity of RNNs to the randomization of the hidden-layer weights, an issue that can be suitably handled, e.g., by intrinsic plasticity techniques. Bearing these concerns in mind, we describe several ideas introduced in previous works concerning the design of RNN models that are both robust to outliers and numerically stable. A comprehensive performance evaluation is carried out across several benchmark regression datasets, taking accuracy, weight norms, and training time as figures of merit.
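To make the setup concrete, the following is a minimal sketch (not taken from the paper) of an ELM-style randomized network on outlier-contaminated data. It contrasts the OLS output-weight solution with one robust alternative, Huber M-estimation solved by iteratively reweighted least squares (IRLS) with a small ridge term for numerical stability; the helper name `huber_irls` and all hyperparameter values are illustrative assumptions, not the authors' specific method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data with a few gross outliers injected.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sinc(3 * X).ravel() + 0.05 * rng.standard_normal(200)
y[rng.choice(200, 10, replace=False)] += 5.0  # outliers

# Random hidden layer (ELM-style RNN): weights drawn once, never trained.
n_hidden = 100
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)  # hidden-layer output matrix (N x n_hidden)

# OLS output weights: sensitive both to outliers and to the
# ill-conditioning of H as n_hidden grows.
beta_ols = np.linalg.lstsq(H, y, rcond=None)[0]

def huber_irls(H, y, delta=1.0, ridge=1e-3, n_iter=30):
    """Huber M-estimate of the output weights via IRLS (illustrative).

    A ridge term keeps the reweighted normal equations well conditioned.
    """
    beta = np.linalg.lstsq(H, y, rcond=None)[0]  # start from OLS
    for _ in range(n_iter):
        r = y - H @ beta
        # Huber weights: 1 for small residuals, downweight large ones.
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
        Hw = H * w[:, None]  # row-weighted design matrix
        beta = np.linalg.solve(Hw.T @ H + ridge * np.eye(H.shape[1]),
                               Hw.T @ y)
    return beta

beta_rob = huber_irls(H, y)
print("||beta_ols|| =", np.linalg.norm(beta_ols))
print("||beta_rob|| =", np.linalg.norm(beta_rob))
```

On data like this, the robust fit typically yields a noticeably smaller output-weight norm than OLS, since the downweighted outliers no longer force large weight values, which is exactly the kind of figure of merit the evaluation tracks.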
               