
Deep Neural Network Regularization for Feature Selection in Learning-to-Rank


Learning-to-rank is an emerging area of research with a wide range of applications. Many algorithms have been devised to tackle the learning-to-rank problem, but very few of them employ deep learning, even though previous research shows that deep learning brings significant improvements in a variety of applications. The proposed model uses a deep neural network for learning-to-rank in document retrieval, together with a regularization technique particularly suited to deep neural networks that significantly improves the results. The regularization serves three aims: optimizing the weights of the neural network, selecting relevant features through active neurons at the input layer, and pruning the network by retaining only active neurons in the hidden layers during learning. Specifically, we use group $\ell_{1}$ regularization to induce group-level sparsity on the network's connections, where the set of outgoing weights from each hidden-layer neuron forms a group. The sparsity of the network is measured by the sparsity ratio and compared with learning-to-rank models that adopt the embedded method for feature selection. An extensive experimental evaluation considers the performance of the extended $\ell_{1}$ regularization technique against classical regularization techniques. The empirical results confirm that sparse group $\ell_{1}$ regularization achieves competitive performance while simultaneously making the network compact with fewer input features. The model is analyzed with respect to evaluation measures such as prediction accuracy, NDCG@n, MAP, and Precision on benchmark datasets, and demonstrates improved results over other state-of-the-art methods.
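The group-level penalty and the sparsity ratio described in the abstract can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the function names are invented, and it assumes each row of a weight matrix holds the outgoing weights of one neuron, so the group $\ell_{1}$ (i.e. $\ell_{2,1}$) penalty is the sum of per-row Euclidean norms:

```python
import numpy as np

def group_l1_penalty(W):
    """Group l1 (l2,1) penalty on a weight matrix.

    Assumes row i of W holds the outgoing weights of neuron i,
    so each row is one regularization group. The penalty is the
    sum of the l2 norms of the rows, which drives entire rows
    (whole neurons) toward zero rather than individual weights.
    """
    return float(np.sum(np.linalg.norm(W, axis=1)))

def sparsity_ratio(W, tol=1e-6):
    """Fraction of neurons whose entire outgoing-weight group is
    (numerically) zero, i.e. neurons pruned by the regularizer."""
    group_norms = np.linalg.norm(W, axis=1)
    return float(np.mean(group_norms < tol))

# Example: one active neuron (row [3, 4]) and one pruned neuron (row [0, 0]).
W = np.array([[3.0, 4.0],
              [0.0, 0.0]])
print(group_l1_penalty(W))   # -> 5.0 (the l2 norm of the only nonzero row)
print(sparsity_ratio(W))     # -> 0.5 (half the neurons are fully zero)
```

Because the penalty is applied to whole rows, minimizing it zeroes out all outgoing weights of unhelpful neurons at once; applied at the input layer, this is what performs feature selection.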

Keywords: neural network; learning to rank; regularization; feature selection

Journal Title: IEEE Access
Year Published: 2019


