
Robust and Sparse Linear Discriminant Analysis via an Alternating Direction Method of Multipliers



In this paper, we propose a robust linear discriminant analysis (RLDA) through Bhattacharyya error bound optimization. RLDA considers a nonconvex problem with the $L_1$-norm operation, which makes it less sensitive to outliers and noise than the $L_2$-norm linear discriminant analysis (LDA). In addition, we extend our RLDA to a sparse model (RSLDA). Both RLDA and RSLDA can extract an unbounded number of features and avoid the small sample size (SSS) problem, and an alternating direction method of multipliers (ADMM) is used to cope with the nonconvexity of the proposed formulations. Compared with the traditional LDA, our RLDA and RSLDA are more robust to outliers and noise, and RSLDA can obtain sparse discriminant directions. These findings are supported by experiments on artificial data sets as well as human face databases.
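
The paper's exact Bhattacharyya-bound objective is not reproduced in this abstract, so the sketch below only illustrates the general ADMM recipe the abstract refers to: split the smooth part of the objective from the nonsmooth $L_1$ term, then alternate a closed-form update, a soft-thresholding (proximal) step, and a dual update. The least-squares (optimal-scoring) surrogate for the discriminant direction, the label coding, and every name (`sparse_discriminant_admm`, `lam`, `rho`) are illustrative assumptions, not the authors' RLDA/RSLDA formulation.

```python
# Illustrative ADMM sketch for an L1-regularized discriminant direction.
# NOT the paper's Bhattacharyya-bound objective; it only demonstrates the split
#     min_w  f(w) + lam * ||z||_1   s.t.  w = z
# on a least-squares surrogate for a two-class discriminant direction.
import numpy as np

def soft_threshold(v, kappa):
    """Element-wise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def sparse_discriminant_admm(X, y, lam=0.5, rho=1.0, n_iter=200):
    """ADMM loop for  min_w (1/2n)||t - Xc w||^2 + lam*||w||_1,
    a regression surrogate for a two-class discriminant direction with an L1
    sparsity term. All names and defaults here are illustrative choices."""
    n, d = X.shape
    # +/- class-proportion targets, a common LDA-as-regression label coding
    t = np.where(y == 1, n / np.sum(y == 1), -n / np.sum(y == 0))
    Xc = X - X.mean(axis=0)                      # center the features

    w = np.zeros(d); z = np.zeros(d); u = np.zeros(d)
    G = Xc.T @ Xc / n + rho * np.eye(d)          # quadratic term, factorized once
    L = np.linalg.cholesky(G)
    b = Xc.T @ t / n
    for _ in range(n_iter):
        # w-update: smooth quadratic subproblem, closed form via the Cholesky factor
        rhs = b + rho * (z - u)
        w = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step that enforces sparsity on the direction
        z = soft_threshold(w + u, lam / rho)
        # dual ascent on the consensus constraint w = z
        u = u + w - z
    return z                                     # sparse discriminant direction

# Tiny usage example on synthetic two-class data (only the first 2 features matter)
rng = np.random.default_rng(0)
X0 = rng.normal(size=(50, 10))
X1 = rng.normal(size=(50, 10)); X1[:, :2] += 2.0
X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)
print(np.round(sparse_discriminant_admm(X, y), 3))  # nonzeros concentrate on features 0 and 1
```

The same splitting pattern carries over when the smooth term is a nonconvex robustness criterion rather than a least-squares loss: only the w-update changes, while the sparsity-inducing proximal step and the dual update keep the same form.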

Keywords: linear discriminant; discriminant analysis; rlda

Journal Title: IEEE Transactions on Neural Networks and Learning Systems
Year Published: 2020

