This paper addresses the problem of feature selection for Multi-class Support Vector Machines. Two models involving the $\ell_0$ (zero norm) and the $\ell_2$–$\ell_0$ regularizations are considered, for which two continuous approaches based on DC (Difference of Convex functions) programming and DCA (DC Algorithms) are investigated. The first is a DC approximation via several sparsity-inducing functions; the second is an exact reformulation approach using penalty techniques. Twelve DCA-based algorithms are developed, and extensive computational experiments are carried out. Numerical results on real-world datasets show the efficiency and superiority of our methods over one of the best standard algorithms in both feature selection and classification.
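For orientation, a schematic form of the two regularized models and of the generic DCA iteration is sketched below in LaTeX (assuming amsmath). The multi-class SVM loss $L_{\mathrm{MSVM}}$, the weight matrix $W$, and the parameters $\lambda,\mu$ are generic placeholders; this is not the paper's exact formulation or DC decomposition.

% Schematic sketch only; the paper's precise models and DC decompositions may differ.
\begin{align*}
  &\text{($\ell_0$ model)}            && \min_{W,b}\ L_{\mathrm{MSVM}}(W,b) + \lambda\,\lVert W\rVert_0,\\
  &\text{($\ell_2$--$\ell_0$ model)}  && \min_{W,b}\ L_{\mathrm{MSVM}}(W,b) + \mu\,\lVert W\rVert_2^2 + \lambda\,\lVert W\rVert_0,\\
  &\text{(DCA step for } f = g - h\text{, with } g, h \text{ convex)}
      && y^k \in \partial h(x^k), \qquad
         x^{k+1} \in \operatorname*{arg\,min}_{x}\ \bigl\{\, g(x) - \langle x,\, y^k\rangle \,\bigr\}.
\end{align*}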
               