Abstract Due to its wide applications in imbalanced learning, directly optimizing AUC has gained increasing interest in recent years. Compared with traditional batch learning methods, which often suffer from poor scalability, it is more challenging to design an efficient AUC maximization algorithm for large-scale data sets, especially when the dimension of the data is also high. To address this issue, this paper proposes an adaptive stochastic gradient method for AUC maximization, termed AMAUC. Specifically, the algorithm adopts a mini-batch framework and uses a projected gradient method for the inner optimization. To further improve performance, an adaptive learning rate updating strategy is also suggested, in which second-order gradient information is utilized to provide feature-wise updates. Empirical studies on large-scale benchmark and high-dimensional data sets demonstrate the efficiency and effectiveness of the proposed AMAUC.
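The abstract does not spell out the exact AMAUC update rules, so the following is only a minimal illustrative sketch of the ingredients it names: mini-batch stochastic optimization of a pairwise AUC surrogate, an AdaGrad-style per-feature (second-order) step size, and a projected gradient step. The surrogate (a pairwise squared loss), the L2-ball projection, and all names and parameters (amauc_sketch, radius, eta, labels in {+1, -1}) are assumptions for illustration, not the authors' method.

```python
import numpy as np

def amauc_sketch(X, y, radius=10.0, eta=0.1, batch_size=64, epochs=5, eps=1e-8):
    """Hypothetical sketch: mini-batch AUC maximization with a pairwise
    squared surrogate, AdaGrad-style feature-wise learning rates, and
    projection onto an L2 ball of the given radius.

    X: (n, d) feature matrix; y: labels in {+1, -1} (assumed encoding).
    """
    n, d = X.shape
    w = np.zeros(d)
    G = np.zeros(d)  # accumulated squared gradients (second-order information)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            pos, neg = Xb[yb == 1], Xb[yb == -1]
            if len(pos) == 0 or len(neg) == 0:
                continue  # a pairwise surrogate needs both classes in the batch
            # Pairwise squared surrogate: mean over (i, j) of (1 - w.(x_i - x_j))^2
            diffs = pos[:, None, :] - neg[None, :, :]          # (P, N, d)
            margins = diffs @ w                                # (P, N)
            grad = (-2.0 * (1.0 - margins)[..., None] * diffs).mean(axis=(0, 1))
            G += grad ** 2                                     # feature-wise curvature proxy
            w -= eta * grad / (np.sqrt(G) + eps)               # adaptive per-feature step
            norm = np.linalg.norm(w)
            if norm > radius:                                  # projected gradient step
                w *= radius / norm
    return w

# Usage on synthetic data (illustration only):
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = np.where(X[:, 0] + 0.5 * rng.normal(size=500) > 0, 1, -1)
w = amauc_sketch(X, y)
```

The per-feature division by sqrt(G) is the standard AdaGrad-style way to exploit accumulated squared gradients, which matches the abstract's description of feature-wise updates driven by second-order gradient information; the L2-ball projection stands in for whatever constraint set the paper's projected gradient step actually uses.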