L1-norm Laplacian support vector machine for data reduction in semi-supervised learning.
- Abstract:
Laplacian support vector machine (LapSVM) is a popular semi-supervised learning method. Unfortunately, the model generated by LapSVM lacks sparsity. A sparse decision model is desirable because it enables data reduction and can improve performance. To obtain a sparse variant of LapSVM, we propose an ℓ1-norm Laplacian support vector machine (ℓ1-norm LapSVM), which replaces the ℓ2-norm in LapSVM with the ℓ1-norm. The ℓ1-norm LapSVM combines two sparsity-inducing techniques: ℓ1-norm regularization and the hinge loss function. We discuss both the linear and the nonlinear (kernel) cases of the ℓ1-norm LapSVM. In the linear ℓ1-norm LapSVM, only features with nonzero coefficients contribute to the decision; in other words, the linear ℓ1-norm LapSVM performs feature selection to achieve data reduction. The nonlinear (kernel) ℓ1-norm LapSVM likewise implements data reduction through sample selection. In addition, the optimization problem of the ℓ1-norm LapSVM is a convex quadratic program, so the ℓ1-norm LapSVM has a unique and global solution. Experimental results on semi-supervised classification tasks show that our ℓ1-norm LapSVM achieves comparable performance. [ABSTRACT FROM AUTHOR]
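The abstract does not include the authors' formulation, but the sparsity mechanism it describes can be illustrated on the supervised subproblem: with the graph-Laplacian term omitted, minimizing the hinge loss plus an ℓ1 penalty on a linear classifier is a linear program, and its solution zeroes out uninformative features. The sketch below (function name, data, and the `lam` trade-off parameter are illustrative choices, not the paper's code; this is not the full ℓ1-norm LapSVM, which adds a Laplacian term over unlabeled points and becomes a convex QP) solves that LP with SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def l1_hinge_svm(X, y, lam=1.0):
    """Linear classifier with hinge loss and an L1 penalty, solved as an LP.

    min_{w,b,xi,u}  sum(xi) + lam * sum(u)
    s.t.  y_i (w . x_i + b) >= 1 - xi_i,  xi >= 0,  -u <= w <= u,
    where u_j upper-bounds |w_j|, so sum(u) = ||w||_1 at the optimum.
    """
    n, d = X.shape
    # Variable order: [w (d), b (1), xi (n), u (d)].
    c = np.concatenate([np.zeros(d + 1), np.ones(n), lam * np.ones(d)])

    # Margin constraints rewritten as: -y_i x_i.w - y_i b - xi_i <= -1.
    A_margin = np.hstack([-y[:, None] * X, -y[:, None],
                          -np.eye(n), np.zeros((n, d))])
    b_margin = -np.ones(n)

    # |w_j| <= u_j split into w_j - u_j <= 0 and -w_j - u_j <= 0.
    A_abs_pos = np.hstack([np.eye(d), np.zeros((d, 1 + n)), -np.eye(d)])
    A_abs_neg = np.hstack([-np.eye(d), np.zeros((d, 1 + n)), -np.eye(d)])

    A_ub = np.vstack([A_margin, A_abs_pos, A_abs_neg])
    b_ub = np.concatenate([b_margin, np.zeros(2 * d)])

    # w and b are free; slacks xi and bounds u are nonnegative.
    bounds = [(None, None)] * (d + 1) + [(0, None)] * (n + d)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    w, b = res.x[:d], res.x[d]
    return w, b

# Toy data: only feature 0 determines the label, the rest are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
X[:, 0] = np.sign(X[:, 0]) * (0.5 + np.abs(X[:, 0]))  # keep a margin in feature 0
y = np.sign(X[:, 0])

w, b = l1_hinge_svm(X, y, lam=0.5)
```

On such data the ℓ1 penalty drives the coefficients of the noise features toward zero while keeping weight on the informative one, which is the feature-selection effect the abstract attributes to the linear ℓ1-norm LapSVM.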
Copyright of Neural Computing & Applications is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)