Kernels and Ensembles: Perspectives on Statistical Learning.

  • Author(s): Mu Zhu
  • Source:
    American Statistician, May 2008, Vol. 62, Issue 2, pp. 97-109 (13 pages; 2 diagrams, 2 charts, 3 graphs).
  • Abstract:
    Since their emergence in the 1990s, the support vector machine and the AdaBoost algorithm have spawned a wave of research in statistical machine learning. Much of this new research falls into one of two broad categories: kernel methods and ensemble methods. In this expository article, I discuss the main ideas behind these two types of methods, namely how to transform linear algorithms into nonlinear ones by using kernel functions, and how to make predictions with an ensemble or a collection of models rather than a single model. I also share my personal perspectives on how these ideas have influenced and shaped my own research. In particular, I present two recent algorithms that I have invented with my collaborators: LAGO, a fast kernel algorithm for unbalanced classification and rare target detection; and Darwinian evolution in parallel universes, an ensemble method for variable selection. [ABSTRACT FROM AUTHOR]
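As a rough, illustrative sketch of the two ideas named in the abstract (this code is not from the article), the snippet below uses scikit-learn: the same support vector machine is made nonlinear by swapping a linear kernel for an RBF kernel, and AdaBoost makes predictions with a collection of weak learners rather than a single model. The dataset (two interleaving moons) and all parameters are arbitrary choices for illustration.

```python
# Sketch of the two ideas from the abstract (assumes scikit-learn is installed):
# 1) a kernel function turns a linear classifier into a nonlinear one;
# 2) an ensemble (AdaBoost) predicts with many weak models rather than one.
from sklearn.datasets import make_moons
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic nonlinear classification problem.
X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Kernel method: identical SVM machinery, linear vs. RBF kernel.
linear_svm = SVC(kernel="linear").fit(X_tr, y_tr)
rbf_svm = SVC(kernel="rbf").fit(X_tr, y_tr)

# Ensemble method: AdaBoost combines many weak learners (decision stumps).
boosted = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("linear SVM accuracy    :", linear_svm.score(X_te, y_te))
print("RBF-kernel SVM accuracy:", rbf_svm.score(X_te, y_te))
print("AdaBoost accuracy      :", boosted.score(X_te, y_te))
```

On data like this, the RBF-kernel SVM and the boosted ensemble typically separate the classes much better than the purely linear model, which is the contrast the abstract's two themes are about.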