Greedy Algorithms for Classification -- Consistency, Convergence Rates, and Adaptivity
Authors
Shie Mannor, Ron Meir, Tong Zhang
Abstract
Many regression and classification algorithms proposed over the years can be described as greedy procedures for the stagewise minimization of an appropriate cost function. Some examples include additive models, matching pursuit, and boosting. In this work we focus on the classification problem, for which many recent algorithms have been proposed and applied successfully. For a specific regularized form of greedy stagewise optimization, we prove consistency of the approach under rather general conditions. Focusing on specific classes of problems, we provide conditions under which our greedy procedure achieves the (nearly) minimax rate of convergence, implying that the procedure cannot be improved in a worst-case setting. We also construct a fully adaptive procedure, which, without knowing the smoothness parameter of the decision boundary, converges at the same rate as if the smoothness parameter were known.
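As a concrete illustration of the approach described in the abstract, here is a minimal sketch of greedy stagewise minimization of a surrogate loss over an additive model. The decision stumps as base class, the exponential loss, and the small fixed step size (standing in for the paper's regularization) are all assumptions of this sketch, not the paper's exact procedure.

```python
# Minimal sketch: greedy stagewise minimization of the exponential loss
# over an additive model built from axis-aligned decision stumps.
# Illustrative assumptions throughout; not the paper's regularized procedure.
import numpy as np

def stump_predictions(X):
    """Enumerate signed axis-aligned stumps as candidate base hypotheses."""
    n, d = X.shape
    hs = []
    for j in range(d):
        for t in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                hs.append(s * np.where(X[:, j] <= t, 1.0, -1.0))
    return np.array(hs)                      # shape: (num_stumps, n)

def greedy_stagewise(X, y, n_steps=50, step=0.1):
    """Labels y in {-1, +1}; `step` is a small shrinkage factor that
    plays the role of regularization in this sketch."""
    H = stump_predictions(X)
    F = np.zeros(len(y))                     # current additive model on the sample
    for _ in range(n_steps):
        w = np.exp(-y * F)                   # pointwise exponential-loss weights
        k = int(np.argmax(H @ (w * y)))      # base hypothesis most aligned with
                                             # the negative loss gradient
        F += step * H[k]                     # small greedy stagewise update
    return F                                 # sign(F) classifies the sample
```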
Similar Papers
Boosting with Early Stopping: Convergence and Consistency
Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize an empirical loss function in a greedy fashion. The resulting estimator takes an additive functional form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previ...
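The snippet above centers on stopping the greedy loop early; below is a minimal sketch of one common stopping rule, monitoring held-out exponential loss. The `boost_step` callable and its `predict` interface are hypothetical placeholders for a greedy boosting update such as the one sketched earlier, not an API from the cited work.

```python
# Minimal sketch: early stopping for a greedy boosting loop, monitored
# on a held-out set. `boost_step` is a hypothetical callable that takes
# the current model (or None) plus training data and returns an updated
# model exposing a `predict` method with real-valued margin outputs.
import numpy as np

def fit_with_early_stopping(boost_step, X_tr, y_tr, X_val, y_val,
                            max_steps=500, patience=10):
    model, best_loss, best_model, stale = None, np.inf, None, 0
    for _ in range(max_steps):
        model = boost_step(model, X_tr, y_tr)
        val_loss = np.mean(np.exp(-y_val * model.predict(X_val)))  # held-out loss
        if val_loss < best_loss:
            best_loss, best_model, stale = val_loss, model, 0
        else:
            stale += 1
            if stale >= patience:            # stop once validation loss stalls
                break
    return best_model
```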
Approximation and learning by greedy algorithms
We consider the problem of approximating a given element f from a Hilbert space H by means of greedy algorithms and the application of such procedures to the regression problem in statistical learning theory. We improve on the existing theory of convergence rates for both the orthogonal greedy algorithm and the relaxed greedy algorithm, as well as for the forward stepwise projection algorithm. ...
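For reference, here is a minimal sketch of the orthogonal greedy algorithm (orthogonal matching pursuit) over a finite dictionary; the normalized columns and least-squares re-projection are assumptions of this sketch, not details taken from the cited paper.

```python
# Minimal sketch: the orthogonal greedy algorithm (orthogonal matching
# pursuit) over a finite dictionary D with (assumed) normalized columns.
import numpy as np

def orthogonal_greedy(D, f, n_steps):
    """Greedily pick the atom most aligned with the residual, then
    re-project f onto the span of all selected atoms."""
    residual, selected = f.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(n_steps):
        k = int(np.argmax(np.abs(D.T @ residual)))    # best-aligned atom
        if k not in selected:
            selected.append(k)
        coef, *_ = np.linalg.lstsq(D[:, selected], f, rcond=None)
        residual = f - D[:, selected] @ coef          # orthogonal re-projection
    return selected, coef
```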
Convergence Rates for Greedy Kaczmarz Algorithms
We discuss greedy and approximate greedy selection rules within Kaczmarz algorithms for solving linear systems. We show that in some applications the costs of greedy and randomized rules are similar, and that greedy selection gives faster convergence rates. Further, we give a multi-step analysis of a particular greedy rule showing it can be much faster when many rows are orthogonal.
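Below is a minimal sketch of a Kaczmarz iteration with one simple greedy selection rule (largest absolute residual entry). The cited paper analyzes several such rules; this particular rule and the consistency assumption on Ax = b are choices made here for illustration.

```python
# Minimal sketch: Kaczmarz iterations with a greedy row-selection rule
# (largest absolute residual entry), assuming a consistent system Ax = b
# with nonzero rows.
import numpy as np

def greedy_kaczmarz(A, b, n_iters=1000):
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)        # squared row norms, computed once
    for _ in range(n_iters):
        r = b - A @ x
        i = int(np.argmax(np.abs(r)))        # greedy rule: most violated equation
        x += (r[i] / row_norms[i]) * A[i]    # project onto {x : a_i^T x = b_i}
    return x
```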
Forward Backward Greedy Algorithms for Multi-Task Learning with Faster Rates
A large body of algorithms has been proposed for multi-task learning. However, the effectiveness of many multi-task learning algorithms depends heavily on the structural regularization, which incurs bias in the resulting estimators and leads to a slower convergence rate. In this paper, we aim to develop a multi-task learning algorithm with a faster convergence rate. In particular, we propose a g...
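To make the forward-backward idea concrete, here is a minimal single-task sketch in the FoBa style: forward steps add the feature with the largest loss reduction, and a backward step deletes a feature whose removal costs little relative to the latest gain. The multi-task structure and the specific thresholds of the cited paper are not reproduced; `backward_tol` is an illustrative parameter.

```python
# Minimal single-task sketch of a forward-backward greedy (FoBa-style)
# feature-selection loop with a squared loss. Illustrative only.
import numpy as np

def _sq_loss(X, y, S):
    """Least-squares loss after fitting on feature subset S."""
    if not S:
        return float(y @ y)
    coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    r = y - X[:, S] @ coef
    return float(r @ r)

def foba(X, y, max_features, backward_tol=0.5, max_iters=50):
    S, loss = [], float(y @ y)
    for _ in range(max_iters):
        if len(S) >= max_features:
            break
        # Forward step: add the feature giving the largest loss reduction.
        new_loss, j = min((_sq_loss(X, y, S + [j]), j)
                          for j in range(X.shape[1]) if j not in S)
        gain = loss - new_loss
        S, loss = S + [j], new_loss
        # Backward step: delete an earlier feature if removing it costs
        # much less than the latest forward gain.
        for k in S[:-1]:
            drop_loss = _sq_loss(X, y, [s for s in S if s != k])
            if drop_loss - loss < backward_tol * gain:
                S.remove(k)
                loss = drop_loss
                break
    return S
```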
Feature Clustering for Accelerating Parallel Coordinate Descent
Large-scale ℓ1-regularized loss minimization problems arise in high-dimensional applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. High-performance algorithms and implementations are critical to efficiently solving these problems. Building upon previous work on coordinate descent algorithms for ℓ1-regularized probl...
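For context, here is a minimal sketch of plain cyclic coordinate descent for the ℓ1-regularized least-squares (lasso) objective; the cited work concerns parallel variants and feature clustering, which are not shown. Nonzero columns of X are an assumption of this sketch.

```python
# Minimal sketch: cyclic coordinate descent for the lasso objective
#   (1 / (2 * n)) * ||y - X @ beta||^2 + lam * ||beta||_1,
# assuming nonzero columns of X. Parallel variants are not shown.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=100):
    n, d = X.shape
    beta = np.zeros(d)
    col_sq = np.sum(X * X, axis=0)           # squared column norms
    r = y - X @ beta                         # residual, kept in sync with beta
    for _ in range(n_sweeps):
        for j in range(d):
            r += X[:, j] * beta[j]           # remove coordinate j's contribution
            z = X[:, j] @ r                  # correlation with partial residual
            beta[j] = soft_threshold(z, lam * n) / col_sq[j]
            r -= X[:, j] * beta[j]           # restore with updated coefficient
    return beta
```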
Journal: Journal of Machine Learning Research
Volume: 4, Issue: -
Pages: -
Published: 2003