In this paper we study boosting methods from a new perspective. We build on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an <i>L</i><sub>1</sub> constraint. For the two most commonly used loss criteria (exponential and logistic log-likelihood), we further show that as the constraint diminishes, or equivalently as the boosting iterations proceed, the solution converges in the separable case to an “<i>L</i><sub>1</sub>-optimal” separating hyper-plane. This “<i>L</i><sub>1</sub>-optimal” separating hyper-plane has the property of maximizing the minimal margin of the training data, as defined in the boosting literature. We illustrate through examples the regularized and asymptotic behavior of the solutions to the classification problem with both loss criteria.
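To make the abstract's claim concrete, the following is a minimal sketch of the constrained problem and its limit; the notation (a coefficient vector $\beta$ over weak learners $h$, a constraint level $c$) is introduced here for illustration and is not taken from the abstract itself. The regularized problem is

$$
\hat{\beta}(c) \;=\; \arg\min_{\|\beta\|_1 \le c} \;\sum_{i=1}^{n} L\bigl(y_i,\, \beta^{\top} h(x_i)\bigr),
$$

where $L$ is the exponential or logistic log-likelihood loss. In the separable case, as $c \to \infty$ (i.e., as the constraint diminishes),

$$
\frac{\hat{\beta}(c)}{c} \;\longrightarrow\; \arg\max_{\|\beta\|_1 = 1}\; \min_{i}\; y_i\, \beta^{\top} h(x_i),
$$

so the normalized solution maximizes the minimal <i>L</i><sub>1</sub>-normalized margin of the training data, which is the sense in which the limiting hyper-plane is “<i>L</i><sub>1</sub>-optimal.”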