Recently, visual tracking has been formulated as a classification problem whose task is to detect the object in the scene with a binary classifier. Boosting-based online feature selection methods, which adapt the classifier to appearance changes by choosing the most discriminative features, have been demonstrated to be effective for visual tracking. A major problem of such online feature selection methods is that an inaccurate classifier may produce imprecise tracking windows. Tracking error accumulates when the tracker trains the classifier with misaligned samples, which eventually leads to drifting. We propose separability-maximum boosting (SMBoost), an alternative form of AdaBoost that characterizes the separability between the object and the scene by their means and covariance matrices. SMBoost needs only the means and covariance matrices during training, and can therefore be easily adapted to online learning problems by estimating these statistics incrementally. Experiments on UCI machine learning datasets show that SMBoost is as accurate as offline AdaBoost and significantly outperforms Oza's online boosting. The more accurate classifier stabilizes the tracker on challenging video sequences. Empirical results also demonstrate improvements in terms of tracking precision and speed when comparing our tracker to state-of-the-art methods.
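Since SMBoost requires only the class means and covariance matrices, and these can be estimated incrementally as new samples arrive, the online statistics update can be sketched as follows. This is a minimal illustration using a Welford-style update; the class name and API are illustrative, not taken from the paper:

```python
import numpy as np

class OnlineStats:
    """Incrementally estimate the mean and covariance of streaming
    feature vectors, e.g. samples of the object or scene class."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        # Running sum of outer products of deviations (Welford-style).
        self.M2 = np.zeros((dim, dim))

    def update(self, x):
        """Fold one new feature vector into the running statistics."""
        x = np.asarray(x, dtype=float)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)

    def covariance(self):
        """Unbiased sample covariance estimate (needs n >= 2)."""
        if self.n < 2:
            return np.zeros_like(self.M2)
        return self.M2 / (self.n - 1)
```

With such an estimator per class, the boosting objective can be re-evaluated after every frame without storing past samples, which is what makes the online setting tractable.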