Visual tracking is a challenging computer vision problem with numerous practical applications. We propose a tracking framework based on convolutional feature selection to improve accuracy and robustness. First, we investigate how features extracted from different layers of a convolutional neural network affect visual tracking performance. Second, we learn correlation filters on the outputs of each layer to encode the target appearance, and design a fluctuation detection technique that selects the appropriate convolutional layers; this improves target localization precision and avoids drift caused by challenging factors such as occlusion and appearance variation. Third, we present an improved model update strategy that retains reliable positive samples while discarding corrupted ones. Extensive experiments on the OTB-2013 and OTB-2015 benchmarks demonstrate that the proposed algorithm performs favorably against several state-of-the-art trackers.
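To illustrate the correlation-filter machinery the abstract refers to, the sketch below trains a single-channel filter by ridge regression in the Fourier domain (in the style of MOSSE/KCF trackers) and scores the response map with a peak-to-sidelobe ratio as one plausible fluctuation measure for gating layer selection and model updates. This is a minimal sketch under our own assumptions: `gaussian_label`, `psr`, and all parameter values are illustrative choices, not details taken from the paper.

```python
import numpy as np

def gaussian_label(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak, shifted so it lies at the origin."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def train_filter(feat, label, lam=1e-2):
    """Closed-form ridge regression in the Fourier domain (single feature channel)."""
    F, G = np.fft.fft2(feat), np.fft.fft2(label)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, feat):
    """Correlate a search patch with the learned filter; return the response map."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(feat)))

def psr(resp, exclude=5):
    """Peak-to-sidelobe ratio: a simple response-fluctuation score.
    A low value can flag occlusion, so the model update would be skipped."""
    py, px = np.unravel_index(np.argmax(resp), resp.shape)
    mask = np.ones_like(resp, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    side = resp[mask]
    return (resp.max() - side.mean()) / (side.std() + 1e-8)

# Usage: one feature map standing in for a CNN layer output.
rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 64))
H = train_filter(feat, gaussian_label(feat.shape))
resp = detect(H, feat)
dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
# Peak at (0, 0): the target has not moved; a high PSR indicates a confident response.
```

In a multi-layer tracker of the kind the abstract describes, one such filter would be learned per selected convolutional layer, and a fluctuation score like the PSR would decide which layers' responses to trust for localization.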