An extreme learning machine (ELM) is a feedforward neural network with a single hidden layer, similar in structure to a multilayer perceptron (MLP). To avoid the costly training process of an MLP under the traditional backpropagation algorithm, the weights between the input and hidden layers of an ELM are assigned randomly and left untrained. The output layer of the ELM is linear, as in a radial basis function neural network (RBFNN), so the output weights can be estimated with a least squares solution. We have demonstrated in our previous work that the computational cost of ELM is much lower than that of the standard support vector machine (SVM), and that a kernel version of ELM can offer performance comparable to SVM. In our previous work, we also investigated the impact of the number of hidden neurons on ELM performance: more hidden neurons are generally needed when the number of training samples and the data dimensionality are large, which leads to a very large matrix inversion problem. To avoid handling such a large matrix, we propose conducting band selection to reduce the data dimensionality (i.e., the number of input neurons), thereby reducing network complexity. Experimental results show that ELM using the selected bands can achieve classification accuracy similar to, or even better than, that obtained with all the original bands.