We present an unsupervised parallel approach, the annealed Hopfield neural network (AHNN), with a new cooling schedule for vector quantization in image compression. The main purpose is to combine the characteristics of neural networks with an annealing strategy so that online learning and hardware implementation of vector quantization become feasible. The idea is to cast the clustering problem as a minimization problem in which the criterion for optimal vector quantization is the minimization of the average distortion between the training vectors and the codevectors. Although the simulated annealing method can yield the global minimum, it is very time consuming, requiring an asymptotically large number of iterations. In addition, to solve the optimization problem with Hopfield or simulated annealing neural networks, the designer must determine weighting factors that combine the penalty terms. The quality of the final result is very sensitive to these weighting factors, and feasible values for them are difficult to find. Using the AHNN for vector quantization eliminates the need to find weighting factors in the energy function, which is formulated on the basis of the ‘‘within-class scatter matrix’’ principle. In addition, the rate of convergence of the AHNN is much faster than that of simulated annealing. Experimental results show that the AHNN obtains better and more promising solutions than the LBG algorithm in image vector quantization. The convergence rates under various cooling schedules are also discussed.
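The average-distortion criterion mentioned above can be illustrated with a minimal sketch; the function name and toy data below are our own for illustration, not the paper's implementation. Each training vector is quantized to its nearest codevector, and the criterion is the mean squared error of that assignment:

```python
import numpy as np

def average_distortion(train, codebook):
    """Average squared-error distortion between training vectors and
    their nearest codevectors -- the quantity a VQ design minimizes."""
    # Pairwise squared Euclidean distances, shape (N_train, N_codevectors)
    d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    # Quantize each training vector to its nearest codevector, then average
    return d2.min(axis=1).mean()

# Toy example: four 2-D training vectors, two codevectors
train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
codebook = np.array([[0.05, 0.0], [0.95, 1.0]])
print(average_distortion(train, codebook))  # -> 0.0025
```

A clustering method such as the AHNN or the LBG algorithm searches for the codebook (equivalently, the partition of the training set) that minimizes this quantity.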