Most traditional evaluation criteria for object classification are based on the error rate, which assumes that the costs of all misclassification errors are equal. However, the human visual system judges such errors unequally. How to design performance evaluation criteria that account for these inequalities is a key issue in mimicking human vision for object classification. We propose human confusion costs, derived from the statistical confusions of human subjects on the training and test sets, for model learning and performance evaluation, respectively, in the generic cost-sensitive object classification problem. Unlike manually designed costs, the proposed costs better reflect the properties of human vision. Experimental results on a new data set annotated by 20 subjects demonstrate the superiority of the proposed costs over other applicable costs.
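As a rough illustration of the idea of deriving costs from statistical human confusions, the sketch below estimates a cost matrix from per-subject label choices. The function name, the input layout, and the specific mapping from confusion frequency to cost (cost = 1 minus the empirical human confusion probability, with zero cost on the diagonal) are assumptions for illustration only, not the paper's actual formulation.

```python
import numpy as np

def human_confusion_costs(human_labels, true_labels, n_classes):
    """Hypothetical sketch: estimate a cost matrix from subjects' labels.

    human_labels: array of shape (n_samples, n_subjects), each entry the
    class index chosen by one subject for that sample.
    true_labels: array of shape (n_samples,) of ground-truth class indices.
    Returns an (n_classes, n_classes) matrix in which confusions that
    human subjects also make frequently incur lower cost.
    """
    counts = np.zeros((n_classes, n_classes))
    for labels, t in zip(human_labels, true_labels):
        for l in labels:
            counts[t, l] += 1
    # Row-normalize to get empirical human confusion probabilities:
    # conf[i, j] ~ P(a subject says class j | true class is i).
    conf = counts / counts.sum(axis=1, keepdims=True)
    # One simple (assumed) choice: the cost of predicting j for true
    # class i is 1 - conf[i, j]; correct predictions cost nothing.
    cost = 1.0 - conf
    np.fill_diagonal(cost, 0.0)
    return cost
```

Under this assumed mapping, an error that most subjects also commit is nearly free, while an error no subject makes carries full unit cost.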