Traditional ear recognition methods yield good results in constrained environments, but recognition rates fall considerably under unconstrained conditions such as pose variation, illumination changes, background clutter, and occlusion. Soft biometrics, by contrast, are well suited to unconstrained environments and have been exploited in some previous ear recognition studies; however, that work has focused either on constrained acoustic soft biometrics or on body-based soft biometrics. Therefore, instead of dealing with each source of unconstrained variation individually, we investigate the utility of ear-based soft biometrics extracted from unconstrained ear images. The proposed system fuses traditional ear recognition with ear-based soft biometric traits using Bayesian fusion. The traditional ear recognition system is based on two texture-based feature extraction methods and support vector machine classification, while the soft biometrics system is based on skin color, hair color, and mole location extracted from unconstrained ear images. Experiments conducted on an unconstrained ear database, Annotated Web Ears, show an average 5.8% improvement in recognition results when ear-based soft biometric traits are fused with traditional ear recognition.
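As a rough illustration of the Bayesian fusion step, the following sketch combines per-identity likelihoods from a primary ear matcher with likelihoods derived from soft-biometric traits under a conditional-independence assumption. The function name, the likelihood inputs, and the toy values are our own assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def bayesian_fusion(primary_likelihoods, soft_likelihoods, priors=None):
    """Fuse per-identity likelihoods from a primary matcher and a
    soft-biometric classifier, assuming the two sources are
    conditionally independent given the identity (naive Bayes fusion)."""
    primary = np.asarray(primary_likelihoods, dtype=float)
    soft = np.asarray(soft_likelihoods, dtype=float)
    if priors is None:
        # uniform prior over the enrolled identities
        priors = np.full(primary.shape, 1.0 / primary.size)
    posterior = priors * primary * soft   # Bayes-rule numerator per identity
    return posterior / posterior.sum()    # normalize over all identities

# Toy example with 3 enrolled identities (values are invented):
primary = [0.6, 0.3, 0.1]  # likelihoods from the texture-based ear matcher
soft = [0.5, 0.2, 0.3]     # likelihoods from skin/hair color, mole location
post = bayesian_fusion(primary, soft)
```

In this sketch the soft-biometric evidence reweights the primary matcher's scores, so an identity that agrees with both sources dominates the posterior.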