In many computer vision applications, an object can be described by multiple features obtained from different views. For instance, to characterize an image well, a variety of visual features is extracted to represent its color, texture, and shape information, and each feature is encoded as a vector. Recently, there has been a surge of interest in combining multiview features for image recognition and classification. However, these features typically lie in different high-dimensional spaces, which makes feature fusion challenging, and many conventional methods fail to integrate the compatible and complementary information from multiple views. To address these issues, we propose a multifeature fusion framework that integrates features through multiview spectral embedding and a unified distance metric; an alternating optimization learns the complementary relations among the different views. The method exploits the complementary properties of the different views and yields a single low-dimensional embedding from subspaces of different dimensionality. Experiments on several benchmark datasets verify the excellent performance of the proposed method.
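To make the fusion scheme concrete, the following is a minimal sketch of the general multiview-spectral-embedding idea described above, not the paper's exact algorithm: each view contributes a graph Laplacian, and an alternating optimization switches between solving an eigenproblem under fixed view weights and re-estimating the weights from the current shared embedding. All function names, the kNN graph construction, and the weight-update rule (the standard closed-form update for an exponent r > 1) are illustrative assumptions.

```python
import numpy as np

def view_laplacian(X, k=5):
    # Hypothetical helper: build a kNN Gaussian affinity graph for one view
    # and return its symmetrically normalized Laplacian.
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma = np.median(d2) + 1e-12
    W = np.exp(-d2 / sigma)
    np.fill_diagonal(W, 0.0)
    # Keep only the k nearest neighbours of each point, then symmetrize.
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    mask = np.zeros_like(W, dtype=bool)
    mask[np.repeat(np.arange(n), k), idx.ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)
    Dinv = 1.0 / np.sqrt(np.maximum(W.sum(1), 1e-12))
    return np.eye(n) - (Dinv[:, None] * W) * Dinv[None, :]

def multiview_spectral_embedding(views, dim=2, r=2.0, n_iter=20):
    """Alternating optimization: fix view weights -> eigen-solve for the
    shared embedding; fix the embedding -> reweight the views."""
    Ls = [view_laplacian(X) for X in views]
    alpha = np.full(len(Ls), 1.0 / len(Ls))  # nonnegative weights, sum to 1
    for _ in range(n_iter):
        L = sum(a ** r * Lv for a, Lv in zip(alpha, Ls))
        # Smallest nontrivial eigenvectors give the shared embedding Y.
        _, vecs = np.linalg.eigh(L)
        Y = vecs[:, 1:dim + 1]
        # Closed-form weight update: views whose Laplacian the embedding
        # fits well (small trace cost) receive larger weight.
        cost = np.array([np.trace(Y.T @ Lv @ Y) for Lv in Ls])
        alpha = np.maximum(cost, 1e-12) ** (1.0 / (1.0 - r))
        alpha /= alpha.sum()
    return Y, alpha
```

In this sketch the exponent r keeps the weight update from collapsing onto a single view, so complementary information from all views can shape the common low-dimensional embedding.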