This paper presents an extension to our previously developed fusion framework, involving a depth camera and an inertial sensor, that improves its view invariance for real-time human action recognition. A computationally efficient view estimation based on skeleton joints is used to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision-making probability. Experimental results on a multi-view human action dataset show that this weighted extension improves recognition performance by about 5% over the equally weighted fusion used in our previous framework.
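The weighted combination of the two classifiers' outputs can be sketched as follows. This is a minimal illustration of the fusion step only; the function name, the weight value, and the example probabilities are hypothetical, and the paper's method for choosing the weights is not reproduced here.

```python
import numpy as np

def fuse_decisions(p_depth, p_inertial, w_depth=0.6):
    """Weighted fusion of per-class probabilities from two classifiers.

    p_depth, p_inertial: 1-D arrays of class probabilities from the
    depth-feature and inertial-feature classifiers, respectively.
    w_depth: weight assigned to the depth classifier (hypothetical value;
    only the combination step is shown, not how the weights are chosen).
    """
    p_depth = np.asarray(p_depth, dtype=float)
    p_inertial = np.asarray(p_inertial, dtype=float)
    p_fused = w_depth * p_depth + (1.0 - w_depth) * p_inertial
    # Renormalize and report the most probable action class.
    return p_fused / p_fused.sum(), int(np.argmax(p_fused))

# Example with 3 action classes: the two classifiers disagree, and the
# weighted sum resolves the decision.
p_fused, label = fuse_decisions([0.2, 0.7, 0.1], [0.4, 0.3, 0.3], w_depth=0.6)
```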
Depth maps captured by Kinect depth cameras are widely used for 3D action recognition. However, such images are often noisy and contain missing pixels, or black holes. This paper presents a computationally efficient method for both denoising and hole-filling in depth images. Denoising is achieved by combining Gaussian kernel filtering and anisotropic filtering; hole-filling is achieved by combining morphological filtering and zero block filtering. Experimental results on publicly available datasets indicate that the developed method outperforms three existing methods in terms of both depth error and computational efficiency.
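The hole-filling idea can be illustrated with a simplified sketch: replace each zero-valued (missing) depth pixel with the median of the valid pixels in a small window around it. This is an illustrative stand-in, not the paper's morphological/zero block filtering; the function name and window size are assumptions.

```python
import numpy as np

def fill_holes(depth, ksize=3):
    """Fill zero-valued pixels (holes) in a depth map with the median of
    valid neighbors in a ksize x ksize window.

    Simplified illustration of neighborhood-based hole-filling; the actual
    method in the paper combines morphological and zero block filtering.
    """
    out = depth.astype(float).copy()
    r = ksize // 2
    for y, x in zip(*np.nonzero(depth == 0)):
        # Clip the window at the image borders.
        win = depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        valid = win[win > 0]
        if valid.size:
            out[y, x] = np.median(valid)
    return out

# Example: a single hole surrounded by valid depth values.
depth = np.array([[5, 5, 5],
                  [5, 0, 5],
                  [5, 5, 5]])
filled = fill_holes(depth)
```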
This paper presents a dynamic classifier selection approach for hyperspectral image classification, in which both spatial and spectral information are used to determine a pixel's label once the number of already-classified pixels in its neighborhood meets a threshold. For volumetric texture feature extraction, a volumetric gray-level co-occurrence matrix is used; for spectral feature extraction, a minimum estimated abundance covariance-based band selection is used. Two hyperspectral remote sensing datasets, HYDICE Washington DC Mall and AVIRIS Indian Pines, are employed to evaluate the performance of the developed method. Compared with traditional algorithms using spectral information alone, the classification accuracies on the two datasets improve by 1.13% and 4.47%, respectively. The experimental results demonstrate that integrating spectral information with volumetric texture features improves classification performance for hyperspectral images.
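The counting that underlies a gray-level co-occurrence matrix can be sketched in two dimensions. The paper uses a volumetric (3-D) GLCM over the hyperspectral cube; this 2-D sketch with a single pixel offset is a simplified illustration of the same counting, and the function name and parameters are assumptions.

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """2-D gray-level co-occurrence matrix for a single (dy, dx) offset.

    image: 2-D array of integer gray levels in [0, levels).
    Counts how often gray level i occurs at distance `offset` from gray
    level j. A volumetric GLCM extends this counting to 3-D offsets.
    """
    dy, dx = offset
    h, w = image.shape
    M = np.zeros((levels, levels), dtype=int)
    # Restrict the loop so that (y + dy, x + dx) stays inside the image.
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[image[y, x], image[y + dy, x + dx]] += 1
    return M

# Example: 2x3 image with two gray levels, horizontal neighbor offset.
img = np.array([[0, 0, 1],
                [1, 1, 0]])
M = glcm(img, levels=2)
```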