In this work, we address the problem of content identification, which we cast as a special case of multiclass classification. The conventional approach to identification is based on content fingerprinting, where a short binary content description, known as a fingerprint, is extracted from the content. We propose an alternative solution based on elements of machine learning theory and digital communications. As in binary content fingerprinting, a binary content representation is generated, here by a set of trained binary classifiers. We consider several training/encoding strategies and demonstrate that the proposed system can achieve the theoretical upper bounds on content identification performance. Experiments were carried out on both a synthetic dataset with varying parameters and the FAMOS dataset of microstructure images from consumer packages.
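As a rough illustration of the fingerprinting-based identification pipeline described above, the following minimal sketch replaces the trained binary classifiers with stand-in random hyperplanes (each producing one bit of the fingerprint) and identifies a query by minimum Hamming distance over the database. All names, dimensions, and the synthetic data are hypothetical and not the authors' actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each of M content items is a feature vector in R^d.
d, M, L = 64, 100, 32          # feature dim, database size, fingerprint bits
database = rng.standard_normal((M, d))

# Stand-in for "trained binary classifiers": L random hyperplanes;
# each classifier outputs one bit (the sign of a projection).
W = rng.standard_normal((L, d))

def fingerprint(x):
    """Binary content representation: one bit per classifier."""
    return (W @ x > 0).astype(np.uint8)

db_fp = np.array([fingerprint(x) for x in database])

def identify(query):
    """Return the index of the database item at minimal Hamming distance."""
    q = fingerprint(query)
    return int(np.argmin((db_fp != q).sum(axis=1)))

# A query is a mildly distorted version of database item 42.
query = database[42] + 0.1 * rng.standard_normal(d)
print(identify(query))
```

Under mild distortion only a few bits flip, so the true item's fingerprint remains the nearest in Hamming distance; stronger distortion degrades identification, which is the regime the theoretical performance limits characterize.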
Compressive Sensing (CS) has become one of the standard methods in face recognition due to the success of the family of Sparse Representation based Classification (SRC) algorithms. However, it has been shown that in some cases the locality of the dictionary codewords is more important than their sparsity. Moreover, sparse coding is not guaranteed to be local, which can lead to unstable solutions. We therefore consider the statistically optimal aspects of encoding that guarantee the best approximation of the query image over a dictionary incorporating varying acquisition conditions. We focus on the investigation, analysis, and experimental validation of the best robust classifier/predictor, considering frontal face image variability induced by noise, lighting, expression, pose, etc. We compare two image representations, a pixel-wise approximation and an overcomplete block-wise approximation, with two types of sparsity priors: in the first, all samples come from a single subject; in the second, all samples come from all subjects. Experiments on a publicly available dataset of low-resolution images show that several per-subject sparsity prior approximations match the results reported for SRC, and that our simple overcomplete block-wise approximation outperforms both the SRC and WSRC algorithms.
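For readers unfamiliar with SRC, the following minimal sketch shows the standard scheme the abstract builds on (the "all samples from all subjects" prior): the query is sparsely coded over the whole training dictionary, then assigned to the class whose atoms best reconstruct it. The data, dimensions, and the simple ISTA solver are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: n_sub subjects with k face images each, flattened to dim d.
n_sub, k, d = 5, 8, 50
means = rng.standard_normal((n_sub, d))            # one "mean face" per subject
train = np.vstack([m + 0.1 * rng.standard_normal((k, d)) for m in means])
labels = np.repeat(np.arange(n_sub), k)

def sparse_code(A, y, lam=0.5, n_iter=300):
    """ISTA: minimize 0.5*||y - A x||^2 + lam*||x||_1 by soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))         # gradient step on the fit
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # shrink
    return x

def src_classify(y, dictionary, labels):
    """Code the query over the full training dictionary ("all subjects" prior),
    then pick the class whose coefficients best reconstruct the query."""
    A = dictionary.T                               # columns are training faces
    x = sparse_code(A, y)
    residuals = [np.linalg.norm(y - A @ np.where(labels == c, x, 0.0))
                 for c in range(n_sub)]
    return int(np.argmin(residuals))

# A query close to subject 3 should be assigned to subject 3.
query = means[3] + 0.1 * rng.standard_normal(d)
print(src_classify(query, train, labels))
```

The per-subject prior mentioned in the abstract instead codes the query over each subject's samples separately; the class-wise residual comparison stays the same.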