A single-image super-resolution algorithm is presented that combines a learning-based method with the sparse
representation of signals. In the training phase, two dictionaries, one for high-resolution and one for low-resolution
patches, are trained jointly by exploiting the correlation between the sparse representations of high-resolution patches
and those of the corresponding low-resolution patches of the same image with respect to their dictionaries. In the
super-resolution phase, the sparse representation of each patch of the low-resolution image is found, and the
high-resolution image is produced from the corresponding coefficients and the high-resolution dictionary obtained above.
Because the learned dictionary is a compact representation of the patches, the method demands less computation. Three
experiments validate the algorithm.
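The reconstruction step above can be sketched as follows: find a sparse code of a low-resolution patch over the low-resolution dictionary, then apply the same coefficients to the high-resolution dictionary. This is a minimal illustration, not the paper's implementation: the dictionaries here are random stand-ins (a real pair would come from the joint training phase), and a simple orthogonal matching pursuit plays the role of the sparse coder.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: k-sparse code of y over dictionary D."""
    residual = y.astype(float).copy()
    chosen = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        chosen.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit coefficients over all chosen atoms, update the residual.
        coef, *_ = np.linalg.lstsq(D[:, chosen], y, rcond=None)
        residual = y - D[:, chosen] @ coef
    x[chosen] = coef
    return x

rng = np.random.default_rng(0)
# Stand-ins for the jointly trained dictionary pair (low-res atoms normalized).
D_low = rng.standard_normal((25, 100))
D_low /= np.linalg.norm(D_low, axis=0)
D_high = rng.standard_normal((100, 100))       # maps the same codes to 10x10 patches

y_low = rng.standard_normal(25)                # a flattened 5x5 low-resolution patch
alpha = omp(D_low, y_low, k=5)                 # sparse code w.r.t. D_low
patch_high = (D_high @ alpha).reshape(10, 10)  # same coefficients, high-res dictionary
```

The key property used is that a patch needs only a handful of active atoms, so the code `alpha` is cheap to compute and transfers directly between the two resolutions.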
The large number of image patches in a training set, which makes matching time-consuming, has become an obstacle to the
real-time application of example-based super-resolution. This paper proposes a method that clusters the training set to
accelerate the procedure. Before super-resolution, a clustering method partitions the middle-frequency components of the
training set into subsets. During super-resolution, the distances between each matching patch of the low-resolution
image and each subset of the training set are computed, and the subset with the minimum distance is selected for further
matching. This procedure continues until the best-matching patch is found. The high-frequency patch in the training set
associated with the found matching patch is taken as the search output and used for super-resolution of the target
image. Two examples illustrate the performance of the proposed algorithm: one uses a synthetic image obtained by
blurring and down-sampling an original image, and the other directly uses a real image. The results show that the
proposed method effectively reduces the computational complexity.
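The two-stage search above can be sketched as follows. This is a hypothetical illustration, not the paper's code: plain k-means stands in for the unspecified clustering method, and random vectors stand in for the middle-frequency/high-frequency patch pairs. A query is first matched against cluster centers, then exhaustively only within the winning subset.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns cluster centers and a label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels

rng = np.random.default_rng(1)
# Stand-ins for the training set: paired middle- and high-frequency patches.
mid = rng.standard_normal((1000, 25))
high = rng.standard_normal((1000, 25))
centers, labels = kmeans(mid, k=8)           # offline: partition into subsets

query = rng.standard_normal(25)              # mid-frequency patch from the LR image
# Online: compare against 8 centers instead of 1000 patches, then
# search only inside the nearest subset.
nearest = int(np.argmin(((centers - query) ** 2).sum(1)))
subset = np.flatnonzero(labels == nearest)
best = subset[np.argmin(((mid[subset] - query) ** 2).sum(1))]
hf_patch = high[best]                        # high-frequency output for reconstruction
```

The speed-up comes from replacing a full linear scan of the training set with one distance computation per cluster plus a scan of a single subset.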
In this paper, a system combining a mixture-of-Gaussians background model with the CamShift algorithm is designed to
detect and track a moving target automatically. For detection, the region containing the moving object is first
identified and extracted by the Gaussian mixture model, and the centroid of this region is taken as the center of the
initial tracking window. The color feature of the object is then extracted in this region, and the CamShift algorithm
calculates the exact location of the target and adjusts the size of the search window. During tracking, the object's
location is transmitted over a serial link to control the PTZ so that the object remains inside the scene at all times.
An experiment performed with the high-speed spherical camera E588/G3-HP validates the system.
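The detect-then-track pipeline can be sketched in simplified form. This is not the paper's system: a per-pixel threshold against a learned background stands in for the Gaussian mixture model, synthetic frames of a bright square stand in for the camera, and only the mean-shift core of CamShift is shown (CamShift additionally adapts the window size from the back-projection moments).

```python
import numpy as np

def frame_at(x):
    """Synthetic frame: a bright 30x30 square at column x on a dark background."""
    f = np.zeros((120, 160))
    f[40:70, x:x + 30] = 1.0
    return f

# 1) Detection: foreground = pixels far from the learned background model.
background = np.zeros((120, 160))            # learned from object-free frames
first = frame_at(20)
foreground = np.abs(first - background) > 0.5

# 2) The centroid of the foreground region initializes the search window.
ys, xs = np.nonzero(foreground)
cy, cx = ys.mean(), xs.mean()
win = 15                                     # half window size

def mean_shift(frame, cy, cx, win, iters=10):
    """Mean-shift core of CamShift: move the window to the centroid of the
    object likelihood (here, bright pixels) until it converges."""
    for _ in range(iters):
        y0, x0 = max(int(cy) - win, 0), max(int(cx) - win, 0)
        patch = frame[y0:int(cy) + win, x0:int(cx) + win]
        py, px = np.nonzero(patch > 0.5)
        if len(py) == 0:
            break
        cy, cx = y0 + py.mean(), x0 + px.mean()
    return cy, cx

# 3) Tracking: the object moves 6 px right per frame; (cy, cx) would drive the PTZ.
for step in range(1, 8):
    cy, cx = mean_shift(frame_at(20 + 6 * step), cy, cx, win)
```

Because the window overlaps the object after each small displacement, each mean-shift run re-locks onto the object's centroid before the next frame arrives.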
To achieve multi-frame super-resolution restoration of low-resolution images or video sequences with non-global motion,
an interpolation-filtering method based on the recently proposed nonlocal-means (NLM) filter is presented. First, each
frame of the sequence is interpolated to the desired resolution by cubic-spline or another algorithm. Then the NLM
filter is applied to these interpolated frames, extended from a single image in the spatial domain to multiple images
in the spatial and temporal domains simultaneously. Finally, the filtered high-resolution images are used to reconstruct
the desired images or sequence. Although NLM does not rely strongly on accurate motion estimation and is not sensitive
to non-global motion, a two-step coarse motion estimation is adopted to obtain motion vectors between frames, which
reduces the complexity and improves the results. The obtained vectors serve as initial candidate locations for the
weight computation and pixel filtering of NLM. Performance is tested on simulated data and on a real video sequence,
together with existing methods. The results show that the proposed technique successfully provides super-resolution on
general real sequences.
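The spatio-temporal NLM filtering step can be sketched per pixel: every patch in a search window, across all frames, contributes a weight that decays with its distance to the reference patch. This is a minimal sketch under stated assumptions: the frames here are noisy constant images standing in for the cubic-spline-interpolated inputs, and the coarse motion-estimation seeding of the search window is omitted (the window is simply centered on the pixel).

```python
import numpy as np

def nlm_pixel(frames, t, i, j, half=2, search=4, h=0.2):
    """Nonlocal-means estimate of pixel (i, j) in frame t, averaging over
    similar patches in a spatio-temporal search window across all frames."""
    T, H, W = frames.shape
    ref = frames[t, i - half:i + half + 1, j - half:j + half + 1]
    num, den = 0.0, 0.0
    for s in range(T):                       # temporal extension: search every frame
        for y in range(max(i - search, half), min(i + search, H - half - 1) + 1):
            for x in range(max(j - search, half), min(j + search, W - half - 1) + 1):
                patch = frames[s, y - half:y + half + 1, x - half:x + half + 1]
                # Weight decays with mean squared patch difference.
                w = np.exp(-((patch - ref) ** 2).mean() / h ** 2)
                num += w * frames[s, y, x]
                den += w
    return num / den

# Stand-ins for interpolated frames: a constant scene plus noise.
rng = np.random.default_rng(0)
frames = 0.5 * np.ones((3, 16, 16)) + 0.1 * rng.standard_normal((3, 16, 16))
est = nlm_pixel(frames, t=1, i=8, j=8)       # denoised estimate near 0.5
```

In the method described above, the coarse motion vectors would shift each frame's search window toward the corresponding region, so fewer candidates need to be weighted and the candidates are better matched.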