Frame rate up-conversion (FRC) is the process of converting video between frame rates to match a target display
format. Besides scanning-format applications for large displays, FRC can be used to increase the frame rate of
video at the receiver end for video telephony, video streaming, or playback applications on mobile platforms where
bandwidth savings are crucial. Many algorithms have been proposed for decoder/receiver-side FRC; however,
most of them approach the problem from a video encoding/decoding point of view. We have systematically studied
strategies for utilizing camera 3A (auto exposure, auto white balance, and auto focus) information to assist the
FRC process; in this paper we focus on the technique of using camera exposure information to assist decoder-side FRC.
In the proposed strategy, the exposure information, together with other camera 3A related information, is packetized
as metadata that is attached to the corresponding frame and transmitted together with the main video
bit stream to the decoder side for FRC assistance. The metadata contains information such as zooming, auto
focus, AE (auto exposure) and AWB (auto white balance) statistics, scene change detection, and global motion detected
from motion sensors. The proposed metadata consists of camera-specific information, which differs from simply
sending motion vectors or mode information to aid the FRC process. Compared to traditional FRC approaches used
on mobile platforms, the proposed approach is a low-complexity, low-power solution, which is crucial in
resource-constrained environments such as mobile platforms.
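The per-frame metadata described above can be sketched as a simple serializable record. The field names and the JSON serialization below are illustrative assumptions, not a published bitstream syntax; an actual system would likely carry this in a compact binary side channel (e.g., an SEI-like message) alongside each frame.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Camera3AMeta:
    """Hypothetical per-frame camera 3A metadata record (fields are illustrative)."""
    frame_index: int
    exposure_time_ms: float     # AE: sensor integration time
    analog_gain: float          # AE: sensor gain
    awb_gains: tuple            # AWB: (R, G, B) channel gains
    focus_position: int         # AF: lens position step
    zoom_ratio: float           # optical/digital zoom factor
    scene_change: bool          # front-end scene change flag
    global_motion: tuple        # (dx, dy) estimate from motion sensors

def packetize(meta: Camera3AMeta) -> bytes:
    """Serialize the metadata so it can travel with the frame's
    compressed data to the decoder for FRC assistance."""
    return json.dumps(asdict(meta)).encode("utf-8")

pkt = packetize(Camera3AMeta(0, 33.0, 2.0, (1.9, 1.0, 1.6), 120, 1.0, False, (0, 0)))
```

The decoder can then parse these fields to decide, for example, whether to interpolate a frame (stable exposure, no scene change) or simply repeat one (scene change flagged).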
Many scene change detection techniques have been developed for scene cuts, fade-ins, and fade-outs by analyzing
video encoder input signals. For real-time scene change detection, sensor input signals provide first-hand
information that can be used for scene change detection. In this paper, a novel scene change detection technique
is described that analyzes camcorder front-end sensor input signals with our proposed algorithms based on camera
3A (auto exposure, auto white balance, and auto focus). The camera 3A based scene change detection
algorithm can detect scene changes in a timely manner and is therefore well suited to real-time scene change
detection applications. Experimental results show that the algorithm detects scene changes with good accuracy.
The proposed algorithm is computationally efficient and easy to implement.
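As a minimal sketch of the idea, a detector can watch the frame-average luma reported by the AE statistics engine and flag a scene change when it jumps by more than a relative threshold. The threshold value and the luma-only criterion are assumptions for illustration; the algorithm described in the text also draws on AWB and AF statistics.

```python
def detect_scene_change(ae_luma_history, threshold=0.3):
    """Flag a scene change when the AE frame-average luma changes by
    more than `threshold` (relative) between consecutive frames.
    Illustrative sketch only; a deployed detector would combine
    AE, AWB, and AF statistics as described in the text."""
    flags = [False]  # no previous frame to compare against
    for prev, cur in zip(ae_luma_history, ae_luma_history[1:]):
        rel_change = abs(cur - prev) / max(prev, 1e-6)
        flags.append(rel_change > threshold)
    return flags

# A jump from luma 99 to 180 (indoor to outdoor cut, say) is flagged:
flags = detect_scene_change([100, 101, 99, 180, 182])
# -> [False, False, False, True, False]
```

Because the 3A statistics are computed by the camera front end anyway, this check adds essentially no computation, which is consistent with the low-complexity claim above.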
Compressed video is very sensitive to channel errors. A few bit losses can stop the entire decoding process.
Therefore, protecting compressed video is always necessary for reliable visual communications. In recent years,
Wyner-Ziv lossy coding has been used for error resilience and has achieved improvement over conventional
techniques. In our previous work, we proposed an unequal error protection algorithm for protecting data elements
in a video stream using a Wyner-Ziv codec. We also presented an improved method by adapting the parity
data rates of protected video information to the video content. In this paper, we describe a feedback-aided error
resilience technique based on Wyner-Ziv coding. By utilizing feedback on the current channel packet-loss
rate, a turbo coder can adaptively adjust the number of parity bits needed to correct corrupted slices at the
decoder. This results in an efficient use of the data rate budget for Wyner-Ziv coding while maintaining good
quality decoded video when the data has been corrupted by transmission errors.
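The feedback-driven adaptation can be sketched as a rate-control rule that scales the parity allocation with the reported packet-loss rate. The linear scaling and the factor `k` below are assumptions chosen for illustration; the actual scheme adapts the turbo coder's puncturing to the channel conditions.

```python
def parity_budget(base_bits: int, loss_rate: float, k: float = 4.0) -> int:
    """Scale the Wyner-Ziv parity allocation with the packet-loss rate
    reported over the feedback channel.

    base_bits  -- parity bits budgeted for a loss-free channel
    loss_rate  -- current packet-loss rate in [0, 1], from feedback
    k          -- sensitivity factor (illustrative assumption)
    """
    return int(base_bits * (1.0 + k * loss_rate))

# At 5% packet loss the encoder spends 20% more parity bits;
# on a clean channel it spends only the baseline budget.
```

The benefit is that parity bits are spent only when the channel actually needs them, which is where the rate-budget efficiency claimed above comes from.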
Compressed video is very sensitive to channel errors. A few bit losses can derail the entire decoding process. Thus, protecting compressed video is imperative for reliable visual communications. Since different elements in a compressed video stream vary in their impact on the quality of the decoded video, unequal error protection can be used to provide efficient protection. This paper describes an unequal error protection method for protecting data elements in a video stream via a Wyner-Ziv encoder that consists of a coarse quantizer and a turbo coder based lossless Slepian-Wolf encoder. Data elements that significantly impact the visual quality of the decoded video, such as the modes and motion vectors used by H.264, are given more parity bits than coarsely quantized transform coefficients. This improves the quality of the decoded video when the transmitted sequence is corrupted by transmission errors, compared with that obtained by equal error protection.
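The unequal allocation can be sketched as splitting a fixed parity budget across stream elements in proportion to importance weights. The element names and weights below are illustrative assumptions; the actual method derives the split from each element's impact on decoded visual quality.

```python
def allocate_parity(total_parity: int, element_weights: dict) -> dict:
    """Split a fixed parity-bit budget across stream elements in
    proportion to assumed importance weights, so that modes and motion
    vectors (high visual impact) get more protection than coarsely
    quantized transform coefficients. Weights are illustrative."""
    total_w = sum(element_weights.values())
    return {name: int(total_parity * w / total_w)
            for name, w in element_weights.items()}

# Hypothetical 3:1 weighting between modes/MVs and coefficients:
alloc = allocate_parity(10000, {"modes_mvs": 3.0, "coeffs": 1.0})
# -> {"modes_mvs": 7500, "coeffs": 2500}
```

In contrast, equal error protection would spread the same 10000 bits uniformly, leaving the perceptually critical modes and motion vectors under-protected.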
We investigate light concentration and field enhancement in nanometer-scale ridged aperture antennae. Recent numerical simulations have shown that nanoscale ridged apertures can concentrate light into the nanometer domain. Most importantly, these ridged apertures also provide an optical transmission enhancement several orders of magnitude higher than that of regularly shaped nanoscale apertures. We employ the finite-difference time-domain (FDTD) method to design these apertures and fabricate them in thin metal films. A home-built near-field scanning optical microscope (NSOM) is used to map the near-field intensity distribution of the light transmitted through these apertures. It is shown that the ridged apertures can produce a concentrated light spot far beyond the diffraction limit, with transmission enhancement orders of magnitude higher than that of regularly shaped apertures. Nanolithography applications of these nanoscale ridged aperture antennae are demonstrated.