Distributed video coding is commonly cast as a channel coding problem. The encoder generates
parity bits (or syndrome bits) for the source and transmits a portion of them to the decoder to meet a target quality.
The decoder attempts to reconstruct the source from the received parity bits together with the available side information.
In this paper, we aim to estimate the critical number of parity bits to transmit. Having observed the uncertainty
of this critical rate, we model it as a random variable, use its distribution to compute the decoding failure
probability, and formulate the expected distortion. We then allocate a given bit budget among the bit-planes
such that the expected distortion is minimized. Moreover, we introduce fast decoding at the encoder, which allows
us to estimate the critical rate far more accurately. Overall, we achieve up to 1.5 dB gain in rate-distortion
performance at high bit rates.
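The abstract does not specify the distribution of the critical rate or the allocation algorithm. As an illustrative sketch only, the following assumes a Gaussian model for each bit-plane's critical rate and a simple greedy allocation; all function names and parameters here (`expected_distortion`, `allocate`, `d_dec`, `d_fail`, the Gaussian assumption itself) are hypothetical, not taken from the paper.

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2); a hypothetical model of the critical rate."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_distortion(rate, mu, sigma, d_dec, d_fail):
    """Expected distortion when `rate` parity bits are sent: decoding is
    assumed to succeed iff rate >= critical rate (modeled as Gaussian),
    yielding distortion d_dec on success and d_fail on failure."""
    p_ok = gaussian_cdf(rate, mu, sigma)
    return p_ok * d_dec + (1.0 - p_ok) * d_fail

def allocate(budget, planes, step=1.0):
    """Greedy bit allocation among bit-planes: repeatedly grant `step` bits
    to the plane whose expected distortion drops the most.
    Each plane is a tuple (mu, sigma, d_dec, d_fail)."""
    rates = [0.0] * len(planes)
    spent = 0.0
    while spent + step <= budget:
        gains = []
        for i, (mu, sigma, d_dec, d_fail) in enumerate(planes):
            now = expected_distortion(rates[i], mu, sigma, d_dec, d_fail)
            nxt = expected_distortion(rates[i] + step, mu, sigma, d_dec, d_fail)
            gains.append(now - nxt)
        best = max(range(len(planes)), key=lambda i: gains[i])
        rates[best] += step
        spent += step
    return rates
```

Sending more bits than the expected critical rate trades rate for a lower failure probability; the greedy loop simply spends each increment where that trade-off pays off most.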
Most consumer digital color cameras capture video using a single chip. Single-chip cameras do not capture
RGB triples for every pixel but rather a subsampled version with only one color component per pixel (e.g., the Bayer
pattern). Conventionally, full-resolution video is constructed from the Bayer pattern by demosaicing before
being converted to the YUV domain for compression. To lower the encoding complexity, we propose in
this work a novel color space conversion in the pre-processing step. Compared to the conventional method,
the proposed scheme reduces the encoding complexity by almost half. Moreover, it improves the reconstructed
video quality by up to 1.5 dB in CPSNR when H.264/AVC is used for compression. To further lower the
encoding complexity, we additionally use our Wyner-Ziv video coder for compression. Again, our
experiments show a similar gain of the proposed scheme over the conventional one.
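For readers unfamiliar with the setup, the sketch below illustrates what Bayer subsampling keeps (one color sample per pixel, RGGB layout assumed) and how the CPSNR quality metric is computed. The helpers `bayer_subsample` and `cpsnr` are illustrative names, not the paper's implementation.

```python
import math

def bayer_subsample(rgb):
    """Keep one color sample per pixel following an RGGB Bayer pattern.
    `rgb` is an H x W list of (r, g, b) tuples; returns an H x W grid of
    scalars (R at even/even, B at odd/odd, G elsewhere)."""
    h, w = len(rgb), len(rgb[0])
    cfa = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if y % 2 == 0 and x % 2 == 0:
                c = 0  # red site
            elif y % 2 == 1 and x % 2 == 1:
                c = 2  # blue site
            else:
                c = 1  # green site
            cfa[y][x] = rgb[y][x][c]
    return cfa

def cpsnr(ref, rec):
    """Composite PSNR: one PSNR over the pooled MSE of all three channels
    of two H x W RGB frames with 8-bit samples."""
    h, w = len(ref), len(ref[0])
    sse = 0.0
    for y in range(h):
        for x in range(w):
            for c in range(3):
                d = ref[y][x][c] - rec[y][x][c]
                sse += d * d
    mse = sse / (3 * h * w)
    return float("inf") if mse == 0 else 10.0 * math.log10(255.0 ** 2 / mse)
```

Because the camera records only the CFA grid, any pipeline that demosaics before compression triples the pixel count the encoder must process, which is the complexity the proposed pre-processing avoids.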