Respiration-induced organ motion can limit the accuracy required for many clinical applications working on the
thorax or upper abdomen. One approach to reduce the uncertainty of organ location caused by respiration is
to use prior knowledge of breathing motion. In this work, we deal with the extraction and modeling of lung
motion fields based on free-breathing 4D-CT data sets of 36 patients. Since data was acquired for radiotherapy
planning, images of the same patient were available over different weeks of treatment. Motion field extraction is
performed using an iterative shape-constrained deformable model approach. From the extracted motion fields,
intra- and inter-subject motion models are built and adapted in a leave-one-out test. The created models
capture the motion of corresponding landmarks over the breathing cycle. Model adaptation is then performed
by assuming, as an example, that the diaphragm motion is known. Although respiratory motion shows a repetitive
character, it is known that patients' variability in breathing pattern impedes motion estimation. However, with
the created motion models, we obtained a mean error between the phases of maximal distance of 3.4 mm for the
intra-patient and 4.2 mm for the inter-patient study when assuming the diaphragm motion to be known.

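The adaptation step can be illustrated with a minimal sketch. Here it is assumed, hypothetically, that the model stores per-landmark displacement trajectories over the breathing cycle and that adaptation amounts to scaling them by the ratio of the patient's known diaphragm amplitude to the model's mean amplitude; the paper's actual adaptation scheme may differ:

```python
# Sketch: adapt a mean lung-motion model to a new patient by scaling
# with the patient's known diaphragm displacement (hypothetical scheme,
# not the paper's exact method).

def adapt_motion_model(mean_model, mean_diaphragm_amp, patient_diaphragm_amp):
    """mean_model: {landmark_id: [(dx, dy, dz) per breathing phase]}.
    Scale all modeled displacements by the ratio of the patient's
    diaphragm amplitude to the model's mean diaphragm amplitude."""
    scale = patient_diaphragm_amp / mean_diaphragm_amp
    return {
        lm: [(dx * scale, dy * scale, dz * scale) for (dx, dy, dz) in traj]
        for lm, traj in mean_model.items()
    }

# Toy model: one landmark, displacements at four breathing phases (mm).
mean_model = {"landmark_7": [(0.0, 0.0, 0.0), (1.0, 0.5, 4.0),
                             (2.0, 1.0, 8.0), (1.0, 0.5, 4.0)]}
adapted = adapt_motion_model(mean_model, mean_diaphragm_amp=10.0,
                             patient_diaphragm_amp=15.0)
print(adapted["landmark_7"][2])  # → (3.0, 1.5, 12.0)
```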
Respiratory motion is a complicating factor in radiation therapy, tumor ablation, and other treatments of the
thorax and upper abdomen. In most cases, the treatment requires precise knowledge of the location of
the organ under investigation. One approach to reduce the uncertainty of organ motion caused by breathing is
to use prior knowledge of the breathing motion. In this work, we extract lung motion fields of seven patients
in 4DCT inhale-exhale images using an iterative shape-constrained deformable model approach. Since data was
acquired for radiotherapy planning, images of the same patient over different weeks of treatment were available.
Although respiratory motion shows a repetitive character, it is well known that patients' variability in breathing
pattern impedes motion estimation. A detailed motion field analysis is performed in order to investigate the
reproducibility of breathing motion over the weeks of treatment. For that purpose, parameters significant
for breathing motion are derived. The analysis of the extracted motion fields provides a basis for subsequent
breathing motion prediction. Patient-specific motion models are derived by averaging the extracted motion
fields of each individual patient. The obtained motion models are adapted to each patient in a leave-one-out test
in order to simulate motion estimation on unseen data. By using patient-specific mean motion models, 60% of
the breathing motion can be captured on average.

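The mean-model construction and the leave-one-out evaluation can be sketched as follows, under the simplifying assumptions that a motion field is a list of per-landmark displacement vectors and that "captured motion" is the observed motion magnitude not left in the residual; the paper's actual metric may differ:

```python
# Sketch: patient-specific mean motion model with leave-one-out test.
# A "motion field" is simplified to one displacement vector (mm) per
# landmark; the captured-motion metric is illustrative only.
import math

def mean_field(fields):
    """Average a list of displacement fields component-wise."""
    n = len(fields)
    return [tuple(sum(f[i][c] for f in fields) / n for c in range(3))
            for i in range(len(fields[0]))]

def captured_fraction(model, observed):
    """Fraction of observed motion magnitude explained by the model."""
    resid = sum(math.dist(m, o) for m, o in zip(model, observed))
    total = sum(math.dist(o, (0.0, 0.0, 0.0)) for o in observed)
    return 1.0 - resid / total

# Motion fields of one patient from three treatment weeks, two landmarks.
weeks = [
    [(1.0, 0.0, 5.0), (0.5, 0.0, 3.0)],
    [(1.2, 0.0, 6.0), (0.6, 0.0, 3.5)],
    [(0.8, 0.0, 4.0), (0.4, 0.0, 2.5)],
]
for i, held_out in enumerate(weeks):
    model = mean_field([w for j, w in enumerate(weeks) if j != i])
    print(f"week {i}: {captured_fraction(model, held_out):.0%} captured")
```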
Standard video compression techniques apply motion-compensated prediction combined with transform coding of the prediction error. In the context of prediction with fractional-pel motion vector resolution, it was shown that aliasing components contained in an image signal limit the prediction accuracy obtained by motion compensation. In order to account for aliasing, quantisation and motion estimation errors, camera noise, etc., we analytically developed a two-dimensional (2D) non-separable interpolation filter, which is calculated for each frame independently by minimising the prediction error energy. For every fractional-pel position to be interpolated, an individual set of 2D filter coefficients is determined. Since transmitting filter coefficients as side information results in an additional bit rate, which is almost independent of the total bit rate and image resolution, the overall gain decreases as the total bit rate decreases. In this paper we present an algorithm which regards the non-separable two-dimensional filter as a polyphase filter. For each frame, by predicting the interpolation filter impulse response through evaluation of the polyphase filter, we only have to encode the prediction error of the filter coefficients. This enables savings of up to 75% in the bit rate needed for transmitting filter coefficients, compared to PCM coding. A coding gain of up to 1.2 dB Y-PSNR at the same bit rate, or up to 30% reduction of bit rate, is obtained for HDTV sequences compared to the standard H.264/AVC. Up to 0.5 dB (up to 10% bit rate reduction) is achieved for CIF sequences.
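The core filter computation, minimising the prediction error energy, reduces to a linear least-squares problem. A toy 1D illustration with a 2-tap filter for the half-pel position follows; the paper itself uses 2D non-separable filters, one coefficient set per fractional position:

```python
# Sketch: compute a fractional-pel interpolation filter by minimising
# the prediction error energy (Wiener-style least squares). Toy 1D
# case with a 2-tap half-pel filter; the real scheme is 2D.

def wiener_2tap(ref, target):
    """Solve the 2x2 normal equations for (h0, h1) minimising
    sum_i (target[i] - h0*ref[i] - h1*ref[i+1])**2."""
    m = len(target)
    a = sum(ref[i] * ref[i] for i in range(m))
    b = sum(ref[i] * ref[i + 1] for i in range(m))
    c = sum(ref[i + 1] * ref[i + 1] for i in range(m))
    d0 = sum(target[i] * ref[i] for i in range(m))
    d1 = sum(target[i] * ref[i + 1] for i in range(m))
    det = a * c - b * b
    return ((c * d0 - b * d1) / det, (a * d1 - b * d0) / det)

# If the true half-pel samples are exact midpoints, the optimal
# filter is the bilinear filter (0.5, 0.5).
ref = [10.0, 14.0, 11.0, 9.0, 13.0, 16.0]
target = [(ref[i] + ref[i + 1]) / 2 for i in range(len(ref) - 1)]
h = wiener_2tap(ref, target)
print(h)  # ≈ (0.5, 0.5)
```

With noisy or aliased target samples the solution deviates from the fixed standard filter, which is why recomputing it per frame (and coding only the coefficient prediction error) pays off.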
Much research has been undertaken in the area of streaming video across computer networks in general and the Internet in particular, but relatively little has been undertaken in the field of streaming 3-D wireframe animation. Despite superficial similarities, both being visual media, the two are significantly different. Different data passes across the network, so loss affects signal reconstruction differently. Regrettably, the perceptual effects of such loss have been poorly addressed in the context of animation to date, and much of the existing work in this field has relied on objective measures such as PSNR rather than measures that take subjective effects into account.
In this paper, we bring together concepts from a number of fields to address the problem of how to achieve optimal resilience to errors in terms of the perceptual effect at the receiver. To achieve this, we partition the animation stream into a number of layers and apply Reed-Solomon (RS) forward error correction (FEC) codes to each layer independently and in such a way as to maintain the same overall bitrate whilst minimizing the perceptual effects of error, as measured by a distortion metric derived from related work in
the area of static 3-D mesh compression.
Experimental results show the efficacy of our proposed scheme under varying network bandwidth and loss conditions for different layer partitionings. The results indicate that with the proposed Unequal Error Protection (UEP) combined with Error Concealment (EC) and efficient packetization scheme, we can achieve graceful degradation of streamed animations at higher packet loss rates than other approaches that do not cater for the visual importance of the layers and use only objective layering metrics. Our experiments also demonstrate how to tune the packetization parameters in order to achieve efficient layering with respect to the subjective metric of surface smoothness.

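The UEP idea can be sketched as a greedy allocation of RS parity packets across layers under a fixed total packet budget, so the overall bitrate is unchanged. The distortion weights and the i.i.d. loss model below are hypothetical placeholders for the perceptual distortion metric:

```python
# Sketch: unequal error protection across animation layers with RS codes.
# Greedily give each parity packet to the layer whose expected distortion
# drops most, keeping the total packet budget fixed.
from math import comb

def p_fail(k, r, p):
    """Probability an RS(k+r, k) layer cannot be decoded: more than r
    of its k+r packets are lost (i.i.d. loss probability p)."""
    n = k + r
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(r + 1, n + 1))

def allocate_parity(data_pkts, distortions, budget, p):
    """Greedy allocation of `budget` parity packets over the layers."""
    parity = [0] * len(data_pkts)
    for _ in range(budget):
        gains = [d * (p_fail(k, r, p) - p_fail(k, r + 1, p))
                 for k, r, d in zip(data_pkts, parity, distortions)]
        best = max(range(len(gains)), key=lambda i: gains[i])
        parity[best] += 1
    return parity

# Three layers: the coarse base layer matters most perceptually.
data_pkts = [4, 6, 10]          # data packets per layer
distortions = [10.0, 3.0, 1.0]  # hypothetical distortion if layer lost
parity = allocate_parity(data_pkts, distortions, budget=6, p=0.1)
print(parity)
```

A perceptually weighted variant simply replaces the `distortions` vector with values derived from the surface-smoothness metric.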
For object-oriented analysis-synthesis coding, an image analysis algorithm is required which automatically generates the parameter sets describing moving 3D objects in an image sequence. Areas changed between two consecutive images are detected by means of change detection. Special processing is carried out to compute areas which coincide with object boundaries and to eliminate areas which represent illumination changes. These areas are segmented into silhouettes of moving objects and uncovered background. The border of an object silhouette is interpreted as the outermost contour of an object. These contours, in combination with a simple function giving the z-distance between them, provide a first estimate of the 3D shape of a model object. In order to improve the efficiency of motion analysis, a concept for combining model objects with new parts of moving objects is proposed. Results of an automatic image analysis based on moving 3D objects are shown using video telephone test sequences.
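The first stage, change detection, can be sketched as simple frame differencing with a threshold; the actual algorithm additionally suppresses illumination changes and refines areas at object boundaries:

```python
# Sketch: change detection between two consecutive frames, the first
# step of the image analysis above. Plain per-pixel differencing with
# a threshold; illumination handling is omitted in this toy version.

def change_mask(frame_a, frame_b, threshold=10):
    """Mark pixels whose intensity changed by more than `threshold`."""
    return [[abs(a - b) > threshold for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

# Toy 4x4 frames: an "object" of intensity 200 moves one pixel right.
prev = [[0, 200, 0, 0],
        [0, 200, 0, 0],
        [0,   0, 0, 0],
        [0,   0, 0, 0]]
curr = [[0, 0, 200, 0],
        [0, 0, 200, 0],
        [0, 0,   0, 0],
        [0, 0,   0, 0]]
mask = change_mask(prev, curr)
changed = sum(cell for row in mask for cell in row)
print(f"{changed} changed pixels")  # new position plus uncovered background
```

The changed area covers both the object's new position and the background it uncovered, which is exactly the distinction the segmentation step then has to make.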