Image-based atlases of the rat brain have a significant impact on preclinical research. In this project we acquired T1-weighted images of Wistar rat brains at a fine 59 μm isotropic resolution for generation of the atlas template image. Using a semi-automatic brain extraction method in post-processing, we delineated the brain tissue from the source data. We then applied a symmetric group-wise normalization method to generate an optimized T1 template of the rat brain and aligned the template to the Waxholm Space. In addition, we defined several simple and explicit landmarks to register our template to the well-known Paxinos stereotaxic reference system. Anchoring at the origin of the Waxholm Space, we applied a piecewise-linear transformation to map the voxels of the template into Paxinos stereotaxic coordinates to facilitate the labelling task. We also cross-referenced our data with both published rat brain atlases and image atlases available online, methodically labelling the template to produce a Wistar brain atlas identifying more than 130 structures. Particular attention was paid to the cortex and cerebellum, as these areas encompass the most researched aspects of brain function. Moreover, we adopted the structure hierarchy and nomenclature common to the various atlases, so that the names and hierarchy presented in the atlas are readily recognised and easy to use. We believe the atlas will be a useful tool in functional and pharmaceutical studies of the rodent brain.
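The landmark-anchored piecewise-linear mapping into Paxinos coordinates can be sketched as below; the landmark pairs are illustrative placeholders along a single anterior-posterior axis, not the atlas's actual anchor points, and a real pipeline would apply such a map per axis in 3D:

```python
import numpy as np

# Hypothetical landmark pairs: template voxel indices (Waxholm-aligned)
# and their corresponding Paxinos bregma-relative AP coordinates in mm.
# The values are illustrative only.
template_ap_voxels = np.array([0.0, 120.0, 260.0, 400.0])
paxinos_ap_mm = np.array([6.0, 2.0, -4.0, -12.0])

def voxel_to_paxinos(v):
    """Piecewise-linear map of a template AP voxel index to a Paxinos AP
    coordinate, interpolating linearly between consecutive landmarks."""
    # np.interp requires the x-coordinates (voxel indices) to be increasing.
    return np.interp(v, template_ap_voxels, paxinos_ap_mm)
```

Because the transformation is only piecewise linear, each segment between landmarks can stretch or compress independently, which is what lets a small set of explicit landmarks reconcile two coordinate systems.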
The work presented in this paper includes two parts. First, we measured the detectability and annoyance of frame dropping's effect on perceived visual quality under different motion and frame-size conditions. Then, a new logistic function and an effective yet simple motion-content representation were selected to model, in one formula, the relationship among motion, frame rate, and the negative impact of frame dropping on visual quality. The high Pearson and Spearman correlations between the MOS and the predicted MOSp, as well as the results of two other error metrics, confirm the success of the selected logistic function and motion-content representation.
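A minimal sketch of how such a model can be fitted: the logistic form below, with a midpoint that shifts with a scalar motion measure, is an assumption for illustration and not the paper's actual formula, and the fit uses a coarse grid search on synthetic data rather than the paper's procedure:

```python
import numpy as np

def mos_model(f, m, a, b):
    # Illustrative logistic: predicted quality rises with frame rate f,
    # and higher motion m shifts the curve's midpoint rightward.
    return 1.0 / (1.0 + np.exp(-a * (f - b * m)))

# Synthetic "MOS" observations at three motion levels (not real data).
f = np.tile(np.linspace(5.0, 30.0, 6), 3)
m = np.repeat([1.0, 2.0, 3.0], 6)
mos = mos_model(f, m, 0.4, 5.0)

# Coarse grid search for (a, b) minimizing the squared prediction error.
grid_a = np.linspace(0.1, 1.0, 46)
grid_b = np.linspace(1.0, 8.0, 71)
best = min(((a, b) for a in grid_a for b in grid_b),
           key=lambda p: np.sum((mos_model(f, m, *p) - mos) ** 2))
```

The fitted predictions (MOSp) would then be compared against subjective MOS via Pearson and Spearman correlation, as the abstract describes.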
In this paper, we propose a new method for removing the coding artifacts that appear in JPEG 2000 coded images. The proposed method uses a fuzzy control model to adjust the weighting function for different image edges according to the pixel gradients and membership functions. A regularized post-processing approach and a recursive line algorithm are also described. Experimental results demonstrate that the proposed algorithm can significantly improve image quality in terms of both objective and subjective evaluation.
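The gradient-driven fuzzy weighting idea can be sketched as follows; the ramp membership function, its thresholds, and the box-blur smoother are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def edge_membership(grad, low=10.0, high=40.0):
    """Fuzzy membership of a pixel in the 'edge' class: ramps linearly
    from 0 below `low` to 1 above `high` (thresholds are illustrative)."""
    return np.clip((grad - low) / (high - low), 0.0, 1.0)

def fuzzy_weighted_smooth(img):
    """Smooth a grayscale image, attenuating the smoothing near edges
    so that artifact removal does not blur genuine image structure."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    w = 1.0 - edge_membership(grad)  # smooth strongly only in flat regions
    # Simple 4-neighbour average as a stand-in for the regularized filter.
    blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) + img) / 5.0
    return w * blurred + (1.0 - w) * img
```

The key property is that the weight varies continuously with the gradient, so there is no hard edge/non-edge decision boundary to introduce its own artifacts.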
This paper presents a new and general concept, the PQSM (Perceptual Quality Significance Map), to be used in measuring visual distortion. It makes use of the selectivity characteristic of the HVS (Human Visual System), which pays more attention to certain areas/regions of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and association with other media (e.g., speech or audio). The PQSM is an array whose elements represent the relative perceptual-quality significance levels of the corresponding areas/regions of an image or video. Due to its generality, the PQSM can be incorporated into any visual distortion metric: to improve the effectiveness and/or efficiency of perceptual metrics, or even to enhance a PSNR-based metric. A three-stage PQSM estimation method is also proposed in this paper, with an implementation of motion, texture, luminance, skin-color and face mapping. Experimental results show that the scheme can improve the performance of current image/video distortion metrics.
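How a PQSM can enhance a PSNR-based metric may be sketched as below; the normalization and the weighted-MSE formulation are illustrative assumptions, not the paper's exact incorporation scheme:

```python
import numpy as np

def pqsm_weighted_mse(ref, dist, pqsm):
    """Mean squared error with each pixel's contribution weighted by its
    relative perceptual significance. The PQSM entries are normalized to
    sum to 1, so uniform significance reduces to the ordinary MSE."""
    w = pqsm / pqsm.sum()
    err = ref.astype(float) - dist.astype(float)
    return float(np.sum(w * err ** 2))

def pqsm_weighted_psnr(ref, dist, pqsm, peak=255.0):
    # PSNR computed on the significance-weighted error instead of plain MSE.
    return 10.0 * np.log10(peak ** 2 / pqsm_weighted_mse(ref, dist, pqsm))
```

Errors in highly significant regions (e.g., faces) thus dominate the score, while equal-magnitude errors in ignored regions are discounted.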
In this paper, we propose a new video quality evaluation method based on multiple features and a radial basis function neural network. The features are extracted from a degraded image sequence and its reference sequence, and include error energy, activity masking and luminance masking, as well as blockiness and blurring features. Based on these features, we apply a radial basis function neural network as a classifier to give quality assessment scores. After training with the subjective mean opinion score (MOS) data of the VQEG test sequences, the neural network model can be used to evaluate video quality with good correlation performance in terms of accuracy and consistency measurements.
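The RBF-network stage can be sketched as a Gaussian-kernel regression from feature vectors to scores; the fixed centers, shared width, and least-squares output layer below are common RBF-training choices assumed for illustration, not necessarily the paper's configuration:

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF activations: one column per center, one row per sample."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y, centers, sigma):
    """Solve for the linear output weights by least squares, mapping
    hidden-layer activations to the target scores (e.g., MOS values)."""
    Phi = rbf_design(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, sigma, w):
    return rbf_design(X, centers, sigma) @ w
```

In this setting each sample would be a feature vector (error energy, masking, blockiness, blurring, ...) and the target its subjective MOS; only the output weights are trained, which keeps the fit a linear problem.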
In this paper, a just-noticeable-distortion (JND) profile based upon the human visual system (HVS) is exploited to guide the motion search and to introduce an adaptive filter for the residue error after motion compensation in hybrid video coding (e.g., H.26x and MPEG-x). Because of the importance of accurate JND estimation, a new spatial-domain JND estimator (the nonlinear additivity model for masking, NAMM for short) is first proposed. The obtained JND profile is then utilized to determine the extent of the motion search and whether a residue error after motion compensation needs to be cosine-transformed. Both theoretical analysis and experimental data indicate significant improvements in motion-search speedup, perceptual visual quality, and, most remarkably, objective quality (i.e., PSNR).
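The residue-handling decision can be sketched as below; thresholding each residue sample against its JND value and skipping the transform for all-sub-threshold blocks is an illustrative reading of the abstract, not the paper's exact adaptive filter:

```python
import numpy as np

def suppress_subthreshold_residue(residue, jnd):
    """Zero out residue samples whose magnitude is within the JND profile,
    since such errors are assumed perceptually invisible."""
    return np.where(np.abs(residue) <= jnd, 0.0, residue)

def block_needs_transform(residue, jnd):
    """Decide whether a motion-compensated residue block still needs to be
    cosine-transformed: skip the DCT when every sample is sub-threshold."""
    return bool(np.any(np.abs(residue) > jnd))
```

Skipping the transform and coding of perceptually invisible residue is what yields the speedup, while the per-pixel JND profile keeps the skipped error below visibility.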