The major hurdles currently preventing advances and innovation in thoracic insufficiency syndrome (TIS) assessment and treatment are the lack of standardizable, objective diagnostic measurement techniques that describe the 3D thoracoabdominal structures and the dynamics of respiration. Our goal is to develop, test, and evaluate a quantitative dynamic magnetic resonance imaging (QdMRI) methodology and a biomechanical understanding for deriving key quantitative parameters from free-tidal-breathing dMRI data that describe the 3D structure and dynamics of the thoracoabdominal organs of TIS patients. In this paper, we propose the idea of a shape sketch to codify and then quantify the overall thoracic architecture, which involves the selection of 3D landmark points and the computation of 3D dynamic distances over a respiratory cycle. We perform two statistical analyses of distance sketches on 25 TIS patients to understand the pathophysiological mechanisms in relation to spine deformity and to quantitatively evaluate improvements from the pre-operative to the post-operative state. The QdMRI methodology involves developing: (1) a 4D image construction method; (2) an algorithm for the 4D segmentation of thoracoabdominal structures; and (3) a set of key quantitative parameters. We show that the dynamic distance analysis produces previously unavailable results and precisely describes the morphologic and dynamic alterations of the thorax in TIS. A set of 3D thoracoabdominal distances and/or distance differences enables the precise estimation of key measures such as left-right differences, differences over tidal breathing, and differences from the pre- to the post-operative condition.
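As a minimal illustration of the dynamic-distance idea (using hypothetical landmark coordinates, not patient data), the computation reduces to Euclidean distances between paired 3D landmarks at each respiratory time point, with the excursion over the cycle summarizing the dynamics:

```python
import numpy as np

# Hypothetical example: 4 landmarks tracked over 8 time points of a
# respiratory cycle, each with (x, y, z) coordinates in millimetres.
rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 100, size=(8, 4, 3))  # (time, landmark, xyz)

def dynamic_distances(pts, pairs):
    """Euclidean distance for each landmark pair at every time point."""
    return np.stack(
        [np.linalg.norm(pts[:, i] - pts[:, j], axis=1) for i, j in pairs],
        axis=1,
    )

pairs = [(0, 1), (2, 3)]                 # e.g. paired left/right rib landmarks
d = dynamic_distances(landmarks, pairs)  # shape (time, n_pairs) = (8, 2)

# The range over the cycle quantifies dynamics, i.e. the change between
# end-inspiration and end-expiration for each distance.
excursion = d.max(axis=0) - d.min(axis=0)
```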
Pansharpening is an effective way to enhance the spatial resolution of a multispectral (MS) image by fusing it with the corresponding panchromatic image. Instead of restricting the coding coefficients of the low-resolution (LR) and high-resolution (HR) images to be equal, we propose a pansharpening approach via sparse regression in which the relationship between the sparse coefficients of the HR and LR MS images is modeled by ridge regression and elastic-net regression while the corresponding dictionaries are learned jointly. The compact dictionaries are learned from sampled patch pairs drawn from the high- and low-resolution images, and effectively characterize the structural information of the LR MS and HR MS images. Then, to account for the complex relationship between the coding coefficients of the LR MS and HR MS images, ridge regression is used to characterize the intra-patch relationship, while elastic-net regression describes the inter-patch relationship. The HR MS image can thus be reconstructed with high fidelity by multiplying the HR dictionary with the sparse coefficient vector computed through the learned regression relationship. Experimental results on simulated and real data show that the proposed method outperforms several well-known methods, both quantitatively and perceptually.
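The intra-patch ridge step has a closed form. A minimal sketch, assuming paired sparse codes are already available (the variable names and synthetic data below are illustrative, not the authors' implementation):

```python
import numpy as np

# Sketch of the intra-patch coefficient mapping. Assume we already have
# paired sparse codes: A_lr (n_atoms x n_patches) for LR patches and
# A_hr for the corresponding HR patches (synthetic here).
rng = np.random.default_rng(1)
A_lr = rng.standard_normal((64, 500))
W_true = rng.standard_normal((64, 64))
A_hr = W_true @ A_lr + 0.01 * rng.standard_normal((64, 500))

def ridge_map(A_lr, A_hr, lam=1e-2):
    """Closed-form ridge regression: W minimizing
    ||A_hr - W A_lr||_F^2 + lam ||W||_F^2."""
    k = A_lr.shape[0]
    return A_hr @ A_lr.T @ np.linalg.inv(A_lr @ A_lr.T + lam * np.eye(k))

W = ridge_map(A_lr, A_hr)
rel_err = np.linalg.norm(A_hr - W @ A_lr) / np.linalg.norm(A_hr)
# HR codes for a new LR patch are then predicted as W @ a_lr, and the HR
# patch reconstructed as D_hr @ (W @ a_lr) with the learned HR dictionary.
```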
A single-sensor camera captures scenes by means of a color filter array, so each pixel samples only one of the three primary colors. A color demosaicking (CDM) technique is then needed to produce full-color images, and we propose a robust, adaptive sparse representation model for high-quality CDM. The data fidelity term uses the l1 norm, together with an adaptively learned dictionary, to suppress heavy-tailed visual artifacts, while the regularization term encourages sparsity by forcing the sparse codes to stay close to their nonlocal means, thereby reducing coding errors. Based on the classical quadratic penalty function technique from optimization and an operator splitting method from convex analysis, we further present an effective iterative algorithm to solve the variational problem. The efficiency of the proposed method is demonstrated by experimental results on simulated and real camera data.
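Operator-splitting solvers of this kind alternate closed-form proximal steps. A minimal sketch of the core building block (illustrative, not the paper's exact algorithm): the proximal operator of the l1 norm is elementwise soft-thresholding, which handles both the l1 data-fidelity term and the pull toward the nonlocal-means estimate:

```python
import numpy as np

def soft_threshold(x, tau):
    """prox_{tau * ||.||_1}(x) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_l1_around(x, center, tau):
    """Proximal step for tau * ||x - center||_1, e.g. sparse codes
    pulled toward their nonlocal-means estimate `center`."""
    return center + soft_threshold(x - center, tau)

x = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
shrunk = soft_threshold(x, 1.0)   # [-1, 0, 0, 0, 2]: small entries zeroed
```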
We propose a superresolution (SR) method based on an example-learning framework. In this framework, the relationship between the output high-resolution (HR) estimate and the HR training images is approximated by the relationship between the low-resolution (LR) test image and the HR training images. To effectively capture the strong correlation between LR and HR images, both are mapped onto a common feature space. Furthermore, in order to preserve their original two-dimensional (2-D) spatial structure, the LR and HR patches are mapped onto the underlying common feature space using 2-D canonical correlation analysis. The relationship between HR and LR features is then established by partial least squares (PLS), with low regression errors on the derived feature space. In addition, a steering kernel regression (SKR) constraint is integrated into patch aggregation to improve the quality of the recovered images. Finally, the effectiveness of our approach is validated by extensive experimental comparisons with several SR algorithms on natural image superresolution, both quantitatively and qualitatively.
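To make the common-feature-space idea concrete, here is a minimal 1-D CCA sketch (illustrative only; the paper uses 2-D CCA, which operates on patch matrices directly to preserve spatial structure). The canonical correlations are the singular values of the whitened cross-covariance:

```python
import numpy as np

def cca(X, Y, reg=1e-6):
    """Canonical correlation analysis between paired samples X (n x p)
    and Y (n x q). Returns projections Wx, Wy and correlations."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):            # symmetric inverse square root
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    U, s, Vt = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy))
    return inv_sqrt(Cxx) @ U, inv_sqrt(Cyy) @ Vt.T, s

# Synthetic paired features that share one latent component z.
rng = np.random.default_rng(2)
z = rng.standard_normal((200, 1))
X = np.hstack([z, rng.standard_normal((200, 3))])
Y = np.hstack([z, rng.standard_normal((200, 2))])
Wx, Wy, corr = cca(X, Y)
# The shared component yields a leading canonical correlation near 1.
```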
Most existing superresolution (SR) techniques focus primarily on improving the quality of the luminance component of SR images, while paying less attention to the chrominance components. We present an edge- and color-preserving image SR approach. First, for the luminance channel, the heavy-tailed gradient distribution of natural images is used as an image prior, and an efficient optimization algorithm is developed to recover the latent high-resolution (HR) luminance component. Second, for the chrominance channels, we propose a two-stage framework for luminance-guided chrominance SR. In the first stage, since most shape and structural information is contained in the luminance channel, a simple Markov random field formulation is introduced to search for the optimal direction of local color interpolation, guided by the HR luminance component. To further improve the quality of the chrominance channels, in the second stage a nonlocal autoregression model is used to refine the initial HR chrominance. Finally, we combine the SR-reconstructed luminance component with the generated HR chrominance maps to obtain the final SR color image. Systematic experiments demonstrate that our method outperforms several state-of-the-art methods in terms of peak signal-to-noise ratio, structural similarity, feature similarity, and mean color error.
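A minimal sketch of a heavy-tailed gradient prior on the luminance channel (the hyper-Laplacian form with exponent alpha < 1 is a common choice for natural images; the exact prior used here is an illustrative assumption):

```python
import numpy as np

def gradient_prior_energy(img, alpha=0.8):
    """Sum |grad I|^alpha over horizontal and vertical finite differences;
    lower energy means the image is more plausible under the prior."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.sum(np.abs(gx) ** alpha) + np.sum(np.abs(gy) ** alpha)

# A piecewise-constant image with one sharp edge is cheaper under this
# prior than a noisy image with the same underlying contrast, which is
# why minimizing it favors clean, edge-preserving reconstructions.
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + 0.1 * np.random.default_rng(3).standard_normal((32, 32))
e_clean = gradient_prior_energy(clean)
e_noisy = gradient_prior_energy(noisy)
```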
The iterative regularization method proposed by Osher et al. for total-variation-based image denoising preserves textures well and has received considerable attention in the signal and image processing community in recent years. However, the iteration sequence generated by this method converges monotonically to the noisy image, so the iteration must be terminated at an "optimal" stopping index, which is difficult to choose in practice. To overcome this shortcoming, we propose a novel fractional-order iterative regularization model by introducing a fractional-order derivative. The new model can be regarded as an interpolation between the traditional total variation model and the traditional iterative regularization model. Numerical results demonstrate that, with a suitable derivative order, the denoised image sequence generated by this model converges within a few iterations to a denoised image with high peak signal-to-noise ratio and high structural similarity index, so the iteration can be terminated by commonly used stopping rules. Moreover, we propose an empirical method to choose the derivative order adaptively for partly textured images, improving both noise removal and texture preservation. The adaptive method has low computational cost and visibly improves the results.
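Fractional-order derivatives in such models are typically discretized with Gruenwald-Letnikov coefficients c_k = (-1)^k * C(alpha, k), which interpolate between integer-order difference stencils. A small sketch of the standard recurrence (the discretization choice is an assumption; the paper may use a different scheme):

```python
import numpy as np

def gl_coeffs(alpha, n):
    """First n Gruenwald-Letnikov coefficients for order alpha, via the
    stable recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

# alpha = 1 recovers the ordinary first difference stencil [1, -1, 0, ...],
# while fractional alpha yields a slowly decaying, nonlocal stencil.
c1 = gl_coeffs(1.0, 4)
c_half = gl_coeffs(0.5, 3)   # [1, -0.5, -0.125]
```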
In this paper, we propose an improved method for the simultaneous estimation of the bias field and segmentation of tissues in magnetic resonance images, extending an earlier method. First, the bias field is modeled as a linear combination of a set of basis functions and is thereby parameterized by the coefficients of those basis functions. Then we model the intensity distribution of each tissue as a Gaussian, and use the maximum a posteriori probability together with total variation (TV) regularization to define our objective energy function. Finally, an efficient iterative algorithm based on the split Bregman method minimizes the energy function at a fast rate. Comparisons with other approaches demonstrate the superior performance of the algorithm.
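The bias-field parameterization can be sketched as follows (the basis choice here, low-order 2-D polynomials, is an illustrative assumption; the paper's basis may differ). The smooth field b(x, y) = sum_k w_k * phi_k(x, y) is fully described by the coefficient vector w:

```python
import numpy as np

def poly_basis(h, w, order=2):
    """Columns of 2-D polynomial basis functions x^i * y^j with
    i + j <= order, evaluated on a normalized h x w grid."""
    y, x = np.mgrid[0:h, 0:w]
    x = x / (w - 1) - 0.5; y = y / (h - 1) - 0.5
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.stack([c.ravel() for c in cols], axis=1)  # (h*w, n_basis)

h = w = 32
Phi = poly_basis(h, w)                     # 6 basis functions for order 2
w_true = np.array([1.0, 0.3, -0.2, 0.1, 0.05, -0.1])
bias = (Phi @ w_true).reshape(h, w)        # a smooth synthetic bias field

# Recovering the coefficients from the observed field is a linear
# least-squares problem; in the full method this solve alternates with
# the tissue-segmentation update.
w_hat, *_ = np.linalg.lstsq(Phi, bias.ravel(), rcond=None)
```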
In this work we present a novel vision-based pipeline for automated skeleton detection and centreline extraction of neuronal dendrites from optical microscopy image stacks. The proposed pipeline is an integrated solution that merges image-stack pre-processing, seed-point detection, a ridge-traversal procedure, minimum-spanning-tree optimization, and tree trimming into a unified framework to address this challenging problem. In image-stack pre-processing, we first apply a curvelet-transform-based shrinkage and cycle-spinning technique to remove noise. This is followed by an adaptive thresholding method to segment the neuronal objects, and a 3D distance transformation is performed to obtain the distance map. The skeleton seed points are detected according to the eigenvalues and eigenvectors of the Hessian matrix. Starting from the seed points, initial centrelines are obtained using the ridge-traversal procedure. After that, we use a minimum spanning tree to organize the geometrical structure of the skeleton points, and then graph-trimming post-processing to compute the final centreline. Experimental results on different datasets demonstrate that our approach has high reliability, good robustness, and requires little user interaction.
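The MST step over skeleton points can be sketched with Prim's algorithm on Euclidean distances (synthetic point data; in the pipeline the points would come from the ridge-traversal output):

```python
import numpy as np

def euclidean_mst(points):
    """Prim's algorithm: MST edges (parent, child) over 3D points."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    in_tree = np.zeros(n, dtype=bool); in_tree[0] = True
    best = d[0].copy()                 # cheapest link from tree to each node
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = np.argmin(np.where(in_tree, np.inf, best))
        edges.append((parent[j], j))
        in_tree[j] = True
        closer = d[j] < best           # relax links through the new node
        best[closer] = d[j][closer]; parent[closer] = j
    return edges

pts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [1, 1, 0]], dtype=float)
edges = euclidean_mst(pts)             # a tree with n - 1 = 3 edges
total = sum(np.linalg.norm(pts[i] - pts[j]) for i, j in edges)
```

Organizing the points as a tree makes the subsequent trimming step a simple matter of pruning short or spurious branches.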
It is shown that the watermarking algorithm presented in another paper [Ganic and Eskicioglu, J. Electron. Imaging 14, 043004 (2005)] has a very high probability of producing a false-positive answer and is therefore of limited use in practice. The intrinsic reasons for the high false-alarm probability are as follows: the basis space of the singular value decomposition is image-content dependent, and there is no one-to-one correspondence between a singular value vector and image content, because singular value vectors carry no information about image structure. The most important reason is thus the flawed idea of embedding the watermark's singular value vector, which contains no structural information about the watermark. Finally, examples are given to confirm our theoretical analysis.
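The lack of structural information in singular values is easy to demonstrate: they are invariant under orthogonal transforms, so an image and a row-permuted (visually unrelated) copy share exactly the same singular value spectrum:

```python
import numpy as np

# Synthetic "image": singular values cannot distinguish it from a
# row-permuted copy, because a permutation matrix is orthogonal.
rng = np.random.default_rng(4)
img = rng.uniform(0, 255, size=(64, 64))
scrambled = np.roll(img, 1, axis=0)      # cyclic row shift: different image

s1 = np.linalg.svd(img, compute_uv=False)
s2 = np.linalg.svd(scrambled, compute_uv=False)
# Identical spectra despite different content: the singular values
# carry no information about the spatial structure of the image.
```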
A framework for analyzing image singularity based on the sub-pixel multifractal measure (SPMM) is presented in this paper. The SPMM yields the sub-pixel local distribution of the image gradient and a more precise singularity-exponent distribution of the image. The most singular manifold (MSM) detected in this way captures the most important information in the image. Meanwhile, using the singularity exponents and the most singular manifold, the image can be decomposed automatically and easily into a series of sets with different statistical and physical properties. Taking pavement surface crack images as an example, we show that the physical and geometrical properties of the structures can be obtained by analyzing the distribution of the singularity exponents and the most singular exponent. Furthermore, pavement surface images with and without cracks can be distinguished.
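A minimal pixel-level sketch of singularity-exponent estimation (not the sub-pixel SPMM itself): the measure mu(r) of a gradient-based density inside a window of radius r behaves like r^(h + d) at a point with exponent h in d dimensions, so h is recovered from the slope of a log-log fit:

```python
import numpy as np

def singularity_exponent(density, cy, cx, radii=(1, 2, 3, 4)):
    """Estimate the local exponent h at (cy, cx) from the scaling of the
    windowed measure mu(r); subtracts the ambient dimension d = 2."""
    mu = [density[cy - r:cy + r + 1, cx - r:cx + r + 1].sum()
          for r in radii]
    slope = np.polyfit(np.log(radii), np.log(mu), 1)[0]
    return slope - 2.0

# A smooth, flat region yields a mild exponent, while an isolated spike
# (mu constant in r) yields h = -2, the most singular case; points with
# the smallest exponents form the most singular manifold.
flat = np.ones((33, 33))
spike = np.zeros((33, 33)); spike[16, 16] = 1.0
h_flat = singularity_exponent(flat, 16, 16)
h_spike = singularity_exponent(spike, 16, 16)
```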
A novel image enhancement algorithm for faint pavement cracks is proposed, based on the finite ridgelet transform (FRIT). In the fine-scale FRIT subbands, the typical linear singularities of the image are represented by a few significant coefficients, while randomly located noisy singularities are unlikely to produce significant coefficients. The parameters of an enhancement function can therefore be determined by analyzing the distribution of the fine-scale FRIT coefficients, and a scheme based on modifying FRIT coefficients is very effective at enhancing linear singularities while suppressing noise. Because an entire pavement crack is in general a curve, a smooth partition method is proposed to divide a pavement crack image into local windows in which the crack is approximately straight; the FRIT-based enhancement is then applied to each piece of the whole crack. Experiments verify that faint crack signals submerged in a complicated background can be enhanced effectively by this algorithm.
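A sketch of the kind of enhancement function applied to transform coefficients (the FRIT itself is not implemented here, and the piecewise gain below is an illustrative choice, not the paper's exact function): significant coefficients, which encode linear singularities such as cracks, are amplified, while small, noise-like coefficients are shrunk:

```python
import numpy as np

def enhance(coeffs, t, gain=2.0, shrink=0.5):
    """Amplify coefficients with magnitude >= t; shrink the rest."""
    return np.where(np.abs(coeffs) >= t, gain * coeffs, shrink * coeffs)

c = np.array([0.1, -0.2, 5.0, -4.0, 0.05])   # mixed noise + crack responses
e = enhance(c, t=1.0)
# Significant (crack) coefficients are doubled; noise-level ones halved,
# so inverting the transform yields an image with the crack emphasized.
```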