Recent years have seen numerous algorithms for learning a sparse synthesis or analysis model from data. More recently, a generalized analysis model called the transform model has been proposed. Data following the transform model are approximately sparsified when acted on by a linear operator called a sparsifying transform. While existing transform learning algorithms can learn a transform for any vectorized data, they are most often used to learn a model for overlapping image patches. However, these approaches do not exploit the redundancy of such data and scale poorly with the data dimensionality and patch size. We propose a new sparsifying transform learning framework in which the transform acts on entire images rather than on patches. We illustrate the connection between existing patch-based transform learning approaches and the theory of block transforms, and then develop a new transform learning framework in which the transforms have the structure of an undecimated filter bank with short filters. Unlike in previous work on transform learning, the filter length can be chosen independently of the number of filter bank channels. We apply our framework to accelerating magnetic resonance imaging: we simultaneously learn a sparsifying filter bank while reconstructing an image from undersampled Fourier measurements. Numerical experiments show that our new model yields higher-quality images than previous patch-based sparsifying transform approaches.
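To make the transform model concrete, here is a minimal sketch, not the learned transform of the paper: a fixed undecimated filter bank built from two short (2-tap) finite-difference filters, applied to a piecewise-constant image. The image is nearly sparsified by the filter bank, illustrating the model W x ≈ z with z sparse; sparse coding under the transform model reduces to simple thresholding. All names and filter choices here are illustrative assumptions.

```python
import numpy as np

def filter_bank_transform(img):
    # Undecimated filter bank with two short filters: circular horizontal
    # and vertical finite differences (each a 2-tap filter, no decimation).
    dh = img - np.roll(img, 1, axis=1)   # horizontal difference channel
    dv = img - np.roll(img, 1, axis=0)   # vertical difference channel
    return np.stack([dh, dv])

# Piecewise-constant test image: a bright square on a flat background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

z = filter_bank_transform(img)

# Sparse coding under the transform model is hard thresholding of the
# transform-domain coefficients.
tau = 0.5
z_sparse = np.where(np.abs(z) >= tau, z, 0.0)

frac_nonzero = np.count_nonzero(z_sparse) / z_sparse.size
```

Here only the square's edges survive thresholding, so only a few percent of the coefficients are nonzero; a learned filter bank plays the same role for more general image classes.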
Model-based iterative reconstruction algorithms are capable of reconstructing high-quality images from low-dose CT measurements. The performance of these algorithms depends on the ability of a signal model to characterize the signals of interest. Recent work has shown the promise of signal models that are learned directly from data. We propose a new method for low-dose tomographic reconstruction that combines adaptive sparsifying transform regularization with a statistically weighted, constrained optimization formulation. The new formulation removes the need to tune a regularization parameter. We propose an algorithm to solve this optimization problem based on the Alternating Direction Method of Multipliers and the FISTA proximal gradient algorithm. Numerical experiments on the FORBILD head phantom illustrate the utility of the new formulation and show that adaptive sparsifying transform regularization outperforms competing dictionary learning methods at speeds rivaling those of total-variation regularization.
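As background for the FISTA building block mentioned above, here is a generic sketch of FISTA applied to the standard problem min_x 0.5‖Ax − b‖² + λ‖x‖₁. The paper's actual subproblem (statistically weighted and constrained) differs, so this is only a schematic of the proximal gradient machinery; the problem sizes and data below are illustrative assumptions.

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    # FISTA: accelerated proximal gradient for 0.5*||Ax-b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)               # gradient of the smooth term
        u = y - grad / L                       # gradient step
        # Proximal step for the l1 term: soft thresholding.
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Small synthetic example: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 55]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = fista(A, b, lam=0.05)
```

In the paper's setting, a step like this handles one subproblem inside an outer ADMM loop rather than being run to convergence on its own.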