In this paper, we propose a new algorithm that estimates a super-resolution image from a given low-resolution image by adding high-frequency information extracted from natural high-resolution images in a training dataset. The high-frequency information is selected from the training dataset in two steps: a nearest-neighbor search, which can be implemented on the GPU, selects the closest images from the training dataset, and a sparse-representation algorithm estimates the weights used to combine the high-frequency information of the selected images. This simple but powerful super-resolution algorithm produces state-of-the-art results. We demonstrate, both qualitatively and quantitatively, that the proposed algorithm outperforms common existing approaches.
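The two-step selection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and variable names are hypothetical, the nearest-neighbor search is a brute-force CPU version of what the abstract says can run on the GPU, and a ridge-regularized least-squares fit stands in for the sparse-representation step.

```python
import numpy as np

def estimate_hf(lr_patch, train_lr, train_hf, k=5, lam=0.1):
    """Hypothetical sketch of the two-step high-frequency selection.

    train_lr: (N, d) low-resolution training exemplars.
    train_hf: (N, d) their high-frequency counterparts.
    """
    # Step 1: nearest-neighbor search over the training set
    # (brute force here; easily parallelized on a GPU in practice).
    dists = np.linalg.norm(train_lr - lr_patch, axis=1)
    idx = np.argsort(dists)[:k]

    # Step 2: estimate combination weights for the selected exemplars
    # (ridge-regularized least squares as a stand-in for the paper's
    # sparse-representation step).
    A = train_lr[idx].T                                   # (d, k)
    w = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ lr_patch)

    # Combine the high-frequency information of the selected images.
    return train_hf[idx].T @ w
```

The returned high-frequency estimate would then be added back to an upscaled version of the low-resolution input.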
Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired with different cameras under varying conditions. Even small amounts of noise or occlusion in the images can compromise recognition accuracy. Recently, sparse-coding-based classification algorithms have given promising results in such uncontrolled scenarios. In this paper, we introduce a novel methodology that models sparse coding with weighted patches to further increase the robustness of face recognition. In the training phase, we define a mask (i.e., a weight matrix) using a sparse representation that selects discriminative facial regions, and in the recognition phase, we perform the comparison on the selected facial regions. The algorithm was evaluated both quantitatively and qualitatively on two comprehensive surveillance face databases, SCface and MFPV, with results clearly superior to state-of-the-art methods across different scenarios.
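The weighted-patch comparison in the recognition phase can be sketched as below. This is an illustrative sketch only: the names are hypothetical, and `weights` stands for the learned mask (one weight per patch, higher for more discriminative facial regions), whose training-phase estimation via sparse representation is not shown.

```python
import numpy as np

def weighted_patch_distance(probe, gallery, weights, patch=8):
    """Compare a probe face to a gallery face patch by patch,
    scaling each patch's contribution by the learned weight mask."""
    h, w = probe.shape
    dist = 0.0
    for i, y in enumerate(range(0, h - patch + 1, patch)):
        for j, x in enumerate(range(0, w - patch + 1, patch)):
            pp = probe[y:y + patch, x:x + patch]
            gp = gallery[y:y + patch, x:x + patch]
            # Down-weighted (e.g., occluded or noisy) patches
            # contribute little to the final distance.
            dist += weights[i, j] * np.linalg.norm(pp - gp)
    return dist
```

Recognition would then assign the probe the identity of the gallery image with the smallest weighted distance.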
Publisher’s Note: This paper, originally published on 24 December 2013, was replaced with a revised version on 11 June 2014. If you downloaded the original PDF but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance.
Cardiac-gated or breath-hold MRI acquisition is challenging: data collected in a short amount of time may be insufficient for diagnosing patients with impaired breath-holding capability and/or arrhythmia. Major challenges in cardiac MRI are the motion of the heart itself, pulsatile blood flow, and respiratory motion; in addition, the up-and-down motion of the diaphragm is transmitted to the heart as the patient breathes. Artifacts therefore arise from changes in signal intensity or phase as a function of time, resulting in blurry images. This paper describes a novel reconstruction strategy for real-time cardiac MRI that requires neither an electrocardiogram nor breath holding. We focused on automating the proposed method and evaluating its performance on real-time MRI data to ensure a good basis for signal extraction, which in turn assists reconstruction. The proposed method extracts cardiac beating waveforms directly from real-time cardiac MRI series collected from freely breathing patients without cardiac gating. It requires only minimal user involvement as an initialization step; thereafter, the method tracks the registered region in every frame and updates itself.
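The waveform-extraction idea can be sketched as below. This is a simplified illustration under strong assumptions, not the paper's method: the user-initialized region is held fixed rather than re-registered and updated in every frame, and all names are hypothetical.

```python
import numpy as np

def cardiac_waveform(frames, roi):
    """Extract a beating waveform from a real-time MRI series by
    averaging the signal inside a user-initialized region of interest.

    frames: sequence of 2-D image arrays (one per time point).
    roi: (y0, y1, x0, x1) bounds of the initialized region; a full
         implementation would track this region across frames.
    """
    y0, y1, x0, x1 = roi
    sig = np.array([f[y0:y1, x0:x1].mean() for f in frames])
    # Remove the baseline so the cardiac beats appear as
    # oscillations about zero.
    return sig - sig.mean()
```

Peaks in the resulting waveform could then serve as retrospective gating points for reconstruction.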