This PDF file contains the front matter associated with SPIE Proceedings Volume 9401, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
In Compressed Sensing (CS) theory, sparse signals can be reconstructed from far fewer measurements than the Nyquist sampling limit requires. Early CS algorithms implicitly assume that the sparsity-domain coefficients are independently distributed, yet accounting for and exploiting their statistical dependencies can improve recovery performance. Wavelet theory and the structural statistical modeling of dependencies are applied here to improve feature optimization in the presence of non-linear mixtures. Sparsifying transforms such as the Discrete Wavelet Transform (DWT) capture spatial dependencies in natural images through their hierarchical structure and multiscale subbands of frequency and orientation, exposing dependencies both across and within scales. The Bayes Least Squares-Gaussian Scale Mixture (BLS-GSM) model accurately describes the statistical dependencies of wavelet coefficients in images and can therefore be incorporated to exploit them. In this work, sparsifying transforms and BLS-GSM modeling are used to account for dependency characteristics during the coefficient-weight construction at each iteration of the CS recovery algorithm. The resulting accuracy and performance improvements in image reconstruction are demonstrated both quantitatively and qualitatively.
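As a rough illustration of where dependency-aware coefficient weights could enter a CS recovery loop, the sketch below runs weighted iterative soft thresholding with a DWT sparsifying transform (assuming NumPy and PyWavelets). The simple reweighting used here is only a placeholder for the BLS-GSM-derived weights described above, not the paper's model.

```python
# Minimal sketch: weighted iterative soft thresholding for CS recovery with a
# wavelet sparsifying transform. The per-coefficient weights are a toy stand-in
# for a statistical dependency model such as BLS-GSM.
import numpy as np
import pywt

def weighted_ista(y, A, shape, wavelet="db4", level=3, lam=0.05, n_iter=100):
    """Recover an image of the given shape from measurements y = A @ x.ravel()."""
    x = np.zeros(shape)
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # safe gradient step size

    for _ in range(n_iter):
        # Gradient step on the data-fidelity term ||y - A x||^2
        r = y - A @ x.ravel()
        x = x + step * (A.T @ r).reshape(shape)

        # Wavelet-domain weighted soft thresholding (the sparsity prior)
        coeffs = pywt.wavedec2(x, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        w = 1.0 / (np.abs(arr) + 1e-3)                # toy reweighting; a BLS-GSM
        thr = lam * step * w                          # dependency model would go here
        arr = np.sign(arr) * np.maximum(np.abs(arr) - thr, 0.0)
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        x = pywt.waverec2(coeffs, wavelet)[:shape[0], :shape[1]]
    return x
```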
In this paper, we propose a novel algorithm to recover a sharp image from its corrupted form by deconvolution. The algorithm learns the deconvolution process by learning deconvolution filter kernels for a set of learned basic pixel patterns, and it consists of an offline learning stage and an online filtering stage. In the one-time offline stage, the algorithm learns a dictionary of basic pixel patterns capturing the local characteristics of pixel patches from a large number of natural images in a training database; the deconvolution filter coefficients for each pixel pattern are then optimized using the source and corrupted image pairs in the same database. In the online stage, the algorithm only needs to find the nearest matching pixel pattern in the dictionary for each pixel and filter it with the filter optimized for that pattern. Experimental results on natural images show that our method achieves state-of-the-art results on image deblurring. The proposed approach can be applied to recover sharp images in applications such as cameras, HD/UHD TVs, and document scanning systems.
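A minimal sketch of the online filtering stage described above, assuming the pattern dictionary and per-pattern kernels come from the offline training stage (not shown); the patch size, normalization, and nearest-neighbor matching rule are illustrative choices.

```python
# Sketch of the online stage: for each pixel, match its local patch to the
# nearest entry in a learned pattern dictionary and apply that pattern's
# deconvolution kernel. Slow pure-Python loops, for clarity only.
import numpy as np

def pattern_filter(img, patterns, kernels, patch=5):
    """img: 2D float array; patterns: (K, patch*patch); kernels: (K, patch, patch)."""
    h, w = img.shape
    r = patch // 2
    padded = np.pad(img, r, mode="reflect")
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            p = padded[i:i + patch, j:j + patch].ravel()
            p_n = (p - p.mean()) / (p.std() + 1e-8)             # normalize for matching
            k = int(np.argmin(((patterns - p_n) ** 2).sum(axis=1)))  # nearest pattern
            out[i, j] = np.sum(kernels[k] * padded[i:i + patch, j:j + patch])
    return out
```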
Scanning electron microscopes are some of the most versatile tools for imaging materials with nanometer resolution. However, images collected at high scan rates to increase throughput and avoid sample damage suffer from a low signal-to-noise ratio (SNR) as a result of the Poisson-distributed shot noise associated with electron production and interaction with the imaged surface. The signal is further degraded by additive white Gaussian noise (AWGN) from the detection electronics. In this work, denoising frameworks are applied to such images, taking advantage of their sparsity, along with a methodology for determining the AWGN. A variance stabilization technique is applied to the raw data, followed by a patch-based denoising algorithm. Results are presented both for images with known levels of mixed Poisson-Gaussian noise and for raw images. The quality of the image reconstruction is assessed based both on the PSNR and on measures specific to the application of the collected data, including accurate identification of objects of interest and structural similarity. High-quality results are recovered from noisy observations collected at short dwell times that avoid sample damage.
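For variance stabilization of mixed Poisson-Gaussian data, a standard choice is the generalized Anscombe transform; the hedged sketch below shows the forward transform and a simple algebraic inverse, with the patch-based denoiser left abstract since the paper's specific algorithm is not reproduced here.

```python
# Generalized Anscombe transform (GAT) to stabilize the variance of mixed
# Poisson-Gaussian data before running a Gaussian-noise denoiser, plus a simple
# algebraic inverse (an exact unbiased inverse is preferred in practice).
import numpy as np

def gat(x, gain=1.0, sigma=0.0, mu=0.0):
    """Forward GAT for data modeled as gain*Poisson + N(mu, sigma^2)."""
    arg = gain * x + 0.375 * gain ** 2 + sigma ** 2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

def inverse_gat(d, gain=1.0, sigma=0.0, mu=0.0):
    """Algebraic inverse of the forward GAT."""
    return (gain / 4.0) * d ** 2 - 0.375 * gain - (sigma ** 2 - gain * mu) / gain

# Usage sketch: stabilized = gat(raw, gain, sigma)
#               denoised = some_gaussian_denoiser(stabilized)   # e.g. a patch-based method
#               estimate = inverse_gat(denoised, gain, sigma)
```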
Compressed Sensing (CS) is a mathematical framework that has revolutionized modern signal and image acquisition architectures, ranging from one-pixel cameras to range imaging and medical ultrasound imaging. According to CS, a sparse signal, or a signal that can be sparsely represented in an appropriate collection of elementary examples, can be recovered from a small number of random linear measurements. However, real-life systems may introduce non-linearities into the encoding in order to achieve a particular goal. Quantization of the acquired measurements is an example of such a non-linearity, introduced to reduce storage and communication requirements. In this work, we consider the case of scalar quantization of CS measurements and propose a novel recovery mechanism that enforces the constraints associated with the quantization process during recovery. The proposed recovery mechanism, termed Quantized Orthogonal Matching Pursuit (Q-OMP), is based on a modification of the OMP greedy sparsity-seeking algorithm in which quantization is explicitly considered during decoding. Simulation results on the recovery of images acquired by a CS approach reveal that the modified framework achieves significantly higher reconstruction performance than its naive counterpart over a wide range of sampling rates and sensing parameters, at minimal cost in computational complexity.
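For reference, a minimal OMP loop is sketched below with a comment marking the step where Q-OMP would enforce quantization consistency; the actual Q-OMP update rule is not reproduced.

```python
# Standard OMP sketch for recovering a k-sparse x from y ~ A @ x
# (columns of A assumed normalized).
import numpy as np

def omp(y, A, k):
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Select the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the current support. Q-OMP would instead solve a
        # fit constrained to be consistent with the known quantization bins of y
        # (i.e. Q(A_S x) == y); that constrained step is the part omitted here.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A[:, support] @ coef
    return x
```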
Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for the prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed using food images acquired with a mobile device. Color correction is a critical step in ensuring accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations in food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
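A minimal sketch of a polynomial color-correction fit from checker patches, assuming NumPy; the paper works in the LMS color space, and both the RGB-to-LMS conversion and the particular second-order term set are omitted or illustrative here.

```python
# Least-squares fit of a polynomial color correction from observed checker
# patch colors to their reference values. Assumes float RGB values in [0, 1].
import numpy as np

def poly_features(rgb):
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    ones = np.ones_like(r)
    # Illustrative second-order polynomial basis
    return np.stack([ones, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b], axis=1)

def fit_color_correction(observed, reference):
    """observed, reference: (N, 3) arrays of checker patch colors."""
    M, *_ = np.linalg.lstsq(poly_features(observed), reference, rcond=None)
    return M                                  # (10, 3) correction matrix

def apply_color_correction(img, M):
    h, w, _ = img.shape
    corrected = poly_features(img.reshape(-1, 3)) @ M
    return np.clip(corrected.reshape(h, w, 3), 0.0, 1.0)
```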
Many important imaging problems in materials science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. In practice, however, determining a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms such as non-local means and BM3D, which are known to successfully capture non-local redundancies in images, but because these denoising operations are not explicitly formulated as cost functions, it is unclear how to incorporate them into the MBIR framework.
In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors, which decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework and showcase high-quality tomographic reconstructions of a simulated aluminum-spheres dataset and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed and that edges are preserved. We also report lower RMSE values than the conventional MBIR reconstruction using qGGMRF as the prior model.
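A generic sketch of the plug-and-play splitting, assuming NumPy: ADMM alternates a data-fidelity step for a caller-supplied forward operator with an arbitrary denoiser (e.g. NLM) standing in for the prior. This is not the paper's bright-field tomography forward model, and the step sizes are illustrative.

```python
# Plug-and-play ADMM sketch: the prior is "plugged in" as a denoiser.
import numpy as np

def pnp_admm(y, A, AT, denoiser, shape, rho=1.0, n_iter=30, inner=20, step=1e-3):
    """A, AT: forward operator and its adjoint as callables; denoiser: callable."""
    x = np.zeros(shape)
    v = np.zeros(shape)
    u = np.zeros(shape)
    for _ in range(n_iter):
        # x-update: minimize ||y - A x||^2 + rho ||x - (v - u)||^2 by gradient descent
        for _ in range(inner):
            grad = AT(A(x) - y) + rho * (x - (v - u))
            x = x - step * grad
        # v-update: any denoiser (e.g. 3D non-local means) acts as the prior
        v = denoiser(x + u)
        # dual update
        u = u + x - v
    return x
```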
Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to separate dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.
Ceramic matrix composites (CMCs) with continuous fiber reinforcements have the potential to enable the next generation of high-speed hypersonic vehicles and significant improvements in gas turbine engine performance, owing to the toughness they exhibit when subjected to high mechanical loads at extreme temperatures (2200°F+). Reinforced fiber composites (RFCs) provide increased fracture toughness, crack growth resistance, and strength, though little is known about how stochastic variation and imperfections in the material affect its properties. In this work, tools are developed for quantifying anomalies within the microstructure at several scales. The detection and characterization of anomalous microstructure is a critical step in linking production techniques to properties, as well as in accurate material simulation and property prediction for the integrated computational materials engineering (ICME) of RFC-based components. The aim is to find statistical outliers for any number of material characteristics such as fibers, fiber coatings, and pores. Here, fiber orientation, or 'velocity', and the 'velocity' gradient are developed and examined for anomalous behavior. Anomalous behavior in the CMC is categorized using multivariate Gaussian mixture modeling: a Gaussian mixture is employed to estimate the probability density function (PDF) of the features in question, and anomalies are classified by their likelihood of belonging to the statistically normal behavior for that feature.
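A minimal sketch of the mixture-model anomaly test, assuming scikit-learn: fit a Gaussian mixture to per-location feature vectors (e.g. fiber orientation and its gradient) and flag low-likelihood samples. The component count and threshold are illustrative, not the paper's settings.

```python
# Gaussian-mixture anomaly scoring: samples with low log-likelihood under the
# fitted mixture are flagged as statistical outliers.
import numpy as np
from sklearn.mixture import GaussianMixture

def flag_anomalies(features, n_components=3, pct=1.0):
    """features: (N, D) array; returns a boolean mask of anomalous samples."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(features)
    log_lik = gmm.score_samples(features)        # per-sample log-likelihood
    threshold = np.percentile(log_lik, pct)      # e.g. flag the lowest 1%
    return log_lik < threshold
```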
Phase is not as easy to detect directly as intensity, but it sometimes contains the truly desired information. The transport-of-intensity equation (TIE) is a powerful tool for retrieving the phase from the intensity. However, because of the boundary energy exchange and the overall energy conservation in the field of view, the currently popular fast Fourier transform (FFT) based TIE solvers can only retrieve the phase under homogeneous Neumann boundary conditions. For many applications, the boundary condition can be more complex and general. A novel TIE phase retrieval method is proposed to deal with an optical field under a general boundary condition. In this method, an arbitrarily shaped hard aperture is added to the optical field, and the TIE is solved using an iterative discrete cosine transform (DCT) method that contains a phase compensation mechanism to improve the retrieval results. The proposed method is verified in simulation with an arbitrary phase, an arbitrarily shaped aperture, and a non-uniform intensity distribution. An experiment is also carried out to check its feasibility. The proposed method is easy and straightforward to use in practical measurements as a flexible phase retrieval tool.
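For context, the sketch below solves the simplified uniform-intensity TIE (a Poisson equation) with a single DCT inversion under homogeneous Neumann boundaries, assuming NumPy and SciPy; the paper's iterative DCT solver and phase-compensation mechanism for a general aperture are not reproduced.

```python
# One DCT-based TIE solve under the assumption of uniform intensity I0, where
# the TIE reduces to laplacian(phi) = -(k / I0) * dI/dz. The DCT-II diagonalizes
# the discrete Laplacian with reflective (Neumann) boundaries.
import numpy as np
from scipy.fft import dctn, idctn

def tie_dct(dIdz, I0, k, dx=1.0):
    rhs = -(k / I0) * dIdz
    M, N = rhs.shape
    rhs_hat = dctn(rhs, type=2, norm="ortho")
    # Eigenvalues of the discrete Laplacian under DCT-II
    fy = (2.0 * np.cos(np.pi * np.arange(M) / M) - 2.0) / dx**2
    fx = (2.0 * np.cos(np.pi * np.arange(N) / N) - 2.0) / dx**2
    lam = fy[:, None] + fx[None, :]
    lam[0, 0] = 1.0                    # avoid division by zero at the DC term
    phi_hat = rhs_hat / lam
    phi_hat[0, 0] = 0.0                # the constant phase offset is undetermined
    return idctn(phi_hat, type=2, norm="ortho")
```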
Three-dimensional image reconstruction for scanning baggage in security applications is becoming increasingly important. Compared to medical X-ray imaging, security imaging systems must be designed for a much greater variety of objects: attenuation varies widely, and nearly every scanned bag contains metal, potentially yielding significant artifacts. Statistical iterative reconstruction algorithms are known to reduce metal artifacts and to yield quantitatively more accurate estimates of attenuation than linear methods.
For iterative image reconstruction algorithms to be deployed at security checkpoints, the images must be quantitatively accurate and the convergence speed must be increased dramatically. There are many approaches to accelerating convergence; two are described in detail in this paper. The first combines a scheduled change in the number of ordered subsets over iterations with a reformulation of convergent ordered subsets originally proposed by Ahn, Fessler et al. [1]. The second varies the multiplication factor in front of the additive step of the alternating minimization (AM) algorithm, resulting in more aggressive updates at each iteration. Each approach is implemented on real data from a SureScan™ x1000 Explosive Detection System and compared to a straightforward implementation of the alternating minimization algorithm of O'Sullivan and Benac [2] with a Huber-type edge-preserving penalty originally proposed by Lange [3].
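A hedged sketch of the second idea, over-relaxing the additive update: the baseline AM step is a caller-supplied function, and the factor schedule shown is an illustrative guess rather than the paper's rule.

```python
# Over-relaxation wrapper around a baseline update: x <- x + c * (update(x) - x)
# with a factor c > 1 to make the additive step more aggressive.
import numpy as np

def over_relaxed_iterations(x0, am_update, n_iter=50, c_start=1.0, c_max=2.0):
    x = x0.copy()
    for it in range(n_iter):
        c = min(c_max, c_start + 0.05 * it)     # illustrative ramp-up schedule
        x_am = am_update(x)                     # one ordinary AM iteration
        x = x + c * (x_am - x)                  # amplified additive step
        np.maximum(x, 0.0, out=x)               # keep attenuation non-negative
    return x
```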
We experimentally explored the reconstruction of the image of two point sources using a sequence of random aperture phase masks. The speckled intensity profiles were combined using an improved shift-and-add method and multi-frame blind deconvolution to achieve a near-diffraction-limited image for broadband light (600-670 nm). Using a numerical model, we also explored various algorithms in the presence of noise and phase aberration.
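A minimal shift-and-add sketch, assuming NumPy; the improved shift-and-add variant and the multi-frame blind deconvolution stage are not reproduced.

```python
# Basic shift-and-add: shift each speckle frame so its brightest pixel lands at
# the image center, then average the shifted frames.
import numpy as np

def shift_and_add(frames):
    """frames: (T, H, W) stack of short-exposure speckle images."""
    T, H, W = frames.shape
    out = np.zeros((H, W))
    for f in frames:
        iy, ix = np.unravel_index(np.argmax(f), f.shape)    # brightest speckle
        out += np.roll(f, (H // 2 - iy, W // 2 - ix), axis=(0, 1))
    return out / T
```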
Image registration is normally solved as a regularized optimization problem. The line search procedure is commonly employed in unconstrained nonlinear optimization: at each iteration, it computes a step size that achieves adequate reduction in the objective function at minimal cost. In this paper we extend the constrained line search procedure with different regularization terms so as to improve convergence. The extension is addressed in the context of constrained optimization for solving a regularized image registration problem. Specifically, the displacement field between the registered image pair is modeled as a sum of weighted Discrete Cosine Transform basis functions, and a Taylor series expansion of the objective function is used to derive a Gauss-Newton solution. We consider two regularization terms added to the objective function: a Tikhonov term that constrains the magnitude of the solution, and a bending energy term that constrains the bending energy of the deformation field. We modify both the sufficient-decrease and curvature parts of the Wolfe conditions to accommodate the additional regularization terms. The proposed extension is evaluated on a generated test collection with known deformations. The experimental results show that the solution obtained with bending energy regularization and a Wolfe-condition line search achieves the smallest mean deformation field error among 100 registration pairs, and in addition shows an improved ability to overcome local minima.
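A rough sketch of a Wolfe-style line search applied to a regularized objective over DCT weights, assuming NumPy; the paper's exact modified sufficient-decrease and curvature conditions are not reproduced.

```python
# Approximate Wolfe-condition line search over a regularized objective
# F(c) = data_term(c) + alpha * ||c||^2 (Tikhonov variant), where c are the
# DCT basis weights of the displacement field.
import numpy as np

def wolfe_step(F, gradF, c, d, t0=1.0, c1=1e-4, c2=0.9, max_iter=20):
    """Return a step size t along direction d that roughly satisfies Wolfe conditions."""
    f0, g0 = F(c), gradF(c) @ d
    t = t0
    for _ in range(max_iter):
        if F(c + t * d) > f0 + c1 * t * g0:          # sufficient decrease violated
            t *= 0.5
        elif gradF(c + t * d) @ d < c2 * g0:         # curvature condition violated
            t *= 2.0
        else:
            return t
    return t

# Illustrative regularized objective (data_term and its gradient are placeholders):
# F  = lambda c: data_term(c) + alpha * np.dot(c, c)
# gF = lambda c: data_term_grad(c) + 2.0 * alpha * c
```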
Automatic building extraction from satellite imagery is an important problem. Existing approaches typically involve stereo processing of two or more satellite views of the same region. In this paper, we use shadow analysis coupled with line segment detection and texture segmentation to construct rectangular building approximations from a single satellite image. In addition, we extract building heights to construct a rectilinear height profile for a single region. We characterize the performance of the system in rural and urban regions of Jordan, the Philippines, and Australia and demonstrate a detection rate of 76.2-86.1% and a false alarm rate of 26.5-40.1%.
Thousands of sensors are connected to the Internet, and many of these sensors are cameras. The "Internet of Things" will contain many "things" that are image sensors, and this vast network of distributed cameras (i.e., web cams) will continue to grow exponentially. In this paper we examine simple methods to classify an image from a web cam as "indoor/outdoor" and as having "people/no people" based on simple features. We use four types of image features to classify an image as indoor/outdoor: color, edge, line, and text. To classify an image as having people/no people we use HOG and texture features. The features are weighted based on their significance and combined, and a support vector machine is used for classification. Our system with feature weighting and feature combination yields 95.5% accuracy.
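A minimal sketch of the weighted feature combination feeding an SVM, assuming scikit-learn; the feature extractors, weights, and variable names are hypothetical placeholders.

```python
# Weighted concatenation of per-type feature blocks followed by SVM classification.
import numpy as np
from sklearn.svm import SVC

def combine_features(feature_blocks, weights):
    """feature_blocks: list of (N, d_i) arrays; weights: list of scalars."""
    return np.hstack([w * f for w, f in zip(weights, feature_blocks)])

# Illustrative usage with hypothetical extractor outputs (color_f, edge_f, ...):
# X_train = combine_features([color_f, edge_f, line_f, text_f], [0.4, 0.3, 0.2, 0.1])
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# pred = clf.predict(combine_features([color_t, edge_t, line_t, text_t],
#                                     [0.4, 0.3, 0.2, 0.1]))
```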
Simulations of flatbed scanners can shorten the development cycle of new designs, estimate image quality, and lower manufacturing costs. In this paper, we present a flatbed scanner simulation of a strobe RGB scanning method that investigates the effect of the sensor height on color artifacts. The image chain model from the remote sensing community was adapted and tailored to flatbed scanning applications. This model allows the user to study the relationship between various internal elements of the scanner and the final image quality. Modeled parameters include sensor height, intensity and duration of the illuminant, scanning rate, sensor aperture, detector modulation transfer function (MTF), and the motion blur created by the movement of the sensor during the scanning process. These variables are modeled mathematically using Fourier analysis, functions that model the physical components, convolutions, sampling theorems, and gamma corrections. Special targets used to validate the simulation include a single-frequency pattern, a radial chirp-like pattern, and a high-resolution scanned document. The simulation is demonstrated to model the scanning process effectively on both a theoretical and an experimental level.
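One link of such an image chain can be sketched as frequency-domain filtering with a detector MTF and a motion-blur MTF, as below (assuming NumPy); the Gaussian and sinc forms and the parameter values are illustrative, not those of the modeled scanner.

```python
# Apply a separable Gaussian detector MTF and a sinc motion-blur MTF (from
# constant-velocity sensor motion during exposure) in the frequency domain.
import numpy as np

def apply_mtf(img, detector_sigma=0.5, motion_extent=2.0):
    """detector_sigma and motion_extent are in pixels; both values are illustrative."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]              # cycles per pixel
    fx = np.fft.fftfreq(W)[None, :]
    detector_mtf = np.exp(-2.0 * (np.pi * detector_sigma) ** 2 * (fx**2 + fy**2))
    motion_mtf = np.sinc(motion_extent * fy)     # blur along the scan direction
    return np.real(np.fft.ifft2(np.fft.fft2(img) * detector_mtf * motion_mtf))
```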
A promising new architecture for a microwave personnel screening system is analyzed in this paper with numerical simulations. The architecture is based on the concept of inverse aperture synthesis applied to a naturally moving person: the synthetic aperture is formed by a stationary vertical linear antenna array and by the length of the subject's trajectory as he or she moves in the vicinity of this array. Coherent radar signal processing is achieved with a synchronous 3D video sensor whose data are used to track the subject. The advantages of the proposed system architecture over currently existing systems are analyzed. Synthesized radar images are obtained by numerical simulation with a human torso model carrying concealed objects. Various aspects of the system architecture are considered, including the advantage of using sparse antenna arrays to decrease the number of antenna elements and the influence of body-surface positioning errors due to outer clothing. It is shown that detailed radar images of concealed objects can be obtained with a narrow-band signal thanks to the depth information available from the 3D video sensor. The considered ISAR architecture is regarded as promising for use at infrastructure sites owing to its high throughput, small footprint, simple radar sub-system design, and the absence of any required cooperation from the subject.
This paper presents a method to measure three-dimensional gas temperature distributions without inserting a probe into the gas, using techniques of computed tomography and optical interferometry. The temperature distribution can be reconstructed from a set of two-dimensional optical-difference images, each acquired at a different incident angle. Each optical difference is measured by an interferometer with four mirrors that can be moved and rotated to control the incident angle. The temperature measurement system suffers from two kinds of error. The first is reconstruction error caused by the limited projection angles: the direction of the incident angle is restricted to a certain region because of the limited arrangement of the mirrors. The second is error in evaluating the projection data, i.e., the two-dimensional optical-difference distribution, which arises in the steps used to evaluate the optical difference: carrier-frequency detection of the background fringe, carrier-component filtering, phase unwrapping, and so on. This paper shows improvements in the accuracy of the reconstruction obtained by adding certain projection data to the original data set, as well as improvements in the evaluation of the optical difference obtained with newly developed evaluation algorithms.
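For the tomographic step, a generic filtered back-projection over the available angles can serve as a baseline, as sketched below assuming scikit-image; the paper's added projections and improved optical-difference evaluation are not modeled here.

```python
# Baseline limited-angle reconstruction of one slice from optical-difference
# projections using filtered back-projection.
import numpy as np
from skimage.transform import iradon

def reconstruct_slice(projections, angles_deg):
    """projections: (n_detector, n_angles) sinogram; angles_deg: projection angles."""
    return iradon(projections, theta=angles_deg, circle=False)
```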
Magnetic Resonance Imaging (MRI) has shown promising results in diagnosing myocarditis, which can be qualitatively observed as enhanced pixels in images of the cardiac muscle. In this paper, a myocarditis index, defined as the ratio between enhanced pixels, representing inflammation, and the total pixels of the myocardial muscle, is presented. A PCA-based recognition algorithm is used to recognize and quantify the enhanced pixels. The algorithm, implemented in Matlab, was tested on a group of 10 patients referred to MRI with a presumptive clinical diagnosis of myocarditis. To assess intra- and interobserver variability, two observers blindly analyzed the data for the 10 patients by delimiting the myocardial region and selecting enhanced pixels; after 5 days the same observers repeated the analysis. The obtained myocarditis indexes were compared with an ordinal variable (values in the 1-5 range) representing the blind assessment of myocarditis severity given by two radiologists on the basis of the patient case histories. Results show a significant correlation (P < 0.001; r = 0.94) between the myocarditis indexes and the radiologists' clinical judgments. Furthermore, good intraobserver and interobserver reproducibility was obtained.
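The index itself reduces to a ratio of pixel counts; a minimal sketch follows, with the PCA-based enhancement decision abstracted into a boolean mask.

```python
# Myocarditis index: enhanced pixels within the delineated myocardial region
# divided by all pixels in that region.
import numpy as np

def myocarditis_index(enhanced_mask, myocardium_mask):
    """Both inputs are boolean 2D masks of identical shape."""
    total = np.count_nonzero(myocardium_mask)
    if total == 0:
        return 0.0
    enhanced = np.count_nonzero(enhanced_mask & myocardium_mask)
    return enhanced / total
```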
This paper provides a novel approach to estimating the lighting given a pair of color and depth images of non-homogeneous objects. Existing methods can be classified into two groups depending on the lighting model: the basis model or the point-light model. In general, the basis model is effective for low-frequency lighting while the point model is suitable for high-frequency lighting; a later wavelet-based method combines the advantages of both. Because wavelets represent all-frequency lighting efficiently, we use them to reconstruct the lighting. However, none of the previous methods can reconstruct lighting from non-homogeneous objects. Our main contribution is to handle a non-homogeneous object by dividing it into multiple homogeneous segments. From these segments, we first initialize the material parameters and extract lighting coefficients accordingly; we then optimize the material parameters with the estimated lighting, and the iteration is repeated until the estimated lighting converges. To demonstrate the effectiveness of our method, we conduct six experiments corresponding to different numbers, sizes, and positions of lights. Based on this experimental study, we confirm that our algorithm is effective for identifying the light map.
In recent years, perceptually driven super-resolution (SR) methods have been proposed to lower computational complexity. Sparse-representation-based super-resolution is known to produce competitive high-resolution images at lower computational cost than other SR methods; nevertheless, super-resolution remains difficult to implement with the very low processing power available for real-time applications. To speed up SR, much effort has gone into efficient methods that selectively apply elaborate computation to perceptually sensitive image regions based on a metric such as just-noticeable distortion (JND). Inspired by these works, we propose a novel fast super-resolution method with sparse representation that incorporates a no-reference just-noticeable blur (JNB) metric. The proposed method efficiently generates super-resolution images by selectively applying a sparse representation method to perceptually sensitive image areas detected with the JNB metric. Experimental results show that our JNB-based fast super-resolution method is about 4 times faster than a non-perceptual sparse-representation-based SR method for 256 × 256 test LR images. Compared to a JND-based SR method, the proposed fast JNB-based SR method is about 3 times faster, with approximately 0.1 dB higher PSNR and a slightly higher SSIM value on average. This indicates that the proposed perceptual JNB-based SR method generates high-quality SR images at much lower computational cost, opening a new possibility for real-time hardware implementations.
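A rough sketch of the selective strategy, assuming NumPy: a cheap upscaler covers the whole frame, and the sparse-representation SR is applied only to blocks flagged by a sensitivity metric. The JNB metric, block size, and threshold are placeholders, and the SR routines are caller-supplied.

```python
# Selective super-resolution: expensive SR only where a perceptual sensitivity
# metric says it matters; a cheap upscaler (e.g. bicubic) handles the rest.
import numpy as np

def selective_sr(lr_img, cheap_upscale, sparse_sr, sensitivity, block=32, thresh=0.5):
    """sparse_sr(patch) is assumed to return a patch upscaled by the same factor."""
    hr = cheap_upscale(lr_img)                       # full-frame fallback
    scale = hr.shape[0] // lr_img.shape[0]
    for by in range(0, lr_img.shape[0], block):
        for bx in range(0, lr_img.shape[1], block):
            patch = lr_img[by:by + block, bx:bx + block]
            if sensitivity(patch) > thresh:          # perceptually sensitive block
                hy, hx = by * scale, bx * scale
                hr[hy:hy + block * scale, hx:hx + block * scale] = sparse_sr(patch)
    return hr
```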
Compressive sensing is a technique used in signal processing applications to reduce sampling time. This paper presents an efficient sampling framework based on compressive sensing for capacitive touch technology. We aim to minimize the number of measurements required during the capacitive touch sensing process; to achieve this, we use structured matrices that can serve as the driving/sensing framework for a touch controller. The novel contribution of this research is that the recovery algorithm is modeled according to the structure of the sampling matrix, making it extremely efficient and simple to implement in a practical application. In this paper, we exploit the structure of the sensing matrix and conduct experiments to test the robustness of the proposed algorithm. The floating-point multiplication counts of the reconstruction algorithm and of the sensing matrix are also examined in detail.