This PDF file contains the front matter associated with SPIE Proceedings Volume 11071, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Humans eat and digest food, passing it through the digestive tract, for survival. Once a meal is ingested, it passes through the esophagus, stomach, and small intestine. The small intestine absorbs most of the nutrients. It performs a peristaltic motion to pass the contents along, which is known to produce characteristic sounds called "bowel sounds." The frequency of the small intestine's movement correlates with the number of sounds it makes; in the medical field, bowel sounds are therefore mostly used to diagnose intestinal obstruction or to monitor bedridden patients. Today, a doctor listens with a stethoscope for a period of time to diagnose the patient. However, this method relies heavily on experience and intuition, and it is unsuitable for long-term testing. It is very important to provide the optimum quantity of nutrition at the right time to a bedridden patient in the intensive care unit (ICU), because providing nutrition too late, or even slightly more than required, may cause malnutrition or indigestion. By automatically monitoring the number of bowel sounds per unit time with a small microphone, the digestive activity of the small intestine can be diagnosed. In this research, we propose a new diagnosis method called "two-step template matching," in which a computer automatically and stably detects bowel sounds in real time, even in noisy environments such as an ICU.
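The abstract does not spell out the two matching steps, but the core of any template-matching detector can be sketched as below: normalized cross-correlation of the microphone signal against a bowel-sound template, thresholded to yield event positions. The template, threshold, and window length here are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def normalized_xcorr(signal, template):
    """Normalized cross-correlation of a 1-D signal with a template.

    Returns one correlation coefficient in [-1, 1] per candidate
    alignment of the template inside the signal.
    """
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores[i] = np.dot(w, t) / n
    return scores

def detect_events(signal, template, threshold=0.8):
    """Return the start indices where the template matches above threshold."""
    scores = normalized_xcorr(signal, template)
    return np.flatnonzero(scores >= threshold)

# Toy demo: a short burst embedded in noise is found at its true offset.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 4 * np.pi, 64))
signal = 0.05 * rng.standard_normal(512)
signal[200:264] += template
hits = detect_events(signal, template, threshold=0.8)
```

A real detector would add the second matching stage and per-environment threshold calibration; this sketch only shows the correlation core.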
At the end of 2014, the number of Japanese dialysis patients was approximately 320,000, and it is increasing every year. Patients with chronic renal failure require hemodialysis because their kidneys can no longer remove toxins, which must therefore be removed extracorporeally. During hemodialysis, vascular access is needed to secure sufficient blood flow. However, vascular access is prone to stenosis. Early detection of stenosis may facilitate long-term use of hemodialysis shunts. Stethoscope auscultation of vascular murmurs can be useful in assessing access patency; however, the sensitivity of this diagnostic approach is skill dependent. This study proposes a diagnosis system for assessing the progress of hemodialysis shunt stenosis, in order to detect stenosis at an early stage using vascular murmurs. The system is based on dynamic time warping (DTW), a self-organizing map (SOM), and a short-time maximum entropy method (STMEM) for data analysis. The SOM, based on the spectrum of the blood flow sound obtained by the STMEM, qualitatively judges whether the vascular access of a dialysis patient is normal or abnormal. Moreover, by calculating the dissimilarity of spectra using DTW, the narrowing of the blood vessel over the time course of a dialysis patient is analyzed quantitatively. As a result, the degree of change in stenosis over the time course of dialysis patients could be confirmed both qualitatively and quantitatively. At the same time, it turned out that in severe cases the blood vessel was stenotic even immediately after surgery.
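As a sketch of the quantitative part, the textbook DTW recurrence below scores the dissimilarity between two spectrum-like curves; the toy "murmur spectra" are invented for illustration and are not STMEM outputs.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    using the standard O(len(a)*len(b)) dynamic program with |a_i - b_j|
    as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-shifted copy of a spectrum-shaped curve stays close under DTW,
# while an attenuated (clearly different) curve does not.
x = np.linspace(0, 1, 50)
normal = np.exp(-((x - 0.4) ** 2) / 0.01)   # toy reference spectrum shape
shifted = np.roll(normal, 2)                # same shape, slightly shifted
damped = 0.3 * normal                       # attenuated, clearly different
d_shift = dtw_distance(normal, shifted)
d_damp = dtw_distance(normal, damped)
```

The warping makes DTW tolerant of frequency-axis shifts, which is why it suits comparing spectra recorded at different sessions.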
The multiwavelet packet transform and support vector machine are applied to identify the human pulse signals of heroin addicts. First, using the multiwavelet packet transform based on the multiwavelet and preprocessing presented by J. S. Geronimo, D. P. Hardin, and P. R. Massopust, the pulse signals of 15 heroin addicts and 15 healthy subjects are fully decomposed into three levels. Then, using a coefficient-entropy technique for feature extraction, two entropy values of selected coefficients on two frequency bands at the third level are calculated for every pulse signal. Each pair of entropy values then forms a two-dimensional feature vector. Finally, applying a support vector machine, the 15 heroin addict feature vectors and the 15 healthy subject feature vectors are successfully separated into two classes. The results show that the method presented in this paper is effective for identifying the human pulse signals of heroin addicts.
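The "coefficient entropy" is not defined in detail in the abstract; a common choice, sketched below, is the Shannon entropy of the normalized energy distribution of a sub-band's coefficients. The sub-band vectors here are toy stand-ins for multiwavelet packet coefficients.

```python
import numpy as np

def coefficient_entropy(coeffs):
    """Shannon entropy (bits) of the normalized energy distribution of a
    coefficient vector, e.g. one sub-band of a wavelet packet tree."""
    e = np.asarray(coeffs, dtype=float) ** 2
    p = e / (e.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A flat coefficient vector spreads its energy evenly (high entropy),
# while a peaky one concentrates it (low entropy) -- the kind of contrast
# that makes a pair of such values a usable 2-D feature vector.
flat = np.ones(64)
peaky = np.zeros(64)
peaky[3] = 5.0
h_flat = coefficient_entropy(flat)    # close to log2(64) = 6 bits
h_peaky = coefficient_entropy(peaky)  # close to 0 bits
```

Computing one such value per selected frequency band, as the paper does for two bands, yields the two-dimensional feature vector fed to the SVM.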
In this paper, a newly designed ultra-low-sidelobe pulse compression filter for linear frequency modulation (LFM) signals is proposed. Conventional pulse compression suffers from a low mainlobe-to-sidelobe ratio. To solve this problem, convex optimization is used to design the coefficients of the pulse compression filter, so that the mainlobe-to-sidelobe ratio of the pulse compression output can reach 60 dB or more, meeting the requirements of specific engineering applications.
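The convex-optimization design itself is not reproduced here, but the quantity being optimized, the mainlobe-to-sidelobe ratio of the pulse compression output, can be sketched with a plain matched filter versus a conventional windowed (mismatched) filter. The chirp parameters and guard width below are illustrative assumptions.

```python
import numpy as np

def lfm(n, bw=0.8):
    """Baseband linear-FM chirp of n samples sweeping +/- bw/2 cycles/sample."""
    t = np.arange(n) - n / 2
    return np.exp(1j * np.pi * (bw / n) * t ** 2)

def peak_sidelobe_db(output, guard=4):
    """Mainlobe-to-peak-sidelobe ratio (dB) of a pulse compression output;
    `guard` samples around the peak are treated as mainlobe."""
    mag = np.abs(output)
    k = int(np.argmax(mag))
    mask = np.ones_like(mag, dtype=bool)
    mask[max(0, k - guard):k + guard + 1] = False
    return 20 * np.log10(mag[k] / mag[mask].max())

n = 256
s = lfm(n)
mf = np.conj(s[::-1])                              # plain matched filter
out_plain = np.convolve(s, mf)
out_hamming = np.convolve(s, mf * np.hamming(n))   # windowed (mismatched) filter

pslr_plain = peak_sidelobe_db(out_plain)     # roughly the classic -13 dB regime
pslr_hamming = peak_sidelobe_db(out_hamming)
```

Windowing already raises the ratio substantially at the cost of a wider mainlobe; the paper's convex-optimized coefficients push the same trade-off to 60 dB or more.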
Surgical removal of the brain region that is the source of seizures is a treatment option for patients with drug-resistant epilepsy. Localizing the epileptogenic area is therefore an essential step before surgery. The electroencephalogram (EEG) signals of these areas are different and are called focal (F), whereas the EEG signals of other, normal areas are known as non-focal (NF). Visual inspection of multiple channels to detect F EEG signals is time consuming and prone to human error. In this paper, we propose a new method based on ensemble empirical mode decomposition (EEMD) to distinguish F and NF signals. For this purpose, the EEG signal is decomposed by EEMD and the corresponding intrinsic mode functions (IMFs) are obtained. Then various nonlinear features, including log energy (LE) entropy, Stein's unbiased risk estimate (SURE) entropy, information potential (IP), and centered correntropy (CC), are extracted. Finally, the input signal is classified as either F or NF using a support vector machine (SVM). Using these nonlinear features, we achieved 89% classification accuracy with a tenfold cross-validation strategy.
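Two of the named features can be sketched directly. The formulas below are common textbook forms (the SURE threshold `eps` is an assumed parameter, and other variants of both entropies exist), applied to toy sinusoids rather than real IMFs.

```python
import numpy as np

def log_energy_entropy(x):
    """Log-energy entropy: sum of log(x_i^2), a standard nonlinear feature
    (the small constant avoids log(0))."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.log(x ** 2 + 1e-12)))

def sure_entropy(x, eps=0.5):
    """One common form of the SURE entropy with threshold eps:
    N - #{|x_i| <= eps} + sum_i min(x_i^2, eps^2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    small = np.abs(x) <= eps
    return float(n - small.sum() + np.sum(np.minimum(x ** 2, eps ** 2)))

# Both features respond to amplitude, so a low- and a high-amplitude
# toy oscillation land in different parts of feature space.
t = np.linspace(0, 1, 256)
low = 0.2 * np.sin(2 * np.pi * 8 * t)
high = 2.0 * np.sin(2 * np.pi * 8 * t)
f_low = (log_energy_entropy(low), sure_entropy(low))
f_high = (log_energy_entropy(high), sure_entropy(high))
```

In the paper's pipeline such values are computed per IMF and concatenated into the feature vector given to the SVM.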
Compressive sensing (CS) theory enables direct analog-to-information conversion (AIC) of wideband analog signals at sub-Nyquist sampling rates. In many applications, sampling at the Nyquist rate is inefficient because the signal may be sparse, containing only a few significant components. This paper describes a type of analog-to-information scheme consisting of demodulation, filtering, and uniform sampling. Based on this scheme, sine and linear frequency modulation (LFM) signals can be rebuilt, and the results show that the AIC scheme alleviates the problem of high sampling rates. The signal can be accurately rebuilt when it is sampled at a sub-sampling factor of 0.2.
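The demodulate-filter-sample chain is not reproduced here, but the reconstruction side of CS can be sketched with a generic sparse-recovery solver. The code below uses Orthogonal Matching Pursuit with a random Gaussian measurement matrix as a stand-in for the AIC front end, at the same 0.2 sub-sampling factor.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy demo: a length-250 signal with only 4 significant components is
# rebuilt from 50 random measurements -- a 0.2 sub-sampling factor.
rng = np.random.default_rng(1)
n, m, k = 250, 50, 4
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = omp(A, y, k)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

When the correct support is found, the final least-squares fit makes the reconstruction essentially exact, mirroring the "accurately rebuilt" claim.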
Antenna arraying is now widely implemented for deep-space applications. A number of combining algorithms have been proposed for antenna array systems, in which the differences in carrier frequency, delay, and carrier phase between the signals received from different antennas are estimated and compensated. The performance of these combining algorithms is evaluated using the combining efficiency, which is directly related to the signal-to-noise ratio (SNR) of the received signal from each antenna and of the combined signal. However, the ultimate goal of antenna arraying is to obtain better bit error rate (BER) performance. In this paper, the impact of signal alignment errors, i.e., carrier frequency, delay, and carrier phase estimation errors, on the combining efficiency and the BER of the combined signal is analyzed. Computer simulations show that a positive combining efficiency does not guarantee better BER performance. As a result, combining efficiency may not be a proper performance evaluation criterion for antenna arraying.
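For equal per-antenna SNRs, the combining efficiency under residual phase errors has a simple closed form, sketched below together with a BPSK BER evaluated at the combined SNR. The two-antenna numbers are illustrative and are not the paper's simulation setup.

```python
import numpy as np
from math import erfc, sqrt, cos

def combining_efficiency(phases):
    """Efficiency of equal-gain combining for N antennas with identical
    per-antenna SNR and residual carrier-phase errors phi_i:
        eta = |sum_i exp(j*phi_i)|^2 / N^2
    (the signal adds coherently, the noise incoherently)."""
    p = np.asarray(phases, dtype=float)
    return float(np.abs(np.exp(1j * p).sum()) ** 2 / len(p) ** 2)

def bpsk_ber(snr_linear, residual_phase=0.0):
    """BPSK bit error rate Q(sqrt(2*SNR)*cos(phi)), where phi is any
    constellation rotation left after combining."""
    arg = sqrt(2 * snr_linear) * cos(residual_phase)
    return 0.5 * erfc(arg / sqrt(2))

# Two antennas at 0 dB SNR each: perfect alignment doubles the SNR, while
# a 60-degree relative phase error cuts the efficiency to 0.75 and raises
# the BER of the combined signal.
snr1 = 1.0
eta_perfect = combining_efficiency([0.0, 0.0])    # 1.0
eta_err = combining_efficiency([0.0, np.pi / 3])  # 0.75
ber_perfect = bpsk_ber(2 * snr1 * eta_perfect)
ber_err = bpsk_ber(2 * snr1 * eta_err)
```

The paper's point is visible even in this sketch: efficiency only captures the SNR loss, whereas a residual rotation can degrade the BER beyond what the efficiency alone predicts.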
This paper proposes a novel adaptive active noise control algorithm based on Tikhonov regularization theory. A regularized cost function consisting of the weighted sum of the most recent samples of the residual noise and its derivative is defined. By setting the gradient vector of the cost function to zero, an optimal solution for the control parameters is obtained. Based on the proposed optimal solution, a computationally efficient algorithm for adaptive adjustment of the control parameters is developed. It is shown that the regularized affine projection algorithm can be considered as a very special case of the proposed algorithm. Different computer simulation experiments show the validity and efficiency of the proposed algorithm.
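The paper's exact algorithm is not reproduced here; the sketch below shows the simplest Tikhonov-flavored adaptive filter, leaky LMS, whose weight-shrinkage term plays the role of the regularizer, on a toy system-identification task.

```python
import numpy as np

def leaky_lms(x, d, taps=8, mu=0.05, leak=1e-3):
    """Leaky LMS: at each step minimizes e^2 + leak*||w||^2; the
    (1 - mu*leak) shrink factor is the simplest Tikhonov-style
    regularization of the weight update.  Returns (weights, errors)."""
    w = np.zeros(taps)
    e_hist = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ u
        w = (1.0 - mu * leak) * w + mu * e * u
        e_hist[n] = e
    return w, e_hist

# Identify a short FIR path from noisy data; the residual error power
# falls well below its initial level as the filter converges.
rng = np.random.default_rng(2)
true_w = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(5000)
d = np.convolve(x, true_w)[: len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = leaky_lms(x, d)
start_power = float(np.mean(e[8:508] ** 2))
end_power = float(np.mean(e[-500:] ** 2))
```

The proposed algorithm regularizes a windowed cost over the most recent residual samples and their derivative instead of a single squared error, but the role of the regularizer, biasing the update toward small, stable parameters, is the same.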
This paper develops a statistical signal processing algorithm for parameter estimation of Euler-Bernoulli beams from limited and noisy measurements. The original problem is split into two reduced-order sub-problems coupled by a linear equation. The first sub-problem is cast as an inverse problem and solved by using Bayesian approximation error analysis. The second sub-problem is cast as a forward problem and solved by using the finite element technique. An optimal solution to the original problem is then obtained by coupling the solutions to the two sub-problems. Finally, a statistical signal processing algorithm for adaptive estimation of the optimal solution is developed. Computer simulation shows the effectiveness of the proposed algorithm.
The Broad Learning System (BLS) offers an alternative way of performing machine learning with a deep structure. BLS is built on the idea of the random vector functional-link neural network (RVFLNN), which eliminates the drawback of a long training process and also provides generalization capability in function approximation. In this paper, a principal polynomial features based broad learning system (PPFBLS) is proposed. In this method, principal component analysis (PCA) is used for feature dimensionality reduction. The candidate features of degree d are constructed from the principal features of degree one and the principal features of degree d-1. The enhancement features of degree d are extracted by applying PCA to the candidate features of degree d. Ridge regression on the concatenated features of each degree is applied for pattern classification. Parameters in the feature extraction stage are optimized by PCA, which differs from the random initialization adopted by BLS and RVFLNN. Experimental results on the MNIST handwritten digit recognition data set and the NYU NORB object recognition data set demonstrate the effectiveness of the proposed method.
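The building blocks, PCA via SVD and closed-form ridge regression, can be sketched as follows; the degree-2 candidate construction here is a crude stand-in for the paper's principal polynomial features, and the toy data set is invented.

```python
import numpy as np

def pca_fit(X, n_comp):
    """Fit PCA via SVD; returns (mean, top n_comp principal directions)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_comp]

def pca_transform(X, mu, comps):
    return (X - mu) @ comps.T

def ridge_fit(Z, Y, lam=1e-2):
    """Closed-form ridge regression: W = (Z^T Z + lam*I)^-1 Z^T Y."""
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ Y)

# Tiny two-class demo: degree-1 principal features plus some pairwise
# products (a crude degree-2 candidate set) fed to a ridge classifier
# with one-hot targets.
rng = np.random.default_rng(3)
X0 = rng.standard_normal((100, 5)) + 2.0
X1 = rng.standard_normal((100, 5)) - 2.0
X = np.vstack([X0, X1])
Y = np.vstack([np.tile([1.0, 0.0], (100, 1)), np.tile([0.0, 1.0], (100, 1))])

mu, comps = pca_fit(X, 3)
Z1 = pca_transform(X, mu, comps)         # degree-1 principal features
Z2 = np.hstack([Z1, Z1[:, :1] * Z1])     # crude degree-2 products appended
W = ridge_fit(Z2, Y)
pred = (Z2 @ W).argmax(axis=1)
acc = float((pred == np.r_[np.zeros(100), np.ones(100)]).mean())
```

PPFBLS repeats the PCA step per degree so the "enhancement" features are deterministic functions of the data, unlike the random projections in BLS and RVFLNN.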
A novel system for frequency-doubling microwave signal generation with a tunable phase shift is proposed and demonstrated, based on a dual-polarization quadrature phase shift keying (DP-QPSK) modulator that includes two dual-parallel Mach-Zehnder modulators (DPMZMs). The radio frequency (RF) signal drives the top DPMZM to generate a negative first-order RF sideband on the X-polarization state and drives the bottom DPMZM to obtain a positive first-order RF sideband on the Y-polarization state. The two first-order sidebands then enter a three-wave-plate polarization controller (PC) of the half-quarter-quarter (HQQ) type, where their phases are controlled. After a polarizer and a photodiode (PD), a frequency-doubling microwave or millimeter-wave signal with a tunable phase shift is produced. The results indicate that a full 360-degree phase shift is realized, while the phase deviation is less than 1 degree and the amplitude deviation is no more than 0.3 dB.
In this paper, we present a combination of statistical and template-based pattern matching to solve the problem of authentication with very short command words. The same features are used in both methods to reduce the computational cost. The first method uses a GMM-UBM (Gaussian Mixture Model with Universal Background Model), which is well known in the speaker recognition field but lacks the ability to model the temporal aspect of speech. The second method provides a remedy for this with classical DTW (Dynamic Time Warping) on the cepstral features. Two schemes for combining the models are explored: first, a layered design in which the DTW distance is calculated only if the GMM-UBM accepts the speaker; and second, weighting the DTW distance using the confidence of the GMM-UBM result. With these combinations, improvements of 23% and 17% in EER were observed, respectively, each with differing characteristics on the three error types investigated. The experiment was conducted on the evaluation set of the RSR2015 database part 2, which contains short words meant for command-and-control tasks. Performance analysis is done using the Detection Error Tradeoff (DET) curve and the Equal Error Rate (EER).
Acquisition of Direct Sequence Spread Spectrum-Minimum Shift Keying (DSSS-MSK) signals in low signal-to-noise ratio (SNR) and highly dynamic environments seriously affects the overall performance of the receiving system. The proposed all-digital IF receiver has a serial structure, transforming the DSSS-MSK signal into an approximate DSSS-BPSK signal using a matched filter. The matched filter is designed according to the known frequency response based on convex optimization. Then, the signals are regrouped by spreading code period. Finally, by combining Doppler frequency shift compensation with an FFT-based parallel code acquisition algorithm, the PN code phase difference and the Doppler frequency shift are captured simultaneously. Simulation results show that the proposed algorithm achieves 7 dB and 8 dB SNR improvements over the delay correlation method and the ML-FFT method, respectively. Furthermore, the proposed algorithm has a fast acquisition rate, wide acquisition range, high acquisition accuracy, and low complexity, and it is suitable for low-SNR environments.
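The FFT-based parallel code search evaluates every code phase at once via the circular cross-correlation theorem; the sketch below demonstrates it on a toy ±1 code (Doppler compensation omitted).

```python
import numpy as np

def fft_acquire(received, code):
    """Parallel PN-code acquisition: all circular correlation lags at once
    via corr = IFFT(FFT(r) * conj(FFT(c))); returns (best lag, |corr|)."""
    R = np.fft.fft(received)
    C = np.fft.fft(code)
    corr = np.fft.ifft(R * np.conj(C))
    return int(np.argmax(np.abs(corr))), np.abs(corr)

# A +/-1 PN-like code, circularly delayed and buried in unit-power noise:
# the FFT search still finds the true code phase in one shot.
rng = np.random.default_rng(4)
n, true_lag = 1023, 357
code = rng.choice([-1.0, 1.0], n)
received = np.roll(code, true_lag) + 1.0 * rng.standard_normal(n)
lag, corr = fft_acquire(received, code)
```

A full acquisition engine repeats this search over a grid of Doppler-compensated replicas and picks the (lag, frequency) cell with the largest peak.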
Sub-Nyquist sampling jamming has become a common means in ISAR countermeasures to generate multiple false-target images. In this paper, sub-Nyquist sampling jamming against bistatic ISAR imaging with a V-style frequency modulation (V-FM) signal is presented. The jamming signals, which are formed from the intercepted radar signals under the sub-Nyquist sampling theorem and scattered by moving targets, are collected to obtain high-resolution range profiles (HRRP) by dual-channel dechirping, and a Fourier transform is then taken to obtain the final false-target images. The influence of the sub-Nyquist sampling rate and the bistatic angle on the imaging results is analyzed. Simulated trials validate the correctness of the analyses, and the well-focused false-target images support the effectiveness of the sub-Nyquist sampling idea in countermeasures against V-FM bistatic ISAR.
Recently, the robot industry has seen considerable development across a wide range of robots, including pet robots and nursing care robots. However, robots that track and support specific people under all circumstances have not yet been developed. This paper proposes a welfare robot that tracks a specific person and provides him/her with personal care. In this research, we examined tracking systems that recognize and follow the targeted person. Comparing two trackers, KCF proved more versatile than the particle filter. However, since KCF's precision also degrades when the target is occluded, we try to incorporate the detection equation of the particle filter and to take measures such as expanding the detection area when the target person is lost.
The need to convert printed text into an editable, computer-documented form has increased rapidly in recent years, and it is fulfilled by Optical Character Recognition (OCR). The challenge is to develop a character recognition mechanism that converts scanned images into an electronic form that allows the text to be reused, with access to every line and word of the document. This paper analyzes the architecture and method used for text recognition in OCR as performed by Tesseract and extends this to an application that can transform large numbers of paper-printed documents, such as magazines, books, and newspapers, into an editable electronic format. This paper hence provides an application system that can make digitization of physical documents faster and more accurate.
Ensemble methods have been broadly used in many applications to integrate individual models for better performance, such as accuracy and robustness. This paper focuses on using deep Convolutional Neural Network (CNN) models and ensembling them for image classification. For a comprehensive and comparative study, five ensemble techniques are explored and their combinations are applied in a two-stage ensemble process. In detail, multiple basic CNN models (structures) are first predefined to increase model variety for the ensemble, and each basic CNN model is trained multiple rounds based on sub-sampling of the training dataset. In the first stage of the ensemble, multiple predictions (through sub-sampling) from each basic CNN structure are combined. This is followed by the second stage of the ensemble, which integrates the outputs from all basic CNN structures. Experiments were conducted using Kaggle's 'Statoil/C-CORE Iceberg Classifier Challenge' image data for iceberg and ship classification. The experimental results showed that ensembling CNN models improves classification accuracy in this image classification problem.
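The two-stage combination can be sketched with plain probability averaging (one of the simplest of the five techniques the paper explores); the "CNN outputs" below are synthetic probability matrices, since training actual CNNs is out of scope here.

```python
import numpy as np

def ensemble_average(prob_list):
    """Average a list of (n_samples, n_classes) probability arrays."""
    return np.mean(np.stack(prob_list), axis=0)

# Two-stage ensemble on toy predictions: three sub-sampled runs of each of
# two CNN "structures" are averaged within a structure first (stage 1),
# then across structures (stage 2).
rng = np.random.default_rng(5)
n, c = 8, 2
truth = rng.integers(0, c, n)

def fake_run(noise):
    """A noisy but better-than-chance probability matrix for `truth`."""
    p = np.full((n, c), 0.35)
    p[np.arange(n), truth] = 0.65
    p += noise * rng.standard_normal((n, c))
    p = np.clip(p, 1e-6, None)
    return p / p.sum(axis=1, keepdims=True)

structure_a = [fake_run(0.1) for _ in range(3)]   # stage 1 inputs, model A
structure_b = [fake_run(0.1) for _ in range(3)]   # stage 1 inputs, model B
stage1 = [ensemble_average(structure_a), ensemble_average(structure_b)]
final = ensemble_average(stage1)                  # stage 2: across structures
pred = final.argmax(axis=1)
acc = float((pred == truth).mean())
```

Averaging suppresses the independent noise in the individual runs, which is the basic mechanism by which the two-stage ensemble improves accuracy.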
Illumination estimation algorithms aim to estimate the RGB of the scene illumination color at the time the image was taken, which is a significant means of achieving color constancy. They can be divided into three categories: pixel-based algorithms, learning-based algorithms, and combination algorithms. Compared with the other two kinds, pixel-based algorithms perform relatively poorly. In this paper, we add an L0-norm smoothing preprocessing step to pixel-based algorithms to improve their performance. L0-norm smoothing can suppress insignificant details and maintain the major edges of an image. Experimental results show that our optimization approach effectively enhances the performance of pixel-based algorithms.
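Gray-world is a representative pixel-based algorithm of the kind the paper improves; the sketch below shows the estimator itself (without the proposed L0 smoothing step) on a synthetic achromatic scene.

```python
import numpy as np

def gray_world(img):
    """Gray-world illumination estimate: assuming the average scene
    reflectance is achromatic, the per-channel means (normalized to a
    unit vector) estimate the illuminant color direction."""
    est = img.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

# A neutral toy scene lit by a reddish illuminant: gray-world recovers
# the illuminant direction, measured by the usual angular error.
rng = np.random.default_rng(6)
reflect = rng.uniform(0.2, 0.8, (32, 32, 1))   # achromatic surfaces
illum = np.array([0.8, 0.5, 0.33])
img = reflect * illum                           # per-pixel RGB image
est = gray_world(img)
true_dir = illum / np.linalg.norm(illum)
ang_err = float(np.degrees(np.arccos(np.clip(est @ true_dir, -1.0, 1.0))))
```

The paper's preprocessing would smooth `img` before this step, so that texture details no longer bias the channel statistics such estimators rely on.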
The purpose of this research is to explore interactive design approaches for MOBA (multiplayer online battle arena) mobile games based on user experience. Interactive design cases of MOBA games are adopted as the basic information. The close connection between flow theory and game motivation is used to study the specific needs of MOBA mobile game players. According to the design principles of the mobile terminal and the characteristics of MOBA games, four interactive design approaches are proposed: improving gesture operation efficiency, strengthening the process experience, emotional arousal, and objective orientation with flexible feedback. By providing players with the optimal user experience, this research attempts to come up with interactive design approaches that can improve the user experience of MOBA mobile games.
In automated visual inspection of car-body painting defects, high-quality defect images are difficult to acquire because of light properties, interference, and diffraction. In this research, with an experimental setup imitating an automotive painting production line, several single-light-source images are acquired with a 24.2-megapixel digital camera. The images are fused by two pixel-by-pixel fusion methods. The maximum fusion method selects, at every pixel position, the pixel with the maximum intensity. In the wavelet fusion method, we merge the two images at decomposition level 1 using db2, taking the maximum for both the approximation and the detail coefficients. The fused images of both methods are able to reveal painting defects, such as dust, sagging, and peel-off, with useful information about their shapes and dimensions.
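The maximum-fusion rule is a one-liner; the sketch below applies it to two toy single-light-source images (the wavelet variant would need a wavelet library and is omitted here).

```python
import numpy as np

def max_fusion(images):
    """Pixel-by-pixel maximum fusion: keep the brightest value at each
    pixel position across the differently lit exposures."""
    return np.maximum.reduce([np.asarray(im, dtype=float) for im in images])

# Two toy "single light source" images each reveal a different defect
# highlight; the fused image contains both.
a = np.zeros((4, 4))
a[1, 1] = 0.9    # defect highlight visible under light source 1
b = np.zeros((4, 4))
b[2, 2] = 0.8    # defect highlight visible under light source 2
fused = max_fusion([a, b])
```

Because specular highlights move with the light source, taking the per-pixel maximum accumulates defect evidence that no single exposure contains on its own.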
Owing to its performance and low complexity, JPEG-LS became the standard for lossless and near-lossless image compression. However, it cannot accurately control the code rate when applied to near-lossless compression. This paper is thus devoted to rate control for near-lossless image compression with JPEG-LS. A model of the coding bit rate at high bit rates, with respect to the mean absolute difference (MAD) and the coding quantization parameter for predictive coding, is first proposed. Then, a rate control method for near-lossless compression is designed for JPEG-LS based on this model. In coding a specific image, the quantization parameter is adjusted piecewise based on the model to control the bit rate. Experiments show that the proposed method can bring the final code rate close to a preset rate. Unlike other methods, wide fluctuation of the quantization parameter is avoided because of the accurate bit-rate model. As a result, the proposed control method can achieve approximately optimal rate-distortion performance.
The VWorld Data Center of South Korea provides the latest high-definition 3D national spatial information. In this paper, we propose a 3D terrain object generation method using VWorld data. A tile-based 3D map has gap problems where the elevation change is large, as in mountainous terrain. Since tiles of different levels are adjacent to each other, the elevations at tile boundaries do not match, so empty space or the border area is exposed between tiles. Our proposed method, which considers the characteristics of the tiles provided by the VWorld Data Center, minimizes the gaps between tiles. Gaps are minimized by making the size of the outer cell equal to that of the adjacent higher-level tile. We describe the 3D terrain generation method and experimental results. The proposed method can also be used on other tile-based 3D map platforms.
Conservation biologists use camera traps to study snow leopards. In this research, we introduce a method that streamlines the process of recognizing individual snow leopards in a large camera trap study. The proposed solution is based on an open-source software called HotSpotter, which was originally developed to identify uniquely patterned animals, such as Grevy’s zebras. The legacy HotSpotter involves time-consuming tasks such as manual selection of a region of interest (ROI) within each image, manual querying of each individual image against a database, and manual interpretation of results of each query to arrive at an estimate of a population count in a camera trap study. We introduce autonomous selection of multiple ROIs in motion templates corresponding to camera trap images, automate the query process, and propose a method to build associations between individual ROIs based on clustering of similarity scores using Markov Clustering Algorithm. The proposed technique with its promising results of correctly recognizing individual snow leopards has the potential to save conservation biologists thousands of hours of manual labor.
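A minimal Markov Clustering (MCL) pass over a similarity matrix, the clustering step the paper proposes for grouping ROI query scores, can be sketched as below; the similarity values are invented, and the expansion/inflation parameters are common defaults rather than the authors' settings.

```python
import numpy as np

def mcl(similarity, expansion=2, inflation=2.0, iters=50):
    """Markov Clustering on a similarity matrix: alternate expansion
    (matrix power) and inflation (elementwise power plus column
    renormalization) until the flow matrix converges, then read the
    clusters off the rows that retain nonzero mass."""
    M = np.asarray(similarity, dtype=float)
    M = M / M.sum(axis=0, keepdims=True)           # make column-stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)   # expansion step
        M = M ** inflation                         # inflation step
        M = M / M.sum(axis=0, keepdims=True)
    clusters = []
    for row in M:
        members = set(np.flatnonzero(row > 1e-6))
        if members and members not in clusters:
            clusters.append(members)
    return clusters

# Block-structured similarities (two individuals, three sightings each,
# with self-similarity on the diagonal): MCL recovers the two groups.
S = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.0, 0.1],
    [0.9, 1.0, 0.9, 0.0, 0.1, 0.0],
    [0.8, 0.9, 1.0, 0.1, 0.0, 0.1],
    [0.1, 0.0, 0.1, 1.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.9, 1.0, 0.9],
    [0.1, 0.0, 0.1, 0.8, 0.9, 1.0],
])
clusters = mcl(S)
```

Each resulting cluster of ROIs corresponds to one putative individual, which is how clustering the pairwise HotSpotter scores yields a population count.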
In this work, we present a discriminative and effective local texture descriptor for bark image classification. The proposed descriptor is based on three factors: the pixel, magnitude, and direction values. Unlike most descriptors derived from the original local binary pattern, the proposed descriptor captures the changes of local texture in bark images. Its performance is evaluated on three benchmark datasets, and the experimental results show that our approach is highly effective.
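For readers unfamiliar with the local binary pattern family the descriptor builds on, here is a minimal sketch of the classic 8-neighbour LBP (a generic baseline, not the proposed descriptor itself, which additionally uses magnitude and direction values):

```python
import numpy as np

def lbp_8(img):
    """Classic 8-neighbour local binary pattern: threshold each
    neighbour against the centre pixel and pack the results into bits."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(int) << bit
    return out

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
print(lbp_8(img))  # [[120]]
```

A histogram of these codes over the image then serves as the texture feature for classification.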
Security analysis of sensitive scenes and medical diagnosis both depend heavily on determining the exact location of the regions where events occur. In this paper, we propose a model that applies clustering and pooling techniques to local features using a Bag-of-Words (BoW) descriptor in an SVM framework for event detection in video sequences. The proposed model extracts local features from six categories of the Columbia Consumer Video (CCV) event detection benchmark. We cluster these features using a KD-search tree and Lloyd's algorithm, and pool the resulting clusters into vectors with the bag-of-words model. By inferring temporal instance labels, the model performs event detection quickly. Such performance could benefit social media by retrieving the most relevant content, and the proposed model can efficiently handle event detection experiments at big-data scale in visual media. Furthermore, the proposed approach is invariant to rotation, translation, and scale changes in the video sequence and robust to changes in illumination and viewpoint.
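The clustering and pooling steps named in the abstract, Lloyd's algorithm followed by bag-of-words pooling, can be sketched as follows (a toy 2-D illustration, not the authors' CCV pipeline, and without the KD-tree acceleration):

```python
import numpy as np

def lloyd_kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest
    centroid, then move each centroid to its cluster mean."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def bow_vector(features, centroids):
    """Pool a variable-size set of local features into a fixed-length
    bag-of-words histogram over the visual vocabulary."""
    d = np.linalg.norm(features[:, None] - centroids[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centroids)).astype(float)
    return hist / hist.sum()

# toy local features drawn from two well-separated visual words
X = np.vstack([np.zeros((10, 2)), np.ones((10, 2)) * 5])
centroids, _ = lloyd_kmeans(X, k=2)
print(bow_vector(X, centroids))  # balanced histogram
```

The fixed-length histogram is what an SVM then consumes, regardless of how many local features each video contributed.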
Positron emission tomography (PET) images are often used clinically because they can non-invasively show the accumulation of cancer cells. The standardized uptake value (SUV) is the most common semi-quantitative measurement derived from PET images. However, the SUV has some limitations, such as the difficulty of expressing temporal change quantitatively and of unifying imaging conditions such as the uptake time after medicine administration. In addition, the textural features obtained from PET images reveal the presence of tumors that appear only as vague shadows. Although feature analysis of tumors using the SUV has been widely studied, the numerical information obtainable on tumors from PET images is limited, so this wealth of information cannot be fully utilized. Parameters that quantitatively evaluate the state of a tumor within PET images, collectively called texture, therefore need to be better established. Texture analysis comprises various mathematical methods that quantify the relationships between the grey-level intensity values of pixels or voxels and their spatial pattern within PET images, and it is used for classification and discrimination. In this study, we propose a statistical texture analysis, specifically using Shannon's information entropy and the Kullback-Leibler divergence. We verified our method in simulation and quantified the distribution inside a tumor. We also examined clinical data in the same way; however, as no appropriate evaluation result was obtained, there is room for further improvement of this system.
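The two statistics the study uses, Shannon entropy and Kullback-Leibler divergence, can be computed from an intensity histogram as follows (a generic sketch; the paper's exact histogram construction is not specified in the abstract):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete intensity histogram."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) in bits; eps avoids
    division by zero for empty reference bins."""
    p, q = np.asarray(p, float), np.asarray(q, float) + eps
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

uniform = np.full(4, 0.25)              # maximally heterogeneous histogram
peaked = np.array([0.7, 0.1, 0.1, 0.1])  # concentrated uptake
print(shannon_entropy(uniform))          # 2.0 bits
print(kl_divergence(peaked, uniform))    # positive: distributions differ
```

A homogeneous tumor region yields maximal entropy, while the KL divergence measures how far a region's histogram deviates from a reference distribution.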
Neurological disorders typically exhibit movement disabilities, and disorders such as cerebellar ataxia (CA) can cause coordination inaccuracies often manifested as disabilities in gait, balance, and speech. Since severity assessment of the disorder is based on expert clinical opinion, it is likely to be subjective. Automated versions of two upper-limb tests, the Finger-to-Nose test (FNT) and the Diadochokinesia (DDK) test, are investigated in this paper. Inertial Measurement Units (IMUs) (BioKinTM) are employed to capture the disability by measuring limb movements. Translational and rotational accelerations, considered as kinematic parameters, provided features relevant to the characteristic movements intrinsic to the disability. Principal Component Analysis (PCA) and a multi-class Linear Discriminant Analysis (LDA) classifier were used to identify the dominant features correlating with the clinical scores. The relationship between the clinicians' assessment and the objective analysis was examined using Pearson correlation. This study found that although the FNT predominantly consists of translational movements, rotation was the dominant feature, while for the DDK test, which predominantly consists of rotational movements, acceleration was the dominant feature. The degree of correlation in each test was also enhanced by combining features from different tests.
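The PCA step for isolating dominant kinematic features can be illustrated with a generic eigendecomposition sketch on synthetic data (not the study's IMU features; array shapes are illustrative):

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via eigendecomposition of the covariance matrix:
    project centred data onto the top principal components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:n_components]
    return Xc @ eigvec[:, order]

# toy kinematic feature matrix: third column is redundant with the first
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
X = np.hstack([X, X[:, :1] * 2])  # correlated third feature
Z = pca(X, n_components=2)
print(Z.shape)  # (50, 2): two dominant components retained
```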
The Visual Field (VF), as image data, is the spatial array of visual sensations available to observation in introspectionist psychological experiments. The retina, which lines the inner surface of the eye, contains the retinal nerve fiber layer, which can be measured as Retinal Nerve Fiber Layer (RNFL) thickness data. Both are obtained from ophthalmologic diagnostic equipment. VF data are generally used to diagnose diseases that produce symptoms in the optic nerve and retina, such as glaucoma and macular degeneration. Until now, the image data had to be entered manually to develop a machine-learning-based diagnostic model. In this paper, we show how to automatically extract the numerical data we need from images using Optical Character Recognition (OCR) technology. Furthermore, we increased the recognition rate by adding a function that detects errors in the recognized numbers and corrects them. Based on the accumulated data, we built a glaucoma diagnostic model.
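Error detection and correction on recognized numbers can be illustrated with a simple confusion-mapping plus range-check sketch (hypothetical rules; the paper's actual correction function and value ranges are not described in the abstract):

```python
# Common OCR glyph-to-digit confusions (hypothetical mapping)
CONFUSIONS = {"O": "0", "o": "0", "l": "1", "I": "1", "S": "5", "B": "8"}

def correct_ocr_number(text, lo=0.0, hi=100.0):
    """Map frequently confused glyphs to digits, then reject values
    outside a physically plausible range for the measurement."""
    cleaned = "".join(CONFUSIONS.get(ch, ch) for ch in text)
    try:
        value = float(cleaned)
    except ValueError:
        return None  # unrecoverable recognition error
    return value if lo <= value <= hi else None

print(correct_ocr_number("l2.5"))  # 12.5
print(correct_ocr_number("9O"))    # 90.0
print(correct_ocr_number("8B8"))   # None: 888 fails the range check
```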
Thickening agents are commonly used to prevent aspiration, a condition that can prove fatal in the elderly. However, no established indicators show the extent to which sticky food reduces the risk of aspiration. Videoendoscopy (VE) and videofluorography (VF) are the classic inspection methods for evaluating the swallowing function, but both have limited utility because they are invasive. In this study, we propose a non-invasive method that exploits ultrasound videos of the esophagus to estimate the internal flow characteristics of foods and facilitate quantitative evaluation of the swallowing function. The method combines optical flow with Maximally Stable Extremal Regions (MSER) to extract the movement velocity and position of the esophagus and the bolus. The results suggest that movement velocity could serve as an indicator to quantify the internal flow characteristics of foods, while the displacement of the esophagus indicates the esophageal opening and could serve as an indicator to evaluate swallowing.
This paper proposes a new food-evaluation technique based on biological information from the human body. Such a technique can allow us to develop high-value food products at the food-design stage. We focus in particular on the small intestine and assume that digestive activity varies depending on an individual's constitution, health condition, and the compatibility between the individual and a food. Our method tracks digesta in the small intestine and determines their status using an ultrasound B-mode movie: we capture the peristaltic activity of the small intestine with a frame-subtraction method and determine the status of the digesta using optical flow. This allows the peristaltic activity of the small intestine to be quantified based on tracking of digesta in an abdominal B-mode movie.
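The frame-subtraction step can be sketched as follows (toy frames; the actual B-mode preprocessing and threshold are not detailed in the abstract): motion between consecutive frames is quantified as the fraction of pixels whose intensity changes beyond a threshold.

```python
import numpy as np

def peristaltic_activity(frames, threshold=10):
    """Quantify motion per frame pair by counting the fraction of
    pixels whose absolute intensity change exceeds a threshold."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))
    return (diffs > threshold).mean(axis=(1, 2))

# toy B-mode sequence: a bright region shifts one pixel in the last frame
f0 = np.zeros((8, 8)); f0[2:4, 2:4] = 200
f1 = np.zeros((8, 8)); f1[2:4, 3:5] = 200
activity = peristaltic_activity([f0, f0, f1])
print(activity)  # zero activity, then nonzero where the region moved
```

Plotting this activity over time gives a simple peristalsis signal; optical flow then adds the direction and speed of the tracked digesta.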
Recently, resting-state functional magnetic resonance imaging (R-fMRI) has been applied as a powerful tool to explore potential biomarkers of autism spectrum disorder (ASD). However, in clinical data the number of ASD patients is significantly smaller than the number of typical development (TD) subjects, which produces imbalanced data. When imbalanced data are used to predict ASD, the prediction results are not satisfactory. To improve ASD prediction performance on imbalanced data, this paper adopts a clustering oversampling method to enhance the representation of the minority class (ASD) and obtain a balanced data distribution. After feature selection, a clustering algorithm forms a few clusters in the ASD group and in the TD group, respectively, and new samples for each cluster are generated by the synthetic minority oversampling technique (SMOTE) to convert the imbalanced data into balanced data. Finally, we construct a linear support vector machine (SVM) classification model for ASD prediction. The prediction accuracy on multi-center imbalanced R-fMRI data increased from 59.70% to 66.62% using hierarchical clustering oversampling. The experimental results show that the clustering oversampling method can effectively improve the prediction performance on imbalanced R-fMRI data.
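The oversampling step can be sketched with a minimal SMOTE applied to one cluster (illustrative only; the study applies SMOTE per cluster after hierarchical clustering and feature selection, and the feature dimensions here are synthetic):

```python
import numpy as np

def smote(X, n_new, k=3, seed=0):
    """Minimal SMOTE: each synthetic sample is interpolated between
    a random minority point and one of its k nearest minority
    neighbours."""
    rng = np.random.default_rng(seed)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        new.append(X[i] + rng.random() * (X[j] - X[i]))
    return np.array(new)

# toy minority-class (ASD) cluster of 10 samples with 4 features
minority = np.random.default_rng(1).normal(size=(10, 4))
synthetic = smote(minority, n_new=15)
balanced = np.vstack([minority, synthetic])
print(balanced.shape)  # (25, 4): cluster grown toward class balance
```

Because each synthetic point lies on a segment between two real minority samples, the augmented cluster stays inside the original feature region instead of injecting arbitrary noise.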
Enhancing the engagement of patients with brain injury during rehabilitation exercises is beneficial for recovery. The objective of this paper is to investigate the influence of game difficulty level on patient engagement during exercise. Five patients with brain injury were recruited to play a Tetris game with their affected limb at different difficulty levels. During the experiment, the patients' EMG and EEG were monitored to analyze motor and cognitive engagement, respectively. The results showed a significant difference in patient engagement across difficulty levels. Moreover, a positive correlation was identified between motor engagement and cognitive engagement as the cognitive difficulty of the game increased. The indicators are able to represent the actual engagement level of patients with brain injury during rehabilitation exercise. A proper difficulty level for the cognitive task is not only beneficial for cognitive engagement but can also promote motor engagement.
With the development of multimedia and communication technologies in domains like telemedicine, it is critical to determine whether a medical image has been modified by some processing operation. Previous work has mainly addressed medical-image modification by global image processing methods. In this paper, we focus on detecting both local and global image processing and propose a convolutional neural network (CNN) based method for this task. The input to the CNN is the characteristic function image (CFI), proposed in this paper, which is extracted from reorganized block-based discrete cosine transform coefficients. The MURA dataset and seven image processing methods are used to evaluate the proposed method, and the experimental results demonstrate its effectiveness.
The small-sample condition of communication radio signals causes poor individual recognition of radios. To solve this problem, we propose a novel method for identifying individual communication radios based on a semi-supervised rectangular network. First, the square integral bispectrum feature is extracted from the radio signal and corrupted with artificially injected Gaussian noise. The corrupted sample is passed to the encoder of the semi-supervised rectangular network for supervised training. The trained parameterization is then mirrored to the decoder through lateral connections across the model, and the decoder output is forced, through unsupervised learning, to be close to the clean input. Once the optimal parameters are obtained by minimizing the cost function of the full network, the essential features extracted are taken as the individual signature of the radio signals. Individual recognition is finally accomplished by a softmax classifier. The robustness of the proposed method is verified on several radio datasets collected in real environments, and the experimental results indicate that the method performs well at identifying individual radios of the same type under small-sample conditions.
Under ideal channel characteristics, the space-time-polarization adaptive processing (STPAP) technique can mitigate interference effectively in the space, time, and polarization domains. However, channel mismatch is inevitable in practice and causes amplitude and phase errors. In the current work, the effect of channel mismatch on the performance of STPAP is investigated. Firstly, the STPAP architecture and the channel mismatch model are established. Then, the received signal model without channel mismatch is put forward. After that, a novel channel mismatch model for GNSS signals is proposed, on which the received signal model under channel mismatch is based. Furthermore, the effect of channel mismatch on the performance of STPAP is analyzed theoretically. Finally, simulation results indicate that channel mismatch can greatly degrade the performance of STPAP.
This paper focuses on the design of flexible, wearable antennas using textile substrates for wireless/satellite-based communication and control systems supporting the Internet of Things (IoT). These antennas rely on on-body communication in wireless body area networks (WBANs). The antennas are designed on two different textile substrates, jeans and polyester, with εr of 1.7 and 2.8, respectively. The substrates are selected for ease of wearability and for the compact size of the designed antennas. The antennas are designed to operate in the C-band (4-8 GHz), which is popular for satellite communications; a higher frequency band is selected to overcome congestion in the lower satellite frequency bands. Simulation parameters such as bandwidth, reflection coefficient (S11), 2D and 3D radiation patterns, directivity, gain, and efficiency of the two antennas are compared and analysed. The maximum achieved gain, bandwidth, and efficiency are 3.8 dBi, 9.8 GHz, and 88.4% for the jeans-substrate antenna and 3.1 dBi, 6.7 GHz, and 77.5% for the polyester-substrate antenna, respectively. The antennas are designed using the Agilent Advanced Design System simulator.
Secure communication in a wireless environment is vital, and cryptographic schemes are usually used to ensure it. Such schemes require secret keys, unknown to eavesdroppers, to secure communication between two legitimate users. In recent years, secret key generation schemes that use the wireless channel as the key-generation source have become a very interesting and promising topic. The main problem with this approach is the high wireless-channel mismatch between the two users caused by non-simultaneous measurements and device noise. In this paper, we propose a secret key generation scheme that uses the Savitzky-Golay filter to reduce this mismatch. The scheme is also equipped with a multibit quantization method to increase the secret key generation rate. Our test results show a decrease in the wireless-channel mismatch between the two users and an increase in the key generation rate.
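The two stages named in the abstract, Savitzky-Golay smoothing followed by multibit quantization, can be sketched as follows (a generic illustration via per-window least-squares polynomial fits; the paper's window length, polynomial order, quantizer design, and channel measurements are not given in the abstract):

```python
import numpy as np

def savgol(x, window=5, order=2):
    """Savitzky-Golay smoothing via least-squares polynomial fits
    over a sliding window (edges handled by shrinking the window)."""
    half = window // 2
    y = np.empty(len(x), dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        t = np.arange(lo, hi)
        coeffs = np.polyfit(t, x[lo:hi], min(order, len(t) - 1))
        y[i] = np.polyval(coeffs, i)
    return y

def quantize(x, bits=2):
    """Multibit quantization: map each sample to a 'bits'-bit symbol
    by equal-width binning between the signal's min and max."""
    levels = 2 ** bits
    edges = np.linspace(x.min(), x.max(), levels + 1)[1:-1]
    return np.digitize(x, edges)

# toy channel-measurement trace (e.g. received signal strength)
rss = np.array([1.0, 1.2, 5.0, 1.1, 1.3, 6.0, 5.8, 6.2])
smoothed = savgol(rss)
key_symbols = quantize(smoothed, bits=2)
print(key_symbols)  # integers in [0, 3]: 2 key bits per sample
```

Smoothing both users' traces before quantization suppresses the independent measurement noise, so the two quantized symbol streams disagree in fewer positions.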