This PDF file contains the front matter associated with SPIE Proceedings Volume 9120 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
In this paper, we study image enhancement techniques for stereoscopic images and propose an enhancement algorithm based on salient features and the wavelet transform. In the proposed algorithm, the stereoscopic images are decomposed into subbands, and the wavelet coefficients are modified according to salient features. Objective and subjective tests were performed to verify the effectiveness of the proposed algorithm. The experimental results show that it outperforms several conventional algorithms and has great potential for the enhancement of stereoscopic images.
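As a concrete illustration of the kind of processing described above, the sketch below boosts wavelet detail coefficients according to a saliency map. The saliency measure (gradient magnitude) and the gain rule are illustrative assumptions, not the authors' salient-feature model.

```python
# Hedged sketch: wavelet-domain enhancement guided by a saliency map.
import numpy as np
import pywt
import cv2

def enhance_view(gray, wavelet="db2", levels=2, max_gain=1.8):
    # Crude saliency proxy: normalized gradient magnitude (assumption).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    sal = cv2.magnitude(gx, gy)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-6)

    coeffs = pywt.wavedec2(gray.astype(np.float32), wavelet, level=levels)
    out = [coeffs[0]]  # keep the approximation subband unchanged
    for detail in coeffs[1:]:
        boosted = []
        for band in detail:  # (horizontal, vertical, diagonal) detail subbands
            s = cv2.resize(sal, (band.shape[1], band.shape[0]))
            boosted.append(band * (1.0 + (max_gain - 1.0) * s))
        out.append(tuple(boosted))
    rec = pywt.waverec2(out, wavelet)
    return np.clip(rec[:gray.shape[0], :gray.shape[1]], 0, 255).astype(np.uint8)

# Applying the same gain schedule to the left and right views keeps the
# enhanced stereo pair consistent.
```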
In this paper, we propose an extended pairs-of-values analysis to detect, and estimate the length of, secret messages embedded with 2LSB replacement in digital images, based on the chi-square attack and the regularity rate of pixel values. The detection process is separated from the estimation of the hidden message length, since detection is the main requirement of any steganalysis method. The detection process therefore acts as a discrete classifier that separates a given set of images into stego and clean classes. The method can accurately detect 2LSB replacement even when the message length is about 10% of the total capacity, and it reaches its best performance, with an accuracy above 0.96 and a true positive rate above 0.997, when the amount of embedded data is 20% to 100% of the total capacity. The method makes no assumptions about either the image or the secret message: it was tested on two sets of 3000 images, compressed and uncompressed, each embedded with a random message. The detector could also be used as an automated tool to analyse large numbers of images for hidden content, which would be useful to digital forensics analysts in their investigations.
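The chi-square component of such a detector can be illustrated as follows: under full 2LSB replacement with random bits, the four intensity values sharing the same upper six bits occur with near-equal frequency. This sketch implements that generic test only; the paper's extended pairs-of-values analysis, regularity-rate term, and length estimator are not reproduced, and the decision threshold is an assumption.

```python
# Hedged sketch of a chi-square style detector for 2LSB replacement.
import numpy as np
from scipy.stats import chi2

def chi2_2lsb_pvalue(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    groups = hist.reshape(64, 4)             # quadruples with equal upper 6 bits
    expected = groups.mean(axis=1, keepdims=True)
    mask = expected[:, 0] > 4                # skip sparsely populated groups
    if not mask.any():
        return 0.0
    stat = (((groups - expected) ** 2) / (expected + 1e-9))[mask].sum()
    dof = 3 * int(mask.sum())                # 3 degrees of freedom per group
    # A p-value near 1 means the counts are close to uniform, which is
    # consistent with full 2LSB embedding; a small p-value suggests a clean image.
    return 1.0 - chi2.cdf(stat, dof)

def looks_like_stego(gray, threshold=0.5):   # threshold is a placeholder
    return chi2_2lsb_pvalue(gray) > threshold
```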
This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such
as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and
stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into
16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby
the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate
that the proposed technique offers an effective compromise between payload capacity and stego quality of existing
embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas,
while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least
Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit
(MSB), the proposed scheme has a higher capacity than Natural-number-based embedding. Moreover, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect on pixel value quality than most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
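As a concrete member of the evaluated family, the sketch below performs a Fibonacci (Zeckendorf) decomposition of a pixel value into virtual bit-planes and embeds one bit in the lowest plane. The paper's own 16-plane representation is not reproduced, and the validity check is a simplifying assumption.

```python
# Hedged sketch: Fibonacci decomposition into virtual bit-planes and
# embedding in the lowest virtual plane.
FIB = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]  # weights covering 0..255

def to_fib_planes(value):
    bits, remaining = [], value
    for w in reversed(FIB):                  # greedy Zeckendorf expansion
        if w <= remaining:
            bits.append(1)
            remaining -= w
        else:
            bits.append(0)
    return list(reversed(bits))              # bits[i] belongs to weight FIB[i]

def from_fib_planes(bits):
    return sum(b * w for b, w in zip(bits, FIB))

def embed_virtual_lsb(value, secret_bit):
    bits = to_fib_planes(value)
    bits[0] = secret_bit                     # flip the lowest virtual plane
    candidate = from_fib_planes(bits)
    # Keep the change only if it still round-trips to the intended bit
    # (an assumption mirroring the validity checks such schemes require).
    return candidate if to_fib_planes(candidate)[0] == secret_bit else value
```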
The Least Significant Bit (LSB) embedding technique is a well-known and broadly employed method in multimedia
steganography, used mainly in applications involving single bit-plane manipulations in the spatial domain [1]. The key
advantages of LSB procedures are that they are simple to understand and easy to implement, offer high embedding capacity, and
can be resistant to steganalysis attacks. Additionally, the LSB approach has spawned numerous applications and can be
used as the basis of more complex techniques for multimedia data embedding. In the last several decades, hundreds of
new LSB or LSB variant methods have been developed in an effort to optimize capacity while minimizing detectability,
taking advantage of the overall simplicity of this method. LSB-steganalysis research has also intensified in an effort to
find new or improved ways to evaluate the performance of this widely used steganographic system. This paper reviews
and categorizes some of the major techniques of LSB embedding, focusing specifically on the spatial domain. The justification for establishing a proposed SD-LSB-centric taxonomy, and promising uses of it, are discussed. Specifically, we define a new taxonomy for SD-LSB embedding techniques with the goal of aiding researchers in tool classification methodologies that can lead to advances in the state of the art in steganography. With a common framework, researchers can begin to more concretely identify core tools and common techniques and to establish common standards of practice for steganography in general. Finally, we provide a summary of some of the most common LSB embedding techniques, followed by a proposed taxonomy standard for steganalysis.
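For reference, a minimal sketch of plain spatial-domain LSB embedding, the baseline technique the taxonomy covers; bit ordering and termination handling are simplified assumptions.

```python
# Minimal sketch of plain spatial-domain LSB (SD-LSB) embedding.
import numpy as np

def embed_lsb(cover, message_bits):
    flat = cover.ravel().copy()
    if len(message_bits) > flat.size:
        raise ValueError("message does not fit in the cover")
    flat[:len(message_bits)] = (flat[:len(message_bits)] & 0xFE) | message_bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return stego.ravel()[:n_bits] & 1

# Example: hide one byte in the first eight pixels of a grayscale image.
cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
bits = np.unpackbits(np.frombuffer(b"A", dtype=np.uint8))
stego = embed_lsb(cover, bits)
assert np.packbits(extract_lsb(stego, 8)).tobytes() == b"A"
```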
Imaging Techniques, Requirements, and Emerging Applications
Power centroid radar (PC-Radar) is a fast and powerful adaptive radar scheme that naturally surfaced from the
recent discovery of the time-dual for information theory which has been named “latency theory.” Latency theory
itself was born from the universal cybernetics duality (UC-Duality), first identified in the late 1970s, that has also
delivered a time dual for thermodynamics that has been named “lingerdynamics” and anchors an emerging lifespan
theory for biological systems. In this paper the rise of PC-Radar from the UC-Duality is described. The
development of PC-Radar, US patented, started with Defense Advanced Research Projects Agency (DARPA)
funded research on knowledge-aided (KA) adaptive radar of the last decade. The outstanding signal-to-interference-plus-noise ratio (SINR) performance of PC-Radar under severely taxing environmental disturbances will be established. More specifically, it will be seen that the SINR performance of PC-Radar, either KA or knowledge-unaided
(KU), approximates that of an optimum KA radar scheme. The explanation for this remarkable result is
that PC-Radar inherently arises from the UC-Duality, which advances a “first principles” duality guidance theory
for the derivation of synergistic storage-space/computational-time compression solutions. Real-world synthetic
aperture radar (SAR) images will be used as prior-knowledge to illustrate these results.
Content-based image retrieval is an automatic process of retrieving images according to image visual contents instead of
textual annotations. It has many areas of application from automatic image annotation and archive, image classification
and categorization to homeland security and law enforcement. The key issues affecting the performance of such retrieval
systems include sensible image features that can effectively capture the right amount of visual contents and suitable
similarity measures to find similar and relevant images ranked in a meaningful order. Many different approaches,
methods and techniques have been developed as a result of very intensive research in the past two decades. Among the many existing approaches is a cluster-based approach, in which clustering methods are used to group local feature
descriptors into homogeneous regions, and search is conducted by comparing the regions of the query image against
those of the stored images. This paper serves as a review of works in this area. The paper will first summarize the
existing work reported in the literature and then present the authors’ own investigations in this field. The paper intends to
highlight not only achievements made by recent research but also challenges and difficulties still remaining in this area.
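One simple instance of the cluster-based family can be sketched as follows: local descriptors are clustered into a small vocabulary and images are compared through their cluster histograms. ORB features, k-means, and cosine similarity are illustrative choices, not the specific methods surveyed in the paper.

```python
# Hedged sketch of the cluster-based retrieval idea.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def orb_descriptors(gray):
    orb = cv2.ORB_create(nfeatures=500)
    _, desc = orb.detectAndCompute(gray, None)
    return desc.astype(np.float32) if desc is not None else np.empty((0, 32), np.float32)

def build_vocabulary(images, k=64):
    all_desc = np.vstack([orb_descriptors(im) for im in images])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def signature(gray, vocab):
    words = vocab.predict(orb_descriptors(gray))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / (hist.sum() + 1e-9)

def similarity(query, stored, vocab):
    a, b = signature(query, vocab), signature(stored, vocab)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```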
In this paper, we propose a mobile system for aiding doctors in skin cancer diagnosis and other users in skin cancer monitoring. The basic idea is to use image retrieval techniques on a smart phone to help users find similar skin cancer cases stored in a database. The query image can be taken with a smart phone from a patient or uploaded from other sources. Two skin lesions are matched by their shapes, which are segmented from skin images using the skin lesion extraction method developed in [1]. The features used in the proposed system are obtained with Fourier descriptors. A prototype application has been developed and can be installed on an iPhone. With this application, users can use the iPhone as a diagnostic tool to find potential skin lesions on a person's skin and compare the lesions detected by the iPhone with those stored in a database on a remote server.
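The shape-matching step can be illustrated with a standard Fourier-descriptor computation; the normalization choices below are common conventions and may differ from the system's exact implementation.

```python
# Hedged sketch of Fourier-descriptor shape matching.
import numpy as np

def fourier_descriptor(contour_xy, n_coeffs=16):
    # Contours are assumed resampled to a fixed number of boundary points.
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # boundary as a complex signal
    spectrum = np.fft.fft(z)
    spectrum[0] = 0                                # drop DC -> translation invariance
    mags = np.abs(spectrum)                        # magnitudes -> rotation/start invariance
    mags = mags / (mags[1] + 1e-9)                 # scale normalization (assumption)
    return mags[1:1 + n_coeffs]

def shape_distance(contour_a, contour_b):
    return float(np.linalg.norm(fourier_descriptor(contour_a) -
                                fourier_descriptor(contour_b)))
```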
Ultrasound is an effective multipurpose imaging modality that has been widely used for monitoring and diagnosing early
pregnancy events. Technology developments coupled with wide public acceptance have made ultrasound an ideal tool for better understanding and diagnosis of early pregnancy. The first measurable signs of an early pregnancy are the
geometric characteristics of the Gestational Sac (GS). Currently, the size of the GS is manually estimated from
ultrasound images. The manual measurement involves multiple subjective decisions, in which dimensions are taken in
three planes to establish what is known as Mean Sac Diameter (MSD). The manual measurement results in inter- and
intra-observer variations, which may lead to difficulties in diagnosis. This paper proposes a fully automated diagnosis
solution to accurately identify miscarriage cases in the first trimester of pregnancy based on automatic quantification of
the MSD. Our study shows a strong positive correlation between the manual and the automatic MSD estimations. Our
experimental results, based on a dataset of 68 ultrasound images, illustrate the effectiveness of the proposed scheme: using a k-nearest neighbor classifier on the automatically estimated MSDs, early miscarriage cases are identified with classification accuracies comparable to those of domain experts.
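A minimal sketch of the diagnostic step, assuming the MSD is the average of the three orthogonal GS diameters and using placeholder training values:

```python
# Hedged sketch: MSD computation and k-NN classification over MSD values.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mean_sac_diameter(d_length_mm, d_depth_mm, d_width_mm):
    return (d_length_mm + d_depth_mm + d_width_mm) / 3.0

# Hypothetical training data: automatically estimated MSDs (mm) with labels
# 0 = viable pregnancy, 1 = miscarriage.
train_msd = np.array([[8.0], [12.5], [14.0], [26.0], [28.5], [31.0]])
train_lbl = np.array([0, 0, 0, 1, 1, 1])

clf = KNeighborsClassifier(n_neighbors=3).fit(train_msd, train_lbl)
print(clf.predict([[mean_sac_diameter(27.0, 24.0, 30.0)]]))  # -> class 1 here
```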
Ethnicity identification of face images is of interest in many areas of application. Different from face recognition of
individuals, ethnicity identification classifies faces according to the common features of a specific ethnic group. This
paper presents a multi-level fusion scheme for ethnicity identification that combines texture features of local areas of a
face using local binary patterns with color features using HSV binning. The scheme fuses the decisions from a k-nearest
neighbor classifier and a support vector machine classifier into a final identification decision. We have tested the scheme
on a collection of face images from a number of publicly available databases. The results demonstrate the effectiveness
of the combined features and show that the fusion scheme improves identification accuracy over identification using individual features and other state-of-the-art techniques.
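A sketch of the feature side of the scheme and a simple sum-rule fusion of the two classifiers' posterior scores; the bin counts and fusion weight are assumptions.

```python
# Hedged sketch: LBP texture + HSV colour features and k-NN/SVM score fusion.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def face_features(bgr, P=8, R=1, hsv_bins=8):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hsv_hist = [np.histogram(hsv[..., c], bins=hsv_bins, density=True)[0] for c in range(3)]
    return np.concatenate([lbp_hist] + hsv_hist)

def fused_prediction(knn: KNeighborsClassifier, svm: SVC, feats, w_knn=0.5):
    # Both classifiers are assumed trained on face_features vectors, with the
    # SVM created as SVC(probability=True); the sum rule is one simple choice.
    p = w_knn * knn.predict_proba(feats) + (1 - w_knn) * svm.predict_proba(feats)
    return p.argmax(axis=1)
```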
Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate
search efficiency in application domains which involve searching over a hypothesis space of reference templates or
models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration
rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm
computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration
rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses
to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the
framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used
to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on
simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general
formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over
a geometric misalignment transform hypothesis space. We present numerical results validating the modeling
assumptions and derived formulation.
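The quantity being modelled can also be computed directly from its definition, as in the sketch below for a prioritized search over hypothesis bins ranked by matching probability; the paper's closed-form equations are not reproduced here.

```python
# Hedged sketch: definition-level penetration rate for a prioritized search.
import numpy as np

def penetration_rate(bin_probs):
    p = np.asarray(bin_probs, dtype=float)
    p = p / p.sum()
    order = np.argsort(-p)                    # search highest-probability bins first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(p) + 1)   # position at which each bin is visited
    return float((p * ranks).sum() / len(p))  # expected fraction of bins examined

print(penetration_rate([0.5, 0.25, 0.15, 0.1]))   # peaked prior -> low penetration
print(penetration_rate([0.25, 0.25, 0.25, 0.25])) # flat prior -> (N+1)/(2N) = 0.625
```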
Advanced Kalman filters have been used extensively in the domain of video-based tracking of target objects. They can be viewed as an extension of the Kalman filtering principle: instead of using an object point mass as the tracker, as in the basic Kalman filter, alterations are made to incorporate advanced strategies. This is the typical formulation of the Kalman Enhanced Filter (KEF). Even though this allows non-linearity in the state prediction, it is constrained by its choice of the Kalman state transition function. Furthermore, the KEF does not provide a methodology for selecting the prior distribution. Proper tuning of these choices is critical for the performance of the KEF. This work addresses these constraints of the KEF and particularly targets two significant areas. First, it automates the state matrix generation process by fusing an alternative tracking mechanism into the KEF. This novel technique is tested on the tracking of a real video sequence and its efficacy is quantified.
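For context, the baseline recursion being extended is the standard constant-velocity Kalman filter over image coordinates; the KEF-specific state-matrix automation is not shown here, and the noise levels are placeholders.

```python
# Hedged sketch: constant-velocity Kalman filter (state = [x, y, vx, vy]).
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 1e-2 * np.eye(4)          # process noise (assumption)
R = 4.0 * np.eye(2)           # measurement noise (assumption)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the detector measurement z = [x_obs, y_obs]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```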
Face classification across multiple cameras has wide applications in surveillance. In this paper, the efficacy of a multi-frame
decision-level fusion scheme for face classification based on the photon-counting linear discriminant analysis is
investigated. The photon-counting linear discriminant analysis method is able to realize Fisher’s criterion without
preprocessing for dimensionality reduction. The decision-level fusion scheme is comprised of three stages: score
normalization, score validation, and score combination. After normalization, the candidate scores are selected and
combined by means of a score validation process and a fusion rule, respectively, in order to generate a final score. In the
experiments, out-of-focus and motion blurs are rendered on the test images to simulate harsh conditions.
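A sketch of the three-stage decision-level fusion, assuming z-score normalization, a simple confidence-margin validation, and a sum rule; the paper's actual validation test and fusion rule may differ.

```python
# Hedged sketch: score normalization, validation, and combination.
import numpy as np

def fuse_scores(score_matrix, margin=0.5):
    # score_matrix: shape (n_frames, n_classes), higher = better match.
    s = np.asarray(score_matrix, dtype=float)
    s = (s - s.mean(axis=1, keepdims=True)) / (s.std(axis=1, keepdims=True) + 1e-9)
    top2 = np.sort(s, axis=1)[:, -2:]            # two best classes per frame
    valid = (top2[:, 1] - top2[:, 0]) >= margin  # keep confident frames only
    if not valid.any():
        valid[:] = True                          # fall back to all frames
    fused = s[valid].sum(axis=0)                 # sum-rule combination
    return int(fused.argmax()), fused
```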
In this paper, we propose an analytic sequential method for detecting port-scan attackers, which routinely perform random “portscans” of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. We have developed explicit formulae for quick determination of the parameters of the new detection algorithm.
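A sketch of the generic sequential likelihood-ratio test such detectors build on; the outcome probabilities and error targets below are placeholders rather than the paper's explicit parameter formulae.

```python
# Hedged sketch: sequential likelihood-ratio random walk over connection outcomes.
import math

P_FAIL_BENIGN, P_FAIL_SCANNER = 0.2, 0.8      # assumed outcome models
ALPHA, BETA = 1e-5, 0.01                      # false alarm / miss targets
UPPER = math.log((1 - BETA) / ALPHA)          # declare "scanner"
LOWER = math.log(BETA / (1 - ALPHA))          # declare "benign"

def classify(outcomes):
    """outcomes: iterable of booleans, True = failed connection attempt."""
    llr = 0.0
    for failed in outcomes:
        if failed:
            llr += math.log(P_FAIL_SCANNER / P_FAIL_BENIGN)
        else:
            llr += math.log((1 - P_FAIL_SCANNER) / (1 - P_FAIL_BENIGN))
        if llr >= UPPER:
            return "scanner"
        if llr <= LOWER:
            return "benign"
    return "undecided"
```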
Video compression and encryption are essential for secure real-time video transmission. Applying both techniques simultaneously is challenging when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats reference and non-reference video frames in two different ways. The encryption algorithm uses an A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform to each individual frame. Experimental results show that the proposed algorithms offer high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational cost.
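The chaotic component can be illustrated with a logistic-map keystream XOR-combined with the data to protect; the coupling with the A5 cipher and the choice of which parameters are "significant" follow the paper and are not reproduced here.

```python
# Hedged sketch: logistic-map keystream generation and XOR encryption.
import numpy as np

def logistic_keystream(n_bytes, x0=0.6137, r=3.9999, burn_in=1000):
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF      # quantize the chaotic state to a byte
    return out

def xor_encrypt(data_bytes, x0, r=3.9999):
    ks = logistic_keystream(len(data_bytes), x0=x0, r=r)
    return np.frombuffer(data_bytes, dtype=np.uint8) ^ ks
```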
In the world of computer and network security, there are myriad ways to launch an attack, which, from the perspective of a network, can usually be defined as "traffic that has huge malicious intent." A firewall acts as one measure to secure a device against unauthorized incoming data. There are countless computer attacks that no firewall can prevent, such as those executed locally on the machine by a malicious user. From the network's perspective, there are numerous types of attack. All attacks that degrade the effectiveness of data can be grouped into two types: brute force and precision. Juniper firewalls have the capability to protect against both types of attack. Denial of Service (DoS) attacks are among the most well-known network security threats in the brute-force category, largely because of the high-profile way in which they can affect networks. Over the years, some of the largest, most respected Internet sites have been effectively taken offline by DoS attacks. A DoS attack typically has a singular focus: to make the services running on a particular host or network unavailable. Some DoS attacks exploit vulnerabilities in an operating system and cause it to crash, such as the infamous WinNuke attack. Others flood a network or device with traffic so that no resources remain to handle legitimate traffic. Precision attacks typically involve multiple phases, from reconnaissance to machine ownership, and often require more thought than brute-force attacks. Before a precision attack is launched, information about the victim must be gathered, typically through various types of scans to determine available hosts, networks, and ports. The hosts available on a network can be determined by ping sweeps, and the open ports on a machine can be located by port scans. Screens cover a wide variety of attack traffic and are configured on a per-zone basis; depending on the type of screen being configured, there may be additional settings beyond simply blocking the traffic. Attack prevention is also a native function of any firewall. A Juniper firewall handles traffic on a per-flow basis, so flows or sessions can be used to determine whether traffic attempting to traverse the firewall is legitimate. The state-checking components of a Juniper firewall are controlled by configuring "flow" settings, which allow state checking to be configured for various conditions on the device. Flow settings can be used to protect against TCP hijacking and, more generally, to ensure that the firewall performs full state processing when desired. We take a case study of an attack on a network and study the detection of the malicious packets on a NetScreen firewall. A new solution for securing enterprise networks is developed here.
As speech-based operation becomes a main hands-free interaction solution between humans and mobile devices (e.g., smartphones, Google Glass), privacy-preserving speaker verification is receiving much attention. Privacy-preserving speaker verification can be achieved in many different ways, such as fuzzy vaults and encryption. Encryption-based solutions are promising because cryptography rests on solid mathematical foundations and its security properties can be analyzed in a well-established framework. Most current asymmetric encryption schemes work on finite algebraic structures, such as finite groups and finite fields. However, the encryption scheme for privacy-preserving speaker verification must handle floating-point numbers, and this gap must be filled to make the overall scheme practical. In this paper, we propose a number system that meets the requirements of both speaker verification and the encryption scheme used in the process. It also supports the additive homomorphic property of Paillier's encryption, which is crucial for privacy-preserving speaker verification. As asymmetric encryption is expensive, we propose a method of packing several numbers into one plaintext, which greatly reduces the computation overhead. To evaluate the performance of this method, we implement Paillier's encryption scheme over the proposed number system together with the packing technique. Our findings show that the proposed solution fills the gap between speaker verification and the encryption scheme very well, and that the packing technique improves the overall performance. Furthermore, our solution is a building block of encryption-based privacy-preserving speaker verification in which neither privacy protection nor accuracy is affected.
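A sketch of the two ideas combined, assuming the third-party python-paillier (phe) package: a fixed-point mapping from floats to integers, and packing several integers into one plaintext so that encrypted sums need a single homomorphic addition. The slot width and scale are assumptions and must leave headroom for carries.

```python
# Hedged sketch: fixed-point encoding plus plaintext packing under Paillier.
from phe import paillier

SCALE = 1 << 16          # fixed-point scale for the float -> int mapping
SLOT_BITS = 48           # per-number slot, wide enough to absorb sums

def pack(values):
    packed = 0
    for i, v in enumerate(values):
        fixed = int(round(v * SCALE))
        assert 0 <= fixed < (1 << SLOT_BITS)
        packed |= fixed << (i * SLOT_BITS)
    return packed

def unpack(packed, count):
    mask = (1 << SLOT_BITS) - 1
    return [((packed >> (i * SLOT_BITS)) & mask) / SCALE for i in range(count)]

pub, priv = paillier.generate_paillier_keypair(n_length=2048)
a, b = [0.25, 1.5, 3.0], [0.75, 0.5, 1.0]
c = pub.encrypt(pack(a)) + pub.encrypt(pack(b))   # one homomorphic addition
print(unpack(priv.decrypt(c), 3))                 # -> [1.0, 2.0, 4.0]
```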
Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems
used for this purpose are based on parabolic mirror or fisheye lens where distortion due to the nature of the optical elements
cannot be avoided. Moreover, in such systems, the image resolution is limited to a single image sensor’s image resolution.
Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented.
This approach features a novel solution for constructing a spherically arranged wide FOV plenoptic imaging system where
the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera
designs is provided. New results for a very high-resolution visible-spectrum imaging and recording system inspired by
the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording
omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time
video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next
generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently
under development. The capabilities of GigaEye-1 open the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very high-resolution depth-map estimation, and high dynamic-range imaging, which go beyond standard stitching and panorama generation methods.
Image or video inpainting is the process, or art, of restoring missing portions of an image without introducing artifacts that would be noticeable to an ordinary observer. An image or video can be damaged by a variety of factors, such as deterioration due to scratches, laser dazzling effects, wear and tear, dust spots, or loss of data when transmitted through a channel. Applications of inpainting include image restoration (removing laser dazzling effects, dust spots, date, text, time, etc.), image synthesis (texture synthesis), completing panoramas, image coding, wireless transmission (recovery of missing blocks), digital culture protection, image de-noising, fingerprint recognition, and film special effects and production. Most inpainting methods can be classified into two key groups: global and local methods. Global methods are used for generating large image regions from samples, while local methods are used for filling in small image gaps. Each method has its own advantages and limitations; for example, global inpainting methods perform well at retrieving textured image regions, whereas classical local methods perform poorly. In addition, some of the techniques are computationally intensive, exceeding the capabilities of most currently used mobile devices. In general, existing inpainting algorithms are not suitable for the wireless environment.
This paper presents a new and efficient scheme that combines the advantages of both local and global methods into a
single algorithm. In particular, it introduces a blind inpainting model that solves the above problems by adaptively selecting the support area for the inpainting scheme. The proposed method is applied to various challenging image restoration tasks,
including recovering old photos, recovering missing data on real and synthetic images, and recovering the specular
reflections in endoscopic images. A number of computer simulations demonstrate the effectiveness of our scheme and
also illustrate the main properties and implementation steps of the presented algorithm. Furthermore, the simulation
results show that the presented method is among the state-of-the-art and compares favorably against many available
methods in the wireless environment. Robustness in the wireless environment with respect to the shape of the manually
selected “marked” region is also illustrated. Currently, we are working on the expansion of this work to video and 3-D
data.
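For comparison purposes only, the sketch below runs a standard local inpainting baseline (OpenCV's Telea diffusion method) of the kind the paper contrasts with global, exemplar-driven approaches; it is not the proposed blind scheme.

```python
# Hedged sketch: a conventional local inpainting baseline.
import cv2

def local_inpaint(bgr, damaged_mask, radius=3):
    # damaged_mask: 8-bit single-channel, non-zero where pixels are missing.
    return cv2.inpaint(bgr, damaged_mask, radius, cv2.INPAINT_TELEA)
```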
In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We send the raw captured image data to the host computer over a WiFi wireless link and then use GPU hardware and CUDA programming to implement real-time three-dimensional stereo imaging by synthesizing the depth of a region of interest (ROI). We also investigate the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to verify the ROI emphasis effect.
Security and surveillance videos, because they are captured in open environments, are often subject to low-resolution, underexposed, and overexposed conditions that reduce the amount of useful detail available in the collected images.
We propose an approach to improve the image quality of low resolution images captured in extreme lighting conditions
to obtain useful details for various security applications. This technique is composed of a combination of a nonlinear
intensity enhancement process and a single image super resolution process that will provide higher resolution and better
visibility. The nonlinear intensity enhancement process consists of dynamic range compression, contrast enhancement,
and color restoration processes. The dynamic range compression is performed by a locally tuned inverse sine nonlinear
function to provide various nonlinear curves based on neighborhood information. A contrast enhancement technique is
used to obtain sufficient contrast and a nonlinear color restoration process is used to restore color from the enhanced
intensity image. The single image super resolution process is performed in the phase space, and consists of defining
neighborhood characteristics of each pixel to estimate the interpolated pixels in the high resolution image. The
combination of these approaches shows promising experimental results that indicate an improvement in visibility and an
increase in usable details. In addition, the process is demonstrated to improve tracking applications. A quantitative
evaluation is performed to show an increase in image features from Harris corner detection and improved statistics of
visual representation. A quantitative evaluation is also performed on Kalman tracking results.
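The first stage can be illustrated with a locally tuned nonlinear mapping whose curve depends on the Gaussian-blurred neighborhood mean; the exact inverse-sine formulation of the paper is not reproduced, and the tuning rule below is an illustrative stand-in with the same structure.

```python
# Hedged sketch: locally tuned nonlinear dynamic range compression.
import numpy as np
import cv2

def local_tone_map(gray, sigma=15):
    x = gray.astype(np.float32) / 255.0
    local_mean = cv2.GaussianBlur(x, (0, 0), sigma)
    # Darker neighborhoods get a stronger (more compressive) curve (assumption).
    strength = 1.0 - 0.6 * local_mean
    y = (2.0 / np.pi) * np.arcsin(np.clip(x, 0, 1) ** strength)
    return np.clip(y * 255.0, 0, 255).astype(np.uint8)
```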
This paper concerns available steganographic techniques that can be used for sending hidden data through a public network. Typically, in steganographic communication it is advised to use a popular, frequently used method for sending hidden data, and the amount of that data should be as high as possible. We follow this principle by choosing the Domain Name System (DNS), a vital protocol of every network, and Distributed Denial of Service (DDoS) attacks, currently among the most popular network attacks in the world. Apart from characterizing existing steganographic methods, we provide new insights by presenting two new techniques. The first is a network steganography solution that exploits free or unused protocol fields; this approach is known for the IP, UDP, and TCP protocols, but has never been applied to DNS, a fundamental part of network communications. The second exploits the DNS amplification DDoS attack to seamlessly send data through a public network. The paper includes a calculation estimating the total amount of data that can be covertly transferred by using these techniques, regardless of steganalysis.
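The general field-stuffing idea (not the paper's specific DNS scheme) can be sketched by carrying two covert bytes in the 16-bit transaction ID of a hand-built DNS query; the header layout follows RFC 1035, and the choice of field and encoding is illustrative only.

```python
# Hedged sketch: covert bits carried in a DNS query header field.
import struct

def dns_query_with_covert_id(covert_two_bytes: bytes, qname="example.com"):
    assert len(covert_two_bytes) == 2
    txid = struct.unpack(">H", covert_two_bytes)[0]   # covert payload as the ID
    flags, qdcount = 0x0100, 1                        # standard recursive query
    header = struct.pack(">HHHHHH", txid, flags, qdcount, 0, 0, 0)
    question = b"".join(bytes([len(p)]) + p.encode() for p in qname.split(".")) + b"\x00"
    question += struct.pack(">HH", 1, 1)              # QTYPE=A, QCLASS=IN
    return header + question

packet = dns_query_with_covert_id(b"Hi")
# Sending such queries over UDP port 53 would leak 16 bits per query.
```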
This paper focuses on a machine learning approach to objective inpainting quality assessment. Inpainting has received a lot of attention in recent years, and quality assessment is an important task for evaluating different image reconstruction approaches. Quantitative metrics for successful image inpainting currently do not exist; researchers instead rely on qualitative human comparisons to evaluate their methodologies and techniques. We present an approach to objective inpainting quality assessment based on natural image statistics and machine learning techniques. Our method is based on the observation that when images are properly normalized or transformed into a transform domain, local descriptors can be modeled by parametric distributions whose shapes differ between non-inpainted and inpainted images. This approach yields a feature vector strongly correlated with subjective image perception by the human visual system. We then use a support vector regression, trained on human-assessed images, to predict the perceived quality of inpainted images. We demonstrate that our predicted quality value consistently correlates with qualitative opinions in a human observer study.
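The learning step can be sketched with a support vector regressor mapping feature vectors to subjective scores; the natural-image-statistics features themselves are not reproduced, so the feature matrix and opinion scores below are placeholders.

```python
# Hedged sketch: SVR mapping placeholder feature vectors to quality scores.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 36))        # hypothetical NSS feature vectors
y = rng.uniform(1, 5, size=200)       # hypothetical mean opinion scores

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:150], y[:150])
predicted_quality = model.predict(X[150:])   # perceived quality of unseen inpaintings
```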
Color image quality measures have been used for many computer vision tasks. In practical applications, the no-reference
(NR) measures are desirable because reference images are not always accessible. However, only limited success has
been achieved. Most existing NR quality assessments require that the type of image distortion be known a priori. In this
paper, three NR color image attributes: colorfulness, sharpness and contrast are quantified by new metrics. Using these
metrics, a new Color Quality Measure (CQM), which is based on the linear combination of these three color image
attributes, is presented. We evaluated the performance of several state-of-the-art no-reference measures for comparison
purposes. Experimental results demonstrate that the CQM correlates well with evaluations obtained from human observers and that it operates in real time. The results also show that the presented CQM outperforms previous works with respect to
ranking image quality among images containing the same or different contents. Finally, the performance of CQM is
independent of distortion types, which is demonstrated in the experimental results.
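The structure of such a measure can be sketched as follows; the individual metrics (e.g., the Hasler-Süsstrunk colorfulness) and the weights are common stand-ins, not the paper's new formulations, and the attributes would normally be normalized before combination.

```python
# Hedged sketch: linear combination of three no-reference color image attributes.
import numpy as np
import cv2

def colorfulness(bgr):
    b, g, r = cv2.split(bgr.astype(np.float32))
    rg, yb = r - g, 0.5 * (r + g) - b
    return float(np.sqrt(rg.std() ** 2 + yb.std() ** 2) +
                 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

def sharpness(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def contrast(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return float(gray.std())

def cqm_like(bgr, w=(0.4, 0.3, 0.3)):
    # Weights are placeholders; the three attributes should be rescaled to
    # comparable ranges before a meaningful linear combination.
    return w[0] * colorfulness(bgr) + w[1] * sharpness(bgr) + w[2] * contrast(bgr)
```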
This paper presents the IVAS system developed within the EU FP7 INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information; it is part of the Seventh Framework Programme of the European Union. We contribute to the INDECT portal and the Interactive Video Audio System (IVAS). The IVAS system provides a communication gateway between police officers working in a dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice, or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can view pictures or videos sent by the commander and respond to commands via text or multimedia messages captured with their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open-source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.
This paper proposes a novel texture descriptor based on indices of degrees of local approximating polynomials. An input image is divided into non-overlapping patches, which are reshaped into one-dimensional source vectors. These vectors are approximated using local polynomial functions of various degrees. For each element of the source vector, these approximations are ranked according to the difference between the original and approximated values. The set of indices of polynomial degrees forms a local feature. This procedure is repeated for every pixel. Finally, the proposed texture descriptor is obtained from the frequency histogram of all local features. A nearest neighbor classifier with a distance metric is used to evaluate the performance of the introduced descriptor on the following datasets: Brodatz, KTH-TIPS, KTH-TIPS2b, UCLA, and Columbia-Utrecht (CUReT), in comparison with different methods of texture analysis and classification. A proper parameter setup for the proposed texture descriptor is discussed. The results of this comparison demonstrate that the proposed method is competitive with recent statistical approaches such as local binary patterns (LBP), local ternary patterns, completed LBP, Weber's local descriptor, and the VZ algorithms (VZ-MR8 and VZ-Joint). At the same time, on the KTH-TIPS2b and KTH-TIPS datasets, the proposed method is slightly inferior to some of the state-of-the-art methods.
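A sketch of the descriptor construction as described, under the simplifying assumption that the local feature is the best-fitting degree per element of each non-overlapping patch; the exact feature encoding of the paper may differ.

```python
# Hedged sketch: histogram of polynomial-degree index patterns over patches.
import numpy as np

def poly_degree_descriptor(gray, patch=3, degrees=(1, 2, 3)):
    h, w = gray.shape
    hist = np.zeros(len(degrees) ** (patch * patch))
    x = np.arange(patch * patch)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            v = gray[i:i + patch, j:j + patch].astype(float).ravel()
            # Approximation error of each polynomial degree, per element.
            errs = np.stack([np.abs(v - np.polyval(np.polyfit(x, v, d), x))
                             for d in degrees])
            best = errs.argmin(axis=0)                # winning degree per element
            code = int(np.dot(best, len(degrees) ** np.arange(best.size)))
            hist[code] += 1                           # frequency of the index pattern
    return hist / (hist.sum() + 1e-9)
```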
Image alignment and mosaicing are usually performed on a set of overlapping images, using features in the area of overlap for seamless stitching. In many cases such images have different sizes and shapes, so the resulting panoramas must be cropped or extended by image extrapolation. This paper focuses on a novel image inpainting method based on a modified exemplar-based technique. The basic idea is to find an example (patch) in the image using local binary patterns and to replace the missing (‘lost’) data with it. We propose using multiple criteria for the patch-similarity search, since existing exemplar-based methods often produce unsatisfactory results in practice. The best-match search criterion combines several terms, including the Euclidean metric for pixel brightness and the chi-squared histogram matching distance for local binary patterns. The combined use of textural and geometric characteristics together with color information yields a more informative description of the patches. In particular, we show how to apply this strategy to image extrapolation for photo stitching. The examples considered in this paper show the effectiveness of the proposed approach on several test images.
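The combined similarity criterion can be sketched directly from the description: a Euclidean term on pixel brightness plus a chi-squared distance between LBP histograms; the mixing weight and LBP settings are assumptions.

```python
# Hedged sketch: combined brightness/texture patch distance for exemplar search.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(patch, P=8, R=1):
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    h, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2))
    return h.astype(np.float64) / (h.sum() + 1e-9)

def chi2_dist(h1, h2):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-9))

def patch_distance(known_patch, candidate_patch, w_texture=0.5):
    d_intensity = np.linalg.norm(known_patch.astype(float) - candidate_patch.astype(float))
    d_intensity /= known_patch.size                       # per-pixel normalization
    d_texture = chi2_dist(lbp_hist(known_patch), lbp_hist(candidate_patch))
    return (1 - w_texture) * d_intensity + w_texture * d_texture

# The best exemplar is the candidate minimizing patch_distance over the known
# region; its pixels then fill the missing ('lost') area.
```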