This PDF file contains the front matter associated with SPIE Proceedings Volume 6579, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
We consider dense networks of surveillance cameras capturing overlapping images of the same scene from different viewing directions; such a scenario is referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity and low power consumption at the encoder side, and for the exploitation of inter-view correlation without communication among the cameras. We introduce a combination of temporal intra-view side information and homography-based inter-view side information. Simulation results show both an improvement in the side information and a significant gain in coding efficiency.
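Homography-based inter-view side information rests on mapping pixels between camera views through a 3x3 matrix. As a minimal illustration (the matrices below are toy examples, not estimated from real views), applying a homography to a pixel works like this:

```python
# Sketch: projecting a pixel from one camera view into another via a 3x3
# homography H. The matrices are illustrative, not from the paper.

def apply_homography(H, x, y):
    """Map pixel (x, y) through homography H (3x3 nested list)."""
    xh = H[0][0]*x + H[0][1]*y + H[0][2]
    yh = H[1][0]*x + H[1][1]*y + H[1][2]
    w  = H[2][0]*x + H[2][1]*y + H[2][2]
    return xh / w, yh / w   # back to inhomogeneous coordinates

# The identity homography leaves pixels in place.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, 10, 20))  # -> (10.0, 20.0)

# A pure translation by (5, -3):
T = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
print(apply_homography(T, 10, 20))  # -> (15.0, 17.0)
```

In a real multi-view system, H would be estimated from point correspondences between the camera views before being used to warp one decoded view into side information for another.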
In this paper, we present a memory-efficient, contour-based, region-of-interest (ROI) algorithm designed for ultra-low-bit-
rate compression of very large images. The proposed technique is integrated into a user-interactive wavelet-based
image coding system in which multiple ROIs of any shape and size can be selected and coded efficiently. The coding
technique compresses region-of-interest and background (non-ROI) information independently by allocating more bits to
the selected targets and fewer bits to the background data. This allows the user to transmit large images at very low
bandwidths with lossy/lossless ROI coding, while preserving the background content to a certain level for contextual
purposes. Extremely large images (e.g., 65,000 × 65,000 pixels) with multiple large ROIs can be coded with minimal
memory usage by using intelligent ROI tiling techniques. The foreground information at the encoder/decoder is
independently extracted for each tile without adding extra ROI side information to the bit stream. The arbitrary ROI
contour is down-sampled and differential chain coded (DCC) for efficient transmission. ROI wavelet masks for each tile
are generated and processed independently to handle any size image and any shape/size of overlapping ROIs. The
resulting system dramatically reduces the data storage and transmission bandwidth requirements for large digital images
with multiple ROIs.
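The differential chain coding step can be sketched as follows: the contour is first expressed as 8-direction Freeman codes, and then successive code differences (which cluster near zero on smooth contours) are transmitted. This is a generic DCC sketch, not the paper's exact implementation:

```python
# Generic differential chain coding (DCC) of a contour, using the standard
# 8-direction Freeman chain code; function names are my own.

DIRS = {(1,0):0, (1,1):1, (0,1):2, (-1,1):3, (-1,0):4, (-1,-1):5, (0,-1):6, (1,-1):7}

def chain_code(points):
    """Freeman chain code of a contour given as successive 8-connected points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

def differential(codes):
    """First code, then differences between successive codes modulo 8.
    Small values dominate on smooth contours, so they entropy-code cheaply."""
    return [codes[0]] + [(b - a) % 8 for a, b in zip(codes, codes[1:])]

contour = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 2)]
cc = chain_code(contour)     # -> [0, 1, 2, 4]
print(differential(cc))      # -> [0, 1, 1, 2]
```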
This paper presents a software-only, real-time video coder/decoder (codec) with super-resolution-based enhancement
for ultra-low-bit-rate compression. The codec incorporates a modified JPEG2000 core and interframe predictive
coding, and can operate with network bandwidths as low as 500 bits/second. Highly compressed video exhibits
severe coding artifacts that degrade visual quality. To lower the level of noise and retain the sharpness of the video
frames, we build on our previous work in super-resolution-based video enhancement and propose a new version that is
suitable for real-time video coding systems. The adopted super-resolution-based enhancement uses a constrained set
of motion vectors that is computed from the original (uncompressed) video at the encoder. Artificial motion is also
added to the difference frame to maximize the enhancement performance. The encoder can transmit either the full set
of motion vectors or the constrained set of motion vectors depending upon the available bandwidth. At the decoder,
each pixel of the decoded frame is assigned to a motion vector from the constrained motion vector set. L2-norm
minimization super-resolution is then applied to the decoded frame set (previous frame, current frame, and next frame).
A selective motion estimation scheme is proposed to prevent ghosting, which otherwise would result from the super-resolution
enhancement when the motion estimation fails to find appropriate motion vectors. Results using the
proposed system demonstrate significant improvements in the quantitative and visual quality of the coded video
sequences.
While bandwidth and storage capabilities are increasing with advances in technology, so are the demands of users, as the quantities of data to transmit and store grow simultaneously. The need for effective signal compression is ever present: data size must be reduced while keeping signal degradation minimal. This paper presents a new signal compression scheme that uses coordinate logic transforms in combination with Boolean minimized representations. Processing the signal data with coordinate logic transforms reduces unnecessary signal complexity, allowing more effective data reduction during the Boolean-minimized-form encoding. The coordinate logic transforms are an alternative method for computing coordinate logic filters that reduces computational complexity through algorithmic parallelism. In this work, the combination of coordinate logic transforms with a Boolean-minimized-form encoding scheme yields a new compression technique applicable to both binary and grayscale input images.
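As background, a coordinate logic filter combines pixel values bitwise rather than arithmetically. The 1-D, 3-sample sketch below is my own illustration of the basic operation; the paper's transforms generalize and accelerate this idea:

```python
# Illustrative coordinate logic filtering: grayscale samples are combined
# bitwise (AND or OR) over a sliding 3-sample window. Window size and the
# 1-D setting are simplifying choices for illustration.

def cl_filter(signal, op):
    """Apply a coordinate logic AND ('and') or OR ('or') over 3-sample windows."""
    out = []
    for i in range(1, len(signal) - 1):
        a, b, c = signal[i-1], signal[i], signal[i+1]
        out.append(a & b & c if op == 'and' else a | b | c)
    return out

s = [12, 10, 14, 9]           # 1100, 1010, 1110, 1001 in binary
print(cl_filter(s, 'and'))    # -> [8, 8]   (erosion-like effect)
print(cl_filter(s, 'or'))     # -> [14, 15] (dilation-like effect)
```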
In this paper, we propose an intelligent bird information retrieval system that aims to support a mobile learning activity using up-to-date wireless technology. The system consists of a Tablet PC and PDAs with wireless networking capabilities. The PDA is equipped with a friendly retrieval interface and a good learning environment. In our system, users only need to click buttons or enter keywords to retrieve bird information. In addition, users can discuss or share their information and knowledge via the wireless network. Our system stores bird information in five categories: "Introduction," "Images," "Sound," "Streaming Media," and "Ecological Memo." This integrated knowledge helps users understand more about birds. Data mining and fuzzy association rules are applied to recommend birds that users may find interesting. A streaming server on the Tablet PC provides streaming media to PDA users, so PDA users can enjoy multimedia from the Tablet PC in real time without downloading it completely. Finally, the system is well suited to outdoor teaching and can easily be extended to provide navigation and touring services for national parks or museums.
Malicious nodes can seriously impair the performance of wireless ad hoc networks through actions such as packet dropping. Secure routes are shortest paths on which every node on the route is trusted, even if unknown. Secure route discovery requires mechanisms for associating trust with nodes. Most existing secure route discovery mechanisms rely on shared keys and digital signatures. In the absence of central nodes acting as a certification authority, such protocols suffer a heavy computational burden and are vulnerable to malicious attacks. In this paper we review existing techniques for secure routing and propose to complement route finding with credibility scores. Each node maintains a credit list for its neighbors: it monitors its neighbors' packet-delivery behavior, and the credits are regularly reviewed and updated accordingly. Unlike most existing schemes, our work focuses on the post-route-discovery stage, i.e., when packets are transmitted on discovered routes. The level of trust in any route is based on the credits associated with the neighbors belonging to the discovered route. We evaluate the performance of the proposed scheme by modifying our simulation system so that each node keeps a dynamically changing credit list for its neighbors' behavior. We conduct a series of simulations with and without the proposed scheme and compare the results, demonstrating that the proposed mechanism is capable of isolating malicious nodes and thereby counteracting black-hole attacks.
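A per-neighbor credit list of the kind described can be sketched as follows; the exponential update rule and threshold here are my own illustrative choices, not the paper's:

```python
# Minimal sketch of a per-neighbor credit list: credits rise when a neighbor
# forwards packets and fall when it drops them, so routes through low-credit
# nodes (e.g., black holes) can be avoided. Parameters are illustrative.

class CreditList:
    def __init__(self, threshold=0.3):
        self.credits = {}          # neighbor id -> score in [0, 1]
        self.threshold = threshold

    def observe(self, neighbor, forwarded):
        """Exponentially weighted update from one forwarding observation."""
        old = self.credits.get(neighbor, 0.5)   # neutral prior
        self.credits[neighbor] = 0.9 * old + 0.1 * (1.0 if forwarded else 0.0)

    def trusted(self, neighbor):
        return self.credits.get(neighbor, 0.5) >= self.threshold

cl = CreditList()
for _ in range(30):
    cl.observe('n1', True)    # well-behaved neighbor
    cl.observe('n2', False)   # black hole: drops everything
print(cl.trusted('n1'), cl.trusted('n2'))  # -> True False
```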
This paper presents a new method which enhances the quality-of-service (QoS) and hence the response time and
queuing delay of real-time interactive multimedia over the Internet. A service class based on differentiated services
mechanism has been developed. Evaluation of response time under different traffic conditions has been conducted via
simulation. Specifically, the impact on router performance at the boundary of a DS-enabled domain was evaluated
using OPNET and the results are presented. Since audio and video traffic have different needs, priority schemes for
different types of interactive multimedia traffic have been studied to provide control and predictable service, and
therefore, better quality of service.
Cellular communications constitute a significant portion of the global telecommunications market. Therefore, the need for secure communication over a mobile platform has increased exponentially. Steganography, the art of hiding critical data in an innocuous signal, provides an answer to this need. JPEG is one of the most commonly used formats for storing and transmitting images on the web, and pictures captured using mobile cameras are mostly in JPEG format.
In this article, we introduce a switching-theory-based steganographic system for JPEG images that is applicable to both mobile and computer platforms. The proposed algorithm exploits the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a subset of these coefficients, but prove ineffective when employed over all of them. We therefore propose an approach that treats each set of AC coefficients within a different framework, enhancing overall performance. The proposed system offers high capacity and embedding efficiency while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates improved embedding capacity over existing algorithms while maintaining high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
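To illustrate where such a system operates in the JPEG pipeline, the sketch below embeds message bits in the parities of nonzero quantized AC coefficients. This is a deliberately simplified stand-in, not the proposed switching-theory algorithm itself:

```python
# Simplified parity embedding in nonzero quantized AC coefficients of a JPEG
# block; a stand-in illustration, not the paper's scheme.

def embed(ac_coeffs, bits):
    """Force the parity of successive nonzero AC coefficients to the bits."""
    out, it = list(ac_coeffs), iter(bits)
    for i, c in enumerate(out):
        if c == 0:
            continue                      # zeros stay zero (run lengths intact)
        try:
            bit = next(it)
        except StopIteration:
            break
        if abs(c) % 2 != bit:
            out[i] = c + (1 if c > 0 else -1)   # nudge magnitude away from zero
    return out

def extract(ac_coeffs, n):
    return [abs(c) % 2 for c in ac_coeffs if c != 0][:n]

block = [3, 0, -2, 5, 0, 1]          # toy quantized AC coefficients
stego = embed(block, [0, 1, 1])
print(stego)                         # -> [4, 0, -3, 5, 0, 1]
print(extract(stego, 3))             # -> [0, 1, 1]
```

Note the nudge always moves a coefficient away from zero, so no nonzero coefficient is destroyed and the hidden bits survive extraction.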
Modern steganography is the secure communication of information by embedding a secret message within a "cover" digital multimedia file without any perceptual distortion to the cover media, so the presence of the hidden message is
indiscernible. Recently, the Joint Photographic Experts Group (JPEG) format attracted the attention of researchers as the
main steganographic format due to the following reasons: It is the most common format for storing images, JPEG
images are very abundant on the Internet bulletin boards and public Internet sites, and they are almost solely used for
storing natural images. Well-known JPEG steganographic algorithms such as F5 and Model-based Steganography
provide high message capacity with reasonable security.
In this paper, we present a method to increase security using JPEG images as the cover medium. The key element of the
method is a new parametric, key-dependent quantization matrix. This new quantization table has practically the same performance as the standard JPEG table in terms of compression ratio and image statistics. The resulting image is indiscernible
from an image that was created using the JPEG compression algorithm. This paper presents the key-dependent
quantization table algorithm and then analyzes the new table performance.
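One simple way to realize a key-dependent quantization table (my own sketch; the paper's parametric construction may differ) is to perturb the standard JPEG luminance table with small key-seeded offsets:

```python
# Sketch: key-seeded perturbation of the standard JPEG (Annex K) luminance
# quantization table, so the table depends on a secret key while compression
# behavior stays close to baseline JPEG. The spread parameter is my choice.

import random

JPEG_LUMA_Q = [  # standard Annex K luminance table, row-major
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def key_dependent_table(key, spread=1):
    """Offset each entry by a key-seeded value in [-spread, +spread]."""
    rng = random.Random(key)
    return [max(1, q + rng.randint(-spread, spread)) for q in JPEG_LUMA_Q]

t1 = key_dependent_table("secret-key")
t2 = key_dependent_table("secret-key")
print(t1 == t2)                                               # -> True (same key)
print(max(abs(a - b) for a, b in zip(t1, JPEG_LUMA_Q)) <= 1)  # -> True (small drift)
```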
There are several security issues tied to multimedia when implementing the various applications in the cellular phone
and wireless industry. One primary concern is the potential ease of implementing a steganography system. Traditionally,
the only mechanism to embed information into a media file has been with a desktop computer. However, as the cellular
phone and wireless industry matures, it becomes much simpler for the same techniques to be performed using a cell
phone. In this paper, two methods are compared that classify cell phone images as either an anomaly or clean, where a
clean image is one in which no alterations have been made and an anomalous image is one in which information has
been hidden within the image. An image in which information has been hidden is known as a stego image. The main
concern in detecting steganographic content with machine learning using cell phone images is in training specific
embedding procedures to determine if the method has been used to generate a stego image. This leads to a possible flaw
in the system when the learned model of stego is faced with a new stego method that does not match the existing model. The proposed solution to this problem is to develop systems that detect steganography as an anomaly, making the
embedding method irrelevant in detection. Two applicable classification methods for solving the anomaly detection of
steganographic content problem are single class support vector machines (SVM) and Parzen-window. Empirical
comparison of the two approaches shows that Parzen-window outperforms the single class SVM most likely due to the fact that Parzen-window generalizes less.
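A toy 1-D version of the Parzen-window detector conveys the idea: estimate the density of "clean" feature values with a kernel sum, and flag low-density samples as anomalies. The bandwidth and threshold below are arbitrary illustrative choices:

```python
# Toy 1-D Parzen-window (kernel density) anomaly detector, illustrating the
# approach compared in the paper; bandwidth h and the threshold are arbitrary.

import math

def parzen_density(x, train, h=1.0):
    """Average Gaussian kernel density of x under the training samples."""
    k = lambda u: math.exp(-u * u / (2 * h * h)) / (h * math.sqrt(2 * math.pi))
    return sum(k(x - t) for t in train) / len(train)

clean = [0.9, 1.0, 1.1, 1.05, 0.95]             # features of "clean" images
threshold = 0.05
print(parzen_density(1.0, clean) > threshold)   # typical sample -> True
print(parzen_density(9.0, clean) > threshold)   # anomaly -> False
```

Because the density is built only from clean samples, no stego embedding method needs to be modeled; anything far from the clean distribution is flagged.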
One of the great challenges for existing watermarking methods is their limited resistance to extensive geometric attacks. Geometric attacks can be divided into two classes: global distortions, such as rotations and translations, and local distortions, such as the StirMark attack. We have found that the weakness of multiple watermark embedding methods initially designed to resist geometric attacks is their inability to withstand combinations of geometric attacks. In this paper, a gray-scale authentication image is used as the watermark. We propose a robust image watermarking scheme that withstands geometric attacks by using local tri-mesh feature points. Our proposed method can resynchronize the attacked images and is independent of the embedding and authentication processes. The geometric invariant scheme is combined with a complementary modulation embedding strategy to enhance resistance to geometric attacks. The experimental results verify that the proposed scheme is effective against geometric attacks.
In this contribution, we present a novel technique for imperceptible and robust watermarking of digital images. It is based on a second-level decomposition of the host image using the Fibonacci-Haar Transform (FHT) and on the
Singular Value Decomposition (SVD) of the transformed subbands. The main contributions of this approach are
the use of the FHT for hiding purposes, the flexibility in data hiding capacity, and the key-dependent secrecy of
the used transform. The experimental results show the effectiveness of the proposed approach both in perceived
quality of the watermarked image and in robustness against the most common attacks.
In this paper we introduce a new chaotic stream cipher, Mmohocc, which utilizes fundamental characteristics of chaos. The designs of the cipher's major components are given. Its cryptographic properties (period, auto- and cross-correlation, and the mixture of Markov processes and spatiotemporal effects) are investigated. The cipher is resistant to related-key/IV, time/memory/data tradeoff, algebraic, and chosen-text attacks. The keystreams successfully passed two batteries of statistical tests, and the encryption speed is comparable to that of RC4.
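Mmohocc's internals are not reproduced here; as a generic illustration of a chaotic stream cipher, the sketch below derives a keystream from logistic-map iterates and XORs it with the data. All parameters are illustrative:

```python
# Generic chaotic stream cipher sketch (NOT Mmohocc): a keystream is drawn
# from logistic-map iterates x <- r*x*(1-x) and XORed with the plaintext.
# The key is the initial condition x0; r and burn_in are illustrative.

def logistic_keystream(x0, n, r=3.99, burn_in=100):
    """Generate n keystream bytes from logistic-map iterates."""
    x, out = x0, []
    for i in range(burn_in + n):
        x = r * x * (1 - x)
        if i >= burn_in:                 # discard the transient iterates
            out.append(int(x * 256) & 0xFF)
    return out

def xor_cipher(data, key_x0):
    ks = logistic_keystream(key_x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"attack at dawn"
ct = xor_cipher(msg, 0.3141592653)       # encrypt
print(xor_cipher(ct, 0.3141592653))      # XOR is involutive -> b'attack at dawn'
```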
In the last decade, much effort has been devoted to the development of biometrics-based authentication systems. In this paper we propose a signature-based biometric authentication system in which watermarking techniques are used to embed some dynamic signature features in a static representation of the signature itself, stored either in a centralized database or in a smartcard. User authentication can be performed either by using static features extracted from the acquired signature or by using those static features together with the dynamic features embedded in the enrollment stage. A multi-level authentication system, capable of providing various degrees of security, is thus obtained. The proposed watermarking techniques are tailored to images with sharp edges, like a signature picture, in order to obtain a robust embedding method while keeping the original structure of the host signal intact. Experimental results show the two different levels of security that can be reached when either static features alone or both static and dynamic features are employed in the authentication process.
The receiver operating characteristic (ROC) curve is widely used in biometric identification. It is a plot of detection power versus false alarm rate, and it is an objective measure of accuracy. Positive biometric identification is a one-to-many match, and the ROC curve has served as a "golden" criterion for measuring the accuracy of biometric systems in this setting. In this paper, however, we analyze the problems of using the ROC curve as the sole criterion in positive biometric identification. From the viewpoint of detection and estimation theory, the ROC curve accounts only for system variance and cannot detect system bias, which can lead to wrong conclusions when evaluating system accuracy across multiple databases. The ROC curve also does not reflect the cost function, the database size, the image quality, or many other factors that are important to system performance and accuracy. We use iris recognition as an example to discuss these issues and, at the end, discuss some possible solutions to these problems.
In this paper, a novel authentication system combining biometric cryptosystems with digital watermarking is presented. One of the main vulnerabilities of existing data hiding systems is public knowledge of the embedding domain. We propose using biometric data, the fingerprint minutiae set, to generate the encryption key needed to decompose an image with the tree-structured Haar transform. The uniqueness of the biometric key, together with other embedded biometric information, guarantees the authentication of the user. Experimental tests show the effectiveness of the proposed system.
A new, computationally efficient framework for vehicle tracking on a mobile platform is proposed. The principal component of the framework is the log-polar transformation applied to video frames captured from a standard, uniformly sampled camera. The log-polar transformation provides two major benefits for real-time vehicle tracking from a mobile vehicle platform moving along a single- or multi-lane road. First, it significantly reduces the amount of data to be processed, since it collapses the original Cartesian video frames into log-polar images of much smaller dimensions. Second, the log-polar transformation can mitigate perspective distortion thanks to its scale invariance property. This second aspect is of interest for vehicle tracking because the target vehicle's appearance is preserved at all distances from the observer (camera). However, this works only if the center of the log-polar transformation coincides with the vanishing point of the perspective view. Therefore, a road-following algorithm is proposed to keep the center of the log-polar transform on the vanishing point in every video frame, compensating for the carrying vehicle's movements. Since the algorithm is intended for mobile embedded devices, it is developed for both mathematical simplicity and algorithmic efficiency, avoiding computationally expensive mathematical functions. The use of trigonometric and exponential functions is minimized compared to the log-Hough transform traditionally used in log-polar space. The new algorithm focuses on straight radial line fragments, thus shifting its mathematical engine to the domain of linear equations.
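The core mapping, and the scale-invariance property the framework exploits, can be sketched as follows (the clamping constant and coordinate conventions are my own choices):

```python
# Sketch of the Cartesian-to-log-polar mapping: pixels are indexed by
# (log radius, angle) about a center point, which the paper keeps on the
# road's vanishing point. Conventions here are illustrative.

import math

def to_log_polar(x, y, cx, cy, r_min=1.0):
    """Map pixel (x, y) to (log-radius, angle) about center (cx, cy)."""
    dx, dy = x - cx, y - cy
    r = max(math.hypot(dx, dy), r_min)   # clamp to avoid log(0) at the center
    return math.log(r), math.atan2(dy, dx)

# Scale invariance: doubling the distance from the center only *shifts* the
# log-radius coordinate (by log 2), so a receding vehicle keeps its shape.
u1, a1 = to_log_polar(110, 100, 100, 100)   # 10 px from center
u2, a2 = to_log_polar(120, 100, 100, 100)   # 20 px from center
print(abs((u2 - u1) - math.log(2)) < 1e-9, a1 == a2)  # -> True True
```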
This paper presents a method of image enhancement using an adaptive thresholding method based on the human visual system. We utilize a number of different enhancement algorithms applied selectively to the different regions of an image to achieve a better overall enhancement than applying a single technique globally. The presented method is useful for images that contain various regions of improper illumination. It is also practical for correcting shadows. This thresholding system allows various enhancement algorithms to be used on different sections of the image based on the local visual characteristics. It further allows the parameters to be tuned differently for the specific regions, giving a more visually pleasing output image.
We demonstrate the algorithm and present results for several high quality images as well as lower quality images such as those captured using a cell phone camera. We then compare and contrast our method to other state-of-the-art enhancement algorithms.
The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a particular fast wavelet transform (FWT) implementation and of several wavelet filters suited to constrained devices. Such constraints are typically found on mobile (cell) phones and personal digital assistants (PDAs), and can be a combination of limited memory, slow floating-point operations (compared to integer operations, most often due to a lack of hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal from on-board capture sensors.
In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands, and we present experimental results to substantiate these claims. Finally, since this library is intended for real, applied use, we have accounted for several differences among well-known embedded operating system platforms, such as missing common routines or functions and stack limitations. This makes HeatWave suitable for a range of applications and research projects.
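As an example of the integer-only arithmetic such a constrained-device library favors, here is a generic Haar lifting step (the S-transform). This is illustrative, not HeatWave's actual code:

```python
# Integer Haar (S) transform via lifting: only adds, subtracts, and shifts,
# no floating point, and perfectly invertible; a generic sketch.

def haar_forward(signal):
    """One level of the integer Haar transform via lifting (even length)."""
    approx, detail = [], []
    for a, b in zip(signal[::2], signal[1::2]):
        d = b - a                 # predict step: difference
        s = a + (d >> 1)          # update step: integer mean (floor shift)
        approx.append(s)
        detail.append(d)
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for s, d in zip(approx, detail):
        a = s - (d >> 1)          # undo the update step
        out += [a, a + d]         # undo the predict step
    return out

x = [10, 14, 7, 3]
print(haar_forward(x))                       # -> ([12, 5], [4, -4])
print(haar_inverse(*haar_forward(x)) == x)   # -> True (lossless round trip)
```

Because the same floor shift is used in both directions, the round trip is exact even for negative details, which is what makes lifting attractive on devices without floating-point hardware.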
In this paper, a new watermarking scheme for information protection using elemental images from integral imaging is proposed. Elemental images, which have the effect of distributing information about three-dimensional objects, are used as the watermark. Because the elemental-image watermark carries depth information for the embedded patterns, the entire embedded pattern can be reconstructed from partial information. To show the usefulness of the proposed scheme, we carry out preliminary experiments and present the results.
In this paper we propose an Orthogonal Frequency Division Multiplexing Ultra Wide Band (OFDM-UWB) system that introduces encryption, mutual authentication, and data integrity functions at the physical layer without impairing spectral efficiency. Encryption is performed by rotating the constellation employed in each band according to a pseudorandom phase-hopping sequence. Authentication and data integrity, based on an encrypted hash, are directly coupled with Forward Error Correction (FEC). The dependence of the phase-hopping sequence on the transmitted message prevents a sequence recovered through known- or chosen-plaintext attacks from being used to decrypt further messages. Moreover, since the phase-hopping generation keys change very rapidly, they are also difficult for a hypothetical man-in-the-middle to detect. Computer simulations confirm performance superior, even in terms of BER, to a standard PSK-OFDM system, owing to the FEC capabilities of the encrypted hash.
Measurement of image similarity is important for a number of image processing applications. Image
similarity assessment is closely related to image quality assessment in that quality is based on the apparent
differences between a degraded image and the original, unmodified image. Automated evaluation of image
compression systems relies on accurate quality measurement. Current algorithms for measuring similarity
include mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM).
These measures have limitations: they can be inconsistent or inaccurate, and some incur greater computational cost.
In this paper, we show that a modified version of the measurement of enhancement by entropy (EME) can
be used as an image similarity measure, and thus an image quality measure. Until now, EME has generally
been used to measure the level of enhancement obtained using a given enhancement algorithm and
enhancement parameter. The similarity-EME (SEME) is based on the EME for enhancement. We will
compare SEME to existing measures over a set of images subjectively judged by humans. Computer
simulations have demonstrated its promise through a set of examples, as well as comparison to both
subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG.
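The classical EME statistic that SEME builds on can be sketched directly: tile the image into blocks and average the per-block log-contrast 20·log10(Imax/Imin). The block size and the eps guard against division by zero are assumptions here, and the paper's exact SEME formulation may differ from this baseline.

```python
import numpy as np

def eme(img: np.ndarray, block: int = 8, eps: float = 1e-6) -> float:
    """Classical EME sketch: mean block-wise log-contrast (assumed parameters)."""
    h, w = img.shape
    vals = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = img[y:y + block, x:x + block].astype(float)
            vals.append(20.0 * np.log10((tile.max() + eps) / (tile.min() + eps)))
    return float(np.mean(vals))

flat = np.full((32, 32), 100.0)                           # no contrast anywhere
checker = (np.indices((32, 32)).sum(axis=0) % 2) * 255.0  # maximal contrast
print(f"EME(flat) = {eme(flat):.2f}, EME(checker) = {eme(checker):.2f}")
```

A similarity measure can then compare such block-wise statistics between a reference image and a degraded one; pairing the two profiles that way is only this sketch's assumption about how an EME-style statistic becomes a similarity score.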
In recent years there has been an increased interest in audio steganography and watermarking. This is due primarily to
two reasons. First, an acute need to improve our national security capabilities in light of terrorist and criminal activity
has driven new ideas and experimentation. Secondly, the explosive proliferation of digital media has forced the music
industry to rethink how it will protect its intellectual property. Various techniques have been implemented, but the phase domain remains fertile ground for improvement, owing to the relative robustness of phase modifications to many types of distortion and their low perceptibility to the human auditory system. A new method for embedding data in the phase domain of the Discrete
Fourier Transform of an audio signal is proposed. Focus is given to robustness and low perceptibility, while maintaining
a relatively high capacity rate of up to 172 bits/s.
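One common way to realize phase-domain embedding, sketched below under assumed parameters (the paper's frame length, bin choice, and quantization rule are not given here): each bit forces the DFT phase of one carrier bin to +π/2 or −π/2 while keeping its magnitude, so the audible change is small.

```python
import numpy as np

FRAME = 1024                     # assumed frame length in samples
BINS = [40, 41, 42, 43]          # assumed mid-frequency carrier bins

def embed(frame: np.ndarray, payload: list) -> np.ndarray:
    """Set each carrier bin's phase to +/- pi/2 by bit; keep its magnitude."""
    spec = np.fft.rfft(frame)
    for k, bit in zip(BINS, payload):
        mag = np.abs(spec[k])
        spec[k] = mag * np.exp(1j * (np.pi / 2 if bit else -np.pi / 2))
    return np.fft.irfft(spec, n=len(frame))

def extract(frame: np.ndarray) -> list:
    """Read each carrier bin's phase sign back out as a bit."""
    spec = np.fft.rfft(frame)
    return [1 if np.angle(spec[k]) > 0 else 0 for k in BINS]

rng = np.random.default_rng(3)
host = rng.normal(0.0, 1.0, FRAME)       # stand-in for an audio frame
marked = embed(host, [1, 0, 1, 1])
print("extracted:", extract(marked))     # expected: [1, 0, 1, 1]
```

As a capacity sanity check: at a 44.1 kHz sampling rate, 4 bits per 1024-sample frame gives roughly 44100 / 1024 × 4 ≈ 172 bits/s, consistent with the rate quoted above, though the parameters here are only illustrative.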
An empirical method for Canny filter optimization is explored and applied to the problem of measuring rigid motion
between targets for nanometer motion detection. Operating with an image space pixel size of 3 μm, we are able to
obtain static target localization to 6 nm at 2σ variation. To discriminate target roughness from sub-pixel measurement
noise we use a Laplacian filter method. To extend the resolution beyond the limits of a single sub-pixel sample we use
multiple adjacent edge locations along a single target to statistically reduce the overall measurement noise. With sufficient samples we obtain resolving power approaching 0.001 pixels. Even at this resolution we have not reached the limits of sampling that are possible from simultaneously sampling sets of parallel lines, which allows for future refinement of the method to localize well below 0.001 pixels.
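The averaging step admits a quick numerical illustration: with independent per-sample localization noise of standard deviation σ, the mean of N samples has standard error σ/√N. The noise level and sample count below are illustrative assumptions, not the paper's measured values.

```python
import numpy as np

rng = np.random.default_rng(1)
true_edge = 127.300        # true edge position in pixels (assumed)
sigma = 0.05               # per-sample sub-pixel noise, pixels (assumed)
n_samples = 4000           # adjacent edge locations along one straight target

# Each sample is one noisy sub-pixel edge location; the mean pools them.
samples = true_edge + rng.normal(0.0, sigma, n_samples)
estimate = samples.mean()
stderr = sigma / np.sqrt(n_samples)

print(f"estimate error: {abs(estimate - true_edge):.5f} px")
print(f"predicted standard error: {stderr:.5f} px")
```

With these assumed numbers, 0.05/√4000 ≈ 0.0008 px, which shows how thousands of edge samples along a target can push localization toward the 0.001-pixel regime described above.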