The skin prick test is a commonly used method for diagnosing allergic diseases (e.g., pollen allergy, food allergy) in allergy clinics. The test provokes erythema and a wheal on the skin where it is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection in these images is an important step toward precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region through detection of calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. We finally perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments, performed on images acquired from 36 different patients, demonstrate the effectiveness of the proposed method for wheal detection in skin prick test images captured in an uncontrolled environment.
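The enhancement pipeline described above (RGB to YCbCr, discard luminance, PCA over Cb/Cr) can be sketched as follows; the BT.601 conversion constants, the rescaling step, and all function names are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def enhance_wheal_contrast(rgb):
    """Sketch of the enhancement step: convert RGB to YCbCr, discard
    luminance, then project the (Cb, Cr) chrominance pair onto its
    first principal component to obtain one contrast-enhanced channel."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # ITU-R BT.601 chrominance (assumed constants); luminance Y is dropped
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # PCA over the two chrominance channels (2 x N sample matrix)
    x = np.stack([cb.ravel(), cr.ravel()])
    x -= x.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(x))
    pc1 = eigvecs[:, np.argmax(eigvals)]      # dominant variance direction
    enhanced = (pc1 @ x).reshape(cb.shape)
    # rescale to [0, 1] before thresholding / morphology
    return (enhanced - enhanced.min()) / (np.ptp(enhanced) + 1e-12)
```

The morphological detection stage that follows in the paper is omitted here.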
Video surveillance is used extensively in intelligent transportation systems to enforce laws, collect tolls, and regulate traffic flow. Benefits to society include reduced fuel consumption and emissions, improved safety, and reduced traffic congestion. The video cameras installed at traffic lights, highways, toll booths, etc., continuously capture video and hence generate a vast amount of data that are stored in large databases. The captured video is typically compressed before being transmitted and/or stored. While all the archived information is present in the compressed video, most current applications operate on uncompressed video. Our aim is to improve the efficiency of processing by utilizing features of the compression process and the compressed video stream. The key methods employed involve intelligent selection of reference frames (I-frames) and exploitation of the compression motion vectors. Although specific applications in the transportation imaging domain are presented, the methods proposed here can generally impact the ability to mine vast amounts of video data for usable information in many diverse settings. Applications presented include rapid search for target vehicles (Amber Alert, Silver Alert, stolen car, etc.), vehicle counting, stop sign/light enforcement, and vehicle speed estimation.
Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smartphone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method runs in real time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions, including sunny, cloudy, and rainy.
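As an illustration of the background-subtraction component mentioned above, a minimal running-average model might look like the following; the learning rate, threshold, and selective-update rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class BackgroundModel:
    """Minimal running-average background subtractor, a stand-in for
    the background-subtraction component of an occupancy system."""
    def __init__(self, alpha=0.05, thresh=30.0):
        self.alpha = alpha      # background learning rate (assumed)
        self.thresh = thresh    # foreground decision threshold (assumed)
        self.bg = None

    def apply(self, gray_frame):
        f = gray_frame.astype(np.float64)
        if self.bg is None:
            self.bg = f.copy()
        fg = np.abs(f - self.bg) > self.thresh       # foreground mask
        # update the background only where no foreground is detected,
        # so parked vehicles are not absorbed into the model too quickly
        self.bg = np.where(fg, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * f)
        return fg
```

A real system would combine this with shadow suppression and a trained vehicle detector to handle the illumination and occlusion challenges listed above.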
Authentication of content in printed images poses a challenge that cannot be addressed by conventional digital signature schemes because under the analog transport provided by the printing channel the verifier does not have access to the original digital content in pristine form. We present a method for cryptography-based authentication of the content in printed images that also provides the capability for identifying localized changes made by informed malicious attackers—key functionality that is missing in print-scan robust hashes that have traditionally been used for print content authentication. The proposed method operates by embedding, within the printed image, an authentication signature that consists of an encrypted thumbnail of the image using a high capacity data hiding method for halftone images. To authenticate the content, the embedded signature is extracted from a scan of the printed image and, after decryption, compared with the printed content. An implementation of the method that incorporates human or automated verification and identifies potential local tampering by informed malicious attackers is developed and successfully demonstrated.
Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs, and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be time-critical, even a life-or-death matter. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. While compressing a video sequence, the proposed algorithm selects reference frames (i.e., I-frames) based on whether a vehicle has been detected at a specified position within the monitored scene. A search for a specific vehicle in the compressed video stream is performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic volume conditions.
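The reference-frame selection idea can be sketched in a few lines; `vehicle_at_trigger`, `matches_target`, and the GOP-length cap are hypothetical placeholders for the paper's detection, matching, and codec components:

```python
def compress_with_event_iframes(frames, vehicle_at_trigger, gop_max=30):
    """Toy model of event-driven reference-frame selection: a frame
    becomes an I-frame when a vehicle is detected at the trigger
    position (or when the GOP length limit is reached). Returns the
    I-frame indices recorded during compression."""
    iframes, since_last = [], gop_max
    for idx, frame in enumerate(frames):
        if vehicle_at_trigger(frame) or since_last >= gop_max:
            iframes.append(idx)
            since_last = 0
        else:
            since_last += 1
    return iframes

def search_iframes(frames, iframes, matches_target):
    """Search for the target vehicle across reference frames only,
    avoiding decompression of the full sequence."""
    return [i for i in iframes if matches_target(frames[i])]
```

Because every vehicle event coincides with an I-frame, the search never needs to reconstruct predicted frames, which is the source of the reported search-space reduction.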
Two-dimensional barcodes are widely used for encoding data in printed documents. In a number of applications,
the visual appearance of the barcode constitutes a fundamental restriction. In this paper, we propose high
capacity color image barcodes that encode data in an image while preserving its basic appearance. Our method
aims at high embedding rates and sacrifices image fidelity in favor of embedding robustness in regions where
these two goals conflict with each other. The method operates by utilizing cyan, magenta, and yellow printing
channels with elongated dots whose orientations are modulated in order to encode the data. At the receiver, by
using the complementary sensor channels to estimate the colorant channels, data is extracted in each individual
colorant channel. In order to recover from errors introduced in the channel, error correction coding is employed.
Our simulation and experimental results indicate that the proposed method can achieve high encoding rates
while preserving the appearance of the base image.
Moiré in color printing is an undesirable visible artifact that can arise from overlaying multiple halftone color separations. Halftone geometric configurations designed to avoid moiré in the overlays require that individual halftone color separations possess a low degree of relative distortion. However, optical and mechanical errors of multiple imaging systems within a printer usually produce differences between the color planes in the trajectory and placement of the exposure spots. We study color halftone moiré due to these optical and mechanical errors for otherwise moiré-free halftone configurations. Distortions due to commonly used imaging systems in xerography (i.e., raster output scanners and image bars) are categorized into two classes that depend on the direction of the displacement errors [i.e., process direction distortions (such as shear, bow, and skew) and cross-process direction distortions (such as scanline magnification, magnification imbalance, and high-order scanline distortions)]. Using frequency vector representation of color halftones, we derive analytical expressions for acceptability bounds on these distortions. We evaluate the analytical expressions for a classical halftone screen configuration and a minimum rosette geometry to enable specification allocations for different imaging components in the design of an imaging system.
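For the classical configuration referenced here (equal-frequency screens at 15°, 45°, and 75°), the moiré-free property has a compact frequency-vector statement; the notation below is a standard textbook form chosen by us, not necessarily the paper's:

```latex
% f_theta = f(\cos\theta, \sin\theta): fundamental frequency vector of the
% screen at angle theta; f_theta^perp = f(-\sin\theta, \cos\theta) is its
% second (orthogonal) fundamental. For equal frequency f at 15, 45, 75 deg:
\mathbf{f}_{15^\circ} - \mathbf{f}_{75^\circ} + \mathbf{f}_{45^\circ}^{\perp} = \mathbf{0}
% The dominant three-color moire component falls exactly at zero frequency
% (forming the rosette); any relative distortion of one separation perturbs
% this sum away from zero and yields a visible low-frequency moire.
```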
Individual halftone color separations must possess a low degree of distortion to avoid undesirable moiré in the
overlays that produce the process colors. Achieving low relative distortion requires precise registration between
the exposure devices used to write the halftone separations. However, optical and mechanical errors within the
multiple Raster Output Scanners (ROSs) or image bars of a printer result in differences in the trajectory and
placement of the exposure spots among color planes. In this paper, color halftone moiré due to ROS errors is
analyzed using a frequency vector representation of color halftones. We analyze three forms of process-direction
distortion: skew, shear, and bow. Each distortion is motivated by a practical printing system (i.e., while shear
and bow are observed in ROS systems, skew is observed in image bar imaging systems). The frequency vector
formalism is used to derive bounds on distortion for a classical halftone screen configuration (square cell equal
frequency halftones at 15°, 45°, and 75°). The bounds are examined for distortion of one halftone screen and the
analysis can be readily applied to distortion of multiple screens. The bounds can be used to develop specifications
for imaging components in the design of a ROS or image bar imaging system.
Barcodes are widely utilized for embedding data in printed format to provide automated identification and
tracking capabilities in a number of applications. In these applications, it is desirable to maximize the number
of bits embedded per unit print area in order to either reduce the area requirements of the barcodes or to
offer an increased payload, which in turn enlarges the class of applications for these barcodes. In this paper,
we present a new high capacity color barcode. Our method operates by embedding independent data in two
different printer colorant channels via halftone-dot orientation modulation. In the print, the dots of the two
colorants occupy the same spatial region. At the detector, however, by using the complementary sensor channels
to estimate the colorant channels we can recover the data in each individual colorant channel. The method
therefore (approximately) doubles the capacity of encoding methods based on a single colorant channel and
provides an embedding rate that is higher than other known barcode alternatives. The effectiveness of the
proposed technique is demonstrated by experiments conducted on Xerographic printers. Data embedded at
a high density by using the cyan and yellow colorant channels for halftone dot orientation modulation is
successfully recovered by using the red and blue channels for detection, with a small overall symbol error
rate.
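The complementary-channel detection step can be illustrated with a small sketch; the idealized linear relationship between colorants and sensor channels is an assumption (real scans require calibration and descreening):

```python
import numpy as np

def estimate_colorant_channels(scan_rgb):
    """Estimate the cyan and yellow colorant channels from a scanned
    RGB image via the complementary sensor channels: cyan absorbs red
    light, so the red channel images the cyan dots; yellow absorbs
    blue light, so the blue channel images the yellow dots."""
    scan = scan_rgb.astype(np.float64) / 255.0
    cyan_est = 1.0 - scan[..., 0]    # dark in red channel -> cyan dot
    yellow_est = 1.0 - scan[..., 2]  # dark in blue channel -> yellow dot
    return cyan_est, yellow_est
```

Because each colorant is (approximately) invisible in the other's complementary channel, data in the two overlapping colorant planes can be decoded independently, which is what allows the capacity to roughly double.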
The principal challenge in hardcopy data hiding is achieving robustness to the print-scan process. Conventional
robust hiding schemes are not well-suited because they do not adapt to the print-scan distortion channel, and hence are fundamentally limited in a detection-theoretic sense. We consider data embedding in images printed with clustered-dot halftones. The input to the print-scan channel in this scenario is a binary halftone image, and hence the distortions are also intimately tied to the nature of the halftoning algorithm employed. We propose a new framework for hardcopy data hiding based on halftone dot orientation modulation. We develop analytic halftone threshold functions that generate elliptically shaped halftone dots in any desired orientation. Our hiding strategy then embeds a binary symbol as a particular choice of the orientation. The orientation is identified at the decoder via statistically motivated moments following appropriate global and local synchronization to address the geometric distortion introduced by the print-scan channel. A probabilistic model of the print-scan process, which conditions received moments on input orientation, allows for Maximum Likelihood (ML) optimal decoding. Our method bears similarities to the paradigms of informed coding and quantization index modulation (QIM), but also makes departures from classical results in that constant and smooth image areas are better suited for embedding via our scheme as opposed to busy or "high entropy" regions. Data extraction is performed automatically from a scanned hardcopy, and results indicate a significantly higher embedding rate than existing methods, a majority of which rely on visual or manual detection.
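A moment-based orientation estimate of the kind described above can be sketched as follows; this computes only the principal-axis angle of a binarized dot and omits the synchronization and ML decoding stages (function name and binarization are our assumptions):

```python
import numpy as np

def dot_orientation(cell):
    """Estimate the orientation of an elongated halftone dot from the
    second-order central moments of a binarized halftone cell.
    Returns the principal-axis angle in degrees."""
    ys, xs = np.nonzero(cell)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).sum(), (y * y).sum(), (x * y).sum()
    # principal-axis angle of the dot's covariance ellipse
    return np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))
```

In a full decoder, the estimated angle would be quantized to the nearest orientation in the signaling set and mapped back to the embedded binary symbol.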
Spread spectrum (SS) modulation is utilized in many watermarking applications because it offers exceptional
robustness against several attacks. The embedding rate-distortion performance of SS embedding, however, is
relatively weak compared to quantization index modulation (QIM). This limits the relative embedding rate of
SS watermarks. In this paper, we illustrate that both the embedding efficiency (i.e., bits embedded per unit
distortion) and the robustness against additive white Gaussian noise (AWGN) can be improved by pre-coding
the message, followed by constellation adjustment at the SS detector to minimize the distortion introduced on
the cover image by the coded data. Our pre-coding method encodes p bits as a 2^p x 1 binary vector with a
single nonzero entry whose index indicates the value of the embedded bits. Our analysis shows that the method
improves the embedding rate by approximately p/4 without increasing embedding distortion or sacrificing
robustness to AWGN attacks. Experimental evaluation of the method using a set-theoretic embedding framework
for the watermark
insertion validates our analysis.
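The pre-coding step admits a short sketch: p bits select the single nonzero index of a 2^p-element vector, and the detector inverts this by an argmax decision. The SS embedding and constellation adjustment themselves are not shown, and the function names are ours:

```python
import numpy as np

def precode(bits):
    """Encode p bits as a 2^p x 1 one-hot vector whose nonzero index
    is the integer value of the bit string."""
    p = len(bits)
    index = int("".join(str(b) for b in bits), 2)
    v = np.zeros(2 ** p, dtype=int)
    v[index] = 1
    return v

def decode(vector, p):
    """Invert the pre-coding: the argmax index gives back the p bits.
    With a real SS detector, 'vector' would be the correlator output
    and argmax acts as the maximum-likelihood index decision."""
    index = int(np.argmax(vector))
    return [int(c) for c in format(index, f"0{p}b")]
```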