With the tremendous growth in the use of digital images nowadays, the integrity and authenticity of digital content are
becoming increasingly important, and a growing concern to many government and commercial sectors. Image Forensics,
based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data
associated with Digital Watermarking.
Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data,
and has since been applied to Accounting Forensics for detecting fraudulent income tax returns. More recently,
Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. proposed a
Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our
previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown
JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000, a relatively new
image compression standard, offers higher compression rates and better image quality compared with JPEG.
In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image
forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial
results indicate that the 1st digit probabilities of DWT coefficients follow Benford's Law. The unknown JPEG2000
compression rate of an image can also be derived and verified with the help of a divergence factor, which measures the
deviation between the observed probabilities and Benford's Law.
Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than
DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images compressed at a rate of 0.1 is 0.0108,
which is much higher than that of the uncompressed DWT coefficients. This result clearly indicates the presence of compression in
the image. Moreover, we compare the results of 1st digit probability and divergence among JPEG2000 compression rates
at 0.1, 0.3, 0.5 and 0.9. The initial results show that the expected difference among them could be used for further
analysis to estimate the unknown JPEG2000 compression rates.
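The first-digit analysis above can be sketched in code. The snippet below computes the empirical first-digit distribution of a set of transform coefficients and a chi-square-style deviation from Benford's Law; the paper does not reproduce its exact divergence formula here, so this particular form of the divergence is an assumption for illustration only.

```python
import math

# Benford's Law: P(d) = log10(1 + 1/d) for first digits d = 1..9
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """First significant digit of a nonzero number (via scientific
    notation, avoiding repeated floating-point multiplications)."""
    return int(f"{abs(x):e}"[0])

def first_digit_distribution(coeffs):
    """Empirical first-digit probabilities of the nonzero coefficients."""
    counts = {d: 0 for d in range(1, 10)}
    n = 0
    for c in coeffs:
        if c != 0:
            counts[first_digit(c)] += 1
            n += 1
    return {d: counts[d] / n for d in counts}

def divergence(observed):
    """Chi-square-style deviation from Benford's Law (assumed form;
    the paper's exact divergence definition may differ)."""
    return sum((observed[d] - BENFORD[d]) ** 2 / BENFORD[d]
               for d in range(1, 10))
```

A distribution that exactly follows Benford's Law yields a divergence of zero; compression perturbs the coefficient statistics and drives the divergence up, which is what the figures quoted above exploit.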
We propose a new method for tamper localization and restoration using noise pixels in binary document images. For such images, it is difficult to find a sufficient number of low-distortion pixels in individual blocks while retaining the blind detection property. Also, a perceptual watermark cannot be embedded in the white regions of a document image, leaving such regions insecure against hostile attacks. An erasable watermark is therefore embedded in each block of the document image independently. The embedding process introduces some background noise; however, the content of the document can still be interpreted by the user, because human vision has the inherent capability to recognize patterns in the presence of noise. If the content of each block is verified as authentic, an exact copy of the original image is restored at the blind detector for further use and analysis. Experimental results show that an erasable watermark of the necessary data length can be embedded in individual blocks to attain effective localization and restoration capability. Using the proposed method, it is possible to restore the original text sequence in text document images after multiple alterations such as text deletion, insertion, substitution, and block swapping.
A hybrid encryption and decryption technique for optical information security is proposed. In this method, an iterative Fourier transform algorithm is employed to optimize the encrypted hologram and the decryption key as binary phase-only diffractive optical elements, which were fabricated by electron-beam lithography. In a simple optical setup, optical decryption is implemented by superimposing the encrypted hologram and the decryption key. Numerical simulation and optical experiment confirm that the proposed technique offers a simple and easily implemented approach to optical decryption.
In this paper, we propose a digital watermarking algorithm based on the Slant transform for the copyright protection of images. Our earlier research on the fast Hadamard transform for robust watermark embedding and retrieval of images and characters suggests that this transform could also provide a good “hidden” space for digital watermarking. The Slant transform has many properties similar to those of the Walsh-Hadamard transform. In terms of transform coding, the Slant transform is considered a sub-optimal orthogonal transform for energy compaction. For digital watermarking, this energy spread becomes a significant advantage, as there is a good spread of middle to higher frequencies with significant energies for robust information hiding. In this paper, an analytical comparative study of the performance of the Slant transform, adapting our earlier watermarking schemes for the fast Hadamard transform, will be performed based on its robustness against various Stirmark attacks. The performance results of the Slant transform for image watermarking against other transforms, such as the Cosine transform, will also be presented.
In this paper, a character-embedded watermarking algorithm is proposed for the copyright protection of satellite images, based on the Fast Hadamard transform (FHT). By using a private-key watermarking scheme, the watermark can be retrieved without the original image. To increase the invisibility of the watermark, a visual model based on original image characteristics, such as edges and textures, is incorporated to determine the watermarking strength factor. This factor determines the strength of the watermark bits embedded according to the region complexity of the image: detailed or coarse areas are assigned greater strength, and smooth areas less.
Error correction coding is also used to increase the reliability of the information bits. A post-processing technique based on log-polar mapping is incorporated to enhance the robustness against geometric distortion attacks. Experiments showed that the proposed watermarking scheme was able to survive more than 70% of attacks from a common benchmarking tool called Stirmark, and about 90% against Checkmark non-geometric attacks. These attacks were performed on a number of SPOT images of size 512×512×8bit embedded with 32 characters.
The proposed FHT algorithm also has the advantages of easy software and hardware implementation and speed, compared with other orthogonal transforms such as the Cosine, Fourier, and wavelet transforms.
In this paper, we propose a robust image-in-image watermarking algorithm based on the fast Hadamard transform (FHT) for the copyright protection of digital images. Most current research uses a normally distributed random vector as a watermark, which can only be detected by cross-correlating the received coefficients with the watermark generated from a secret key and then comparing the result against an experimentally determined threshold. In contrast, the FHT image-in-image method involves a "blind" watermarking process that retrieves the watermark without the original image being present.
In the proposed approach, a number of pseudorandomly selected 8×8 sub-blocks of the original image and a watermark image are decomposed into Hadamard coefficients. To increase the invisibility of the watermark, a visual model based on original image characteristics, such as edges and textures, is incorporated to determine the watermarking strength factor. All the AC Hadamard coefficients of the watermark image are scaled by the watermarking strength factor and inserted into several middle- and high-frequency AC components of the Hadamard coefficients from the sub-blocks of the original image. To further increase the reliability of the watermarking against common geometric distortions, such as rotation and scaling, a post-processing technique is proposed. Identifying the type of distortion provides a means to apply a reversal of the attack on the watermarked image, enabling restoration of the synchronization of the embedding positions.
The performance of the proposed algorithm is evaluated using Stirmark. The experiment uses a container image of size 512×512×8 bits and a watermark image of size 64×64×8 bits. The watermark survives about 60% of all Stirmark attacks. The simplicity of the Hadamard transform offers significant advantages in shorter processing time and easier hardware implementation over the commonly used DCT and DWT techniques.
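As a minimal illustration of why the Hadamard transform is cheap, the sketch below implements an unnormalized fast Walsh-Hadamard transform with the standard butterfly structure (additions and subtractions only, no multiplications), plus a separable 2-D version for 8×8 blocks. The embedding, visual-model, and synchronization steps of the scheme itself are not reproduced; this shows only the transform the scheme is built on.

```python
import numpy as np

def fht(x):
    """Unnormalized fast Walsh-Hadamard transform of a length-2^n vector.
    The butterfly uses only additions and subtractions, which is the
    source of the speed advantage over DCT/DWT mentioned above."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def fht2(block):
    """Separable 2-D Hadamard transform of an 8x8 block: transform the
    rows, then the columns. Coefficient (0, 0) is the DC term; the
    remaining AC coefficients are the candidates for embedding."""
    rows = np.array([fht(r) for r in np.asarray(block, dtype=float)])
    return np.array([fht(c) for c in rows.T]).T
```

Because the unnormalized Hadamard matrix satisfies H·H = N·I, applying `fht` twice returns the input scaled by its length, which also makes the inverse transform trivial.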
The automatic storage and extraction of high-level information characteristics of 3D entities is important for a Geoinformation Visualization system. To address this problem, an effort has been made to develop a 3D visualization system integrated with a database, with a focus on depicting large information spaces. Instead of using separate serial files, we explore the issues in integrating a database with the visualization system, making the database the kernel of the system, specialized in the storage and management of all types of data. Although a database management system does not have the analytical and visualization capabilities of the visualization system, it plays an integral part in our visualization system due to its data management capabilities, and helps to provide the user with a single data model.
This novel feature-based method reduces the computational overhead without compromising the matching accuracy of satellite images. It incorporates the bi-orthogonal wavelet filter using B-splines designed by Yu and Ho. The bi-orthogonal wavelet filter is used to perform multi-resolution edge extraction and multi-resolution matching. Edges are matched using adaptive matching windows that vary their shapes according to the directions of the edges. An adaptive searching range is applied because the searching range of each edge point may differ. Moreover, the matched results at low resolution levels are utilized for interpolating mismatched pixels at high resolution levels. A detailed comparison with another recent feature-based algorithm on SPOT and aerial stereo images was performed. The results show that the proposed algorithm is computationally more efficient while achieving improved overall matching accuracy.
A novel multi-resolution hybrid matching method based on wavelets to improve the stereo matching accuracy of satellite images is presented. It is a feature-based system. Wavelets are used to perform multi-resolution edge extraction and multi-resolution matching. Edge pixels are matched using adaptive matching windows that vary their shapes according to the directions of the edges. Unlike conventional matching methods, an adaptive searching range is applied, so that the searching range of each edge point may differ. The matched results at low resolution levels are utilized for interpolating mismatched pixels at high resolution levels.
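To make the matching step concrete, the sketch below scores candidate correspondences for one edge pixel using normalized cross-correlation along the same scan line. It is a deliberate simplification: the method described above uses windows whose shape adapts to the local edge direction and a per-point search range, whereas this illustration uses a fixed square window and a fixed range.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def match_edge_point(left, right, y, x, half=3, search=(-8, 8)):
    """Find the disparity for the edge pixel (y, x) of `left` by scanning
    the same row of `right`. Fixed window and fixed search range here;
    the scheme above adapts both per edge point."""
    win = left[y - half:y + half + 1, x - half:x + half + 1]
    best_score, best_d = -2.0, 0
    for d in range(search[0], search[1] + 1):
        xs = x + d
        cand = right[y - half:y + half + 1, xs - half:xs + half + 1]
        if cand.shape != win.shape:
            continue  # candidate window falls off the image border
        s = ncc(win, cand)
        if s > best_score:
            best_score, best_d = s, d
    return best_d, best_score
```

In the multi-resolution scheme, this per-point search would be seeded from the disparity found at the coarser level, which is what keeps the adaptive ranges small.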
A phase-shifting Twyman-Green interferometer has been constructed. Using three consecutively captured interferograms, the phase profile of a reflective surface can be determined. Results using various fringe-processing techniques are compared; these methods include uniform averaging, a Gaussian mask, and spin filtering. For simulated fringes superimposed with random noise and fixed-pattern noise, it has been observed that a combination of weighted averaging and spin filtering generates the best results. The computerized system has been applied to the measurement of the form errors of a silicon wafer and a cosmetic mirror. The root-mean-square error of the wafer is determined to be 11.13 nm.
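The three-frame phase recovery can be sketched as follows. The abstract does not state the phase-step size, so this example assumes the common three-step algorithm with shifts of -2π/3, 0, and +2π/3, which recovers the wrapped phase directly from the three intensities.

```python
import math

def phase_from_three_steps(i1, i2, i3):
    """Wrapped phase (radians, in (-pi, pi]) from three interferogram
    intensities I_k = A + B*cos(phi + d_k), assuming phase shifts
    d = -2*pi/3, 0, +2*pi/3. NOTE: the step size is an assumption;
    other three-step variants (e.g. pi/2 steps) use a different formula."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Using `atan2` rather than `atan` resolves the quadrant of the wrapped phase; unwrapping across the surface and converting phase to height are separate steps not shown here.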
We present a practical approach for detecting and localizing clouds in satellite remote-sensing images. Cloud detection is useful in improving the accuracy of land-cover classification when clouds are present in the images. After detecting and removing clouds, we can selectively merge classification results from two temporally separate images of the same area to minimize the cloud effect. We emphasize the ease of implementation of the algorithm so that practitioners can easily adapt the method for their own use.