Low-frequency watermarks and watermarks generated using spread-spectrum techniques have complementary robustness properties. In this paper, we combine both watermarking paradigms to design an oblivious watermark capable of surviving an extremely wide range of severe image distortions. An image is first watermarked with a low-frequency pattern, and a spread-spectrum signal is then added to the watermarked image. Since the two watermarks are embedded in different portions of the frequency space, they do not interfere. For the low-frequency watermark, we modify the NEC scheme so that the original image is not needed for watermark extraction. The image is first normalized, and the watermark is embedded into the lowest-frequency discrete cosine modes by encoding a binary pattern using a special quantization-like function. The low-frequency watermark is combined with a spread-spectrum signal added to the middle frequencies of a DCT. The resulting doubly watermarked image is extremely robust to a very wide range of quite severe image distortions, including low-pass filtering, pixel permutations, JPEG compression, noise addition, and nonlinear deformations of the signal such as gamma correction, histogram manipulations, and dithering.
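The abstract does not specify the quantization-like function, but the oblivious-extraction idea can be illustrated with a minimal parity-quantization sketch: a DCT coefficient is moved to the nearest multiple of a step whose parity encodes the watermark bit, so extraction needs only the step size, not the original image. The function names and the step value are illustrative assumptions, not the paper's scheme.

```python
def embed_bit(coeff, bit, step=8.0):
    # Hypothetical quantization-like embedding: move the DCT coefficient to the
    # nearest multiple of `step` whose parity (even/odd) encodes the watermark bit.
    q = round(coeff / step)
    if q % 2 != bit:
        q += 1 if coeff / step >= q else -1
    return q * step

def extract_bit(coeff, step=8.0):
    # Oblivious extraction: only the quantizer step is needed, not the original image.
    return int(round(coeff / step)) % 2
```

Because each embedded coefficient sits at the center of a quantization cell, any distortion smaller than half the step leaves the extracted bit unchanged, which is the source of the robustness claimed for the low-frequency watermark.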
This paper presents the results of applying the minimax eigenvalue decomposition (MED), a morphology-type transform that hides data within digital images, as part of a flexible, computationally robust algorithm. This new algorithm provides a general method for hiding information within an image, although its strength lies in authentication. Authentication is the establishment of ownership of digital information, and is a type of watermarking. While no self-authenticating techniques are currently known, the algorithm presented here provides a certain level of self-authentication regardless of the particular information embedded in the data. The algorithm is applied to ten different images acquired over the Internet, three of which are included in this document. Information in the form of a binary bit stream is inserted into each image's data. A measure is created to determine how close an image containing message data is to its original image. A visual comparison is also performed. Keys, i.e., information separate from the message data that is generated by the embedding technique, are used to establish the authenticity of the image data. This differs from most current steganography techniques, which rely on embedded-data integrity to establish authenticity. An analysis of the results is presented.
Notwithstanding the high availability of image content, multimedia products offer very low-quality pictures, primarily because effective systems for the copyright protection of multimedia works are unavailable. A digital image can, in fact, be easily reproduced, yielding as many identical copies as desired, with no way to prevent it. To get around this problem, beyond scrambling the data to be protected by means of cryptographic techniques, a code carrying information about intellectual property rights (IPR) can be invisibly embedded into the data, so as to provide a means to control its distribution. This is the aim of image watermarking techniques. In this paper, general issues related to copyright protection of digital data, as well as some items concerning the embedding of a watermark in the frequency domain, are discussed. Results are also presented showing the robustness of the proposed algorithms.
Developers of the UF Distributed Conferencing System, version 2, have proposed a protocol that involves the distribution of a session key by a trusted server to a principal with whom it will communicate. The protocol differs from standard key-exchange protocols in that it attempts to address the secure delivery of a private key as well as of the desired session key. Because of the complexity of asymmetric keys, it is necessary for human principals to store their private keys until needed. The DCS v.2 protocol assumes that principals encrypt their private keys and store them on a network. When they seek to communicate with other principals, they must request that the appropriate encrypted key be issued to them by the server; this key will in turn be used to decrypt the session key. This paper presents the protocol and analyzes it using BAN logic.
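The retrieval flow described above can be sketched abstractly. This is not the DCS v.2 message format: the real protocol uses asymmetric keys, whereas this sketch substitutes a toy SHA-256 counter-mode XOR stream for all encryption purely to show the two-stage unwrapping (password recovers the private key, private key recovers the session key). All function names are illustrative assumptions.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy symmetric cipher: SHA-256 in counter mode as an XOR keystream.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    # XOR encryption/decryption (the same operation both ways).
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def retrieve_session_key(stored_encrypted_priv, password, encrypted_session):
    # Stage 1: decrypt the private key fetched from network storage.
    priv = xor(stored_encrypted_priv, password)
    # Stage 2: use the recovered private key to decrypt the session key.
    return xor(encrypted_session, priv)
```

The point the sketch makes is structural: compromise of either the stored encrypted private key alone or the encrypted session key alone reveals nothing, which is the property the BAN-logic analysis must establish.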
Symmetric code cryptosystems are particularly suitable when large data sets have to be transmitted or stored at high rates. Existing algebraic-code-based cryptosystems are time consuming, and their encoding/decoding speed depends on the key length. They were mainly designed for communications and magnetic media, which have an expected lifetime of at most a few years. For transmission there is virtually no lifetime requirement on the transferred information, since the data vanishes by itself. Data stored on CD or DVD, however, cannot be changed, and must remain secure for the lifetime of the sensitive/classified information or of the media, which might extend to 100 years or more. This paper proposes a private-key cryptosystem based on algebraic coding theory, using a simple code such as a Reed-Solomon code. This new approach is ideal for use in optical media applications: it integrates with the CD and DVD encoding system, and increases the power of the error correction as a by-product. The system appears to be secure and very efficient because of the low overhead for encryption and decryption.
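The code-based private-key paradigm can be illustrated with a deliberately simplified sketch: a trivial repetition code stands in for the Reed-Solomon code, and the private key is a secret permutation plus a secret error pattern (the general McEliece-style idea, not the paper's construction). Majority decoding at the receiver also corrects residual channel errors, which mirrors the "error correction as a by-product" claim.

```python
import random

REP = 3  # repetition-code length (illustrative stand-in for Reed-Solomon)

def keygen(n_bits, seed=0):
    # Private key: a secret position permutation and a secret error pattern.
    rng = random.Random(seed)
    n = n_bits * REP
    perm = list(range(n))
    rng.shuffle(perm)
    errors = set(rng.sample(range(n), n // 4))
    return perm, errors

def encrypt(bits, key):
    perm, errors = key
    code = [b for b in bits for _ in range(REP)]            # repetition encode
    permuted = [code[perm[i]] for i in range(len(code))]    # secret scramble
    return [b ^ (i in errors) for i, b in enumerate(permuted)]  # add secret errors

def decrypt(ct, key):
    perm, errors = key
    permuted = [b ^ (i in errors) for i, b in enumerate(ct)]  # remove errors
    code = [0] * len(ct)
    for i, v in enumerate(permuted):
        code[perm[i]] = v                                     # invert permutation
    # Majority decode; genuine channel errors are corrected here as a by-product.
    return [int(sum(code[REP * j:REP * j + REP]) >= 2) for j in range(len(ct) // REP)]
```

Encryption and decryption are linear-time passes over the data, which is the kind of low per-symbol overhead the abstract claims for optical-media throughput.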
The emergence of fast, embeddable parallel processors such as SIMD meshes and networked multiprocessors has motivated increased parallel-algorithm development for image and signal processing (ISP) and automated target recognition (ATR). Among such applications are real-time video compression for Internet communication, videotelephony, and videoteleconferencing. In general, image or signal compression transforms tend to be attractive candidates for parallel implementation. For example, due to a rectangular, non-overlapping partition structure, block-oriented transforms such as JPEG can be processed in pipeline fashion. In contrast, implementational challenges arise from the between-block data and control dependencies encountered in various pyramid-structured or hierarchical compression transforms such as wavelet-based coding. This paper summarizes ongoing research in the mapping of image compression transforms to SIMD-parallel computers. Three classes of algorithms are considered: (1) streaming, (2) block-oriented, and (3) hierarchically structured. It is shown that classes 1 and 2 are suitable for SIMD computation, particularly where mesh segments can be connected to form a pipeline. Computation is facilitated by modifying a SIMD mesh to form a brute-force synchronous MIMD processor, called a multi-SIMD or MSIMD architecture. Several designs for pipelined compression-transform implementation on an MSIMD mesh are analyzed in terms of critical computational complexity and error. The analysis also emphasizes the theory, software, and parallelism required to support resolution of the data and control dependencies encountered in ISP/ATR practice.
This paper presents a low bit-rate video compression algorithm for digital monochrome image-sequence transmission. The presented method is based on the wavelet transform and an original image-segmentation algorithm for intra- and inter-frames. The proposed algorithm divides an image into rectangular blocks, which may overlap. The width and height of these blocks are set independently and can take optimal values from a preset range. Blocks are filled with the mean value of the corresponding pixels from the original image, and their sizes are increased until the mean-square-error value for the block falls below a preset value. The algorithm is strongly parameterized, so it can be used for compressing the subimages created by the discrete wavelet transform. In the 2D wavelet transform, an image is interpreted as a sum of details that appear at different resolutions. As a result, each subimage produced by the WT can be compressed and transmitted separately to decrease the sensitivity to errors that may occur in the channel. The paper also presents results obtained during off-line image compression. These results show better quality of the restored images compared to the H.261 and H.263 algorithms. The presented algorithm achieves good restored-image quality with a high compression ratio and lends itself to simple hardware implementation.
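The block-growing step can be sketched as follows, assuming the simplest greedy policy: starting from a single pixel, the width and height are grown independently for as long as the block's MSE against its own mean stays below the preset threshold. The growth order and the threshold value are illustrative assumptions.

```python
def block_mse(img, r, c, h, w):
    # Mean and MSE of an h-by-w block with top-left corner (r, c).
    vals = [img[i][j] for i in range(r, r + h) for j in range(c, c + w)]
    mean = sum(vals) / len(vals)
    return mean, sum((v - mean) ** 2 for v in vals) / len(vals)

def grow_block(img, r, c, max_mse):
    # Grow height and width independently while the block MSE stays below
    # the preset threshold; the block is then represented by its mean value.
    h = w = 1
    rows, cols = len(img), len(img[0])
    while True:
        grown = False
        if r + h < rows:
            _, e = block_mse(img, r, c, h + 1, w)
            if e <= max_mse:
                h += 1
                grown = True
        if c + w < cols:
            _, e = block_mse(img, r, c, h, w + 1)
            if e <= max_mse:
                w += 1
                grown = True
        if not grown:
            break
    mean, _ = block_mse(img, r, c, h, w)
    return mean, h, w
```

Because width and height are adjusted independently, smooth elongated regions collapse into single wide or tall blocks, which is what yields the high compression ratios on the low-frequency wavelet subimages.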
We present some initial results of a study on image representation and compression using multi-Gabor representations (MGRs). 2D MGRs are Gabor representations that incorporate a set of nonseparable 2D window waveforms in a basis-like system. The windows used in an MGR can be customized to have different spatial and frequency orientations and different spatial and frequency localizations. MGRs have obvious value for efficient image representation and/or compression, since an image signal typically has a variety of spatial and frequency contents and orientations. An image may also contain textures of different orientations. A multi-Gabor representation that incorporates a set of windows of different localizations and orientations has a built-in adaptive potential, which can provide the most concise representation.
In this paper we propose a novel method for computing JPEG quantization matrices based on a desired mean square error, avoiding the classical trial-and-error procedure. First, we use a relationship between a Laplacian source and its quantization error under uniform quantization to build a model of the uniform-quantization error. Then we apply this model to the coefficients obtained in the JPEG standard once the image to be compressed has been transformed by the discrete cosine transform. This allows us to compress an image with the JPEG standard under a global MSE constraint and a set of local constraints determined by the JPEG standard and visual criteria. Simulations show that our method generates better quantization matrices than the classical method of scaling the JPEG default quantization matrix, at a cost lower than that of the code-decode-and-measure-error procedure.
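The core inversion can be sketched under the standard high-resolution approximation (error variance of a uniform quantizer with step D is about D²/12, which the paper refines with a Laplacian model) and the fact that an orthonormal DCT preserves MSE, so a global MSE budget maps directly to coefficient step sizes. The even budget split below is an illustrative assumption, not the paper's local-constraint allocation.

```python
import math

def step_for_mse(target_mse):
    # High-resolution approximation: uniform quantization with step D
    # produces error variance of roughly D^2 / 12, so D = sqrt(12 * MSE).
    return math.sqrt(12.0 * target_mse)

def quant_matrix_for_mse(target_mse, n=8):
    # The orthonormal DCT preserves MSE (Parseval), so splitting the global
    # budget evenly gives one step size for all n*n coefficients.
    step = max(1, round(step_for_mse(target_mse)))
    return [[step] * n for _ in range(n)]
```

This closed-form mapping is what replaces the trial-and-error loop: the desired distortion determines the matrix directly, instead of repeatedly coding, decoding, and measuring.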
This work deals mainly with the compression of remotely sensed images gathered by a radiometer flown aboard an environmental satellite. Several NOAA/AVHRR images have been used, with a focus on southern Europe and the Mediterranean Basin. A relationship can be found between pixels close to each other; that is, there is a certain degree of correlation among pixels belonging to the same band in a close neighborhood. Based on this observation, a simple compression scheme was used to compress single-band images. The coefficients of this relationship are, in turn, quite probably similar to those calculated on one of the other bands. Based on this second observation, an algorithm was developed that is able to reduce the number of bits per pixel from 16 to 4 in multispectral images. A comparison of the methods' speed and compression ratio was carried out, taking the behavior of two commonly used programs, PKZIP and LHA, as a reference. The presented methods yield results similar to those of the commonly used programs in both speed and compression ratio, and are to be preferred when decompression must be carried out on line, inside a main program, or when a custom-made compression algorithm is needed.
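The 16-to-4 bit reduction via neighbor correlation can be illustrated with a minimal DPCM sketch: each pixel is predicted from its left neighbor and only the quantized residual is stored in 4 bits. The left-neighbor predictor and the residual step size are illustrative assumptions, not the paper's fitted inter-pixel coefficients.

```python
def dpcm_compress(row, bits=4, step=16):
    # Predict each pixel from its left neighbor and quantize the residual to
    # `bits` bits; the predictor tracks the decoder-side reconstruction so
    # quantization error does not accumulate along the row.
    levels = 1 << bits
    prev = row[0]
    codes = []
    for px in row[1:]:
        q = round((px - prev) / step)
        q = max(-(levels // 2), min(levels // 2 - 1, q))  # clamp to 4-bit range
        codes.append(q)
        prev += q * step
    return row[0], codes

def dpcm_decompress(first, codes, step=16):
    row = [first]
    for q in codes:
        row.append(row[-1] + q * step)
    return row
```

Because the encoder predicts from the reconstructed (not original) neighbor, the per-pixel error stays bounded by half the residual step, which is why correlated radiometer data survives the drop from 16 to 4 bits per pixel.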
Global iterated function systems (IFS) represent an object by the union of affine, contractive, transformed copies of the object itself. Evaluation against standard fractal encoding by partitioned IFS shows that global IFS are best suited to objects that are often self-similar, even if not always exactly so. The objects are decomposed into a minimal set of copies by calculating their touching points (TPs), i.e., points on the object boundary that have no neighborhood which can be affinely and expansively mapped to a neighborhood of any other TP. This boundary is computed by a fractal hull. An affine-invariant representation of feature points is mapped to those of the sub-objects to calculate their affine transformations. This technique can be generalized to encode assemblies of arbitrarily colored objects, using extensions of IFS theory. According to Barnsley's collage theorem, objects that are not exactly affinely self-similar can also be encoded. Even for poor approximations by an IFS, compression ratios in the range of the standard encoding methods can be reached; for good approximations, the compression is far better.
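The decoding side of any IFS code is easy to sketch: given the affine maps, the attractor is rendered by the chaos game (repeatedly applying a randomly chosen map). The sketch below uses the classic three-map Sierpinski system as a stand-in; the paper's encoding step, which finds the maps from touching points, is not reproduced here.

```python
import random

def ifs_attractor(maps, iterations=2000, seed=(0.0, 0.0)):
    # Chaos-game rendering: repeatedly apply a randomly chosen affine map
    # (a, b, c, d, e, f) meaning (x, y) -> (a x + b y + e, c x + d y + f).
    rng = random.Random(1)
    x, y = seed
    points = []
    for i in range(iterations):
        a, b, c, d, e, f = rng.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:  # skip the transient before the orbit reaches the attractor
            points.append((x, y))
    return points

# Sierpinski triangle: the union of three half-scale copies of itself.
SIERPINSKI = [(0.5, 0, 0, 0.5, 0.0, 0.0),
              (0.5, 0, 0, 0.5, 0.5, 0.0),
              (0.5, 0, 0, 0.5, 0.25, 0.5)]
```

The compression claim follows from the representation: the entire attractor, at any resolution, is recovered from just the six coefficients of each affine map.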
This work presents algorithmic methods for error reduction and 2D-signal compression in a combined PCM-DM format. Methods are also presented for defining, in real time, the delta modulation (DM) sampling rate and the maximum and minimum DM step sizes, based on an a priori estimate of the maximum derivative. This estimate is proposed to be derived from the spectrum of the previous, known signal realization. Further data compression in real time is achieved with the help of external analysis (EA). EA of pictures is based on the 2D differences along rows or diagonals. This type of data compression relies only on the detection of signal extremes and the distances between them. A universal algorithm for defining rigorous, unrigorous, and unclear extremes, independent of the pixel-digitization method, is presented. Methods of error analysis and error reduction are also described. After compression with EA, the image is reconstructed in real time.
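The DM side of the PCM-DM combination can be shown with a minimal one-bit delta-modulation sketch: the encoder tracks the signal with fixed +/- step moves and emits one bit per sample. The fixed step is an illustrative assumption; the paper's contribution is precisely choosing the step bounds and sampling rate adaptively from the maximum-derivative estimate.

```python
def delta_modulate(signal, step=1.0):
    # 1-bit delta modulation: emit 1 to step up, 0 to step down, tracking
    # the input with a staircase approximation.
    approx = signal[0]
    bits, track = [], [approx]
    for s in signal[1:]:
        bit = 1 if s >= approx else 0
        approx += step if bit else -step
        bits.append(bit)
        track.append(approx)
    return bits, track
```

Slope overload occurs whenever the signal's derivative exceeds step times the sampling rate, which is exactly why the paper bounds the step sizes using an a priori maximum-derivative estimate.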
In this paper we present a method that improves lossy compression of true-color and other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do this, a new fast algorithm for constructing the true KL basis with low memory consumption is suggested, and our recently proposed scheme for finding optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at the usual compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can run on common hardware.
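The decorrelation step rests on standard KL/PCA machinery, which can be sketched in a few lines: form the covariance matrix of the color planes, then find its dominant eigenvector (here by power iteration, an illustrative stand-in for the paper's fast low-memory construction).

```python
def covariance(planes):
    # planes: one equal-length value list per color channel.
    n = len(planes[0])
    means = [sum(p) / n for p in planes]
    centered = [[v - m for v in p] for p, m in zip(planes, means)]
    k = len(planes)
    cov = [[sum(centered[i][t] * centered[j][t] for t in range(n)) / n
            for j in range(k)] for i in range(k)]
    return cov, centered

def principal_axis(cov, iters=200):
    # Power iteration for the dominant eigenvector of the covariance matrix;
    # the full KL basis consists of all eigenvectors, sorted by eigenvalue.
    k = len(cov)
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting the planes onto the eigenvectors concentrates the energy into few KL functions, so most of the bit budget can be spent on one or two functions, which is where the PSNR gain over plane-by-plane JPEG comes from.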
Recent studies have shown that chaotic processes provide an appropriate mechanism for the encryption of sensitive signal information. To this end, a Volterra nonlinear integral equation is used to generate such chaotic processes as solutions of an initial-value problem.
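The chaotic-encryption paradigm can be illustrated with a minimal sketch that substitutes the logistic map for the paper's Volterra-generated process (which is not specified in the abstract): the initial value acts as the secret key, and the chaotic orbit supplies an XOR keystream. This is a toy illustration of the mechanism, not a secure cipher.

```python
def logistic_keystream(x0, n, r=3.99):
    # Logistic map x -> r*x*(1-x) in its chaotic regime; the sensitivity to
    # the initial value x0 is what makes x0 usable as a secret key.
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def xor_crypt(data: bytes, key_x0: float) -> bytes:
    # Same operation encrypts and decrypts (XOR with the chaotic keystream).
    ks = logistic_keystream(key_x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Any chaotic initial-value problem with reproducible trajectories, including the Volterra formulation above, can play the role of `logistic_keystream` in this scheme.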
Interleaved echo-planar imaging (EPI) and spiral imaging are two fast clinical magnetic resonance imaging (MRI) schemes that acquire the k-space signal more efficiently for encoding an object. Since points along their k-space trajectories come from different echo times, they may carry different phase and amplitude errors originating from static magnetic field inhomogeneity and nuclear spin relaxation; to form an image free of artifacts, both phase and amplitude errors need to be compensated properly in the image reconstruction. To address this issue, we have developed a general image-reconstruction technique capable of accomplishing 2D phase correction for the reconstruction of interleaved EPI and spiral data. In this technique we formulate image reconstruction as the problem of finding an optimal solution to a set of linear algebraic equations corresponding to a specific imaging measurement. Furthermore, the phase errors, as well as other known constraints on the image, can be incorporated into these equations. The final solution is obtained by solving the equations via an iterative procedure, free of k-space data gridding. Images with 2D phase correction have been successfully reconstructed from imaging data acquired on a clinical MRI scanner. The significance of this work is that it demonstrates that 2D spatial phase correction can be accomplished for interleaved EPI acquisitions; moreover, the reconstruction method is flexible enough to further improve the resulting image quality.
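Casting reconstruction as a linear system Ax = b and solving iteratively can be shown with a minimal Landweber-iteration sketch (a generic least-squares iteration; the paper's specific solver and the encoding matrix incorporating the phase errors are not given in the abstract).

```python
def landweber(A, b, iters=500, step=0.1):
    # Iterative least-squares solution of A x = b:
    #   x <- x + step * A^T (b - A x)
    # In MRI terms, A encodes the measurement (including known phase errors),
    # b is the acquired k-space data, and x is the image to reconstruct.
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            x[j] += step * sum(A[i][j] * r[i] for i in range(m))
    return x
```

Because the model errors live inside A rather than being fixed afterwards, no k-space gridding step is needed, matching the claim in the abstract; additional image constraints simply become extra rows of the system.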
A novel method for reversible compression of 2D and 3D data is presented. Adaptive spatial prediction is followed by context-based classification with arithmetic coding of the outcome residuals. The prediction of a pixel to be encoded is obtained from the fuzzy switching of a set of linear predictors. The coefficients of each predictor are calculated to minimize the prediction MSE for pixels belonging to a cluster in the hyperspace of gray-level patterns lying on a preset causal neighborhood. In the 3D case, pixels both on the current slice and on previously encoded slices may be used. The size and shape of the causal neighborhood, as well as the number of predictors to be switched, may be chosen before running the algorithm and determine the trade-off between coding performance and computational cost. The method exhibits impressive performance for both 2D and 3D data, mainly thanks to the optimality of the predictors and their ability to fit the data patterns.
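The fuzzy-switching step can be sketched as follows, assuming a standard fuzzy-c-means-style membership rule (inverse-distance weighting to cluster centroids); the paper's exact membership function and predictor-training procedure are not given in the abstract.

```python
def fuzzy_predict(neighborhood, predictors, centroids, m=2.0):
    # Blend a set of linear predictors, weighting each by the fuzzy membership
    # of the causal-neighborhood pattern in that predictor's cluster.
    dists = [sum((a - b) ** 2 for a, b in zip(neighborhood, c)) ** 0.5
             for c in centroids]
    if min(dists) == 0:
        w = [1.0 if d == 0 else 0.0 for d in dists]  # exact cluster match
    else:
        inv = [d ** (-2.0 / (m - 1)) for d in dists]
        s = sum(inv)
        w = [x / s for x in inv]
    preds = [sum(c * v for c, v in zip(p, neighborhood)) for p in predictors]
    return sum(wi * pi for wi, pi in zip(w, preds))
```

Soft switching rather than a hard predictor choice avoids discontinuities in the prediction surface, keeping the residuals small and well-behaved for the context-based arithmetic coder; since the decoder sees the same causal neighborhood, it recomputes identical weights and the scheme stays fully reversible.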
Image compression is increasingly employed to exploit limited communication-channel bandwidth and to maximize storage-system efficiency. Although traditional measures of signal or image quality (e.g., mean-squared error, MSE) are useful in a communication context, few measures in the open literature effectively address implementational issues such as the visual quality of a decompressed image or its suitability for automated target recognition (ATR). For example, reconstruction errors such as blocking effects, aliasing or suppression of high frequencies, poor preservation of image or neighborhood statistics, and quantization-induced perturbation of line segments or arcs can degrade visual image quality. In numerous ATR applications, such error is an important source of ATR-algorithm performance degradation. Unfortunately, the majority of published image quality measures (IQMs) tend to be based on ad hoc or first-order models of human visual system (HVS) function. Such models do not necessarily correlate with the appearance of image details (e.g., higher-order models) or with mathematical descriptions of ATR filters (e.g., non-HVS models). Additionally, published IQMs tend not to address the primary ATR problem of feature visibility or the secondary problem of estimating image or neighborhood error distributions in decompressed imagery. In this paper, a collection of performance measures and IQMs designed specifically for image compression is presented. Beginning with the customary performance measures of MSE and space-time bandwidth product, we progress to spatial measures such as cutoff frequency, the effect of grayscale quantization on variance, measures of texture preservation, and statistics of the modulation transfer function (MTF). Texture is estimated by an area-based variance descriptor derived from the Lorentzian distribution, which has been successfully employed in ATR practice. Measures for the disruption of linear features such as line segments and arcs are based on analysis of the Hough transform and spatial linear regression. Performance analysis of each measure is couched in terms of computational cost, sensitivity to error, and relevance to ATR scenarios. Examples are given for template- and codebook-based transforms such as JPEG, VPIC, EPIC, and EBLAST [2-5].
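The baseline end of the measure hierarchy described above can be sketched concretely: MSE and PSNR as the customary communication-context measures, plus a simple blocking-artifact score that averages graylevel jumps across block boundaries. The blockiness formula is an illustrative example of such a measure, not one of the paper's specific IQMs.

```python
import math

def mse(a, b):
    # Mean-squared error between two flat pixel sequences.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical inputs.
    e = mse(a, b)
    return float('inf') if e == 0 else 10 * math.log10(peak * peak / e)

def blockiness(img, block=8):
    # Mean absolute graylevel jump across vertical block boundaries: a simple
    # score for the blocking artifacts of block-oriented transforms like JPEG.
    rows, cols = len(img), len(img[0])
    jumps = [abs(img[r][c] - img[r][c - 1])
             for r in range(rows) for c in range(block, cols, block)]
    return sum(jumps) / len(jumps) if jumps else 0.0
```

Two decompressed images can have identical MSE yet very different blockiness, which is the gap between communication-context measures and the feature-oriented IQMs the paper argues ATR requires.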
As the development of computational hardware capable of variable precision progresses, the customary requirement of high-precision arithmetic is being relaxed, particularly in automated target recognition (ATR) applications. Such algorithms typically admit noisy imagery that often has, for example, two to three bits of noise per eight-bit pixel. Since the resulting precision p of five or six bits is nonincreasing throughout the course of a computational cascade, it is reasonable to assume that p < 8 bits could suffice for most ATR applications. Practical design constraints, in addition to noise, include limited processor size, weight, and power supply, as well as available computational bandwidth and frame rate. Although limited-precision computation can decrease size and power requirements as well as computational cost, the accrual of computational error can severely compromise the resulting accuracy, leading to a design trade-off between error, computational precision, and speed/power requirements that we call the limited precision problem (LPP). In this paper, a restricted instance of the LPP is analyzed, namely, the effect of reduced precision on image compression transforms. Particular emphasis is placed upon the compounding of representational error in the compression process as computation passes through the various stages of a given algorithm. The analysis emphasizes the effects of noise and computational error on common compression transforms such as visual pattern image coding, vector quantization, and a recently developed algorithm. Tests for preservation of input statistics and minimization of mean-squared error (MSE) indicate that, in eight-bit imagery with two to 2.5 bits of noise, as few as five bits of precision suffice for the aforementioned compression algorithms to retain acceptable visual appearance and MSE for ATR operations.
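The precision-reduction experiment described above can be sketched directly: keep only the top p bits of each eight-bit pixel and measure the resulting MSE. This is a minimal illustration of the test setup, not the paper's full multi-stage error-compounding analysis.

```python
def reduce_precision(x, bits, full_range=256):
    # Keep only the top `bits` bits of an integer in [0, full_range),
    # simulating a limited-precision datapath.
    shift = full_range.bit_length() - 1 - bits
    return (x >> shift) << shift

def precision_mse(pixels, bits):
    # MSE introduced by truncating a pixel sequence to `bits` bits.
    reduced = [reduce_precision(p, bits) for p in pixels]
    return sum((a - b) ** 2 for a, b in zip(pixels, reduced)) / len(pixels)
```

Truncating to five bits discards the three low-order bits, i.e., roughly the same magnitude as the two to three bits of sensor noise already present, which is the intuition behind the paper's finding that five bits of precision suffice.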
A temporal error concealment (TEC) algorithm is proposed, specifically designed for the data-partitioning profile of MPEG-2 and for MPEG-2-compatible scalable schemes. Among the four methods of producing a scalable video bitstream in MPEG-2, the data-partitioning profile is chosen for its simple implementation and high overall video quality due to reduced overhead information. The proposed TEC algorithm, namely MCTEC-DISP, also applies to I-pictures, since it operates on the motion vectors of the current reference P-frame. Under the hypothesis of motion continuity, MCTEC-DISP improves layered MPEG-2 video quality in the presence of packet losses. This is particularly important when layered video is transmitted over dual-priority networks. The algorithm has reduced computational complexity with respect to similar TEC schemes; it is not designed as a post-processing step but operates during the decoding process, enabling real-time decoding.
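The core of any motion-compensated TEC scheme is easy to sketch: a lost block is replaced by the block from the reference frame that the (received or estimated) motion vector points at. This is a generic illustration of motion-compensated concealment, not the specific MCTEC-DISP vector-estimation logic.

```python
def conceal_block(ref_frame, block_pos, motion_vec, size=8):
    # Replace a lost block at block_pos (row, col) with the motion-compensated
    # block from the reference frame; motion_vec is (d_row, d_col).
    r0, c0 = block_pos
    dr, dc = motion_vec
    return [[ref_frame[r0 + dr + i][c0 + dc + j] for j in range(size)]
            for i in range(size)]
```

Since this is a simple indexed copy per lost block, it can run inside the decoding loop rather than as a post-processing pass, matching the real-time claim in the abstract.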
The emergence of distributed, heterogeneous media such as the Internet has established the practical importance of progressive image transmission, in which an image is transmitted in such a way as to admit coarse rendering and recognition at the decoder as early as possible in the bitstream. This paper presents a wavelet-based progressive image transmission algorithm that attempts to achieve several goals not addressed by other image compression algorithms in the literature. First, the algorithm evaluates the trade-off between rate and distortion as a criterion for selecting wavelet coefficients for transmission. The distortion metric is not limited to mean squared error; the algorithm provides a framework for investigating any distortion function, including psychovisual and segmentation-based distortion metrics. Second, it provides a high degree of spatial scalability by sending coarser-resolution information earlier in the bitstream than detail information, and does not waste bits by refining high-frequency subbands early. Finally, the algorithm is computationally asymmetric, pairing a very fast decoder with an encoder that can be as computationally intensive as required. The performance of the algorithm is comparable to that of current coders at low bit rates.
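The rate-distortion selection criterion can be sketched with a greedy ordering: transmit first the coefficients that buy the most distortion reduction per bit, until the bit budget runs out. The squared-coefficient distortion gain below is the MSE instance of the criterion; the abstract's framework allows any distortion function in its place.

```python
def select_coefficients(coeffs, bit_costs, budget):
    # Greedy rate-distortion selection: rank coefficients by distortion
    # reduction per bit (here |c|^2 / cost, i.e., the MSE metric) and send
    # them in that order until the bit budget is exhausted.
    order = sorted(range(len(coeffs)),
                   key=lambda i: coeffs[i] ** 2 / bit_costs[i], reverse=True)
    chosen, used = [], 0
    for i in order:
        if used + bit_costs[i] <= budget:
            chosen.append(i)
            used += bit_costs[i]
    return chosen, used
```

The computational asymmetry noted in the abstract falls out of this design: ranking and budget search happen only at the encoder, while the decoder merely places received coefficients back into the wavelet pyramid.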