We describe a semi-empirical model that predicts the soft error rate of a magnetic recording system with magnetoresistive heads. The model has a small number of input parameters that define the read and write widths of the head as well as the on-track performance and the susceptibility to off-track interference of the read channel. The model has been tested against off-track error rate data from a peak-detect channel, with good agreement, and against another soft error rate model developed by Wiesen et al. The two models agree on the optimal write width for a 4.3 micrometer track pitch, but our model predicts a smaller optimal read width than the Wiesen model. Off-track error rate curves and on-track error rate vs. head width curves from the two models are compared to understand this difference in predicted optimal read width.
Designing read-back subsystems in magnetic recording requires precise knowledge of the signal picked up by the read head. As areal densities in longitudinal magnetic recording increase, the read-back signal becomes more corrupted by intersymbol interference, media noise, and intertrack interference. Because of the spatial distribution of transitions on a disk surface and the nonlinear character of bit interactions, two-dimensional media-plane models are widely used to model the write process. If a two-dimensional (or three-dimensional) read-head model is used, intertrack interference is also captured. Micromagnetic media modeling, coupled with appropriate read-head models, has been used successfully to model the 'raw' magnetic recording channel. However, due to its high computational complexity, micromagnetic modeling is impractical for statistical signal analyses such as error rate studies, where thousands of transitions must be generated. We propose a much simpler, yet realistic, two-dimensional write process model. We call it the triangle zig-zag transition (TZ-ZT) model, since the transition boundary is modeled by the lateral sides of isosceles triangles of alternating orientation truncated on a common base line across the track width. Formulas are presented that relate the parameters of the model, the probability density function of triangle heights and the constant vertex angle, to the magnetization transition profile of an isolated transition and to the cross-track correlation width, respectively. Although stochastic zig-zag models have been proposed in the past, our model has the advantage that it is stable across the track; that is, it is not an independent-increment process and therefore does not exhibit cross-track drift. Compared to micromagnetic modeling, the TZ-ZT model offers computational savings of four orders of magnitude, while transition shapes and media noise are modeled with comparable accuracy, as our results show. For these reasons, the TZ-ZT model, combined with an appropriate head-sensitivity function, is an attractive 'raw channel' model for applications such as statistical performance analyses where large numbers of bits are needed.
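For concreteness, the following minimal sketch generates one TZ-ZT transition boundary. The Rayleigh height distribution, the 60-degree vertex angle, and the geometric bookkeeping are illustrative assumptions, not the paper's calibrated parameters:

```python
import numpy as np

def tzzt_boundary(track_width, vertex_angle, height_sampler, rng):
    """Sample one TZ-ZT transition boundary across the track.

    The boundary is a chain of isosceles triangles of alternating
    orientation truncated on a common base line: each sampled height h
    (down-track) implies a half-base of h * tan(vertex_angle / 2)
    (cross-track), so the constant vertex angle sets the cross-track
    correlation scale. Returns cross-track positions y and down-track
    boundary displacements x(y).
    """
    ys, xs = [0.0], [0.0]
    y, sign = 0.0, 1.0
    while y < track_width:
        h = height_sampler(rng)                     # triangle height
        half_base = h * np.tan(vertex_angle / 2.0)  # cross-track half extent
        ys += [y + half_base, y + 2.0 * half_base]  # apex, then back to base
        xs += [sign * h, 0.0]
        y += 2.0 * half_base
        sign = -sign                                # alternate orientation
    return np.array(ys), np.array(xs)

rng = np.random.default_rng(0)
y, x = tzzt_boundary(track_width=1.0, vertex_angle=np.deg2rad(60.0),
                     height_sampler=lambda r: r.rayleigh(0.02), rng=rng)
```

Because each triangle returns to the common base line, the boundary is not an independent-increment random walk, which is the cross-track stability property noted above.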
Wavelet transform theory, in the context of multiresolution analysis, has brought out useful properties of Hilbert spaces generated by bases consisting of translates of a single function. In particular, biorthogonal wavelet decomposition schemes provide a simple decomposition and reconstruction arrangement by using scaling functions and duals whose translates form biorthogonal bases. Based on this concept, schemes are developed for data transmission over a partial response channel in which one waveform is used to modulate the transmitted data and another, its dual, is used to demodulate the data. The schemes require no precoder and are not restricted to binary data. In one of the schemes, suited to bandlimited channels, it is possible to construct several different waveform pairs at a moderate sacrifice in bandwidth.
A new model, called the MicroTrack Model, has been developed for the signal-dependent transition noise and partial erasure that occur in the readback signal from thin-film magnetic disks. A track is modeled as being made up of a number of 'microtracks.' On each microtrack, the actual position of a magnetization transition is offset by a random amount from the ideal position. This offset is drawn from a cumulative probability distribution derived by scaling the magnetization profile function, so the response of each microtrack is a randomly offset ideal transition response. The partial erasure effect is then easily added by assuming that magnetization transitions on the same microtrack eradicate each other if they are positioned too closely together. The final result is obtained by averaging the outputs of all the microtracks. Three primary effects of media noise (position jitter, width variation, and amplitude degradation of the output pulses) all arise as consequences of the microtrack model, and the probability distributions of these effects as functions of the model parameters are presented. The relative amounts of the various effects are examined, along with the way they vary with the model's parameters. The autocorrelation function of the magnetization for the media noise is examined with and without partial erasure; in this way, partial erasure and its effect on the system can be studied.
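A minimal sketch of the MicroTrack mechanics follows. The tanh magnetization profile (hence the arctanh inverse-CDF sampling) and the Lorentzian transition response are illustrative stand-ins; only the structure of per-microtrack jitter, pairwise partial erasure, and averaging follows the model described above:

```python
import numpy as np

def microtrack_readback(bits, n_micro, bit_len, pw50, erase_gap, rng):
    """MicroTrack-model sketch: jitter each transition per microtrack,
    erase transition pairs that land too close together, then average."""
    t = np.arange(bits.size * bit_len, dtype=float)
    ideal = np.flatnonzero(np.diff(bits)) * bit_len   # transition centers
    out = np.zeros_like(t)
    a = 0.5 * bit_len                                 # profile width parameter
    for _ in range(n_micro):
        # inverse-CDF sampling from CDF(x) = (1 + tanh(x / a)) / 2
        u = rng.uniform(1e-6, 1.0 - 1e-6, size=ideal.size)
        pos = np.sort(ideal + a * np.arctanh(2.0 * u - 1.0))
        keep = np.ones(pos.size, dtype=bool)
        for i in range(pos.size - 1):                 # partial erasure
            if keep[i] and pos[i + 1] - pos[i] < erase_gap:
                keep[i] = keep[i + 1] = False
        sign = 1.0
        for p, k in zip(pos, keep):
            if k:                                     # Lorentzian pulse
                out += sign / (1.0 + ((t - p) / (pw50 / 2.0)) ** 2)
            sign = -sign  # flip for every transition, so erasing a pair
                          # (two flips) keeps the polarity sequence consistent
    return out / n_micro

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=64)
signal = microtrack_readback(bits, n_micro=32, bit_len=10,
                             pw50=25.0, erase_gap=4.0, rng=rng)
```

Averaging over many such microtracks reproduces position jitter, width variation, and amplitude loss qualitatively.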
Medium noise in magnetic recording has been shown to be caused in part by the magnetic microstructure of the medium, which is fixed at the time of manufacture. Systems can be designed to exploit this spatial dependence to increase recording density. Signal precompensation to achieve this is described here using two possible models for medium noise: additive and multiplicative. Based upon recent experimental results, the multiplicative model should be the more realistic of the two, and hence is discussed in more detail. The anticipated increase in performance is quantified in terms of the signals written on the medium and the noise power.
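The difference between the two noise models is easy to state in code; the test waveform and noise level below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2.0 * np.pi * 5.0 * t)        # stand-in readback waveform
n = 0.1 * rng.standard_normal(t.size)    # medium-noise realization

r_add = s + n          # additive model: noise independent of the signal
r_mul = s * (1.0 + n)  # multiplicative model: noise scales with the written
                       # signal, reflecting the medium microstructure
```

Under the multiplicative model, precompensating the written signal reshapes the noise power as well, which is the leverage the paper quantifies.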
This paper considers a model for the magnetic recording channel that includes both signal-dependent and signal-independent noise. Detectors that use the information about the signal contained in the noise are described. An application of one such detector is given for cases where the distribution of the signal-dependent noise has nonzero mean and is a function of previously recorded transitions. The paper also compares the performance of such a detector against that of the conventional Viterbi detector.
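As a sketch of the idea, the branch metric below replaces the Euclidean metric of the standard Viterbi detector with a Gaussian log-likelihood whose mean and variance depend on the hypothesized transition pattern. The per-branch noise parameters would come from measurement or a noise model, and the paper's exact detector may differ:

```python
import numpy as np

def branch_metric(sample, expected, noise_mean, noise_var):
    """Branch metric for signal-dependent Gaussian noise.

    noise_mean and noise_var are tied to the branch's hypothesized
    transition pattern. The conventional Viterbi detector is the special
    case noise_mean = 0 with constant noise_var: the log term then
    cancels across branches, leaving the Euclidean metric.
    """
    resid = sample - expected - noise_mean
    return 0.5 * np.log(2.0 * np.pi * noise_var) + resid ** 2 / (2.0 * noise_var)
```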
The increasing demand for storage density poses new challenges for equalization and detection techniques. In this paper we investigate the application of artificial neural networks to equalization in magnetic recording channels. Two methods are explored: case (1) uses a neural-network-based equalizer to perform equalization/detection bit by bit using multiple samples per bit, and case (2) uses a tapped delay line (TDL) to feed several bits into the network.
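A minimal sketch of case (2): a tapped delay line feeding a one-hidden-layer network trained by gradient descent. The architecture, the toy 1 + 0.5D channel, and the training details are illustrative assumptions, not the networks studied in the paper:

```python
import numpy as np

def tdl_features(samples, n_taps):
    """Tapped-delay-line inputs: each row holds n_taps consecutive
    channel samples feeding the network (case 2 above)."""
    return np.lib.stride_tricks.sliding_window_view(samples, n_taps)

class TinyMLP:
    """One-hidden-layer network with tanh units; an illustrative
    stand-in for the equalizer/detector, not the paper's design."""
    def __init__(self, n_in, n_hidden, rng):
        self.W1 = 0.1 * rng.standard_normal((n_in, n_hidden))
        self.w2 = 0.1 * rng.standard_normal(n_hidden)

    def forward(self, X):
        self.h = np.tanh(X @ self.W1)
        return np.tanh(self.h @ self.w2)    # soft bit decision in [-1, 1]

    def step(self, X, y, lr=0.01):
        out = self.forward(X)
        g = (out - y) * (1 - out ** 2)      # gradient at the output unit
        self.w2 -= lr * self.h.T @ g / len(y)
        gh = np.outer(g, self.w2) * (1 - self.h ** 2)
        self.W1 -= lr * X.T @ gh / len(y)

# usage: equalize a hypothetical 1 + 0.5D channel with a 5-tap TDL
rng = np.random.default_rng(3)
bits = rng.choice([-1.0, 1.0], size=2000)
x = bits + 0.5 * np.roll(bits, 1) + 0.1 * rng.standard_normal(bits.size)
X = tdl_features(x, 5)                      # rows: [x_k, ..., x_{k+4}]
y = bits[2 : 2 + X.shape[0]]                # target aligned to window center
net = TinyMLP(5, 8, rng)
for _ in range(200):
    net.step(X, y)
```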
Multi-channel readback using array heads has been reported in optical recording. A method to reduce interference both along and across the tracks using multi-channel readback is presented. In this method, nonlinear multi-channel decision feedback equalization is used to remove both forms of interference. Simulation results show a good performance improvement from the multi-channel equalization. With this readback method, tracks can be brought closer together, thus increasing the areal density. Another advantage of the method is the high achievable data rate.
Magneto-optic (MO) recording can benefit from partial response (PR) spectral shaping of the readback signal to achieve higher recording densities, for reasons similar to those that make PR signaling apply well to magnetic recording. We compare various PR equalization schemes using the bit error rate (BER) as the basis of comparison. We also compare adaptive partial response maximum-likelihood (PRML) detection schemes in the presence of additive white Gaussian noise (AWGN) and transition noise. Our conclusions are for densities in the range of 50-60 kbpi, for disks with a carrier-to-noise ratio (CNR) of 50 dB and an optical system with a full width at half maximum (FWHM) of 0.68 micrometer. Our results indicate that PR1-equalized signals followed by the Viterbi algorithm have the best performance: they not only perform well in the absence of transition noise, but also do well in the presence of transition noise up to 12 ns.
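As a reference point, here is a minimal Viterbi detector for a PR1 (1 + D) target with binary +/-1 inputs and a known starting bit; the equalizers, densities, and noise models of the actual study are not reproduced:

```python
import numpy as np

def viterbi_pr1(r, a_init=1):
    """Viterbi detection for a PR1 (1 + D) target with +/-1 inputs.

    State = previous input bit a_{k-1}; branch output = a_k + a_{k-1}.
    The initial bit is pinned to a_init via an infinite starting cost
    on the other state.
    """
    states = (-1, 1)
    cost = {a_init: 0.0, -a_init: np.inf}
    paths = {a_init: [], -a_init: []}
    for rk in r:
        new_cost, new_paths = {}, {}
        for s_new in states:                        # current bit a_k
            metrics = [(cost[s_old] + (rk - (s_new + s_old)) ** 2, s_old)
                       for s_old in states]         # previous bit a_{k-1}
            m, s_best = min(metrics)
            new_cost[s_new] = m
            new_paths[s_new] = paths[s_best] + [s_new]
        cost, paths = new_cost, new_paths
    return np.array(paths[min(cost, key=cost.get)])

# usage: a noiseless 1 + D channel with known starting state a_{-1} = 1
rng = np.random.default_rng(2)
a = rng.choice([-1, 1], size=20)
r = a + np.concatenate(([1], a[:-1]))
assert np.array_equal(viterbi_pr1(r), a)
```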
The areal density of a magnetic recording medium can be increased by increasing the linear bit density, by increasing the track density, or, more commonly, by a combination of both. Equalizing the signal to a higher-order partial response polynomial (e.g., EPR4) and employing a trellis code in conjunction with a PR4 channel (e.g., TCPR4) are among the techniques that achieve this goal by providing performance gains over a PR4 system. In this study, the potential increases in linear and track densities afforded by EPR4 and TCPR4 are investigated by simulating, using spinstand data, a read channel model that incorporates the effects of non-ideal timing recovery, A/D conversion, and finite register lengths. To quantify the effects of increased track density, intertrack interference (ITI) is taken into account in evaluating performance by defining the interference as a function of the track misregistration. The trade-offs between the equalizer loss due to a higher linear bit density and the SNR loss due to a narrower track are addressed as a function of PW50/T. The performance measure used captures the coding gain, the rate loss due to coding, the equalizer loss, and the degradation due to ITI. Assuming similar hardware complexity for the detectors, the two alternatives are compared in terms of the areal density increase they provide over a range of user bit densities of current practical interest.
We consider Class 4 partial response (PR) channels and examine the off-track performance of maximum likelihood sequence estimators that ignore intertrack interference (ITI) for these channels. We assume that the pulse response picked up by the head from an adjacent track is the same Class 4 channel response, with only its amplitude varying with the track-to-head distance in a way not known to the receiver. For each of these channels, we find analytical expressions for off-track performance, as well as the sets of sequences most susceptible to errors in the ITI environment. We also discuss how the off-track error rate problem can be alleviated through coding.
High-density magnetic recording channels may be modeled as partial response (PR) channels. While it is possible to use PR channels for 'uncoded' information, further improvement can likely be achieved with an error-correcting code. Codes for PR channels should have large minimum Euclidean distance, short maximum zero-run lengths, and high rates; a fast decoding algorithm with short decoding delay must also be available. It is shown that there exist convolutional code cosets that satisfy these requirements.
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for the compression of medical images. The SVQ is a fixed-rate encoder whose rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from the error propagation typical of schemes that use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from the SVQ and the ECSQ at low bit rates are indistinguishable, and our encoded images are perceptually indistinguishable from the originals when displayed on a monitor. This makes the SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for all-digital radiology environments in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.
A major problem with VQ-based image compression schemes is codebook search complexity. Recently, a new VQ scheme called the predictive residual vector quantizer (PRVQ) was proposed, with performance very close to that of the predictive vector quantizer (PVQ) at very low search complexity. This paper presents a new variable-rate VQ scheme called the entropy-constrained PRVQ (EC-PRVQ), which is designed by imposing a constraint on the output entropy of the PRVQ. We emphasize the design of the EC-PRVQ for bit rates ranging from 0.2 bpp to 1.0 bpp, corresponding to compression ratios of 8 to 40, the range likely to be used by most real-life applications permitting lossy compression. The proposed EC-PRVQ is found to give good rate-distortion performance and clearly outperforms the state-of-the-art image compression algorithm developed by the Joint Photographic Experts Group (JPEG). The robustness of the EC-PRVQ is demonstrated by encoding several test images taken from outside the training data. The EC-PRVQ not only gives better performance than JPEG at a manageable encoder complexity, but also retains the inherent simplicity of the VQ decoder.
Real-time multimedia communication over the PSTN (Public Switched Telephone Network) or a wireless channel requires video signals to be encoded at bit rates well below 64 kbit/s. Most current work on such very low bit rate video coding is based on the H.261 or H.263 schemes. The H.263 encoding scheme, for example, consists mainly of motion estimation and compensation, the discrete cosine transform, and run-length and variable/fixed-length coding. Vector quantization (VQ) is an efficient alternative scheme for coding at very low bit rates. One such VQ scheme applied to video coding is interframe hierarchical vector quantization (IHVQ). One problem with IHVQ, and VQ in general, is the computational complexity of the codebook search. A number of techniques have been proposed to reduce the search time, including tree-structured VQ, finite-state VQ, cache VQ, and hashing-based codebook reorganization. In this paper, we present an IHVQ coder with a hashing-based scheme that reorganizes the codebook so that the codebook search time, and thus the encoding time, is significantly reduced. We applied the algorithm to the same test environment as H.263 and evaluated its coding performance. The performance of the proposed scheme is significantly better than that of IHVQ without the hashed codebook, and it is comparable to, and often better than, that of H.263, due mainly to the hashing-based codebook reorganization.
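The hashing idea can be sketched as follows. The hash key used here (a quantized vector mean) and the neighbor-bucket search are illustrative choices; the paper's reorganization scheme may use a different key:

```python
import numpy as np

def build_hashed_codebook(codebook, n_buckets):
    """Reorganize a VQ codebook (rows = codewords) into buckets keyed
    by a cheap hash of the vector: here, its quantized mean."""
    means = codebook.mean(axis=1)
    lo, hi = means.min(), means.max()
    def key(v):
        k = int((v.mean() - lo) / (hi - lo + 1e-12) * n_buckets)
        return min(max(k, 0), n_buckets - 1)
    buckets = [[] for _ in range(n_buckets)]
    for idx, c in enumerate(codebook):
        buckets[key(c)].append(idx)
    return key, buckets

def encode(v, codebook, key, buckets, spread=1):
    """Search only the hashed bucket and its neighbors instead of the
    whole codebook, trading a little distortion for much less work."""
    k = key(v)
    cand = [i for b in buckets[max(0, k - spread):k + spread + 1] for i in b]
    if not cand:                            # fallback: full search
        cand = range(len(codebook))
    return min(cand, key=lambda i: np.sum((codebook[i] - v) ** 2))
```

Only a few buckets are scanned per input vector, so the search cost drops roughly by the ratio of bucket size to codebook size, at the cost of occasionally missing the true nearest codeword.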
This paper presents an efficient temporal, spatial, and frequency decomposition method that improves on existing methods for 3D subband coding of video signals. In this method, a given video sequence is first partitioned into constant-sized 'groups' of frames. Within each group, every two consecutive frames are decomposed into low and high temporal subbands by a two-tap temporal filter. The redundancy in the low temporal subbands is removed by a closed-loop DPCM unit. The low and high subbands of the first pair of frames, the differences of the low subbands from the DPCM loop, and the high subbands of the subsequent pairs of frames in the group are then divided into image blocks of equal size, such that each block can be efficiently decomposed by an adaptive wavelet packet based on a rate-distortion criterion. The wavelet packet subbands of all blocks are quantized by a hybrid scalar/pyramidal lattice vector quantizer. This scheme achieves minimum distortion under a rate constraint specified for the groups of frames and under the structural constraints of the algorithm. Motion compensation is not used explicitly; the effects of motion are instead accounted for through the low-high temporal subbanding and the DPCM procedure. The results compare favorably with those of traditional video coding techniques. Although the computational complexity of this scheme is higher than that of the existing 3D subband method, it is suitable for parallel implementation.
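The first, temporal stage of the decomposition can be sketched directly. The two-tap filter is taken to be the Haar pair, and the DPCM loop is shown open-loop (without the quantizer in the loop) for brevity; both are simplifying assumptions:

```python
import numpy as np

def temporal_stage(group):
    """Split a group of frames into low/high temporal subbands with a
    two-tap (Haar) filter over consecutive frame pairs, then difference
    successive low bands as a stand-in for the closed-loop DPCM unit."""
    pairs = list(zip(group[::2], group[1::2]))
    lows  = [(f0 + f1) / 2.0 for f0, f1 in pairs]   # low temporal subband
    highs = [(f0 - f1) / 2.0 for f0, f1 in pairs]   # high temporal subband
    dpcm  = [lows[0]] + [l1 - l0 for l0, l1 in zip(lows, lows[1:])]
    return dpcm, highs

# usage: a 'group' of 8 random 16x16 frames
rng = np.random.default_rng(4)
group = [rng.standard_normal((16, 16)) for _ in range(8)]
dpcm, highs = temporal_stage(group)
```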
It is difficult to achieve good low bit rate image compression performance with traditional block coding schemes, such as transform coding and vector quantization, without regard for human visual perception or signal dependency. These classical block coding schemes are based on minimizing the MSE at a given rate; this results in more bits being allocated to areas that may not be visually important, and the resulting quantization noise manifests as blocking artifacts. Blocking artifacts are known to be psychologically more annoying than white noise when the human visual response is considered. While image-adaptive vector quantization (IAVQ) attempts to address this problem for traditional vector quantization (VQ) schemes by exploiting image dependency, it ignores human visual perception when allocating bits. This paper addresses the problem through a new IAVQ scheme based on human visual perception. In this method, the input image is partitioned into visual classes, and each class, depending on its visual importance, is adaptively or universally encoded. The objective and subjective quality of this scheme has been compared with JPEG and a previously proposed image-adaptive VQ scheme; the new scheme subjectively outperforms both at low bit rates.
A new coding scheme for the linear quadtree and a basic geometric operation are proposed in this paper. There are two objectives. The first is to find a coding scheme that reduces the large storage required for raster data relative to past research. The second is to verify the feasibility of the proposed coding scheme for spatial data operations. The proposed linear quadtree coding scheme is developed on the basis of the hierarchical structure of the quadtree. Breadth-first search and the Morton sequence, together with two data structures, a branch list and a data list, are used to derive the proposed coding scheme. The branch list maintains the outputs of the quadtree decomposition and uses only one bit to record each node; all terminal nodes, representing different spatial data, are recorded in the data list. An advantage of the data list is that the number of bits used to represent the various objects is extensible, so constructing new quadtrees is no longer necessary. The feasibility of the proposed linear quadtree coding scheme is verified with two raster images of spatial data. Experimental results reveal that the proposed scheme has the lowest storage requirement among the coding schemes compared. A basic operation, rotation, is also implemented to demonstrate the applicability of the proposed coding scheme to geometric operations. Owing to the characteristics of the proposed coding scheme, the multicolor quadtree problem is also solvable by the method proposed in this paper.
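A minimal sketch of the branch-list/data-list encoding follows. The breadth-first traversal and Morton child order match the description above, while the bit-level packing of the two lists is left out:

```python
import numpy as np

def encode_quadtree(img):
    """Breadth-first linear-quadtree encoding sketch.

    The branch list stores one bit per visited node (1 = subdivided,
    0 = leaf) in BFS order with children enumerated in Morton order;
    the data list stores each leaf's value, so value codes may use any
    number of bits (multicolor rasters included).
    """
    branch, data = [], []
    queue = [(0, 0, img.shape[0])]            # (row, col, size), size = 2^n
    while queue:
        r, c, s = queue.pop(0)
        block = img[r:r + s, c:c + s]
        if np.all(block == block[0, 0]):      # uniform block -> leaf
            branch.append(0)
            data.append(int(block[0, 0]))
        else:                                 # mixed block -> subdivide
            branch.append(1)
            h = s // 2                        # children in Morton order
            queue += [(r, c, h), (r, c + h, h), (r + h, c, h), (r + h, c + h, h)]
    return branch, data

img = np.array([[0, 0, 1, 2],
                [0, 0, 1, 1],
                [3, 3, 1, 1],
                [3, 3, 1, 1]])
branch, data = encode_quadtree(img)   # branch: 9 bits, data: 7 leaf values
```

Because leaf values live only in the data list, widening the value code (e.g., for multicolor rasters) never forces the quadtree structure in the branch list to be rebuilt.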
In this work, a new spectral representation for noisy images is investigated. The orthogonal image expansion is based on the eigenvectors of the symmetric CK criterion, which takes into account the correlation properties of both the signal and noise ensembles. We expect such a representation to concentrate the noiseless image in the principal spectral components, so that effective noise suppression can be achieved by truncating the spectrum. An efficient algorithm for computing the KL and CK eigenvectors is proposed. Numerical experiments verify the algorithm's efficiency and its suitability for image processing problems.
Identification of the pulse (dibit) and step (transition) responses of magnetic storage channels is important for the design of detection circuitry and for the comparison of various media, heads, and other channel components. One standard technique for channel identification is to measure the read-head response to a known data sequence written on the medium and then apply a least-squares procedure to identify the dibit and transition responses. Other techniques involve either measuring the average response of the system to an isolated transition or performing a discrete Fourier transform on the read-head response to a pseudorandom data pattern. We propose a technique that improves on the least-squares estimate by taking advantage of the statistical information available from the oversampled channel output. Reliable estimates can be obtained even when the training sequence is not long enough to estimate the channel response using conventional least squares alone. Since the resulting adaptive identifier uses both a short training sequence (a nonblind technique) and properties of the transmitted signal (a blind technique) to estimate the channel response, it is called a semiblind or partially blind technique. This can also be viewed from the adaptive system identification point of view, where both direct (training signal) and indirect (data statistics) knowledge about the system are used for better system identification.
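The conventional least-squares baseline that the semiblind identifier improves on can be sketched in a few lines. The channel taps and noise level below are hypothetical, and the oversampling and blind statistics of the paper are omitted:

```python
import numpy as np

def ls_channel_estimate(readback, bits, n_taps):
    """Least-squares channel response estimate from a known written
    sequence: solve readback ~ X h, where row k of X holds the written
    NRZ symbols b_k, ..., b_{k-n_taps+1}."""
    X = np.column_stack([np.roll(bits, i) for i in range(n_taps)])
    X, y = X[n_taps:], readback[n_taps:]   # drop circular wrap-around rows
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h

# usage with a synthetic channel (hypothetical taps)
rng = np.random.default_rng(3)
bits = rng.choice([-1.0, 1.0], size=500)
h_true = np.array([0.2, 1.0, -1.0, -0.2])
readback = (np.convolve(bits, h_true)[:bits.size]
            + 0.05 * rng.standard_normal(bits.size))
h_est = ls_channel_estimate(readback, bits, n_taps=4)
```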
We propose a new blind equalization technique based on second- and fourth-order cumulants. The proposed algorithm equalizes the second- and fourth-order cumulants of the input and output sequences. It is entirely driven by statistics, requiring knowledge only of the variance (power) of the input signal. Because higher-order cumulants are insensitive to Gaussian processes, the algorithm performs well under additive Gaussian noise. Simulation examples are presented in which the proposed technique is compared with other existing equalization algorithms.
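A sketch of the idea: adapt FIR equalizer taps so that the output's second- and fourth-order cumulants match those of the input. Finite-difference gradients keep the sketch short (the paper derives its own updates), and both target cumulants are taken as known here; for an i.i.d. source of known family, the fourth-order target follows from the variance:

```python
import numpy as np

def cumulant_cost(y, m2_in, c4_in):
    """Mismatch between the output's second- and fourth-order cumulants
    and the input's (zero-mean signals assumed)."""
    m2 = np.mean(y ** 2)
    c4 = np.mean(y ** 4) - 3.0 * m2 ** 2   # fourth-order cumulant
    return (m2 - m2_in) ** 2 + (c4 - c4_in) ** 2

def equalize(x, n_taps, m2_in, c4_in, iters=200, eps=1e-4, lr=0.05):
    """Gradient descent on the cumulant mismatch via finite differences;
    an illustrative stand-in for the paper's analytic adaptation."""
    w = np.zeros(n_taps)
    w[0] = 1.0                             # start from a pass-through filter
    for _ in range(iters):
        base = cumulant_cost(np.convolve(x, w, 'same'), m2_in, c4_in)
        grad = np.zeros_like(w)
        for i in range(n_taps):
            wp = w.copy()
            wp[i] += eps
            grad[i] = (cumulant_cost(np.convolve(x, wp, 'same'),
                                     m2_in, c4_in) - base) / eps
        w -= lr * grad
    return w
```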
Non-Gaussian statistical signal processing is important when signals deviate from the ideal Gaussian model. One of the most important non-Gaussian families is the class of alpha-stable (0 < alpha < 2) distributions, which have proven effective, both in theory and in practice, for modeling impulsive signal environments. This paper presents a brief introduction to the alpha-stable distributions, their application to signal processing using fractional lower-order statistics, and the alpha-spectrum, a new spectral analysis tool for blind channel identification in impulsive environments.
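A small illustration using SciPy's alpha-stable generator: for alpha < 2 the variance diverges, so fractional lower-order moments E|X|^p with p < alpha take over the role of second-order statistics. The particular alpha and p below are arbitrary:

```python
import numpy as np
from scipy.stats import levy_stable

alpha = 1.5                             # characteristic exponent, 0 < alpha < 2
x = levy_stable.rvs(alpha, beta=0.0, size=100_000, random_state=0)

p = 0.8                                 # fractional order, p < alpha
flom = np.mean(np.abs(x) ** p)          # fractional lower-order moment: stable
var = np.var(x)                         # sample variance: grows without bound
print(f"FLOM(p={p}) = {flom:.3f}, sample variance = {var:.1f}")
```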
Methods are proposed for the blind identification of finite-order discrete-time nonlinear models with non-Gaussian circular inputs. The nonlinear models consist of two finite-memory linear time-invariant (LTI) filters separated by a zero-memory nonlinearity (ZMNL) of the polynomial type (the LTI-ZMNL-LTI models). The linear subsystems are allowed to be of nonminimum phase (NMP). The methods base their estimates of the impulse responses on slices of the (N+1)th-order polyspectra of the output sequence. It is shown that the identification of LTI-ZMNL systems requires only a 1-D moment or polyspectral slice. The coefficients of the ZMNL are not estimated and need not be known; the order of the nonlinearity can, in theory, be estimated from the received signal. These methods possess several noise and interference suppression characteristics and have applications in modeling nonlinearly amplified QAM/QPSK signals in digital satellite and microwave communications.
We present the CBEA algorithm, the cross-correlation based blind equalization algorithm, for the reconstruction of an unknown nonwhite signal from two or more filtered and noisy versions of it. The proposed approach reconstructs the unknown, generally nonminimum-phase channels in an adaptive fashion by minimizing two new error criteria. The adaptation rule is computationally very simple, since it involves only second-order cross-correlation operators applied to the observations. Simulations exhibit fast convergence and robustness to low signal-to-noise ratios, as long as the noise is uncorrelated with the input signal.
We consider a class of M-ary runlength-limited codes that achieve capacity and have the fewest number of encoder states. The codes apply to all M-ary (d,k) constraints with rational capacity. The codes have finite-state encoders and sliding-block decoders with window size d + 1.
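For reference, the capacity that such codes achieve can be computed from the constraint graph. The sketch below uses the standard state machine for M-ary (d,k) constraints, in which a nonzero symbol (M - 1 choices) resets the run counter:

```python
import numpy as np

def dk_capacity(d, k, M=2):
    """Shannon capacity (bits/symbol) of an M-ary (d, k) runlength
    constraint: log2 of the largest eigenvalue of the adjacency matrix
    of the constraint graph (states = current zero-run length)."""
    n = k + 1
    A = np.zeros((n, n))
    for s in range(n):
        if s < k:
            A[s, s + 1] = 1.0        # emit a zero: extend the run
        if s >= d:
            A[s, 0] = M - 1          # emit one of M-1 nonzero symbols
    return np.log2(np.max(np.abs(np.linalg.eigvals(A))))

print(dk_capacity(1, 7))             # binary (1,7): ~0.6793 bits/symbol
```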
The performance of various data detectors, including the maximum likelihood sequence detector (MLSD), is analyzed as a function of media thickness and the run-length limited (RLL) code d-constraint. The analysis includes the variation of pulse width and magnetic transition width with media thickness. Performance is calculated in the presence of nonlinear partial erasure, or transition amplitude reduction, which changes with media thickness. The RLL code d-constraint affects the media nonlinearity and the detector processing gain. The results predict an optimum media thickness for a given detector and show the relative performance of different detectors and RLL d-constraints.
Significant improvements in magnetic storage densities have recently been made feasible by the application of partial response signaling combined with maximum-likelihood sequence estimation (PRML). To enhance the performance of this technique when applied to the class-IV partial response channel, which is recognized as an appropriate model for the magnetic recording channel, it is often required to bound the number of consecutive zeros in the recorded data sequence by some positive integer G and the number of consecutive zeros in each of the odd and even subsequences of the recorded data sequence by some positive integer I. Such a constraint is referred to as a (0, G/I) constraint. Codes are used to map arbitrary unconstrained sequences to (0, G/I) constrained sequences. We investigate the strictest constraints that can be achieved by block codes of high rate and compare them with the strictest constraints achievable by arbitrary coding schemes.
Partial-response maximum-likelihood (PRML) methods are now being adopted in many digital magnetic recording systems. It is expected that as linear densities continue to increase, there will be a need for 'extended' PRML techniques. In fact, commercial systems incorporating extended partial-response target channels, denoted EPRML and EEPRML, employing the EPR4 transfer polynomial h(D) = 1 + D - D^2 - D^3 and the EEPR4 transfer polynomial h(D) = 1 + 2D - 2D^3 - D^4, respectively, have recently appeared. Among these systems, several apply the rate 2/3, (d,k) = (1,7) runlength-limited code, originally designed for use with peak detection, in combination with a detector trellis structure reflecting the d = 1 constraint. In the EEPR4 case, the d = 1 constraint is known to provide a coding gain of 2.2 dB, unnormalized for the rate loss, relative to the uncoded channel. In this paper, we describe a nested family of code constraints, properly containing the d = 1 constraint, intended for use on the EEPR4 channel. These constraints are shown to have the same distance-enhancing properties as the d = 1 constraint. They permit the design of practical codes for EEPR4 that offer the same coding gain as the (1,7)-coded system, but with higher achievable code rates. The paper concludes with the construction of such a code which, having rate 4/5, offers a 20% rate increase over the (1,7) code.
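The quoted 2.2 dB figure can be checked directly from the output distances of input error events on the EEPR4 target (standard distance analysis, with error events scaled to unit magnitude):

```python
import numpy as np

h = np.array([1, 2, 0, -2, -1])     # EEPR4 target h(D) = 1 + 2D - 2D^3 - D^4

def event_distance(e):
    """Squared Euclidean distance at the channel output for an input
    error event e (NRZ input differences, unit-magnitude scaling)."""
    return int(np.sum(np.convolve(e, h) ** 2))

print(event_distance([1, -1, 1]))   # dominant uncoded error event: distance 6
print(event_distance([1]))          # shortest event surviving d = 1: distance 10
print(10 * np.log10(10 / 6))        # ~2.2 dB, the gain quoted above
```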