The increasing importance of processing large vectors and of parallel computing in many scientific and engineering applications calls for new ideas in the design of highly efficient transform algorithms and their implementations. In the past decade, fast orthogonal transforms have been widely used in areas such as data compression, pattern recognition and image reconstruction, interpolation, linear filtering, spectral analysis, watermarking, cryptography, and communication systems. The computation of unitary transforms is complicated and time consuming, and it would not be possible to use orthogonal transforms in signal and image processing applications without effective algorithms to calculate them. The increasing speed and cost requirements of many applications have stimulated the development of fast algorithms for unitary transforms such as the Fourier, cosine, sine, Hartley, Hadamard, and slant transforms.
A class of HTs (such as the Hadamard matrices ordered by Walsh and Paley) plays an important role among these orthogonal transforms. These matrices are known as nonsinusoidal orthogonal transform matrices and have been applied in digital signal processing. Recently, HTs and their variations have been widely used in audio and video processing. For efficient computation of these transforms, fast algorithms have been developed. These algorithms require only N log2 N addition and subtraction operations (for N = 2^k, N = 12 · 2^k, N = 4^k, and several others). In addition, the success of commonly used transforms has motivated many researchers in recent years to generalize and parameterize these transforms in order to expand the range of their applications and provide more flexibility in representing, encrypting, interpreting, and processing signals.
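As a sketch of how the N log2 N operation count arises for N = 2^k, the fast Walsh–Hadamard transform can be written as k = log2 N stages of butterflies, each stage performing N additions/subtractions and no multiplications. The function name `fwht` and the natural (Hadamard) ordering below are illustrative choices, not notation from the text:

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (natural/Hadamard order).

    Takes a sequence of length N = 2^k and returns the unnormalized
    transform using only additions and subtractions: log2(N) stages,
    each doing N add/subtract operations, i.e. N log2 N in total.
    """
    x = list(x)
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # one butterfly stage: combine elements h apart
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # sum and difference
        h *= 2
    return x
```

Since the Hadamard matrix satisfies H·H = N·I, applying `fwht` twice returns the input scaled by N, which gives a quick correctness check.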
Many of today's advanced workstations (for example, the IBM RISC System/6000, Model 530) and other signal processors are designed for efficient fused multiply/add arithmetic, in which the primitive operation is ±a ± bc, where a, b, and c are real numbers.
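As an illustration of an algorithm that maps directly onto this ±a ± bc primitive, consider polynomial evaluation by Horner's rule, where every step is exactly one multiply/add. This example is ours, not from the text, and is written in plain Python for clarity rather than using any hardware-specific fused operation:

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x by Horner's rule.

    coeffs lists the coefficients from highest to lowest degree.
    Each loop iteration is a single multiply/add of the form
    a + b*c, the primitive that fused multiply/add hardware
    executes in one instruction.
    """
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c  # one multiply/add step
    return acc
```

Evaluating a degree-n polynomial this way costs n multiply/add primitives, so on such hardware it runs in n fused operations rather than n multiplications plus n additions.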