As system solutions move to volume production, it is often necessary to integrate part or all of the solution. Integration may be done to lower cost, power, or noise, or to increase the performance or even the security of the system. This may be accomplished by migrating some hard-wired logic to programmable devices and other logic to ASICs. However, integration becomes more difficult if the system includes a Digital Signal Processor (DSP), because operating speeds increase while power budgets shrink. Integrating memory with the processor is a possibility; however, this requires a line of DSP devices with varying amounts of memory. The paper examines different techniques for system integration and discusses the advantages of each. It also discusses the design environments and tools required for integrating DSPs with other required circuits.
Active noise control (ANC) is achieved by introducing an “anti-noise” through an appropriate array of secondary sources. These secondary sources are interconnected through an electronic system using a specific signal processing algorithm for the particular noise cancellation scheme. ANC has application to a wide variety of problems in manufacturing, industrial operations, and consumer products. This paper presents the development of ANC systems using adaptive signal processing and digital signal processors. Concise derivations and analysis of commonly used adaptive structures and algorithms for ANC applications are included.
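The adaptive structures surveyed for ANC are typically built on the LMS family of algorithms (in practice, filtered-x LMS, whose secondary-path filtering is omitted here). A minimal LMS canceller sketch, with an invented tap count, step size, and test signal, showing how the error signal drives the weight update:

```python
# Minimal LMS adaptive canceller sketch (illustrative; taps=4 and mu=0.05 are
# arbitrary choices, and the secondary path of a real ANC system is omitted).
import math

def lms_cancel(x, d, taps=4, mu=0.05):
    """Adapt an FIR filter so its output y tracks d; return the error e[n]."""
    w = [0.0] * taps            # filter weights
    buf = [0.0] * taps          # tapped delay line of the reference input
    e = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]                                # shift delay line
        y = sum(wi * bi for wi, bi in zip(w, buf))           # filter output
        en = dn - y                                          # residual noise
        w = [wi + mu * en * bi for wi, bi in zip(w, buf)]    # LMS weight update
        e.append(en)
    return e

# The "noise" d is a delayed, scaled copy of the reference x, so a 4-tap
# filter can cancel it exactly and the residual should decay toward zero.
x = [math.sin(0.3 * n) for n in range(2000)]
d = [0.8 * math.sin(0.3 * (n - 1)) for n in range(2000)]
e = lms_cancel(x, d)
```

The residual error shrinks as the weights converge, which is the mechanism a real ANC controller uses to drive the secondary sources.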
Smart structures generally consist of a base material (e.g., a composite) containing large numbers of embedded and interconnected sensors, actuators, and processors. With these embedded components, smart structures have a built-in ability to sense and respond to environmental stimuli without requiring externally mounted transducers. Research is currently underway to develop smart structures for a variety of applications, including self-diagnosis of the structure for damage detection and health monitoring. Industries with a particular interest in this area include aerospace, marine, ground transportation, power utilities, and manufacturing. In recent years, research has focused on the materials-science issues related to embedding the transducers. However, significant barriers still prevent widespread use of smart materials for health monitoring. This paper discusses the barriers posed by the difficult problem of integrating and processing the wealth of information from these large numbers of transducers.
Signal processing problems dealing with linear non-Gaussian signals, nonlinearities, and nonstationarities cannot be addressed completely using time-invariant second-order statistical descriptors. Traditional correlation and spectral analysis are currently being generalized to higher-order moments, cumulants, and polyspectra. At the same time, there is an effort to cope with structured nonstationarities, and in particular with cyclostationary processes: signals exhibiting periodicity in their statistical behavior. A critical overview of higher-order and cyclic spectral analysis is attempted herein, with emphasis on statistical signal processing aspects. Major advances and limitations are described, along with some directions for future research.
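The move from second-order to higher-order statistics rests on the fact that all cumulants above second order vanish for a Gaussian process. A small numerical sketch (the signal choices are ours, not from the paper) estimating the third-order cumulant c3(t1, t2) = E[x(n) x(n+t1) x(n+t2)] of a zero-mean process:

```python
# Third-order cumulant estimate: near zero for Gaussian data, nonzero for a
# skewed (non-Gaussian) signal. Illustrative sketch with synthetic data.
import random

def third_cumulant(x, t1, t2):
    """Sample estimate of c3(t1, t2) after removing the mean."""
    m = sum(x) / len(x)
    x = [v - m for v in x]
    n = len(x) - max(t1, t2)
    return sum(x[i] * x[i + t1] * x[i + t2] for i in range(n)) / n

random.seed(0)
gauss = [random.gauss(0.0, 1.0) for _ in range(100000)]
skewed = [random.expovariate(1.0) for _ in range(100000)]  # exponential: skewed

c3_gauss = third_cumulant(gauss, 0, 0)   # should be near 0
c3_skew = third_cumulant(skewed, 0, 0)   # central third moment of Exp(1) is 2
```

The Gaussian estimate hovers near zero while the exponential signal's does not, which is exactly the discriminating power that polyspectral methods exploit.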
Modern speech understanding systems merge interdisciplinary technologies from Signal Processing, Pattern Recognition, Natural Language, and Linguistics into a unified statistical framework. These systems, which have applications in a wide range of signal processing problems, represent a revolution in Digital Signal Processing (DSP). Once a field dominated by vector-oriented processors and linear algebra-based mathematics, DSP now hosts systems that rely on sophisticated statistical models implemented using a complex software paradigm. Such systems are now capable of understanding continuous speech input for vocabularies of several thousand words in operational environments. The current generation of deployed systems, based on small vocabularies of isolated words, will soon be replaced by a new technology offering natural language access to vast information resources such as the Internet, and providing completely automated voice interfaces for mundane tasks such as travel planning and directory assistance.
Although speech coding has been an ongoing area of research for several decades, the recent advances in real-time DSP and the emergence of new applications have spurred a renewed interest in the area. Several speech coding algorithms have been adopted in international standards and study groups are drafting new standards for existing and emerging mobile and multimedia applications.
In this paper, we provide a survey of speech coding technologies with emphasis on those methods that are part of recent communication standards. The paper starts with an introduction to speech coding and continues with descriptions of linear predictive vocoders, analysis-by-synthesis linear prediction, sub-band and transform coders, and sinusoidal analysis-synthesis systems. We conclude with a summary and a brief discussion of opportunities for future research.
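Linear predictive vocoders, the first family surveyed, model each speech frame with an all-pole filter whose coefficients are obtained from the frame's autocorrelation via the Levinson-Durbin recursion. A minimal sketch of that core computation (the predictor order and test signal are illustrative, not from any standard):

```python
# Autocorrelation-method LPC sketch: autocorrelation of a frame, then the
# Levinson-Durbin recursion. Order p=2 and a pure-cosine "frame" are chosen
# only so the expected coefficients are known in closed form.
import math

def autocorr(x, lag):
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

def levinson(r, p):
    """Solve the Toeplitz normal equations for predictor A(z) of order p."""
    a = [1.0] + [0.0] * p
    err = r[0]
    for i in range(1, p + 1):
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / err                        # reflection coefficient
        new = a[:]
        for j in range(1, i):
            new[j] = a[j] + k * a[i - j]
        new[i] = k
        a = new
        err *= (1.0 - k * k)                  # prediction error power shrinks
    return a, err

# A cosine at frequency w satisfies x[n] = 2cos(w) x[n-1] - x[n-2], so the
# order-2 predictor should recover a = [1, -2cos(w), 1] almost exactly.
frame = [math.cos(0.3 * n) for n in range(4000)]
r = [autocorr(frame, lag) for lag in range(3)]
a, err = levinson(r, 2)
```

Real vocoders add windowing, higher orders (typically 10-16), and quantization of the coefficients, but the recursion above is the common core.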
High-quality digital audio compression algorithms are a compelling application for digital signal processing (DSP). At audio data rates, inexpensive, high-performance device technology allows complex algorithms that use models of human hearing to optimize compression. We discuss the principles of operation of state-of-the-art audio compression algorithms and their implementation using DSP techniques. The recent MPEG-1 and MPEG-2 Audio International Standards are used as examples.
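The role of the hearing model can be sketched in one step: each subband is quantized with a step size tied to its masking threshold, so that quantization noise stays below what the ear can detect in that band. All numbers below are invented for illustration; this is not MPEG's psychoacoustic model:

```python
# Toy perceptual quantization sketch: a heavily masked band tolerates a larger
# quantization step (fewer bits). Thresholds and sample values are invented.
def quantize_band(samples, step):
    return [round(s / step) for s in samples]

def dequantize_band(codes, step):
    return [c * step for c in codes]

subbands = [[0.9, -0.4, 0.7], [0.05, -0.02, 0.04]]
thresholds = [0.01, 0.2]          # band 1 is heavily masked by band 0
steps = [2 * t for t in thresholds]   # step chosen so error <= threshold

coded = [quantize_band(s, st) for s, st in zip(subbands, steps)]
decoded = [dequantize_band(c, st) for c, st in zip(coded, steps)]
# Rounding error is at most half the step, i.e. at most the masking threshold,
# so the distortion in each band should be inaudible by construction.
```

Real coders derive the thresholds per frame from an FFT-based psychoacoustic analysis and then allocate bits under a rate constraint; the step-size logic above is the end point of that chain.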
Wavelets are a new family of signal transformations. In a wavelet transform, the signal is decomposed in terms of dilates and translates of a single function, the mother wavelet. Wavelet transforms have a number of properties that can be exploited in many signal processing applications. These properties include the ability to trade time and frequency resolution in a controlled manner, a relationship between the time behavior of a signal and the structure of its wavelet transform, and compact representations for wide classes of deterministic and stochastic signals. This paper provides an overview of continuous and discrete wavelet transforms and reviews some of their applications.
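These properties are easiest to see in the simplest discrete wavelet transform, the Haar transform: one level splits the signal into coarse scaling coefficients and localized detail coefficients, the split is exactly invertible, and flat regions of the signal produce zero detail coefficients (the compactness property). A sketch with an invented example signal:

```python
# One-level Haar transform: orthonormal split into averages (approximation)
# and differences (detail), with perfect reconstruction. Illustrative sketch.
import math

SQRT2 = math.sqrt(2.0)

def haar_step(x):
    """Split x (even length) into approximation and detail coefficients."""
    approx = [(x[2*i] + x[2*i + 1]) / SQRT2 for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i + 1]) / SQRT2 for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert haar_step exactly."""
    x = []
    for av, dv in zip(approx, detail):
        x.append((av + dv) / SQRT2)
        x.append((av - dv) / SQRT2)
    return x

x = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0]
a, d = haar_step(x)        # the flat pair (8, 8) yields a zero detail coeff
y = haar_inverse(a, d)     # reconstruction matches x
```

Iterating `haar_step` on the approximation gives the multiresolution pyramid; because the transform is orthonormal, energy is preserved across the two coefficient sets at every level.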
This article is designed to serve as a review of digital image restoration, and as an analysis of the current and possible future trends in the field. The primary issues addressed here are the past accomplishments in the area, the present status of research in digital image restoration, and the future paths that researchers in this field may take. This critical analysis also serves to outline many of the important driving applications in this field.
The evolution of commercial signal processors has made it possible to migrate these devices toward the sensor of a detection system while eliminating the specialized hardware that previously performed similar operations. As signal processor technology has evolved, systems have been built with multiple instances of the signal processor to achieve a linear improvement in throughput. Two forces enable this linear improvement: primarily, the ability of an algorithm to be mathematically separated into several components, and secondarily, the interconnection structure established between the signal processors. A classic problem in achieving this linear improvement is implementing the Fast Fourier Transform (FFT) across several signal processor systems. Other signal- and image-processing algorithms that exhibit a similar linear improvement with the number of processors also exist. A critical evaluation of generic signal- and image-processing algorithms on multiple-signal-processor architectures has been performed, using the single metric that reflects true performance: execution time. In this evaluation, several strategies were assessed to reveal strengths and weaknesses, sensitive to the target architecture, in maximizing performance gain. This paper describes the methodology of this study and presents results for the types of algorithms that achieve this linear improvement with respect to the number of signal processors.
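The FFT's suitability for multiprocessor decomposition comes from its divide-and-conquer structure: an N-point transform splits into two independent N/2-point transforms over the even- and odd-indexed samples, which could be assigned to separate processors, followed by a single twiddle-factor combining pass. A sequential sketch of that decomposition (the processor assignment in the comments is notional, not from the paper):

```python
# Radix-2 decimation-in-time FFT sketch, verified against a direct DFT.
# The two half-size recursions are independent, which is the property that
# lets them be mapped onto separate signal processors.
import cmath

def fft(x):
    n = len(x)                  # n must be a power of two
    if n == 1:
        return x[:]
    even = fft(x[0::2])         # independent: could run on processor A
    odd = fft(x[1::2])          # independent: could run on processor B
    out = [0j] * n
    for k in range(n // 2):     # combining pass with twiddle factors
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def dft(x):
    """Direct O(N^2) DFT, used only to verify the FFT result."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

x = [complex(v) for v in (1, 2, 3, 4, 0, -1, -2, -3)]
X = fft(x)
ref = dft(x)
```

The combining pass is where the interconnection structure matters in a real multiprocessor implementation: each half-transform's outputs must cross between processors, so communication cost competes with the linear compute speedup.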
Many aerospace/defense sensing and dual-use applications require high-performance computing, extensive high-bandwidth interconnect, and real-time deterministic operation. This paper describes the architecture of a scalable multicomputer that includes DSP and RISC processors. A single-chassis implementation is capable of delivering in excess of 10 GFLOPS of DSP processing power with 2 Gbytes/s of real-time sensor I/O. A software approach to implementing parallel algorithms, called the Parallel Application System (PAS), is also presented. An example of applying PAS to a DSP application is shown.
Recent image processing systems are exploiting available high computing throughputs in parallel processing architectures. Algorithms are becoming more adaptive to specific clutter and scene content. Data dependencies are being used for more robust operation. This, in turn, increases the number of compute operations required per pixel or image frame.
This paper describes the relationships between the key enablers that meet system performance, density, and program requirements. Specifically, it addresses advances in processing-element throughput and parallel processing technology for meeting system timelines, and the use of commercial off-the-shelf (COTS) technology for meeting program cost and schedule requirements.
The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC-based, application-specific DSP element that, when connected in a linear array, can perform extremely high-throughput (hundreds of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least-squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic-array approach or, more commonly, programmable DSP or microprocessor approaches. The custom-logic methods can be efficient but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be used to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art and, more importantly, makes realizable solutions to matrix processing problems that were previously considered impractical to implement physically. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
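The least-squares mathematics that systolic arrays of this kind map onto CORDIC stages can be sketched in scalar code: Givens rotations (each one a plane rotation, which is what a CORDIC unit evaluates) triangularize the augmented matrix [A | b], and back-substitution then yields the least-squares solution. The array sizes and data below are tiny and illustrative only, and nothing here reproduces HAWC's actual dataflow:

```python
# Least-squares solve of an over-determined system A x = b via Givens QR,
# the rotation-based scheme that CORDIC systolic arrays implement in hardware.
import math

def givens_ls(A, b):
    """Triangularize [A | b] with Givens rotations, then back-substitute."""
    m, n = len(A), len(A[0])
    R = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented working copy
    for j in range(n):
        for i in range(j + 1, m):
            r = math.hypot(R[j][j], R[i][j])
            if r == 0.0:
                continue
            c, s = R[j][j] / r, R[i][j] / r        # rotation zeroing R[i][j]
            for k in range(j, n + 1):
                rj, ri = R[j][k], R[i][k]
                R[j][k] = c * rj + s * ri
                R[i][k] = -s * rj + c * ri
    x = [0.0] * n
    for j in range(n - 1, -1, -1):                 # back-substitution
        x[j] = (R[j][n] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x

# Fit y = 2t + 1 from four exact samples: the least-squares solution is exact.
A = [[t, 1.0] for t in (0.0, 1.0, 2.0, 3.0)]
b = [2.0 * t + 1.0 for t, _ in A]
x = givens_ls(A, b)
```

In a hardware array, each rotation's angle is computed once at a boundary cell and applied across a row of internal cells, which is what makes the linear-array mapping efficient.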
Neural networks are non-linear static or dynamical systems that learn to solve problems from examples. Learning algorithms that require a great deal of computing power can benefit from fast dedicated hardware. This paper presents an overview of digital systems for implementing neural networks. We consider three options: serial computers, parallel systems with standard digital components, and parallel systems with special-purpose digital devices. We describe many examples under each option, with an emphasis on commercially available systems. We discuss the trend toward more general architectures, mention a few hybrid and analog systems that can complement digital systems, and try to answer questions that came to our minds as prospective users of these systems. We conclude that support software and, more generally, system integration are beginning to reach the level of versatility that many researchers will require. The next step appears to be integrating all of these technologies in a new generation of big, fast, and user-friendly neurocomputers.
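The "learn from examples" behavior that such hardware accelerates can be shown in its smallest form: a single perceptron trained on the AND function. The learning rate and epoch count below are arbitrary choices for illustration; the paper itself concerns implementation platforms, not this algorithm:

```python
# Perceptron learning sketch: weights are nudged by the prediction error on
# each labeled example until the AND function is classified correctly.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out                 # 0 when the example is correct
            w[0] += lr * err * x1              # error-driven weight updates
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias = train_perceptron(samples)
preds = [1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
         for (x1, x2), _ in samples]
```

The inner loop is dominated by multiply-accumulate operations over the weights, which is why the serial, standard-parallel, and special-purpose options surveyed here differ so sharply in training throughput.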