Louis Baker has a Ph.D. in astronomy and has written books with code in C and Ada for McGraw-Hill. He may be reached at Mission Research Corp., 1720 Randolph SE, Albuquerque NM 87106.
Numerous chips are now optimized for digital signal processing (DSP). To get the high performance necessary for many applications, these chips have special instruction sets and register structures. As a consequence, they often have special compilers and languages as well. DSP chips play important roles in applications as diverse as radar, sonar, and compact disk players. Here is an overview of digital signal processing. It should give you a feeling for what it is and how you can use it.
Time Domain And Frequency Domain
In the good old days, signal processing was accomplished with analog networks built from components such as inductors and capacitors. Such a network is conveniently characterized by its frequency response, which describes the behavior of the system at each frequency. A signal in the form of a sine wave with a period T between adjacent wave crests has a frequency of 1/T. (See Figure 1.) The unit of frequency, the inverse of a second, is officially named the hertz (Hz).

An example of signal processing is sending and receiving an amplitude-modulated (AM) radio signal. Amplitude modulation multiplies the radio frequency signal (called the carrier) by the sound signal. If the sound signal is a sine wave, the resulting signal is that shown in Figure 2a. The envelope of the signal has the same frequency as the sound wave. The resulting signal can be shown to be equivalent to three pure sine waves, at frequencies F, F-f, and F+f, where F is the carrier frequency and f is the frequency of the modulating signal. The signal represented as a collection of pure sine waves is called the frequency-domain representation; the signal represented as a function of time is the time-domain representation. In addition to the carrier, two sidebands are present. With a technique called single-sideband (SSB) modulation, you can transmit only one of the sidebands.
At the receiver, you want to extract the audio signal, in this case the pure frequency f. First, you use a rectifier to block the negative values of the signal. (See Figure 2b.) I don't show the frequency-domain picture because rectifying produces an infinite sequence of frequencies, all multiples of f. Finally, you smooth this result by putting the signal through a low-pass filter (see below) to get the audio signal (Figure 2c).
With the advent of microelectronics capable of computing at megahertz frequencies, much of this processing can be done by periodically sampling the signal at high frequency. For example, if you sampled at some multiple of the carrier frequency, you would get results similar to those shown in Figure 2d. These samples could then be digitally processed by the equivalent of a low-pass filter to get similar results.
Fourier And Z Transforms
Fourier analysis is the process of resolving a signal that is a complicated function of time into a sum of sine waves. Many readers of The C Users Journal have probably heard of the FFT, or Fast Fourier Transform, which performs such a decomposition. Listing 1, taken from C Tools for Scientists and Engineers [1], gives a program to do the FFT and illustrates its use on a simple decaying pulse. The code computes with complex numbers [2]; it uses a header file, complex.h (see Listing 2), which is contained in references 1 and 2. To check the results, the same code performs the inverse FFT. You should get back the input you started with. See Listing 3 for the results. The decomposition of signals into their frequency components is fundamental to many forms of signal analysis, for example, determining the power spectrum of a signal.

The Z transform is a closely related form specially suited to analyzing periodically sampled functions. If the signal is sampled with period T (i.e., every T seconds), it has values that can be denoted s[i]: s[0] is the value at t=0, s[1] the value at t=T, and so on. If you denote by 1/Z the operation of delaying by one unit of T, then you can symbolically denote the signal s as
S(Z) = s[0] + (1/Z)s[1] + (1/Z)(1/Z)s[2] + ...

If you assign Z = e^(i2πfT), then S(Z) is the Fourier transform of the time-domain sequence s[i] into the function of frequency S(f). (Since S is a function of Z and Z is a function of the frequency f, S may be expressed as a function of f.)

This list does not exhaust all the possible transforms of interest in DSP. The Walsh transform is similar to the Fourier transform, but it uses rectangular pulses instead of sine waves. The Walsh transform, therefore, is somewhat less expensive computationally because transcendental functions do not need to be evaluated. See the review article by Bracewell [3] for a more complete list of transforms.
Filters
A filter takes an input signal and selectively passes its desirable frequency components through, filtering out or reducing other frequencies. The low-pass filter previously mentioned passes the lower frequencies, such as the audio signal, and blocks the high frequencies, such as the carrier.

You can implement a filter by starting with the sequence of signal values s[n]: Fourier transform them into S(f), multiply by a filtering function, and perform the inverse transformation to obtain the transformed sequence of time values. For a low-pass filter, for example, you might attenuate or even set to zero the higher frequency components. In doing so, you must avoid a number of pitfalls. You must treat negative frequencies properly; in effect, they are present to correctly include phase shifts of the frequency components. Aliasing effects, in which one frequency component can erroneously contribute to the effects of another, can also cause problems. See reference 1 for a more complete discussion.
The more economical approach followed by most DSP applications is to work in the time domain. If you let y[t] denote the filtered sequence, the general digital filter looks like:
y[n] = A·y[n-1] + B·y[n-2] + ... + a·s[n] + b·s[n-1] + ...

Note that in general the filter output at any time can depend upon previous output values as well as previous input values. If it does depend upon previous output values, it is called recursive. Filters that are not recursive have the property that if the input shuts off after some time, the output will do the same. Such filters are called Finite Impulse Response (FIR) filters, because their response to an impulse input (a signal that lasts for just one sampling period) is finite in extent. In general, if the coefficients A, B, etc. are nonzero and the filter output depends upon previous output values, this will not be the case; the filter can ring forever. Such filters are called Infinite Impulse Response (IIR) filters. Filters with b = c = ... = 0 are called all-pole filters. They arise from designs using the maximum entropy method to develop optimal filters.
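One update of this general recurrence might be coded as follows. This is a sketch of my own, not the routine of Listing 4; the coefficient arrays, history arrays, and calling convention are all assumptions.

```c
/* One step of the general digital filter
     y[n] = A*y[n-1] + B*y[n-2] + ... + a*s[n] + b*s[n-1] + ...
   ff[] holds the nb feedforward coefficients (a, b, ...) and fb[]
   the na feedback coefficients (A, B, ...).  shist[i] holds s[n-i]
   after shifting, and yhist[i] holds y[n-1-i].  Setting na = 0
   gives an FIR filter; na > 0 makes the filter recursive (IIR). */
static double filter_step(double s,
                          const double ff[], int nb, double shist[],
                          const double fb[], int na, double yhist[])
{
    double y = 0.0;
    int i;
    for (i = nb - 1; i > 0; i--)    /* shift input history down */
        shist[i] = shist[i - 1];
    shist[0] = s;
    for (i = 0; i < nb; i++)        /* feedforward part: multiply- */
        y += ff[i] * shist[i];      /* accumulate over the inputs  */
    for (i = 0; i < na; i++)        /* feedback (recursive) part */
        y += fb[i] * yhist[i];
    for (i = na - 1; i > 0; i--)    /* shift output history down */
        yhist[i] = yhist[i - 1];
    if (na > 0)
        yhist[0] = y;
    return y;
}
```

The body is nothing but shifts and multiply-accumulate operations, which is exactly the sequence DSP chips are built to execute in a single cycle per tap.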
Filters And Digital Filtering
Filter design requires rather sophisticated mathematics. (See references 4 and 5.) Listing 4 contains a generic digital filter routine. It is a series of multiply-accumulate cycles. Such MAC operations are the bread and butter of DSP chips, which are designed to perform such operation sequences naturally. The simplicity of the operations is what makes digital filtering attractive, particularly in hardware.

Consider the simplest imaginable low-pass filter:
y[n] = .5(s[n] + s[n-1])

The output is the average of the most recent two input signal values. A constant (DC) component of the signal, such as the sequence 1, 1, 1, 1, ..., will be unaffected. The highest-frequency component representable, namely 1, -1, 1, -1, ..., will be completely attenuated, with output 0, 0, 0, .... This is the simplest type of moving-average filter, in which the window of visible signal values is the most recent two values. A more general filter of the form:
y[n] = (s[n] + s[n-1] + ... + s[n-K])/(K+1)

can be easily constructed. Other common windows can be based on other weightings, such as the Hamming window with size 2N-1:
a[n] = .54 + .46·cos((n - (N-1))π/N)

The choice of windowing function decides the compromise between attenuating undesired frequencies and minimizing effects on the desired frequencies. Averaging or windowing filters can often be constructed more economically by using recursive filters. An example from reference 4 is:
y[n] = y[n-1] + .05·(s[n] - s[n-20])

which behaves similarly to a filter in which we average over the most recent 20 signal values. In effect, the recursive term y[n-1] maintains a memory of the summation.

The simplest high-pass filter would be:
y[n] = (s[n] - s[n-1])

Note that a constant component would be zeroed out. A more general form of high-pass filter would be
y[n] = -a·y[n-1] + (s[n] - s[n-1])

Many other types of filters are possible. Band-pass filters block or attenuate both the lower and higher frequencies outside of the band. Band-stop filters block or reject a range of frequencies. This article just scratches the surface of the subject. For more details, see references 4 and 5.
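Both high-pass forms translate directly into C. The sketch below follows the two equations above; the helper names are my own, and the feedback constant a is an illustrative parameter (keeping |a| < 1 keeps the recursive filter stable).

```c
/* y[n] = s[n] - s[n-1]: the simplest high-pass filter.
   *prev holds the previous input sample between calls. */
static double highpass_fir(double s, double *prev)
{
    double y = s - *prev;
    *prev = s;
    return y;
}

/* y[n] = -a*y[n-1] + (s[n] - s[n-1]): the recursive form.
   *sprev and *yprev hold the previous input and output. */
static double highpass_iir(double s, double *sprev, double *yprev, double a)
{
    double y = -a * *yprev + (s - *sprev);
    *sprev = s;
    *yprev = y;
    return y;
}
```

Fed a constant input, both filters settle to zero output, confirming that the DC component is blocked.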
References
[1] L. Baker, C Tools for Scientists and Engineers, McGraw-Hill, 1989.
[2] L. Baker, "Complex Numbers and Matrices in C," CUJ, May 1990, p. 59.
[3] R. N. Bracewell, "Numerical Transforms," Science, 248, May 11, 1990, pp. 697-704.
[4] P. A. Lynn, An Introduction to the Analysis and Processing of Signals, 2nd ed. (Indianapolis, IN: H. W. Sams, 1982).
[5] N. K. Bose, Digital Filters, (N.Y.: North-Holland, 1985).