Why should I use digital filters rather than simply manipulate signals in the frequency domain and then convert them back to the time domain?

I'm quite a novice in signal processing, and I know this question may be too broad, but I would still like to hear hints from experts.

I was taught to use the butter (Butterworth filter design, i.e. the maximally flat magnitude filter) and filtfilt (zero-phase digital filtering) functions for band-pass filtering of EEG (electroencephalogram) signals in MATLAB offline (i.e. after the recording is complete). This way you can avoid the otherwise inevitable "delay" introduced by a digital filter (i.e. zero-phase filtering).

Then someone asked me why we cannot instead use fft (fast Fourier transform) to get the frequency-domain representation of the signal, set the unwanted frequency bins to zero, and then use ifft (inverse fast Fourier transform) to recover the filtered data in the time domain. This frequency-domain manipulation sounded simpler and perfectly reasonable to me, and I couldn't really explain why it isn't the usual approach.

What are the advantages and disadvantages of the simple fft/ifft method for band-pass filtering? Why do people prefer FIR or IIR digital filters? For example, is the fft/ifft method more prone to spectral leakage or ripples than established digital filters? Does the method also suffer from phase delay? Is there a way to visualize the impulse response of this filtering method for comparison?
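For concreteness, the fft/ifft method being asked about can be sketched as follows (in Python/NumPy rather than MATLAB; the function name `fft_bandpass` and the test signal are illustrative, not any library's API). Because the binary mask is real and symmetric, this filter is zero-phase by construction, and the inverse FFT of the mask itself is the method's effective impulse response, which is one way to visualize it:

```python
import numpy as np

def fft_bandpass(x, fs, f_lo, f_hi):
    """Band-pass x by zeroing FFT bins outside [f_lo, f_hi] Hz.

    This is the 'brick-wall' frequency-domain method from the question;
    the name and signature are illustrative, not a library API.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # discard unwanted bins
    return np.fft.irfft(X, n=len(x))

fs = 1000.0                                    # sample rate in Hz
t = np.arange(2000) / fs                       # 2 s of data
# 10 Hz tone (in band) plus 100 Hz tone (out of band)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 100 * t)

y = fft_bandpass(x, fs, 5.0, 20.0)             # keeps 10 Hz, removes 100 Hz

# The effective impulse response is the inverse FFT of the mask itself:
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
mask = ((freqs >= 5.0) & (freqs <= 20.0)).astype(float)
h = np.fft.irfft(mask, n=len(x))               # sinc-like, spans the whole record
```

Plotting np.fft.fftshift(h) shows a sinc-like response as long as the record itself, which is where the brick-wall method's time-domain ringing comes from.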

asked May 10, 2014 at 0:00 by Kouichi C. Nakamura

\$\begingroup\$ Using an FFT to filter a signal is absolutely valid, but there are a few things to look out for. See this similar question/answer for more info: stackoverflow.com/a/2949227/565542 \$\endgroup\$

Commented May 10, 2014 at 0:57

\$\begingroup\$ Questions like this might be more appropriate for the Signal Processing site. \$\endgroup\$

Commented May 10, 2014 at 13:00

\$\begingroup\$ I think that The Scientist and Engineer's Guide to Digital Signal Processing by Steven W. Smith has an answer. If I remember correctly, he says that sample-in, sample-out processing is much more efficient with a digital filter, but beyond a minimum filter length (64 samples or more, I do not remember exactly) it becomes more efficient to involve the FFT, where you can have a brick-wall filter in the frequency domain. Efficiency is not the only issue, though: a brick-wall filter implies that you need samples from the future, which is impossible in real time. \$\endgroup\$

Commented May 10, 2014 at 15:18

\$\begingroup\$ Thanks, I was looking for something like the Signal Processing site, but couldn't find it. \$\endgroup\$

Commented May 10, 2014 at 20:52

3 Answers

\$\begingroup\$

The main reason that frequency-domain processing isn't done directly is the latency involved. In order to do, say, an FFT on a signal, you have to first record the entire time-domain signal, beginning to end, before you can convert it to the frequency domain. Then you can do your processing, convert the result back to the time domain and play it. Even if the two conversions and the signal processing in between are effectively instantaneous, you don't get the first output sample until the last input sample has been recorded. But if you're willing to put up with this, you can get "ideal" frequency-domain results. For example, a 3-minute song recorded at 44100 samples/second contains about 8 million samples, so you'd be doing roughly 8-million-point transforms, but that's not a big deal on a modern CPU.

You might be tempted to break the time-domain signal into smaller, fixed-size blocks of data and process them individually, reducing the latency to the length of one block. However, this doesn't work if the blocks are processed independently, because of "edge effects": the samples at either end of a given block won't line up properly with the corresponding samples of the adjacent blocks, creating objectionable artifacts in the results. (Overlap-add and overlap-save methods exist precisely to manage these block edges, at the cost of extra computation and bookkeeping.)

This happens because of assumptions that are implicit in the process that converts between time domain and frequency domain (and vice-versa). For example, the FFT and IFFT "assume" that the data is cyclic; in other words, that blocks of identical time-domain data come before and after the block being processed. Since this is in general not true, you get the artifacts.
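The cyclic assumption is easy to demonstrate (a Python/NumPy sketch; the 64-sample block and the 8-tap moving average are arbitrary choices, not from the answer): multiplying two FFTs implements circular convolution, so the filter's tail wraps around from the end of the block to its beginning.

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(64)                  # one 64-sample block of data
h = np.ones(8) / 8.0                     # 8-tap moving-average filter

# Frequency-domain product over the block length: circular convolution.
y_circ = np.fft.ifft(np.fft.fft(x, 64) * np.fft.fft(h, 64)).real

# True linear convolution, truncated to the block length.
y_lin = np.convolve(x, h)[:64]

# Once the filter has run in, the two agree, but the first len(h) - 1
# samples of the circular result are contaminated by data that wrapped
# around from the end of the block.
print(np.allclose(y_circ[7:], y_lin[7:]))   # True
print(np.allclose(y_circ[:7], y_lin[:7]))   # False
```

Those first few contaminated samples are exactly the "edge effects" that appear at every block boundary when blocks are filtered independently.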

Time-domain processing may have its own issues, but the ability to control the latency and the absence of periodic artifacts make it a clear winner in most real-time signal-processing applications.
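A causal time-domain filter, by contrast, can emit one output sample per input sample as the data arrives, so its latency is just the filter's group delay. A minimal sketch (Python/NumPy; the 4-tap moving average and the function name `stream_filter` are illustrative):

```python
import numpy as np

h = np.array([0.25, 0.25, 0.25, 0.25])   # illustrative 4-tap FIR filter

def stream_filter(samples, taps):
    """Process samples one at a time using only current and past inputs."""
    state = np.zeros(len(taps))          # delay line of past inputs
    out = []
    for s in samples:
        state = np.roll(state, 1)        # shift the delay line
        state[0] = s                     # newest sample enters
        out.append(np.dot(taps, state))  # one output per input sample
    return np.array(out)

x = np.random.randn(32)
y_stream = stream_filter(x, h)           # sample in, sample out
y_batch = np.convolve(x, h)[:32]         # same result, computed offline
print(np.allclose(y_stream, y_batch))    # True
```

No future samples are needed, which is precisely what a brick-wall frequency-domain filter cannot offer in real time.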

(This is an expanded version of my previous answer.)