Signal Processing - Fourier Series and Fourier Transform
Signal processing refers to processing an input (i/p) signal to generate a transformed output (o/p) signal. The black box that does this transformation may be an analog or a digital circuit. The i/p and o/p signals may be continuous or discrete.
In signal theory, we usually refer to signals as varying in time (i.e. signal F(t)). However, a signal can be a function of any variable "x" (i.e. signal F(x)). It so happens that for most signals of practical interest, x is time "t". Although "t" is usually real, it can be complex too. Thus the signal F(x) can be real or complex, with x itself being real or complex. As you can guess, in real-world scenarios, the signal F(x) is real.
Joseph Fourier showed that any periodic continuous function can be represented as an infinite sum of sine and cosine waveforms at discrete frequencies. This series came to be known as the Fourier series (FS). For non-periodic continuous signals, instead of discrete freq components of sine and cosine, we get a continuous freq spectrum, and this came to be known as the Fourier Transform (FT). This was a very important discovery of the early 19th century; Fourier introduced it for the purpose of solving the heat equation in a metal plate.
These are some good links for learning Fourier:
http://www.thefouriertransform.com/ => Superb explanation of Fourier
A few important formulas before we get into FS:
1. Angular freq in radians (ω) = 2π*F = 2π/T, where F = freq in hertz and T = time period in seconds. Ex: a linear freq of 1 Hz = 1 rotation/sec = 2π radians/sec ≈ 6.28 radians/sec. This is a very important formula to remember, as some textbooks compute Fourier in F while others do it in ω. A constant factor of 2π appears depending on whether you are working in Hz or radians/sec.
2. Euler's formula: e^(ix) = cos(x) + i sin(x) => This is regarded as one of the most remarkable formulas in mathematics, as it establishes the relationship b/w trigonometric functions and complex exponential functions. "i" is defined as the square root of -1 (i = √-1). In maths we use "i", but in electrical engineering we use "j". Euler's formula can be proved easily by writing the Taylor expansion of the exponential and then separating it into 2 series, one series corresponding to cos(x) and the other to i·sin(x). But it was Euler who saw it first.
3. Euler's identity is Euler's formula with x=π: e^(iπ) = cos(π) + i sin(π) = -1. For x=0 or 2π, e^(0) = e^(i2π) = cos(2π) + i sin(2π) = 1. From this it follows that e^(jω0T0) = e^(j2π) = 1, since ω0T0 = 2π.
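As a quick numerical sanity check of Euler's formula and identity, here is a minimal sketch (assuming Python/numpy; the angle 0.7 is just an illustrative value, not from the original notes):

import numpy as np

x = 0.7                                  # any real angle (illustrative value)
lhs = np.exp(1j * x)                     # e^(ix)
rhs = np.cos(x) + 1j * np.sin(x)         # cos(x) + i*sin(x)
print(np.isclose(lhs, rhs))              # True => Euler's formula holds
print(np.exp(1j * np.pi))                # ~ -1 + 1.2e-16j => Euler's identity, up to rounding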
4. Complex number: Any complex number c = a+ib can be represented in magnitude and phase form: a+ib = mag·e^(iθ), where mag = √(a² + b²) and tan(θ) = b/a. So, c = |c|·e^(i∠c). This is how we'll represent complex numbers in signal theory.
5. Impulse function or Dirac delta function: This is an idealized function that is 0 everywhere except at t=0, where it is infinitely tall, while its total area stays 1. It can be thought of as the limit of a Gaussian function whose width σ shrinks to 0, i.e. lim σ->0 ∫ g_σ(t) dt = 1, where g_σ(t) is a Gaussian of width σ. The impulse function is used extensively in signal theory, since mathematically it allows conversion from the continuous domain to the discrete domain. NOTE: each g_σ(t) is a continuous function, and the integral is the area under that function, which is kept equal to 1. As σ->0, the value at t=0 approaches ∞, since the base along the time axis shrinks to 0 while the area is held at 1. In discrete form, the impulse function is defined as δ[n], which is 1 at n=0 and 0 everywhere else.
The impulse function δ(x) is defined by ∫ δ(x) dx = 1 (over any interval containing x=0), with δ(x) = 0 for all x ≠ 0.
Just as we define the impulse function at x=0, we can define it at any x=x0: ∫ δ(x-x0) dx = 1. Here δ(x-x0) is nonzero only at x=x0 and 0 everywhere else. So, the Gaussian function is just shifted from being centered at x=0 to being centered at x=x0.
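A small sketch (assuming Python/numpy, not part of the original notes) of the limiting picture above: as the Gaussian's width σ shrinks, its area stays ~1 while its peak value at t=0 blows up.

import numpy as np

t = np.linspace(-1, 1, 200001)
# Gaussians of shrinking width sigma: the area under each stays ~1 while the peak value
# at t=0 blows up, which is the limiting picture of the impulse (Dirac delta) above.
for sigma in [0.1, 0.01, 0.001]:
    g = np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    print(f"sigma={sigma}: area={np.trapz(g, t):.4f}, peak value={g.max():.1f}")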
A few sine/cosine formulas:
0. cos(x) = (e^(ix) + e^(-ix))/2 ; sin(x) = (e^(ix) - e^(-ix))/(2i) => derived by using Euler's formula above
1. sin(x) = cos(x - π/2)
2. ∫ sin(2πnt)·sin(2πmt) dt = 0 for n≠m, and ∫ sin(2πnt)·sin(2πmt) dt = 1/2 for n=m≠0 (for n=m=0 it is 0, since sin(0)=0). Here n, m are both integers, and the limits of t are from 0 to 1.
3. ∫ cos(2πnt)·cos(2πmt) dt = 0 for n≠m, and ∫ cos(2πnt)·cos(2πmt) dt = 1/2 for n=m≠0 (for n=m=0 it is 1). Here n, m are both integers, and the limits of t are from 0 to 1.
4. ∫ cos(2πnt)·sin(2πmt) dt = 0 for any n, m. Here n, m are both integers, and the limits of t are from 0 to 1. These 3 orthogonality relations are used in the proof of the FS later; a quick numerical check follows below.
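A minimal numerical check of these orthogonality relations (assuming Python/numpy; the values n=3, m=5 are just illustrative):

import numpy as np

t = np.linspace(0, 1, 100001)            # one period, t from 0 to 1

def integrate(f):
    return np.trapz(f, t)                # numerical stand-in for the integrals above

n, m = 3, 5                              # illustrative integer frequencies
print(integrate(np.sin(2*np.pi*n*t) * np.sin(2*np.pi*m*t)))   # ~0   (n != m)
print(integrate(np.sin(2*np.pi*n*t) * np.sin(2*np.pi*n*t)))   # ~0.5 (n == m)
print(integrate(np.cos(2*np.pi*n*t) * np.cos(2*np.pi*m*t)))   # ~0   (n != m)
print(integrate(np.cos(2*np.pi*n*t) * np.sin(2*np.pi*m*t)))   # ~0   (any n, m)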
Continuous time (CT) signals: here F(x) is a continuous function over all values of x, where -∞ < x < ∞. The Fourier series or Fourier transform of a CT signal is always aperiodic in the freq domain.
1. Continuous Time Fourier Series (CTFS): For continuous and periodic signals in the time domain, we can compute the CTFS. The CTFS is discrete and aperiodic in the freq domain.
Any continuous periodic signal can be represented as an infinite summation of cosine and sine functions. This is called the FS representation of the function. The CTFS is the fundamental eqn, and all other series/transforms for continuous/discrete waveforms are derived from the CTFS. The CTFS is what Fourier derived, and it is surprisingly easy to prove.
x(t) = a0 + ∑ ak·cos(ω0kt) + ∑ bk·sin(ω0kt), where k = 1 to ∞. Here freq ω0 = 2π/T0, where T0 is the period of the periodic signal (and f0 = 1/T0 is its fundamental freq). Here a0 is the DC freq component, while ω0 is the fundamental or 1st harmonic, 2ω0 the second harmonic, and so on. The coefficients a0, ak, bk are called Fourier coefficients, and are computed as follows (each integral is taken over 1 period of the periodic signal):
a0 = 1/T0 ∫ x(t) dt
ak = 2/T0 ∫ x(t) cos(2πkt/T0) dt, for k ≥ 1
bk = 2/T0 ∫ x(t) sin(2πkt/T0) dt, for k ≥ 1
NOTE: This series contains only real terms, and is called the real FS. k above is +ve and goes from 1 to ∞; together with the constant a0, we can think of the coefficients as indexed by k from 0 to ∞, since for k=0, cos(2πkt/T0) = 1 while sin(2πkt/T0) = 0, so the k=0 term is just the constant a0 (with b0 = 0). Note that a0 carries a 1/T0 factor while ak, bk (k ≥ 1) carry 2/T0; this is because over one period ∫ cos²(kω0t) dt = T0/2 while ∫ 1 dt = T0 (cf. the orthogonality relations above). No -ve k is allowed here. From now on, we'll refer to the FS coefficients as {ak, bk} with k from 0 to ∞. The motivation for a0 is to provide the DC offset, since all sine/cosine functions are centered around the x-axis at y=0 (i.e. no DC offset; they average to 0 over 1 cycle). In order to represent any offset from the x-axis, we need a constant, which is what a0 provides.
Also, x(t+T0) = x(t) holds when x(t) is computed through the FS, since ω0T0 = 2π, so every term repeats after T0.
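As a hedged illustration of the coefficient formulas above, here is a short Python/numpy sketch that integrates them numerically for an assumed example signal (a ±1 square wave with period T0 = 1, not from the original notes); the odd harmonics should come out as bk ≈ 4/(kπ).

import numpy as np

# Assumed example: a +/-1 square wave with period T0 = 1 (x = +1 for 0 <= t < 0.5, -1 after).
T0 = 1.0
t = np.linspace(0, T0, 100001)
x = np.where(t < T0 / 2, 1.0, -1.0)

a0 = np.trapz(x, t) / T0                                     # DC term (should be ~0)
for k in range(1, 6):
    ak = 2 / T0 * np.trapz(x * np.cos(2 * np.pi * k * t / T0), t)
    bk = 2 / T0 * np.trapz(x * np.sin(2 * np.pi * k * t / T0), t)
    print(f"k={k}: ak={ak:+.4f}, bk={bk:+.4f}")              # bk ~ 4/(k*pi) for odd k, ~0 for even k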
Proof: We can prove the FS above very easily. The proof is presented very nicely here: http://faculty.olin.edu/bstorey/Notes/Fourier.pdf
Let's assume that the FS is true, and that we can indeed represent any periodic function as a sum of sines and cosines:
x(t) = a0 + ∑ ak·cos(ω0kt) + ∑ bk·sin(ω0kt)
Multiply both sides of the eqn by sin(2πmt/T0) and integrate over one period: by the orthogonality relations above, every term on the right vanishes except the k=m sine term, which gives bm. Similarly, multiplying by cos(2πmt/T0) and integrating gives am, and integrating directly gives a0. Since we can always solve for coefficients that satisfy the eqn, the representation is consistent.
We can make the above series simpler. We put ak = dk·cos(θk), bk = dk·sin(θk). We can do this since any 2 numbers on the x and y axes can be represented as a point on a circle with a certain radius and angle.
x(t) = ∑ dk·cos(θk)·cos(ω0kt) + ∑ dk·sin(θk)·sin(ω0kt) = ∑ dk·cos(ω0kt - θk), where dk = √(ak² + bk²) and tan(θk) = bk/ak, with k = 0 to ∞.
The above eqn is easier to work with. It shows that any periodic function can be represented as a cosine-only series with a different magnitude and phase offset for each freq component. This seems logical, since a sine can be converted to a cosine with a 90-degree phase offset. Instead of representing the series with cosines only, we could equally represent it with sines only (it's just a phase shift of 90 degrees). Though the above series, whether in sine or cosine, is fine to work with in maths, we prefer to work with complex exponentials in physics, as that makes the algebra easier. So, we try to get it into Euler's form.
We use the formula cos(θ) = (e^(jθ) + e^(-jθ))/2 to convert the real cosines into complex exponentials. We also note that the index k can now take both +ve and -ve values. Substituting this, we get the form below:
x(t) = ∑ Ck·e^(jω0kt) ( -∞ < k < ∞ ). Here Ck is a complex number: for k > 0, Ck = dk/2·e^(-jθk) = 1/2·{dk·cos(θk) - j·dk·sin(θk)} = (ak - j·bk)/2, with C-k = Ck* (complex conjugate) and C0 = a0.
Ck = 1/T0 ∫ x(t) e^(-jω0kt) dt, where the integral is taken over 1 period of the periodic signal.
The magnitude of this complex number Ck is dk/2 = √(ak² + bk²)/2 (for k ≠ 0), while its phase is -θk, where tan(θk) = bk/ak.
NOTE: This series contains complex terms, and is called the complex FS. Here k takes both -ve and +ve values, so we have a freq spectrum on both sides of the x-axis (while the real FS terms were only on the +ve x-axis). This is why a 1/2 multiplier factor appears when going from the real FS to the complex FS: each cosine is split into two exponentials of half the magnitude, one on each side. Also, the FS coefficients have a -ve sign in the exponent, while the inverse FS (i.e. the reconstruction of the original signal) has a +ve sign in the exponent. This is going to be true in all the series and transforms we'll see later (it could have been the other way around too; which sign goes with the forward direction is just a convention, as long as the forward and inverse use opposite signs).
From now on, we will work with the complex FS only. It's easy to represent, as we can draw the magnitude and phase plots of this one complex number Ck (instead of having 2 real numbers ak and bk). The plots run over all values of k, so we see them on both sides of the x-axis. For a real x(t), the magnitude plot of Ck is symmetric around k=0, while the phase plot is the -ve of itself on the two sides of the axis (i.e. C-k = Ck*). Note that even though Ck is complex, the Fourier series sum still yields a real function.
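A short sketch (Python/numpy, same assumed ±1 square wave as before) that computes Ck = 1/T0 ∫ x(t) e^(-jω0kt) dt numerically and shows the symmetric magnitude and anti-symmetric phase described above:

import numpy as np

# Same assumed +/-1 square wave (T0 = 1); Ck = (1/T0) * integral of x(t)*e^(-j*w0*k*t) over one period.
T0 = 1.0
t = np.linspace(0, T0, 100001)
x = np.where(t < T0 / 2, 1.0, -1.0)
w0 = 2 * np.pi / T0

for k in (-3, -1, 1, 3):                      # odd harmonics (even ones and DC are ~0 for this wave)
    Ck = np.trapz(x * np.exp(-1j * w0 * k * t), t) / T0
    print(f"k={k:+d}: |Ck|={abs(Ck):.4f}, phase={np.angle(Ck):+.4f} rad")
# |Ck| comes out symmetric in k, and the phase for -k is the negative of the phase for +k (C-k = Ck*).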
We can calculate FS coefficients for a variety of periodic signals, and observe many interesting things about sinusoidal wave, square wave, triangular wave, etc. We'll discuss those later.
2. Continuous Time Fourier Transform (CTFT): For continuous and aperiodic signals in the time domain, we can compute the CTFT. The CTFT is continuous and aperiodic in the freq domain. A non-periodic signal can be considered a limiting case of a periodic signal, where the time period T0 is expanded to infinity and we consider only 1 cycle of the periodic signal.
We can derive the FT for aperiodic signals from the FS by letting T0 -> ∞: the discrete freqs kω0 crowd together and become the continuous variable ω, the spacing ω0 = 2π/T0 becomes dω, and the infinite summation becomes an integral. On deriving, we come up with the eqns below. NOTE: the factor 1/T0 goes away and gets replaced by 1/(2π), since 1/T0 = ω0/(2π) -> dω/(2π).
x(t) = 1/(2π) ∫ X(ω) e^(jωt) dω, where X(ω) is the CTFT of x(t). The integral is taken over all of -∞ < ω < ∞. The only difference compared to the CTFS is that ω0k is replaced by ω, and the infinite summation is replaced by an integral. This is also called the inverse CTFT.
X(ω) = ∫ x(t) e^(-jωt) dt, where the integral is taken over the full time of the aperiodic signal, -∞ < t < ∞. Note: this looks similar to Ck for periodic signals, except that the spectrum has now become continuous. Also, 1/T0 is absent, since that factor became the 1/(2π) outside the integral in the x(t) eqn above, so it is no longer part of the forward FT. This is the forward CTFT. We can also write X(ω) as X(jω), which emphasizes that the transform is evaluated at the imaginary argument jω (this ties in with the Laplace variable s = jω below).
So, to conclude, we see that CT periodic signals have discrete freqs at ±ω0, ±2ω0, ±3ω0 and so on, while CT aperiodic signals have a continuum of freqs over -∞ < ω < ∞. Another way to see it: the freq is kω0 for both periodic and aperiodic signals, but for periodic signals k can take integer values only (i.e. 0, 1, 2, 3, ...), while for aperiodic signals k can take all real values (i.e. 0, 0.001, 0.002, ...). The CTFT is a superset of the CTFS, as we can calculate the CTFT of both periodic and aperiodic signals; nothing prohibits taking the CTFT of a periodic signal. The CTFT of an aperiodic signal yields a continuous freq spectrum, as we saw above. However, for periodic signals we saw that we get discrete values Ck. Since the CTFT is derived from the CTFS, the CTFT should yield the same answer; yet the CTFT is continuous, and a finite-valued continuous spectrum cannot represent energy concentrated at isolated freqs. The Dirac delta function comes to the rescue here, since it lets a discrete freq component be represented as the limiting case of a narrowing Gaussian. So the CTFT of a periodic signal yields impulses (Dirac deltas) at the discrete freqs, and the transform can still be written as a function of the continuous variable ω. Thus the CTFT serves us well for all kinds of waveforms (both periodic and aperiodic). Going forward, we'll learn the CTFT only.
NOTE: we derived the FT in terms of ω, which gives us a factor of 1/(2π) in the inverse transform. If we instead derive the FT in terms of f, we get rid of the 2π factor:
X(2πf) = ∫ x(t) e^(-j(2πf)t) dt => X(f) = ∫ x(t) e^(-j2πft) dt => remains the same as X(ω), as the integral is done over "t" and not "f".
x(t) = 1/(2π) ∫ X(2πf) e^(j(2πf)t) d(2πf) = 1/(2π)·(2π) ∫ X(2πf) e^(j2πft) df = ∫ X(2πf) e^(j2πft) df => x(t) = ∫ X(f) e^(j2πft) df => this gets rid of the factor 2π compared to the ω form, as the integral is over "f".
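To illustrate the forward CTFT in terms of f, here is a hedged numerical sketch (Python/numpy) for an assumed rectangular pulse of height 1 and width 1 centered at t=0, whose analytical transform is the sinc function:

import numpy as np

# Assumed example: rectangular pulse of height 1 and width 1 centered at t=0.
# Forward CTFT in terms of f: X(f) = integral of x(t)*e^(-j*2*pi*f*t) dt; analytically X(f) = sinc(f).
t = np.linspace(-5, 5, 200001)
x = np.where(np.abs(t) <= 0.5, 1.0, 0.0)

for f in [0.0, 0.5, 1.0, 2.5]:
    Xf = np.trapz(x * np.exp(-1j * 2 * np.pi * f * t), t)
    print(f"f={f}: numeric={Xf.real:+.4f}, analytic sinc(f)={np.sinc(f):+.4f}")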
3. Laplace transform: The LT is defined for CT signals in exactly the same way as the FT is defined for CT signals. In the LT, the complex variable "s" has both real and imaginary parts (s = σ + jω). The FT happens to be a special case of the LT, with σ=0, so s=jω. The LT and FT are thus closely related. When dealing with signals, we use the FT, while when dealing with systems, we use the LT. The LT allows us to solve the ordinary differential eqns encountered in RLC ckts more easily, by transforming them from the time domain to the Laplace "s" domain (we can solve them equally easily by transforming to the Fourier "jω" domain). We eventually end up substituting s with jω, so the LT and FT end up the same.
LT[x(t)] = X(s) = ∫ x(t) e^(-st) dt, where the integral is taken over the full time of the signal, -∞ < t < ∞. This is called the two-sided LT. Since most signals are 0 for t<0, we usually evaluate the integral from t=0 to ∞; this is called the one-sided LT. Note: this looks similar to the FT eqn above, except that jω is replaced by s.
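A small symbolic sketch of the one-sided LT, assuming the sympy library is available and using a decaying exponential x(t) = e^(-2t) as the example (expected result 1/(s+2); this example is not from the original notes):

from sympy import symbols, exp, laplace_transform

t = symbols('t', positive=True)
s = symbols('s')

# One-sided LT of x(t) = e^(-2t) for t >= 0; expected result is 1/(s + 2).
X, a, cond = laplace_transform(exp(-2 * t), t, s)
print(X)                                  # 1/(s + 2)
# Substituting s = j*omega in X(s) gives the FT of this (causal) signal, as noted above.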
Discrete time (DT) signals: here the signal x[n] is a discrete function defined over integer values of n, with n = 0, ±1, ±2, ... The Fourier series or Fourier transform of a DT signal is always periodic in the freq domain. This is the main difference b/w CT and DT; otherwise DT signals behave similarly to CT signals.
1. Discrete Time Fourier Series (DTFS): For discrete and periodic signals in the time domain, we can compute the DTFS. The DTFS is discrete and periodic in the freq domain. It's similar to its CT counterpart, the CTFS.
We can derive DTFS from CTFS.
First let's derive x[n] from x(t):
For the CTFS, x(t) = ∑ Ck·e^(jω0kt) ( -∞ < k < ∞ ). However, now x(t) is not continuous but is known only at discrete values. Assume N samples of x(t), equally spaced at T0/N intervals, from t = 0·T0/N to t = (N-1)·T0/N. So the samples are x(0) to x((N-1)·T0/N).
x(t = n·T0/N) = ∑ Ck·e^(j(2π/T0)k(n·T0/N)) ( -∞ < k < ∞ ).
Represent the discrete samples of x(t) as x[n], where x[n] = x(n·T0/N), for n = 0 to N-1.
x[n] = ∑ Ck·e^(j(2π/N)kn), where the summation is over 1 period of Ck. (The summation went from infinitely many terms to one period because e^(j(2π/N)kn) is periodic in k with period N: the terms for k and k+N have identical exponentials, so their coefficients fold together into a single set of N coefficients.) Since Ck is periodic with period N (as we'll see below), we can choose k as 0 ≤ k ≤ N-1. It doesn't matter which values of k we take, as long as they cover one full cycle of N values. So, Ck has a total of N samples: C0 to CN-1 (i.e. 0 ≤ k ≤ N-1), or C1 to CN (i.e. 1 ≤ k ≤ N), or any other full period. This looks similar to the CTFS, except that ω0 is replaced by 2π/N; the discrete period is N samples, playing the role of T0.
Now, let's derive Ck for x[n] from Ck for x(t):
For the CTFS, Ck = 1/T0 ∫ x(t) e^(-jω0kt) dt. However, since x(t) is now known only at discrete values, we can't calculate the integral exactly. Instead, we approximate the integral using only those "t" for which the value of x(t) is known.
We hold the same value of x(t) over each interval of width T0/N. So, x(t) at any time within the n-th interval is taken as x(n·T0/N), where n = 0 to N-1.
Ck = 1/T0 ∫ x(t) e^(-jω0kt) dt ≈ 1/T0 ∑ x(n·T0/N) e^(-jω0k·n·T0/N) · (T0/N), where t = n·T0/N and the summation is over n = 0 to N-1. We approximate each piece of the integral as a rectangular box with height equal to the integrand at t = n·T0/N and width T0/N.
=> Ck = 1/N ∑ x(n·T0/N) e^(-j(2π/T0)k·n·T0/N) = 1/N ∑ x[n] e^(-j(2π/N)kn), where x[n] = x(n·T0/N), and n = 0 to N-1.
Ck = 1/N ∑ x[n] e^(-j(2π/N)kn), where the summation is over 1 period of the periodic input signal with period = N samples, i.e. 0 ≤ n ≤ N-1. Ck is a complex number and is periodic, with 0 ≤ k ≤ N-1 covering one period. That Ck is periodic can be deduced from the fact that replacing k by k+N multiplies each term by e^(-j2πn) = 1, so for example C0 = CN. This looks similar to the CTFS, except that it has a summation instead of an integral and Ck is periodic with period N.
Proof here: https://www.math.ubc.ca/~feldman/m267/dft.pdf
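As an illustration of the DTFS formula, here is a hedged Python/numpy sketch for an assumed one-period signal (a sampled cosine plus a DC offset); it also cross-checks against numpy's FFT, which uses the same exponent but without the 1/N factor:

import numpy as np

# Assumed one-period example signal: a sampled cosine plus a DC offset, period N = 8 samples.
# Ck = (1/N) * sum over n of x[n]*e^(-j*2*pi*k*n/N).
N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * n / N) + 0.5

Ck = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) / N for k in range(N)])
print(np.allclose(Ck, np.fft.fft(x) / N))     # True: numpy's FFT differs only by the 1/N factor
print(np.round(Ck.real, 3))                   # C0 = 0.5 (DC), C1 = C7 = 0.5 (the cosine), rest ~0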
2. Discrete Time Fourier Transform (DTFT): For discrete and aperiodic signals in the time domain, we can compute the DTFT. The DTFT is continuous and periodic in the freq domain. It's similar to its CT counterpart, the CTFT.
We can derive the DTFT from the DTFS, the same way we derived the CTFT from the CTFS (by letting the period N -> ∞, so the discrete freqs 2πk/N merge into a continuous ω and the summation over k becomes an integral).
x[n] = 1/(2π) ∫ X(ω) e^(jωn) dω, where X(ω) is the DTFT of x[n]. The integral is taken over 1 cycle of ω, i.e. λ < ω < λ+2π for any starting point λ. The only difference compared to the inverse CTFT is that t is replaced by n, and the integral over all values of ω is replaced by an integral over 1 cycle of ω. This is also called the inverse DTFT.
X(ω) = ∑ x[n] e^(-jωn), where the summation is taken over all samples of the aperiodic signal, -∞ < n < ∞. Note: this looks similar to the CTFT, except that t is replaced by n and the integral is replaced by a summation. Also, X(ω) is periodic now, since X(ω+2π) = X(ω). This is also called the forward DTFT.
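A short sketch (Python/numpy, with an assumed finite-length sequence so the sum is directly computable) that evaluates the DTFT on a grid of ω and shows the 2π periodicity:

import numpy as np

# Assumed finite-length sequence: x[n] for n = 0..4, zero elsewhere.
x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
n = np.arange(len(x))

# Evaluate X(w) = sum over n of x[n]*e^(-j*w*n) on a grid spanning two 2*pi cycles of w.
w = np.linspace(-np.pi, 3 * np.pi, 9)
X = np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])
print(np.round(np.abs(X), 3))                 # magnitudes repeat every 2*pi => the DTFT is periodic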
Since the same X(ω) notation is used for both the CTFT and the DTFT, it can cause confusion as to which FT it is representing (discrete or continuous). So, for the DTFT we write it as X(Ω) or X(e^(jω)), while for the CTFT we write it as X(ω) or X(jω).
CTFT: X(ω) or X(jω) = |X(jω)|·e^(j∠X(jω)).
DTFT: X(Ω) or X(e^(jω)) = |X(e^(jω))|·e^(j∠X(e^(jω))).
3. z-transform (ZT): The ZT is defined for DT signals in exactly the same way as the LT is defined for CT signals. It is the same as the DTFT, except that the complex variable "z" is of the form A·e^(jω). The DTFT is a special case of the ZT, where z is of the form e^(jω) (i.e. z with unit magnitude only, on the unit circle).
ZT{x[n]} = X(z) = ∑ x[n]·z^(-n), -∞ < n < ∞. This is the same as the DTFT when z = e^(jω).
The ZT is used to solve discrete difference eqns in the time domain. The concept is the same as that of the LT for solving continuous differential eqns in the time domain.
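A quick check (Python/numpy, same assumed finite-length sequence as above) that the ZT evaluated on the unit circle z = e^(jω) matches the DTFT at that ω:

import numpy as np

# Same assumed finite-length sequence; check that X(z) on the unit circle equals the DTFT.
x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
n = np.arange(len(x))
w = 0.7                                       # an arbitrary frequency in radians/sample

z = np.exp(1j * w)                            # point on the unit circle
Xz = np.sum(x * z ** (-n.astype(float)))      # X(z) = sum of x[n]*z^(-n)
Xdtft = np.sum(x * np.exp(-1j * w * n))       # DTFT at the same w
print(np.isclose(Xz, Xdtft))                  # True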
----
Discrete Fourier Transform (DFT):
This is a very popular topic in signal theory. When we talk about anything in DSP (digital signal processing), the DFT is usually what we are talking about. As we saw above, we can take the DTFT of discrete signals that are not periodic. Since in real life most signals are not periodic and are sampled, the DTFT is what we mostly use for discrete signals (instead of the DTFS). Computers work with discrete signals (samples of analog signals at discrete times) and process them to compute the DTFT of those samples. However, the DTFT we get above is itself continuous in ω, and that is a problem for machines that deal with discrete samples only. A computer can't generate the continuous DTFT waveform; it can only generate samples of the DTFT. We can take samples of the DTFT spaced so closely that they approximate the continuous DTFT curve. Those samples of the DTFT are called the DFT.
DTFT of x[n] = X(Ω) = ∑ x[n] e^(-jωn)
DFT of x[n] = Xk or X[k] = samples of X(Ω) at discrete freqs = X(Ω) evaluated at ω = 2πk/N, where 0 <= k <= N-1. Here we are taking N samples, X0 to XN-1.
But discrete freq components imply a periodic x[n], since only periodic signals have purely discrete freq components. As we saw above, the DTFS has discrete freq components, while the DTFT has a continuous freq spectrum. Basically, as a signal becomes aperiodic (i.e. its period goes to ∞), more and more freq components show up, until the spectrum becomes continuous.
So, if we try to reconstruct the original signal x[n] by taking the inverse DFT of Xk, we will not get the original x[n], but a periodically repeated version of it. As we take more and more samples Xk (larger N), the repetition period grows, and more of the original x[n] is recovered before it repeats (i.e. the period starts going to ∞). So, taking the DFT of x[n] is basically the same as taking the DTFS of a periodically extended x[n].
DTFS of x[n] = Ck = 1/N ∑ x[n] e^(-j(2π/N)kn)
DFT of x[n] = Xk or X[k] = 1/N ∑ x[n] e^(-j(2π/N)kn) = 1/N ∑ x[n] e^(-jωkn), where ωk = 2πk/N and 0 <= k <= N-1.
Thus the DFT of x[n] can be calculated in either of 2 ways:
1. Compute the DTFT of x[n], then take samples of the DTFT at the discrete freqs ωk = 2πk/N, where 0 <= k <= N-1.
2. Compute the DTFS of (the periodic extension of) x[n]. This gives the same result as 1 (taking DTFT samples) above, up to the 1/N scaling: the DTFT samples equal N·Ck. Whether the 1/N goes in the forward or the inverse transform is just a convention; here we keep it in the forward DFT, matching the DTFS.
Inverse DFT of Xk: x[n] = ∑ Xk·e^(jωkn), where ωk = 2πk/N, with the summation over 0 <= k <= N-1.
This is the same as the inverse DTFS: x[n] = ∑ Ck·e^(j(2π/N)kn), where 0 <= k <= N-1.
Thus from now on, we'll treat DFT the same as DTFS. So, when we say DFT, we mean DTFS, and not DTFT.
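Finally, a hedged Python/numpy sketch of the DFT convention used here (1/N in the forward transform, matching the DTFS), cross-checked against numpy's FFT (which omits the 1/N) and against the inverse-DFT reconstruction; the length-8 sequence is an assumed example:

import numpy as np

# Assumed length-8 sequence.  Forward DFT with the 1/N convention used here (matching the DTFS);
# numpy's FFT omits the 1/N, so Xk should equal np.fft.fft(x)/N.  The inverse DFT (no 1/N) then
# reconstructs x[n] exactly.
N = 8
n = np.arange(N)
x = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])

Xk = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) / N for k in range(N)])
print(np.allclose(Xk, np.fft.fft(x) / N))     # True

k = np.arange(N)
x_rec = np.array([np.sum(Xk * np.exp(2j * np.pi * k * m / N)) for m in range(N)])
print(np.allclose(x_rec, x))                  # True: inverse DFT recovers x[n]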