
General Conditions

This section summarizes and extends the above derivations in a more formal manner (following portions of chapter 4 of $ \cite{Noble}$). In particular, we establish that the sum of projections of $ x\in{\bf C}^N$ onto $ N$ vectors $ \sv_k\in{\bf C}^N$ will give back the original vector $ x$ whenever the set $ \{\sv_k\}_0^{N-1}$ is an orthogonal basis for $ {\bf C}^N$.



Definition: A set of vectors is said to form a vector space if, given any two members $ x$ and $ y$ from the set, the vectors $ x+y$ and $ cx$ are also in the set, where $ c\in{\bf C}$ is any scalar.



Definition: The set of all $ N$-dimensional complex vectors is denoted $ {\bf C}^N$. That is, $ {\bf C}^N$ consists of all vectors $ \underline{x}=
(x_0,\ldots,x_{N-1})$ defined as a list of $ N$ complex numbers $ x_i\in{\bf C}$.



Theorem: $ {\bf C}^N$ is a vector space under elementwise addition and multiplication by complex scalars.



Proof: This is a special case of the following more general theorem.



Theorem: Let $ M\le N$ be an integer greater than 0. Then the set of all linear combinations of $ M$ vectors from $ {\bf C}^N$ forms a vector space under elementwise addition and multiplication by complex scalars.



Proof: Let the original set of $ M$ vectors be denoted $ \sv_0,\ldots,\sv_{M-1}$. Form

$\displaystyle x_0 = \alpha_0 \sv_0 + \cdots + \alpha_{M-1}\sv_{M-1}
$

as a particular linear combination of $ \{\sv_i\}_{i=0}^{M-1}$. Then

$\displaystyle cx_0 = c\alpha_0 \sv_0 + \cdots + c\alpha_{M-1}\sv_{M-1}
$

is also a linear combination of $ \{\sv_i\}_{i=0}^{M-1}$, since complex numbers are closed under multiplication ( $ c\alpha_i\in{\bf C}$ for each $ i$). Now, given any second linear combination of $ \{\sv_i\}_{i=0}^{M-1}$,

$\displaystyle x_1 = \beta_0 \sv_0 + \cdots + \beta_{M-1}\sv_{M-1},
$

the sum is

\begin{eqnarray*}
x_0 + x_1 &=& (\alpha_0 \sv_0 + \cdots + \alpha_{M-1}\sv_{M-1})
+ (\beta_0 \sv_0 + \cdots + \beta_{M-1}\sv_{M-1})\\
&=& (\alpha_0 + \beta_0) \sv_0 + \cdots + (\alpha_{M-1} + \beta_{M-1})\sv_{M-1},
\end{eqnarray*}

which is yet another linear combination of the original vectors (since complex numbers are closed under addition). Since we have shown that scalar multiples and vector sums of linear combinations of the original $ M$ vectors from $ {\bf C}^N$ are also linear combinations of those same original $ M$ vectors from $ {\bf C}^N$, we have that the defining properties of a vector space are satisfied. $ \Box$

Note that the closure of vector addition and scalar multiplication are ``inherited'' from the closure of complex numbers under addition and multiplication.



Corollary: The set of all linear combinations of $ M$ real vectors $ x\in{\bf R}^N$, using real scalars $ \alpha_i\in{\bf R}$, forms a vector space.



Definition: The set of all linear combinations of a set of $ M$ linearly independent complex vectors from $ {\bf C}^N$, using complex scalars, is called a complex vector space of dimension $ M$.



Definition: The set of all linear combinations of a set of $ M$ linearly independent real vectors from $ {\bf R}^N$, using real scalars, is called a real vector space of dimension $ M$.



Definition: If a vector space consists of the set of all linear combinations of a finite set of vectors $ \sv_0,\ldots,\sv_{M-1}$, then those vectors are said to span the space.



Example: The coordinate vectors in $ {\bf C}^N$ span $ {\bf C}^N$ since every vector $ x\in{\bf C}^N$ can be expressed as a linear combination of the coordinate vectors as

$\displaystyle x= x_0 \underline{e}_0 + x_1 \underline{e}_1 + \cdots + x_{N-1}\underline{e}_{N-1}
$

where $ x_i\in{\bf C}$, and

\begin{eqnarray*}
\underline{e}_0 &=& (1,0,0,\ldots,0),\\
\underline{e}_1 &=& (0,1,0,\ldots,0),\\
&\vdots&\\
\underline{e}_{N-1} &=& (0,\ldots,0,1).
\end{eqnarray*}
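As a quick numerical illustration of this example (a sketch using NumPy; the particular vector values are arbitrary), we can verify that any $ x\in{\bf C}^N$ is recovered as the linear combination of coordinate vectors weighted by its own components:

```python
import numpy as np

N = 4
x = np.array([1 + 2j, 3 - 1j, 0.5j, -2 + 0j])  # an arbitrary vector in C^N

# Natural (coordinate) basis vectors e_0, ..., e_{N-1}: row n of the identity
e = np.eye(N, dtype=complex)

# Reconstruct x as x_0*e_0 + x_1*e_1 + ... + x_{N-1}*e_{N-1}
x_rebuilt = sum(x[n] * e[n] for n in range(N))

assert np.allclose(x, x_rebuilt)
```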



Definition: The vector space spanned by a set of $ M<N$ vectors from $ {\bf C}^N$ is called an $ M$-dimensional subspace of $ {\bf C}^N$.



Definition: A vector $ \underline{s}\in{\bf C}^N$ is said to be linearly dependent on a set of $ M\le N$ vectors $ \sv_i\in{\bf C}^N$, $ i=0,\ldots,M-1$, if $ \underline{s}$ can be expressed as a linear combination of those $ M$ vectors.

Thus, $ \underline{s}$ is linearly dependent on $ \{\sv_i\}_{i=0}^{M-1}$ if there exist scalars $ \{\alpha_i\}_{i=0}^{M-1}$ such that $ \underline{s}= \alpha_0\sv_0 +
\alpha_1\sv_1 + \cdots + \alpha_{M-1}\sv_{M-1}$. Note that the zero vector is linearly dependent on every collection of vectors.
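This definition can be tested numerically: $ \underline{s}$ lies in the span of $ \{\sv_i\}$ exactly when appending $ \underline{s}$ as an extra column does not increase the rank of the matrix whose columns are the $ \sv_i$. A small sketch (using NumPy; the example vectors are arbitrary):

```python
import numpy as np

def is_linearly_dependent(s, S):
    """True if s is a linear combination of the columns of S."""
    rank_S = np.linalg.matrix_rank(S)
    rank_aug = np.linalg.matrix_rank(np.column_stack([S, s]))
    return rank_aug == rank_S

# Two vectors in R^3, stored as columns
S = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])

assert is_linearly_dependent(np.array([2., 3., 0.]), S)      # in the span
assert not is_linearly_dependent(np.array([0., 0., 1.]), S)  # not in the span
assert is_linearly_dependent(np.zeros(3), S)  # zero vector: always dependent
```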



Theorem: (i) If $ \sv_0,\ldots,\sv_{M-1}$ span a vector space, and if one of them, say $ \sv_m$, is linearly dependent on the others, then the same vector space is spanned by the set obtained by omitting $ \sv_m$ from the original set. (ii) If $ \sv_0,\ldots,\sv_{M-1}$ span a vector space, we can always select from these a linearly independent set that spans the same space.



Proof: Any $ x$ in the space can be represented as a linear combination of the vectors $ \sv_0,\ldots,\sv_{M-1}$. By expressing $ \sv_m$ as a linear combination of the other vectors in the set, the linear combination for $ x$ becomes a linear combination of vectors other than $ \sv_m$. Thus, $ \sv_m$ can be eliminated from the set, proving (i). To prove (ii), we can define a procedure for forming the required subset of the original vectors: First, assign $ \sv_0$ to the set. Next, check to see if $ \sv_0$ and $ \sv_1$ are linearly dependent. If so (i.e., $ \sv_1$ is a scalar times $ \sv_0$), then discard $ \sv_1$; otherwise assign it also to the new set. Next, check to see if $ \sv_2$ is linearly dependent on the vectors in the new set. If it is (i.e., $ \sv_2$ is some linear combination of $ \sv_0$ and $ \sv_1$) then discard it; otherwise assign it also to the new set. When this procedure terminates after processing $ \sv_{M-1}$, the new set will contain only linearly independent vectors that span the original space. $ \Box$
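The greedy sweep in the proof of (ii) translates directly into code. The following sketch (using NumPy, with a rank test standing in for the "linearly dependent on the kept set" check) keeps each vector only if it is not a linear combination of those already kept:

```python
import numpy as np

def independent_spanning_subset(vectors):
    """Greedy procedure from the proof: keep each vector only if it
    increases the rank of the kept set (i.e., it is independent)."""
    kept = []
    rank = 0
    for v in vectors:
        candidate = np.column_stack(kept + [v])
        r = np.linalg.matrix_rank(candidate)
        if r > rank:          # v is independent of the kept vectors
            kept.append(v)
            rank = r
    return kept

vs = [np.array([1., 0., 0.]),
      np.array([2., 0., 0.]),   # scalar multiple of the first: discarded
      np.array([0., 1., 0.]),
      np.array([1., 1., 0.])]   # combination of the kept pair: discarded
basis = independent_spanning_subset(vs)
assert len(basis) == 2
```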



Definition: A set of linearly independent vectors which spans a vector space is called a basis for that vector space.



Definition: The set of coordinate vectors in $ {\bf C}^N$ is called the natural basis for $ {\bf C}^N$, where the $ n$th basis vector is

$\displaystyle \underline{e}_n = [\;0\;\;\cdots\;\;0\;\underbrace{1}_{\mbox{$n$th}}\;\;0\;\;\cdots\;\;0].
$



Theorem: The linear combination expressing a vector in terms of basis vectors for a vector space is unique.



Proof: Suppose a vector $ x\in{\bf C}^N$ can be expressed in two different ways as a linear combination of basis vectors $ \sv_0,\ldots,\sv_{N-1}$:

\begin{eqnarray*}
x&=& \alpha_0 \sv_0 + \cdots + \alpha_{N-1}\sv_{N-1} \\
&=& \beta_0 \sv_0 + \cdots + \beta_{N-1}\sv_{N-1}
\end{eqnarray*}

where $ \alpha_i\neq\beta_i$ for at least one value of $ i\in[0,N-1]$. Subtracting the two representations gives

$\displaystyle \underline{0}= (\alpha_0 - \beta_0)\sv_0 + \cdots +
(\alpha_{N-1} - \beta_{N-1})\sv_{N-1}.
$

Since the vectors are linearly independent, it is not possible to cancel the nonzero vector $ (\alpha_i-\beta_i)\sv_i$ using some linear combination of the other vectors in the sum. Hence, $ \alpha_i=\beta_i$ for all $ i=0,1,2,\ldots,N-1$, contradicting the assumption that the two representations differ. $ \Box$

Note that while the linear combination relative to a particular basis is unique, the choice of basis vectors is not. For example, given any basis set in $ {\bf C}^N$, a new basis can be formed by rotating all vectors in $ {\bf C}^N$ by the same angle. In this way, an infinite number of basis sets can be generated.
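Numerically, finding the unique coordinates of a vector relative to a given basis amounts to solving a linear system. A sketch (using NumPy; a generic random matrix is invertible with probability one, so its columns serve as a basis here):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
# Columns of B form a basis for C^N (a random complex matrix is
# full rank with probability one -- an assumption of this sketch)
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# The unique coordinates alpha of x relative to the basis B
alpha = np.linalg.solve(B, x)

assert np.allclose(B @ alpha, x)  # x = alpha_0 s_0 + ... + alpha_{N-1} s_{N-1}
```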

As we will soon show, the DFT can be viewed as a change of coordinates from coordinates relative to the natural basis in $ {\bf C}^N$, $ \{\underline{e}_n\}_{n=0}^{N-1}$, to coordinates relative to the sinusoidal basis for $ {\bf C}^N$, $ \{\sv_k\}_{k=0}^{N-1}$, where $ \sv_k(n)=
e^{j\omega_k t_n}$. The sinusoidal basis set for $ {\bf C}^N$ consists of length $ N$ sampled complex sinusoids at frequencies $ \omega_k=2\pi k f_s/N,
k=0,1,2,\ldots,N-1$. Any scaling of these vectors in $ {\bf C}^N$ by complex scale factors could also be chosen as the sinusoidal basis (i.e., any nonzero amplitude and any phase will do). However, for simplicity, we will only use unit-amplitude, zero-phase complex sinusoids as the Fourier ``frequency-domain'' basis set. To summarize this paragraph, the time-domain samples of a signal are its coordinates relative to the natural basis for $ {\bf C}^N$, while its spectral coefficients are the coordinates of the signal relative to the sinusoidal basis for $ {\bf C}^N$.



Theorem: Any two bases of a vector space contain the same number of vectors.



Proof: Left as an exercise (or see [46]).



Definition: The number of vectors in a basis for a particular space is called the dimension of the space. If the dimension is $ N$, the space is said to be an $ N$-dimensional space, or $ N$-space.

In this book, we will only consider finite-dimensional vector spaces in any detail. However, the discrete-time Fourier transform (DTFT) and Fourier transform (FT) both require infinite-dimensional basis sets, because there is an infinite number of points in both the time and frequency domains. (See Appendix B for details regarding the FT and DTFT.)




``Mathematics of the Discrete Fourier Transform (DFT), with Music and Audio Applications'', by Julius O. Smith III, W3K Publishing, 2003, ISBN 0-9745607-0-7.
Copyright © 2007-02-02 by Julius O. Smith III
Center for Computer Research in Music and Acoustics (CCRMA),   Stanford University