In practice, measurements are never perfect. Let $\hat{y} \triangleq y + e$
denote the measured output signal, where $e$ is a vector of
``measurement noise'' samples. Then we have
$$
\hat{y} = A h + e .
$$
By the orthogonality principle [38], the
least-squares estimate of $h$ is obtained by orthogonally projecting
$\hat{y}$ onto the space spanned by the columns of $A$. Geometrically
speaking, choosing $\hat{h}$ to minimize the Euclidean distance between
$\hat{y}$ and $A\hat{h}$ is the same thing as choosing it to minimize the
sum of squared estimated measurement errors
$\Vert\hat{e}\Vert^2 = \Vert\hat{y} - A\hat{h}\Vert^2$.
The distance from $\hat{y}$ to $A\hat{h}$ is minimized when the
projection error $\hat{e} = \hat{y} - A\hat{h}$ is orthogonal to every
column of $A$, which is true if and only if $A^T\hat{e} = 0$ [83].
Thus, applying the orthogonality principle, we have
$$
0 = A^T\hat{e} = A^T\bigl(\hat{y} - A\hat{h}\bigr) = A^T\hat{y} - A^TA\,\hat{h} .
$$
Solving for $\hat{h}$ yields Eq. (5.16) as before, but this time we have
derived it as the least-squares estimate of $h$ in the presence of
output measurement error.
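As a quick numerical illustration of this result (not part of the original
derivation), the following Python/NumPy sketch builds a data matrix $A$ from a
known input signal, simulates a noisy measurement $\hat{y} = Ah + e$, solves
the normal equations $A^TA\,\hat{h} = A^T\hat{y}$, and checks that the residual
$\hat{e} = \hat{y} - A\hat{h}$ is numerically orthogonal to the columns of $A$.
The filter length, probe signal, and noise level are arbitrary choices made for
the example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: identify a length-3 FIR filter h from input/output data.
N, M = 200, 3                      # number of samples, filter length
x = rng.standard_normal(N)         # input signal (white-noise probe)
h_true = np.array([1.0, -0.5, 0.25])

# Data matrix A whose columns are delayed copies of the input, so that
# y = A @ h_true is the (truncated) convolution of x with h_true.
A = np.column_stack([np.concatenate([np.zeros(k), x[:N - k]])
                     for k in range(M)])

y = A @ h_true                      # noiseless output
e = 0.1 * rng.standard_normal(N)    # "measurement noise"
y_hat = y + e                       # measured output

# Least-squares estimate: solve the normal equations A^T A h = A^T y_hat.
h_ls = np.linalg.solve(A.T @ A, A.T @ y_hat)

# Orthogonality check: the residual is orthogonal to the columns of A,
# i.e., A^T e_hat is ~0 up to roundoff.
e_hat = y_hat - A @ h_ls
print("h_ls      =", h_ls)
print("A^T e_hat =", A.T @ e_hat)
\end{verbatim}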
It is also straightforward to introduce a weighting function in
the least-squares estimate for $\hat{h}$ by replacing $A^T$ in the
derivations above by $A^T W$, where $W$ is any positive definite
matrix (often taken to be diagonal and positive). In the present
context (time-domain formulations), it is difficult to choose a
weighting function that corresponds well to audio perception.
Therefore, in audio applications, frequency-domain formulations are
generally more powerful for linear-time-invariant system
identification. A practical example is the frequency-domain
equation-error method described in §G.4.3 [75].
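Nevertheless, for reference, here is a minimal sketch of the time-domain
weighted form just described, in which $A^T$ is replaced by $A^T W$ with $W$
diagonal and positive definite. The function name, the weights (which simply
de-emphasize the second half of the record), and the test signals are all
illustrative choices, not prescribed by the text.
\begin{verbatim}
import numpy as np

def weighted_least_squares(A, y_hat, w):
    """Solve A^T W A h = A^T W y_hat, where W = diag(w), w > 0 elementwise.

    Generic weighted least-squares sketch; the weights w are illustrative.
    """
    AtW = A.T * w                   # same as A.T @ np.diag(w), without forming W
    return np.linalg.solve(AtW @ A, AtW @ y_hat)

# Example: de-emphasize the second half of the measurement record.
rng = np.random.default_rng(1)
N, M = 100, 2
A = rng.standard_normal((N, M))
y_hat = A @ np.array([2.0, -1.0]) + 0.05 * rng.standard_normal(N)
w = np.concatenate([np.ones(N // 2), 0.1 * np.ones(N // 2)])
print(weighted_least_squares(A, y_hat, w))   # close to [2.0, -1.0]
\end{verbatim}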