For functions f and g defined on the interval [lo, hi], define their inner product to be
<f, g> = ∫lo..hi f(x) g(x) dx,
and the norm
||f||_2 = <f, f>^(1/2)     (also written as just ||f||).
For a constant c,
<c f, g> = <f, c g> = c <f, g>,
||c f||_2 = |c| ||f||_2.
Functions {fi | i ≥ 0} are orthogonal if
<fi, fj> = 0,   ∀ i, j s.t. i≠j
and are orthonormal if also
||fi|| = 1,   ∀ i.
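These definitions are easy to check numerically. A minimal Python sketch (the midpoint quadrature rule, the step count n, and the sin/cos example are illustrative assumptions, not from the original):

```python
import math

def inner(f, g, lo, hi, n=10_000):
    """Approximate <f, g> = integral of f(x) g(x) dx over [lo, hi]
    by the midpoint rule with n sub-intervals (any quadrature rule would do)."""
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) * g(lo + (k + 0.5) * h)
               for k in range(n)) * h

def norm(f, lo, hi):
    """||f|| = <f, f>^(1/2)."""
    return math.sqrt(inner(f, f, lo, hi))

# sin and cos are orthogonal on [-pi, +pi]:
assert abs(inner(math.sin, math.cos, -math.pi, math.pi)) < 1e-9
# ... but sin is not normalised there: ||sin||^2 = pi.
assert abs(norm(math.sin, -math.pi, math.pi) ** 2 - math.pi) < 1e-6
```

Dividing each basis function by its norm, fi / ||fi||, turns an orthogonal set into an orthonormal one.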
Note that
∫-1..+1 f(x) dx = (1/a) ∫-a..+a f(y/a) dy
and that
∫-a..+a f(x) dx = ∫-a+b..a+b f(y - b) dy,
so one can scale and shift a set of orthogonal functions to another range, [lo, hi], and normalise them (individually).
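For instance, the Legendre polynomials P1(x) = x and P2(x) = (3x² - 1)/2 are orthogonal on [-1, +1] and remain so after being shifted and scaled onto another interval. A sketch (the midpoint quadrature, the target interval [0, 2], and the tolerances are illustrative assumptions):

```python
import math

def inner(f, g, lo, hi, n=10_000):
    # midpoint-rule approximation of <f, g> on [lo, hi]
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) * g(lo + (k + 0.5) * h)
               for k in range(n)) * h

# Legendre P1 and P2, orthogonal on [-1, +1]:
P1 = lambda x: x
P2 = lambda x: (3 * x * x - 1) / 2

def rescale(f, lo, hi):
    """Shift and scale f's domain from [-1, +1] onto [lo, hi]."""
    return lambda y: f(2 * (y - lo) / (hi - lo) - 1)

lo, hi = 0.0, 2.0
Q1, Q2 = rescale(P1, lo, hi), rescale(P2, lo, hi)
# still orthogonal on [0, 2] ...
assert abs(inner(Q1, Q2, lo, hi)) < 1e-6
# ... and each can be normalised individually:
N1 = lambda y: Q1(y) / math.sqrt(inner(Q1, Q1, lo, hi))
assert abs(inner(N1, N1, lo, hi) - 1) < 1e-6
```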

Series of Functions

If a function f(x) has an expansion
f(x) = a0 f0(x) + a1 f1(x) + ...
over [lo, hi] in terms of orthogonal basis functions, {f0(x), f1(x), ...}, then
<f, fi> = ∫lo..hi f(x) fi(x) dx
 = ∫lo..hi (a0 f0(x) fi(x) + a1 f1(x) fi(x) + ...) dx
 = a0 <f0, fi> + a1 <f1, fi> + ...
 = a0 . 0 + ... + ai-1 . 0 + ai ||fi||^2 + ai+1 . 0 + ...
 = ai ||fi||^2,
so
ai = <f, fi> / ||fi||^2.
If the basis functions are orthonormal then ai = <f, fi>.
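The coefficient formula can be tried on a classic case: expanding f(x) = x on [-π, +π] in the orthogonal basis {sin(kx)}, whose coefficients are known analytically to be ak = 2 (-1)^(k+1) / k (the sawtooth Fourier series). A sketch (the quadrature rule and tolerances are illustrative assumptions):

```python
import math

def inner(f, g, lo, hi, n=10_000):
    # midpoint-rule approximation of <f, g> on [lo, hi]
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) * g(lo + (k + 0.5) * h)
               for k in range(n)) * h

f = lambda x: x
lo, hi = -math.pi, math.pi

def coeff(k):
    """a_k = <f, sin(kx)> / ||sin(kx)||^2, as derived above."""
    basis = lambda x: math.sin(k * x)
    return inner(f, basis, lo, hi) / inner(basis, basis, lo, hi)

assert abs(coeff(1) - 2.0) < 1e-4   # analytically  2/1
assert abs(coeff(2) + 1.0) < 1e-4   # analytically -2/2
```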


Suppose we are given "training" data points {(x1, y1), (x2, y2), ..., (xN, yN)}, where each xi ∈ [lo, hi], and we want to find a function, f(x), that is a "good" fit to the data, that is, each f(xi) is close to its corresponding yi. A common approach is to express f() as a series in terms of a set of "simple" basis functions (orthonormal basis functions have advantages) and to solve the resulting linear regression problem, minimising the sum of the squared errors, Σi (yi - f(xi))^2.
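As a sketch of that approach for the smallest useful basis (the straight-line basis {1, x} and the normal-equations solution are illustrative assumptions; a larger basis is handled the same way, and an orthonormal basis gives a better-conditioned system):

```python
def lstsq_line(pts):
    """Fit f(x) = a + b x to points (x_i, y_i), minimising
    the sum of squared errors via the 2x2 normal equations."""
    n = len(pts)
    sx  = sum(x for x, _ in pts)
    sy  = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    det = n * sxx - sx * sx
    a = (sy * sxx - sx * sxy) / det
    b = (n * sxy - sx * sy) / det
    return a, b

# points lying exactly on y = 1 + 2x are recovered exactly:
a, b = lstsq_line([(0, 1), (1, 3), (2, 5)])
assert abs(a - 1.0) < 1e-9 and abs(b - 2.0) < 1e-9
```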
© L. Allison, www.allisons.org/ll/   (or as otherwise indicated).
Created with "vi (Linux)",  charset=iso-8859-1,   fetched Saturday, 24-Feb-2024 23:15:02 UTC.
