
Given a series, A[1], A[2], ..., A[N],
and a number, G ≤ N,
what is the optimal way to partition the series
into G segments (i.e. groups, clusters, ...):
A[1..P_{1}],
A[P_{1}+1..P_{2}],
...,
A[P_{G-1}+1..N]?
The aim is to minimise the sum of squared differences (SSD)
of values from the means of their respective segments,
summed over all of the segments.
SSD_{i..j}
= Σ_{k=i..j} (A[k] - mean_{i..j})^{2}
= sumSq_{i..j}
- (sum_{i..j})^{2}/(j-i+1)
where
mean_{i..j} = sum_{i..j}/(j-i+1)
and
sum_{i..j}
= (Σ_{k=i..j} A[k])
and
sumSq_{i..j}
= (Σ_{k=i..j} A[k]^{2})
- see [μ&σ].
Now note that
sum_{i..j}
= sum_{1..j}
- sum_{1..i-1} and that
sumSq_{i..j}
= sumSq_{1..j}
- sumSq_{1..i-1}.
We can compute
sum_{1..i} and
sumSq_{1..i},
for all i, in O(N) time,
after which any SSD_{i..j} can be evaluated in O(1) time.
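For example, the prefix sums and the resulting O(1) SSD evaluation can be sketched in Python (the function names prefix_sums and ssd are illustrative, not from the original):

```python
def prefix_sums(A):
    """Return prefix sums of A and of A squared, with a leading zero,
    so sums[j] = A[1] + ... + A[j] using 1-based indexing."""
    N = len(A)
    sums = [0.0] * (N + 1)
    sumSqs = [0.0] * (N + 1)
    for i, a in enumerate(A, start=1):
        sums[i] = sums[i - 1] + a
        sumSqs[i] = sumSqs[i - 1] + a * a
    return sums, sumSqs

def ssd(sums, sumSqs, i, j):
    """SSD of A[i..j] about its mean (1-based, inclusive):
    sumSq_{i..j} - (sum_{i..j})^2 / (j-i+1), in O(1) time."""
    s = sums[j] - sums[i - 1]
    sq = sumSqs[j] - sumSqs[i - 1]
    return sq - s * s / (j - i + 1)
```

E.g. for A = [1, 2, 3, 10], ssd over positions 1..3 is 2.0 (the deviations from the mean 2 are -1, 0, +1).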
Now treat
SSD_{i+1..j}
like an "edge-weight",
or a distance d(i,j) if j>i,
in a graph having vertices 0, 1, 2, ..., N, and
edges <i,j> for j>i.
Seek a path of exactly G edges (i.e. segments) from vertex 0 to vertex N.
A dynamic programming
approach can be used (Fisher 1958).
Let P_{k}[j] be the minimum total SSD achievable for A[1..j]
using exactly k segments.
P_{1}[j] = SSD_{1..j}, for j=1..N
P_{k}[j]
= MIN_{i<j} {P_{k-1}[i]
+ SSD_{i+1..j}},
for k = 2..G
This suggests an O(G*N^{2})-time algorithm.
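The recurrence can be transcribed directly; the sketch below (the function name segment is an assumption) also keeps back-pointers so that the cut positions P_{1}..P_{G-1} can be recovered:

```python
def segment(A, G):
    """Fisher's (1958) dynamic program: split A[1..N] into exactly G
    contiguous segments minimising total within-segment SSD.
    O(G*N^2) time.  Returns (min total SSD, cuts [P_1, ..., P_{G-1}])."""
    N = len(A)
    assert 1 <= G <= N
    # prefix sums, 1-based: sums[j] = A[1] + ... + A[j]
    sums = [0.0] * (N + 1)
    sumSqs = [0.0] * (N + 1)
    for i, a in enumerate(A, start=1):
        sums[i] = sums[i - 1] + a
        sumSqs[i] = sumSqs[i - 1] + a * a

    def ssd(i, j):  # SSD of A[i..j] in O(1)
        s = sums[j] - sums[i - 1]
        return sumSqs[j] - sumSqs[i - 1] - s * s / (j - i + 1)

    INF = float("inf")
    # P[k][j] = minimum total SSD for A[1..j] using exactly k segments
    P = [[INF] * (N + 1) for _ in range(G + 1)]
    back = [[0] * (N + 1) for _ in range(G + 1)]
    for j in range(1, N + 1):
        P[1][j] = ssd(1, j)
    for k in range(2, G + 1):
        for j in range(k, N + 1):
            for i in range(k - 1, j):       # last segment is A[i+1..j]
                cost = P[k - 1][i] + ssd(i + 1, j)
                if cost < P[k][j]:
                    P[k][j], back[k][j] = cost, i
    cuts, j = [], N                          # recover cut positions
    for k in range(G, 1, -1):
        j = back[k][j]
        cuts.append(j)
    return P[G][N], cuts[::-1]
```

For instance, segment([1, 1, 1, 10, 10, 10], 2) cuts after position 3 with total SSD zero.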
Choosing the Number of Segments.
There are
^{N-1}C_{G-1}
segmentations of [1..N] into G segments:
a segmentation is fixed by choosing which G-1 of the N-1 gaps between adjacent elements hold cut points.
This gives 2^{N-1}
possible segmentations in total
if the maximum allowed number of segments equals N.
Least-squares segmentation
does not indicate how to choose an optimal value for G.
As G increases towards N, the lowest achievable total SSD must decrease
towards zero because
d(i-1,i) = SSD_{i..i} = 0.
Some other technique must be used to decide at what value of G to stop.
Various statistical techniques can be used, including some
based on information theory.
Having a large number of segments is clearly a more complex
hypothesis than having a small number of segments.
This can be made explicit by introducing a cost
for the hypothesis, i.e. the cost of stating G,
P_{1},
P_{2}, ...,
P_{G-1}, and
the mean of each segment.
Minimum Message Length
[MML]
is a general machine-learning technique that takes this approach.
If the total cost of stating a segment can be attributed to the segment
(and consequently d(i,j) > 0 for all i, j where i < j)
then, for example, a variation on Dijkstra's algorithm for
single-source shortest paths
can be used to find an optimal segmentation (over all possible values of G)
in O(N^{2}) time.
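As an illustration only, the sketch below uses a hypothetical constant per-segment penalty in place of a proper MML cost, just to make every edge weight positive; and because every edge <i,j> runs forward, relaxing vertices 0..N in increasing order (a DAG shortest-path pass, a simplification of the Dijkstra variation mentioned above) finds an optimal segmentation over all values of G in O(N^2) time:

```python
def segment_auto(A, penalty):
    """Shortest path from vertex 0 to vertex N where edge <i,j> costs
    SSD_{i+1..j} + penalty.  `penalty` is a hypothetical fixed cost per
    segment, standing in for an MML-style cost of stating the segment;
    it makes every edge weight positive, so the trivial N-segment path
    is not free.  Vertices are relaxed in order (the graph is a DAG).
    Returns (total cost, cut positions); G is chosen implicitly."""
    N = len(A)
    sums = [0.0] * (N + 1)
    sumSqs = [0.0] * (N + 1)
    for i, a in enumerate(A, start=1):
        sums[i] = sums[i - 1] + a
        sumSqs[i] = sumSqs[i - 1] + a * a

    def ssd(i, j):  # SSD of A[i..j] in O(1)
        s = sums[j] - sums[i - 1]
        return sumSqs[j] - sumSqs[i - 1] - s * s / (j - i + 1)

    dist = [float("inf")] * (N + 1)
    back = [0] * (N + 1)
    dist[0] = 0.0
    for j in range(1, N + 1):                # relax in topological order
        for i in range(j):
            d = dist[i] + ssd(i + 1, j) + penalty
            if d < dist[j]:
                dist[j], back[j] = d, i
    cuts, j = [], N                          # recover segment boundaries
    while j > 0:
        j = back[j]
        if j > 0:
            cuts.append(j)
    return dist[N], cuts[::-1]
```

With A = [1, 1, 1, 10, 10, 10] and penalty 1.0, the optimum is two segments (cut after position 3) at total cost 2.0; a larger penalty favours fewer segments.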
Notes
 W. D. Fisher.
On grouping for maximum homogeneity.
J. Am. Stat. Assoc. 53, pp.789-798, 1958.
Abstract:
Given a set of arbitrary numbers, what is a practical
procedure for grouping them so that the variance within groups
is minimized? An answer to this question, including
a description of an automatic computer program, is given
for problems up to the size where 200 numbers are to be
placed in 10 groups. Two basic types of problem are
discussed and illustrated.
[An early paper on such problems.
Fisher used the Illiac computer,
and quotes 14 minutes running time for N=200, G=10;
note the date! His algorithm's time complexity is not explicitly stated, but
it is probably O(N^{2}) for a given G
because he also quotes 3 minutes for N=96, G=10.
The problem is also called
the one-dimensional histogram problem.]
 R. A. Baxter & J. J. Oliver.
The kindest cut: minimum message length segmentation.
Proc. 7th Int. Workshop on Algorithmic Learning Theory,
pp.83-90, LNCS Vol.1160, Springer-Verlag, 1996.
Binary data, investigates precision of
stating cutpoints. Has stopping criterion.
 L. J. Fitzgibbon, L. Allison & D. L. Dowe.
Minimum message length grouping of ordered data.
Proc. 11th Int. Workshop on Algorithmic Learning Theory
(ALT'2000),
pp.56-70, Sydney, LNCS Vol.1968, Springer-Verlag, December 2000.
Multivariate data, dynamic programming algorithm (DPA),
avoids stating cutpoints! Has a stopping criterion.
 C. S. Wallace & D. M. Boulton.
An Information Measure for Classification.
Computer Journal 11(2), pp.185-194,
1968.
A computer program
called Snob solves a more general problem
where there are N things, each thing having K attributes,
and the problem is to find the optimal number of
classes (groups, clusters, species, types, ...) for the things.
A class can depend on some, not necessarily all, of the attributes.
The mean and variance of an attribute can vary from class to class.
Also see:
C. S. Wallace,
Statistical and Inductive Inference by Minimum Message Length,
Springer-Verlag, ISBN 0-387-23795-X, 2005.
 T. Edgoose & L. Allison.
Minimum Message Length Hidden Markov Modelling.
IEEE Data Compression Conference (DCC), Snowbird, pp.169-178, 1998.
Extends the Snob model to sequences, e.g. time-series.
© L.A. 1999

