Given a series,
A[1], A[2], ..., A[N],
and a number, G<=N,
what is the optimal way to partition the series
into G segments (i.e. groups, clusters, ...):
The aim is to minimise the sum of squared differences (SSD)
of values from the means of their respective segments,
summed over all of the segments.
- see [μ&σ].
SSD[i..j] = Σ_{k=i..j} (A[k] - mean[i..j])²
mean[i..j] = sum[i..j] / (j-i+1)
sum[i..j] = Σ_{k=i..j} A[k]
sumSq[i..j] = Σ_{k=i..j} A[k]²
Now note that
sum[i..j] = sum[1..j] - sum[1..i-1], and similarly for sumSq, and that
SSD[i..j] = sumSq[i..j] - sum[i..j]²/(j-i+1).
We can compute
sum[1..i] and sumSq[1..i],
for all i, in O(N)-time,
after which any SSD[i..j] can be found in O(1)-time.
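The prefix-sum idea above can be sketched in Python; the names `make_ssd`, `prefix_sum` and `prefix_sq` are illustrative, not from the original.

```python
def make_ssd(A):
    """Return a function ssd(i, j) giving SSD[i..j] for the 1-indexed
    series A[1..N], in O(1)-time per query after O(N) preprocessing."""
    n = len(A)
    # prefix_sum[i] = sum[1..i], prefix_sq[i] = sumSq[1..i]; index 0 is empty.
    prefix_sum = [0.0] * (n + 1)
    prefix_sq = [0.0] * (n + 1)
    for i, a in enumerate(A, start=1):
        prefix_sum[i] = prefix_sum[i - 1] + a
        prefix_sq[i] = prefix_sq[i - 1] + a * a

    def ssd(i, j):
        # SSD[i..j] = sumSq[i..j] - sum[i..j]^2 / (j-i+1)
        s = prefix_sum[j] - prefix_sum[i - 1]
        sq = prefix_sq[j] - prefix_sq[i - 1]
        return sq - s * s / (j - i + 1)

    return ssd
```

For example, for the series 1, 2, 3, 4 the mean of the whole run is 2.5 and SSD[1..4] = 2.25 + 0.25 + 0.25 + 2.25 = 5.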
The SSD of a segment can be treated
like an "edge-weight"
or a distance, d(i,j) = SSD[i+1..j] if j>i,
in a graph having vertices 0, 1, 2, ..., N, and
edges <i,j> for j>i.
Seek a path of exactly G edges (i.e. segments) from vertex 0 to vertex N.
A dynamic programming
approach can be used (Fisher 1958).
Let Pk[j] be the minimum total SSD achievable for A[1..j]
using exactly k segments.
P1[j] = SSD[1..j], for j = 1..N
for k = 2..G
   Pk[j] = min{ Pk-1[i] + SSD[i+1..j] : i = k-1..j-1 }, for j = k..N
The optimal total SSD for G segments is PG[N].
This suggests an O(G*N²)-time algorithm.
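The recurrence can be sketched in Python as below; `segment` and its back-pointer bookkeeping are illustrative names, not from the original. It returns the minimum total SSD and the right-hand end of each segment but the last.

```python
def segment(A, G):
    """Partition the series A (1-indexed as A[1..N]) into exactly G
    segments minimising total SSD; returns (total_ssd, cut_points)."""
    N = len(A)
    # Prefix sums so the SSD of any run A[i..j] is an O(1) query.
    ps = [0.0] * (N + 1)   # ps[i] = A[1] + ... + A[i]
    pq = [0.0] * (N + 1)   # pq[i] = A[1]^2 + ... + A[i]^2
    for i, a in enumerate(A, 1):
        ps[i] = ps[i - 1] + a
        pq[i] = pq[i - 1] + a * a

    def ssd(i, j):
        s = ps[j] - ps[i - 1]
        return pq[j] - pq[i - 1] - s * s / (j - i + 1)

    INF = float("inf")
    # P[k][j] = min total SSD for A[1..j] using exactly k segments.
    P = [[INF] * (N + 1) for _ in range(G + 1)]
    back = [[0] * (N + 1) for _ in range(G + 1)]
    for j in range(1, N + 1):
        P[1][j] = ssd(1, j)
    for k in range(2, G + 1):
        for j in range(k, N + 1):
            for i in range(k - 1, j):   # last segment is A[i+1..j]
                c = P[k - 1][i] + ssd(i + 1, j)
                if c < P[k][j]:
                    P[k][j], back[k][j] = c, i
    # Recover the segment boundaries by following back-pointers.
    cuts, j = [], N
    for k in range(G, 1, -1):
        j = back[k][j]
        cuts.append(j)
    return P[G][N], sorted(cuts)
```

For instance, `segment([1, 1, 1, 9, 9, 9], 2)` cuts after position 3, giving two perfectly homogeneous segments and a total SSD of zero.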
Choosing the Number of Segments.
There are C(N-1, G-1)
segmentations of [1..N] into G segments,
because a segmentation chooses G-1 cut-points
from the N-1 gaps between adjacent positions.
This gives 2^(N-1)
possible segmentations in total
if the maximum allowed number of segments equals N.
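The counting can be checked numerically: a segmentation of [1..N] into G segments chooses G-1 cut-points from the N-1 gaps between adjacent positions, giving C(N-1, G-1) possibilities, and summing over G = 1..N gives 2^(N-1) in total.

```python
from math import comb

N = 10
# Number of segmentations for each G = 1..N.
counts = [comb(N - 1, G - 1) for G in range(1, N + 1)]
# Summed over all G, the binomial theorem gives 2^(N-1).
assert sum(counts) == 2 ** (N - 1)   # 512 for N = 10
```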
The dynamic programming algorithm does not indicate how to choose an optimal value for G.
As G increases towards N, the lowest achievable total SSD must decrease
towards zero because
d(i-1,i) = SSD[i..i] = 0.
Some other technique must be used to decide at what value of G to stop.
Various statistical techniques can be used including some
based on information-theory.
Having a large number of segments is clearly a more complex
hypothesis than having a small number of segments.
This can be made explicit by introducing a cost
for the hypothesis, i.e. the cost of stating G,
the cut-points, and the mean of each segment.
Minimum Message Length
is a general machine-learning technique that takes this approach.
If the total cost of stating a segment can be attributed to the segment
(and consequently d(i,j)>0 for all i, j where i<j)
then, for example, a variation on Dijkstra's algorithm for
single-source, shortest paths
can be used to find an optimal segmentation (over all possible values of G).
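The shortest-path idea can be sketched in Python as below, assuming each segment carries a fixed positive cost `seg_cost` (a crude stand-in for the message-length cost of stating a segment; a real MML cost would come from the model). The function name and structure are illustrative, not from the original.

```python
import heapq

def segment_dijkstra(A, seg_cost):
    """Vertices 0..N; edge <i,j> (j>i) weighs SSD[i+1..j] + seg_cost > 0.
    Dijkstra from vertex 0 to vertex N then finds an optimal
    segmentation over all possible numbers of segments G."""
    N = len(A)
    ps = [0.0] * (N + 1)
    pq2 = [0.0] * (N + 1)
    for i, a in enumerate(A, 1):
        ps[i] = ps[i - 1] + a
        pq2[i] = pq2[i - 1] + a * a

    def d(i, j):  # weight of edge <i,j>: SSD of A[i+1..j] plus seg_cost
        s = ps[j] - ps[i]
        return pq2[j] - pq2[i] - s * s / (j - i) + seg_cost

    dist = [float("inf")] * (N + 1)
    prev = [0] * (N + 1)
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > dist[u]:
            continue                     # stale queue entry
        for v in range(u + 1, N + 1):
            dv = du + d(u, v)
            if dv < dist[v]:
                dist[v], prev[v] = dv, u
                heapq.heappush(heap, (dv, v))
    cuts, v = [], N
    while v:                             # walk back to vertex 0
        v = prev[v]
        if v:
            cuts.append(v)
    return dist[N], sorted(cuts)
```

With `A = [1, 1, 1, 9, 9, 9]` and `seg_cost = 1.0`, the per-segment cost makes one big segment (SSD 96) and three or more segments (cost at least 3) both worse than cutting after position 3, so the path 0 → 3 → 6 wins with total cost 2.0.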
© L.A. 1999
- W. D. Fisher.
On grouping for maximum homogeneity.
Jrnl. Am. Stat. Assoc. 53 pp.789-798, 1958.
Given a set of arbitrary numbers, what is a practical
procedure for grouping them so that the variance within groups
is minimized? An answer to this question, including
a description of an automatic computer program, is given
for problems up to the size where 200 numbers are to be
placed in 10 groups. Two basic types of problem are
discussed and illustrated.
[An early paper on such problems.
Fisher used the Illiac computer,
and quotes 14 minutes running time for N=200, G=10;
note the date! His algorithm's time-complexity is not explicitly stated, but
it is probably O(N²) for a given G
because he also quotes 3 minutes for N=96, G=10.
The problem is also called
the one-dimensional histogram problem.]
- R. A. Baxter & J. J. Oliver.
The kindest cut: minimum message length segmentation.
Proc. 7th Int. Workshop on Algorithmic Learning Theory,
pp.83-90, LNCS Vol.1160, Springer-Verlag, 1996.
Binary data, investigates precision of
stating cut-points. Has stopping criterion.
- L. J. Fitzgibbon, L. Allison & D. L. Dowe.
Minimum message length grouping of ordered data.
Proc. 11th Int. Workshop on Algorithmic Learning Theory
pp.56-70, Sydney, LNCS Vol.1968, Springer-Verlag, December 2000.
Multivariate data, dynamic programming algorithm (DPA),
avoids stating cut-points! Has a stopping criterion.
- C. S. Wallace & D. M. Boulton.
An Information Measure for Classification.
Computer Journal 11(2) pp.185-194, 1968.
A computer program
called Snob solves a more general problem
where there are N things, each thing having K attributes,
and the problem is to find the optimal number of
classes (groups, clusters, species, types, ...) for the things.
A class can depend on some, not necessarily all, of the attributes.
The mean and variance of an attribute can vary from class to class.
- C. S. Wallace,
Statistical and Inductive Inference by Minimum Message Length,
Springer Verlag, isbn:0-387-23795-X, 2005.
- T. Edgoose & L. Allison.
Minimum Message Length Hidden Markov Modelling.
IEEE Data Compression Conference (DCC), Snowbird, pp.169-178, 1998.
Extends the Snob model to sequences e.g. time-series.