Least-Squares Segmentation of a Series
Given a series, A[1], A[2], ..., A[N], and a number G <= N, what is the optimal way to partition the series into G segments (i.e. groups, clusters, ...):
A[1..P_1], A[P_1+1..P_2], ..., A[P_{G-1}+1..N]?
Here "optimal" is taken to mean minimising the total, over all segments, of the within-segment sum of squared deviations (SSD) from the segment mean:
SSD_{i..j} = Σ_{k=i..j} (A[k] - mean_{i..j})^2
           = sumSq_{i..j} - (sum_{i..j})^2 / (j-i+1)
where mean_{i..j} = sum_{i..j} / (j-i+1)
and sum_{i..j} = Σ_{k=i..j} A[k]
and sumSq_{i..j} = Σ_{k=i..j} A[k]^2.
Now note that
sum_{i..j} = sum_{1..j} - sum_{1..i-1} and that
sumSq_{i..j} = sumSq_{1..j} - sumSq_{1..i-1},
so it is enough to compute the prefix totals sum_{1..i} and sumSq_{1..i}, for all i, which can be done in O(N)-time; any SSD_{i..j} can then be evaluated in O(1)-time.
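For concreteness, here is a minimal Python sketch of this precomputation; the names prefixes and ssd are illustrative, not from the original.

def prefixes(A):
    """Prefix sums of A[k] and A[k]^2; index 0 represents the empty prefix."""
    n = len(A)
    S = [0.0] * (n + 1)    # S[i]  = A[1] + ... + A[i]   (1-based, S[0] = 0)
    SQ = [0.0] * (n + 1)   # SQ[i] = A[1]^2 + ... + A[i]^2
    for i, a in enumerate(A, start=1):
        S[i] = S[i - 1] + a
        SQ[i] = SQ[i - 1] + a * a
    return S, SQ

def ssd(S, SQ, i, j):
    """SSD of the segment A[i..j] (1-based, inclusive), in O(1)-time."""
    s = S[j] - S[i - 1]
    sq = SQ[j] - SQ[i - 1]
    return sq - s * s / (j - i + 1)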
Now treat SSD_{i+1..j} like an "edge-weight" or a distance, d(i,j) for j > i, in a graph having vertices 0, 1, 2, ..., N, and edges <i,j> for j > i. Seek a path of exactly G edges (i.e. segments) from vertex 0 to vertex N. A dynamic programming approach can be used (Fisher 1958). Let P_k[j] be the minimum total SSD achievable for A[1..j] using exactly k segments:
P_1[j] = SSD_{1..j}, for j = 1..N
P_k[j] = min_{i<j} { P_{k-1}[i] + SSD_{i+1..j} }, for k = 2..G
The optimal G-segmentation has total SSD P_G[N], its cut points are recovered by backtracking, and the whole computation takes O(G N^2)-time.
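The recurrence translates directly into code. Below is a self-contained Python sketch; the function name segment and the backtracking bookkeeping are illustrative assumptions, not part of the original.

def segment(A, G):
    """Partition A[1..N] (A given as a 0-based Python list) into exactly G
    contiguous segments minimising the total within-segment SSD.
    Returns (total_SSD, list_of_segments).  A sketch of the Fisher (1958) DP,
    O(G * N^2) overall."""
    N = len(A)
    assert 1 <= G <= N

    # Prefix sums of A[k] and A[k]^2 (index 0 = empty prefix).
    S = [0.0] * (N + 1)
    SQ = [0.0] * (N + 1)
    for i, a in enumerate(A, start=1):
        S[i] = S[i - 1] + a
        SQ[i] = SQ[i - 1] + a * a

    def ssd(i, j):                        # SSD of A[i..j], 1-based, O(1)
        s, sq = S[j] - S[i - 1], SQ[j] - SQ[i - 1]
        return sq - s * s / (j - i + 1)

    INF = float('inf')
    # P[k][j] = least total SSD of A[1..j] using exactly k segments.
    P = [[INF] * (N + 1) for _ in range(G + 1)]
    cut = [[0] * (N + 1) for _ in range(G + 1)]   # last cut point, for backtracking
    for j in range(1, N + 1):
        P[1][j] = ssd(1, j)
    for k in range(2, G + 1):
        for j in range(k, N + 1):
            for i in range(k - 1, j):     # i = end of the previous segment
                cost = P[k - 1][i] + ssd(i + 1, j)
                if cost < P[k][j]:
                    P[k][j], cut[k][j] = cost, i

    # Recover the segments by backtracking through the cut points.
    segs, j = [], N
    for k in range(G, 0, -1):
        i = cut[k][j]
        segs.append(A[i:j])               # A[i+1..j] in 1-based terms
        j = i
    segs.reverse()
    return P[G][N], segs

For example, segment([1, 1, 1, 9, 9, 9, 5], 3) returns (0.0, [[1, 1, 1], [9, 9, 9], [5]]).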
Choosing the Number of Segments.
There are C(N-1, G-1) segmentations of [1..N] into G segments, one for each choice of the G-1 cut points from the N-1 positions between elements. This gives 2^{N-1} possible segmentations in total if the maximum allowed number of segments equals N. Least-squares segmentation does not indicate how to choose an optimal value for G. As G increases towards N, the lowest achievable total SSD must decrease towards zero because d(i-1,i) = SSD_{i..i} = 0. Some other technique must be used to decide at what value of G to stop. Various statistical techniques can be used, including some based on information theory.
Having a large number of segments is clearly a more complex hypothesis than having a small number of segments. This can be made explicit by introducing a cost for the hypothesis, i.e. the cost of stating G, P_1, P_2, ..., P_{G-1}, and the mean of each segment. Minimum Message Length (MML) is a general machine-learning technique that takes this approach.
If the total cost of stating a segment can be attributed to the segment (and consequently d(i,j) > 0 for all i, j where i < j) then, for example, a variation on Dijkstra's algorithm for single-source, shortest paths can be used to find an optimal segmentation (over all possible values of G) in O(N^2)-time.
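The following Python sketch illustrates this shortest-path view. It assumes, purely for illustration, a constant per-segment cost C added to each edge weight (a stand-in for the real cost of stating a segment, which MML makes precise); none of the names below come from the original.

import heapq

def segment_auto(A, C):
    """Shortest-path segmentation of A[1..N] with edge weight
    d(i, j) = SSD(A[i+1..j]) + C, where C > 0 is an assumed per-segment
    cost; the number of segments G is chosen implicitly.  A sketch only."""
    N = len(A)
    S = [0.0] * (N + 1)
    SQ = [0.0] * (N + 1)
    for i, a in enumerate(A, start=1):
        S[i] = S[i - 1] + a
        SQ[i] = SQ[i - 1] + a * a

    def ssd(i, j):                       # SSD of A[i..j], 1-based, O(1)
        s, sq = S[j] - S[i - 1], SQ[j] - SQ[i - 1]
        return sq - s * s / (j - i + 1)

    dist = [float('inf')] * (N + 1)
    prev = [0] * (N + 1)
    dist[0] = 0.0
    heap = [(0.0, 0)]                    # Dijkstra from vertex 0
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist[i]:
            continue
        if i == N:
            break
        for j in range(i + 1, N + 1):    # edge <i, j>: segment A[i+1..j]
            nd = d + ssd(i + 1, j) + C
            if nd < dist[j]:
                dist[j], prev[j] = nd, i
                heapq.heappush(heap, (nd, j))

    # Recover the cut points 0 < v_1 < ... < v_G = N by backtracking.
    cuts, j = [], N
    while j > 0:
        cuts.append(j)
        j = prev[j]
    return dist[N], sorted(cuts)

For example, with A = [1, 1, 1, 9, 9, 9] and a small cost such as C = 0.1, segment_auto recovers the cut points [3, 6], i.e. two segments, without G being specified in advance.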
Notes
- W. D. Fisher.
On grouping for maximum homogeneity.
Jrnl. Am. Stat. Assoc., 53, pp.789-798, 1958.
Abstract: Given a set of arbitrary numbers, what is a practical procedure for grouping them so that the variance within groups is minimized? An answer to this question, including a description of an automatic computer program, is given for problems up to the size where 200 numbers are to be placed in 10 groups. Two basic types of problem are discussed and illustrated.
[An early paper on such problems. Fisher used the Illiac computer, and quotes 14 minutes running time for N=200, G=10; note the date! His algorithm's time-complexity is not explicitly stated, but it is probably O(N^2) for a given G because he also quotes 3 minutes for N=96, G=10. The problem is also called the one-dimensional histogram problem.]
- R. A. Baxter & J. J. Oliver.
The kindest cut: minimum message length segmentation.
Proc. 7th Int. Workshop on Algorithmic Learning Theory,
pp.83-90, LNCS Vol.1160, Springer-Verlag, 1996.
Binary data, investigates precision of stating cut-points. Has a stopping criterion.
- L. J. Fitzgibbon, L. Allison & D. L. Dowe.
Minimum message length grouping of ordered data.
Proc. 11th Int. Workshop on Algorithmic Learning Theory
(ALT'2000),
pp.56-70, Sydney, LNCS Vol.1968, Springer-Verlag, December 2000.
Multivariate data, dynamic programming algorithm (DPA), avoids stating cut-points! Has a stopping criterion.
- C. S. Wallace & D. M. Boulton.
An Information Measure for Classification.
Computer Journal 11(2), pp.185-194, 1968.
A computer program called Snob solves a more general problem where there are N things, each thing having K attributes, and the problem is to find the optimal number of classes (groups, clusters, species, types, ...) for the things. A class can depend on some, not necessarily all, of the attributes. The mean and variance of an attribute can vary from class to class. Also see:
C. S. Wallace, Statistical and Inductive Inference by Minimum Message Length, Springer-Verlag, isbn:0-387-23795-X, 2005.
- T. Edgoose & L. Allison.
Minimum Message Length Hidden Markov Modelling.
IEEE Data Compression Conference (DCC), Snowbird, pp.169-178, 1998.
Extends the Snob model to sequences, e.g. time-series.