
This chapter presents some algorithms on the in-built types. It is an introduction by examples to some principles of algorithm design.

Swap Segments

Imagine the problem of exchanging two segments of an array. Given an array A of n elements and an integer m, 0<=m<=n, exchange A(1..m) with A(m+1..n). A(i..j) stands for the elements A(i), A(i+1), ..., A(j):

A(1) A(2) A(3) A(4) A(5) A(6) A(7) A(8) A(9) A(10)

initial state:
  10   20   30   40   50   60   70   80   90   100

m=4 --------------|
final state:
  50   60   70   80   90  100   10   20   30   40

Example: swap A(1..m) with A(m+1..n).

Note the special cases. If m equals zero then A(1..m) is empty, and if m equals n then A(m+1..n) is empty. An algorithmic solution will be simpler, and more likely to be correct, if it does not have to deal with these as special exceptions.

In general the elements of the array might be integers, as in the example, or characters or other objects. This problem can arise when sorting sequences of elements. Variable length sequences can be stored as segments of an array and sorting the sequences might involve swapping two sequences (segments) that are out of order. This is an exercise in algorithm design and in what follows we want an algorithm that is correct (solves the problem), terminates, and is efficient in terms of space (compact) and time (fast).

In order to get ideas, it often helps to think of simpler but related problems. If the values of two scalar variables, X and Y, are to be swapped, a temporary object must be used to hold the value of X (say) while Y is assigned to X. The temporary is then assigned to Y:

const temp:=X; X:=Y; Y:=temp  % swap X and Y

The use of the temporary prevents the loss of the initial value of X. A similar technique can be used to swap the array segments. A(1..m) is stored in a temporary array, A(m+1..n) is moved to A(1..n-m) and finally the initial value of A(1..m) is moved from the temporary to A(n-m+1..n):

var temp :array 1..m of int
for i :1..m          % temp := A(1..m)
   temp(i) := A(i)
end for

for i :m+1..n        % A(1..n-m) := A(m+1..n)
   A(i-m) := A(i)
end for

for i :1..m          % A(n-m+1..n) := temp
   A(n-m+i) := temp(i)
end for

Swap-segments using a Temporary Array.

In addition to the given array A, the algorithm uses extra space for the variable i and for the temporary array. Assuming m is reasonably large, temp is the extra object of dominant size and the algorithm uses extra space approximately proportional to m. This turns out to be unnecessarily extravagant. The algorithm contains three sequential loops that are executed m, n-m and m times respectively. The bodies of all loops do similar amounts of work and take similar times to execute once. The total running time is therefore approximately proportional to n+m. Since m<=n, the time is at most proportional to n. Note that each element of A is moved either once or twice (via temp). Any algorithm for this problem must move each element at least once and thus take time at least proportional to n, unless m=0 or m=n. It is therefore unlikely that there is a much faster algorithm for the problem. However, a computer scientist should not stop at the first algorithm discovered but should try to improve on it. The space used is an obvious target here.
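The chapter's examples use a Pascal-like notation; as a cross-check, here is the same temporary-array algorithm as a sketch in Python (0-indexed lists, so segment A(1..m) becomes A[0:m]; the function name is ours):

```python
def swap_segments_temp(A, m):
    # Exchange A[0:m] with A[m:n] via a temporary copy of the first
    # segment: extra space proportional to m, time proportional to n.
    n = len(A)
    temp = A[0:m]          # temp := A(1..m)
    A[0:n - m] = A[m:n]    # A(1..n-m) := A(m+1..n)
    A[n - m:n] = temp      # A(n-m+1..n) := temp

A = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
swap_segments_temp(A, 4)
print(A)   # [50, 60, 70, 80, 90, 100, 10, 20, 30, 40]
```

Note that m=0 and m=n need no special treatment: an empty slice is simply copied.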

Algorithm design often involves restating the problem in some other way. Looking at the problem again, it can be seen that the elements of the array are to be left-shifted by m places cyclically or right-shifted by n-m places. This can form the basis of an algorithm. The following code shifts left by one place:

const temp := A(1)
for i :1..n-1
   A(i) := A(i+1)
end for
A(n) := temp

Left-shift 1 place.

The time that this takes to execute is roughly proportional to n. Shifting left by m places can be done by shifting by one place, m times:

for j :1..m
   const temp := A(1)
   for i :1..n-1
      A(i) := A(i+1)
   end for
   A(n) := temp
end for

Left-shift 1 place, m times.

This algorithm uses only a small, constant amount of extra space for a temporary scalar object and the loop control variables. The space does not depend on m or n. This is much better than the first solution. However the algorithm contains two nested loops. The outer loop is executed m times. For each iteration of the outer loop, the inner loop is executed n-1 times, making m×(n-1) time in total. This is where the algorithm spends most of its time. Thus it takes time approximately proportional to m×n in total to run. This is much worse than the first solution. A natural question to ask is, can we have the best of both worlds?

Note that the second algorithm moves each element of the array m times. If each element were moved a bounded number of times, as in the first algorithm, then a solution running in time proportional to n (only) might be possible.

Looking at an example of the problem, note that A(1) is replaced by A(m+1) in the final state, A(m+1) is replaced by A(2m+1) and so on, counting m places to the right at each step.

m=4, n=10

  1                 m+1                 2m+1
A(1) A(2) A(3) A(4) A(5) A(6) A(7) A(8) A(9) A(10)
  ^                 | ^                 |
  |                 | |                 |
  |-------<---------- |-------<----------  etc.

one cycle:
 temp <- A(1) <- A(5) <- A(9) <- A(3) <- A(7) <- temp

Example: Partial Swapping of Segments.

Care must be taken not to "drop off" the end of the array. The array must be treated as a circle so that A(n) is followed by A(1). This can be done by using arithmetic modulo (mod) n. In general, A(i) is replaced by A((i+m-1) mod n + 1). Note that the plus and minus ones could be dispensed with if the array were indexed from zero to n-1; this will crop up again in the chapter on queues.

A cycle of replacements can be carried out in this way until A(1), which must have been stored in a temporary, is reached again. Each element is moved just once or twice. Unfortunately it is not necessarily true that all elements are moved in such a cycle. In the above example only half of the elements are moved. In general, a cycle fails to move all elements when m and n have a common factor greater than one. In the example, m and n have a highest common factor of two. However further cycles can be started at positions 2, 3 and so on until every element has been moved in some cycle. This forms the basis of an algorithm that moves each element just once or twice.
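A sketch of the resulting cyclic algorithm in Python (0-indexed, so the modular arithmetic loses the plus and minus ones; `gcd` gives the number of cycles, as the exercise at the end of the chapter asks you to prove):

```python
from math import gcd

def swap_segments_cyclic(A, m):
    # Left-rotate A by m places, moving each element just once
    # (plus one move through temp per cycle); O(1) extra space.
    n = len(A)
    if n == 0 or m % n == 0:
        return
    m = m % n
    for start in range(gcd(m, n)):   # one cycle per common factor
        temp = A[start]
        i = start
        while (i + m) % n != start:
            A[i] = A[(i + m) % n]    # A(i) is replaced by A(i+m), cyclically
            i = (i + m) % n
        A[i] = temp

A = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
swap_segments_cyclic(A, 4)
print(A)   # [50, 60, 70, 80, 90, 100, 10, 20, 30, 40]
```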

Although the algorithm contains two nested loops, it can be seen that the total number of times that the body of the inner loop is executed is approximately n and that the total running time is therefore approximately proportional to n. Only a small, constant amount of extra space is used and so both the time and space goals have been achieved.

In order to run in time proportional to n it is sufficient to move each element a bounded number of times and there is another much simpler algorithm that also achieves this. It requires looking at the problem in yet another way and most people are a little "shocked" when they first see it, often rather annoyed if they did not discover it themselves. Rather than thinking about cyclic shifts it relies upon reversals of segments of the array. Reversing A(1..m) and reversing A(m+1..n) gets the desired elements adjacent to each other, in particular A(1) and A(n), but in reverse order. Thus finally reversing the whole array A(1..n) solves the original problem.

A(1) A(2) A(3) A(4) A(5) A(6) A(7) A(8) A(9) A(10)
  10   20   30   40   50   60   70   80   90   100

reverse A(1..m)
  40   30   20   10   50   60   70   80   90   100

reverse A(m+1..n)
  40   30   20   10  100   90   80   70   60    50

reverse A(1..n)
  50   60   70   80   90  100   10   20   30    40

Swapping Segments by Reversals.

To reverse an array segment A(lo..hi), swap A(lo) with A(hi), A(lo+1) with A(hi-1) and so on until the middle of the array is reached. This gives another algorithm.
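The reversal algorithm is short enough to give in full; a sketch in Python (0-indexed):

```python
def reverse(A, lo, hi):
    # Reverse A[lo..hi] in place, swapping ends and moving inward.
    while lo < hi:
        A[lo], A[hi] = A[hi], A[lo]
        lo += 1
        hi -= 1

def swap_segments_reverse(A, m):
    # Exchange A[0:m] with A[m:n] by three reversals; O(1) extra space.
    n = len(A)
    reverse(A, 0, m - 1)   # reverse A(1..m)
    reverse(A, m, n - 1)   # reverse A(m+1..n)
    reverse(A, 0, n - 1)   # reverse A(1..n)

A = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
swap_segments_reverse(A, 4)
print(A)   # [50, 60, 70, 80, 90, 100, 10, 20, 30, 40]
```

Again the special cases fall out for free: reversing an empty segment does nothing, so m=0 and m=n need no extra code.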

Only a small, constant amount of extra space is used. Each array element is moved a bounded number of times - between two and four times - so this algorithm also runs in time proportional to n only. The elements are moved more often than in the cyclic algorithm so the reversing algorithm is slower by a constant factor, but on the other hand it is simpler.

Note that the way of looking at the problem, as a swap, as cyclic shifts or as reversals, has a profound effect on the algorithmic solution. It is important to be open to new ways of viewing a problem. The computer scientist's first responsibility is to find a correct algorithm but finding an efficient one comes a close second.

When it comes to testing and debugging the algorithms note what the special cases of data for the problem are. There are infinitely many possible inputs and they cannot all be tried but some thought reveals that passing a few key tests will give considerable confidence in a program. The values of the elements of the array do not matter, except that they should be different so that they can be recognised. Interesting values of n include 0, 1, 2 and one or two larger values. Interesting values of m include 0, 1, 2, n-1, n and some values where m and n do and do not have common factors.

Do the algorithms presented exhaust all the possibilities for this apparently simple problem? It seems unlikely that there are any more sensible and yet radically different algorithms but one can never tell.

Manipulating Vectors and Matrices

A one-dimensional array (array l..h of T) is often called a vector and a two-dimensional array (array l1..h1, l2..h2 of T) is often called a matrix. Vectors and matrices have many uses.

A simple example based around the weather is given here to illustrate some vector and matrix operations. A simple (!) model of the weather might consider only two states: sunny or raining. Long observations may allow the probability of the state tomorrow to be estimated given the state of the weather today. This can be represented by a matrix W where Wij is the probability of state j tomorrow given state i today.

             sun  rain

       sun | 0.8   0.2 |
today      |           |
      rain | 0.3   0.7 |

Transition Probabilities, W.

Clearly each row of W must add up to one. For example, if it is sunny today it will probably (p=0.8) be sunny tomorrow but it might rain (p=0.2).

Suppose that somehow we know the probability of each state on some future day D. This information can be represented by a vector [S,R] where S+R=1.0. What about the day after D? It might be sunny on D and continue sunny, with probability 0.8×S, or it might be raining on D but fine up, with probability 0.3×R, giving in total a 0.8×S+0.3×R chance of it being sunny on D+1. Similarly the probability of rain on D+1 is 0.2×S+0.7×R. The probabilities for D+1 are [0.8×S+0.3×R, 0.2×S+0.7×R]. This calculation is an example of multiplying a vector and a matrix.

        |0.8 0.2|
[S,R] × |       | = [S',R'] = [0.8×S+0.3×R, 0.2×S+0.7×R]
        |0.3 0.7|

Exercise: show that S'+R'=1.0.

In general, a vector and a matrix are multiplied as follows. The vector must have as many elements as the matrix has rows. Each column of the matrix is treated as a vector. Corresponding elements of the given vector and the first column of the matrix are multiplied and the products are added together. This gives the first element of the result. This is repeated for the second column and so on.

Note that if V has m elements and M is an m×n matrix then the result has n elements and that the algorithm takes time approximately proportional to m×n.
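A sketch of vector-matrix multiplication in Python, with the weather matrix W as data (the half-and-half starting vector is our own illustration):

```python
def vec_mat_mul(V, M):
    # Row vector V (m elements) times m×n matrix M gives n elements:
    # result j is the sum of V[i]*M[i][j] over all i.
    m = len(M)
    n = len(M[0])
    assert len(V) == m
    return [sum(V[i] * M[i][j] for i in range(m)) for j in range(n)]

W = [[0.8, 0.2],
     [0.3, 0.7]]
print(vec_mat_mul([0.5, 0.5], W))   # approximately [0.55, 0.45]
```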

The transition probabilities for two days ahead can be computed from the previous matrix W. Consider all the possibilities:

 D      D+1     D+2
sun --->sun --->sun   0.8×0.8
sun --->rain--->sun  +0.2×0.3

sun --->sun --->rain  0.8×0.2
sun --->rain--->rain +0.2×0.7

rain--->sun --->sun   0.3×0.8
rain--->rain--->sun  +0.7×0.3

rain--->sun --->rain  0.3×0.2
rain--->rain--->rain +0.7×0.7

         day after tomorrow
             sun  rain

today  sun | 0.7   0.3 |
           |           | = W²
      rain | 0.45  0.55|

Transition Probabilities for the Day After Tomorrow.

This is an example of matrix-multiplication.

Two matrices A and B are multiplied to give a matrix Ans as follows. The ith row of A and the jth column of B are treated as vectors. Corresponding elements are multiplied and the products added together to give the (i,j)th element of Ans. Note that this is equivalent to doing a vector-matrix multiplication for each row of A.
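A sketch of matrix multiplication in Python; applied to W it gives the two-days-ahead matrix computed by hand above:

```python
def mat_mul(A, B):
    # l×m matrix A times m×n matrix B: Ans[i][j] is the dot product
    # of row i of A with column j of B.
    l, m, n = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(n)]
            for i in range(l)]

W = [[0.8, 0.2],
     [0.3, 0.7]]
W2 = mat_mul(W, W)   # approximately [[0.70, 0.30], [0.45, 0.55]]
```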

In the weather example, W gives the transition probabilities one day ahead, W² gives the transition probabilities two days ahead, W³ three days ahead and so on. The matrices being multiplied need not be square; it is sufficient (and necessary) for the second dimension of the first matrix to match the first dimension of the second matrix. Multiplying an l×m matrix and an m×n matrix gives an l×n matrix. The algorithm clearly takes time roughly proportional to l×m×n as it spends most of its time within the triply-nested loops. This equals n³ if l=m=n. Strassen gave an algorithm that is faster for very large matrices but for most practical applications the given algorithm is faster.

Matrix addition and subtraction can also be defined on m×n matrices. It can be shown that +, - and × on square matrices obey most, but not all, of the usual laws of real and integer +, - and ×.

X=Y+Z where Xij=Yij+Zij
X=Y-Z where Xij=Yij-Zij
X=Y×Z where Xij=SIGMAk Yik×Zkj

Identity:         I where Iii=1, Iij=0 if i~=j
                  X×I = I×X = X
associative:      X×(Y×Z) = (X×Y)×Z
                  X+(Y+Z) = (X+Y)+Z
× non-commutative:X×Y ~= Y×X   in general
distributive:     X×(Y+Z) = X×Y + X×Z

Matrix Operations.

Some, but not all, n×n matrices have an inverse X^-1 such that X×X^-1 = I.


Strassen showed how to multiply two n×n matrices in O(n^log2(7)) time, which is faster than O(n³); note that 3 = log2(8). The algorithm divides each matrix into four quarters. It then performs seven (hence the log2(7)) matrix multiplications, recursively, on matrices of this smaller size.

The algorithm is easy to write in Algol68 because that language has built-in array slicing operations, e.g. a[i:j, m:n], which select sub-arrays. The algorithm can of course be coded in any reasonable programming language.
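A minimal sketch of Strassen's algorithm in Python, assuming n is a power of two (a production version would pad other sizes and switch to the ordinary algorithm below some small n):

```python
def strassen(A, B):
    # Multiply n×n matrices with 7 recursive products instead of 8,
    # giving O(n^log2(7)) arithmetic operations; n a power of two.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quarters(M):   # split M into four h×h quarters
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a, b, c, d = quarters(A)
    e, f, g, i = quarters(B)
    p1 = strassen(a, sub(f, i))          # the seven recursive products
    p2 = strassen(add(a, b), i)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, i))
    p6 = strassen(sub(b, d), add(g, i))
    p7 = strassen(sub(a, c), add(e, f))
    top_left  = add(sub(add(p5, p4), p2), p6)
    top_right = add(p1, p2)
    bot_left  = add(p3, p4)
    bot_right = sub(sub(add(p5, p1), p3), p7)
    return ([tl + tr for tl, tr in zip(top_left, top_right)] +
            [bl + br for bl, br in zip(bot_left, bot_right)])

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```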

Subsequent to Strassen, slightly faster and even more complex matrix multiplication algorithms have been devised.

-- © L.A. 1984


  1. Insert "counting variables" in the various swap-segments algorithms so as to count the number of times an element of the array is accessed or stored. Gather these counts for runs on various data. Also add statements to time the algorithms if there is a routine to access the machine clock. Which is the faster algorithm? By how much?
  2. Prove that HCF(m,n) cycles are required in the cyclic swap-segments algorithm.
  3. Program a solution to the following generalisation of the swap segments problem. Given A(1..n) and i, j, k, l such that 1<=i<=j<=k<=l<=n+1, A(i..j-1) must be swapped with A(k..l-1).
    eg.       i=2, j=4, k=6, l=9, n=10
                 i   j   k     l
    before A = 1 2 3 4 5 6 7 8 9 10
                 ---     -----
    after  A = 1 6 7 8 4 5 2 3 9 10
                 -----     ---