Examples 

This chapter presents some algorithms on the inbuilt types. It is an introduction by examples to some principles of algorithm design.

Swap Segments

Imagine the problem of exchanging two segments of an array. Given an array A of n elements and an integer m, 0<=m<=n, exchange A(1..m) with A(m+1..n). A(i..j) stands for the elements A(i), A(i+1), ..., A(j).
In general the elements of the array might be integers, as in the example, or characters or other objects. This problem can arise when sorting sequences of elements. Variable length sequences can be stored as segments of an array and sorting the sequences might involve swapping two sequences (segments) that are out of order. This is an exercise in algorithm design and in what follows we want an algorithm that is correct (solves the problem), terminates, and is efficient in terms of space (compact) and time (fast). In order to get ideas, it often helps to think of simpler but related problems. If the values of two scalar variables, X and Y, are to be swapped, a temporary object must be used to hold the value of X (say) while Y is assigned to X. The temporary is then assigned to Y:
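A sketch of the scalar swap in Python (the chapter's own listings are in Algol 68 and are not reproduced here; the variable names x, y and t are illustrative):

```python
# Swapping two scalar variables X and Y via a temporary.
x, y = 3, 7    # example values
t = x          # the temporary holds the old value of X
x = y          # Y is assigned to X
y = t          # the temporary is assigned to Y
```

After these three assignments the original values of x and y have been exchanged.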
Algorithm design often involves restating the problem in some other way. Looking at the problem again, it can be seen that the elements of the array are to be left-shifted by m places cyclically or right-shifted by n-m places. This can form the basis of an algorithm. The following code shifts left by one place:
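A Python sketch of the one-place cyclic left shift (the original Algol 68 listing is not reproduced here; 0-based indexing is used, so A(1..n) corresponds to a[0:n]):

```python
def shift_left_one(a):
    # Cyclically shift the elements of a left by one place, in place.
    n = len(a)
    if n == 0:
        return
    t = a[0]                  # save the first element in a temporary
    for i in range(n - 1):
        a[i] = a[i + 1]       # move each remaining element one place left
    a[n - 1] = t              # the saved first element wraps to the end
```

Repeating this m times exchanges the two segments, but moves every element m times.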
Note that the second algorithm moves each element of the array m times. If each element were moved a bounded number of times, as in the first algorithm, then a solution running in time proportional to n (only) might be possible. Looking at an example of the problem, note that A(1) is replaced by A(m+1) in the final state, A(m+1) is replaced by A(2m+1) and so on, counting m places to the right at each step.
A cycle of replacements can be carried out in this way until A(1), which must have been stored in a temporary, is reached again. Each element is moved just once or twice. Unfortunately it is not necessarily true that all elements are moved in such a cycle. In the above example only half of the elements are moved. In general, a cycle fails to move all elements when m and n have a common factor greater than one. In the example, m and n have a highest common factor of two. However other cycles can be started at positions 2, 3 and so on until all elements have been moved during some cycle. This forms the basis of an algorithm that moves each element just once or twice. Although the algorithm contains two nested loops, the total number of times that the body of the inner loop is executed is approximately n, and the total running time is therefore approximately proportional to n. Only a small, constant amount of extra space is used and so both the time and space goals have been achieved.

In order to run in time proportional to n it is sufficient to move each element a bounded number of times and there is another much simpler algorithm that also achieves this. It requires looking at the problem in yet another way and most people are a little "shocked" when they first see it, often rather annoyed if they did not discover it themselves. Rather than thinking about cyclic shifts it relies upon reversals of segments of the array. Reversing A(1..m) and reversing A(m+1..n) gets the desired elements adjacent to each other, particularly A(1) and A(n), but in reverse order. Thus finally reversing the whole array A(1..n) solves the original problem.
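Both linear-time algorithms can be sketched in Python (the chapter's own listings are in Algol 68 and are not reproduced here; 0-based indexing is used, so exchanging A(1..m) with A(m+1..n) becomes a cyclic left shift of a[0:n] by m places):

```python
from math import gcd

def swap_segments_cyclic(a, m):
    # Exchange a[0:m] with a[m:n] by following cycles of replacements.
    # gcd(m, n) cycles cover all elements; each element moves once or twice.
    n = len(a)
    if n == 0 or m % n == 0:
        return
    for start in range(gcd(m, n)):
        t = a[start]              # temporary holds the start of this cycle
        i = start
        while True:
            j = (i + m) % n       # position of the element that replaces a[i]
            if j == start:
                break
            a[i] = a[j]
            i = j
        a[i] = t                  # the temporary closes the cycle

def reverse(a, lo, hi):
    # Reverse a[lo..hi] inclusive, in place.
    while lo < hi:
        a[lo], a[hi] = a[hi], a[lo]
        lo += 1
        hi -= 1

def swap_segments_reverse(a, m):
    # Exchange a[0:m] with a[m:n] by three reversals.
    n = len(a)
    reverse(a, 0, m - 1)          # reverse the first segment
    reverse(a, m, n - 1)          # reverse the second segment
    reverse(a, 0, n - 1)          # reverse the whole array
```

For example, either routine turns [0, 1, 2, 3, 4, 5] with m=2 into [2, 3, 4, 5, 0, 1].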
Only a small, constant amount of extra space is used. Each array element is moved a bounded number of times, between two and four, so this algorithm also runs in time proportional to n only. The elements are moved more often than in the cyclic algorithm so the reversing algorithm is slower by a constant factor, but on the other hand it is simpler. Note that the way of looking at the problem, as a swap, as cyclic shifts or as reversals, has a profound effect on the algorithmic solution. It is important to be open to new ways of viewing a problem. The computer scientist's first responsibility is to find a correct algorithm but finding an efficient one comes a close second.

When it comes to testing and debugging the algorithms note what the special cases of data for the problem are. There are infinitely many possible inputs and they cannot all be tried but some thought reveals that passing a few key tests will give considerable confidence in a program. The values of the elements of the array do not matter, except that they should be different so that they can be recognised. Interesting values of n include 0, 1, 2 and one or two larger values. Interesting values of m include 0, 1, 2, n-1, n and some values where m and n do and do not have common factors. Do the algorithms presented exhaust all the possibilities for this apparently simple problem? It seems unlikely that there are any more sensible and yet radically different algorithms but one can never tell.

Manipulating Vectors and Matrices

A one-dimensional array (array l..h of T) is often called a vector and a two-dimensional array (array l1..h1, l2..h2 of T) is often called a matrix. Vectors and matrices have many uses. A simple example based around the weather is given here to illustrate some vector and matrix operations. A simple (!) model of the weather might consider only two states: sunny or raining.
Long observations may allow the probability of the state tomorrow to be estimated given the state of the weather today. This can be represented by a matrix W where W_{ij} is the probability of state j tomorrow given state i today.
Suppose that somehow we know the probability of each state on some future day D. This information can be represented by a vector [S,R] where S+R=1.0. What about the day after D? It might be sunny on D and continue sunny, with probability 0.8×S, or it might be raining on D but fine up, with probability 0.3×R, in total 0.8×S+0.3×R chance of being sunny on D+1. Similarly the probability of rain on D+1 is 0.2×S+0.7×R. The probabilities for D+1 are [0.8×S+0.3×R, 0.2×S+0.7×R]. This calculation is an example of multiplying a vector and a matrix.
In general, a vector and a matrix are multiplied as follows. The vector must have as many elements as the matrix has rows. Each column of the matrix is treated as a vector. Corresponding elements of the given vector and the first column of the matrix are multiplied and the products are added together. This gives the first element of the result. This is repeated for the second column and so on. Note that if V has m elements and M is an m×n matrix then the result has n elements and that the algorithm takes time approximately proportional to m×n.

The transition probabilities for two days ahead can be computed from the previous matrix W by considering all the possibilities.
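A Python sketch of vector-matrix multiplication, using the weather transition matrix from the example (the original Algol 68 listing is not reproduced here):

```python
def vec_mat_mul(v, m):
    # Multiply vector v (length r) by matrix m (r rows, c columns);
    # the result has c elements, one inner product per column.
    rows, cols = len(m), len(m[0])
    assert len(v) == rows
    return [sum(v[i] * m[i][j] for i in range(rows)) for j in range(cols)]

# W[i][j] = probability of state j tomorrow given state i today,
# with state 0 = sunny and state 1 = raining.
W = [[0.8, 0.2],    # today sunny: 0.8 sunny, 0.2 rain tomorrow
     [0.3, 0.7]]    # today raining: 0.3 sunny, 0.7 rain tomorrow
```

For instance, vec_mat_mul([S, R], W) gives [0.8×S+0.3×R, 0.2×S+0.7×R], the probabilities for day D+1.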
Two matrices A and B are multiplied to give a matrix Ans as follows. The i^{th} row of A and the j^{th} column of B are treated as vectors. Corresponding elements are multiplied and the products added together to give the (i,j)^{th} element of Ans. Note that this is equivalent to doing a vector-matrix multiplication for each row of A. In the weather example, W gives the transition probabilities one day ahead, W^{2} gives the transition probabilities two days ahead, W^{3} three days ahead and so on. The matrices being multiplied need not be square; it is sufficient (and necessary) for the second dimension of the first matrix to match the first dimension of the second matrix. Multiplying an l×m matrix and an m×n matrix gives an l×n matrix. The algorithm clearly takes time roughly proportional to l×m×n as it spends most of its time within the triply-nested loops. This equals n^{3} if l=m=n. Strassen gave an algorithm that is faster for very large matrices but for most practical applications the given algorithm is faster.

Matrix addition and subtraction can also be defined on m×n matrices. It can be shown that +, - and × on square matrices obey most, but not all, of the usual laws of real and integer +, - and ×.
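The triply-nested loops can be sketched in Python as follows (the original Algol 68 listing is not reproduced here):

```python
def mat_mul(a, b):
    # Multiply an l×m matrix a by an m×n matrix b, giving an l×n matrix.
    l, m, n = len(a), len(b), len(b[0])
    assert all(len(row) == m for row in a)   # dimensions must match
    ans = [[0.0] * n for _ in range(l)]
    for i in range(l):
        for j in range(n):
            s = 0.0
            for k in range(m):               # inner product of row i of a
                s += a[i][k] * b[k][j]       # and column j of b
            ans[i][j] = s
    return ans
```

In the weather example, mat_mul(W, W) gives W^{2}, the transition probabilities two days ahead.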
Strassen

Strassen showed how to multiply two n×n matrices in O(n^{log_{2}(7)}) time, which is faster than O(n^{3}); note that 3=log_{2}(8). The algorithm divides each array into four quarters. It then performs seven (hence the log_{2}(7)) matrix multiplications, recursively, on arrays of this smaller size. The algorithm is easy to write in Algol68 because it has builtin array slicing operations, e.g. a[i:j,m:n], which select subarrays. The algorithm can of course be coded in any reasonable programming language. Subsequent to Strassen, slightly faster and even more complex matrix multiplication algorithms have been devised.

© L.A. 1984
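A compact Python sketch of Strassen's algorithm for n a power of two (the splitting into quarters and the seven recursive products follow the description above; a practical version would pad other sizes to a power of two and fall back to the ordinary algorithm below some cut-off size):

```python
def strassen(a, b):
    # Multiply two n×n matrices, n a power of two, with 7 recursive products.
    n = len(a)
    if n == 1:
        return [[a[0][0] * b[0][0]]]
    h = n // 2

    def quad(m):    # split m into its four h×h quarters
        return ([row[:h] for row in m[:h]], [row[h:] for row in m[:h]],
                [row[:h] for row in m[h:]], [row[h:] for row in m[h:]])

    def add(x, y):
        return [[x[i][j] + y[i][j] for j in range(h)] for i in range(h)]

    def sub(x, y):
        return [[x[i][j] - y[i][j] for j in range(h)] for i in range(h)]

    a11, a12, a21, a22 = quad(a)
    b11, b12, b21, b22 = quad(b)
    # the seven smaller multiplications, done recursively
    p1 = strassen(add(a11, a22), add(b11, b22))
    p2 = strassen(add(a21, a22), b11)
    p3 = strassen(a11, sub(b12, b22))
    p4 = strassen(a22, sub(b21, b11))
    p5 = strassen(add(a11, a12), b22)
    p6 = strassen(sub(a21, a11), add(b11, b12))
    p7 = strassen(sub(a12, a22), add(b21, b22))
    # combine the products into the quarters of the answer
    c11 = add(sub(add(p1, p4), p5), p7)
    c12 = add(p3, p5)
    c21 = add(p2, p4)
    c22 = add(sub(add(p1, p3), p2), p6)
    # reassemble the four quarters into one n×n matrix
    top = [c11[i] + c12[i] for i in range(h)]
    bot = [c21[i] + c22[i] for i in range(h)]
    return top + bot
```

Each level of recursion replaces one multiplication of size n by seven of size n/2, which is where the n^{log_{2}(7)} running time comes from.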
Exercises


