In algorithmic analysis, one is interested in the amount of resources used by an algorithm and in the algorithm's correctness. The time complexity measures the amount of time used. The space complexity measures the amount of space used. The best case, average case and worst case complexities are of interest. The asymptotic complexity when a problem becomes "large" is usually important and O( ) notation is used to quantify this. Logic is invaluable in program proof.
The mathematics required for this analysis is summarised here. A mathematically able reader should skip this chapter. Other readers should scan it quickly and refer back to it when the material is needed in later sections.
Recurrence relations often arise in calculating the time and space complexity of algorithms. The amount of time (or space) that a program takes, Tk, is typically a function of the size, k, of the input data. A recurrence relation arises when Tk is some function of Tk' for k'<k. The solutions of recurrence relations are often best proved by induction as illustrated in some of the following sections.
The following recurrence

    T1 = b
    Tn = Tn/2 + a,   n>1

has the solution

    Tn = a*log2(n) + b

when n is a power of 2. The solution should be no surprise because when the problem size is doubled from n/2 to n the resource used only increases by a constant a. Repeated doubling, i.e. 1, 2, 4, 8, 16, 32, ..., reaches any limit n in about log2(n) steps. Unless otherwise stated, logarithms are to base two.
The mathematical proof of the solution by induction has two stages. First the base case, n=1, is shown to be a particular case of the general solution: a*log2(1) + b = b = T1. Secondly, assuming the solution holds up to n/2, it is shown to hold for the next value, n: Tn = Tn/2 + a = (a*log2(n/2) + b) + a = a*(log2(n) - 1) + b + a = a*log2(n) + b.
The binary-search algorithm over n (sorted) objects has this time-complexity. It is a very good complexity to have because Tn grows only slowly as n gets large.
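Binary search can be sketched in C as follows; this is an illustrative version (function name and conventions are this sketch's own), searching a sorted array and halving the remaining range with each comparison, hence the O(log(n)) worst case.

```c
#include <stddef.h>

/* Binary search over a sorted array: each comparison halves the
   remaining range a[lo..hi], so the worst case makes about log2(n)
   probes.  Returns the index of key in a[0..n-1], or -1 if absent. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* midpoint, avoiding overflow of lo+hi */
        if (a[mid] == key)
            return mid;                 /* found */
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid - 1;               /* discard the upper half */
    }
    return -1;                          /* key not present */
}
```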
The solution of the following recurrence

    T1 = b
    Tn = Tn-1 + a,   n>1

can be found by expanding it:

    Tn = Tn-1 + a = Tn-2 + 2*a = ... = T1 + (n-1)*a = a*(n-1) + b

so Tn is O(n).
Many linear algorithms, linearly-recursive or iterative, have this time-complexity. See the section on linear search. While not as good as logarithmic complexity, linear complexity is "fair" in a sense: double the size of the problem and the algorithm uses twice as much resource.
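Linear search itself can be sketched in C (an illustrative version; the chapter's own routine may differ in detail). The loop examines a[0], a[1], ... in turn, so the cost obeys Tn = Tn-1 + a and is O(n).

```c
/* Linear search: examine each element in turn until the key is found
   or the array is exhausted.  Returns the index of key, or -1. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   /* found after i+1 comparisons */
    return -1;          /* n comparisons and not found: the worst case */
}
```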
A third class of algorithms has complexity given by the following recurrence relation.

    T1 = b
    Tn = 2*Tn-1 + a,   n>1

The solution for Tn is:

    Tn = (a+b)*2^(n-1) - a

which is O(2^n). The solution is by induction: the base case holds because (a+b)*2^0 - a = b = T1 and, assuming the solution for Tn-1,

    Tn = 2*Tn-1 + a = 2*((a+b)*2^(n-2) - a) + a = (a+b)*2^(n-1) - a
Many binary-recursive procedures have this time-complexity; see for example the (slow) Fibonacci routine. Exponential complexity grows very rapidly indeed with the size of the problem. Just adding one to the problem size causes the complexity to roughly double. If a problem requires algorithms with this complexity and the problem size is large then its solution by computer may be possible in principle but infeasible in practice.
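The (slow) Fibonacci routine mentioned above might be sketched as follows, assuming the convention fib(0)=0, fib(1)=1. Each call makes two recursive calls, so the running time roughly doubles when n increases by one.

```c
/* Binary-recursive Fibonacci: fib(n) calls fib(n-1) and fib(n-2), so
   its running time obeys roughly Tn = Tn-1 + Tn-2 + c, which grows
   exponentially with n -- hence "slow". */
long slow_fib(int n)
{
    if (n <= 1)
        return n;                            /* fib(0)=0, fib(1)=1 */
    return slow_fib(n - 1) + slow_fib(n - 2);
}
```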
Complexity and big O
The time or space used by an implementation of an algorithm can be measured exactly in seconds or in bytes for a particular type of computer but will be different on another computer. Thus the precise constant of proportionality for a particular computer is often not of interest. Alternatively, some operations may be chosen and the number of times these are executed can be used as a machine-independent measure of the time complexity. For example, the number of comparisons and assignments is usually chosen to compare sorting algorithms. It may be difficult to calculate the exact complexity of an algorithm but its long-term asymptotic behaviour for big problems may be easier to work out. Again, the constant of proportionality may be difficult to calculate exactly and is unimportant in many cases.
To address the issues discussed above, big `O' notation is used. Given two functions f and g, f is said to be of order g, or f is O(g), if f grows no more quickly than some constant times g past some fixed point. In symbols:

    f is O(g) iff there exist constants k>0 and m>=0
    such that f(n) <= k*g(n) for all n>m

Beyond m, f does not exceed k*g for some fixed k and m. E.g. x is O(x^2): take k=1 and m=1, since x <= x^2 for all x>1.
It can be seen that if m<n then x^m is O(x^n). Only the dominant term with the largest exponent is significant in a polynomial. Normally we write O( ) of a polynomial by giving the highest-order term with a coefficient of one, e.g. we write O(x^3) rather than O(7*x^3+x^2).
Eventually the exponential functions outstrip any polynomial ones. As a rough classification, problems having polynomial-time algorithms are often called feasible and those with only exponential-time algorithms are called infeasible. Say we have a problem, and current technology and budget allow us to solve it for sizes up to n. If a new machine with twice the power becomes available, then if the problem is O(n) the new machine enables problems of size 2*n to be solved. If the problem is O(n^2), problems of size sqrt(2)*n, about 1.4*n, can now be solved. These are big steps forward. However, if the problem is O(2^n), problems only up to size n+1 can be solved!
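The arithmetic behind these figures can be sketched as follows: with twice the power, solve T(n') = 2*T(n) for the new solvable size n'. (The helper names below are this sketch's own, not from the text.)

```c
#include <math.h>

/* New solvable problem size n' when computing power doubles,
   i.e. the solution of T(n') = 2*T(n) for each complexity class.
   Illustrative helpers; names are hypothetical. */
double new_size_linear(double n)      { return 2.0 * n; }        /* T(n)=n:   n' = 2n        */
double new_size_quadratic(double n)   { return sqrt(2.0) * n; }  /* T(n)=n^2: n' = sqrt(2)*n */
double new_size_exponential(double n) { return n + 1.0; }        /* T(n)=2^n: n' = n+1       */
```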
The time or space used by an algorithm generally depends on some characteristic of the input data, often its size or length. Looking ahead to various algorithms defined in later chapters, the simple two-loop sorting algorithms on n items take O(n^2) time and the faster algorithms such as heap-sort take O(n*log(n)) time. Linear search over n items takes O(n) time on average and in the worst case. Binary search takes O(log(n)) time.
Care must be taken in specifying the yardstick for measuring a problem's size. For example, the length of an integer is the number of digits and is proportional to the logarithm of the integer's value. To avoid confusion it must be clear which quantity is taken as its size.
Note that slightly more general Computer Science definitions can be given where f and/or g may take negative values but the above are sufficient for time- and space-complexity functions. More general definitions have been used in mathematics. See Knuth 1976.
Some Common Forms
The complexity of a single loop is often obvious. If the body of the loop always takes (approximately) the same amount of time to execute, regardless of i or n, the loop's overall complexity is O(n).
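As an illustration, a single loop with a constant-time body (a hypothetical example, summing squares):

```c
/* A single loop over i = 1..n: the body runs n times, and each run
   takes constant time, so the whole loop is O(n). */
long sum_of_squares(int n)
{
    long total = 0;
    for (int i = 1; i <= n; i++)
        total += (long)i * i;   /* constant-time body */
    return total;
}
```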
Nested loops often, but not always, have worse time complexity than a single loop.
Again provided that the time to execute the body is independent of i, j, m and n, the complexity is O(m*n). Many programs on two-dimensional arrays have this form. For example the dynamic-programming algorithm on strings A and B takes O(m*n) time where m=|A| and n=|B|. If the inner and outer loops are both executed from 1 to n, the complexity is O(n^2). If the inner loop is executed only i times on the i-th iteration of the outer loop, the complexity is of the same order.
The body is executed 1+2+3+...+n = n*(n+1)/2 times which is still O(n^2).
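A sketch of such a triangular nested loop, with a counting-variable added to confirm the total (a hypothetical example):

```c
/* Nested loops where the inner loop runs i times on the i-th pass of
   the outer loop: the body executes 1+2+...+n = n*(n+1)/2 times,
   which is O(n^2). */
long triangular_count(int n)
{
    long count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i; j++)
            count++;            /* counting-variable for the body */
    return count;               /* equals n*(n+1)/2 */
}
```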
Estimation of Complexity
Sometimes it is not easy to get a formula for the complexity of an algorithm. In such cases it may be possible to estimate it by experiment. Counting-variables can be added to the program, incremented when some critical operation is carried out and the final totals printed. The running time can also be measured, either by a stop-watch or better by calling a routine to print the computer system's clock. The complexity might be inferred by examining how such measures vary with the problem size.
The accuracy of timing a program or an operation can be improved by timing a number of executions, perhaps in a loop, and dividing the total time taken by that number. A time-shared computer is used by many people simultaneously. The elapsed time taken by a program depends on the system load. Therefore any timing done on a shared machine must be based on the central processor time devoted to the particular program under study and not on the elapsed time.
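The timing-by-repetition technique can be sketched in C using the standard clock() routine, which measures processor time devoted to the program rather than elapsed time. (The helper and its demonstration operation are this sketch's own.)

```c
#include <time.h>

/* Time an operation by running it many times and dividing the total
   processor time by the repetition count.  clock() measures CPU time
   for this program, not wall-clock time, so a loaded shared machine
   does not distort the result. */
double time_per_call(void (*op)(void), long repetitions)
{
    clock_t start = clock();
    for (long i = 0; i < repetitions; i++)
        op();
    clock_t finish = clock();
    return (double)(finish - start) / CLOCKS_PER_SEC / repetitions;
}

/* A trivial, hypothetical operation to time. */
static void demo_op(void) { volatile int x = 0; x++; }
```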
Examining differences between adjacent terms in a series
can indicate the form of the underlying function that defines the series.
A linear function, T(n) = a*n+b, gives rise to a constant difference between T(n) and T(n-1):

    T(n) - T(n-1) = (a*n+b) - (a*(n-1)+b) = a
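The difference test can be sketched in C, assuming the measured values are held in an array (an illustrative helper):

```c
/* First differences of a measured series t[0..n-1]:
   a linear T(n) = a*n + b gives a constant difference a. */
void differences(const double t[], int n, double d[])
{
    for (int i = 0; i < n - 1; i++)
        d[i] = t[i + 1] - t[i];   /* d[i] = T(i+2) - T(i+1) */
}
```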
Plotting graphs of measured quantities against the problem size
is another useful technique.
If the complexity has the form T(n) = a*n^d for some order d then log(T(n)) = log(a*n^d) = d*log(n) + log(a). In this case a plot of log(T(n)) against log(n) should give a straight line with slope d and intercept log(a).
If T(n) = a^n then log(T(n)) = log(a^n) = n*log(a) and a plot of log(T(n)) against n should give a straight line with slope log(a).
Extra care must be taken if the behaviour of an algorithm depends not only on the size of the input but also on other characteristics of it. For example, some sorting algorithms examined later run faster or slower depending on the amount of ordering in the input. The results of a large number of runs on random data may have to be averaged to enable conclusions to be drawn.
Logic and Boolean
The use of logic is indispensable in computer programming. It is essential to know that a program is correct. For this reason some programming languages provide forms to help in proving programs correct, e.g. `assert' and similar statements.
These forms are used to include tests of conditions that are necessary to the correct functioning of a program. If a test fails, the system halts the program with an informative message rather than allowing the program to run amok. Most systems also have flags or switches that allow this checking to be turned off once confidence in the program has been established. When carefully chosen, the tests can also be used to prove that a program is correct. To do this it is necessary to prove that the program terminates and that it produces the correct results when it terminates. (If your programming language does not have `assert' etc. or equivalents, you can generally create your own with suitable procedures or macros.)
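In C, for example, the standard assert macro serves this purpose, and compiling with NDEBUG defined turns the checking off once confidence is established. A sketch (the routine itself is this sketch's own example):

```c
#include <assert.h>

/* Integer square root with its necessary conditions stated as
   assertions: the precondition documents what the caller must
   guarantee, the postcondition what the result must satisfy.
   Compiling with -DNDEBUG removes the checks. */
int int_sqrt(int n)
{
    assert(n >= 0);                     /* precondition */
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;
    assert(r * r <= n && n < (r + 1) * (r + 1));   /* postcondition */
    return r;
}
```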
The tests that can be included in a program are boolean expressions.
Operators vary between programming languages. Note that these rules are mathematical ideals which a language can only attempt to implement. A point of difference is that in some languages, e.g. Turing, `p and q' is defined only when both p and q always terminate correctly and evaluate to true or to false (and have no side-effects), and similarly for `p or q'. In C, however,

    x != 0 && 1/x > 1

will return true or false for all values of x. But

    1/x > 1 && x != 0

is not defined when x is zero due to trying to divide one by zero. The former is making use of `false and p = false' to avoid evaluating p when x=0. The fact that C and some other programming languages do this "short-cut" evaluation is often very convenient. A similar short-cut is taken for `true or p'. Some languages, such as Pascal, do not take these short-cuts.
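The short-cut behaviour can be demonstrated with a small C function (an illustrative example; the name and parameter are this sketch's own):

```c
/* Short-circuit evaluation in C: && never evaluates its right operand
   when the left operand is false ("false and p = false"), and ||
   skips its right operand when the left is true.  So the guard below
   never divides by zero. */
int safe_reciprocal_exceeds(int x, int limit)
{
    /* when x == 0 the left operand is false, so 1/x is never evaluated */
    return x != 0 && 1 / x > limit;
}
```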
Sometimes, to prove a program correct, it is necessary to use logical expressions that are not easily expressed as boolean expressions in the programming language. In such cases they can be included as comments. Comments are not checked automatically, but they are very informative for another reader.
Also see the [Wff] pages for more on logic.
-- © L.A. 1984