The complexity of an algorithm states how fast the algorithm is (how many elementary operations it performs) with respect to the size of the input data. Algorithms are usually classified by their so called asymptotic complexity. For every (asymptotic) complexity class it holds that an algorithm from the lower class is, for all inputs larger than some lower bound, always faster than an algorithm from the higher class, regardless of the speed of the computers used for the measurement (one computer may be c-times slower than the other, where c is a constant).
To distinguish classes of asymptotic complexity we may use the scale of powers. It states that when the size of the data approaches infinity, there exists no multiplicative constant c such that an algorithm from a higher complexity class would be faster than an algorithm from some lower class.
What does it say?
Suppose we have two algorithms of comparable complexity – one has to perform n operations and the second 2·n. Then we may run the second algorithm on a machine that is twice as fast and we will not notice any difference between them. But if one algorithm has a complexity of n and the second n², then no computer, however fast, will help: if we double the data, the second computer would have to be 4 times faster. Provided the data are ten times bigger, the computer would have to be 100 times faster...
In simple terms: if two algorithms belong to different complexity classes, then there always exists an input length from which one of the algorithms is always faster, regardless of the speed of either computer.
In the illustration we can see that even if we multiply the asymptotically slower function by a very small constant (e.g. 0.001), there still exists an intersection of the two functions. Hence there is some size of input from which, for all larger inputs, the algorithm with the lower asymptotic complexity is always faster. By modifying the multiplicative constant we only move the intersection along the x-axis.
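To make this concrete, here is a minimal Java sketch (my own addition, not from the original article; the cost functions 100·n and 0.001·n² are illustrative assumptions) that tabulates both costs for growing n and shows the crossover point beyond which the lower complexity class always wins.

/** Illustrative sketch: a higher complexity class loses beyond some input size. */
public class GrowthCrossover {
    public static void main(String[] args) {
        for (long n = 1_000; n <= 10_000_000; n *= 10) {
            double lower  = 100.0 * n;      // lower class with a large constant
            double higher = 0.001 * n * n;  // higher class with a tiny constant
            System.out.printf("n = %10d   100*n = %16.0f   0.001*n^2 = %16.0f%n",
                    n, lower, higher);
        }
        // The quadratic cost overtakes the linear one around n = 100 000,
        // no matter how small its constant is – the constant only moves the crossover.
    }
}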
How can the asymptotic complexity be computed?
The asymptotic complexity can be computed as the count of all elementary operations, or more easily as the count of all operations that modify the data, or even only as the count of all comparisons. All of these approaches usually produce the same result.
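For example, a sketch of the "count only the comparisons" approach (the findMax example and its names are my own, not from the article): we instrument a simple algorithm with a counter and watch how the count grows with the input size.

/** Sketch: estimating complexity by counting comparisons (illustrative example). */
public class ComparisonCount {

    static long comparisons; // counter of comparisons performed

    /** Returns the maximum of the array while counting every comparison made. */
    static int findMax(int[] data) {
        int max = data[0];
        for (int i = 1; i < data.length; i++) {
            comparisons++; // one elementary comparison
            if (data[i] > max) {
                max = data[i];
            }
        }
        return max;
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            comparisons = 0;
            findMax(new int[n]); // the content does not matter for the count
            System.out.println("n = " + n + "  comparisons = " + comparisons);
        }
        // The counter is always exactly n - 1, so findMax performs Θ(n) comparisons.
    }
}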
Orders of growth
When analyzing most algorithms we cannot determine exactly one complexity class, because the number of operations performed depends on the data itself. For example, we may say that an algorithm is in Ω(n·log(n)) and in O(n²), which means that the algorithm never terminates in fewer than n·log(n) operations, but on the other hand its complexity is never worse than quadratic.
O(f(x)) – Omicron f(x) – the algorithm always operates asymptotically better than or equally as f(x); f(x) is an upper bound.
Ω(f(x)) – Omega f(x) – the algorithm always operates asymptotically worse than or equally as f(x); f(x) is a lower bound.
Θ(f(x)) – Theta f(x) – the algorithm always operates asymptotically equally as f(x). That means the algorithm is in O(f(x)) and in Ω(f(x)) as well.
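As an assumed example (linear search; not taken from the article) of why O and Ω can differ: the best case finds the key immediately, the worst case must scan the whole array, so the bounds Ω(1) and O(n) do not meet and no single Θ bound describes all inputs.

/** Sketch: linear search has different lower and upper bounds. */
public class Bounds {

    static int linearSearch(int[] data, int key) {
        for (int i = 0; i < data.length; i++) {
            if (data[i] == key) {
                return i;  // best case: key at index 0 – a single comparison, Ω(1)
            }
        }
        return -1;         // worst case: key absent – n comparisons, O(n)
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1, 5};
        System.out.println(linearSearch(data, 7)); // best case, found after one comparison
        System.out.println(linearSearch(data, 4)); // worst case, five comparisons, prints -1
    }
}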
Example
/**
 * Bubble sort
 * @param array array to be sorted
 */
public static void bubbleSort(int[] array) {
    for (int i = 0; i < array.length - 1; i++) {
        for (int j = 0; j < array.length - i - 1; j++) {
            // swap neighbouring elements that are out of (ascending) order
            if (array[j] > array[j + 1]) {
                int tmp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = tmp;
            }
        }
    }
}
The inner loop of this algorithm – bubble sort – is performed (n − 1) + (n − 2) + ... + 1 times, which according to the formula for the sum of an arithmetic progression is (n² − n)/2 operations. Because we are calculating the asymptotic complexity, we can omit the multiplicative constant 1/2. If the expression contained some additive constants, we would omit them as well. So the number of operations performed by bubble sort grows as n². Because this calculation describes both the best and the worst case scenario, the asymptotic complexity of bubble sort is Θ(n²).
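The formula can also be checked empirically: the following sketch (my own addition) wraps the loops of bubbleSort with a comparison counter and compares the measured count against n·(n − 1)/2.

import java.util.Random;

/** Sketch: counting the comparisons bubble sort performs for several input sizes. */
public class BubbleSortCount {

    /** Same loops as bubbleSort above, but returns the number of comparisons made. */
    static long countComparisons(int[] array) {
        long comparisons = 0;
        for (int i = 0; i < array.length - 1; i++) {
            for (int j = 0; j < array.length - i - 1; j++) {
                comparisons++;
                if (array[j] > array[j + 1]) {
                    int tmp = array[j];
                    array[j] = array[j + 1];
                    array[j + 1] = tmp;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        Random random = new Random(42);
        for (int n = 100; n <= 10_000; n *= 10) {
            int[] data = random.ints(n).toArray();
            long measured = countComparisons(data);
            long expected = (long) n * (n - 1) / 2; // sum of the arithmetic progression
            System.out.println("n = " + n + "  measured = " + measured + "  n(n-1)/2 = " + expected);
        }
        // The measured count equals n(n-1)/2 for every input, which is why the
        // complexity is Θ(n^2) in the best and the worst case alike.
    }
}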