Computer scientists and software developers often need to assess the running time of algorithms. An algorithm is a finite set of instructions that a computer follows to solve a particular problem; finite means that the algorithm must terminate at some point during a program's execution. How does this concept of an algorithm relate to Big-O notation?
What is Big-O notation?
Simply stated, Big-O notation is a mathematical tool that computer scientists and software developers use to assess the running time of an algorithm. A handful of well-known mathematical functions are used to measure the running time of, say, a C# method; these functions were chosen based on how long a particular operation takes to accomplish its goal(s). Note that this is a theoretical tool for measuring running time. If one wishes to know the exact running time of an algorithm on a particular processor, one must write benchmarks to obtain that information. An Intel processor may take fourteen milliseconds to add two integers, while an AMD processor may take fifteen to accomplish the same goal. Why, then, learn Big-O notation?
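As a point of reference, here is a minimal sketch of what such a benchmark might look like in C#, using the Stopwatch class; the class name, loop count, and measured operation are arbitrary choices for illustration:

```csharp
using System;
using System.Diagnostics;

class BenchmarkDemo
{
    static void Main()
    {
        // Time one million integer additions on this machine.
        // The measured figure is hardware-specific, which is exactly
        // the detail Big-O notation lets us abstract away.
        var stopwatch = Stopwatch.StartNew();

        int sum = 0;
        for (int i = 0; i < 1_000_000; i++)
        {
            sum = sum + 1; // the operation under measurement
        }

        stopwatch.Stop();
        Console.WriteLine($"sum = {sum}, elapsed = {stopwatch.Elapsed.TotalMilliseconds} ms");
    }
}
```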
Big-O notation is useful when one wishes to abstract away such hardware details and assess the running time from the code under consideration, rather than by writing benchmarks every time the algorithm is assessed. Big-O notation gives us a more practical, general way to evaluate the performance of an algorithm at the code level, using any programming language, any computer, and any operating system. With this in mind, let us consider how exactly Big-O is used and the different functions used to assess running time.
When we read Big-O notation, we say that an algorithm is of order x, where x is the function used to assess the algorithm's running time. For example, when measuring a searching algorithm, we might say that it is of order n. The phrase "order of" is denoted by O(), which is where the term Big-O comes from. The parentheses contain the function used to measure the running time of the algorithm.
Considering our example above once more, we may say that our hypothetical search algorithm is an O(n) algorithm.
Let us now introduce the most popular functions used by computer scientists and software developers when assessing the running time of algorithms.
We first introduce O(1). We use this notation to denote algorithms whose running time is constant. Examples of such operations include adding two numbers, subtracting two numbers, assigning a variable, accessing an array element, and executing a return or break statement. Notice, however, that variable declarations are not counted; only assignments are counted.
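The following short sketch illustrates a few of these constant-time operations in C#; the variable names are arbitrary:

```csharp
int[] numbers = { 10, 20, 30 };

int a = 5;              // variable assignment: O(1)
int b = a + 3;          // adding two numbers: O(1)
int c = b - 2;          // subtracting two numbers: O(1)
int first = numbers[0]; // accessing an array element: O(1)
```

Each of these statements takes the same amount of time to run no matter how large the input is, which is what makes it constant time.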
A second notation is O(lg n), which is read as order log n. Suffice it to say that some algorithms repeatedly split the number of elements they have to work with in two. For this reason, log base two of n, where n is the input size, is abbreviated as lg n. A typical O(lg n) algorithm is binary search.
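As an illustration, here is a minimal binary search sketch in C# (the method name BinarySearch is our own choice); each iteration discards half of the portion of the sorted array still under consideration, which is why the running time is O(lg n):

```csharp
// Returns the index of target in the sorted array, or -1 if absent.
public static int BinarySearch(int[] sorted, int target)
{
    int low = 0;
    int high = sorted.Length - 1;

    while (low <= high)
    {
        int mid = low + (high - low) / 2;

        if (sorted[mid] == target)
            return mid;
        else if (sorted[mid] < target)
            low = mid + 1;  // discard the lower half
        else
            high = mid - 1; // discard the upper half
    }

    return -1;
}
```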
Other notations include O(n), O(n lg n), O(n^2), O(n^3), O(2^n), and O(n!). As we go on in our exploration of data structures and algorithms, we will encounter each of these. Notice that we use these notations to measure the best and worst cases of algorithms; average-case scenarios are more difficult to measure. By best- and worst-case scenarios, we are referring to how long an algorithm will run under a given circumstance. For example, our hypothetical search algorithm might first check whether the array is empty. If it is, the best-case running time of the algorithm is O(1), since a comparison takes constant time. If the array is not empty, then in the worst case the algorithm is O(n).
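A minimal sketch of such a search, with a hypothetical method name Contains, makes the two cases concrete; the empty-array check runs in constant time, while the loop may have to visit every element:

```csharp
// Returns true if target occurs in elements.
public static bool Contains(int[] elements, int target)
{
    // Best case: the array is empty, so we answer after a single
    // constant-time comparison, which is O(1).
    if (elements.Length == 0)
        return false;

    // Worst case: target is absent (or sits in the last slot), so
    // the loop runs n times, which is O(n).
    for (int i = 0; i < elements.Length; i++)
    {
        if (elements[i] == target)
            return true;
    }

    return false;
}
```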
Let us end this article by examining some sample C# code and evaluating its running time using Big-O notation.
```csharp
public int Sum(int n, List<int> elements) {
    int sum = 0;
    int i = 0;
    for (; i < n; i++) {
        sum += elements[i];
    }
    return sum;
}
```
Inside the loop header, the comparison i < n and the increment i++ run on every iteration; since i++ is a shortcut for adding one to i and assigning the result back to it, the header contributes three operations per iteration.
The loop body, sum += elements[i], is likewise an array access, an addition, and an assignment, another three operations per iteration. Each group runs once for every value of i, so both are multiplied by the input size n, giving 3n + 3n operations, because how often they run depends on how large n is.
Add to this the return statement and the variable assignments sum = 0 and i = 0, and we have 3n + 3n + 1 + 1 + 1 = 6n + 3. When it comes to Big-O notation, however, we throw away the constants and keep only the terms involving n (or other functions of the input size) as our answer. Thus, for our Sum() algorithm, we are left with n, and we say that the algorithm is an O(n) operation.
If we were instead given an analysis function such as 3n^2 + n! + 3 + 2, we would say that the algorithm is O(n!): we eliminate the constants as before, but we also perform an extra step and keep only the fastest-growing term in the sum. Choosing this dominant term is the final step when analyzing an algorithm.
In future articles, we will develop commonly used data structures and measure the running time of each of their methods (e.g., adding elements, sorting, searching, etc.).
For now, consider practicing algorithm analysis.