Understanding Time Complexity – For you and me

Understanding what time complexity is, and how to use asymptotic notation, is an important part of a software engineer's learning curve. When you are comparing two algorithms for the same purpose, or designing an algorithm and trying to optimize it, comparing them in terms of asymptotic notation is the simplest and most effective of the available approaches. In fact, comparing efficiency in terms of asymptotic notation is the way software engineers talk about the efficiency of an algorithm.

Of course, one can go ahead and use a microsecond- or nanosecond-precision clock to time an algorithm and compare the results. But what if we change the system? What if the precision of the clock varies from system to system? And how do we gauge the effect of inputs of various lengths?

Asymptotic analysis gives a simple way to represent the time complexity (execution complexity) of an algorithm in terms of the size of its input. There is a bit of math behind it, though nothing too exact; for this particular blog, however, we are going to avoid the math and cover just enough of asymptotic analysis for a software engineer to know and use.

With that context in mind, I thought I would quickly write down some of the basics of asymptotic analysis, since I was doing a quick refresher of the concept with some of my friends. So this is not authoritative content, but it is bound to be simple and meaningful.

When we talk about asymptotic notation, there are three major symbols, and what each of them represents is important.

  • Big Oh (O): the upper bound, or the worst-case scenario.
  • Big Omega (Ω): the lower bound, or the best-case scenario.
  • Theta (Θ): both the upper and lower bound, or the average-ish scenario.
The three important symbols of asymptotic analysis

Of the three, in most cases we worry only about the 'Big Oh' notation. As an algorithm designer, you care about the worst-case scenario. In simple words: given the worst possible input, how much time is your algorithm going to take, in terms of the size of that input?
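
As a quick illustration, here is a minimal sketch in Python (the function name and data are my own, purely for illustration): linear search is O(n) because, with the worst possible input, the target sitting at the very end or missing entirely, the loop has to look at all n elements.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent. O(n) worst case."""
    for index, value in enumerate(items):
        if value == target:
            return index          # best case: found after one comparison
    return -1                     # worst case: n comparisons, target absent

data = [4, 8, 15, 16, 23, 42]
print(linear_search(data, 4))     # best case: the first element
print(linear_search(data, 99))    # worst case: all 6 elements checked
```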

Some of the common scales of asymptotic notation are shown below. Do not worry if you do not understand them right away; we will by the end of this blog.

| Term | Notation | Example |
| --- | --- | --- |
| Constant | O(1) | Not affected by input size |
| Logarithmic | O(log n) | Binary search |
| Linear | O(n) | Single loop |
| N log N (Linearithmic) | O(n log n) | Merge sort (divide and conquer) |
| Quadratic | O(n²) | Bubble sort |
| Cubic | O(n³) | Three nested loops |
| Exponential | O(2ⁿ) | Brute force |
| Factorial | O(n!) | Travelling salesman |
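
To make one row of the table concrete, here is a hedged sketch of binary search in Python (illustrative code of mine, not taken from any particular library): each comparison halves the remaining range, which is exactly where the O(log n) bound comes from.

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent. O(log n)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2           # midpoint of the current range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1                 # discard the lower half
        else:
            high = mid - 1                # discard the upper half
    return -1                             # about log2(n) halvings and we are done

print(binary_search([1, 3, 5, 7, 9, 11], 9))   # -> 4
print(binary_search([1, 3, 5, 7, 9, 11], 2))   # -> -1
```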

From the table above, we now know some of the common terms used with 'Big Oh'. They are listed in increasing order of complexity: simply put, an algorithm with a complexity of O(1) is better than one with O(log n), which is better than one with O(n log n), and so on.
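
For contrast, here is an equally minimal sketch of a quadratic pattern (again illustrative Python of my own, not from the table's examples): two nested loops over the same input, so the work grows as O(n²).

```python
def count_equal_pairs(items):
    """Count pairs (i, j) with i < j and equal values. Nested loops: O(n^2)."""
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):   # inner loop runs up to n times
            if items[i] == items[j]:
                count += 1
    return count

print(count_equal_pairs([1, 2, 1, 3, 2]))    # -> 2: the (1, 1) and (2, 2) pairs
```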

The next graph gives an easy-to-remember picture of how these complexities grow relative to each other.

Time Complexity Comparison Chart

The illustration is taken from here, and I feel it is a good resource for a quick reference on the time complexities of various algorithms and data structures.

A quick look at the time complexities of various operations on different data structures, shown below, gives a good idea of why certain data structures are chosen for certain operations. For example, searching in a hash table or a set is efficient, because its complexity is O(1) on average (though that is debatable). Searching a linked list, by contrast, is not efficient compared to a hash table, since it has a complexity of O(n).

Time Complexities of Data Structures
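
As a rough, illustrative experiment (Python; the input size and iteration count are arbitrary choices of mine), you can see the average O(1) set lookup against the O(n) list scan:

```python
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Searching for a value that is absent forces the list to scan all n items,
# while the set resolves it with (on average) a single hash lookup.
list_time = timeit.timeit(lambda: n in as_list, number=100)
set_time = timeit.timeit(lambda: n in as_set, number=100)
print(f"list scan : {list_time:.4f}s for 100 lookups")   # O(n) each
print(f"set lookup: {set_time:.6f}s for 100 lookups")    # average O(1) each
```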