Big O Notation Explained

Some common Big O Notation time complexities

Ah, Big O Notation. This is something that boggles the minds of most web developers, and for good reason: it's complicated stuff.

But, for that very reason, I have written this blog post in the hopes that it can get other developers started with the basics of Big O Notation, because beyond the basics it quickly gets mathematical and complicated.

The good news, however, is that most web developers do not need to worry about going too deep into the details of Big O Notation. The minute details are mostly for academics, researchers, or developers dealing with vast amounts of data. If you fall into that camp, then you may want to check out this detailed article on Algorithm Complexity Analysis.

So, What is Big O Notation Anyway?

All Big O Notation is, is a way for us to compare how our programs are written in regard to how long they take to complete their task.

So, say you wrote a method that takes in an array of names and outputs them to the console. How long does it take to complete? Will it always take the same amount of time to execute?
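A minimal sketch of such a method in TypeScript (the printNames name and the sample names are made up just for illustration):

```typescript
// A sketch of the kind of method described above.
function printNames(names: string[]): void {
  for (const name of names) {
    console.log(name);
  }
}

printNames(["Ada", "Grace", "Linus"]);
```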

These are the questions Big O Notation answers for us, with one small caveat: we always look at the worst possible situation when dealing with our data. In technical terms, this is referred to as the "worst-case scenario".

This is done so that we will not be caught off guard if our program takes too long to do what it was designed to do. We can use Big O Notation to figure out why our program is taking too long by dissecting our code and finding which implementation is the cause of the slow execution time.

Now that we understand the why and the what of Big O Notation, all that's left is to dive into the basic details. By doing so, we will be able to read our code and identify how to fix it so that it is fast, or at the very least, has the best performance possible.

O(1) Constant Time

An illustration of constant time.

When writing Big O Notation, we use the form O(...).

What is within the parentheses describes, in a mathematical way, the time complexity, that is to say, how long it takes for our code to execute as our data grows.

When talking about constant time, we use the notation O(1), where the 1 represents the idea that our code will always take the same amount of time to execute, regardless of how much data it has to go through.

Therefore, time is constant, just like the 1 in the parentheses will always be a 1. Nothing causes it to change into something else.
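For example, grabbing the first element of an array takes the same single step no matter how big the array is. A minimal sketch (the getFirst name is made up for illustration):

```typescript
// O(1): a single array lookup takes the same time whether the
// array holds three items or a million.
function getFirst(items: string[]): string | undefined {
  return items[0];
}

getFirst(["a", "b", "c"]);      // one step
getFirst(new Array(1_000_000)); // still one step
```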

O(n) Linear Time

An illustration of linear time.

Linear time is described by the expression O(n), where n represents the amount of data we have. Our function will be fast or slow in direct proportion to that amount.

So unlike constant time O(1), where the data size never affects the time it takes to execute, in linear time it does.

Added a string to your array? Do you need to loop through that array? Well then, you've just made the process take a bit longer than before.

Likewise, remove an item from your array? Need to iterate/repeat over it? It will take a little less time to go through that data structure.

In summary, the time it takes to execute your code will increase or decrease in a straight line as the data grows or shrinks. That's why it is called linear time.
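As a sketch, here is a linear-time search (the includesName name is made up for illustration); in the worst case it touches every one of the n names:

```typescript
// O(n): in the worst case we compare against every one of the
// n names before we know the answer.
function includesName(names: string[], target: string): boolean {
  for (const name of names) {
    if (name === target) {
      return true; // might exit early, but Big O assumes the worst case
    }
  }
  return false;
}

includesName(["Ada", "Grace", "Linus"], "Linus"); // checks all three
```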

O(log n) Logarithmic Time

An illustration of logarithmic time.

Logarithmic time is described by the expression O(log n). This simply means that the time it takes for our code to work through all the data we have grows in a logarithmic way.

Our code executing in a logarithmic way means that it is using a divide-and-conquer strategy to go through our data set (arrays, objects, lists, etc.).

Take, for instance, that time in school when you split into groups to complete an assignment. To complete your group project, each of you researched the same topic.

But since one person researched one aspect of the topic, and another person researched a different aspect, hey! You don't need to look into that stuff yourself!

They already did it for you!

All you have to decide is whether those pieces of information are what you are looking for.

Here's another example.

You have a dictionary in your hands. You are looking for the word tuna.

So you open up the dictionary to more or less its middle. You see you are at the section where all the words beginning with the letter m are.

From there, you turn a wad of pages some more, and you arrive at all the words that start with the letter q.

So, you turn the wad of pages again, and this time you arrive at the section where all the words beginning with the letter t are. Not only that, but you are towards the end of the section because you see the word tuna at the bottom right corner!

Lucky you!

This is how logarithmic time works. It does not go through all of the data; it only goes through the data that is relevant, cutting the remaining search space in half with each step.

This is great because it remains relatively fast even when we have a lot of data. Constant time and logarithmic time are a programmer's best friends.
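The dictionary trick above is, in code, a binary search over a sorted array. A minimal sketch:

```typescript
// O(log n): every comparison throws away half of the remaining words.
function binarySearch(sortedWords: string[], target: string): number {
  let low = 0;
  let high = sortedWords.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedWords[mid] === target) return mid;  // found it
    if (sortedWords[mid] < target) low = mid + 1; // look in the later half
    else high = mid - 1;                          // look in the earlier half
  }
  return -1; // not in the list
}

binarySearch(["cod", "eel", "salmon", "tuna"], "tuna"); // returns 3
```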

O(n^2) Quadratic Time

An illustration of quadratic time.

Quadratic time is described by the expression O(n^2). It means that the time it takes to process our data grows quadratically with the amount of data.

Okay, but what does that mean?

Think back to math class where we learned about squared numbers.

A squared number is the value we get when we take an integer and multiply it by itself.

For example, 3 * 3 = 9. Or, 5 * 5 = 25. So on and so forth.

Take this same concept and apply it to programming.

We have a two-dimensional array full of strings. So, for each position i in the outer array, we go over the inner array j times.

Furthermore, through this process, the time it takes for our code to complete depends upon the formula i * j = totalTimeNeeded. When both loops run over the same n items, that is n * n, which is where the n^2 comes from.
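As a sketch, that looks like two nested loops (the printGrid name is made up for illustration):

```typescript
// O(n^2): for each of the i rows we walk all j columns,
// so the total work is i * j (n * n when they are equal).
function printGrid(grid: string[][]): void {
  for (let i = 0; i < grid.length; i++) {
    for (let j = 0; j < grid[i].length; j++) {
      console.log(grid[i][j]);
    }
  }
}
```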

Quadratic time should be avoided if possible, since the time needed to go through a large data set can add up quickly.

However, keep in mind that at times we have no other choice but to use two nested for loops to traverse the data.

O(2^n) Exponential Time

An illustration of exponential time. It has an even steeper growth rate.

Exponential time is described using the expression: O(2^n).

When our code runs in an exponential way, the time it takes for our method to go through our data doubles with each item we add to the data.
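The text above doesn't name an example, but a classic one is the naive recursive Fibonacci, where each call spawns two more calls. A minimal sketch:

```typescript
// O(2^n): each call branches into two more calls, so the call
// tree roughly doubles in size every time n grows by one.
function fib(n: number): number {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

fib(10); // fast
fib(45); // noticeably slow: billions of recursive calls
```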

This is not ideal, and programmers should strive, if at all possible, to avoid writing any code that goes through data in this way.

It will slow down our website or app tremendously.

O(n!) Factorial Time

Avoid this at all costs.

Factorial time is conveyed to us by the expression: O(n!).

What the n! in O(n!) means is that the amount of time it takes for us to go through all of our data depends upon the factorial of the length of our data.

Alright, but what is a factorial?

A factorial is the product of every integer from 1 up to the number we have in mind.

Here are some examples:

The factorial of 3 is 1 * 2 * 3, which is equal to 6. Why? Because 1 * 2 = 2, and we take that 2 and multiply it by 3, which gives us 6.

So what's the factorial of 5?

Well, it's 1 * 2 * 3 * 4 * 5, which is equal to 120.

And as you can see, the jump from 6 to 120 is very large, which is exactly why all programmers want to avoid coding in a factorial way if possible.

It will quickly slow down any website or app.
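For a concrete example of factorial time (a classic one, not from the discussion above), consider generating every possible ordering of a list. A minimal sketch:

```typescript
// O(n!): an array of n items has n! possible orderings,
// and this function builds every single one of them.
function permutations<T>(items: T[]): T[][] {
  if (items.length <= 1) return [items];
  const result: T[][] = [];
  for (let i = 0; i < items.length; i++) {
    // Everything except the item at position i.
    const rest = [...items.slice(0, i), ...items.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([items[i], ...perm]);
    }
  }
  return result;
}

permutations(["a", "b", "c"]).length; // 6, i.e. 3!
```

With just 10 items, that is already 3,628,800 orderings, which is why this growth rate gets out of hand so fast.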

Closing Comments

I hope this helped you out! This was more of a conceptual explanation of how Big O Notation works, so I kept the code sketches short and minimal; the article was already getting pretty long to read.

So, keep an eye out for my next article, where I go over these concepts again with fuller code examples to help illustrate Big O Notation.