There are some common Big-O notations that you should become familiar with, as well as the kind of code that leads to them. This episode continues the discussion of Big-O notation, so make sure to listen to episode 37 first. Knowing the signs of these will help you write more efficient code, and for some of them it could actually mean the difference between your program working vs. never completing at all.

Here are the Big-O notations in order of execution speed from fastest to slowest:
Big-O (1) runs in constant time no matter how big your problem becomes.
Big-O (lg N) runs in time proportional to the log of N.
Big-O (N) runs in time proportional to N.
Big-O (N lg N) runs in time proportional to N times the log of N. This will be a little slower than just N.
Big-O (N²) runs in time proportional to N times N. You might also find varieties of this such as Big-O (N³).
Big-O (N!) runs in time proportional to N factorial. Forget about this completing at all once you have items in the mid to upper teens. It really starts becoming impractical with N as small as just 10.
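To get a feel for how quickly these growth rates pull apart, here's a small Python sketch (my own illustration, not from the episode) that prints abstract step counts for each notation at a few sizes of N:

```python
import math

# Compare how many "steps" each common Big-O growth rate implies
# for a few problem sizes N. These are abstract step counts, not
# seconds; they show relative growth, not absolute speed.
for n in (10, 100, 1000):
    print(f"N={n:>5}: "
          f"O(1)=1  "
          f"O(lg N)={math.log2(n):.0f}  "
          f"O(N)={n}  "
          f"O(N lg N)={n * math.log2(n):.0f}  "
          f"O(N^2)={n * n}")
# N! is left out on purpose: even 20! is already about 2.4 quintillion.
```

Notice how at N = 1000 the squared column is a thousand times bigger than the linear column, while the log column has barely moved.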

Note that there are more Big-O notations than just these. But learning these will help you understand almost all the situations you’re likely to encounter. And hopefully, you don’t come across N factorial at all. Listen to the full episode or you can read the full transcript below.


You’ve already learned about a linear algorithm. That’s one where the code must perform some amount of work on each item. Your first question might be what’s an item? As long as you can divide the work into specific and separate tasks performed on specific pieces of information, then you have items. How many items you have is the key to understanding Big-O notation. You see, Big-O notation doesn’t talk about one hundred items or one thousand items or any specific number. We take it to the limit and just refer to the number of items as N. That’s a capital letter N by the way.

A linear algorithm is referred to as Big-O of N. Whenever you see this, you know right away what the code will be doing: it will be moving from one item to the next until it decides to return. It’s important to realize that the code doesn’t have to actually visit each item. It might decide after 500 items that it has enough. It could even decide on the very first item that the algorithm is done and the answer is ready. It doesn’t matter. This is still a Big-O of N or a linear algorithm. The reason for this is that while the algorithm might sometimes decide to return early, it’s also just as likely to need to keep on working. If you have somewhere in your code a for loop or a foreach loop that moves from one item to the next sequentially, then you have a linear algorithm.
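A linear search is the textbook example of this. Here's a short sketch (mine, not from the episode): it can return early when it finds a match, but in the worst case the loop still visits every item, which is why it's Big-O of N.

```python
def linear_search(items, target):
    """Return the index of target, or -1 if it's absent.
    O(N): in the worst case the loop visits every item."""
    for i, item in enumerate(items):
        if item == target:
            return i  # early return, but this is still O(N)
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 16))  # found at index 3
print(linear_search([4, 8, 15, 16, 23, 42], 99))  # not present: -1
```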

You also learned about another way of moving about your items by going straight to the middle item and then up or down the items by another half. This type of algorithm doesn’t go through a hundred items one by one. It’s able to eliminate half of the remaining items each step of the way. Let’s say you have one hundred numbers that have been sorted so the smallest number is first and the biggest number is last, and you’re looking to see if the number 75 is anywhere in this list. Well, since you know that you have one hundred numbers and they are already sorted, you go straight to the number at position 50. If the number at that spot is less than 75, then you know that 75 can only be in the second half, if at all. You just eliminated the first 50 numbers without looking at them at all. Then you divide the second set of 50 numbers in half by looking at the number in location 75. Note that this is the location 75 and not the value 75 you’re looking for. You keep doing the same steps but each time, you’re able to cut your work in half. When you get down to just a single item and it’s still not the value you’re looking for, then you can answer that the number 75 is not in the list of numbers. Of course, if you found it earlier, then you’re done. You see, even this algorithm can return early. But when you get to the last check and still haven’t found the number you’re looking for, then at least you were able to make this decision without needing to analyze each and every single number.

This algorithm is called a binary search and is also called Big-O of lg N. Remember that the log of a number N is much less than the number N itself for large values of N.
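The halving described above can be sketched in a few lines of Python (again, my illustration rather than code from the episode). Each pass through the loop throws away half of the remaining range, which is where the lg N comes from:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1.
    Each pass halves the remaining range, so this is O(lg N)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target can only be in the upper half
        else:
            high = mid - 1  # target can only be in the lower half
    return -1

numbers = list(range(1, 101))   # 1 through 100, already sorted
print(binary_search(numbers, 75))   # found at index 74
print(binary_search(numbers, 101))  # not in the list: -1
```

For one hundred sorted numbers this never needs more than about seven comparisons, instead of up to one hundred.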

Can you do better than this? Sure. The fastest Big-O notation is called Big-O of one. You might wonder, one what? How can it just be one? Is that one second? One microsecond? One month? It’s actually all of those. Here’s what I mean. Big-O of one just means that the algorithm has no dependence on the number of items. If you have code that can do some work and it’s always the same amount of work that needs to be done no matter how big your program gets, no matter how many pages, or numbers, or watermelons, or whatever your program is dealing with, then you have a Big-O of one algorithm. This is the fastest but that doesn’t mean it’s always fast. What do I mean by this? Well, let’s say you have some code that returns a number that has some special meaning to you. It might even be the answer to everything. And let’s say that the computer needs to think about this before it can return the answer and that it takes 10 minutes to calculate this answer each time you ask. That’s not very fast. But it’s faster than both Big-O of N and Big-O of lg N algorithms.
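A simple everyday example of Big-O of one (my example, not the episode's) is indexing into a list: fetching an element by position takes the same single step whether the list holds ten items or a million.

```python
# Indexing into a Python list is O(1): the same single step
# no matter how many items the list holds.
small = list(range(10))
large = list(range(1_000_000))

print(small[5])      # one step
print(large[500_000])  # still one step, despite the size
```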

If your computer needs to spend 1 second for each item in order to find the same answer to everything that we’re considering, then sure, the linear algorithm will finish first if you only have one hundred items. It’ll take about a minute and a half. And if this was a Big-O of lg N, then you’d have your answer in less than ten seconds. How can code that takes 10 minutes be faster than that? Didn’t we just prove that it was slower? Remember that Big-O notation is all about the limit. The problem is that those other solutions will slow down as the number of items increases. That’s because they are both dependent on the number of items, while the Big-O of one will remain at 10 minutes. This means that because there are 600 seconds in 10 minutes, the linear algorithm will be slower after just 600 items are added.
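That crossover point is easy to check with a little arithmetic. This sketch uses the same made-up costs as above: 1 second per item for the linear algorithm versus a fixed 600 seconds for the constant-time one.

```python
# Crossover between a linear algorithm at 1 second per item and a
# constant-time algorithm fixed at 600 seconds (10 minutes).
CONSTANT_COST = 600  # seconds, regardless of N

for n in (100, 600, 601, 10_000):
    linear_cost = n * 1  # 1 second per item
    winner = "linear" if linear_cost < CONSTANT_COST else "constant"
    print(f"N={n:>6}: linear={linear_cost}s vs constant={CONSTANT_COST}s -> {winner} wins")

# The first N where linear is strictly slower:
print(next(n for n in range(1, 100_000) if n * 1 > CONSTANT_COST))  # 601
```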

There are three more common Big-O notations that I’ll explain right after this word from our sponsor.

( Message from Sponsor )

Alright, you know, it might seem silly that Take Up Code is sponsoring the Take Up Code podcast and I hope you still get value from listening. This podcast is just getting started and eventually there will be a real sponsor. But for now, telling you about the live classes is a great way to introduce a sponsor and give you another way to learn programming. Okay, back to Big-O.

The next common notation is called Big-O of N lg N. The reason this is common is because it’s often found in sorting algorithms. Now that we’re adding more specific Big-O notations, let me clarify a few things that I explained earlier. When you’re trying to figure out the Big-O complexity of some algorithm, the guidelines I gave you earlier are just general guidelines. What it really comes down to is this. You want to compare how many steps are needed to solve your problem with the size of your problem. If you need one step per item in your problem, then you have a Big-O of N algorithm. You’re probably going to have a loop but maybe your loop advances by three each time through the loop. This will cut your running time to a third. But it’s still a Big-O of N algorithm because you could achieve the same result by just getting a computer that runs three times as fast.
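Here's what that step-of-three loop looks like in practice (my sketch). The loop body runs only N/3 times, but Big-O drops constant factors, so it's still Big-O of N:

```python
def sum_every_third(items):
    """Add up every third item. The loop runs about N/3 times, but
    constant factors are dropped in Big-O, so this is still O(N)."""
    total = 0
    for i in range(0, len(items), 3):  # advances by three each pass
        total += items[i]
    return total

print(sum_every_third([1, 2, 3, 4, 5, 6, 7]))  # 1 + 4 + 7 = 12
```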

And likewise, the binary search algorithm doesn’t always have to cut the problem in exactly half each time. The point is that sometimes these algorithms are a bit tricky to spot. When I explained the binary search, I said that the algorithm is able to ignore half of the problem space for each step. But what if instead of just ignoring the other half, the algorithm continued processing both halves? By processing both halves, that means it’s not able to ignore anything. So we’re back to the point where we need to work with each item. But in addition to each item, we’re also doing the extra work of the binary splitting. This is the type of extra work that leads to a Big-O of N lg N running time. It’s sort of a multiplication of the other two and ends up being a bit slower than just a plain linear algorithm of Big-O of N. The reason this running time is common for sorting is, well, just think about this logically for a moment. Do you think it’s possible to sort a bunch of numbers without looking at each number at least once? No. I mean, if you don’t look at each number then how can you possibly claim that they’re all sorted, right? But what do you think your chances are of examining a random set of numbers and being so lucky that you just happen to pick up each number in the proper order the first time? That’s not very likely either. So in addition to needing to work with each number, you’re going to have to do a little more work to put those numbers in the proper order. That’s why a good sorting algorithm will run in Big-O of N lg N time.
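That "split in half but process both halves" pattern is exactly what merge sort does, and it's a classic Big-O of N lg N algorithm. Here's a minimal sketch (mine, not from the episode): the list is split in half lg N times, and each level of splitting does about N steps of merging work.

```python
def merge_sort(items):
    """Classic O(N lg N) sort: split in half lg N times, and do
    O(N) merging work at each level of splitting."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # both halves are processed,
    right = merge_sort(items[mid:])   # not ignored as in binary search
    # Merge step: O(N) extra work combining two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```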

The next Big-O notation I’ll explain is starting to get slow. Sometimes you have no choice though and the extra work really is needed. This notation is called Big-O of N squared. You can usually spot this in code as a for loop inside another for loop. And by the way, if you have a for loop inside a for loop inside another for loop so that you have 3 loops total, then you’re likely going to find a Big-O of N cubed complexity which will be much slower than N squared. To see just how much slower this type of algorithm is, let’s say we need 1 second per step again and we have 600 items. This algorithm will need 600 times 600 seconds. That’s 360,000 seconds, or 100 hours.
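The loop-inside-a-loop shape is easy to recognize. This sketch (my example) compares every item against every later item, which is roughly N times N divided by 2 comparisons, so it's Big-O of N squared:

```python
def count_duplicate_pairs(items):
    """Compare every item against every later item: a for loop
    inside a for loop, roughly N * N / 2 comparisons, so O(N^2)."""
    count = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):  # inner loop: the N^2 tell
            if items[i] == items[j]:
                count += 1
    return count

print(count_duplicate_pairs([3, 1, 3, 2, 1]))  # the pairs (3,3) and (1,1): 2
```

Add a third nested loop over the items and you'd be looking at the N cubed variety mentioned above.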

The last Big-O notation I’ll explain here is called Big-O of N factorial. This is the slowest of all and if you have this type of algorithm, then you’re in big trouble. How big? Well, first of all, what is a factorial? A factorial is where you start multiplying a number by one less and then by one less again and you keep going until you multiply by one. So 5 factorial means 5 times 4 times 3 times 2 times 1 and gives you 120. Let me put it to you this way: when I type 600 in the calculator on my iPhone and then press the factorial button, the answer is Error. That’s right, the answer is so huge that it becomes meaningless to us mere mortals. In fact, just the number 12 factorial is already 479,001,600. If those were items that take 1 second each to process, then your computer would spend over 15 years getting the answer. The reason you’re in such big trouble with an algorithm this slow is that you really can only work with just 10 items or so before your program slows to a halt. You know that you’re in this situation when your code tries to combine the first item with all the others, then moves to the second item and tries to combine it with all the remaining items, then moves on to the third item and combines it with all the others, and so on. It seems simple to write code like this, but if you do, watch out because it may never have enough time to complete.
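You can verify the numbers above for yourself. This sketch (my illustration) checks 12 factorial and the 15-year claim, and shows the classic Big-O of N factorial trap: generating every possible ordering of the items.

```python
import math
from itertools import permutations

# 12! really is 479,001,600. At one second per step, that's
# over 15 years of processing.
print(math.factorial(12))                      # 479001600
print(math.factorial(12) / (3600 * 24 * 365))  # roughly 15.2 years

# The classic O(N!) trap: trying every ordering of the items.
# Even at N = 10 there are already 3,628,800 orderings; here we
# stay at N = 8 so the count finishes quickly.
print(sum(1 for _ in permutations(range(8))))  # 8! = 40320
```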