00:00 The next algorithm I’ll discuss is called quicksort. You might be wondering why quicksort is written as one word, as opposed to the other kinds of sorts that I’ve covered so far, which were two words. And the reasons for that are really just historical.
00:23 Quicksort has very different characteristics from some of the other algorithms we’ve talked about. One of those differences is that merge sort, insertion sort, and bubble sort, which I’ve already covered, are what are called stable sorts.
00:35 But quicksort is not a stable sort. With the preceding three algorithms, if you sort by one characteristic and then by another, say alphabetically and then numerically, the elements that compare equal numerically will remain in alphabetical order. You don’t get that guarantee with quicksort.
01:04 The basic idea of quicksort is that you choose some pivot element, ideally one whose value falls somewhere near the middle of the list. Then you divide the list into three sections: the elements that are less than, equal to, and greater than the pivot element.
01:48 The best thing about quicksort, in my opinion, is how easy it is to actually implement. Once you understand the idea of it and why it works, then the actual implementation is really straightforward.
Just like merge sort, you start with the base case: if the length of the items is less than or equal to 1, you can just return those items, because a list of length 0 or 1 is already sorted by definition. Now you need to choose a pivot. Then you can say less_than_pivot = [x for x in items if x < pivot], which gets you all of the items that are less than the pivot. You do the exact same thing for the equal_to_pivot and greater_than_pivot lists. And then the simplest part of this algorithm is the end. You just say return quicksort(less_than_pivot) + equal_to_pivot + quicksort(greater_than_pivot), concatenating the three lists back together.
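The code itself isn’t visible in the transcript, so here is a sketch reconstructing the list-based version described above, using the first element as the pivot (the naive choice discussed later in the video):

```python
def quicksort(items):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return items

    pivot = items[0]  # naive pivot choice; a better one is discussed below

    # Partition into three lists relative to the pivot.
    less_than_pivot = [x for x in items if x < pivot]
    equal_to_pivot = [x for x in items if x == pivot]
    greater_than_pivot = [x for x in items if x > pivot]

    # Recurse on the outer partitions; the middle one needs no sorting.
    return quicksort(less_than_pivot) + equal_to_pivot + quicksort(greater_than_pivot)
```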
03:10 And of course, the reason you don’t need to quicksort this middle list is because you know all of these elements have the same value. They’re all equal to the pivot, so there’s no need to actually sort it. And guess what? That’s it.
That’s the whole algorithm, which is kind of crazy. It seems like it shouldn’t work, but it really does, because you divide the list into less than, equal to, and greater than, and you know the equal_to_pivot elements are already in the correct place. You just keep recursing down until you get to one-element lists, which you know are already sorted.
03:39 And then you just stick them all back together again. So it’s really quite simple. Let’s take a look and see how it works. Since this is also a recursive algorithm, I’m going to do the same timing that I did in the last video with merge sort.
So, let’s watch it run.
Running quicksort: wow, this took two thousandths of a second on a list of a thousand elements. Now what about 10,000 elements? That took a little less than two hundredths of a second.
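The timing harness isn’t shown in the transcript, so here is a minimal sketch of one way to reproduce the experiment; the use of time.perf_counter and randint is my assumption, not necessarily what the video uses:

```python
import random
import time

def quicksort(items):
    # List-based quicksort with a first-element pivot, as described above.
    if len(items) <= 1:
        return items
    pivot = items[0]
    return (quicksort([x for x in items if x < pivot])
            + [x for x in items if x == pivot]
            + quicksort([x for x in items if x > pivot]))

# Build a random list, then time the sort.
items = [random.randint(0, 1_000_000) for _ in range(10_000)]
start = time.perf_counter()
result = quicksort(items)
print(f"sorted {len(items)} items in {time.perf_counter() - start:.4f}s")
```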
04:17 Let’s bump it straight up to a million again, the same as with merge sort, and see how long it takes. It’s definitely going to take a little longer than before, but it only took three seconds to sort a million elements, which is pretty darn good; merge sort took noticeably longer. Now, something interesting about quicksort is that its performance can vary quite a bit.
04:42 Its worst-case performance is actually on the order of n squared. Consider what happens if the list is already sorted when you pass it in and you choose the first element as the pivot: since the list is already sorted, all of the other items are greater than the pivot.
04:59 So the length of the greater-than list is just the length of the whole list minus one. That means every time you choose a pivot, you’re choosing the smallest element, so you’re trying to sort something that’s already sorted and doing a lot of extra work, which results in a runtime on the order of n squared.
But if the list is random, or semi-random, or otherwise not sorted, then quicksort tends to be really fast. One thing I can do to demonstrate this is, instead of generating the list with randint(), build it as [_ for _ in range(1000000)] and run this again. As you’ll notice, I actually got a maximum recursion depth exceeded error, and that’s because I’m calling this quicksort() function a million times. Every time I call quicksort(), it just takes the very first element as the pivot, greater_than_pivot ends up with len(items) - 1 elements, and so quicksort() keeps getting called once for every item in the list.
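You can reproduce that failure without waiting on a million elements, since Python’s default recursion limit is only around 1000. A sketch, restating the list-based version with its first-element pivot:

```python
def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[0]  # first-element pivot: worst case on sorted input
    return (quicksort([x for x in items if x < pivot])
            + [x for x in items if x == pivot]
            + quicksort([x for x in items if x > pivot]))

try:
    # Sorted input recurses once per element, so depth grows to ~n.
    quicksort(list(range(5_000)))
except RecursionError as e:
    print("RecursionError:", e)
```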
05:57 So that becomes both a runtime on the order of n squared and a risk of hitting your interpreter’s maximum recursion depth. So let’s use a better pivot. If I move the random import to the very beginning of the file, a better pivot is random.choice(items). What that does is choose a random pivot, which means that even if the list is sorted, random.choice() will in the vast, vast majority of cases choose an element that’s farther away from the bottom or the top of the list. So now if I say
python quicksort.py, it will still take a little longer on this million-element list, but it only took about five seconds, as opposed to hitting the maximum recursion depth and taking a huge amount of time. So this is a better pivot.
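With that change, the list-based version sketched earlier looks something like this; the random pivot is the only difference:

```python
import random

def quicksort(items):
    if len(items) <= 1:
        return items
    # Random pivot: avoids the worst-case behavior on already-sorted input.
    pivot = random.choice(items)
    return (quicksort([x for x in items if x < pivot])
            + [x for x in items if x == pivot]
            + quicksort([x for x in items if x > pivot]))

# Already-sorted input no longer blows the recursion limit.
print(quicksort(list(range(10_000)))[:5])  # → [0, 1, 2, 3, 4]
```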
06:54 There are also other ways that you can choose the pivot, right? You could make a few different random choices and you could take the median of those or the mean of those, and that can also have some performance gains depending on the kind of data that you have. So there are a lot of different ways to make this pivot choice.
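As one illustration of that idea (my sketch, not code from the video), a median-of-three pivot helper might look like this:

```python
import random
import statistics

def choose_pivot(items):
    # Median-of-three: sample up to three elements and take their median,
    # which tends to land closer to the middle of the value range.
    sample = random.sample(items, min(3, len(items)))
    # median_low always returns an actual element of the sample, so the
    # equal-to partition is guaranteed to be non-empty.
    return statistics.median_low(sample)
```

The choice of median_low over median matters here: median of an even-length sample can be an average that appears nowhere in the list, which would leave the equal-to partition empty.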
07:21 So, you know, merge sort is great if your data structures tend to be recursively defined. Quicksort is great because it can actually sort in place. My version of this algorithm does not do that; it allocates extra lists. But if you want to, you can implement a version that does some swapping so that you only use the space for the items you take in.
07:45 You don’t have to allocate any more space. I’ll leave that to you to do a little research and see if you can implement a version that works that way, and I’ll probably include one in the source code for this lesson as well.
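For reference, here is one possible in-place version using the Lomuto partition scheme. This is my own sketch, not necessarily the version included with the lesson’s source code:

```python
import random

def quicksort_in_place(items, lo=0, hi=None):
    """Sort items in place; no extra lists are allocated."""
    if hi is None:
        hi = len(items) - 1
    if lo >= hi:
        return
    # Swap a random pivot to the end, then partition (Lomuto scheme).
    pivot_index = random.randint(lo, hi)
    items[pivot_index], items[hi] = items[hi], items[pivot_index]
    pivot = items[hi]
    i = lo
    for j in range(lo, hi):
        if items[j] < pivot:
            items[i], items[j] = items[j], items[i]
            i += 1
    items[i], items[hi] = items[hi], items[i]  # pivot into final position
    quicksort_in_place(items, lo, i - 1)
    quicksort_in_place(items, i + 1, hi)

data = [5, 3, 8, 1, 9, 2]
quicksort_in_place(data)
print(data)  # → [1, 2, 3, 5, 8, 9]
```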