
Understanding Linked Lists


00:00 Just like lists, linked lists are ordered collections of objects. What’s different is how data is stored in memory. With a list, pointers to the objects are stored sequentially in a contiguous block of memory. That looks like this.

00:18 Here is our list represented as an array in memory.

00:23 Each item in the list is stored at an address, and the addresses are contiguous. That means that each element is stored at the next available memory address, instead of being scattered around. Keep in mind, these memory addresses are just an example, and are not real.

00:43 The data column represents the data stored at each address. In CPython, this data is actually the memory address of what’s called a PyObject in memory.

00:55 In other words, each element in the array points to the data that belongs there. In this case, we’re storing integers.
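The pointer idea above can be peeked at from Python itself. This is a minimal, CPython-specific sketch: in CPython, `id()` happens to return an object’s memory address, so we can see that list slots hold references to objects rather than the objects themselves.

```python
# In CPython, id() returns an object's memory address, so we can see
# where each list element's data actually lives. (CPython-specific:
# other Python implementations may return different id() values.)
numbers = [12, 99, 37]

for index, value in enumerate(numbers):
    print(f"index {index} -> object at address {id(value)}")

# Two list slots can even hold pointers to the very same object:
shared = [500]
aliases = [shared, shared]
print(id(aliases[0]) == id(aliases[1]))  # True: both slots store the same pointer
```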

01:05 Note that the pointers are not pointing to contiguous memory addresses. The first element is stored at address 50, while the next one is at address 22.

01:17 Because the pointers that make up the underlying array are all stored sequentially in a contiguous block of memory, we don’t need to traverse—or “step past”—previous elements to access the one we want. In other words, if we want to get at the element at list index 2, we can just jump right there and follow the pointer to access the data.

01:43 We don’t have to step past indices 0 and 1 first. This ability to jump to any memory address directly is called random access. In fact, it’s why RAM is called random access memory.
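In code, random access looks like plain indexing. This tiny sketch just illustrates the point made above: we jump straight to index 2 without visiting the earlier elements.

```python
# Random access: indexing jumps straight to the slot we want.
# No need to step past index 0 or index 1 first.
my_list = [10, 20, 30, 40]

print(my_list[2])  # 30
```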

02:00 Linked lists work a little bit differently. Linked lists are made up of nodes. Every node stores not only some data but also a pointer to the next node in the linked list.

02:15 A special head pointer stores a reference to the first node in the list. The last pointer points to None since there are no more nodes.

02:27 In order to access the third item, we must start at the head. Then, we can traverse our way through the list by following each pointer. Unlike with a list, there’s no way to start in the middle, because our head pointer points only to the first node.
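The node-and-pointer structure described above can be sketched in a few lines of Python. This is an illustrative minimal implementation; the `Node` class name and attributes are our own, not from the lesson.

```python
# A minimal singly linked list: each node stores some data plus a
# pointer to the next node. (Illustrative sketch; names are our own.)
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None  # the last node's pointer stays None

# Build the list: head -> 1 -> 2 -> 3 -> None
head = Node(1)
head.next = Node(2)
head.next.next = Node(3)

# To reach the third item, we must start at the head and traverse.
current = head
for _ in range(2):       # step past the first two nodes
    current = current.next
print(current.data)  # 3
```

Note that there is no way to index into this structure directly; every access begins at `head` and follows pointers.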

02:46 This might seem like a less efficient version of a list, but it does have its benefits. For example, because each node stores an object and a pointer to the next node, nodes in a linked list don’t have to be stored sequentially in a contiguous block of memory.

03:04 They can be scattered.

03:07 We can’t talk about data structures like lists and linked lists without mentioning Big O notation. Big O notation is used to describe the performance of an algorithm in the worst-case scenario—aka the scenario where it will run the slowest.

03:25 It allows us to compare different algorithms, or operations, on different data structures so we can decide which data structure best fits our needs. For example, some operations—like inserting an element or searching for one—might be faster with an array than with a linked list.

03:45 Big O notation gives us a way to describe roughly how long an operation will take. There are many time complexities, but the two most important ones for this course are O(1) and O(N).

04:01 An O(1) algorithm, or operation, will always take the same amount of time to complete, regardless of how many elements are in the collection.

04:12 It doesn’t matter if there are 100 elements or a million; the algorithm will always complete in the same amount of time, or constant time. This is considered the fastest time complexity, but it’s not always achievable.
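Indexing a Python list is a good example of an O(1) operation: thanks to the random access described earlier, the lookup cost does not grow with the list’s size. A quick sketch:

```python
# List indexing is O(1): the lookup cost stays constant no matter
# how many elements the list holds.
small = list(range(100))
large = list(range(1_000_000))

# Both lookups take (roughly) the same constant time.
print(small[50])       # 50
print(large[500_000])  # 500000
```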

04:28 Moving down in speed a little bit, we have O(N). Here, the time the algorithm takes to complete scales with how many elements are in the collection: the more elements, the slower.

04:41 If your algorithm involves iterating through an entire collection once, such as with a for loop, that’s a good sign that it’s an O(N) algorithm.
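A linear search is the classic example of such an O(N) algorithm. This is a minimal sketch (the function name is our own): in the worst case, when the target is missing or sits at the very end, the loop visits every element once.

```python
# Linear search is O(N): in the worst case (target absent, or in the
# last position), the for loop iterates over all N elements.
def contains(items, target):
    for item in items:        # up to N iterations
        if item == target:
            return True
    return False

print(contains([4, 8, 15, 16, 23, 42], 23))  # True
print(contains([4, 8, 15, 16, 23, 42], 99))  # False (had to check all 6)
```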

04:52 There’s a lot more to be said about Big O notation and lots of other time complexities to learn about. This should prepare you for the rest of this course, but I’ll put a link down below to a great resource for learning more about this.

05:06 In the next video, you’ll compare the performance of lists and linked lists and better understand how they work under the hood.
