Understanding Linked Lists
00:00 Just like lists, linked lists are ordered collections of objects. What’s different is how data is stored in memory. With a list, pointers to the objects are stored sequentially in a contiguous block of memory. That looks like this.
00:23 Each item in the list is stored at an address, and the addresses are contiguous. That means that each element is stored at the next available memory address, instead of being scattered around. Keep in mind, these memory addresses are just an example and are not real.
Because the pointers that make up the underlying array are all stored sequentially in a contiguous block of memory, we don’t need to traverse—or “step past”—previous elements to access the one we want. In other words, if we want to get the element at list index 2, we can just jump right there and follow the pointer to access the data.
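A quick sketch of that idea in Python (the list and values here are illustrative, not from the lesson): indexing jumps straight to the element, with no need to step past the ones before it.

```python
# Indexing a Python list is a direct jump to the slot at that index;
# no traversal of earlier elements is needed.
colors = ["red", "green", "blue", "yellow"]

# Jump straight to index 2 and follow the pointer to the object.
print(colors[2])  # blue
```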
02:27 In order to access the third item, we must start at the head. Then, we can traverse our way through the list by following each pointer. There’s no way to start in the middle, like with a list, and that’s because our head pointer points only to the first node.
02:46 This might seem like a less efficient version of a list, but it does have its benefits. For example, because each node stores an object and a pointer to the next node, nodes in a linked list don’t have to be stored sequentially in a contiguous block of memory.
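The node structure described above can be sketched in a few lines of Python. The class and variable names here are illustrative, not the course's own code: each node holds a value and a pointer to the next node, and reaching the third item means starting at the head and following pointers.

```python
# A minimal sketch of a singly linked list node: each node stores
# a value and a reference to the next node (None marks the end).
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

# Build a three-node linked list: head -> "a" -> "b" -> "c"
head = Node("a", Node("b", Node("c")))

# To reach the third item, start at the head and follow two pointers.
current = head
for _ in range(2):
    current = current.next
print(current.value)  # c
```

Because only the head is known up front, there is no way to jump directly to the middle node the way a list index would allow.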
03:07 We can’t talk about data structures like lists and linked lists without mentioning Big O notation. Big O notation is used to describe the performance of an algorithm in the worst-case scenario—aka the scenario where it will run the slowest.
03:25 It allows us to compare different algorithms, or operations, on different data structures so we can decide which data structure best fits our needs. For example, some operations—like inserting an element or searching for one—might be faster with an array than a linked list.
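One concrete comparison you can try in Python (this is my illustration, not part of the lesson): `collections.deque` is backed by a doubly linked structure, so inserting at the front is O(1), while inserting at the front of a regular list is O(n) because every existing element has to shift.

```python
from collections import deque

# deque (a doubly linked structure under the hood) supports O(1)
# insertion at either end.
items = deque(["b", "c"])
items.appendleft("a")   # O(1) at the head
print(list(items))      # ['a', 'b', 'c']

# A Python list must shift every element right to insert at the front,
# which costs O(n).
numbers = [2, 3]
numbers.insert(0, 1)
print(numbers)          # [1, 2, 3]
```

The reverse trade-off holds for indexing: `numbers[i]` is O(1), while reaching the i-th element of a linked structure requires following i pointers.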
04:12 It doesn’t matter if there are 100 elements or a million; the algorithm will always complete in the same amount of time. This is called constant time, or O(1), and it’s considered the fastest time complexity, but it’s not always achievable.
04:52 There’s a lot more to be said about Big O notation and lots of other time complexities to learn about. This should prepare you for the rest of this course, but I’ll put a link down below to a great resource for learning more about this.