Understanding the GIL
00:00 In the previous lesson, I showed you what can cause a race condition. In this lesson, I’ll be applying everything you’ve learned so far to Python memory management, the ultimate reason for the GIL.
00:11 Unlike some programming languages, Python manages memory for you. When you create a new object, it allocates the memory, and when you’re done with the object, Python deallocates that memory, freeing it up for further use down the road.
00:24 Without this kind of deallocation, you’d eventually run out of memory.
00:29 The mechanism that Python uses to track objects is called reference counting. The definition is right in the name: it’s a count of how many references an object has.
00:40 When the number of references to an object reaches zero, Python can deallocate its memory, freeing it up for later. The sys module actually has a function in it that lets you see the reference count of an object.
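As a quick sketch of that zero-count deallocation: a weak reference lets you watch an object without keeping it alive, so you can see the moment the last strong reference disappears. The immediate cleanup shown here is CPython-specific behavior, and Box is just a throwaway class for illustration.

```python
import weakref

class Box:
    """A throwaway class so we have an object to watch."""

obj = Box()
ref = weakref.ref(obj)  # a weak reference doesn't bump the reference count
print(ref() is obj)     # True: the object is still alive

del obj        # the last strong reference is gone, so the count hits zero
print(ref())   # None: CPython deallocated the object immediately
```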
00:52 Let’s go to the REPL and use it.
00:56 Let’s start by creating an object. How about a list? To look at the reference count, I need a function from the sys module.
01:05 The function is called getrefcount().
01:09 You might be a little surprised here that the answer is two rather than one. That’s because a got passed into the getrefcount() function.
01:18 The arguments to a function are themselves references, so when getrefcount() is counting inside the function, you get the original reference a as one and the argument to the function as a second reference, giving you a total of two.
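Roughly what that REPL session looks like, condensed into a script:

```python
import sys

a = []                     # one reference: the name a
print(sys.getrefcount(a))  # 2: the name a, plus the argument getrefcount() received
```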
01:33 If I could count the references without calling the function, now that I’m back out at the REPL, the count would be down to one as the reference used by the function call is now gone.
01:43 If I call the function again, that would add one back to the count, putting me back at two inside of getrefcount().
01:51 Remember that Python variables aren’t boxes that contain things. They’re all references. By typing b = a, I’m creating a new reference called b that points to the same list that a is pointing to.
02:04 So if I call getrefcount() again, I get 3: one for a, one for b, and one for the argument to the function.
02:14 The GIL is kind of like the shark in Jaws. Up until now, you’ve seen hints of it. The fin showing above the water. Now it’s surfaced and taking a bite out of the boat.
02:24 You’ve seen threads, you’ve seen locks, and you’ve seen unprotected bank balances. The reference count that I just mentioned is like the bank balance. If modifying the reference count isn’t an atomic action, you’re going to have incorrect counts.
02:37 With incorrect counts, you might free memory that isn’t supposed to be freed or never free something that should have been. The global interpreter lock is a lock that applies across the interpreter and manages the atomicity of reference count updates.
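The bank-balance fix from the earlier lesson looks like this in miniature: a lock makes the read-modify-write on a shared counter atomic, which is the same job the GIL performs for every object’s reference count. This is a hypothetical illustration, not CPython internals.

```python
import threading

counter = 0
lock = threading.Lock()  # plays the role the GIL plays for reference counts

def deposit(times):
    global counter
    for _ in range(times):
        with lock:       # the read-modify-write is now atomic
            counter += 1

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: no lost updates
```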
02:54 It’s important to remember that Python the language and the program that compiles and interprets it are two different things.
02:58 CPython is one implementation of the Python language. It’s by far the most common, but there are others. The GIL is a solution used by CPython to deal with reference count atomicity.
03:09 Other interpreters out there may or may not use the same solution. For example, Jython is a Python interpreter built on top of the Java Virtual Machine. Java has a different kind of memory management technique known as garbage collection, so Jython doesn’t need the GIL.
03:26 In fact, as a joke, you can import the GIL in Jython and get an error telling you it’s never going to happen. So if you can solve the problem without the GIL, why have a GIL? Garbage collection is more complex than reference counting, and in the single-threaded case, reference counting is faster.
03:43 Aren’t the trade-offs within computing so much fun?
03:47 Some of the reasoning behind the GIL is a result of time and place. Python was actually written before threads were commonly found on most computers, so the single-threaded case was by far the most common.
03:58 The other complicating factor is how Python interacts with C. One of the ways Python can be both a robust dynamic language and a fast number cruncher, which tend to be polar opposites, is that Python has a mechanism for integrating with low-level languages known as C-extensions or the C API.
04:16 NumPy, for example, is written in C, and so you can have both the flexibility of Python and the speed of NumPy in your code.
04:24 A lot of early C extensions were thin wrappers around existing C libraries. For example, the time module is compiled and mostly just interacts with equivalent C time functions.
04:35 But these extensions don’t operate in isolation. If I call a function in the time module, it may allocate a new object, say a time value, and then that object gets used at the interpreter level.
04:47 That means reference counting can be affected by a C extension. The GIL ensures thread safety, including for C libraries that are not thread-safe. The global nature of the GIL means that C extension programmers and Python programmers don’t have to worry about memory management.
05:05 Does this mean Python can’t have concurrency? Not at all. You’ve already seen how a process gets its own copy of the code to execute. Python’s multiprocessing library can create just these kinds of processes.
05:18 Each process gets its own copy of the interpreter, which means it gets its own GIL, independent of the other processes. Of course, this only helps for CPU-bound parallelism and has more overhead than threads.
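The multiprocessing library handles the process creation for you. As a minimal illustration of the underlying idea, each interpreter you launch is a separate OS process, and every one of those processes carries its own GIL; here I use subprocess to start two fresh interpreters and confirm they have distinct process IDs.

```python
import subprocess
import sys

# Launch two fresh interpreters; each is its own process with its own GIL.
cmd = [sys.executable, "-c", "import os; print(os.getpid())"]
pids = {subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
        for _ in range(2)}
print(len(pids))  # 2: two distinct interpreter processes
```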
05:32 Python also supports threads, both operating system ones and through coroutines in the asyncio library, but this is where the parallelism bogs down a bit.
05:41 The GIL is a single lock that globally affects all threads in a process. This means you really can’t use threads for parallelism in CPU-bound programs.
05:52 The threads aren’t going to get spread out across CPUs, and so you’re stuck at the whim of the global lock. This doesn’t mean threads aren’t useful, though.
06:01 If your code is I/O-bound, you can get a significant speedup. In our concurrency course, I show you how to read multiple websites at a time, and you can get an order-of-magnitude speedup by using threads.
06:11 That’s because one thread sleeps while data comes back from the cafe in France, while the other processes the page that it just got from Belgium.
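A sketch of that overlap, with time.sleep() standing in for waiting on the network; the site names are just placeholders. Three threads that each wait 0.2 seconds finish in about 0.2 seconds total, not 0.6, because they all sleep at the same time.

```python
import threading
import time

def fetch(site):
    time.sleep(0.2)  # stand-in for network latency while a page downloads

start = time.perf_counter()
threads = [threading.Thread(target=fetch, args=(site,))
           for site in ("france", "belgium", "canada")]
for t in threads:
    t.start()
for t in threads:
    t.join()

elapsed = time.perf_counter() - start
print(f"{elapsed:.2f}s")  # about 0.2s total, not 0.6s: the sleeps overlap
```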
06:21 If all this wasn’t messy enough, things are made more complicated by the fact that the GIL is an internal solution to CPython, and the core developers are free to optimize and change its behavior.