Understanding Concurrency and Processes
00:00 In the previous lesson, I gave an overview of the course. In this lesson, I’ll give you background on concurrency so that you can better understand why the GIL exists.
00:09 When you run a program, your operating system is in charge of managing when it runs and what resources on your machine it can use at any given time. Your OS does this by managing a process, which is a grouping of code and resources.
00:25 In olden times, a program was a single process, and even nowadays, programs that don’t use concurrency mechanisms are still single-process entities.
00:35 Each process contains your code and some memory allocated to it, often split into two categories: the stack and the heap. The stack is memory the program uses to track which functions have been called, and the heap is general-purpose memory.
00:50 And then in modern operating systems, all of this is controlled by some set of permissions. These range from what resources a process can access to making sure that my code doesn’t foul up your code when we’re running on the same machine.
01:04 The core point here, when dealing with multiple programs and processes, is that the operating system is in charge of which process has control of a CPU at any given time.
01:14 For multi-CPU systems, this becomes more complicated. The OS is now also controlling which processes map to which CPUs, but conceptually the idea is the same.
01:25 Oddly enough, complex process management almost predates simple process management. Early computers were large machines used simultaneously by multiple people.
01:34 The simple case of one CPU and one user, introduced by the advent of PCs, was actually a digression. Those early machines worked by time-slicing access to the processor. Program 1 might have control for a little while, and then the OS decides it’s time for Program 2.
01:51 It suspends Program 1 and lets Program 2 do its thing and then later, it switches back to Program 1.
02:01 This process goes on for the duration of each of those programs.
02:05 This diagram is a vast oversimplification. Most operating systems have dozens of programs running in the background that are treated the same way, swapping them and the user’s code in and out.
02:16 A simple way of deciding which program to run when is called time slicing: each program gets an equal amount of the CPU’s time. This isn’t ideal, as Program 1 might be hungry and have lots to do while Program 2 could be waiting around doing nothing during its allocated slot.
02:32 There’s an entire research field dedicated to this topic. How do you schedule these things optimally?
02:39 Say you want to do two things at once. Given the simple model I just showed you, you could write two different programs and have them time-sliced. It didn’t take long before this concept was baked into operating systems, and the program became a collection of one or more processes.
02:53 The program starts up and says it wants a new process. Then the operating system creates a new process with a full copy of the code and its own allocation of memory.
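As a rough sketch of what asking the operating system for a new process can look like in Python, here’s a minimal example using the standard-library multiprocessing module. The worker function and its messages are made up purely for illustration:

```python
from multiprocessing import Process
import os

def worker():
    # Runs in the child process, which has its own copy of the code and memory.
    print(f"child process, pid={os.getpid()}")

if __name__ == "__main__":
    print(f"parent process, pid={os.getpid()}")
    child = Process(target=worker)  # ask the OS for a new process
    child.start()                   # the child begins running worker()
    child.join()                    # wait for the child to finish
```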
03:04 This is kind of like automatically creating a new program for you, but having it be a copy of the old one. There’s a concept in parallel programming called trivial concurrency.
03:13 That’s where each process can do its own thing with no information from the other processes. This does happen, but not often. Say you’ve got a large data file that you want to process, and so you split it up into pieces.
03:26 Even if the pieces aren’t dependent on each other, something has to be responsible for splitting the file up and typically something also has to be responsible for aggregating the results.
03:36 Most concurrency requires some communication between processes.
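As a sketch of that split-and-aggregate pattern, here’s one way it might look with a multiprocessing.Pool. The “file” is just a hard-coded list of strings, and counting words stands in for whatever real work each piece would need:

```python
from multiprocessing import Pool

def count_words(piece):
    # Each worker process handles its piece with no input from the others.
    return len(piece.split())

if __name__ == "__main__":
    # Something has to be responsible for splitting the work up...
    pieces = [
        "pretend these are chunks",
        "of a large data file",
        "split into independent pieces",
    ]

    # ...the pool hands each piece to a worker process...
    with Pool(processes=3) as pool:
        counts = pool.map(count_words, pieces)

    # ...and something has to aggregate the results.
    print(sum(counts))
```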
03:41 Since a process is an independent copy of your code with its own chunk of memory, one process can’t affect the other simply by changing a value in memory.
03:50 Instead, operating systems provide inter-process communication tools, also known as IPC, so that two processes can talk to each other. You can think of these as a chat channel that two or more processes can post messages to and read messages from.
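Here’s a minimal sketch of that chat-channel idea using one of the IPC tools in Python’s standard library, a multiprocessing.Queue. The message text is just an example:

```python
from multiprocessing import Process, Queue

def sender(channel):
    # Post a message on the shared channel for another process to read.
    channel.put("hello from the child process")

if __name__ == "__main__":
    channel = Queue()  # the "chat channel" shared between processes
    child = Process(target=sender, args=(channel,))
    child.start()
    print(channel.get())  # the parent reads the child's message
    child.join()
```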
04:07 So how does multi-processing affect our previous time-slicing picture? Well, not by much actually. You start off with your program, which then asks the operating system to create a new process.
04:18 On Unix-based systems, this is called forking your process,
04:23 and from then on the picture’s the same. Just now the operating system is time-slicing processes instead of programs. Processes get a complete copy of the code and their own memory space, and so can be considered to have a lot of overhead.
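For the Unix-based case, a bare-bones fork can be sketched directly with os.fork, rather than through the higher-level multiprocessing module. This is an illustrative sketch only, and it won’t run on Windows:

```python
import os

pid = os.fork()  # the OS makes a copy of the current process
if pid == 0:
    # In the child process, os.fork() returns 0.
    print(f"child, pid={os.getpid()}")
    os._exit(0)  # exit the child without running the parent's remaining code
else:
    # In the parent process, os.fork() returns the child's process ID.
    os.waitpid(pid, 0)  # wait for the child to finish
    print(f"parent, pid={os.getpid()}")
```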
04:38 Threads address this overhead problem. I’ll talk about those next.