
Speed Up Python With Concurrency (Overview)

Concurrency is the act of having your computer do multiple things at the same time. If you’ve heard lots of talk about asyncio being added to Python but are curious how it compares to other concurrency methods or are wondering what concurrency is and how it might speed up your program, you’ve come to the right place.

In this course, you’ll learn the following:

  • How I/O-bound programs are affected by latency
  • Which concurrent programming patterns to use
  • What the differences are between the Python concurrency libraries
  • How to write code that uses the threading, asyncio, and multiprocessing libraries

Sample code was tested using Python 3.8.5. Much of the asyncio library has been in flux since Python 3.4, so it's recommended that you use at least Python 3.7 for the asyncio portions of the course.


Sample Code (.zip, 8.2 KB)

Course Slides (.pdf, 1.6 MB)

00:00 Welcome to Speed Up Python with Concurrency. My name is Chris and I will be your guide. In this course, you will learn what the different types of concurrency are; how to use the three standard libraries in Python that cover concurrency—threading, asyncio, and multiprocessing; and when to use concurrency and when possibly to avoid it. Before I get started, a quick note on versions.

00:26 The code that I demo inside of this course was tested with Python 3.8.5. The asyncio library in particular has been under active development in the last few Python versions. If you want to play around with asyncio, I recommend using at least Python 3.7. Anything before that, and you’ll have to make some changes to get the code that you see in this course to work. So, what is concurrency?

00:51 It’s the simple act of doing multiple things at the same time inside of your computer. For the moment, just consider a single processor computer. It wasn’t that long ago that these were very common in the household.

01:03 It would be easy to think that a single processor computer was actually doing multiple things at a time—but it wasn’t. This was an illusion. Think of it like a film in a film projector.

01:13 The projector is showing you multiple frames per second, switching between these frames quickly, providing the illusion of motion. A single processor computer is doing the same thing.

01:24 A lot of computing workloads are I/O-bound. This means they’re waiting for the disk or network. Because a lot of programs spend a lot of time waiting, your operating system can take advantage of this and switch back and forth between the programs, providing the illusion that multiple programs are running at the same time, when in reality, a CPU can only do one thing at a time. In the last 10 years or so, multiple processor computers have become cheap enough that they’ve become common at home.
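The overlap of waiting time described above can be sketched with `time.sleep()` standing in for a disk or network wait. This is a minimal illustration, not code from the course: the task function and the 0.2-second duration are made up for the example.

```python
import threading
import time

def fake_io_task(duration):
    # time.sleep() releases control while "waiting," just like a
    # real I/O-bound task waiting on disk or network
    time.sleep(duration)

start = time.perf_counter()
threads = [threading.Thread(target=fake_io_task, args=(0.2,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The five 0.2-second waits overlap, so the total is close to
# 0.2 seconds rather than the 1.0 second a sequential run would take
print(f"elapsed: {elapsed:.2f}s")
```

Only one thread executes Python bytecode at any instant, but because each thread spends nearly all its time waiting, the operating system switches between them and the waits overlap.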

01:52 This course will talk about the differences between creating concurrency in I/O-bound situations versus processor-bound situations, and the techniques involved in solving both of these kinds of problems.

02:04 Python provides three different mechanisms in the standard library for concurrency: threading, asyncio, and multiprocessing. Threading and asyncio are two different mechanisms for handling I/O-bound computing. Multiprocessing is how you actually use multiple processors. To solve I/O-bound problems, you use the first two; to solve multiprocessor problems, you use the third.
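To contrast asyncio with the threading approach, the same overlapping-wait idea can be sketched on a single thread with an event loop. This is an illustrative sketch (the task function and timings are invented), and it needs Python 3.7+ for `asyncio.run()`:

```python
import asyncio
import time

async def fake_io_task(duration):
    # await hands control back to the event loop during the wait,
    # so other tasks can run on the same thread
    await asyncio.sleep(duration)

async def main():
    # Schedule five waits concurrently and wait for all of them
    await asyncio.gather(*(fake_io_task(0.2) for _ in range(5)))

start = time.perf_counter()
asyncio.run(main())
elapsed = time.perf_counter() - start

# As with threads, the waits overlap: roughly 0.2s total, not 1.0s
print(f"elapsed: {elapsed:.2f}s")
```

The difference is where the switching happens: the operating system preempts threads whenever it chooses, while asyncio tasks cooperatively yield only at `await` points.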

02:30 One thing to be aware of when dealing with concurrency in Python is the GIL, or the Global Interpreter Lock. For now, just understand that this is a locking mechanism that makes sure only one thing is happening at a time inside of the Python interpreter.

02:45 When programming for concurrency, the GIL can get in your way and you have to understand how to use it and how to get around it. To delve into I/O-bound concurrency, you first need to understand latency in processing, so I’ll be talking about that next up.
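One common way to get around the GIL for processor-bound work, previewed here as a sketch, is multiprocessing: each worker process runs its own interpreter with its own GIL, so the work can use multiple cores. The `cpu_task` function and its inputs are invented for illustration:

```python
import multiprocessing

def cpu_task(n):
    # Pure-Python arithmetic like this holds the GIL in a thread,
    # but each worker process below has its own interpreter and GIL
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Distribute the CPU-bound work across four worker processes
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(cpu_task, [10_000, 20_000, 30_000, 40_000])
    print(results)
```

The `if __name__ == "__main__"` guard matters here: on platforms that spawn worker processes by re-importing the module, it prevents each worker from recursively creating its own pool.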
