
The concurrent.futures Module


In this lesson, you’ll see that using the concurrent.futures module is the newer way of doing asynchronous computation in Python. It has a clean interface for working with process pools and thread pools and is only available in Python 3.

You’ll replace your multiprocessing code with code from the concurrent.futures module. When working with this new module, you use various classes that have Executor in their names. There are different execution strategies for how your code is run in parallel, whether that’s across multiple processes or multiple threads within a single process, and they all follow the context manager protocol.
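The shared `Executor` interface described above can be sketched like this. The `square()` task is a hypothetical stand-in for the lesson's `transform()` function; swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` changes the execution strategy without changing the surrounding code.

```python
import concurrent.futures


def square(n):
    # A stand-in task; the lesson uses a transform() function instead.
    return n * n


# ThreadPoolExecutor and ProcessPoolExecutor share the same Executor
# interface, and both follow the context manager protocol, so the
# pool is shut down cleanly when the "with" block exits.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(square, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```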

00:00 All right. Let’s replace this code with the concurrent.futures module.

00:07 This is the new and shiny way to do asynchronous computation in Python. It has a clean interface for working with process pools and also thread pools,

00:20 and it’s kind of cool. It’s only available in Python 3. The first thing I’m going to show you is how we can replace this multiprocessing code here with code from the concurrent.futures module.

00:29 Let’s just get this set up. Here, I can go concurrent.futures.ProcessPoolExecutor(). The way this interface works in the concurrent.futures module is that you have these different classes that are called executors, and they represent different execution strategies for how your code is run in parallel, whether that’s across multiple processes or multiple threads within a single process. They all follow the context manager protocol, so we can just enter this executor here and then do stuff with it.

01:08 It makes it very easy to do the cleanup here, as well. Here, I can just go result = executor.map() and—again—you can see here the central importance of this map() function as a parallel processing primitive. I’m just going to pass it my transform() function and my input data, and hopefully, this is going to run. All right, now as you can see here, we’re pretty much getting the same result that we got with the multiprocessing-based implementation. Again, this is fanning out and running across four processes in parallel—it’s doing the transform calculations in parallel. It takes about two seconds to complete, and then we’re getting this <itertools.chain object>.
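The replacement described here can be sketched as follows. The `transform()` function below is a hypothetical stand-in for the one built earlier in the course; the point is that `ProcessPoolExecutor` fans the work out across worker processes, much like `multiprocessing.Pool`, but through the newer `Executor` API.

```python
import concurrent.futures


def transform(x):
    # Hypothetical stand-in for the lesson's transform() function.
    return x * 2


if __name__ == "__main__":
    # Entering the executor as a context manager handles pool setup
    # and cleanup for us.
    with concurrent.futures.ProcessPoolExecutor() as executor:
        result = executor.map(transform, [1, 2, 3, 4])
        # Unlike multiprocessing.Pool.map(), this gives back a lazy
        # iterator rather than a list, so we consume it to see values.
        print(list(result))  # [2, 4, 6, 8]
```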

01:54 This is maybe a small difference from what you’ve seen before: multiprocessing.Pool.map() gives you a list of results, whereas this will give you an iterator, here.

02:08 And if I wanted to convert that back into an immutable data structure,

02:15 I’d probably just call tuple() on it. And again, you know, maybe you want to go back to some of the previous videos to see why I needed to do that—I explained it, I think, in the video on the map() operation that’s built into Python directly. Again, we’re going to rerun this, and now we’re getting the expected output because we’re consuming this iterator, turning it into a tuple here with all these output elements so we can print them nice and cleanly.
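The tuple conversion works like this. The `chunks` data below is a made-up example of per-process results; the lesson's reducer joined them with itertools.chain, which is why the raw output printed as an <itertools.chain object> until it was consumed.

```python
import itertools

# Hypothetical per-process result chunks, as a stand-in for the
# lesson's real output.
chunks = ([1, 2], [3, 4])

# Joining the chunks yields a lazy itertools.chain iterator, which
# prints as "<itertools.chain object at 0x...>" until consumed.
flat = itertools.chain(*chunks)

# Calling tuple() consumes the iterator into an immutable sequence
# that prints nice and cleanly.
print(tuple(flat))  # (1, 2, 3, 4)

# Note: the iterator is now exhausted; a second tuple(flat) would
# give an empty tuple.
print(tuple(flat))  # ()
```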
