Parallel Processing With multiprocessing: Conclusion
In this section, you learned how to do parallel programming in Python using functional programming principles and the
multiprocessing module. You used the example data set based on an immutable data structure that you previously transformed using the built-in
map() function. But this time, you processed the data in parallel, across multiple CPU cores, using the Python
multiprocessing module available in the standard library.
You saw, step by step, how to parallelize an existing piece of Python code so that it can execute much faster and leverage all of your available CPU cores. You learned how to use the
multiprocessing.Pool class and its parallel
map implementation, which makes parallelizing most Python code that’s written in a functional style a breeze.
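As a minimal sketch of that pattern (the `transform()` function and data here are hypothetical placeholders, not the course's actual data set), parallelizing a functional-style `map()` can look like this:

```python
import multiprocessing

def transform(x):
    # Stand-in for an expensive, purely functional computation
    # (hypothetical; the course uses a richer immutable data set).
    return x * x

if __name__ == "__main__":
    data = list(range(10))

    # Pool.map() is a drop-in parallel replacement for the built-in map():
    # it splits `data` across worker processes, one per CPU core by default.
    with multiprocessing.Pool() as pool:
        result = pool.map(transform, data)

    print(result)  # same ordering as the sequential map() would produce
```

Note that `pool.map()` preserves the order of its inputs, so the result is identical to what the sequential `map()` would return.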
You built a little testbed program that you used to measure execution time with the
time.time() function, so that you could compare the sequential and parallel (multiprocessing-based) implementations of the same algorithm. Stay tuned for the next section in the course, where you’ll learn how to make your Python programs multithreaded using the
concurrent.futures module as an alternative way to implement concurrency.
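The timing testbed described above can be sketched roughly like this (the `transform()` here simulates expensive work with a `time.sleep()`, which is an assumption for illustration, not the course's actual computation):

```python
import multiprocessing
import time

def transform(x):
    # Simulated expensive work (hypothetical placeholder).
    time.sleep(0.05)
    return x * 2

def timed(label, func):
    # Measure wall-clock time with time.time(), as in the testbed.
    start = time.time()
    result = func()
    print(f"{label} took {time.time() - start:.2f}s")
    return result

if __name__ == "__main__":
    data = list(range(8))

    sequential = timed("sequential map()", lambda: list(map(transform, data)))

    with multiprocessing.Pool() as pool:
        parallel = timed("parallel pool.map()", lambda: pool.map(transform, data))

    assert sequential == parallel  # same results, computed in parallel
```

On a multi-core machine, the parallel run should finish in a fraction of the sequential run's time, since the sleeps overlap across worker processes.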
I just love this way of parallel programming, because it is very easy to do if you write your code in a functional programming style. And if you’re using a
map() function, then it’s very easy to parallelize this code, as you’ve seen here, right?
I didn’t change anything here, really. I mean, I made some cosmetic changes just to be able to trace what’s going on with this
transform() function, but, really, all I did was change these two lines of code.
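That two-line swap might look roughly like this (a sketch only; the actual `transform()` function and data in the course differ):

```python
import multiprocessing

def transform(x):
    # Hypothetical stand-in for the course's transform() function.
    return x + 1

if __name__ == "__main__":
    data = [1, 2, 3, 4]

    # Sequential version:
    # result = list(map(transform, data))

    # Parallel version -- swapping in these two lines is the whole change:
    with multiprocessing.Pool() as pool:
        result = pool.map(transform, data)

    print(result)
```

Because the code was already written in a functional style, with `transform()` free of side effects, nothing else needs to change.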
00:29 And now, all of a sudden, these calculations are running in parallel across multiple CPU cores. I think that’s a really powerful concept. Now, there’s more to this, of course. This is really just scratching the surface, but I hope it’s enough for you to see the value that this programming model brings with it.
00:48 I really want to encourage you with this video to go out and do your own experimentation, right? Maybe turn this into a little web scraper. Maybe you can do some I/O or some more expensive computations here, and then use the same technique to speed this up and actually get your result a lot faster. If you imagine you had to fetch 10,000 websites—well, if you did that sequentially, this could take a really long time, but if you can parallelize them and fetch these websites in batches of 100 items at a time, then you’re going to get a huge speedup.
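As a rough sketch of that web-scraping idea, the batching can be expressed with `Pool.map()`’s `chunksize` argument. The `fetch()` function here is a fake that returns a string instead of doing real network I/O; in practice you might call `urllib.request.urlopen()` inside it:

```python
import multiprocessing

def fetch(url):
    # Placeholder for real network I/O (hypothetical); in practice
    # you might use urllib.request.urlopen(url).read() here.
    return f"<html>contents of {url}</html>"

if __name__ == "__main__":
    urls = [f"https://example.com/page/{i}" for i in range(10_000)]

    with multiprocessing.Pool(processes=8) as pool:
        # chunksize=100 hands each worker batches of 100 URLs at a time,
        # reducing inter-process communication overhead.
        pages = pool.map(fetch, urls, chunksize=100)

    print(len(pages))
```

For I/O-bound work like this, threads or `concurrent.futures` (covered in the next section) are often an even better fit than processes, but the same `map()`-based structure carries over unchanged.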