Using Semaphores

00:00 In the previous lesson, I showed you reentrant locks and how they can help with the nested locking problem. In this lesson, I’ll show you a higher-level synchronization primitive: the semaphore.

00:11 A semaphore is a thread-safe counter. In fact, you can implement your own by building an object with a counter protected by a lock. A semaphore’s main use is to protect a resource.
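
If you’re curious what that might look like, here’s a rough do-it-yourself sketch, not the lesson’s code: a counter guarded by a Condition, which is a lock paired with wait/notify. In practice you’d reach for the built-in threading.Semaphore instead.

```python
import threading

class CountingSemaphore:
    """A toy semaphore: a counter protected by a lock (via a Condition)."""

    def __init__(self, value=1):
        self._cond = threading.Condition()  # wraps a lock, adds wait/notify
        self._value = value

    def acquire(self):
        with self._cond:
            while self._value == 0:
                self._cond.wait()   # block until another thread releases
            self._value -= 1

    def release(self):
        with self._cond:
            self._value += 1
            self._cond.notify()     # wake one waiting thread
```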

00:21 Say you’ve got a connection to a database and you don’t want to overwhelm it. You could put a semaphore around it, limiting how many threads can use the connection at a time.

00:30 Semaphores have the same semantics as locks, including .acquire() and .release() methods, but a semaphore allows multiple threads, up to the counter’s maximum value, to hold it at the same time.
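
As a tiny illustration of those semantics (this snippet isn’t from the lesson’s code), here’s threading.Semaphore with a maximum of two holders:

```python
import threading

sem = threading.Semaphore(2)  # up to two threads can hold it at once

sem.acquire()   # counter goes 2 -> 1
sem.acquire()   # counter goes 1 -> 0
# A third acquire() here would block until someone calls release().
sem.release()   # counter goes 0 -> 1
sem.release()   # counter goes 1 -> 2

# Like a Lock, a Semaphore can also be used as a context manager:
with sem:
    pass  # acquired on entry, released on exit
```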

00:43 I’ll keep with the banking theme, but let’s move on to a new problem. Say you’ve got two tellers at a bank and a bunch of customers. As our customers are independent of each other, they’ll each get their own thread, but they can’t interact with a teller unless they can acquire the tellers’ semaphore. At the top here, I’m creating the Semaphore object with a max value of 2. Inside the serve_customer() routine, I’m printing out how many seconds have passed since the program started, and the fact that a customer is lining up for the tellers.

01:15 This is a context manager block used to acquire the semaphore. If the current value of the semaphore is one or two, then the block can be entered. Otherwise, the thread has to wait here. Inside the block, I simulate the time it takes to serve a customer by sleeping a random amount, and then print out the time when the customer finished.

01:37 Since I’ve got a thread for each customer this time, instead of hard-coding the future responses, I’m going to put all of the futures in a list. Again, the futures aren’t necessary for the code to work, but if there is an error, you won’t see it unless you call .result().

01:52 Here’s the thread pool for this time around, creating one thread for each of our five customers. Inside, each thread calls serve_customer(), and the resulting future gets tracked.

02:06 Finally, at the bottom, I wait for all the futures to return their results.
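
Here’s a sketch of the kind of program being described. serve_customer() and the Semaphore(2) come straight from the lesson; the variable names, the random sleep range, and the use of ThreadPoolExecutor with submit() are my guesses at the rest.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor
from threading import Semaphore

tellers = Semaphore(2)   # only two tellers, so at most two customers at once
start = time.time()

def serve_customer(name):
    print(f"{time.time() - start:.2f}: customer {name} is lining up")
    with tellers:                         # wait here until a teller is free
        time.sleep(random.uniform(1, 3))  # simulate the transaction
        print(f"{time.time() - start:.2f}: customer {name} is done")

with ThreadPoolExecutor(max_workers=5) as pool:
    # One thread per customer; keep the futures so errors surface
    # when .result() is called.
    futures = [pool.submit(serve_customer, name) for name in "ABCDE"]
    for future in futures:
        future.result()
```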

02:11 Let’s go do this.

02:23 Let’s talk about what happened. In the first four lines, customers A and B queued and acquired the semaphore immediately, so they got served. Add your own dated joke about a dance competition movie here.

02:35 Then customers C, D, and E attempted to do the same thing, but they blocked waiting for the semaphore. At the one-second mark, one of the threads finished with B, allowing C to get the semaphore.

02:48 Two seconds later, the other thread finished with A, allowing D to be served, and very shortly after that, C finished allowing E to be served. After that, D and E were complete, and the program was done.

03:02 Semaphores are a useful tool for resource management and can be key to making sure you don’t bog a connection down. Without that kind of control, your bank tellers might quit on you.

03:14 So far, the threads haven’t had to communicate with each other. In the next lesson, I’ll show you events, a thread-safe way of allowing this to happen.
