Channel: Planet Python

Mike Driscoll: Python 3 Concurrency – The concurrent.futures Module

The concurrent.futures module was added in Python 3.2. According to the Python documentation, it provides the developer with a high-level interface for asynchronously executing callables. Basically, concurrent.futures is an abstraction layer on top of Python’s threading and multiprocessing modules that simplifies using them. However, it should be noted that while the abstraction layer simplifies the usage of these modules, it also removes a lot of their flexibility, so if you need to do something custom, then this might not be the best module for you.

Concurrent.futures includes an abstract class called Executor. It cannot be used directly though, so you will need to use one of its two subclasses: ThreadPoolExecutor or ProcessPoolExecutor. As you’ve probably guessed, these two subclasses are mapped to Python’s threading and multiprocessing APIs respectively. Both of these subclasses will provide a pool that you can put threads or processes into.

The term future has a special meaning in computer science. It refers to a construct that can be used for synchronization when using concurrent programming techniques. The future is actually a way to describe the result of a process or thread before it has finished processing. I like to think of them as a pending result.
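As a quick illustration of that "pending result" idea (a minimal sketch of my own, not from the article), submit returns a Future immediately, and calling its result method blocks until the pending result is actually ready:

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=1) as executor:
    # submit() hands back a Future right away, before the call has run
    future = executor.submit(pow, 2, 10)
    # result() blocks until the pending result is available
    print(future.result())  # prints 1024
    # once the result has been retrieved, the future reports itself done
    print(future.done())    # prints True
```

You can also poll a future with done() or attach a callback with add_done_callback() instead of blocking on result().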


Creating a Pool

Creating a pool of workers is extremely easy when you’re using the concurrent.futures module. Let’s start out by rewriting our downloading code from my asyncio article so that it now uses the concurrent.futures module. Here’s my version:

import os
import urllib.request

from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import as_completed


def downloader(url):
    """
    Downloads the specified URL and saves it to disk
    """
    req = urllib.request.urlopen(url)
    filename = os.path.basename(url)
    ext = os.path.splitext(url)[1]
    if not ext:
        raise RuntimeError('URL does not contain an extension')

    with open(filename, 'wb') as file_handle:
        while True:
            chunk = req.read(1024)
            if not chunk:
                break
            file_handle.write(chunk)
    msg = 'Finished downloading {filename}'.format(filename=filename)
    return msg


def main(urls):
    """
    Create a thread pool and download specified urls
    """
    with ThreadPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(downloader, url) for url in urls]
        for future in as_completed(futures):
            print(future.result())


if __name__ == '__main__':
    urls = ["http://www.irs.gov/pub/irs-pdf/f1040.pdf",
            "http://www.irs.gov/pub/irs-pdf/f1040a.pdf",
            "http://www.irs.gov/pub/irs-pdf/f1040ez.pdf",
            "http://www.irs.gov/pub/irs-pdf/f1040es.pdf",
            "http://www.irs.gov/pub/irs-pdf/f1040sb.pdf"]
    main(urls)

First off we do the imports that we need. Then we create our downloader function. I went ahead and updated it slightly so it checks to see if the URL has an extension on the end of it. If it doesn’t, then we’ll raise a RuntimeError. Next we create a main function, which is where the thread pool gets instantiated. You can actually use Python’s with statement with the ThreadPoolExecutor and the ProcessPoolExecutor, which is pretty handy.

Anyway, we set our pool so that it has five workers. Then we use a list comprehension to create a group of futures (or jobs) and finally we call the as_completed function. This handy function is an iterator that yields the futures as they complete. As each one completes, we print out the result, which is the string that was returned from our downloader function.

If the function we were using was very computation intensive, then we could easily swap out ThreadPoolExecutor for ProcessPoolExecutor and only have a one line code change.
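To sketch what that swap looks like (using a hypothetical CPU-bound stand-in task of my own, not from the article), only the executor class changes:

```python
from concurrent.futures import ProcessPoolExecutor


def sum_of_squares(n):
    # CPU-bound stand-in for a computation-intensive task
    return sum(i * i for i in range(n))


if __name__ == '__main__':
    # same pool code as before; only ThreadPoolExecutor was
    # swapped for ProcessPoolExecutor
    with ProcessPoolExecutor(max_workers=5) as executor:
        for result in executor.map(sum_of_squares, [10, 100, 1000]):
            print(result)
```

Note that process pool code should live under the if __name__ == '__main__' guard, since worker processes may re-import the module.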

We can clean this code up a bit by using the executor’s map method. Let’s rewrite our pool code slightly to take advantage of this:

import os
import urllib.request

from concurrent.futures import ThreadPoolExecutor


def downloader(url):
    """
    Downloads the specified URL and saves it to disk
    """
    req = urllib.request.urlopen(url)
    filename = os.path.basename(url)
    ext = os.path.splitext(url)[1]
    if not ext:
        raise RuntimeError('URL does not contain an extension')

    with open(filename, 'wb') as file_handle:
        while True:
            chunk = req.read(1024)
            if not chunk:
                break
            file_handle.write(chunk)
    msg = 'Finished downloading {filename}'.format(filename=filename)
    return msg


def main(urls):
    """
    Create a thread pool and download specified urls
    """
    with ThreadPoolExecutor(max_workers=5) as executor:
        return executor.map(downloader, urls, timeout=60)


if __name__ == '__main__':
    urls = ["http://www.irs.gov/pub/irs-pdf/f1040.pdf",
            "http://www.irs.gov/pub/irs-pdf/f1040a.pdf",
            "http://www.irs.gov/pub/irs-pdf/f1040ez.pdf",
            "http://www.irs.gov/pub/irs-pdf/f1040es.pdf",
            "http://www.irs.gov/pub/irs-pdf/f1040sb.pdf"]
    results = main(urls)
    for result in results:
        print(result)

The primary difference here is in the main function, which has been reduced by two lines of code. The map method is just like Python’s map in that it takes a function and an iterable and then calls the function for each item in the iterable. You can also add a timeout so that if one of your workers hangs, retrieving its result will stop with a TimeoutError rather than blocking forever. Lastly, starting in Python 3.5, they added a chunksize argument, which can help performance when using the process pool with a very large iterable. However, if you happen to be using the thread pool, chunksize has no effect.
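To make the timeout behavior concrete, here is a small sketch of my own (slow_task is a deliberately slow stand-in, not from the article). The timeout is measured from the call to map, and iterating the results raises a TimeoutError once it expires:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError


def slow_task(seconds):
    # stand-in for a worker that hangs
    time.sleep(seconds)
    return seconds


with ThreadPoolExecutor(max_workers=2) as executor:
    results = executor.map(slow_task, [0.1, 2], timeout=0.5)
    try:
        for result in results:
            print(result)  # the 0.1-second task finishes in time
    except TimeoutError:
        print('a worker exceeded the timeout')
```

Note that the timeout only limits how long result retrieval blocks; it does not cancel the slow worker, so the with block still waits for it to finish on exit.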


Deadlocks

One of the pitfalls of the concurrent.futures module is that you can accidentally create deadlocks when the callable associated with a Future is also waiting on the result of another Future. This sounds kind of confusing, so let’s look at an example:

from concurrent.futures import ThreadPoolExecutor


def wait_forever():
    """
    This function will wait forever if there's only one
    thread assigned to the pool
    """
    my_future = executor.submit(zip, [1, 2, 3], [4, 5, 6])
    result = my_future.result()
    print(result)


if __name__ == '__main__':
    executor = ThreadPoolExecutor(max_workers=1)
    executor.submit(wait_forever)

Here we import the ThreadPoolExecutor class and create an instance of it. Take note that we set its maximum number of workers to one thread. Then we submit our function, wait_forever. Inside of our function, we submit another job to the thread pool that is supposed to zip two lists together, get the result of that operation and print it out. However we’ve just created a deadlock! The reason is that we are having one Future wait for another Future to finish. Basically we want a pending operation to wait on another pending operation which doesn’t work very well.

Let’s rewrite the code a bit to make it work:

from concurrent.futures import ThreadPoolExecutor


def wait_forever():
    """
    This function will wait forever if there's only one
    thread assigned to the pool
    """
    my_future = executor.submit(zip, [1, 2, 3], [4, 5, 6])
    return my_future


if __name__ == '__main__':
    executor = ThreadPoolExecutor(max_workers=3)
    fut = executor.submit(wait_forever)
    result = fut.result()
    print(list(result.result()))

In this case, we just return the inner future from the function and then ask for its result. Calling result on the outer future actually gives us back the inner future. If we then call the result method on that nested future, we get a zip object back, so to see the actual result, we wrap the zip object with Python’s list function and print it out.


Wrapping Up

Now you have another neat concurrency tool to use. You can easily create thread or process pools depending on your needs. Should you need to run a process that is network or I/O bound, you can use the thread pool class. If you have a computationally heavy task, then you’ll want to use the process pool class instead. Just be careful of calling futures incorrectly or you might get a deadlock.

