New in version 2.6.
multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
Warning
Some of this package’s functionality requires a functioning shared semaphore implementation on the host operating system. Without one, the multiprocessing.synchronize module will be disabled, and attempts to import it will result in an ImportError. See issue 3770 for additional information.
In multiprocessing, processes are spawned by creating a Process object and then calling its start() method. Process follows the API of threading.Thread. A trivial example of a multiprocess program is
from multiprocessing import Process
def f(name):
    print 'hello', name

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
Here the function f is run in a child process.
For an explanation of why (on Windows) the if __name__ == '__main__' part is necessary, see Programming guidelines.
multiprocessing supports two types of communication channel between processes:
Queues
The Queue class is a near clone of Queue.Queue. For example:
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()

Queues are thread and process safe.
Pipes
The Pipe() function returns a pair of connection objects connected by a pipe which by default is duplex (two-way). For example:
from multiprocessing import Process, Pipe

def f(conn):
    conn.send([42, None, 'hello'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print parent_conn.recv()    # prints "[42, None, 'hello']"
    p.join()

The two connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has send() and recv() methods (among others). Note that data in a pipe may become corrupted if two processes (or threads) try to read from or write to the same end of the pipe at the same time. Of course there is no risk of corruption from processes using different ends of the pipe at the same time.
multiprocessing contains equivalents of all the synchronization primitives from threading. For instance one can use a lock to ensure that only one process prints to standard output at a time:
from multiprocessing import Process, Lock
def f(l, i):
    l.acquire()
    print 'hello world', i
    l.release()

if __name__ == '__main__':
    lock = Lock()
    for num in range(10):
        Process(target=f, args=(lock, num)).start()
Without using the lock, output from the different processes is liable to get all mixed up.
As mentioned above, when doing concurrent programming it is usually best to avoid using shared state as far as possible. This is particularly true when using multiple processes.
However, if you really do need to use some shared data then multiprocessing provides a couple of ways of doing so.
Shared memory
Data can be stored in a shared memory map using Value or Array. For example, the following code
from multiprocessing import Process, Value, Array

def f(n, a):
    n.value = 3.1415927
    for i in range(len(a)):
        a[i] = -a[i]

if __name__ == '__main__':
    num = Value('d', 0.0)
    arr = Array('i', range(10))

    p = Process(target=f, args=(num, arr))
    p.start()
    p.join()

    print num.value
    print arr[:]

will print

3.1415927
[0, -1, -2, -3, -4, -5, -6, -7, -8, -9]

The 'd' and 'i' arguments used when creating num and arr are typecodes of the kind used by the array module: 'd' indicates a double precision float and 'i' indicates a signed integer. These shared objects will be process and thread safe.
For more flexibility in using shared memory one can use the multiprocessing.sharedctypes module which supports the creation of arbitrary ctypes objects allocated from shared memory.
Server process
A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies.
A manager returned by Manager() will support types list, dict, Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event, Queue, Value and Array. For example,
from multiprocessing import Process, Manager

def f(d, l):
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()

if __name__ == '__main__':
    manager = Manager()

    d = manager.dict()
    l = manager.list(range(10))

    p = Process(target=f, args=(d, l))
    p.start()
    p.join()

    print d
    print l

will print

{0.25: None, 1: '1', '2': 2}
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

Server process managers are more flexible than using shared memory objects because they can be made to support arbitrary object types. Also, a single manager can be shared by processes on different computers over a network. They are, however, slower than using shared memory.
The Pool class represents a pool of worker processes. It has methods which allow tasks to be offloaded to the worker processes in a few different ways.
For example:
from multiprocessing import Pool
def f(x):
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)              # start 4 worker processes
    result = pool.apply_async(f, [10])    # evaluate "f(10)" asynchronously
    print result.get(timeout=1)           # prints "100" unless your computer is *very* slow
    print pool.map(f, range(10))          # prints "[0, 1, 4,..., 81]"
The multiprocessing package mostly replicates the API of the threading module.
Process objects represent activity that is run in a separate process. The Process class has equivalents of all the methods of threading.Thread.
The constructor should always be called with keyword arguments. group should always be None; it exists solely for compatibility with threading.Thread. target is the callable object to be invoked by the run() method. It defaults to None, meaning nothing is called. name is the process name. By default, a unique name is constructed of the form 'Process-N1:N2:...:Nk' where N1,N2,...,Nk is a sequence of integers whose length is determined by the generation of the process. args is the argument tuple for the target invocation. kwargs is a dictionary of keyword arguments for the target invocation. By default, no arguments are passed to target.
If a subclass overrides the constructor, it must make sure it invokes the base class constructor (Process.__init__()) before doing anything else to the process.
Method representing the process’s activity.
You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.
Start the process’s activity.
This must be called at most once per process object. It arranges for the object’s run() method to be invoked in a separate process.
Block the calling thread until the process whose join() method is called terminates or until the optional timeout occurs.
If timeout is None then there is no timeout.
A process can be joined many times.
A process cannot join itself because this would cause a deadlock. It is an error to attempt to join a process before it has been started.
The process’s name.
The name is a string used for identification purposes only. It has no semantics. Multiple processes may be given the same name. The initial name is set by the constructor.
Return whether the process is alive.
Roughly, a process object is alive from the moment the start() method returns until the child process terminates.
The process’s daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.
Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits.
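A minimal sketch of this behaviour (the sleep lengths are chosen arbitrarily for illustration):

from multiprocessing import Process
import time

def f():
    time.sleep(100)        # would normally outlive the parent

if __name__ == '__main__':
    p = Process(target=f)
    p.daemon = True        # must be set before start()
    p.start()
    time.sleep(1)
    # the daemonic child is terminated when the main process exits here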
In addition to the threading.Thread API, Process objects also support the following attributes and methods:
The process’s authentication key (a byte string).
When multiprocessing is initialized the main process is assigned a random string using os.urandom().
When a Process object is created, it will inherit the authentication key of its parent process, although this may be changed by setting authkey to another byte string.
See Authentication keys.
Terminate the process. On Unix this is done using the SIGTERM signal; on Windows TerminateProcess is used. Note that exit handlers and finally clauses, etc., will not be executed.
Note that descendant processes of the process will not be terminated – they will simply become orphaned.
Warning
If this method is used when the associated process is using a pipe or queue then the pipe or queue is liable to become corrupted and may become unusable by other processes. Similarly, if the process has acquired a lock or semaphore etc. then terminating it is liable to cause other processes to deadlock.
Note that the start(), join(), is_alive(), terminate() and exitcode methods should only be called by the process that created the process object.
Example usage of some of the methods of Process:
>>> import multiprocessing, time, signal
>>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
>>> print p, p.is_alive()
<Process(Process-1, initial)> False
>>> p.start()
>>> print p, p.is_alive()
<Process(Process-1, started)> True
>>> p.terminate()
>>> print p, p.is_alive()
<Process(Process-1, stopped[SIGTERM])> False
>>> p.exitcode == -signal.SIGTERM
True
Exception raised by Connection.recv_bytes_into() when the supplied buffer object is too small for the message read.
If e is an instance of BufferTooShort then e.args[0] will give the message as a byte string.
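A short sketch of catching this exception (the message and buffer sizes are chosen arbitrarily):

from multiprocessing import Pipe, BufferTooShort
from array import array

a, b = Pipe()
a.send_bytes('x' * 100)
buf = array('c', ' ' * 10)          # deliberately too small
try:
    b.recv_bytes_into(buf)
except BufferTooShort, e:
    message = e.args[0]             # the complete message as a byte string
    print len(message)              # prints "100"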
When using multiple processes, one generally uses message passing for communication between processes and avoids having to use any synchronization primitives like locks.
For passing messages one can use Pipe() (for a connection between two processes) or a queue (which allows multiple producers and consumers).
The Queue and JoinableQueue types are multi-producer, multi-consumer FIFO queues modelled on the Queue.Queue class in the standard library. They differ in that Queue lacks the task_done() and join() methods introduced into Python 2.5’s Queue.Queue class.
If you use JoinableQueue then you must call JoinableQueue.task_done() for each task removed from the queue or else the semaphore used to count the number of unfinished tasks may eventually overflow, raising an exception.
Note that one can also create a shared queue by using a manager object – see Managers.
Note
multiprocessing uses the usual Queue.Empty and Queue.Full exceptions to signal a timeout. They are not available in the multiprocessing namespace so you need to import them from Queue.
Warning
If a process is killed using Process.terminate() or os.kill() while it is trying to use a Queue, then the data in the queue is likely to become corrupted. This may cause any other processes to get an exception when it tries to use the queue later on.
Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread()), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.
For an example of the usage of queues for interprocess communication see Examples.
Returns a pair (conn1, conn2) of Connection objects representing the ends of a pipe.
If duplex is True (the default) then the pipe is bidirectional. If duplex is False then the pipe is unidirectional: conn1 can only be used for receiving messages and conn2 can only be used for sending messages.
Returns a process shared queue implemented using a pipe and a few locks/semaphores. When a process first puts an item on the queue a feeder thread is started which transfers objects from a buffer into the pipe.
The usual Queue.Empty and Queue.Full exceptions from the standard library’s Queue module are raised to signal timeouts.
Queue implements all the methods of Queue.Queue except for task_done() and join().
Return the approximate size of the queue. Because of multithreading/multiprocessing semantics, this number is not reliable.
Note that this may raise NotImplementedError on Unix platforms like Mac OS X where sem_getvalue() is not implemented.
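Code that must run on such platforms can guard the call; a small sketch, assuming q is an existing Queue:

try:
    size = q.qsize()
except NotImplementedError:    # e.g. on Mac OS X
    size = None                # fall back to not reporting a size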
multiprocessing.Queue has a few additional methods not found in Queue.Queue. These methods are usually unnecessary for most code:
Join the background thread. This can only be used after close() has been called. It blocks until the background thread exits, ensuring that all data in the buffer has been flushed to the pipe.
By default if a process is not the creator of the queue then on exit it will attempt to join the queue’s background thread. The process can call cancel_join_thread() to make join_thread() do nothing.
JoinableQueue, a Queue subclass, is a queue which additionally has task_done() and join() methods.
Indicate that a formerly enqueued task is complete. Used by queue consumer threads. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete.
If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue).
Raises a ValueError if called more times than there were items placed in the queue.
Block until all items in the queue have been gotten and processed.
The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.
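A minimal producer/consumer sketch of this protocol (the daemon flag is set only so that the endlessly looping worker does not prevent exit):

from multiprocessing import Process, JoinableQueue

def consumer(q):
    while True:
        item = q.get()
        # ... process the item here ...
        q.task_done()          # exactly one task_done() per get()

if __name__ == '__main__':
    q = JoinableQueue()
    w = Process(target=consumer, args=(q,))
    w.daemon = True
    w.start()
    for i in range(5):
        q.put(i)
    q.join()                   # blocks until every item has been task_done()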
Return list of all live children of the current process.
Calling this has the side effect of “joining” any processes which have already finished.
Return the Process object corresponding to the current process.
An analogue of threading.current_thread().
Add support for when a program which uses multiprocessing has been frozen to produce a Windows executable. (Has been tested with py2exe, PyInstaller and cx_Freeze.)
One needs to call this function straight after the if __name__ == '__main__' line of the main module. For example:
from multiprocessing import Process, freeze_support
def f():
    print 'hello world!'

if __name__ == '__main__':
    freeze_support()
    Process(target=f).start()
If the freeze_support() line is omitted then trying to run the frozen executable will raise RuntimeError.
If the module is being run normally by the Python interpreter then freeze_support() has no effect.
Sets the path of the Python interpreter to use when starting a child process. (By default sys.executable is used.) Embedders will probably need to do something like

set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))

before they can create child processes. (Windows only)
Note
multiprocessing contains no analogues of threading.active_count(), threading.enumerate(), threading.settrace(), threading.setprofile(), threading.Timer, or threading.local.
Connection objects allow the sending and receiving of picklable objects or strings. They can be thought of as message oriented connected sockets.
Connection objects are usually created using Pipe() – see also Listeners and Clients.
Send an object to the other end of the connection which should be read using recv().
The object must be picklable.
Close the connection.
This is called automatically when the connection is garbage collected.
Return whether there is any data available to be read.
If timeout is not specified then it will return immediately. If timeout is a number then this specifies the maximum time in seconds to block. If timeout is None then an infinite timeout is used.
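For example, a sketch of waiting at most one second for data:

from multiprocessing import Pipe

a, b = Pipe()
a.send('ping')
if b.poll(1.0):        # block for at most one second
    print b.recv()     # prints "ping"
else:
    print 'no data arrived within the timeout'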
Send byte data from an object supporting the buffer interface as a complete message.
If offset is given then data is read from that position in buffer. If size is given then that many bytes will be read from buffer.
Return a complete message of byte data sent from the other end of the connection as a string. Raises EOFError if there is nothing left to receive and the other end has closed.
If maxlength is specified and the message is longer than maxlength then IOError is raised and the connection will no longer be readable.
Read into buffer a complete message of byte data sent from the other end of the connection and return the number of bytes in the message. Raises EOFError if there is nothing left to receive and the other end was closed.
buffer must be an object satisfying the writable buffer interface. If offset is given then the message will be written into the buffer from that position. Offset must be a non-negative integer less than the length of buffer (in bytes).
If the buffer is too short then a BufferTooShort exception is raised and the complete message is available as e.args[0] where e is the exception instance.
For example:
>>> from multiprocessing import Pipe
>>> a, b = Pipe()
>>> a.send([1, 'hello', None])
>>> b.recv()
[1, 'hello', None]
>>> b.send_bytes('thank you')
>>> a.recv_bytes()
'thank you'
>>> import array
>>> arr1 = array.array('i', range(5))
>>> arr2 = array.array('i', [0] * 10)
>>> a.send_bytes(arr1)
>>> count = b.recv_bytes_into(arr2)
>>> assert count == len(arr1) * arr1.itemsize
>>> arr2
array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
Warning
The Connection.recv() method automatically unpickles the data it receives, which can be a security risk unless you can trust the process which sent the message.
Therefore, unless the connection object was produced using Pipe() you should only use the recv() and send() methods after performing some sort of authentication. See Authentication keys.
Warning
If a process is killed while it is trying to read or write to a pipe then the data in the pipe is likely to become corrupted, because it may become impossible to be sure where the message boundaries lie.
Generally synchronization primitives are not as necessary in a multiprocess program as they are in a multithreaded program. See the documentation for the threading module.
Note that one can also create synchronization primitives by using a manager object – see Managers.
A bounded semaphore object: a clone of threading.BoundedSemaphore.
(On Mac OS X this is indistinguishable from Semaphore because sem_getvalue() is not implemented on that platform).
A condition variable: a clone of threading.Condition.
If lock is specified then it should be a Lock or RLock object from multiprocessing.
Note
The acquire() method of BoundedSemaphore, Lock, RLock and Semaphore has a timeout parameter not supported by the equivalents in threading. The signature is acquire(block=True, timeout=None) with keyword parameters being acceptable. If block is True and timeout is not None then it specifies a timeout in seconds. If block is False then timeout is ignored.
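A sketch of using the timeout (five seconds chosen arbitrarily):

from multiprocessing import Lock

lock = Lock()
if lock.acquire(block=True, timeout=5.0):
    try:
        pass    # ... the critical section ...
    finally:
        lock.release()
else:
    print 'could not acquire the lock within five seconds'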
Note
If the SIGINT signal generated by Ctrl-C arrives while the main thread is blocked by a call to BoundedSemaphore.acquire(), Lock.acquire(), RLock.acquire(), Semaphore.acquire(), Condition.acquire() or Condition.wait() then the call will be immediately interrupted and KeyboardInterrupt will be raised.
This differs from the behaviour of threading where SIGINT will be ignored while the equivalent blocking calls are in progress.
Managers provide a way to create data which can be shared between different processes. A manager object controls a server process which manages shared objects. Other processes can access the shared objects by using proxies.
Manager processes will be shut down as soon as they are garbage collected or their parent process exits. The manager classes are defined in the multiprocessing.managers module:
Create a BaseManager object.
Once created one should call start() or serve_forever() to ensure that the manager object refers to a started manager process.
address is the address on which the manager process listens for new connections. If address is None then an arbitrary one is chosen.
authkey is the authentication key which will be used to check the validity of incoming connections to the server process. If authkey is None then current_process().authkey is used. Otherwise authkey is used and it must be a string.
Stop the process used by the manager. This is only available if start() has been used to start the server process.
This can be called multiple times.
A classmethod which can be used for registering a type or callable with the manager class.
typeid is a “type identifier” which is used to identify a particular type of shared object. This must be a string.
callable is a callable used for creating objects for this type identifier. If a manager instance will be created using the from_address() classmethod or if the create_method argument is False then this can be left as None.
proxytype is a subclass of BaseProxy which is used to create proxies for shared objects with this typeid. If None then a proxy class is created automatically.
exposed is used to specify a sequence of method names which proxies for this typeid should be allowed to access using BaseProxy._callmethod(). (If exposed is None then proxytype._exposed_ is used instead if it exists.) In the case where no exposed list is specified, all “public methods” of the shared object will be accessible. (Here a “public method” means any attribute which has a __call__() method and whose name does not begin with '_'.)
method_to_typeid is a mapping used to specify the return type of those exposed methods which should return a proxy. It maps method names to typeid strings. (If method_to_typeid is None then proxytype._method_to_typeid_ is used instead if it exists.) If a method’s name is not a key of this mapping or if the mapping is None then the object returned by the method will be copied by value.
create_method determines whether a method should be created with name typeid which can be used to tell the server process to create a new shared object and return a proxy for it. By default it is True.
BaseManager instances also have one read-only property:
A subclass of BaseManager which can be used for the synchronization of processes. Objects of this type are returned by multiprocessing.Manager().
It also supports creation of shared lists and dictionaries.
Create a shared threading.Condition object and return a proxy for it.
If lock is supplied then it should be a proxy for a threading.Lock or threading.RLock object.
A namespace object has no public methods, but does have writable attributes. Its representation shows the values of its attributes.
However, when using a proxy for a namespace object, an attribute beginning with '_' will be an attribute of the proxy and not an attribute of the referent:
>>> manager = multiprocessing.Manager()
>>> Global = manager.Namespace()
>>> Global.x = 10
>>> Global.y = 'hello'
>>> Global._z = 12.3 # this is an attribute of the proxy
>>> print Global
Namespace(x=10, y='hello')
To create one’s own manager, one creates a subclass of BaseManager and uses the register() classmethod to register new types or callables with the manager class. For example:
from multiprocessing.managers import BaseManager
class MathsClass(object):
    def add(self, x, y):
        return x + y
    def mul(self, x, y):
        return x * y

class MyManager(BaseManager):
    pass

MyManager.register('Maths', MathsClass)

if __name__ == '__main__':
    manager = MyManager()
    manager.start()
    maths = manager.Maths()
    print maths.add(4, 3)    # prints 7
    print maths.mul(7, 8)    # prints 56
It is possible to run a manager server on one machine and have clients use it from other machines (assuming that the firewalls involved allow it).
Running the following commands creates a server for a single shared queue which remote clients can access:
>>> from multiprocessing.managers import BaseManager
>>> import Queue
>>> queue = Queue.Queue()
>>> class QueueManager(BaseManager): pass
...
>>> QueueManager.register('getQueue', callable=lambda:queue)
>>> m = QueueManager(address=('', 50000), authkey='abracadabra')
>>> m.serve_forever()
One client can access the server as follows:
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
...
>>> QueueManager.register('getQueue')
>>> m = QueueManager.from_address(address=('foo.bar.org', 50000),
...     authkey='abracadabra')
>>> queue = m.getQueue()
>>> queue.put('hello')
Another client can also use it:
>>> from multiprocessing.managers import BaseManager
>>> class QueueManager(BaseManager): pass
...
>>> QueueManager.register('getQueue')
>>> m = QueueManager.from_address(address=('foo.bar.org', 50000), authkey='abracadabra')
>>> queue = m.getQueue()
>>> queue.get()
'hello'
A proxy is an object which refers to a shared object which lives (presumably) in a different process. The shared object is said to be the referent of the proxy. Multiple proxy objects may have the same referent.
A proxy object has methods which invoke corresponding methods of its referent (although not every method of the referent will necessarily be available through the proxy). A proxy can usually be used in most of the same ways that its referent can:
>>> from multiprocessing import Manager
>>> manager = Manager()
>>> l = manager.list([i*i for i in range(10)])
>>> print l
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> print repr(l)
<ListProxy object, typeid 'list' at 0xb799974c>
>>> l[4]
16
>>> l[2:5]
[4, 9, 16]
Notice that applying str() to a proxy will return the representation of the referent, whereas applying repr() will return the representation of the proxy.
An important feature of proxy objects is that they are picklable so they can be passed between processes. Note, however, that if a proxy is sent to the corresponding manager’s process then unpickling it will produce the referent itself. This means, for example, that one shared object can contain a second:
>>> a = manager.list()
>>> b = manager.list()
>>> a.append(b) # referent of a now contains referent of b
>>> print a, b
[[]] []
>>> b.append('hello')
>>> print a, b
[['hello']] ['hello']
Note
The proxy types in multiprocessing do nothing to support comparisons by value. So, for instance,
manager.list([1,2,3]) == [1,2,3]
will return False. One should just use a copy of the referent instead when making comparisons.
Proxy objects are instances of subclasses of BaseProxy.
Call and return the result of a method of the proxy’s referent.
If proxy is a proxy whose referent is obj then the expression
proxy._callmethod(methodname, args, kwds)
will evaluate the expression
getattr(obj, methodname)(*args, **kwds)
in the manager’s process.
The returned value will be a copy of the result of the call or a proxy to a new shared object – see documentation for the method_to_typeid argument of BaseManager.register().
If an exception is raised by the call, then it is re-raised by _callmethod(). If some other exception is raised in the manager’s process then this is converted into a RemoteError exception and is raised by _callmethod().
Note in particular that an exception will be raised if methodname has not been exposed.
An example of the usage of _callmethod():
>>> l = manager.list(range(10))
>>> l._callmethod('__len__')
10
>>> l._callmethod('__getslice__', (2, 7)) # equiv to `l[2:7]`
[2, 3, 4, 5, 6]
>>> l._callmethod('__getitem__', (20,)) # equiv to `l[20]`
Traceback (most recent call last):
...
IndexError: list index out of range
Return a copy of the referent.
If the referent is unpicklable then this will raise an exception.
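Continuing the earlier note about comparisons, a copy obtained with _getvalue() can be compared by value (assuming manager is an existing Manager instance):

>>> l = manager.list([1, 2, 3])
>>> l == [1, 2, 3]                # proxies do not compare by value
False
>>> l._getvalue() == [1, 2, 3]
True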
A proxy object uses a weakref callback so that when it gets garbage collected it deregisters itself from the manager which owns its referent.
A shared object gets deleted from the manager process when there are no longer any proxies referring to it.
One can create a pool of processes which will carry out tasks submitted to it with the Pool class.
A process pool object which controls a pool of worker processes to which jobs can be submitted. It supports asynchronous results with timeouts and callbacks and has a parallel map implementation.
processes is the number of worker processes to use. If processes is None then the number returned by cpu_count() is used. If initializer is not None then each worker process will call initializer(*initargs) when it starts.
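A sketch of the initializer arguments (the function init and its tag argument are purely illustrative):

from multiprocessing import Pool, current_process

def init(tag):
    # runs once in each worker process as it starts
    print tag, current_process().name

def f(x):
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=2, initializer=init, initargs=('starting',))
    print pool.map(f, range(5))    # prints "[0, 1, 4, 9, 16]"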
A variant of the apply() method which returns a result object.
If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it (unless the call failed). callback should complete immediately since otherwise the thread which handles the results will get blocked.
A parallel equivalent of the map() builtin function. It blocks till the result is ready.
This method chops the iterable into a number of chunks which it submits to the process pool as separate tasks. The (approximate) size of these chunks can be specified by setting chunksize to a positive integer.
A variant of the map() method which returns a result object.
If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it (unless the call failed). callback should complete immediately since otherwise the thread which handles the results will get blocked.
An equivalent of itertools.imap().
The chunksize argument is the same as the one used by the map() method. For very long iterables using a large value for chunksize can make the job complete much faster than using the default value of 1.
Also if chunksize is 1 then the next() method of the iterator returned by the imap() method has an optional timeout parameter: next(timeout) will raise multiprocessing.TimeoutError if the result cannot be returned within timeout seconds.
The class of the result returned by Pool.apply_async() and Pool.map_async().
The following example demonstrates the use of a pool:
from multiprocessing import Pool
def f(x):
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)               # start 4 worker processes

    result = pool.apply_async(f, (10,))    # evaluate "f(10)" asynchronously
    print result.get(timeout=1)            # prints "100" unless your computer is *very* slow

    print pool.map(f, range(10))           # prints "[0, 1, 4,..., 81]"

    it = pool.imap(f, range(10))
    print it.next()                        # prints "0"
    print it.next()                        # prints "1"
    print it.next(timeout=1)               # prints "4" unless your computer is *very* slow

    import time
    result = pool.apply_async(time.sleep, (10,))
    print result.get(timeout=1)            # raises TimeoutError
Usually message passing between processes is done using queues or by using Connection objects returned by Pipe().
However, the multiprocessing.connection module allows some extra flexibility. It basically gives a high level message oriented API for dealing with sockets or Windows named pipes, and also has support for digest authentication using the hmac module.
Send a randomly generated message to the other end of the connection and wait for a reply.
If the reply matches the digest of the message using authkey as the key then a welcome message is sent to the other end of the connection. Otherwise AuthenticationError is raised.
Receive a message, calculate the digest of the message using authkey as the key, and then send the digest back.
If a welcome message is not received, then AuthenticationError is raised.
Attempt to set up a connection to the listener which is using address address, returning a Connection.
The type of the connection is determined by the family argument, but this can generally be omitted since it can usually be inferred from the format of address. (See Address Formats)
If authenticate is True or authkey is a string then digest authentication is used. The key used for authentication will be either authkey or current_process().authkey if authkey is None. If authentication fails then AuthenticationError is raised. See Authentication keys.
A wrapper for a bound socket or Windows named pipe which is ‘listening’ for connections.
address is the address to be used by the bound socket or named pipe of the listener object.
family is the type of socket (or named pipe) to use. This can be one of the strings 'AF_INET' (for a TCP socket), 'AF_UNIX' (for a Unix domain socket) or 'AF_PIPE' (for a Windows named pipe). Of these only the first is guaranteed to be available. If family is None then the family is inferred from the format of address. If address is also None then a default is chosen. This default is the family which is assumed to be the fastest available. See Address Formats. Note that if family is 'AF_UNIX' and address is None then the socket will be created in a private temporary directory created using tempfile.mkstemp().
If the listener object uses a socket then backlog (1 by default) is passed to the listen() method of the socket once it has been bound.
If authenticate is True (False by default) or authkey is not None then digest authentication is used.
If authkey is a string then it will be used as the authentication key; otherwise it must be None.
If authkey is None and authenticate is True then current_process().authkey is used as the authentication key. If authkey is None and authenticate is False then no authentication is done. If authentication fails then AuthenticationError is raised. See Authentication keys.
Listener objects have the following read-only properties:
The module defines two exceptions:
Examples
The following server code creates a listener which uses 'secret password' as an authentication key. It then waits for a connection and sends some data to the client:
from multiprocessing.connection import Listener
from array import array
address = ('localhost', 6000) # family is deduced to be 'AF_INET'
listener = Listener(address, authkey='secret password')
conn = listener.accept()
print 'connection accepted from', listener.last_accepted
conn.send([2.25, None, 'junk', float])
conn.send_bytes('hello')
conn.send_bytes(array('i', [42, 1729]))
conn.close()
listener.close()
The following code connects to the server and receives some data from the server:
from multiprocessing.connection import Client
from array import array
address = ('localhost', 6000)
conn = Client(address, authkey='secret password')
print conn.recv() # => [2.25, None, 'junk', float]
print conn.recv_bytes() # => 'hello'
arr = array('i', [0, 0, 0, 0, 0])
print conn.recv_bytes_into(arr) # => 8
print arr # => array('i', [42, 1729, 0, 0, 0])
conn.close()
An 'AF_INET' address is a tuple of the form (hostname, port) where hostname is a string and port is an integer.
An 'AF_UNIX' address is a string representing a filename on the filesystem.
An 'AF_PIPE' address is a string of the form r'\\\\.\\pipe\\PipeName'. To use Client() to connect to a named pipe on a remote computer called ServerName one should use an address of the form r'\\\\ServerName\\pipe\\PipeName' instead.
Note that any string beginning with two backslashes is assumed by default to be an 'AF_PIPE' address rather than an 'AF_UNIX' address.
When one uses Connection.recv(), the data received is automatically unpickled. Unfortunately unpickling data from an untrusted source is a security risk. Therefore Listener and Client() use the hmac module to provide digest authentication.
An authentication key is a string which can be thought of as a password: once a connection is established both ends will demand proof that the other knows the authentication key. (Demonstrating that both ends are using the same key does not involve sending the key over the connection.)
If authentication is requested but no authentication key is specified then the return value of current_process().authkey is used (see Process). This value will automatically be inherited by any Process object that the current process creates. This means that (by default) all processes of a multi-process program will share a single authentication key which can be used when setting up connections between themselves.
Suitable authentication keys can also be generated by using os.urandom().
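For example, a sketch of setting a fresh key in the main process (the 16 byte length is arbitrary); children created afterwards inherit it:

import os
from multiprocessing import Process, current_process

def f():
    print repr(current_process().authkey)    # same key as the parent

if __name__ == '__main__':
    current_process().authkey = os.urandom(16)
    p = Process(target=f)
    p.start()
    p.join()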
Some support for logging is available. Note, however, that the logging package does not use process shared locks so it is possible (depending on the handler type) for messages from different processes to get mixed up.
Returns the logger used by multiprocessing. If necessary, a new one will be created.
When first created the logger has level logging.NOTSET and has a handler which sends output to sys.stderr using format '[%(levelname)s/%(processName)s] %(message)s'. (The logger allows use of the non-standard '%(processName)s' format.) Messages sent to this logger will not by default propagate to the root logger.
Note that on Windows child processes will only inherit the level of the parent process’s logger – any other customization of the logger will not be inherited.
Below is an example session with logging turned on:
>>> import multiprocessing, logging
>>> logger = multiprocessing.get_logger()
>>> logger.setLevel(logging.INFO)
>>> logger.warning('doomed')
[WARNING/MainProcess] doomed
>>> m = multiprocessing.Manager()
[INFO/SyncManager-1] child process calling self.run()
[INFO/SyncManager-1] manager bound to '\\\\.\\pipe\\pyc-2776-0-lj0tfa'
>>> del m
[INFO/MainProcess] sending shutdown message to manager
[INFO/SyncManager-1] manager exiting with exitcode 0
There are certain guidelines and idioms which should be adhered to when using multiprocessing.
Avoid shared state
As far as possible one should try to avoid shifting large amounts of data between processes.
It is probably best to stick to using queues or pipes for communication between processes rather than using the lower level synchronization primitives from the threading module.
Picklability
Ensure that the arguments to the methods of proxies are picklable.
Thread safety of proxies
Do not use a proxy object from more than one thread unless you protect it with a lock.
(There is never a problem with different processes using the same proxy.)
Joining zombie processes
On Unix when a process finishes but has not been joined it becomes a zombie. There should never be very many because each time a new process starts (or active_children() is called) all completed processes which have not yet been joined will be joined. Also calling a finished process’s Process.is_alive() will join the process. Even so it is probably good practice to explicitly join all the processes that you start.
Better to inherit than pickle/unpickle
On Windows many types from multiprocessing need to be picklable so that child processes can use them. However, one should generally avoid sending shared objects to other processes using pipes or queues. Instead you should arrange the program so that a process which needs access to a shared resource created elsewhere can inherit it from an ancestor process.
Avoid terminating processes
Using the Process.terminate() method to stop a process is liable to cause any shared resources (such as locks, semaphores, pipes and queues) currently being used by the process to become broken or unavailable to other processes.
Therefore it is probably best to only consider using Process.terminate() on processes which never use any shared resources.
Joining processes that use queues
Bear in mind that a process that has put items in a queue will wait before terminating until all the buffered items are fed by the “feeder” thread to the underlying pipe. (The child process can call the Queue.cancel_join_thread() method of the queue to avoid this behaviour.)
This means that whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined. Otherwise you cannot be sure that processes which have put items on the queue will terminate. Remember also that non-daemonic processes will automatically be joined.
An example which will deadlock is the following:
from multiprocessing import Process, Queue

def f(q):
    q.put('X' * 1000000)

if __name__ == '__main__':
    queue = Queue()
    p = Process(target=f, args=(queue,))
    p.start()
    p.join()                    # this deadlocks
    obj = queue.get()

A fix here would be to swap the last two lines round (or simply remove the p.join() line).
Explicitly pass resources to child processes
On Unix a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process.
Apart from making the code (potentially) compatible with Windows this also ensures that as long as the child process is still alive the object will not be garbage collected in the parent process. This might be important if some resource is freed when the object is garbage collected in the parent process.
So for instance
from multiprocessing import Process, Lock

def f():
    ... do something using "lock" ...

if __name__ == '__main__':
    lock = Lock()
    for i in range(10):
        Process(target=f).start()

should be rewritten as

from multiprocessing import Process, Lock

def f(l):
    ... do something using "l" ...

if __name__ == '__main__':
    lock = Lock()
    for i in range(10):
        Process(target=f, args=(lock,)).start()
Since Windows lacks os.fork() it has a few extra restrictions:
More picklability
Ensure that all arguments to Process.__init__() are picklable. This means, in particular, that bound or unbound methods cannot be used directly as the target argument on Windows — just define a function and use that instead.
Also, if you subclass Process then make sure that instances will be picklable when the Process.start() method is called.
Global variables
Bear in mind that if code run in a child process tries to access a global variable, then the value it sees (if any) may not be the same as the value in the parent process at the time that Process.start() was called.
However, global variables which are just module level constants cause no problems.
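A sketch of the pitfall (the variable name is arbitrary); on Unix, where fork() is used, the child would print 42 instead:

from multiprocessing import Process

value = 0          # module level global

def f():
    # on Windows the child re-imports this module, so it sees the
    # original 0 rather than the 42 assigned below by the parent
    print value

if __name__ == '__main__':
    value = 42     # not seen by the child on Windows
    p = Process(target=f)
    p.start()
    p.join()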
Safe importing of main module
Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such as starting a new process).
For example, under Windows running the following module would fail with a RuntimeError:
from multiprocessing import Process

def foo():
    print 'hello'

p = Process(target=foo)
p.start()

Instead one should protect the “entry point” of the program by using if __name__ == '__main__': as follows:

from multiprocessing import Process, freeze_support

def foo():
    print 'hello'

if __name__ == '__main__':
    freeze_support()
    p = Process(target=foo)
    p.start()

(The freeze_support() line can be omitted if the program will be run normally instead of frozen.)
This allows the newly spawned Python interpreter to safely import the module and then run the module’s foo() function.
Similar restrictions apply if a pool or manager is created in the main module.
Demonstration of how to create and use customized managers and proxies:
#
# This module shows how to use arbitrary callables with a subclass of
# `BaseManager`.
#
from multiprocessing import freeze_support
from multiprocessing.managers import BaseManager, BaseProxy
import operator
##
class Foo(object):
    def f(self):
        print 'you called Foo.f()'
    def g(self):
        print 'you called Foo.g()'
    def _h(self):
        print 'you called Foo._h()'

# A simple generator function
def baz():
    for i in xrange(10):
        yield i*i

# Proxy type for generator objects
class GeneratorProxy(BaseProxy):
    _exposed_ = ('next', '__next__')
    def __iter__(self):
        return self
    def next(self):
        return self._callmethod('next')
    def __next__(self):
        return self._callmethod('__next__')

# Function to return the operator module
def get_operator_module():
    return operator

##

class MyManager(BaseManager):
    pass

# register the Foo class; make `f()` and `g()` accessible via proxy
MyManager.register('Foo1', Foo)

# register the Foo class; make `g()` and `_h()` accessible via proxy
MyManager.register('Foo2', Foo, exposed=('g', '_h'))

# register the generator function baz; use `GeneratorProxy` to make proxies
MyManager.register('baz', baz, proxytype=GeneratorProxy)

# register get_operator_module(); make public functions accessible via proxy
MyManager.register('operator', get_operator_module)

##

def test():
    manager = MyManager()
    manager.start()

    print '-' * 20

    f1 = manager.Foo1()
    f1.f()
    f1.g()
    assert not hasattr(f1, '_h')
    assert sorted(f1._exposed_) == sorted(['f', 'g'])

    print '-' * 20

    f2 = manager.Foo2()
    f2.g()
    f2._h()
    assert not hasattr(f2, 'f')
    assert sorted(f2._exposed_) == sorted(['g', '_h'])

    print '-' * 20

    it = manager.baz()
    for i in it:
        print '<%d>' % i,
    print

    print '-' * 20

    op = manager.operator()
    print 'op.add(23, 45) =', op.add(23, 45)
    print 'op.pow(2, 94) =', op.pow(2, 94)
    print 'op.getslice(range(10), 2, 6) =', op.getslice(range(10), 2, 6)
    print 'op.repeat(range(5), 3) =', op.repeat(range(5), 3)
    print 'op._exposed_ =', op._exposed_

##

if __name__ == '__main__':
    freeze_support()
    test()
Using Pool:
#
# A test of `multiprocessing.Pool` class
#
import multiprocessing
import time
import random
import sys
#
# Functions used by test code
#
def calculate(func, args):
    result = func(*args)
    return '%s says that %s%s = %s' % (
        multiprocessing.current_process().name,
        func.__name__, args, result
        )

def calculatestar(args):
    return calculate(*args)

def mul(a, b):
    time.sleep(0.5*random.random())
    return a * b

def plus(a, b):
    time.sleep(0.5*random.random())
    return a + b

def f(x):
    return 1.0 / (x-5.0)

def pow3(x):
    return x**3

def noop(x):
    pass

#
# Test code
#

def test():
    print 'cpu_count() = %d\n' % multiprocessing.cpu_count()

    #
    # Create pool
    #

    PROCESSES = 4
    print 'Creating pool with %d processes\n' % PROCESSES
    pool = multiprocessing.Pool(PROCESSES)
    print 'pool = %s' % pool
    print

    #
    # Tests
    #

    TASKS = [(mul, (i, 7)) for i in range(10)] + \
            [(plus, (i, 8)) for i in range(10)]

    results = [pool.apply_async(calculate, t) for t in TASKS]
    imap_it = pool.imap(calculatestar, TASKS)
    imap_unordered_it = pool.imap_unordered(calculatestar, TASKS)

    print 'Ordered results using pool.apply_async():'
    for r in results:
        print '\t', r.get()
    print

    print 'Ordered results using pool.imap():'
    for x in imap_it:
        print '\t', x
    print

    print 'Unordered results using pool.imap_unordered():'
    for x in imap_unordered_it:
        print '\t', x
    print

    print 'Ordered results using pool.map() --- will block till complete:'
    for x in pool.map(calculatestar, TASKS):
        print '\t', x
    print

    #
    # Simple benchmarks
    #

    N = 100000
    print 'def pow3(x): return x**3'

    t = time.time()
    A = map(pow3, xrange(N))
    print '\tmap(pow3, xrange(%d)):\n\t\t%s seconds' % \
          (N, time.time() - t)

    t = time.time()
    B = pool.map(pow3, xrange(N))
    print '\tpool.map(pow3, xrange(%d)):\n\t\t%s seconds' % \
          (N, time.time() - t)

    t = time.time()
    C = list(pool.imap(pow3, xrange(N), chunksize=N//8))
    print '\tlist(pool.imap(pow3, xrange(%d), chunksize=%d)):\n\t\t%s' \
          ' seconds' % (N, N//8, time.time() - t)

    assert A == B == C, (len(A), len(B), len(C))
    print

    L = [None] * 1000000
    print 'def noop(x): pass'
    print 'L = [None] * 1000000'

    t = time.time()
    A = map(noop, L)
    print '\tmap(noop, L):\n\t\t%s seconds' % \
          (time.time() - t)

    t = time.time()
    B = pool.map(noop, L)
    print '\tpool.map(noop, L):\n\t\t%s seconds' % \
          (time.time() - t)

    t = time.time()
    C = list(pool.imap(noop, L, chunksize=len(L)//8))
    print '\tlist(pool.imap(noop, L, chunksize=%d)):\n\t\t%s seconds' % \
          (len(L)//8, time.time() - t)

    assert A == B == C, (len(A), len(B), len(C))
    print
    del A, B, C, L

    #
    # Test error handling
    #

    print 'Testing error handling:'

    try:
        print pool.apply(f, (5,))
    except ZeroDivisionError:
        print '\tGot ZeroDivisionError as expected from pool.apply()'
    else:
        raise AssertionError, 'expected ZeroDivisionError'

    try:
        print pool.map(f, range(10))
    except ZeroDivisionError:
        print '\tGot ZeroDivisionError as expected from pool.map()'
    else:
        raise AssertionError, 'expected ZeroDivisionError'

    try:
        print list(pool.imap(f, range(10)))
    except ZeroDivisionError:
        print '\tGot ZeroDivisionError as expected from list(pool.imap())'
    else:
        raise AssertionError, 'expected ZeroDivisionError'

    it = pool.imap(f, range(10))
    for i in range(10):
        try:
            x = it.next()
        except ZeroDivisionError:
            if i == 5:
                pass
        except StopIteration:
            break
        else:
            if i == 5:
                raise AssertionError, 'expected ZeroDivisionError'

    assert i == 9
    print '\tGot ZeroDivisionError as expected from IMapIterator.next()'
    print

    #
    # Testing timeouts
    #

    print 'Testing ApplyResult.get() with timeout:',
    res = pool.apply_async(calculate, TASKS[0])
    while 1:
        sys.stdout.flush()
        try:
            sys.stdout.write('\n\t%s' % res.get(0.02))
            break
        except multiprocessing.TimeoutError:
            sys.stdout.write('.')
    print
    print

    print 'Testing IMapIterator.next() with timeout:',
    it = pool.imap(calculatestar, TASKS)
    while 1:
        sys.stdout.flush()
        try:
            sys.stdout.write('\n\t%s' % it.next(0.02))
        except StopIteration:
            break
        except multiprocessing.TimeoutError:
            sys.stdout.write('.')
    print
    print

    #
    # Testing callback
    #

    print 'Testing callback:'

    A = []
    B = [56, 0, 1, 8, 27, 64, 125, 216, 343, 512, 729]

    r = pool.apply_async(mul, (7, 8), callback=A.append)
    r.wait()

    r = pool.map_async(pow3, range(10), callback=A.extend)
    r.wait()

    if A == B:
        print '\tcallbacks succeeded\n'
    else:
        print '\t*** callbacks failed\n\t\t%s != %s\n' % (A, B)

    #
    # Check there are no outstanding tasks
    #

    assert not pool._cache, 'cache = %r' % pool._cache

    #
    # Check close() methods
    #

    print 'Testing close():'

    for worker in pool._pool:
        assert worker.is_alive()

    result = pool.apply_async(time.sleep, [0.5])
    pool.close()
    pool.join()

    assert result.get() is None

    for worker in pool._pool:
        assert not worker.is_alive()

    print '\tclose() succeeded\n'

    #
    # Check terminate() method
    #

    print 'Testing terminate():'

    pool = multiprocessing.Pool(2)
    DELTA = 0.1
    ignore = pool.apply(pow3, [2])
    results = [pool.apply_async(time.sleep, [DELTA]) for i in range(100)]
    pool.terminate()
    pool.join()

    for worker in pool._pool:
        assert not worker.is_alive()

    print '\tterminate() succeeded\n'

    #
    # Check garbage collection
    #

    print 'Testing garbage collection:'

    pool = multiprocessing.Pool(2)
    DELTA = 0.1
    processes = pool._pool
    ignore = pool.apply(pow3, [2])
    results = [pool.apply_async(time.sleep, [DELTA]) for i in range(100)]

    results = pool = None

    time.sleep(DELTA * 2)

    for worker in processes:
        assert not worker.is_alive()

    print '\tgarbage collection succeeded\n'


if __name__ == '__main__':
    multiprocessing.freeze_support()

    assert len(sys.argv) in (1, 2)

    if len(sys.argv) == 1 or sys.argv[1] == 'processes':
        print ' Using processes '.center(79, '-')
    elif sys.argv[1] == 'threads':
        print ' Using threads '.center(79, '-')
        import multiprocessing.dummy as multiprocessing
    else:
        print 'Usage:\n\t%s [processes | threads]' % sys.argv[0]
        raise SystemExit(2)

    test()
Synchronization types like locks, conditions and queues:
#
# A test file for the `multiprocessing` package
#
import time, sys, random
from Queue import Empty
import multiprocessing # may get overwritten
#### TEST_VALUE
def value_func(running, mutex):
    random.seed()
    time.sleep(random.random()*4)

    mutex.acquire()
    print '\n\t\t\t' + str(multiprocessing.current_process()) + ' has finished'
    running.value -= 1
    mutex.release()

def test_value():
    TASKS = 10
    running = multiprocessing.Value('i', TASKS)
    mutex = multiprocessing.Lock()

    for i in range(TASKS):
        p = multiprocessing.Process(target=value_func, args=(running, mutex))
        p.start()

    while running.value > 0:
        time.sleep(0.08)
        mutex.acquire()
        print running.value,
        sys.stdout.flush()
        mutex.release()

    print
    print 'No more running processes'


#### TEST_QUEUE

def queue_func(queue):
    for i in range(30):
        time.sleep(0.5 * random.random())
        queue.put(i*i)
    queue.put('STOP')

def test_queue():
    q = multiprocessing.Queue()

    p = multiprocessing.Process(target=queue_func, args=(q,))
    p.start()

    o = None
    while o != 'STOP':
        try:
            o = q.get(timeout=0.3)
            print o,
            sys.stdout.flush()
        except Empty:
            print 'TIMEOUT'

    print


#### TEST_CONDITION

def condition_func(cond):
    cond.acquire()
    print '\t' + str(cond)
    time.sleep(2)
    print '\tchild is notifying'
    print '\t' + str(cond)
    cond.notify()
    cond.release()

def test_condition():
    cond = multiprocessing.Condition()

    p = multiprocessing.Process(target=condition_func, args=(cond,))
    print cond

    cond.acquire()
    print cond
    cond.acquire()
    print cond

    p.start()

    print 'main is waiting'
    cond.wait()
    print 'main has woken up'

    print cond
    cond.release()
    print cond
    cond.release()

    p.join()
    print cond


#### TEST_SEMAPHORE

def semaphore_func(sema, mutex, running):
    sema.acquire()

    mutex.acquire()
    running.value += 1
    print running.value, 'tasks are running'
    mutex.release()

    random.seed()
    time.sleep(random.random()*2)

    mutex.acquire()
    running.value -= 1
    print '%s has finished' % multiprocessing.current_process()
    mutex.release()

    sema.release()

def test_semaphore():
    sema = multiprocessing.Semaphore(3)
    mutex = multiprocessing.RLock()
    running = multiprocessing.Value('i', 0)

    processes = [
        multiprocessing.Process(target=semaphore_func,
                                args=(sema, mutex, running))
        for i in range(10)
        ]

    for p in processes:
        p.start()

    for p in processes:
        p.join()


#### TEST_JOIN_TIMEOUT

def join_timeout_func():
    print '\tchild sleeping'
    time.sleep(5.5)
    print '\n\tchild terminating'

def test_join_timeout():
    p = multiprocessing.Process(target=join_timeout_func)
    p.start()

    print 'waiting for process to finish'

    while 1:
        p.join(timeout=1)
        if not p.is_alive():
            break
        print '.',
        sys.stdout.flush()


#### TEST_EVENT

def event_func(event):
    print '\t%r is waiting' % multiprocessing.current_process()
    event.wait()
    print '\t%r has woken up' % multiprocessing.current_process()

def test_event():
    event = multiprocessing.Event()

    processes = [multiprocessing.Process(target=event_func, args=(event,))
                 for i in range(5)]

    for p in processes:
        p.start()

    print 'main is sleeping'
    time.sleep(2)

    print 'main is setting event'
    event.set()

    for p in processes:
        p.join()


#### TEST_SHAREDVALUES

def sharedvalues_func(values, arrays, shared_values, shared_arrays):
    for i in range(len(values)):
        v = values[i][1]
        sv = shared_values[i].value
        assert v == sv

    for i in range(len(values)):
        a = arrays[i][1]
        sa = list(shared_arrays[i][:])
        assert a == sa

    print 'Tests passed'

def test_sharedvalues():
    values = [
        ('i', 10),
        ('h', -2),
        ('d', 1.25)
        ]
    arrays = [
        ('i', range(100)),
        ('d', [0.25 * i for i in range(100)]),
        ('H', range(1000))
        ]

    shared_values = [multiprocessing.Value(id, v) for id, v in values]
    shared_arrays = [multiprocessing.Array(id, a) for id, a in arrays]

    p = multiprocessing.Process(
        target=sharedvalues_func,
        args=(values, arrays, shared_values, shared_arrays)
        )
    p.start()
    p.join()

    assert p.exitcode == 0


####

def test(namespace=multiprocessing):
    global multiprocessing

    multiprocessing = namespace

    for func in [ test_value, test_queue, test_condition,
                  test_semaphore, test_join_timeout, test_event,
                  test_sharedvalues ]:

        print '\n\t######## %s\n' % func.__name__
        func()

    ignore = multiprocessing.active_children()      # cleanup any old processes
    if hasattr(multiprocessing, '_debug_info'):
        info = multiprocessing._debug_info()
        if info:
            print info
            raise ValueError, 'there should be no positive refcounts left'


if __name__ == '__main__':
    multiprocessing.freeze_support()

    assert len(sys.argv) in (1, 2)

    if len(sys.argv) == 1 or sys.argv[1] == 'processes':
        print ' Using processes '.center(79, '-')
        namespace = multiprocessing
    elif sys.argv[1] == 'manager':
        print ' Using processes and a manager '.center(79, '-')
        namespace = multiprocessing.Manager()
        namespace.Process = multiprocessing.Process
        namespace.current_process = multiprocessing.current_process
        namespace.active_children = multiprocessing.active_children
    elif sys.argv[1] == 'threads':
        print ' Using threads '.center(79, '-')
        import multiprocessing.dummy as namespace
    else:
        print 'Usage:\n\t%s [processes | manager | threads]' % sys.argv[0]
        raise SystemExit, 2

    test(namespace)
An example showing how to use queues to feed tasks to a collection of worker processes and collect the results:
#
# Simple example which uses a pool of workers to carry out some tasks.
#
# Notice that the results will probably not come out of the output
# queue in the same order as the corresponding tasks were
# put on the input queue. If it is important to get the results back
# in the original order then consider using `Pool.map()` or
# `Pool.imap()` (which will save on the amount of code needed anyway).
#
import time
import random
from multiprocessing import Process, Queue, current_process, freeze_support
#
# Function run by worker processes
#
def worker(input, output):
for func, args in iter(input.get, 'STOP'):
result = calculate(func, args)
output.put(result)
#
# Function used to calculate result
#
def calculate(func, args):
result = func(*args)
return '%s says that %s%s = %s' % \
(current_process().name, func.__name__, args, result)
#
# Functions referenced by tasks
#
def mul(a, b):
time.sleep(0.5*random.random())
return a * b
def plus(a, b):
time.sleep(0.5*random.random())
return a + b
#
#
#
def test():
NUMBER_OF_PROCESSES = 4
TASKS1 = [(mul, (i, 7)) for i in range(20)]
TASKS2 = [(plus, (i, 8)) for i in range(10)]
# Create queues
task_queue = Queue()
done_queue = Queue()
# Submit tasks
for task in TASKS1:
task_queue.put(task)
# Start worker processes
for i in range(NUMBER_OF_PROCESSES):
Process(target=worker, args=(task_queue, done_queue)).start()
# Get and print results
print 'Unordered results:'
for i in range(len(TASKS1)):
print '\t', done_queue.get()
# Add more tasks using `put()`
for task in TASKS2:
task_queue.put(task)
# Get and print some more results
for i in range(len(TASKS2)):
print '\t', done_queue.get()
# Tell child processes to stop
for i in range(NUMBER_OF_PROCESSES):
task_queue.put('STOP')
if __name__ == '__main__':
freeze_support()
test()
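As the comment at the top of this example notes, Pool.map() returns the results in the original task order and needs noticeably less code. A minimal sketch of the first batch of tasks rewritten to use a pool (the helper mul7 and the pool size of 4 are illustrative choices, not part of the original example):
import time, random
from multiprocessing import Pool, freeze_support
def mul7(i):
    time.sleep(0.5 * random.random())   # simulate the variable task cost
    return i * 7
if __name__ == '__main__':
    freeze_support()
    pool = Pool(processes=4)
    print pool.map(mul7, range(20))     # results arrive in submission order
    pool.close()
    pool.join()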
An example of how a pool of worker processes can each run a BaseHTTPServer.HTTPServer instance while sharing a single listening socket.
#
# Example where a pool of http servers share a single listening socket
#
# On Windows this module depends on the ability to pickle a socket
# object so that the worker processes can inherit a copy of the server
# object. (We import `multiprocessing.reduction` to enable this pickling.)
#
# It is not clear whether access to the `socket.accept()` method needs to
# be synchronized using a process-shared lock -- it does not seem to be
# necessary.
#
import os
import sys
from multiprocessing import Process, current_process, freeze_support
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
if sys.platform == 'win32':
    import multiprocessing.reduction    # make sockets picklable/inheritable
def note(format, *args):
sys.stderr.write('[%s]\t%s\n' % (current_process().name, format%args))
class RequestHandler(SimpleHTTPRequestHandler):
# we override log_message() to show which process is handling the request
def log_message(self, format, *args):
note(format, *args)
def serve_forever(server):
note('starting server')
try:
server.serve_forever()
except KeyboardInterrupt:
pass
def runpool(address, number_of_processes):
# create a single server object -- children will each inherit a copy
server = HTTPServer(address, RequestHandler)
# create child processes to act as workers
for i in range(number_of_processes-1):
Process(target=serve_forever, args=(server,)).start()
# main process also acts as a worker
serve_forever(server)
def test():
DIR = os.path.join(os.path.dirname(__file__), '..')
ADDRESS = ('localhost', 8000)
NUMBER_OF_PROCESSES = 4
print 'Serving at http://%s:%d using %d worker processes' % \
(ADDRESS[0], ADDRESS[1], NUMBER_OF_PROCESSES)
print 'To exit press Ctrl-' + ['C', 'Break'][sys.platform=='win32']
os.chdir(DIR)
runpool(ADDRESS, NUMBER_OF_PROCESSES)
if __name__ == '__main__':
freeze_support()
test()
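Once the servers are running, requests from any HTTP client should be spread over the worker processes; the process name in each stderr log line shows which worker handled which request. A hypothetical smoke test, assuming the default address used above:
import urllib2
for i in range(4):
    urllib2.urlopen('http://localhost:8000/').read()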
Some simple benchmarks comparing multiprocessing with threading:
#
# Simple benchmarks for the multiprocessing package
#
import time, sys, multiprocessing, threading, Queue, gc
if sys.platform == 'win32':
_timer = time.clock
else:
_timer = time.time
delta = 1
#### TEST_QUEUESPEED
def queuespeed_func(q, c, iterations):
a = '0' * 256
c.acquire()
c.notify()
c.release()
for i in xrange(iterations):
q.put(a)
q.put('STOP')
def test_queuespeed(Process, q, c):
elapsed = 0
iterations = 1
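    # keep doubling the workload until a run lasts at least `delta` seconds,
    # so that even very fast operations are timed over a measurable interval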
while elapsed < delta:
iterations *= 2
p = Process(target=queuespeed_func, args=(q, c, iterations))
c.acquire()
p.start()
c.wait()
c.release()
result = None
t = _timer()
while result != 'STOP':
result = q.get()
elapsed = _timer() - t
p.join()
print iterations, 'objects passed through the queue in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_PIPESPEED
def pipe_func(c, cond, iterations):
a = '0' * 256
cond.acquire()
cond.notify()
cond.release()
for i in xrange(iterations):
c.send(a)
c.send('STOP')
def test_pipespeed():
c, d = multiprocessing.Pipe()
cond = multiprocessing.Condition()
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
p = multiprocessing.Process(target=pipe_func,
args=(d, cond, iterations))
cond.acquire()
p.start()
cond.wait()
cond.release()
result = None
t = _timer()
while result != 'STOP':
result = c.recv()
elapsed = _timer() - t
p.join()
    print iterations, 'objects passed through the connection in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_SEQSPEED
def test_seqspeed(seq):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
t = _timer()
for i in xrange(iterations):
a = seq[5]
elapsed = _timer()-t
print iterations, 'iterations in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_LOCK
def test_lockspeed(l):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
t = _timer()
for i in xrange(iterations):
l.acquire()
l.release()
elapsed = _timer()-t
print iterations, 'iterations in', elapsed, 'seconds'
print 'average number/sec:', iterations/elapsed
#### TEST_CONDITION
def conditionspeed_func(c, N):
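    # ping-pong handshake: child and parent take turns notifying and waiting
    # on the same condition, so each iteration involves two waits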
c.acquire()
c.notify()
for i in xrange(N):
c.wait()
c.notify()
c.release()
def test_conditionspeed(Process, c):
elapsed = 0
iterations = 1
while elapsed < delta:
iterations *= 2
c.acquire()
p = Process(target=conditionspeed_func, args=(c, iterations))
p.start()
c.wait()
t = _timer()
for i in xrange(iterations):
c.notify()
c.wait()
elapsed = _timer()-t
c.release()
p.join()
print iterations * 2, 'waits in', elapsed, 'seconds'
print 'average number/sec:', iterations * 2 / elapsed
####
def test():
manager = multiprocessing.Manager()
gc.disable()
print '\n\t######## testing Queue.Queue\n'
test_queuespeed(threading.Thread, Queue.Queue(),
threading.Condition())
print '\n\t######## testing multiprocessing.Queue\n'
test_queuespeed(multiprocessing.Process, multiprocessing.Queue(),
multiprocessing.Condition())
print '\n\t######## testing Queue managed by server process\n'
test_queuespeed(multiprocessing.Process, manager.Queue(),
manager.Condition())
print '\n\t######## testing multiprocessing.Pipe\n'
test_pipespeed()
print
print '\n\t######## testing list\n'
test_seqspeed(range(10))
print '\n\t######## testing list managed by server process\n'
test_seqspeed(manager.list(range(10)))
print '\n\t######## testing Array("i", ..., lock=False)\n'
test_seqspeed(multiprocessing.Array('i', range(10), lock=False))
print '\n\t######## testing Array("i", ..., lock=True)\n'
test_seqspeed(multiprocessing.Array('i', range(10), lock=True))
print
print '\n\t######## testing threading.Lock\n'
test_lockspeed(threading.Lock())
print '\n\t######## testing threading.RLock\n'
test_lockspeed(threading.RLock())
print '\n\t######## testing multiprocessing.Lock\n'
test_lockspeed(multiprocessing.Lock())
print '\n\t######## testing multiprocessing.RLock\n'
test_lockspeed(multiprocessing.RLock())
print '\n\t######## testing lock managed by server process\n'
test_lockspeed(manager.Lock())
print '\n\t######## testing rlock managed by server process\n'
test_lockspeed(manager.RLock())
print
print '\n\t######## testing threading.Condition\n'
test_conditionspeed(threading.Thread, threading.Condition())
print '\n\t######## testing multiprocessing.Condition\n'
test_conditionspeed(multiprocessing.Process, multiprocessing.Condition())
print '\n\t######## testing condition managed by a server process\n'
test_conditionspeed(multiprocessing.Process, manager.Condition())
gc.enable()
if __name__ == '__main__':
multiprocessing.freeze_support()
test()
An example/demo of how to use managers.SyncManager, Process and others to build a system which distributes processes and work, via a distributed queue, to a "cluster" of machines on a network accessible via SSH. You will need to have private-key (passwordless) SSH authentication configured for all hosts for this to work.
#
# Module to allow spawning of processes on foreign host
#
# Depends on `multiprocessing` package -- tested with `processing-0.60`
#

__all__ = ['Cluster', 'Host', 'get_logger', 'current_process']

#
# Imports
#

import sys
import os
import tarfile
import shutil
import subprocess
import logging
import itertools
import Queue

try:
    import cPickle as pickle
except ImportError:
    import pickle

from multiprocessing import Process, current_process, cpu_count
from multiprocessing import util, managers, connection, forking, pool

#
# Logging
#

def get_logger():
    return _logger

_logger = logging.getLogger('distributing')
_logger.propagate = 0

util.fix_up_logger(_logger)
_formatter = logging.Formatter(util.DEFAULT_LOGGING_FORMAT)
_handler = logging.StreamHandler()
_handler.setFormatter(_formatter)
_logger.addHandler(_handler)

info = _logger.info
debug = _logger.debug

#
# Get number of cpus
#

try:
    slot_count = cpu_count()
except NotImplementedError:
    slot_count = 1

#
# Manager type which spawns subprocesses
#

class HostManager(managers.SyncManager):
    '''
    Manager type used for spawning processes on a (presumably) foreign host
    '''
    def __init__(self, address, authkey):
        managers.SyncManager.__init__(self, address, authkey)
        self._name = 'Host-unknown'

    def Process(self, group=None, target=None, name=None, args=(), kwargs={}):
        if hasattr(sys.modules['__main__'], '__file__'):
            main_path = os.path.basename(sys.modules['__main__'].__file__)
        else:
            main_path = None
        data = pickle.dumps((target, args, kwargs))
        p = self._RemoteProcess(data, main_path)
        if name is None:
            temp = self._name.split('Host-')[-1] + '/Process-%s'
            name = temp % ':'.join(map(str, p.get_identity()))
        p.set_name(name)
        return p

    @classmethod
    def from_address(cls, address, authkey):
        manager = cls(address, authkey)
        managers.transact(address, authkey, 'dummy')
        manager._state.value = managers.State.STARTED
        manager._name = 'Host-%s:%s' % manager.address
        manager.shutdown = util.Finalize(
            manager, HostManager._finalize_host,
            args=(manager._address, manager._authkey, manager._name),
            exitpriority=-10
            )
        return manager

    @staticmethod
    def _finalize_host(address, authkey, name):
        managers.transact(address, authkey, 'shutdown')

    def __repr__(self):
        return '<Host(%s)>' % self._name

#
# Process subclass representing a process on (possibly) a remote machine
#

class RemoteProcess(Process):
    '''
    Represents a process started on a remote host
    '''
    def __init__(self, data, main_path):
        assert not main_path or os.path.basename(main_path) == main_path
        Process.__init__(self)
        self._data = data
        self._main_path = main_path

    def _bootstrap(self):
        forking.prepare({'main_path': self._main_path})
        self._target, self._args, self._kwargs = pickle.loads(self._data)
        return Process._bootstrap(self)

    def get_identity(self):
        return self._identity

HostManager.register('_RemoteProcess', RemoteProcess)

#
# A Pool class that uses a cluster
#

class DistributedPool(pool.Pool):
    def __init__(self, cluster, processes=None, initializer=None, initargs=()):
        self._cluster = cluster
        self.Process = cluster.Process
        pool.Pool.__init__(self, processes or len(cluster),
                           initializer, initargs)

    def _setup_queues(self):
        self._inqueue = self._cluster._SettableQueue()
        self._outqueue = self._cluster._SettableQueue()
        self._quick_put = self._inqueue.put
        self._quick_get = self._outqueue.get

    @staticmethod
    def _help_stuff_finish(inqueue, task_handler, size):
        inqueue.set_contents([None] * size)

#
# Manager type which starts host managers on other machines
#

def LocalProcess(**kwds):
    p = Process(**kwds)
    p.set_name('localhost/' + p.name)
    return p

class Cluster(managers.SyncManager):
    '''
    Represents collection of slots running on various hosts.

    `Cluster` is a subclass of `SyncManager` so it allows creation of
    various types of shared objects.
    '''
    def __init__(self, hostlist, modules):
        managers.SyncManager.__init__(self, address=('localhost', 0))
        self._hostlist = hostlist
        self._modules = modules
        if __name__ not in modules:
            modules.append(__name__)
        files = [sys.modules[name].__file__ for name in modules]
        for i, file in enumerate(files):
            if file.endswith('.pyc') or file.endswith('.pyo'):
                files[i] = file[:-4] + '.py'
        self._files = [os.path.abspath(file) for file in files]

    def start(self):
        managers.SyncManager.start(self)

        l = connection.Listener(family='AF_INET', authkey=self._authkey)

        for i, host in enumerate(self._hostlist):
            host._start_manager(i, self._authkey, l.address, self._files)

        for host in self._hostlist:
            if host.hostname != 'localhost':
                conn = l.accept()
                i, address, cpus = conn.recv()
                conn.close()
                other_host = self._hostlist[i]
                other_host.manager = HostManager.from_address(address,
                                                              self._authkey)
                other_host.slots = other_host.slots or cpus
                other_host.Process = other_host.manager.Process
            else:
                host.slots = host.slots or slot_count
                host.Process = LocalProcess

        self._slotlist = [
            Slot(host) for host in self._hostlist for i in range(host.slots)
            ]
        self._slot_iterator = itertools.cycle(self._slotlist)
        self._base_shutdown = self.shutdown
        del self.shutdown

    def shutdown(self):
        for host in self._hostlist:
            if host.hostname != 'localhost':
                host.manager.shutdown()
        self._base_shutdown()

    def Process(self, group=None, target=None, name=None, args=(), kwargs={}):
        slot = self._slot_iterator.next()
        return slot.Process(
            group=group, target=target, name=name, args=args, kwargs=kwargs
            )

    def Pool(self, processes=None, initializer=None, initargs=()):
        return DistributedPool(self, processes, initializer, initargs)

    def __getitem__(self, i):
        return self._slotlist[i]

    def __len__(self):
        return len(self._slotlist)

    def __iter__(self):
        return iter(self._slotlist)

#
# Queue subclass used by distributed pool
#

class SettableQueue(Queue.Queue):
    def empty(self):
        return not self.queue

    def full(self):
        return self.maxsize > 0 and len(self.queue) == self.maxsize

    def set_contents(self, contents):
        # length of contents must be at least as large as the number of
        # threads which have potentially called get()
        self.not_empty.acquire()
        try:
            self.queue.clear()
            self.queue.extend(contents)
            self.not_empty.notifyAll()
        finally:
            self.not_empty.release()

Cluster.register('_SettableQueue', SettableQueue)

#
# Class representing a notional cpu in the cluster
#

class Slot(object):
    def __init__(self, host):
        self.host = host
        self.Process = host.Process

#
# Host
#

class Host(object):
    '''
    Represents a host to use as a node in a cluster.

    `hostname` gives the name of the host.  If hostname is not
    "localhost" then ssh is used to log in to the host.  To log in as
    a different user use a host name of the form
    "username@somewhere.org"

    `slots` is used to specify the number of slots for processes on
    the host.  This affects how often processes will be allocated to
    this host.  Normally this should be equal to the number of cpus on
    that host.
    '''
    def __init__(self, hostname, slots=None):
        self.hostname = hostname
        self.slots = slots

    def _start_manager(self, index, authkey, address, files):
        if self.hostname != 'localhost':
            tempdir = copy_to_remote_temporary_directory(self.hostname, files)
            debug('startup files copied to %s:%s', self.hostname, tempdir)
            p = subprocess.Popen(
                ['ssh', self.hostname, 'python', '-c',
                 '"import os; os.chdir(%r); '
                 'from distributing import main; main()"' % tempdir],
                stdin=subprocess.PIPE
                )
            data = dict(
                name='BootstrappingHost', index=index,
                dist_log_level=_logger.getEffectiveLevel(),
                dir=tempdir, authkey=str(authkey), parent_address=address
                )
            pickle.dump(data, p.stdin, pickle.HIGHEST_PROTOCOL)
            p.stdin.close()

#
# Copy files to remote directory, returning name of directory
#

unzip_code = '''"
import tempfile, os, sys, tarfile
tempdir = tempfile.mkdtemp(prefix='distrib-')
os.chdir(tempdir)
tf = tarfile.open(fileobj=sys.stdin, mode='r|gz')
for ti in tf:
    tf.extract(ti)
print tempdir
"'''

def copy_to_remote_temporary_directory(host, files):
    p = subprocess.Popen(
        ['ssh', host, 'python', '-c', unzip_code],
        stdout=subprocess.PIPE, stdin=subprocess.PIPE
        )
    tf = tarfile.open(fileobj=p.stdin, mode='w|gz')
    for name in files:
        tf.add(name, os.path.basename(name))
    tf.close()
    p.stdin.close()
    return p.stdout.read().rstrip()

#
# Code which runs a host manager
#

def main():
    # get data from parent over stdin
    data = pickle.load(sys.stdin)
    sys.stdin.close()

    # set some stuff
    _logger.setLevel(data['dist_log_level'])
    forking.prepare(data)

    # create server for a `HostManager` object
    server = managers.Server(HostManager._registry, ('', 0), data['authkey'])
    current_process()._server = server

    # report server address and number of cpus back to parent
    conn = connection.Client(data['parent_address'], authkey=data['authkey'])
    conn.send((data['index'], server.address, slot_count))
    conn.close()

    # set name etc
    current_process().set_name('Host-%s:%s' % server.address)
    util._run_after_forkers()

    # register a cleanup function
    def cleanup(directory):
        debug('removing directory %s', directory)
        shutil.rmtree(directory)
        debug('shutting down host manager')

    util.Finalize(None, cleanup, args=[data['dir']], exitpriority=0)

    # start host manager
    debug('remote host manager starting in %s', data['dir'])
    server.serve_forever()
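The module above defines no driver code of its own. A minimal sketch of how it might be driven (the host names and the square() task are hypothetical; each remote host needs passwordless SSH access and python on its PATH, and the script must be saved as a file so that its source can be shipped to the remote hosts):
import distributing

def square(n):
    return n * n

if __name__ == '__main__':
    # one local node plus one hypothetical remote node with two slots
    cluster = distributing.Cluster(
        [distributing.Host('localhost'),
         distributing.Host('user@node1.example.com', slots=2)],
        ['__main__']
        )
    cluster.start()
    pool = cluster.Pool()
    print pool.map(square, range(40))
    cluster.shutdown()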