Question: Python multiprocessing PicklingError: Can't pickle <type 'function'>
I am sorry that I can’t reproduce the error with a simpler example, and my code is too complicated to post. If I run the program in IPython shell instead of the regular Python, things work out well.
I looked up some previous notes on this problem. They were all caused by using pool to call a function defined inside a class method. But this is not the case for me.
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 505, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 313, in _handle_tasks
put(task)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
I would appreciate any help.
Update: The function I pickle is defined at the top level of the module, though it calls a function that contains a nested function. That is, f() calls g(), which calls h(), which has a nested function i(), and I am calling pool.apply_async(f). f(), g(), and h() are all defined at the top level. I tried a simpler example with this pattern and it works, though (see the sketch below).
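For reference, a minimal sketch of the call pattern described above (the names f, g, h, i are taken from the description; this simplified version does pickle fine, matching the observation that the simpler example works):

import multiprocessing as mp

def h():
    def i():        # nested helper; it is never pickled itself
        return 42
    return i()

def g():
    return h()

def f():
    return g()

if __name__ == '__main__':
    pool = mp.Pool()
    result = pool.apply_async(f)   # f is defined at the top level, so it pickles
    print(result.get())            # 42
    pool.close()
    pool.join()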
Answer 0
Here is a list of what can be pickled. In particular, functions are picklable only if they are defined at the top level of a module.
This piece of code:
import multiprocessing as mp

class Foo():
    @staticmethod
    def work(self):
        pass

if __name__ == '__main__':
    pool = mp.Pool()
    foo = Foo()
    pool.apply_async(foo.work)
    pool.close()
    pool.join()
yields an error almost identical to the one you posted:
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 505, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 315, in _handle_tasks
put(task)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
The problem is that the pool methods all use an mp.SimpleQueue to pass tasks to the worker processes. Everything that goes through the mp.SimpleQueue must be picklable, and foo.work is not picklable since it is not defined at the top level of the module.
It can be fixed by defining a function at the top level which calls foo.work():
def work(foo):
    foo.work()

pool.apply_async(work, args=(foo,))
Notice that foo is picklable, since Foo is defined at the top level and foo.__dict__ is picklable.
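Putting the pieces together, a complete runnable variant of the fix might look like the sketch below. This is an illustration rather than the exact code from the answer: work is made an ordinary instance method here so that foo.work() can be called without arguments.

import multiprocessing as mp

class Foo():
    def work(self):
        print('working in process %s' % mp.current_process().name)

# Top-level wrapper: this function is picklable, and the Foo instance
# passed to it is pickled via foo.__dict__.
def work(foo):
    foo.work()

if __name__ == '__main__':
    pool = mp.Pool()
    foo = Foo()
    pool.apply_async(work, args=(foo,))
    pool.close()
    pool.join()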
Answer 1
I’d use pathos.multiprocessing instead of multiprocessing. pathos.multiprocessing is a fork of multiprocessing that uses dill. dill can serialize almost anything in Python, so you are able to send a lot more around in parallel. The pathos fork also has the ability to work directly with multiple-argument functions, as you need for class methods.
>>> from pathos.multiprocessing import ProcessingPool as Pool
>>> p = Pool(4)
>>> class Test(object):
...     def plus(self, x, y):
...         return x + y
...
>>> t = Test()
>>> x, y = [0, 1, 2, 3], [4, 5, 6, 7]   # assumed example inputs, not shown in the original session
>>> p.map(t.plus, x, y)
[4, 6, 8, 10]
>>>
>>> class Foo(object):
...     @staticmethod
...     def work(self, x):
...         return x + 1
...
>>> f = Foo()
>>> p.apipe(f.work, f, 100)
<processing.pool.ApplyResult object at 0x10504f8d0>
>>> res = _
>>> res.get()
101
Get pathos (and, if you like, dill) here: https://github.com/uqfoundation
Answer 2
As others have said, multiprocessing can only transfer Python objects to the worker processes which can be pickled. If you cannot reorganize your code as described by unutbu, you can use dill's extended pickling/unpickling capabilities for transferring data (especially code data), as I show below.
This solution requires only the installation of dill and no other libraries such as pathos:
import os
from multiprocessing import Pool
import dill

def run_dill_encoded(payload):
    fun, args = dill.loads(payload)
    return fun(*args)

def apply_async(pool, fun, args):
    payload = dill.dumps((fun, args))
    return pool.apply_async(run_dill_encoded, (payload,))

if __name__ == "__main__":
    pool = Pool(processes=5)

    # async execution of a lambda
    jobs = []
    for i in range(10):
        job = apply_async(pool, lambda a, b: (a, b, a * b), (i, i + 1))
        jobs.append(job)

    for job in jobs:
        print job.get()

    print

    # async execution of a static method
    class O(object):
        @staticmethod
        def calc():
            return os.getpid()

    jobs = []
    for i in range(10):
        job = apply_async(pool, O.calc, ())
        jobs.append(job)

    for job in jobs:
        print job.get()
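Note that with this wrapper only the task sent to the worker is dill-encoded; the value returned by run_dill_encoded still travels back through the ordinary pickler, so it must itself be picklable. Answer 7 below extends the same idea to dill-encode the results as well.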
Answer 3
I have found that I can also generate exactly that error output on a perfectly working piece of code by attempting to use the profiler on it.
Note that this was on Windows (where the forking is a bit less elegant).
I was running:
python -m profile -o output.pstats <script>
I found that removing the profiling removed the error and re-enabling the profiling restored it. It was driving me batty too, because I knew the code used to work. I was checking to see if something had updated pool.py… then had a sinking feeling, eliminated the profiling, and that was it.
Posting here for the archives in case anybody else runs into it.
Answer 4
When this problem comes up with multiprocessing, a simple solution is to switch from Pool to ThreadPool. This can be done with no change of code other than the import:
from multiprocessing.pool import ThreadPool as Pool
This works because ThreadPool shares memory with the main thread, rather than creating a new process, which means that pickling is not required.
The downside to this method is that Python isn't the greatest language at handling threads: it uses the Global Interpreter Lock to stay thread-safe, which can slow down some use cases here. However, if you're primarily interacting with other systems (running HTTP commands, talking to a database, writing to filesystems), then your code is likely not CPU-bound and won't take much of a hit. In fact, when writing HTTP/HTTPS benchmarks I've found that the threaded model used here has less overhead and lower latency, since the overhead of creating new processes is much higher than the overhead of creating new threads.
So if you're processing a ton of stuff in Python userspace, this might not be the best method.
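As a rough illustration (not from the original answer), the sketch below uses a closure, which a regular multiprocessing.Pool cannot pickle, but which works unchanged with ThreadPool because the task never crosses a process boundary:

from multiprocessing.pool import ThreadPool as Pool

def make_adder(n):
    # A closure like this is not picklable, so it would fail with
    # multiprocessing.Pool; ThreadPool never pickles the task.
    def add(x):
        return x + n
    return add

if __name__ == '__main__':
    pool = Pool(4)
    results = pool.map(make_adder(10), [1, 2, 3, 4])
    print(results)   # [11, 12, 13, 14]
    pool.close()
    pool.join()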
Answer 5
This solution requires only the installation of dill and no other libraries such as pathos:
import dill

def apply_packed_function_for_map((dumped_function, item, args, kwargs),):
    """
    Unpack dumped function as target function and call it with arguments.

    :param (dumped_function, item, args, kwargs):
        a tuple of the dumped function and its arguments
    :return:
        result of the target function
    """
    target_function = dill.loads(dumped_function)
    res = target_function(item, *args, **kwargs)
    return res

def pack_function_for_map(target_function, items, *args, **kwargs):
    """
    Pack a function and arguments into an object that can be sent from one
    multiprocessing.Process to another. The main problem is:
        «multiprocessing.Pool.map*» or «apply*»
        cannot use class methods or closures.
    It solves this problem with «dill».
    It takes the target function as an argument, dumps it («with dill»)
    and returns the dumped function together with the arguments of the target function.
    For more performance we dump only the target function itself
    and don't dump its arguments.

    How to use (pseudo-code):

        ~>>> import multiprocessing
        ~>>> images = [...]
        ~>>> pool = multiprocessing.Pool(100500)
        ~>>> features = pool.map(
        ~...     *pack_function_for_map(
        ~...         super(Extractor, self).extract_features,
        ~...         images,
        ~...         type='png',
        ~...         **options,
        ~...     )
        ~... )
        ~>>>

    :param target_function:
        function that you want to execute like target_function(item, *args, **kwargs).
    :param items:
        list of items for map
    :param args:
        positional arguments for target_function(item, *args, **kwargs)
    :param kwargs:
        named arguments for target_function(item, *args, **kwargs)
    :return: tuple(function_wrapper, dumped_items)
        It returns a tuple with
        * the function wrapper, which unpacks and calls the target function;
        * the list of the packed target function and its arguments.
    """
    dumped_function = dill.dumps(target_function)
    dumped_items = [(dumped_function, item, args, kwargs) for item in items]
    return apply_packed_function_for_map, dumped_items
It also works for numpy arrays.
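A minimal runnable usage sketch, with a hypothetical top-level square() function that is not from the original answer. Note that the tuple-unpacking signature of apply_packed_function_for_map above is Python 2 only, so this sketch assumes Python 2 with dill installed:

import multiprocessing

def square(item, offset=0):
    return item * item + offset

if __name__ == '__main__':
    pool = multiprocessing.Pool(4)
    # pack_function_for_map returns (wrapper, dumped_items), which feeds
    # straight into pool.map via argument unpacking.
    results = pool.map(*pack_function_for_map(square, [1, 2, 3, 4], offset=1))
    print(results)   # [2, 5, 10, 17]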
Answer 6
Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
This error can also occur if there is any built-in function inside the model object that is passed to the async job.
So make sure the model objects being passed don't contain built-in functions. (In our case we were using the FieldTracker() function of django-model-utils inside the model to track a certain field.) Here is the link to the relevant GitHub issue.
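A common workaround (not from the original answer) is to avoid pickling the model instance at all and pass only plain data, such as the primary key, to the worker, then reload the object inside the child process. A rough sketch, with a hypothetical myapp.models.Item:

from multiprocessing import Pool

def process_item(item_pk):
    # Re-fetch the model instance inside the worker process instead of
    # pickling it; only the integer primary key crosses the process boundary.
    from myapp.models import Item   # hypothetical app and model
    item = Item.objects.get(pk=item_pk)
    return item.pk

if __name__ == '__main__':
    pool = Pool(4)
    jobs = [pool.apply_async(process_item, (pk,)) for pk in [1, 2, 3]]
    pool.close()
    pool.join()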
Answer 7
Building on @rocksportrocker's solution, it would make sense to dill-encode both when sending and when receiving the results.
import dill
import itertools

def run_dill_encoded(payload):
    fun, args = dill.loads(payload)
    res = fun(*args)
    res = dill.dumps(res)
    return res

def dill_map_async(pool, fun, args_list,
                   as_tuple=True,
                   **kw):
    if as_tuple:
        args_list = ((x,) for x in args_list)

    it = itertools.izip(
        itertools.cycle([fun]),
        args_list)
    it = itertools.imap(dill.dumps, it)
    return pool.map_async(run_dill_encoded, it, **kw)

if __name__ == '__main__':
    import multiprocessing as mp
    import sys, os

    p = mp.Pool(4)
    res = dill_map_async(p, lambda x: [sys.stdout.write('%s\n' % os.getpid()), x][-1],
                         [lambda x: x + 1] * 10,)
    res = res.get(timeout=100)
    res = map(dill.loads, res)
    print(res)
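For completeness, a rough Python 3 adaptation of the same idea (an assumption on my part, not part of the original answer): zip and map are already lazy in Python 3, and print is a function.

import itertools
from multiprocessing import Pool
import dill

def run_dill_encoded(payload):
    # Decode the task, run it, and dill-encode the result on the way back.
    fun, args = dill.loads(payload)
    return dill.dumps(fun(*args))

def dill_map_async(pool, fun, args_list, as_tuple=True, **kw):
    if as_tuple:
        args_list = ((x,) for x in args_list)
    tasks = map(dill.dumps, zip(itertools.cycle([fun]), args_list))
    return pool.map_async(run_dill_encoded, tasks, **kw)

if __name__ == '__main__':
    pool = Pool(4)
    res = dill_map_async(pool, lambda x: x * 2, range(5))
    print([dill.loads(r) for r in res.get(timeout=100)])   # [0, 2, 4, 6, 8]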