Tag Archives: pickle

Storing a Python dictionary

Question: Storing a Python dictionary


I’m used to bringing data in and out of Python using CSV files, but there are obvious challenges to this. Are there simple ways to store a dictionary (or sets of dictionaries) in a JSON or pickle file?

For example:

data = {}
data ['key1'] = "keyinfo"
data ['key2'] = "keyinfo2"

I would like to know both how to save this, and then how to load it back in.


Answer 0


Pickle save:

try:
    import cPickle as pickle
except ImportError:  # Python 3.x
    import pickle

with open('data.p', 'wb') as fp:
    pickle.dump(data, fp, protocol=pickle.HIGHEST_PROTOCOL)

See the pickle module documentation for additional information regarding the protocol argument.

Pickle load:

with open('data.p', 'rb') as fp:
    data = pickle.load(fp)

JSON save:

import json

with open('data.json', 'w') as fp:
    json.dump(data, fp)

Supply extra arguments, like sort_keys or indent, to get a pretty result. The argument sort_keys will sort the keys alphabetically and indent will indent your data structure with indent=N spaces.

json.dump(data, fp, sort_keys=True, indent=4)

JSON load:

with open('data.json', 'r') as fp:
    data = json.load(fp)
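
One caveat worth adding when choosing between the two formats: JSON object keys are always strings, so non-string dictionary keys do not round-trip unchanged, while pickle preserves them exactly:

import json

data = {1: "one"}
restored = json.loads(json.dumps(data))
print(restored)  # {'1': 'one'} -- the int key came back as a str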

Answer 1


Minimal example, writing directly to a file:

import json
json.dump(data, open(filename, 'w'))  # text mode: json.dump writes str, not bytes
data = json.load(open(filename))

or, safely opening and closing the file:

import json
with open(filename, 'w') as outfile:
    json.dump(data, outfile)
with open(filename) as infile:
    data = json.load(infile)

If you want to save it in a string instead of a file:

import json
json_str = json.dumps(data)
data = json.loads(json_str)

Answer 2


Also see the speeded-up package ujson: https://pypi.python.org/pypi/ujson

import ujson

with open('data.json', 'w') as fp:
    ujson.dump(data, fp)

Answer 3


To write to a file:

import json
myfile.write(json.dumps(mydict))

To read from a file:

import json
mydict = json.loads(myfile.read())

myfile is the file object for the file that you stored the dict in.


Answer 4


If you’re after serialization, but won’t need the data in other programs, I strongly recommend the shelve module. Think of it as a persistent dictionary.

import shelve

myData = shelve.open('/path/to/file')

# Check for values.
keyVar in myData

# Set values
myData[anotherKey] = someValue

# Save the data for future use.
myData.close()
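
On Python 3.4 and later, shelve.open() can also be used as a context manager, which closes the shelf (and flushes it to disk) automatically; a minimal sketch:

import shelve

with shelve.open('mydata') as db:  # 'mydata' is an arbitrary path
    db['key1'] = 'keyinfo'
    print('key1' in db)            # True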

Answer 5


If you want an alternative to pickle or json, you can use klepto.

>>> init = {'y': 2, 'x': 1, 'z': 3}
>>> import klepto
>>> cache = klepto.archives.file_archive('memo', init, serialized=False)
>>> cache        
{'y': 2, 'x': 1, 'z': 3}
>>>
>>> # dump dictionary to the file 'memo.py'
>>> cache.dump() 
>>> 
>>> # import from 'memo.py'
>>> from memo import memo
>>> print memo
{'y': 2, 'x': 1, 'z': 3}

With klepto, if you had used serialized=True, the dictionary would have been written to memo.pkl as a pickled dictionary instead of with clear text.

You can get klepto here: https://github.com/uqfoundation/klepto

dill is probably a better choice for pickling than pickle itself, as dill can serialize almost anything in Python. klepto can also use dill.

You can get dill here: https://github.com/uqfoundation/dill

The additional mumbo-jumbo in the first few lines is because klepto can be configured to store dictionaries to a file, to a directory context, or to a SQL database. The API is the same whatever you choose as the backend archive. It gives you an “archivable” dictionary with which you can use load and dump to interact with the archive.


Answer 6


For completeness, we should include ConfigParser and configparser which are part of the standard library in Python 2 and 3, respectively. This module reads and writes to a config/ini file and (at least in Python 3) behaves in a lot of ways like a dictionary. It has the added benefit that you can store multiple dictionaries into separate sections of your config/ini file and recall them. Sweet!

Python 2.7.x example.

import ConfigParser

config = ConfigParser.ConfigParser()

dict1 = {'key1':'keyinfo', 'key2':'keyinfo2'}
dict2 = {'k1':'hot', 'k2':'cross', 'k3':'buns'}
dict3 = {'x':1, 'y':2, 'z':3}

# Make each dictionary a separate section in the configuration
config.add_section('dict1')
for key in dict1.keys():
    config.set('dict1', key, dict1[key])
   
config.add_section('dict2')
for key in dict2.keys():
    config.set('dict2', key, dict2[key])

config.add_section('dict3')
for key in dict3.keys():
    config.set('dict3', key, dict3[key])

# Save the configuration to a file
f = open('config.ini', 'w')
config.write(f)
f.close()

# Read the configuration from a file
config2 = ConfigParser.ConfigParser()
config2.read('config.ini')

dictA = {}
for item in config2.items('dict1'):
    dictA[item[0]] = item[1]

dictB = {}
for item in config2.items('dict2'):
    dictB[item[0]] = item[1]

dictC = {}
for item in config2.items('dict3'):
    dictC[item[0]] = item[1]

print(dictA)
print(dictB)
print(dictC)

Python 3.X example.

import configparser

config = configparser.ConfigParser()

dict1 = {'key1':'keyinfo', 'key2':'keyinfo2'}
dict2 = {'k1':'hot', 'k2':'cross', 'k3':'buns'}
dict3 = {'x':1, 'y':2, 'z':3}

# Make each dictionary a separate section in the configuration
config['dict1'] = dict1
config['dict2'] = dict2
config['dict3'] = dict3

# Save the configuration to a file
f = open('config.ini', 'w')
config.write(f)
f.close()

# Read the configuration from a file
config2 = configparser.ConfigParser()
config2.read('config.ini')

# ConfigParser objects are a lot like dictionaries, but if you really
# want a dictionary you can ask it to convert a section to a dictionary
dictA = dict(config2['dict1'] )
dictB = dict(config2['dict2'] )
dictC = dict(config2['dict3'])

print(dictA)
print(dictB)
print(dictC)

Console output

{'key2': 'keyinfo2', 'key1': 'keyinfo'}
{'k1': 'hot', 'k2': 'cross', 'k3': 'buns'}
{'z': '3', 'y': '2', 'x': '1'}

Contents of config.ini

[dict1]
key2 = keyinfo2
key1 = keyinfo

[dict2]
k1 = hot
k2 = cross
k3 = buns

[dict3]
z = 3
y = 2
x = 1
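
One thing the console output above makes visible: ConfigParser stores every value as a string (dict3 came back as {'z': '3', ...}). If you need the original types back, convert on the way out, e.g. with a section's getint()/getfloat()/getboolean() helpers:

import configparser

config = configparser.ConfigParser()
config.read('config.ini')

z = config['dict3'].getint('z')  # 3 as an int, not the string '3'
print(type(z), z)                # <class 'int'> 3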

Answer 7


If saving to a JSON file, the best and easiest way of doing this is:

import json
with open("file.json", "wb") as f:
    f.write(json.dumps(dict).encode("utf-8"))
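
For what it's worth, an equivalent sketch in text mode, which avoids the manual encode() call (the dictionary name d is arbitrary here):

import json

d = {'key1': 'keyinfo', 'key2': 'keyinfo2'}
with open("file.json", "w", encoding="utf-8") as f:
    json.dump(d, f, ensure_ascii=False)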

Answer 8


My use case was to save multiple JSON objects to a file and marty’s answer helped me somewhat. But to serve my use case, the answer was not complete as it would overwrite the old data every time a new entry was saved.

To save multiple entries in a file, one must check for the old content (i.e., read before write). A typical file holding JSON data will have either a list or an object as its root. So I considered that my JSON file always holds a list of objects, and every time I add data to it, I simply load the list first, append my new data to it, and dump it back to the file opened in write-only mode ('w'):

def saveJson(url,sc): # This function writes the two values to the file
    newdata = {'url':url,'sc':sc}
    json_path = "db/file.json"

    old_list= []
    with open(json_path) as myfile:  # Read the contents first
        old_list = json.load(myfile)
    old_list.append(newdata)

    with open(json_path,"w") as myfile:  # Overwrite the whole content
        json.dump(old_list, myfile, sort_keys=True, indent=4)

    return "success"

The new JSON file will look something like this:

[
    {
        "sc": "a11",
        "url": "www.google.com"
    },
    {
        "sc": "a12",
        "url": "www.google.com"
    },
    {
        "sc": "a13",
        "url": "www.google.com"
    }
]

NOTE: It is essential to have a file named file.json containing [] as its initial data for this approach to work.

PS: Not related to the original question, but this approach could also be further improved by first checking whether our entry already exists (based on one or multiple keys) and only then appending and saving the data, as sketched below.
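
A minimal sketch of that check, assuming the same file layout as above (the name save_json_unique is made up here):

import json

def save_json_unique(url, sc, json_path="db/file.json"):
    newdata = {'url': url, 'sc': sc}

    with open(json_path) as myfile:
        old_list = json.load(myfile)

    # Only append when no existing entry matches on both keys.
    if newdata not in old_list:
        old_list.append(newdata)
        with open(json_path, "w") as myfile:
            json.dump(old_list, myfile, sort_keys=True, indent=4)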


Serializing a class instance to JSON

Question: Serializing a class instance to JSON


I am trying to create a JSON string representation of a class instance and having difficulty. Let’s say the class is built like this:

class testclass:
    value1 = "a"
    value2 = "b"

A call to json.dumps() is made like this:

t = testclass()
json.dumps(t)

It is failing and telling me that the testclass is not JSON serializable.

TypeError: <__main__.testclass object at 0x000000000227A400> is not JSON serializable

I have also tried using the pickle module:

t = testclass()
print(pickle.dumps(t, pickle.HIGHEST_PROTOCOL))

And it gives class instance information, but not the serialized content of the class instance.

b'\x80\x03c__main__\ntestclass\nq\x00)\x81q\x01}q\x02b.'

What am I doing wrong?


Answer 0


The basic problem is that the JSON encoder json.dumps() only knows how to serialize a limited set of object types by default (all of them built-in types). They are listed here: https://docs.python.org/3.3/library/json.html#encoders-and-decoders

One good solution would be to subclass JSONEncoder and implement its default() method so that it emits the correct JSON for your class.

A simple solution would be to call json.dumps() on the .__dict__ member of that instance. That is a standard Python dict and if your class is simple it will be JSON serializable.

class Foo(object):
    def __init__(self):
        self.x = 1
        self.y = 2

foo = Foo()
s = json.dumps(foo) # raises TypeError with "is not JSON serializable"

s = json.dumps(foo.__dict__) # s set to: {"x":1, "y":2}

The above approach is discussed in this blog posting:

    Serializing arbitrary Python objects to JSON using __dict__
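
A minimal sketch of the encoder approach described above (FooEncoder is a name made up here; it is handed to json.dumps() via the cls argument):

import json

class Foo(object):
    def __init__(self):
        self.x = 1
        self.y = 2

class FooEncoder(json.JSONEncoder):
    def default(self, obj):
        # Fall back to __dict__ for Foo instances; defer to the base
        # class (which raises TypeError) for anything else.
        if isinstance(obj, Foo):
            return obj.__dict__
        return super(FooEncoder, self).default(obj)

print(json.dumps(Foo(), cls=FooEncoder))  # {"x": 1, "y": 2}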


Answer 1


There’s one way that works great for me that you can try out:

json.dumps() can take an optional default parameter where you specify a custom serializer function for unknown types, which in my case looks like this:

from datetime import date, time

def serialize(obj):
    """JSON serializer for objects not serializable by default json code"""

    if isinstance(obj, date):
        serial = obj.isoformat()
        return serial

    if isinstance(obj, time):
        serial = obj.isoformat()
        return serial

    return obj.__dict__

The first two ifs handle date and time serialization, and obj.__dict__ is returned for any other object.

The final call looks like this:

json.dumps(myObj, default=serialize)

It’s especially good when you are serializing a collection and you don’t want to call __dict__ explicitly for every object. Here it’s done for you automatically.

It has worked well for me so far; I look forward to your thoughts.


Answer 2


You can specify the default named parameter in the json.dumps() function:

json.dumps(obj, default=lambda x: x.__dict__)

Explanation:

From the docs (2.7, 3.6):

``default(obj)`` is a function that should return a serializable version
of obj or raise TypeError. The default simply raises TypeError.

(Works on Python 2.7 and Python 3.x)

Note: In this case you need instance variables, not class variables as in the example in the question. (I am assuming that by class instance the asker meant an object of a class.)

I learned this first from @phihag’s answer here. Found it to be the simplest and cleanest way to do the job.
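
A small self-contained demonstration; note that the default callable is applied to every object the encoder cannot handle on its own, including nested ones:

import json

class Point(object):
    def __init__(self):
        self.x, self.y = 1, 2

print(json.dumps(Point(), default=lambda o: o.__dict__))  # {"x": 1, "y": 2}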


Answer 3


I just do:

data=json.dumps(myobject.__dict__)

This is not the full answer, and if you have some sort of complicated object class you certainly will not get everything. However I use this for some of my simple objects.

One case where it works really well is the “options” object that you get from the OptionParser module. Here it is along with the JSON request itself:

def executeJson(self, url, options):
    data = json.dumps(options.__dict__)
    if options.verbose:
        print data
    headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
    return requests.post(url, data, headers=headers)

Answer 4


Using jsonpickle

import jsonpickle

object = YourClass()
json_object = jsonpickle.encode(object)
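
Decoding is the mirror image; a minimal round-trip sketch (YourClass here is just a stand-in):

import jsonpickle

class YourClass(object):
    def __init__(self):
        self.value = 42

json_object = jsonpickle.encode(YourClass())
restored = jsonpickle.decode(json_object)
print(restored.value)  # 42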

Answer 5


JSON is not really meant for serializing arbitrary Python objects. It’s great for serializing dict objects, but the pickle module is really what you should be using in general. Output from pickle is not really human-readable, but it should unpickle just fine. If you insist on using JSON, you could check out the jsonpickle module, which is an interesting hybrid approach.

https://github.com/jsonpickle/jsonpickle


Answer 6


Here are two simple functions for serializing any non-sophisticated class; nothing fancy, as explained before.

I use this for configuration type stuff because I can add new members to the classes with no code adjustments.

import json

class SimpleClass:
    def __init__(self, a=None, b=None, c=None):
        self.a = a
        self.b = b
        self.c = c

def serialize_json(instance=None, path=None):
    dt = {}
    dt.update(vars(instance))

    with open(path, "w") as file:
        json.dump(dt, file)

def deserialize_json(cls=None, path=None):
    def read_json(_path):
        with open(_path, "r") as file:
            return json.load(file)

    data = read_json(path)

    instance = object.__new__(cls)  # create an instance without calling __init__

    for key, value in data.items():
        setattr(instance, key, value)

    return instance

# Usage: Create class and serialize under Windows file system.
write_settings = SimpleClass(a=1, b=2, c=3)
serialize_json(write_settings, r"c:\temp\test.json")

# Read back and rehydrate.
read_settings = deserialize_json(SimpleClass, r"c:\temp\test.json")

# results are the same.
print(vars(write_settings))
print(vars(read_settings))

# output:
# {'c': 3, 'b': 2, 'a': 1}
# {'c': 3, 'b': 2, 'a': 1}

Answer 7


There are some good answers on how to get started on doing this. But there are some things to keep in mind:

  • What if the instance is nested inside a large data structure?
  • What if you also want the class name?
  • What if you want to deserialize the instance?
  • What if you’re using __slots__ instead of __dict__?
  • What if you just don’t want to do it yourself?

json-tricks is a library (that I made and others contributed to) which has been able to do this for quite a while. For example:

from json_tricks import dumps, loads  # dumps/loads come from the json-tricks package

class MyTestCls:
    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

cls_instance = MyTestCls(s='ub', dct={'7': 7})

json = dumps(cls_instance, indent=4)
instance = loads(json)

You’ll get your instance back. Here the json looks like this:

{
    "__instance_type__": [
        "json_tricks.test_class",
        "MyTestCls"
    ],
    "attributes": {
        "s": "ub",
        "dct": {
            "7": 7
        }
    }
}

If you'd like to make your own solution, you might look at the source of json-tricks so as not to forget some special cases (like __slots__).

It also does other types like numpy arrays, datetimes, complex numbers; it also allows for comments.


Answer 8


Python 3.x

The best approach I could come up with is this.
Note that this code handles set() too.
The approach is generic, needing only an extension of the class (as in the second example).
Note that I'm only writing to files here, but it's easy to modify the behavior to your taste.

However, this is a codec: it both encodes and decodes.

With a little more work you can construct your class in other ways. I assume a default constructor to instantiate it, and then update the instance dict.

import json
import collections


class JsonClassSerializable(json.JSONEncoder):

    REGISTERED_CLASS = {}

    def register(ctype):
        JsonClassSerializable.REGISTERED_CLASS[ctype.__name__] = ctype

    def default(self, obj):
        if isinstance(obj, collections.Set):
            return dict(_set_object=list(obj))
        if isinstance(obj, JsonClassSerializable):
            jclass = {}
            jclass["name"] = type(obj).__name__
            jclass["dict"] = obj.__dict__
            return dict(_class_object=jclass)
        else:
            return json.JSONEncoder.default(self, obj)

    def json_to_class(self, dct):
        if '_set_object' in dct:
            return set(dct['_set_object'])
        elif '_class_object' in dct:
            cclass = dct['_class_object']
            cclass_name = cclass["name"]
            if cclass_name not in self.REGISTERED_CLASS:
                raise RuntimeError(
                    "Class {} not registered in JSON Parser"
                    .format(cclass["name"])
                )
            instance = self.REGISTERED_CLASS[cclass_name]()
            instance.__dict__ = cclass["dict"]
            return instance
        return dct

    def encode_(self, file):
        with open(file, 'w') as outfile:
            json.dump(
                self.__dict__, outfile,
                cls=JsonClassSerializable,
                indent=4,
                sort_keys=True
            )

    def decode_(self, file):
        try:
            with open(file, 'r') as infile:
                self.__dict__ = json.load(
                    infile,
                    object_hook=self.json_to_class
                )
        except FileNotFoundError:
            print("Persistence load failed: "
                  "'{}' does not exist".format(file)
                  )


class C(JsonClassSerializable):

    def __init__(self):
        self.mill = "s"


JsonClassSerializable.register(C)


class B(JsonClassSerializable):

    def __init__(self):
        self.a = 1230
        self.c = C()


JsonClassSerializable.register(B)


class A(JsonClassSerializable):

    def __init__(self):
        self.a = 1
        self.b = {1, 2}
        self.c = B()

JsonClassSerializable.register(A)

A().encode_("test")
b = A()
b.decode_("test")
print(b.a)
print(b.b)
print(b.c.a)

Edit

With some more research, I found a way to generalize this without needing the superclass register() method call, by using a metaclass:

import json
import collections

REGISTERED_CLASS = {}

class MetaSerializable(type):

    def __call__(cls, *args, **kwargs):
        if cls.__name__ not in REGISTERED_CLASS:
            REGISTERED_CLASS[cls.__name__] = cls
        return super(MetaSerializable, cls).__call__(*args, **kwargs)


class JsonClassSerializable(json.JSONEncoder, metaclass=MetaSerializable):

    def default(self, obj):
        if isinstance(obj, collections.Set):
            return dict(_set_object=list(obj))
        if isinstance(obj, JsonClassSerializable):
            jclass = {}
            jclass["name"] = type(obj).__name__
            jclass["dict"] = obj.__dict__
            return dict(_class_object=jclass)
        else:
            return json.JSONEncoder.default(self, obj)

    def json_to_class(self, dct):
        if '_set_object' in dct:
            return set(dct['_set_object'])
        elif '_class_object' in dct:
            cclass = dct['_class_object']
            cclass_name = cclass["name"]
            if cclass_name not in REGISTERED_CLASS:
                raise RuntimeError(
                    "Class {} not registered in JSON Parser"
                    .format(cclass["name"])
                )
            instance = REGISTERED_CLASS[cclass_name]()
            instance.__dict__ = cclass["dict"]
            return instance
        return dct

    def encode_(self, file):
        with open(file, 'w') as outfile:
            json.dump(
                self.__dict__, outfile,
                cls=JsonClassSerializable,
                indent=4,
                sort_keys=True
            )

    def decode_(self, file):
        try:
            with open(file, 'r') as infile:
                self.__dict__ = json.load(
                    infile,
                    object_hook=self.json_to_class
                )
        except FileNotFoundError:
            print("Persistence load failed: "
                  "'{}' does not exist".format(file)
                  )


class C(JsonClassSerializable):

    def __init__(self):
        self.mill = "s"


class B(JsonClassSerializable):

    def __init__(self):
        self.a = 1230
        self.c = C()


class A(JsonClassSerializable):

    def __init__(self):
        self.a = 1
        self.b = {1, 2}
        self.c = B()


A().encode_("test")
b = A()
b.decode_("test")
print(b.a)
# 1
print(b.b)
# {1, 2}
print(b.c.a)
# 1230
print(b.c.c.mill)
# s

Answer 9


I believe that instead of inheritance, as suggested in the accepted answer, it's better to use polymorphism. Otherwise you need a big if/else statement to customize the encoding of every object. That means creating a generic default encoder for JSON:

def jsonDefEncoder(obj):
   if hasattr(obj, 'jsonEnc'):
      return obj.jsonEnc()
   else: #some default behavior
      return obj.__dict__

and then having a jsonEnc() method in each class you want to serialize. For example:

class A(object):
   def __init__(self,lengthInFeet):
      self.lengthInFeet=lengthInFeet
   def jsonEnc(self):
      return {'lengthInMeters': self.lengthInFeet * 0.3}  # a foot is roughly 0.3 meter

Then you call json.dumps(classInstance, default=jsonDefEncoder).


Multiprocessing: How to use Pool.map on a function defined in a class?

Question: Multiprocessing: How to use Pool.map on a function defined in a class?


When I run something like:

from multiprocessing import Pool

p = Pool(5)
def f(x):
     return x*x

p.map(f, [1,2,3])

it works fine. However, putting this as a function of a class:

class calculate(object):
    def run(self):
        def f(x):
            return x*x

        p = Pool()
        return p.map(f, [1,2,3])

cl = calculate()
print cl.run()

Gives me the following error:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/sw/lib/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/sw/lib/python2.6/threading.py", line 484, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/sw/lib/python2.6/multiprocessing/pool.py", line 225, in _handle_tasks
    put(task)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

I’ve seen a post from Alex Martelli dealing with the same kind of problem, but it wasn’t explicit enough.


Answer 0


I also was annoyed by restrictions on what sort of functions pool.map could accept. I wrote the following to circumvent this. It appears to work, even for recursive use of parmap.

from multiprocessing import Process, Pipe
from itertools import izip

def spawn(f):
    def fun(pipe, x):
        pipe.send(f(x))
        pipe.close()
    return fun

def parmap(f, X):
    pipe = [Pipe() for x in X]
    proc = [Process(target=spawn(f), args=(c, x)) for x, (p, c) in izip(X, pipe)]
    [p.start() for p in proc]
    [p.join() for p in proc]
    return [p.recv() for (p, c) in pipe]

if __name__ == '__main__':
    print parmap(lambda x: x**x, range(1, 5))

Answer 1


I could not use the code posted so far, because code using “multiprocessing.Pool” does not work with lambda expressions, and code not using “multiprocessing.Pool” spawns as many processes as there are work items.

I adapted the code so that it spawns a predefined number of workers and only iterates through the input list when there is an idle worker. I also enabled “daemon” mode for the workers so that Ctrl-C works as expected.

import multiprocessing


def fun(f, q_in, q_out):
    while True:
        i, x = q_in.get()
        if i is None:
            break
        q_out.put((i, f(x)))


def parmap(f, X, nprocs=multiprocessing.cpu_count()):
    q_in = multiprocessing.Queue(1)
    q_out = multiprocessing.Queue()

    proc = [multiprocessing.Process(target=fun, args=(f, q_in, q_out))
            for _ in range(nprocs)]
    for p in proc:
        p.daemon = True
        p.start()

    sent = [q_in.put((i, x)) for i, x in enumerate(X)]
    [q_in.put((None, None)) for _ in range(nprocs)]
    res = [q_out.get() for _ in range(len(sent))]

    [p.join() for p in proc]

    return [x for i, x in sorted(res)]


if __name__ == '__main__':
    print(parmap(lambda i: i * 2, [1, 2, 3, 4, 6, 7, 8]))

Answer 2


Multiprocessing and pickling is broken and limited unless you jump outside the standard library.

If you use a fork of multiprocessing called pathos.multiprocessing, you can directly use classes and class methods in multiprocessing’s map functions. This is because dill is used instead of pickle or cPickle, and dill can serialize almost anything in Python.

pathos.multiprocessing also provides an asynchronous map function… and it can map functions with multiple arguments (e.g. map(math.pow, [1,2,3], [4,5,6]))

See discussions: What can multiprocessing and dill do together?

and: http://matthewrocklin.com/blog/work/2013/12/05/Parallelism-and-Serialization

It even handles the code you wrote initially, without modification, and from the interpreter. Why do anything else that’s more fragile and specific to a single case?

>>> from pathos.multiprocessing import ProcessingPool as Pool
>>> class calculate(object):
...  def run(self):
...   def f(x):
...    return x*x
...   p = Pool()
...   return p.map(f, [1,2,3])
... 
>>> cl = calculate()
>>> print cl.run()
[1, 4, 9]

Get the code here: https://github.com/uqfoundation/pathos

And, just to show off a little more of what it can do:

>>> from pathos.multiprocessing import ProcessingPool as Pool
>>> 
>>> p = Pool(4)
>>> 
>>> def add(x,y):
...   return x+y
... 
>>> x = [0,1,2,3]
>>> y = [4,5,6,7]
>>> 
>>> p.map(add, x, y)
[4, 6, 8, 10]
>>> 
>>> class Test(object):
...   def plus(self, x, y): 
...     return x+y
... 
>>> t = Test()
>>> 
>>> p.map(Test.plus, [t]*4, x, y)
[4, 6, 8, 10]
>>> 
>>> res = p.amap(t.plus, x, y)
>>> res.get()
[4, 6, 8, 10]

Answer 3


There is currently no solution to your problem, as far as I know: the function that you give to map() must be accessible through an import of your module. This is why robert’s code works: the function f() can be obtained by importing the following code:

def f(x):
    return x*x

class Calculate(object):
    def run(self):
        p = Pool()
        return p.map(f, [1,2,3])

if __name__ == '__main__':
    cl = Calculate()
    print cl.run()

I actually added a “main” section, because this follows the recommendations for the Windows platform (“Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects”).

I also added an uppercase letter in front of Calculate, so as to follow PEP 8. :)


Answer 4


The solution by mrule is correct but has a bug: if the child sends back a large amount of data, it can fill the pipe’s buffer, blocking on the child’s pipe.send() while the parent waits for the child to exit with join(). The solution is to read the child’s data before join()ing the child. Furthermore, the child should close the parent’s end of the pipe to prevent a deadlock. The code below fixes that. Also be aware that this parmap creates one process per element in X. A more advanced solution is to use multiprocessing.cpu_count() to divide X into a number of chunks, and then merge the results before returning. I leave that as an exercise to the reader so as not to spoil the conciseness of the nice answer by mrule. ;)

from multiprocessing import Process, Pipe
from itertools import izip

def spawn(f):
    def fun(ppipe, cpipe,x):
        ppipe.close()
        cpipe.send(f(x))
        cpipe.close()
    return fun

def parmap(f,X):
    pipe=[Pipe() for x in X]
    proc=[Process(target=spawn(f),args=(p,c,x)) for x,(p,c) in izip(X,pipe)]
    [p.start() for p in proc]
    ret = [p.recv() for (p,c) in pipe]
    [p.join() for p in proc]
    return ret

if __name__ == '__main__':
    print parmap(lambda x:x**x,range(1,5))

Answer 5


I’ve also struggled with this. I had functions as data members of a class, as a simplified example:

from multiprocessing import Pool
import itertools
pool = Pool()
class Example(object):
    def __init__(self, my_add): 
        self.f = my_add  
    def add_lists(self, list1, list2):
        # Needed to do something like this (the following line won't work)
        return pool.map(self.f,list1,list2)  

I needed to use the function self.f in a Pool.map() call from within the same class and self.f did not take a tuple as an argument. Since this function was embedded in a class, it was not clear to me how to write the type of wrapper other answers suggested.

I solved this problem by using a different wrapper that takes a tuple/list, where the first element is the function, and the remaining elements are the arguments to that function, called eval_func_tuple(f_args). Using this, the problematic line can be replaced by return pool.map(eval_func_tuple, itertools.izip(itertools.repeat(self.f), list1, list2)). Here is the full code:

File: util.py

def add(a, b): return a+b

def eval_func_tuple(f_args):
    """Takes a tuple of a function and args, evaluates and returns result"""
    return f_args[0](*f_args[1:])  

File: main.py

from multiprocessing import Pool
import itertools
import util  

pool = Pool()
class Example(object):
    def __init__(self, my_add): 
        self.f = my_add  
    def add_lists(self, list1, list2):
        # The following line will now work
        return pool.map(util.eval_func_tuple, 
            itertools.izip(itertools.repeat(self.f), list1, list2)) 

if __name__ == '__main__':
    myExample = Example(util.add)
    list1 = [1, 2, 3]
    list2 = [10, 20, 30]
    print myExample.add_lists(list1, list2)  

Running main.py will give [11, 22, 33]. Feel free to improve this, for example eval_func_tuple could also be modified to take keyword arguments.

On another note, the parmap function in another answer can be made more efficient for the case where there are more processes than available CPUs. I’m copying an edited version below. This is my first post and I wasn’t sure if I should directly edit the original answer. I also renamed some variables.

import multiprocessing  # for multiprocessing.cpu_count() below
from multiprocessing import Process, Pipe
from itertools import izip

def spawn(f):  
    def fun(pipe,x):  
        pipe.send(f(x))  
        pipe.close()  
    return fun  

def parmap(f,X):  
    pipe=[Pipe() for x in X]  
    processes=[Process(target=spawn(f),args=(c,x)) for x,(p,c) in izip(X,pipe)]  
    numProcesses = len(processes)  
    processNum = 0  
    outputList = []  
    while processNum < numProcesses:  
        endProcessNum = min(processNum+multiprocessing.cpu_count(), numProcesses)  
        for proc in processes[processNum:endProcessNum]:  
            proc.start()  
        for proc in processes[processNum:endProcessNum]:  
            proc.join()  
        for proc,c in pipe[processNum:endProcessNum]:  
            outputList.append(proc.recv())  
        processNum = endProcessNum  
    return outputList    

if __name__ == '__main__':  
    print parmap(lambda x:x**x,range(1,5))         

Answer 6


I took klaus se’s and aganders3’s answer, and made a documented module that is more readable and holds in one file. You can just add it to your project. It even has an optional progress bar !

"""
The ``processes`` module provides some convenience functions
for using parallel processes in python.

Adapted from http://stackoverflow.com/a/16071616/287297

Example usage:

    print prll_map(lambda i: i * 2, [1, 2, 3, 4, 6, 7, 8], 32, verbose=True)

Comments:

"It spawns a predefined amount of workers and only iterates through the input list
 if there exists an idle worker. I also enabled the "daemon" mode for the workers so
 that KeyboardInterupt works as expected."

Pitfalls: all the stdouts are sent back to the parent stdout, intertwined.

Alternatively, use this fork of multiprocessing: 
https://github.com/uqfoundation/multiprocess
"""

# Modules #
import multiprocessing
from tqdm import tqdm

################################################################################
def apply_function(func_to_apply, queue_in, queue_out):
    while not queue_in.empty():
        num, obj = queue_in.get()
        queue_out.put((num, func_to_apply(obj)))

################################################################################
def prll_map(func_to_apply, items, cpus=None, verbose=False):
    # Number of processes to use #
    if cpus is None: cpus = min(multiprocessing.cpu_count(), 32)
    # Create queues #
    q_in  = multiprocessing.Queue()
    q_out = multiprocessing.Queue()
    # Process list #
    new_proc  = lambda t,a: multiprocessing.Process(target=t, args=a)
    processes = [new_proc(apply_function, (func_to_apply, q_in, q_out)) for x in range(cpus)]
    # Put all the items (objects) in the queue #
    sent = [q_in.put((i, x)) for i, x in enumerate(items)]
    # Start them all #
    for proc in processes:
        proc.daemon = True
        proc.start()
    # Display progress bar or not #
    if verbose:
        results = [q_out.get() for x in tqdm(range(len(sent)))]
    else:
        results = [q_out.get() for x in range(len(sent))]
    # Wait for them to finish #
    for proc in processes: proc.join()
    # Return results #
    return [x for i, x in sorted(results)]

################################################################################
def test():
    def slow_square(x):
        import time
        time.sleep(2)
        return x**2
    objs    = range(20)
    squares = prll_map(slow_square, objs, 4, verbose=True)
    print "Result: %s" % squares

EDIT: Added @alexander-mcfarlane suggestion and a test function


Answer 7

I know this was asked over 6 years ago now, but I just wanted to add my solution, as some of the suggestions above seem horribly complicated; mine is actually very simple.

All I had to do was wrap the pool.map() call in a helper function, passing the class object along with the args for the method as a tuple, which looked a bit like this:

from multiprocessing import Pool

def run_in_parallel(args):
    return args[0].method(args[1])

myclass = MyClass()
method_args = [1,2,3,4,5,6]
args_map = [ (myclass, arg) for arg in method_args ]
pool = Pool()
pool.map(run_in_parallel, args_map)

Answer 8

Functions defined in classes (even within functions within classes) don’t really pickle. However, this works:

from multiprocessing import Pool

def f(x):
    return x*x

class calculate(object):
    def run(self):
        p = Pool()
        return p.map(f, [1,2,3])

cl = calculate()
print cl.run()

Answer 9

I know that this question was asked 8 years and 10 months ago, but I want to present my solution:

from multiprocessing import Pool

class Test:

    def __init__(self):
        self.main()

    @staticmethod
    def methodForMultiprocessing(x):
        print(x*x)

    def main(self):
        if __name__ == "__main__":
            p = Pool()
            p.map(Test.methodForMultiprocessing, list(range(1, 11)))
            p.close()

TestObject = Test()

You just need to make your class function into a static method. But it’s also possible with a class method:

from multiprocessing import Pool

class Test:

    def __init__(self):
        self.main()

    @classmethod
    def methodForMultiprocessing(cls, x):
        print(x*x)

    def main(self):
        if __name__ == "__main__":
            p = Pool()
            p.map(Test.methodForMultiprocessing, list(range(1, 11)))
            p.close()

TestObject = Test()

Tested in Python 3.7.3


Answer 10

I modified klaus se’s method because while it was working for me with small lists, it would hang when the number of items was ~1000 or greater. Instead of pushing the jobs one at a time with the None stop condition, I load up the input queue all at once and just let the processes munch on it until it’s empty.

from multiprocessing import cpu_count, Queue, Process

def apply_func(f, q_in, q_out):
    while not q_in.empty():
        i, x = q_in.get()
        q_out.put((i, f(x)))

# map a function using a pool of processes
def parmap(f, X, nprocs = cpu_count()):
    q_in, q_out   = Queue(), Queue()
    proc = [Process(target=apply_func, args=(f, q_in, q_out)) for _ in range(nprocs)]
    sent = [q_in.put((i, x)) for i, x in enumerate(X)]
    [p.start() for p in proc]
    res = [q_out.get() for _ in sent]
    [p.join() for p in proc]

    return [x for i,x in sorted(res)]

Edit: unfortunately now I am running into this error on my system: Multiprocessing Queue maxsize limit is 32767, hopefully the workarounds there will help.


Answer 11

You can run your code without any issues if you manually exclude the Pool object from the class's pickled state, because it is not picklable, as the error says. You can do this with the __getstate__ function (look here too) as follows. The Pool object will try to find the __getstate__ and __setstate__ functions and, if it finds them, execute them when you run map, map_async, etc.:

from multiprocessing import Pool

class calculate(object):
    def __init__(self):
        self.p = Pool()
    def __getstate__(self):
        self_dict = self.__dict__.copy()
        del self_dict['p']
        return self_dict
    def __setstate__(self, state):
        self.__dict__.update(state)

    def f(self, x):
        return x*x
    def run(self):
        return self.p.map(self.f, [1,2,3])

Then do:

cl = calculate()
cl.run()

will give you the output:

[1, 4, 9]

I’ve tested the above code in Python 3.x and it works.


Answer 12

I'm not sure if this approach has been taken before, but a workaround I'm using is:

from multiprocessing import Pool

t = None

def run(n):
    return t.f(n)

class Test(object):
    def __init__(self, number):
        self.number = number

    def f(self, x):
        print x * self.number

    def pool(self):
        pool = Pool(2)
        pool.map(run, range(10))

if __name__ == '__main__':
    t = Test(9)
    t.pool()
    pool = Pool(2)
    pool.map(run, range(10))

Output should be:

0
9
18
27
36
45
54
63
72
81
0
9
18
27
36
45
54
63
72
81

Answer 13

class Calculate(object):
  # Your instance method to be executed
  def f(self, x, y):
    return x*y

if __name__ == '__main__':
  inp_list = [1,2,3]
  y = 2
  cal_obj = Calculate()
  pool = Pool(2)
  results = pool.map(lambda x: cal_obj.f(x, y), inp_list)

You might want to apply this function to each different instance of the class. Here is the solution for that as well:

class Calculate(object):
  # Your instance method to be executed
  def __init__(self, x):
    self.x = x

  def f(self, y):
    return self.x*y

if __name__ == '__main__':
  inp_list = [Calculate(i) for i in range(3)]
  y = 2
  pool = Pool(2)
  results = pool.map(lambda x: x.f(y), inp_list)

Answer 14

Here is my solution, which I think is a bit less hackish than most others here. It is similar to nightowl’s answer.

from functools import partial
from multiprocessing import Pool

someclasses = [MyClass(), MyClass(), MyClass()]

def method_caller(some_object, some_method='the method'):
    return getattr(some_object, some_method)()

othermethod = partial(method_caller, some_method='othermethod')

with Pool(6) as pool:
    result = pool.map(othermethod, someclasses)

Answer 15

From http://www.rueckstiess.net/research/snippets/show/ca1d7d90 and http://qingkaikong.blogspot.com/2016/12/python-parallel-method-in-class.html

We can make an external function and seed it with the class self object:

from joblib import Parallel, delayed
def unwrap_self(arg, **kwarg):
    return square_class.square_int(*arg, **kwarg)

class square_class:
    def square_int(self, i):
        return i * i

    def run(self, num):
        results = []
        results = Parallel(n_jobs= -1, backend="threading")\
            (delayed(unwrap_self)(i) for i in zip([self]*len(num), num))
        print(results)

OR without joblib:

from multiprocessing import Pool
import time

def unwrap_self_f(arg, **kwarg):
    return C.f(*arg, **kwarg)

class C:
    def f(self, name):
        print 'hello %s,'%name
        time.sleep(5)
        print 'nice to meet you.'

    def run(self):
        pool = Pool(processes=2)
        names = ('frank', 'justin', 'osi', 'thomas')
        pool.map(unwrap_self_f, zip([self]*len(names), names))

if __name__ == '__main__':
    c = C()
    c.run()

Answer 16

This may not be a very good solution, but in my case I solved it like this.

from multiprocessing import Pool

def foo1(data):
    self = data.get('slf')
    lst = data.get('lst')
    return sum(lst) + self.foo2()

class Foo(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def foo2(self):
        return self.a**self.b   

    def foo(self):
        p = Pool(5)
        lst = [1, 2, 3]
        result = p.map(foo1, (dict(slf=self, lst=lst),))
        return result

if __name__ == '__main__':
    print(Foo(2, 4).foo())

I had to pass self to my function, as I need to access attributes and functions of my class through that function. This works for me. Corrections and suggestions are always welcome.


Answer 17

Here is a boilerplate I wrote for using a multiprocessing Pool in Python 3; specifically, Python 3.7.7 was used to run the tests. I got my fastest runs using imap_unordered. Just plug in your scenario and try it out. You can use timeit or just time.time() to figure out which works best for you.

import multiprocessing
import time

NUMBER_OF_PROCESSES = multiprocessing.cpu_count()
MP_FUNCTION = 'starmap'  # 'imap_unordered' or 'starmap' or 'apply_async'

def process_chunk(a_chunk):
    print(f"processig mp chunk {a_chunk}")
    return a_chunk


map_jobs = [1, 2, 3, 4]

result_sum = 0

s = time.time()
if MP_FUNCTION == 'imap_unordered':
    pool = multiprocessing.Pool(processes=NUMBER_OF_PROCESSES)
    for i in pool.imap_unordered(process_chunk, map_jobs):
        result_sum += i
elif MP_FUNCTION == 'starmap':
    pool = multiprocessing.Pool(processes=NUMBER_OF_PROCESSES)
    try:
        map_jobs = [(i, ) for i in map_jobs]
        result_sum = pool.starmap(process_chunk, map_jobs)
        result_sum = sum(result_sum)
    finally:
        pool.close()
        pool.join()
elif MP_FUNCTION == 'apply_async':
    with multiprocessing.Pool(processes=NUMBER_OF_PROCESSES) as pool:
        result_sum = [pool.apply_async(process_chunk, [i, ]).get() for i in map_jobs]
    result_sum = sum(result_sum)
print(f"result_sum is {result_sum}, took {time.time() - s}s")

In the above scenario imap_unordered actually seems to perform the worst for me. Try out your case and benchmark it on the machine you plan to run it on. Also read up on Process Pools. Cheers!


Python multiprocessing PicklingError: Can't pickle <type 'function'>

Question: Python multiprocessing PicklingError: Can't pickle <type 'function'>

I am sorry that I can’t reproduce the error with a simpler example, and my code is too complicated to post. If I run the program in IPython shell instead of the regular Python, things work out well.

I looked up some previous notes on this problem. They were all caused by using pool to call function defined within a class function. But this is not the case for me.

Exception in thread Thread-3:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "/usr/lib64/python2.7/threading.py", line 505, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 313, in _handle_tasks
    put(task)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

I would appreciate any help.

Update: The function I pickle is defined at the top level of the module. Though it calls a function that contains a nested function. i.e, f() calls g() calls h() which has a nested function i(), and I am calling pool.apply_async(f). f(), g(), h() are all defined at the top level. I tried simpler example with this pattern and it works though.
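
For reference, a minimal sketch of that pattern (hypothetical f/g/h/i): only the top-level f is pickled when it is handed to the pool, so the nested i() is created inside the worker and never goes through pickle itself, which is presumably why the simpler version works:

import multiprocessing

def h(x):
    def i(y):  # nested helper: created inside the worker, never pickled
        return y + 1
    return i(x) * 2

def g(x):
    return h(x) + 1

def f(x):  # top level: the only object that is actually pickled
    return g(x)

if __name__ == '__main__':
    pool = multiprocessing.Pool(2)
    result = pool.apply_async(f, (3,))
    print(result.get())  # 9
    pool.close()
    pool.join()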


Answer 0

Here is a list of what can be pickled. In particular, functions are only picklable if they are defined at the top-level of a module.

This piece of code:

import multiprocessing as mp

class Foo():
    @staticmethod
    def work(self):
        pass

if __name__ == '__main__':   
    pool = mp.Pool()
    foo = Foo()
    pool.apply_async(foo.work)
    pool.close()
    pool.join()

yields an error almost identical to the one you posted:

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 505, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 315, in _handle_tasks
    put(task)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

The problem is that the pool methods all use a mp.SimpleQueue to pass tasks to the worker processes. Everything that goes through the mp.SimpleQueue must be picklable, and foo.work is not picklable since it is not defined at the top level of the module.

It can be fixed by defining a function at the top level, which calls foo.work():

def work(foo):
    foo.work()

pool.apply_async(work,args=(foo,))

Notice that foo is picklable, since Foo is defined at the top level and foo.__dict__ is picklable.
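
Putting the whole fix together, a runnable sketch (the plain method here stands in for the original staticmethod):

import multiprocessing as mp

class Foo():
    def work(self):
        print('working')

def work(foo):  # top-level wrapper: picklable, unlike foo.work
    foo.work()

if __name__ == '__main__':
    pool = mp.Pool()
    foo = Foo()
    pool.apply_async(work, args=(foo,))
    pool.close()
    pool.join()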


Answer 1

I'd use pathos.multiprocessing instead of multiprocessing. pathos.multiprocessing is a fork of multiprocessing that uses dill. dill can serialize almost anything in python, so you are able to send a lot more around in parallel. The pathos fork also has the ability to work directly with multiple-argument functions, as you need for class methods.

>>> from pathos.multiprocessing import ProcessingPool as Pool
>>> p = Pool(4)
>>> class Test(object):
...   def plus(self, x, y): 
...     return x+y
... 
>>> t = Test()
>>> x = [0,1,2,3]
>>> y = [4,5,6,7]
>>> p.map(t.plus, x, y)
[4, 6, 8, 10]
>>> 
>>> class Foo(object):
...   @staticmethod
...   def work(self, x):
...     return x+1
... 
>>> f = Foo()
>>> p.apipe(f.work, f, 100)
<processing.pool.ApplyResult object at 0x10504f8d0>
>>> res = _
>>> res.get()
101

Get pathos (and if you like, dill) here: https://github.com/uqfoundation


Answer 2

As others have said, multiprocessing can only transfer Python objects to worker processes which can be pickled. If you cannot reorganize your code as described by unutbu, you can use dill's extended pickling/unpickling capabilities for transferring data (especially code data), as I show below.

This solution requires only the installation of dill and no other libraries such as pathos:

import os
from multiprocessing import Pool

import dill


def run_dill_encoded(payload):
    fun, args = dill.loads(payload)
    return fun(*args)


def apply_async(pool, fun, args):
    payload = dill.dumps((fun, args))
    return pool.apply_async(run_dill_encoded, (payload,))


if __name__ == "__main__":

    pool = Pool(processes=5)

    # async execution of lambda
    jobs = []
    for i in range(10):
        job = apply_async(pool, lambda a, b: (a, b, a * b), (i, i + 1))
        jobs.append(job)

    for job in jobs:
        print job.get()
    print

    # async execution of static method

    class O(object):

        @staticmethod
        def calc():
            return os.getpid()

    jobs = []
    for i in range(10):
        job = apply_async(pool, O.calc, ())
        jobs.append(job)

    for job in jobs:
        print job.get()

Answer 3

I have found that I can also generate exactly that error output on a perfectly working piece of code by attempting to use the profiler on it.

Note that this was on Windows (where the forking is a bit less elegant).

I was running:

python -m profile -o output.pstats <script> 

I found that removing the profiling removed the error, and putting it back restored it. It was driving me batty, too, because I knew the code used to work. I was checking to see if something had updated pool.py… then had a sinking feeling, eliminated the profiling, and that was it.

Posting here for the archives in case anybody else runs into it.


Answer 4

When this problem comes up with multiprocessing, a simple solution is to switch from Pool to ThreadPool. This can be done with no change of code other than the import:

from multiprocessing.pool import ThreadPool as Pool

This works because ThreadPool shares memory with the main thread, rather than creating a new process, which means that pickling is not required.

The downside to this method is that Python isn't the greatest language at handling threads: it uses the Global Interpreter Lock to stay thread-safe, which can slow down some use cases here. However, if you're primarily interacting with other systems (running HTTP commands, talking to a database, writing to filesystems), then your code is likely not CPU-bound and won't take much of a hit. In fact, when writing HTTP/HTTPS benchmarks, I've found that the threaded model used here has less overhead and delay, since the cost of creating new processes is much higher than the cost of creating new threads.

So if you’re processing a ton of stuff in python userspace this might not be the best method.
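
For instance, a bound method that would trip up a process Pool runs unchanged under ThreadPool. A minimal sketch (the Fetcher class and URL are made up for illustration):

from multiprocessing.pool import ThreadPool as Pool

class Fetcher(object):
    def __init__(self, base):
        self.base = base

    def fetch(self, n):  # instance method: fine here, threads share memory
        return '%s/%d' % (self.base, n)

if __name__ == '__main__':
    fetcher = Fetcher('http://example.com')
    pool = Pool(4)
    print(pool.map(fetcher.fetch, range(5)))  # no pickling involved
    pool.close()
    pool.join()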


Answer 5

This solution requires only the installation of dill and no other libraries such as pathos:

import dill

def apply_packed_function_for_map((dumped_function, item, args, kwargs),):
    """
    Unpack dumped function as target function and call it with arguments.

    :param (dumped_function, item, args, kwargs):
        a tuple of dumped function and its arguments
    :return:
        result of target function
    """
    target_function = dill.loads(dumped_function)
    res = target_function(item, *args, **kwargs)
    return res


def pack_function_for_map(target_function, items, *args, **kwargs):
    """
    Pack function and arguments to object that can be sent from one
    multiprocessing.Process to another. The main problem is:
        «multiprocessing.Pool.map*» or «apply*»
        cannot use class methods or closures.
    It solves this problem with «dill».
    It works with target function as argument, dumps it («with dill»)
    and returns dumped function with arguments of target function.
    For more performance we dump only target function itself
    and don't dump its arguments.
    How to use (pseudo-code):

        ~>>> import multiprocessing
        ~>>> images = [...]
        ~>>> pool = multiprocessing.Pool(100500)
        ~>>> features = pool.map(
        ~...     *pack_function_for_map(
        ~...         super(Extractor, self).extract_features,
        ~...         images,
        ~...         type='png'
        ~...         **options,
        ~...     )
        ~... )
        ~>>>

    :param target_function:
        function, that you want to execute like  target_function(item, *args, **kwargs).
    :param items:
        list of items for map
    :param args:
        positional arguments for target_function(item, *args, **kwargs)
    :param kwargs:
        named arguments for target_function(item, *args, **kwargs)
    :return: tuple(function_wrapper, dumped_items)
        It returns a tuple with
            * function wrapper, that unpacks and calls the target function;
            * list of packed target function and its arguments.
    """
    dumped_function = dill.dumps(target_function)
    dumped_items = [(dumped_function, item, args, kwargs) for item in items]
    return apply_packed_function_for_map, dumped_items

It also works for numpy arrays.


Answer 6

Can't pickle <type 'function'>: attribute lookup __builtin__.function failed

This error will also occur if you have any inbuilt function inside the model object that is passed to the async job.

So make sure that the model objects you pass don't have inbuilt functions. (In our case we were using the FieldTracker() function of django-model-utils inside the model to track a certain field.) Here is the link to the relevant GitHub issue.
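
The failure can be reproduced generically: any object carrying a plain function in its instance dictionary refuses to pickle. A minimal sketch (plain Python, not the django-model-utils API):

import pickle

class Model(object):  # hypothetical stand-in for a model class
    pass

m = Model()
m.tracker = lambda: 'tracking'  # a function attribute: not picklable

try:
    pickle.dumps(m)
except pickle.PicklingError as exc:
    print('cannot pickle:', exc)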


Answer 7

Building on @rocksportrocker's solution, it would make sense to dill both when sending and when receiving the results:

import dill
import itertools
def run_dill_encoded(payload):
    fun, args = dill.loads(payload)
    res = fun(*args)
    res = dill.dumps(res)
    return res

def dill_map_async(pool, fun, args_list,
                   as_tuple=True,
                   **kw):
    if as_tuple:
        args_list = ((x,) for x in args_list)

    it = itertools.izip(
        itertools.cycle([fun]),
        args_list)
    it = itertools.imap(dill.dumps, it)
    return pool.map_async(run_dill_encoded, it, **kw)

if __name__ == '__main__':
    import multiprocessing as mp
    import sys,os
    p = mp.Pool(4)
    res = dill_map_async(p, lambda x:[sys.stdout.write('%s\n'%os.getpid()),x][-1],
                  [lambda x:x+1]*10,)
    res = res.get(timeout=100)
    res = map(dill.loads,res)
    print(res)

Using pickle.dump - TypeError: must be str, not bytes

Question: Using pickle.dump - TypeError: must be str, not bytes

I’m using python3.3 and I’m having a cryptic error when trying to pickle a simple dictionary.

Here is the code:

import os
import pickle
from pickle import *
os.chdir('c:/Python26/progfiles/')

def storvars(vdict):      
    f = open('varstor.txt','w')
    pickle.dump(vdict,f,)
    f.close()
    return

mydict = {'name':'john','gender':'male','age':'45'}
storvars(mydict)

and I get:

Traceback (most recent call last):
  File "C:/Python26/test18.py", line 31, in <module>
    storvars(mydict)
  File "C:/Python26/test18.py", line 14, in storvars
    pickle.dump(vdict,f,)
TypeError: must be str, not bytes

Answer 0

The output file needs to be opened in binary mode:

f = open('varstor.txt','w')

needs to be:

f = open('varstor.txt','wb')
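
Putting the fix back into the question's function, a corrected sketch of storvars (using a with block so the file is closed automatically):

import pickle

def storvars(vdict):
    # 'wb': pickle writes bytes, so the file must be opened in binary mode
    with open('varstor.txt', 'wb') as f:
        pickle.dump(vdict, f)

storvars({'name': 'john', 'gender': 'male', 'age': '45'})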

Answer 1

Just had the same issue. In Python 3, the binary modes 'wb' and 'rb' must be specified, whereas in Python 2.x they are not needed. If you followed a tutorial based on Python 2.x, that's why you are here.

import pickle

class MyUser(object):
    def __init__(self,name):
        self.name = name

user = MyUser('Peter')

print("Before serialization: ")
print(user.name)
print("------------")
serialized = pickle.dumps(user)
filename = 'serialized.native'

with open(filename,'wb') as file_object:
    file_object.write(serialized)

with open(filename,'rb') as file_object:
    raw_data = file_object.read()

deserialized = pickle.loads(raw_data)


print("Loading from serialized file: ")
user2 = deserialized
print(user2.name)
print("------------")

Saving an object (data persistence)

Question: Saving an object (data persistence)

I’ve created an object like this:

company1.name = 'banana' 
company1.value = 40

I would like to save this object. How can I do that?


Answer 0

You could use the pickle module in the standard library. Here’s an elementary application of it to your example:

import pickle

class Company(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value

with open('company_data.pkl', 'wb') as output:
    company1 = Company('banana', 40)
    pickle.dump(company1, output, pickle.HIGHEST_PROTOCOL)

    company2 = Company('spam', 42)
    pickle.dump(company2, output, pickle.HIGHEST_PROTOCOL)

del company1
del company2

with open('company_data.pkl', 'rb') as input:
    company1 = pickle.load(input)
    print(company1.name)  # -> banana
    print(company1.value)  # -> 40

    company2 = pickle.load(input)
    print(company2.name) # -> spam
    print(company2.value)  # -> 42

You could also define your own simple utility like the following which opens a file and writes a single object to it:

def save_object(obj, filename):
    with open(filename, 'wb') as output:  # Overwrites any existing file.
        pickle.dump(obj, output, pickle.HIGHEST_PROTOCOL)

# sample usage
save_object(company1, 'company1.pkl')

Update

Since this is such a popular answer, I'd like to touch on a few slightly advanced usage topics.

cPickle (or _pickle) vs pickle

It’s almost always preferable to actually use the cPickle module rather than pickle because the former is written in C and is much faster. There are some subtle differences between them, but in most situations they’re equivalent and the C version will provide greatly superior performance. Switching to it couldn’t be easier, just change the import statement to this:

import cPickle as pickle

In Python 3, cPickle was renamed _pickle, but doing this is no longer necessary since the pickle module now does it automatically—see What difference between pickle and _pickle in python 3?.

The rundown is you could use something like the following to ensure that your code will always use the C version when it’s available in both Python 2 and 3:

try:
    import cPickle as pickle
except ModuleNotFoundError:
    import pickle

Data stream formats (protocols)

pickle can read and write files in several different, Python-specific, formats, called protocols as described in the documentation, “Protocol version 0” is ASCII and therefore “human-readable”. Versions > 0 are binary and the highest one available depends on what version of Python is being used. The default also depends on Python version. In Python 2 the default was Protocol version 0, but in Python 3.8.1, it’s Protocol version 4. In Python 3.x the module had a pickle.DEFAULT_PROTOCOL added to it, but that doesn’t exist in Python 2.
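
You can check both values in your own interpreter (Python 3, since DEFAULT_PROTOCOL does not exist in Python 2); the printed numbers depend on the version, e.g. 4 and 5 on Python 3.8:

import pickle

print(pickle.DEFAULT_PROTOCOL)   # protocol used when none is specified
print(pickle.HIGHEST_PROTOCOL)   # newest protocol this interpreter supports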

Fortunately there’s shorthand for writing pickle.HIGHEST_PROTOCOL in every call (assuming that’s what you want, and you usually do), just use the literal number -1 — similar to referencing the last element of a sequence via a negative index. So, instead of writing:

pickle.dump(obj, output, pickle.HIGHEST_PROTOCOL)

You can just write:

pickle.dump(obj, output, -1)

Either way, you'd only have to specify the protocol once if you created a Pickler object for use in multiple pickle operations:

pickler = pickle.Pickler(output, -1)
pickler.dump(obj1)
pickler.dump(obj2)
   etc...

Note: If you’re in an environment running different versions of Python, then you’ll probably want to explicitly use (i.e. hardcode) a specific protocol number that all of them can read (later versions can generally read files produced by earlier ones).
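
For example, a sketch that pins a protocol every interpreter in a mixed Python 2/3 environment can read (assuming nothing older than Python 2.3, which introduced protocol 2):

import pickle

SHARED_PROTOCOL = 2  # readable by Python 2.3+ and every Python 3

data = {'name': 'banana', 'value': 40}
with open('company_data.pkl', 'wb') as output:
    pickle.dump(data, output, SHARED_PROTOCOL)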

Multiple Objects

While a pickle file can contain any number of pickled objects, as shown in the above samples, when there’s an unknown number of them, it’s often easier to store them all in some sort of variably-sized container, like a list, tuple, or dict and write them all to the file in a single call:

tech_companies = [
    Company('Apple', 114.18), Company('Google', 908.60), Company('Microsoft', 69.18)
]
save_object(tech_companies, 'tech_companies.pkl')

and restore the list and everything in it later with:

with open('tech_companies.pkl', 'rb') as input:
    tech_companies = pickle.load(input)

The major advantage is you don't need to know how many object instances are saved in order to load them back later (although doing so without that information is possible, it requires some slightly specialized code). See the answers to the related question Saving and loading multiple objects in pickle file? for details on different ways to do this. Personally I like @Lutz Prechelt's answer the best. Here it is, adapted to the examples here:

class Company:
    def __init__(self, name, value):
        self.name = name
        self.value = value

def pickled_items(filename):
    """ Unpickle a file of pickled data. """
    with open(filename, "rb") as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                break

print('Companies in pickle file:')
for company in pickled_items('company_data.pkl'):
    print('  name: {}, value: {}'.format(company.name, company.value))

Answer 1

I think it's a pretty strong assumption that the object is a class. What if it's not a class? There's also the assumption that the object was not defined in the interpreter. What if it was defined in the interpreter? Also, what if the attributes were added dynamically? When some python objects have attributes added to their __dict__ after creation, pickle doesn't respect the addition of those attributes (i.e. it 'forgets' they were added, because pickle serializes by reference to the object definition).

In all these cases, pickle and cPickle can fail you horribly.

If you are looking to save an object (arbitrarily created), where you have attributes (either added in the object definition, or afterward)… your best bet is to use dill, which can serialize almost anything in python.

We start with a class…

Python 2.7.8 (default, Jul 13 2014, 02:29:54) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> class Company:
...     pass
... 
>>> company1 = Company()
>>> company1.name = 'banana'
>>> company1.value = 40
>>> with open('company.pkl', 'wb') as f:
...     pickle.dump(company1, f, pickle.HIGHEST_PROTOCOL)
... 
>>> 

Now shut down, and restart…

Python 2.7.8 (default, Jul 13 2014, 02:29:54) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pickle
>>> with open('company.pkl', 'rb') as f:
...     company1 = pickle.load(f)
... 
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1378, in load
    return Unpickler(file).load()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1090, in load_global
    klass = self.find_class(module, name)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1126, in find_class
    klass = getattr(mod, name)
AttributeError: 'module' object has no attribute 'Company'
>>> 

Oops… pickle can’t handle it. Let’s try dill. We’ll throw in another object type (a lambda) for good measure.

Python 2.7.8 (default, Jul 13 2014, 02:29:54) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill       
>>> class Company:
...     pass
... 
>>> company1 = Company()
>>> company1.name = 'banana'
>>> company1.value = 40
>>> 
>>> company2 = lambda x:x
>>> company2.name = 'rhubarb'
>>> company2.value = 42
>>> 
>>> with open('company_dill.pkl', 'wb') as f:
...     dill.dump(company1, f)
...     dill.dump(company2, f)
... 
>>> 

And now read the file.

Python 2.7.8 (default, Jul 13 2014, 02:29:54) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> with open('company_dill.pkl', 'rb') as f:
...     company1 = dill.load(f)
...     company2 = dill.load(f)
... 
>>> company1 
<__main__.Company instance at 0x107909128>
>>> company1.name
'banana'
>>> company1.value
40
>>> company2.name
'rhubarb'
>>> company2.value
42
>>>    

It works. The reason pickle fails, and dill doesn’t, is that dill treats __main__ like a module (for the most part), and also can pickle class definitions instead of pickling by reference (like pickle does). The reason dill can pickle a lambda is that it gives it a name… then pickling magic can happen.

Actually, there’s an easier way to save all these objects, especially if you have a lot of objects you’ve created. Just dump the whole python session, and come back to it later.

Python 2.7.8 (default, Jul 13 2014, 02:29:54) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> class Company:
...     pass
... 
>>> company1 = Company()
>>> company1.name = 'banana'
>>> company1.value = 40
>>> 
>>> company2 = lambda x:x
>>> company2.name = 'rhubarb'
>>> company2.value = 42
>>> 
>>> dill.dump_session('dill.pkl')
>>> 

Now shut down your computer, go enjoy an espresso or whatever, and come back later…

Python 2.7.8 (default, Jul 13 2014, 02:29:54) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> dill.load_session('dill.pkl')
>>> company1.name
'banana'
>>> company1.value
40
>>> company2.name
'rhubarb'
>>> company2.value
42
>>> company2
<function <lambda> at 0x1065f2938>

The only major drawback is that dill is not part of the python standard library. So if you can’t install a python package on your server, then you can’t use it.

However, if you are able to install python packages on your system, you can get the latest dill with git+https://github.com/uqfoundation/dill.git@master#egg=dill. And you can get the latest released version with pip install dill.


Answer 2

You can use anycache to do the job for you. It considers all the details:

  • It uses dill as backend, which extends the python pickle module to handle lambda and all the nice python features.
  • It stores different objects to different files and reloads them properly.
  • Limits cache size
  • Allows cache clearing
  • Allows sharing of objects between multiple runs
  • Allows respect of input files which influence the result

Assuming you have a function myfunc which creates the instance:

from anycache import anycache

class Company(object):
    def __init__(self, name, value):
        self.name = name
        self.value = value

@anycache(cachedir='/path/to/your/cache')    
def myfunc(name, value):
    return Company(name, value)

Anycache calls myfunc the first time and pickles the result to a file in cachedir, using a unique identifier (depending on the function name and its arguments) as the filename. On any consecutive run, the pickled object is loaded. If the cachedir is preserved between python runs, the pickled object is taken from the previous python run.
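
So, reusing the myfunc above, repeated calls with the same arguments are served from the cache; a sketch of the behaviour just described:

company = myfunc('banana', 40)  # first call: runs and pickles the result
company = myfunc('banana', 40)  # same arguments: loaded from the cache instead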

For any further details, see the documentation.


Answer 3

A quick example using company1 from your question, with Python 3.

import pickle

# Save the object to a file
with open("company1.pickle", "wb") as f:
    pickle.dump(company1, f)

# Reload it
with open("company1.pickle", "rb") as f:
    company1_reloaded = pickle.load(f)

However, as this answer notes, plain pickle often fails for arbitrary objects, so you should really use dill:

import dill

# Save the object to a file
with open("company1.pickle", "wb") as f:
    dill.dump(company1, f)

# Reload it
with open("company1.pickle", "rb") as f:
    company1_reloaded = dill.load(f)
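
A quick check, assuming company1 was defined as in the question, that the reloaded object carries the attributes that were set before saving:

print(company1_reloaded.name)   # banana
print(company1_reloaded.value)  # 40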

How can I use pickle to save a dict?

Question: How can I use pickle to save a dict?

I have looked through the information that the Python docs give, but I’m still a little confused. Could somebody post sample code that would write a new file then use pickle to dump a dictionary into it?


Answer 0

Try this:

import pickle

a = {'hello': 'world'}

with open('filename.pickle', 'wb') as handle:
    pickle.dump(a, handle, protocol=pickle.HIGHEST_PROTOCOL)

with open('filename.pickle', 'rb') as handle:
    b = pickle.load(handle)

print(a == b)

Answer 1

import pickle

your_data = {'foo': 'bar'}

# Store data (serialize)
with open('filename.pickle', 'wb') as handle:
    pickle.dump(your_data, handle, protocol=pickle.HIGHEST_PROTOCOL)

# Load data (deserialize)
with open('filename.pickle', 'rb') as handle:
    unserialized_data = pickle.load(handle)

print(your_data == unserialized_data)

The advantage of HIGHEST_PROTOCOL is that files get smaller, which sometimes makes unpickling much faster.

Important notice: with pickle protocols older than 4, serializing very large objects (on the order of gigabytes) can fail; protocol 4 (available since Python 3.4) adds support for very large objects.

Alternative way

import mpu
your_data = {'foo': 'bar'}
mpu.io.write('filename.pickle', your_data)
unserialized_data = mpu.io.read('filename.pickle')

Alternative Formats

For your application, the following might be important:

  • Support by other programming languages
  • Reading / writing performance
  • Compactness (file size)

See also: Comparison of data serialization formats

In case you are rather looking for a way to make configuration files, you might want to read my short article Configuration files in Python.
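
If compactness matters to you, here is a small sketch (file names are arbitrary) that serializes the same dict with both pickle and json and compares the resulting file sizes:

import json
import os
import pickle

data = {'numbers': list(range(1000)), 'label': 'example'}

with open('data.pickle', 'wb') as fh:
    pickle.dump(data, fh, protocol=pickle.HIGHEST_PROTOCOL)

with open('data.json', 'w') as fh:
    json.dump(data, fh)

print(os.path.getsize('data.pickle'), 'bytes as pickle')
print(os.path.getsize('data.json'), 'bytes as JSON')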


Answer 2

# Save a dictionary into a pickle file.
import pickle

favorite_color = {"lion": "yellow", "kitty": "red"}  # create a dictionary

with open("save.p", "wb") as fp:  # save it into a file named save.p
    pickle.dump(favorite_color, fp)

# -------------------------------------------------------------
# Load the dictionary back from the pickle file.
with open("save.p", "rb") as fp:
    favorite_color = pickle.load(fp)
# favorite_color is now {"lion": "yellow", "kitty": "red"}

Answer 3

In general, pickling a dict will fail unless you have only simple objects in it, like strings and integers.

Python 2.7.9 (default, Dec 11 2014, 01:21:43) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from numpy import *
>>> type(globals())     
<type 'dict'>
>>> import pickle
>>> pik = pickle.dumps(globals())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1374, in dumps
    Pickler(file, protocol).dump(obj)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 224, in dump
    self.save(obj)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 663, in _batch_setitems
    save(v)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 306, in save
    rv = reduce(self.proto)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
    raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle module objects
>>> 

Even a really simple dict will often fail. It just depends on the contents.

>>> d = {'x': lambda x:x}
>>> pik = pickle.dumps(d)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1374, in dumps
    Pickler(file, protocol).dump(obj)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 224, in dump
    self.save(obj)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 663, in _batch_setitems
    save(v)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 748, in save_global
    (obj, module, name))
pickle.PicklingError: Can't pickle <function <lambda> at 0x102178668>: it's not found as __main__.<lambda>

However, if you use a better serializer like dill or cloudpickle, then most dictionaries can be pickled:

>>> import dill
>>> pik = dill.dumps(d)

Or if you want to save your dict to a file…

>>> with open('save.pik', 'wb') as f:
...   dill.dump(globals(), f)
... 

The latter example does the same as the other good answers posted here, except that those answers overlook whether the contents of the dict are actually picklable.
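
If you have to stay with the stdlib pickle, one workaround (a sketch with a hypothetical helper, not what dill does internally) is to keep only the dict entries whose values pickle accepts:

import pickle

def picklable_subset(d):
    """Return a copy of d containing only the values pickle can serialize."""
    result = {}
    for key, value in d.items():
        try:
            pickle.dumps(value)
        except Exception:  # pickle raises PicklingError, TypeError, ...
            continue
        result[key] = value
    return result

d = {'x': lambda x: x, 'y': 42}
safe = picklable_subset(d)  # {'y': 42}; the lambda is dropped
with open('save.pik', 'wb') as f:
    pickle.dump(safe, f)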


Answer 4

>>> import pickle
>>> with open("/tmp/picklefile", "wb") as f:
...     pickle.dump({}, f)
... 

Normally it’s preferable to use the cPickle implementation on Python 2. (On Python 3 this is unnecessary: the standard pickle module uses the C implementation automatically when it is available.)

>>> import cPickle as pickle
>>> help(pickle.dump)
Help on built-in function dump in module cPickle:

dump(...)
    dump(obj, file, protocol=0) -- Write an object in pickle format to the given file.

    See the Pickler docstring for the meaning of optional argument proto.

Answer 5

If you just want to store the dict in a single file, use pickle like this:

import pickle

a = {'hello': 'world'}

with open('filename.pickle', 'wb') as handle:
    pickle.dump(a, handle)

with open('filename.pickle', 'rb') as handle:
    b = pickle.load(handle)

If you want to save and restore multiple dictionaries in multiple files for caching, or to store more complex data, use anycache. It handles all the other bookkeeping around pickle for you:

from anycache import anycache

@anycache(cachedir='path/to/files')
def myfunc(hello):
    return {'hello': hello}

Anycache stores the results of myfunc for different arguments in separate files in cachedir and reloads them.

See the documentation for any further details.


Answer 6

A simple way to dump Python data (e.g. a dictionary) to a pickle file:

import pickle

your_dictionary = {}

with open('pickle_file_name.p', 'wb') as fp:
    pickle.dump(your_dictionary, fp)

Answer 7

import pickle

dictobj = {'Jack': 123, 'John': 456}
filename = "/foldername/filestore"

fileobj = open(filename, 'wb')
pickle.dump(dictobj, fileobj)
fileobj.close()
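
Reading the dictionary back follows the same pattern (the path is the same placeholder as above):

import pickle

filename = "/foldername/filestore"

fileobj = open(filename, 'rb')
dictobj = pickle.load(fileobj)
fileobj.close()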

Answer 8

I’ve found pickling confusing (possibly because I’m thick). I found that this works, though:

myDictionaryString = str(myDictionary)

You can then write that string to a text file. I gave up trying to use pickle because I kept getting errors telling me to write integers to a .dat file. I apologise for not using pickle.
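
For completeness: a dict written out with str() can be read back with ast.literal_eval from the standard library, which safely evaluates Python literals. A minimal sketch (the file name is arbitrary):

import ast

myDictionary = {'key1': 'keyinfo', 'key2': 'keyinfo2'}

# Write the dict's repr to a plain text file
with open('mydict.txt', 'w') as f:
    f.write(str(myDictionary))

# Read it back and parse the literal safely
with open('mydict.txt') as f:
    restored = ast.literal_eval(f.read())

print(restored == myDictionary)  # True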