Question: Dictionary vs. object - which is more efficient, and why?
What is more efficient in Python in terms of memory usage and CPU consumption: a dictionary or an object?
Background: I have to load a huge amount of data into Python. I created an object that is just a field container. Creating 4M instances and putting them into a dictionary took about 10 minutes and ~6GB of memory. Once the dictionary is ready, accessing it takes the blink of an eye.
Example: To check the performance I wrote two simple programs that do the same thing; one uses objects, the other a dictionary:
Object (execution time ~18sec):
class Obj(object):
    def __init__(self, i):
        self.i = i
        self.l = []

all = {}
for i in range(1000000):
    all[i] = Obj(i)
Dictionary (execution time ~12sec):
all = {}
for i in range(1000000):
    o = {}
    o['i'] = i
    o['l'] = []
    all[i] = o
Question: Am I doing something wrong, or is a dictionary just faster than an object? If a dictionary indeed performs better, can somebody explain why?
Answer 0
Have you tried using __slots__?
From the documentation:
By default, instances of both old and new-style classes have a dictionary for attribute storage. This wastes space for objects having very few instance variables. The space consumption can become acute when creating large numbers of instances.
The default can be overridden by defining __slots__ in a new-style class definition. The __slots__ declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because __dict__ is not created for each instance.
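A quick way to see that saving directly: a slotted instance has no per-instance __dict__ at all. A minimal sketch (Plain and Slotted are made-up example classes; exact getsizeof numbers vary by Python version and platform):

import sys

class Plain(object):
    def __init__(self):
        self.i = 0

class Slotted(object):
    __slots__ = ('i',)
    def __init__(self):
        self.i = 0

p, s = Plain(), Slotted()
print(hasattr(p, '__dict__'), hasattr(s, '__dict__'))  # True False
print(sys.getsizeof(p.__dict__))  # the extra dict that every plain instance carries
# s.j = 1 would raise AttributeError: there is no __dict__ to put new attributes in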
So does this save time as well as memory?
Comparing the three approaches on my computer:
test_slots.py:
class Obj(object):
    __slots__ = ('i', 'l')

    def __init__(self, i):
        self.i = i
        self.l = []

all = {}
for i in range(1000000):
    all[i] = Obj(i)
test_obj.py:
class Obj(object):
    def __init__(self, i):
        self.i = i
        self.l = []

all = {}
for i in range(1000000):
    all[i] = Obj(i)
test_dict.py:
all = {}
for i in range(1000000):
    o = {}
    o['i'] = i
    o['l'] = []
    all[i] = o
test_namedtuple.py (supported in 2.6):
import collections

Obj = collections.namedtuple('Obj', 'i l')

all = {}
for i in range(1000000):
    all[i] = Obj(i, [])
Running the benchmark (using CPython 2.5):
$ lshw | grep product | head -n 1
product: Intel(R) Pentium(R) M processor 1.60GHz
$ python --version
Python 2.5
$ time python test_obj.py && time python test_dict.py && time python test_slots.py
real 0m27.398s (using 'normal' object)
real 0m16.747s (using __dict__)
real 0m11.777s (using __slots__)
Using CPython 2.6.2, including the named tuple test:
$ python --version
Python 2.6.2
$ time python test_obj.py && time python test_dict.py && time python test_slots.py && time python test_namedtuple.py
real 0m27.197s (using 'normal' object)
real 0m17.657s (using __dict__)
real 0m12.249s (using __slots__)
real 0m12.262s (using namedtuple)
So yes (not really a surprise), using __slots__ is a performance optimization. Using a named tuple has similar performance to __slots__.
Answer 1
Attribute access in an object uses dictionary access behind the scenes, so by using attribute access you are adding extra overhead. Plus, in the object case you incur additional overhead from, e.g., extra memory allocations and code execution (such as running the __init__ method).
In your code, if o is an Obj instance, o.attr is equivalent to o.__dict__['attr'], plus a small amount of extra overhead.
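A minimal sketch that makes this overhead visible with the standard-library timeit module (absolute numbers are machine-dependent):

import timeit

class Obj(object):
    def __init__(self, i):
        self.i = i

o = Obj(1)
d = {'i': 1}

print(timeit.timeit(lambda: o.i))              # attribute lookup, routed through the instance __dict__
print(timeit.timeit(lambda: o.__dict__['i']))  # the same lookup spelled out by hand
print(timeit.timeit(lambda: d['i']))           # plain dict access, no attribute machinery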
Answer 2
Have you considered using a namedtuple? (link for Python 2.4/2.5)
It's the new standard way of representing structured data that gives you the performance of a tuple and the convenience of a class.
Its only downside compared with dictionaries is that (like tuples) it doesn't give you the ability to change attributes after creation.
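A minimal sketch of that trade-off (Point is a made-up example type):

from collections import namedtuple

Point = namedtuple('Point', 'x y')
p = Point(1, 2)
print(p.x, p[0])       # attribute access and tuple indexing both work

# p.x = 10 would raise AttributeError; "changing" a field means building a new tuple:
p2 = p._replace(x=10)
print(p2)              # Point(x=10, y=2)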
Answer 3
Here is a copy of @hughdbrown's answer for Python 3.6.1. I've made the count 5x larger and added some code to test the memory footprint of the Python process at the end of each run.
Before the downvoters have at it, be advised that this method of measuring the size of objects is not accurate (see the sketch after the conclusions below for a more direct approach).
from datetime import datetime
import os
import psutil

process = psutil.Process(os.getpid())
ITER_COUNT = 1000 * 1000 * 5

RESULT = None

def makeL(i):
    # Use this line to negate the effect of the strings on the test
    # return "Python is smart and will only create one string with this line"
    # Use this if you want to see the difference with 5 million unique strings
    return "This is a sample string %s" % i

def timeit(method):
    def timed(*args, **kw):
        global RESULT
        s = datetime.now()
        RESULT = method(*args, **kw)
        e = datetime.now()
        sizeMb = process.memory_info().rss / 1024 / 1024
        sizeMbStr = "{0:,}".format(round(sizeMb, 2))
        print('Time Taken = %s, \t%s, \tSize = %s' % (e - s, method.__name__, sizeMbStr))
    return timed

class Obj(object):
    def __init__(self, i):
        self.i = i
        self.l = makeL(i)

class SlotObj(object):
    __slots__ = ('i', 'l')

    def __init__(self, i):
        self.i = i
        self.l = makeL(i)

from collections import namedtuple
NT = namedtuple("NT", ["i", 'l'])

@timeit
def profile_dict_of_nt():
    # NB: despite the name, this builds a list of namedtuples
    return [NT(i=i, l=makeL(i)) for i in range(ITER_COUNT)]

@timeit
def profile_list_of_nt():
    # NB: despite the name, this builds a dict of namedtuples
    return dict((i, NT(i=i, l=makeL(i))) for i in range(ITER_COUNT))

@timeit
def profile_dict_of_dict():
    return dict((i, {'i': i, 'l': makeL(i)}) for i in range(ITER_COUNT))

@timeit
def profile_list_of_dict():
    return [{'i': i, 'l': makeL(i)} for i in range(ITER_COUNT)]

@timeit
def profile_dict_of_obj():
    return dict((i, Obj(i)) for i in range(ITER_COUNT))

@timeit
def profile_list_of_obj():
    return [Obj(i) for i in range(ITER_COUNT)]

@timeit
def profile_dict_of_slot():
    return dict((i, SlotObj(i)) for i in range(ITER_COUNT))

@timeit
def profile_list_of_slot():
    return [SlotObj(i) for i in range(ITER_COUNT)]

profile_dict_of_nt()
profile_list_of_nt()
profile_dict_of_dict()
profile_list_of_dict()
profile_dict_of_obj()
profile_list_of_obj()
profile_dict_of_slot()
profile_list_of_slot()
And these are my results:
Time Taken = 0:00:07.018720, profile_dict_of_nt, Size = 951.83
Time Taken = 0:00:07.716197, profile_list_of_nt, Size = 1,084.75
Time Taken = 0:00:03.237139, profile_dict_of_dict, Size = 1,926.29
Time Taken = 0:00:02.770469, profile_list_of_dict, Size = 1,778.58
Time Taken = 0:00:07.961045, profile_dict_of_obj, Size = 1,537.64
Time Taken = 0:00:05.899573, profile_list_of_obj, Size = 1,458.05
Time Taken = 0:00:06.567684, profile_dict_of_slot, Size = 1,035.65
Time Taken = 0:00:04.925101, profile_list_of_slot, Size = 887.49
My conclusions are:
- Slots have the best memory footprint and are reasonably fast.
- dicts are the fastest, but use the most memory.
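Since the RSS figures above include interpreter overhead and allocator slack, a more direct measurement may be useful. A minimal sketch using the standard-library tracemalloc module (Python 3.4+), which counts only the allocations made between start() and the query; the smaller 100k count here is a made-up example size:

import tracemalloc

tracemalloc.start()
data = [{'i': i, 'l': 'This is a sample string %s' % i} for i in range(100000)]
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print('current: %.1f MiB, peak: %.1f MiB' % (current / 2**20, peak / 2**20))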
Answer 4
from datetime import datetime

ITER_COUNT = 1000 * 1000

def timeit(method):
    def timed(*args, **kw):
        s = datetime.now()
        result = method(*args, **kw)
        e = datetime.now()
        print method.__name__, '(%r, %r)' % (args, kw), e - s
        return result
    return timed

class Obj(object):
    def __init__(self, i):
        self.i = i
        self.l = []

class SlotObj(object):
    __slots__ = ('i', 'l')

    def __init__(self, i):
        self.i = i
        self.l = []

@timeit
def profile_dict_of_dict():
    return dict((i, {'i': i, 'l': []}) for i in xrange(ITER_COUNT))

@timeit
def profile_list_of_dict():
    return [{'i': i, 'l': []} for i in xrange(ITER_COUNT)]

@timeit
def profile_dict_of_obj():
    return dict((i, Obj(i)) for i in xrange(ITER_COUNT))

@timeit
def profile_list_of_obj():
    return [Obj(i) for i in xrange(ITER_COUNT)]

@timeit
def profile_dict_of_slotobj():
    return dict((i, SlotObj(i)) for i in xrange(ITER_COUNT))

@timeit
def profile_list_of_slotobj():
    return [SlotObj(i) for i in xrange(ITER_COUNT)]

if __name__ == '__main__':
    profile_dict_of_dict()
    profile_list_of_dict()
    profile_dict_of_obj()
    profile_list_of_obj()
    profile_dict_of_slotobj()
    profile_list_of_slotobj()
Results:
hbrown@hbrown-lpt:~$ python ~/Dropbox/src/StackOverflow/1336791.py
profile_dict_of_dict ((), {}) 0:00:08.228094
profile_list_of_dict ((), {}) 0:00:06.040870
profile_dict_of_obj ((), {}) 0:00:11.481681
profile_list_of_obj ((), {}) 0:00:10.893125
profile_dict_of_slotobj ((), {}) 0:00:06.381897
profile_list_of_slotobj ((), {}) 0:00:05.860749
Answer 5
There is no question.
You have data, with no other attributes (no methods, nothing). Hence you have a data container (in this case, a dictionary).
I usually prefer to think in terms of data modeling. If there is some huge performance issue, then I can give up some of the abstraction, but only with very good reasons.
Programming is all about managing complexity, and maintaining the correct abstraction is very often one of the most useful ways to achieve that result.
About the reasons an object is slower, I think your measurement is not correct.
You are performing too few assignments inside the for loop, and therefore what you see there is the different time necessary to instantiate a dict (an intrinsic object) and a "custom" object. Although from the language perspective they are the same, they have quite different implementations.
After that, the assignment time should be almost the same for both, as in the end members are maintained inside a dictionary.
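That claim is easy to check by timing instantiation and member assignment separately. A minimal sketch with the standard-library timeit module (numbers are machine-dependent):

import timeit

setup = """
class Obj(object):
    def __init__(self, i):
        self.i = i
        self.l = []

o = Obj(0)
d = {'i': 0, 'l': []}
"""

print(timeit.timeit('Obj(1)', setup=setup))             # object creation (runs __init__)
print(timeit.timeit("{'i': 1, 'l': []}", setup=setup))  # dict literal creation
print(timeit.timeit('o.i = 2', setup=setup))            # attribute assignment
print(timeit.timeit("d['i'] = 2", setup=setup))         # dict item assignment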
Answer 6
There is yet another way to reduce memory usage, if the data structure isn't supposed to contain reference cycles.
Let's compare two classes:
class DataItem:
    __slots__ = ('name', 'age', 'address')

    def __init__(self, name, age, address):
        self.name = name
        self.age = age
        self.address = address
and
$ pip install recordclass

>>> import sys
>>> from recordclass import structclass
>>> DataItem2 = structclass('DataItem', 'name age address')
>>> inst = DataItem('Mike', 10, 'Cherry Street 15')
>>> inst2 = DataItem2('Mike', 10, 'Cherry Street 15')
>>> print(inst2)
DataItem(name='Mike', age=10, address='Cherry Street 15')
>>> print(sys.getsizeof(inst), sys.getsizeof(inst2))
64 40
This became possible because structclass-based classes don't support cyclic garbage collection, which is not needed in such cases.
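A quick way to check that claim in the same session, assuming inst and inst2 from above: the cyclic garbage collector should not track the structclass instance.

>>> import gc
>>> gc.is_tracked(inst), gc.is_tracked(inst2)   # expected: (True, False)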
There is also one advantage over a __slots__-based class: you are able to add extra attributes:
>>> DataItem3 = structclass('DataItem', 'name age address', usedict=True)
>>> inst3 = DataItem3('Mike', 10, 'Cherry Street 15')
>>> inst3.hobby = ['drawing', 'singing']
>>> print(inst3)
DataItem(name='Mike', age=10, address='Cherry Street 15', **{'hobby': ['drawing', 'singing']})
>>> print(sys.getsizeof(inst3), 'has dict:', bool(inst3.__dict__))
48 has dict: True
Answer 7
Here are my test runs of @Jarrod-Chesney's very nice script. For comparison, I also ran it against Python 2 with "range" replaced by "xrange".
Out of curiosity, I also added similar tests with OrderedDict (ordict) for comparison; a sketch of those functions appears after the results below.
Python 3.6.9:
Time Taken = 0:00:04.971369, profile_dict_of_nt, Size = 944.27
Time Taken = 0:00:05.743104, profile_list_of_nt, Size = 1,066.93
Time Taken = 0:00:02.524507, profile_dict_of_dict, Size = 1,920.35
Time Taken = 0:00:02.123801, profile_list_of_dict, Size = 1,760.9
Time Taken = 0:00:05.374294, profile_dict_of_obj, Size = 1,532.12
Time Taken = 0:00:04.517245, profile_list_of_obj, Size = 1,441.04
Time Taken = 0:00:04.590298, profile_dict_of_slot, Size = 1,030.09
Time Taken = 0:00:04.197425, profile_list_of_slot, Size = 870.67
Time Taken = 0:00:08.833653, profile_ordict_of_ordict, Size = 3,045.52
Time Taken = 0:00:11.539006, profile_list_of_ordict, Size = 2,722.34
Time Taken = 0:00:06.428105, profile_ordict_of_obj, Size = 1,799.29
Time Taken = 0:00:05.559248, profile_ordict_of_slot, Size = 1,257.75
Python 2.7.15+:
Time Taken = 0:00:05.193900, profile_dict_of_nt, Size = 906.0
Time Taken = 0:00:05.860978, profile_list_of_nt, Size = 1,177.0
Time Taken = 0:00:02.370905, profile_dict_of_dict, Size = 2,228.0
Time Taken = 0:00:02.100117, profile_list_of_dict, Size = 2,036.0
Time Taken = 0:00:08.353666, profile_dict_of_obj, Size = 2,493.0
Time Taken = 0:00:07.441747, profile_list_of_obj, Size = 2,337.0
Time Taken = 0:00:06.118018, profile_dict_of_slot, Size = 1,117.0
Time Taken = 0:00:04.654888, profile_list_of_slot, Size = 964.0
Time Taken = 0:00:59.576874, profile_ordict_of_ordict, Size = 7,427.0
Time Taken = 0:10:25.679784, profile_list_of_ordict, Size = 11,305.0
Time Taken = 0:05:47.289230, profile_ordict_of_obj, Size = 11,477.0
Time Taken = 0:00:51.485756, profile_ordict_of_slot, Size = 11,193.0
So, on both major versions, the conclusions of @Jarrod-Chesney are still looking good.
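The OrderedDict variants are not shown above; here is a sketch of what they presumably look like, reusing the timeit decorator, makeL and ITER_COUNT from @Jarrod-Chesney's script earlier in this thread (the exact bodies are a guess, reconstructed from the result labels):

from collections import OrderedDict

@timeit
def profile_ordict_of_ordict():
    return OrderedDict((i, OrderedDict([('i', i), ('l', makeL(i))])) for i in range(ITER_COUNT))

@timeit
def profile_list_of_ordict():
    return [OrderedDict([('i', i), ('l', makeL(i))]) for i in range(ITER_COUNT)]

@timeit
def profile_ordict_of_obj():
    return OrderedDict((i, Obj(i)) for i in range(ITER_COUNT))

@timeit
def profile_ordict_of_slot():
    return OrderedDict((i, SlotObj(i)) for i in range(ITER_COUNT))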