Question: How do I determine the size of an object in Python?

I want to know how to get size of objects like a string, integer, etc. in Python.

Related question: How many bytes per element are there in a Python list (tuple)?

I am using an XML file which contains size fields that specify the size of a value. I must parse this XML and do my coding. When I want to change the value of a particular field, I will check the size field of that value. Here I want to compare whether the new value that I'm going to enter is of the same size as in the XML, so I need to check the size of the new value. In the case of a string I can say it's the length, but in the case of an int, float, etc. I am confused.


Answer 0

Just use the sys.getsizeof function defined in the sys module.

sys.getsizeof(object[, default]):

Return the size of an object in bytes. The object can be any type of object. All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific.

The default argument allows you to define a value which will be returned if the object type does not provide means to retrieve the size and would otherwise cause a TypeError.
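
A minimal sketch of the default argument (NoSize is a contrived class that simulates a type whose size cannot be retrieved):

import sys

class NoSize:
    def __sizeof__(self):
        raise TypeError  # contrived: this type refuses to report its size

print(sys.getsizeof(NoSize(), -1))  # prints -1 (the default) instead of raising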

getsizeof calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.
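
You can observe the two parts separately; a quick sketch (the 16-byte GC header figure applies to 64-bit CPython 3.8+):

import sys

lst = [1, 2, 3]
print(lst.__sizeof__())    # the size the object itself reports
print(sys.getsizeof(lst))  # the same, plus the GC header (16 bytes on 64-bit CPython 3.8+)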

Usage example, in Python 3.0:

>>> import sys
>>> x = 2
>>> sys.getsizeof(x)
24
>>> sys.getsizeof(sys.getsizeof)
32
>>> sys.getsizeof('this')
38
>>> sys.getsizeof('this also')
48

If you are on Python < 2.6 and don't have sys.getsizeof, you can use this extensive module instead. I never used it, though.


Answer 1

How do I determine the size of an object in Python?

The answer, “Just use sys.getsizeof” is not a complete answer.

That answer does work for builtin objects directly, but it does not account for what those objects may contain; in particular, container types such as custom objects, tuples, lists, dicts, and sets can contain instances of each other, as well as numbers, strings, and other objects.

A More Complete Answer

Using 64-bit Python 3.6 from the Anaconda distribution, with sys.getsizeof, I have determined the minimum size of the following objects. Note that sets and dicts preallocate space, so empty ones don't grow again until after a set amount (which may vary by implementation of the language):

Python 3:

Empty
Bytes  type        scaling notes
28     int         +4 bytes about every 30 powers of 2
37     bytes       +1 byte per additional byte
49     str         +1-4 per additional character (depending on max width)
48     tuple       +8 per additional item
64     list        +8 for each additional
224    set         5th increases to 736; 21st, 2272; 85th, 8416; 341st, 32992
240    dict        6th increases to 368; 22nd, 1184; 43rd, 2280; 86th, 4704; 171st, 9320
136    func def    does not include default args and other attrs
1056   class def   no slots 
56     class inst  has a __dict__ attr, same scaling as dict above
888    class def   with slots
16     __slots__   seems to store in mutable tuple-like structure
                   first slot grows to 48, and so on.
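
A snippet to reproduce these empty-object measurements on your own build (numbers vary by version; Python 3.8+ containers are 8 bytes smaller, as a later answer notes):

import sys

for obj in (1, b'', '', (), [], set(), {}):
    print(f'{sys.getsizeof(obj):>4}  {type(obj).__name__}')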

How do you interpret this? Well, say you have a set with 10 items in it. If each item is 100 bytes, how big is the whole data structure? The set itself is 736 bytes, because it has sized up once to 736. Then you add the size of the items, so that's 1736 bytes in total.
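
You can watch a set make those jumps as it fills (the thresholds in the table above are from 64-bit Python 3.6):

import sys

s = set()
for i in range(25):
    s.add(i)
    print(len(s), sys.getsizeof(s))  # size jumps at the 5th and 21st items on this build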

Some caveats for function and class definitions:

Note that each class definition has a proxy __dict__ structure (48 bytes) for class attrs. Each slot has a descriptor (like a property) in the class definition.

Slotted instances start out with 48 bytes for their first element and increase by 8 bytes for each additional one. Only empty slotted objects have 16 bytes, and an instance with no data makes very little sense.
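
A sketch contrasting a slotted instance with a plain one (exact byte counts depend on the interpreter version):

import sys

class Slotted:
    __slots__ = ('a', 'b')  # no per-instance __dict__

class Plain:
    pass

p = Plain()
print(sys.getsizeof(Slotted()))                      # the slotted instance alone
print(sys.getsizeof(p) + sys.getsizeof(p.__dict__))  # plain instance plus its __dict__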

Also, each function definition has a code object, maybe a docstring, and other possible attributes, even a __dict__.

Also note that we use sys.getsizeof() because we care about the marginal space usage, which includes the garbage collection overhead for the object. From the docs:

getsizeof() calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.

Also note that resizing lists (e.g. repetitively appending to them) causes them to preallocate space, similarly to sets and dicts. From the listobj.c source code:

    /* This over-allocates proportional to the list size, making room
     * for additional growth.  The over-allocation is mild, but is
     * enough to give linear-time amortized behavior over a long
     * sequence of appends() in the presence of a poorly-performing
     * system realloc().
     * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
     * Note: new_allocated won't overflow because the largest possible value
     *       is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t.
     */
    new_allocated = (size_t)newsize + (newsize >> 3) + (newsize < 9 ? 3 : 6);
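
You can observe the over-allocation from Python; each printed line marks a realloc (the jump points vary across CPython versions):

import sys

lst = []
prev = sys.getsizeof(lst)
for i in range(30):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != prev:
        print(f'len {len(lst)}: {prev} -> {size} bytes')
        prev = size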

Historical data

Python 2.7 analysis, confirmed with guppy.hpy and sys.getsizeof:

Bytes  type        empty + scaling notes
24     int         NA
28     long        NA
37     str         + 1 byte per additional character
52     unicode     + 4 bytes per additional character
56     tuple       + 8 bytes per additional item
72     list        + 32 for first, 8 for each additional
232    set         sixth item increases to 744; 22nd, 2280; 86th, 8424
280    dict        sixth item increases to 1048; 22nd, 3352; 86th, 12568 *
120    func def    does not include default args and other attrs
64     class inst  has a __dict__ attr, same scaling as dict above
16     __slots__   class with slots has no dict, seems to store in 
                   mutable tuple-like structure.
904    class def   has a proxy __dict__ structure for class attrs
104    old class   makes sense, less stuff, has real dict though.

Note that dictionaries (but not sets) got a more compact representation in Python 3.6.

I think 8 bytes per additional item to reference makes a lot of sense on a 64-bit machine: those 8 bytes point to the place in memory the contained item is at. The 4 bytes per character are fixed-width for unicode in Python 2, if I recall correctly, while in Python 3, str becomes a unicode whose width equals the max width of its characters.

(And for more on slots, see this answer.)

A More Complete Function

We want a function that searches the elements in lists, tuples, sets, dicts, obj.__dict__'s, and obj.__slots__, as well as other things we may not have yet thought of.

We want to rely on gc.get_referents to do this search because it works at the C level (making it very fast). The downside is that get_referents can return redundant members, so we need to ensure we don’t double count.

Classes, modules, and functions are singletons – they exist one time in memory. We’re not so interested in their size, as there’s not much we can do about them – they’re a part of the program. So we’ll avoid counting them if they happen to be referenced.

We’re going to use a blacklist of types so we don’t include the entire program in our size count.

import sys
from types import ModuleType, FunctionType
from gc import get_referents

# Custom objects know their class.
# Function objects seem to know way too much, including modules.
# Exclude modules as well.
BLACKLIST = type, ModuleType, FunctionType


def getsize(obj):
    """sum size of object & members."""
    if isinstance(obj, BLACKLIST):
        raise TypeError('getsize() does not take argument of type: '+ str(type(obj)))
    seen_ids = set()
    size = 0
    objects = [obj]
    while objects:
        need_referents = []
        for obj in objects:
            if not isinstance(obj, BLACKLIST) and id(obj) not in seen_ids:
                seen_ids.add(id(obj))
                size += sys.getsizeof(obj)
                need_referents.append(obj)
        objects = get_referents(*need_referents)
    return size
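
A quick usage sketch with the function above (exact numbers vary by Python build):

d = {'a': [1, 2, 3], 'b': ('x', 'y')}
print(getsize(d))        # deep size: the dict plus everything reachable from it
print(sys.getsizeof(d))  # shallow size: the dict object alone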

To contrast this with the following whitelisted function: most objects know how to traverse themselves for the purposes of garbage collection, which is approximately what we are looking for when we want to know how expensive certain objects are in memory (this functionality is used by gc.get_referents). However, this measure is going to be much more expansive in scope than we intended if we are not careful.

For example, functions know quite a lot about the modules they are created in.

Another point of contrast is that strings that are keys in dictionaries are usually interned so they are not duplicated. Checking for id(key) will also allow us to avoid counting duplicates, which we do in the next section. The blacklist solution skips counting keys that are strings altogether.

Whitelisted Types, Recursive visitor (old implementation)

To cover most of these types myself, instead of relying on the gc module, I wrote this recursive function to try to estimate the size of most Python objects, including most builtins, types in the collections module, and custom types (slotted and otherwise).

This sort of function gives much more fine-grained control over the types we’re going to count for memory usage, but has the danger of leaving types out:

import sys
from numbers import Number
from collections import deque

try: # Python 2
    zero_depth_bases = (basestring, Number, xrange, bytearray)
    iteritems = 'iteritems'
    from collections import Set, Mapping
except NameError: # Python 3 (these ABCs now live in collections.abc)
    zero_depth_bases = (str, bytes, Number, range, bytearray)
    iteritems = 'items'
    from collections.abc import Set, Mapping

def getsize(obj_0):
    """Recursively iterate to sum size of object & members."""
    _seen_ids = set()
    def inner(obj):
        obj_id = id(obj)
        if obj_id in _seen_ids:
            return 0
        _seen_ids.add(obj_id)
        size = sys.getsizeof(obj)
        if isinstance(obj, zero_depth_bases):
            pass # bypass remaining control flow and return
        elif isinstance(obj, (tuple, list, Set, deque)):
            size += sum(inner(i) for i in obj)
        elif isinstance(obj, Mapping) or hasattr(obj, iteritems):
            size += sum(inner(k) + inner(v) for k, v in getattr(obj, iteritems)())
        # Check for custom object instances - may subclass above too
        if hasattr(obj, '__dict__'):
            size += inner(vars(obj))
        if hasattr(obj, '__slots__'): # can have __slots__ with __dict__
            size += sum(inner(getattr(obj, s)) for s in obj.__slots__ if hasattr(obj, s))
        return size
    return inner(obj_0)

And I tested it rather casually (I should unittest it):

>>> getsize(['a', tuple('bcd'), Foo()])
344
>>> getsize(Foo())
16
>>> getsize(tuple('bcd'))
194
>>> getsize(['a', tuple('bcd'), Foo(), {'foo': 'bar', 'baz': 'bar'}])
752
>>> getsize({'foo': 'bar', 'baz': 'bar'})
400
>>> getsize({})
280
>>> getsize({'foo':'bar'})
360
>>> getsize('foo')
40
>>> class Bar():
...     def baz():
...         pass
>>> getsize(Bar())
352
>>> getsize(Bar().__dict__)
280
>>> sys.getsizeof(Bar())
72
>>> getsize(Bar.__dict__)
872
>>> sys.getsizeof(Bar.__dict__)
280

This implementation breaks down on class definitions and function definitions because we don’t go after all of their attributes, but since they should only exist once in memory for the process, their size really doesn’t matter too much.


Answer 2

The Pympler package’s asizeof module can do this.

Use as follows:

from pympler import asizeof
asizeof.asizeof(my_object)

Unlike sys.getsizeof, it works for objects you have created yourself. It even works with numpy.

>>> asizeof.asizeof(tuple('bcd'))
200
>>> asizeof.asizeof({'foo': 'bar', 'baz': 'bar'})
400
>>> asizeof.asizeof({})
280
>>> asizeof.asizeof({'foo':'bar'})
360
>>> asizeof.asizeof('foo')
40
>>> asizeof.asizeof(Bar())
352
>>> asizeof.asizeof(Bar().__dict__)
280
>>> from numpy.random import rand
>>> A = rand(10)
>>> B = rand(10000)
>>> asizeof.asizeof(A)
176
>>> asizeof.asizeof(B)
80096

As mentioned,

The (byte)code size of objects like classes, functions, methods, modules, etc. can be included by setting option code=True.

And if you need another view on live data, Pympler's

module muppy is used for on-line monitoring of a Python application and module Class Tracker provides off-line analysis of the lifetime of selected Python objects.
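
For instance, a minimal muppy sketch (assuming Pympler is installed):

from pympler import muppy, summary

all_objects = muppy.get_objects()               # every object the GC currently tracks
summary.print_(summary.summarize(all_objects))  # per-type object counts and total sizes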


Answer 3

For numpy arrays, getsizeof doesn't work – for me it always returns 40, presumably because it only accounts for the array wrapper, not the underlying data buffer:

from pylab import *
from sys import getsizeof
A = rand(10)
B = rand(10000)

Then (in ipython):

In [64]: getsizeof(A)
Out[64]: 40

In [65]: getsizeof(B)
Out[65]: 40

Happily, though:

In [66]: A.nbytes
Out[66]: 80

In [67]: B.nbytes
Out[67]: 80000
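
(On recent NumPy versions this has improved: ndarray implements __sizeof__, so sys.getsizeof reports the header plus any owned data buffer. A quick check, assuming a current NumPy:)

import sys
import numpy as np

A = np.random.rand(1000)
print(A.nbytes)          # 8000: bytes in the data buffer alone
print(sys.getsizeof(A))  # on recent NumPy, the buffer plus the ndarray header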

Answer 4

This can be more complicated than it looks, depending on how you want to count things. For instance, if you have a list of ints, do you want the size of the list containing the references to the ints (i.e. the list only, not what is contained in it)? Or do you want to include the actual data pointed to? In that case you need to deal with duplicate references, and with preventing double-counting when two objects contain references to the same object.
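
For instance, a toy sketch of the double-counting problem and the usual id()-based fix:

import sys

shared = list(range(1000))
a = [shared, shared]  # two references to one underlying list

seen = set()
def deep_size(obj):
    """Naive deep size for lists, deduplicated by object identity."""
    if id(obj) in seen:
        return 0  # already counted: don't count a shared object twice
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, list):
        size += sum(deep_size(item) for item in obj)
    return size

print(deep_size(a))  # counts `shared` (and its ints) only once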

You may want to take a look at one of the Python memory profilers, such as pysizer, to see if they meet your needs.


Answer 5

Python 3.8 (Q1 2019) will change some of the results of sys.getsizeof, as announced here by Raymond Hettinger:

Python containers are 8 bytes smaller on 64-bit builds.

tuple ()   48 -> 40
list  []   64 -> 56
set   ()  224 -> 216
dict  {}  240 -> 232

This comes after issue 33597 and Inada Naoki (methane)'s work on Compact PyGC_Head, and PR 7043:

This idea reduces PyGC_Head size to two words.

Currently, PyGC_Head takes three words: gc_prev, gc_next, and gc_refcnt.

  • gc_refcnt is used when collecting, for trial deletion.
  • gc_prev is used for tracking and untracking.

So if we can avoid tracking/untracking during trial deletion, gc_prev and gc_refcnt can share the same memory space.

See commit d5c875b:

Removed one Py_ssize_t member from PyGC_Head.
All GC tracked objects (e.g. tuple, list, dict) size is reduced 4 or 8 bytes.
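
A quick check of what your own interpreter does (figures are for 64-bit builds):

import sys

print(sys.version.split()[0])
for obj in ((), [], set(), {}):
    print(type(obj).__name__, sys.getsizeof(obj))  # 3.7: 48/64/224/240; 3.8+: 40/56/216/232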


Answer 6

Having run into this problem many times myself, I wrote up a small function (inspired by @aaron-hall's answer) and tests that do what I would have expected sys.getsizeof to do:

https://github.com/bosswissam/pysize

If you’re interested in the backstory, here it is

EDIT: Attaching the code below for easy reference. To see the most up-to-date code, please check the GitHub link.

    import sys

    def get_size(obj, seen=None):
        """Recursively finds size of objects"""
        size = sys.getsizeof(obj)
        if seen is None:
            seen = set()
        obj_id = id(obj)
        if obj_id in seen:
            return 0
        # Important mark as seen *before* entering recursion to gracefully handle
        # self-referential objects
        seen.add(obj_id)
        if isinstance(obj, dict):
            size += sum([get_size(v, seen) for v in obj.values()])
            size += sum([get_size(k, seen) for k in obj.keys()])
        elif hasattr(obj, '__dict__'):
            size += get_size(obj.__dict__, seen)
        elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)):
            size += sum([get_size(i, seen) for i in obj])
        return size

Answer 7

Here is a quick script I wrote, based on the previous answers, to list the sizes of all variables:

import sys

for i in dir():
    print(i, sys.getsizeof(eval(i)))
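
A variant without eval(), iterating the globals() mapping directly (a sketch; like the original, it reports shallow sizes only):

import sys

for name, value in sorted(globals().items(), key=lambda kv: -sys.getsizeof(kv[1])):
    print(name, sys.getsizeof(value))  # largest shallow sizes first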

Answer 8

You can serialize the object to derive a measure that is closely related to the size of the object:

import pickle

## let o be the object, whose size you want to measure
size_estimate = len(pickle.dumps(o))

If you want to measure objects that cannot be pickled (e.g. because of lambda expressions), cloudpickle can be a solution.
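
A sketch, assuming the third-party cloudpickle package is installed:

import cloudpickle

f = lambda x: x + 1  # the stdlib pickle.dumps(f) would raise here
size_estimate = len(cloudpickle.dumps(f))
print(size_estimate)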


Answer 9

Use sys.getsizeof() if you DON’T want to include sizes of linked (nested) objects.

However, if you want to count sub-objects nested in lists, dicts, sets, tuples – and usually THIS is what you’re looking for – use the recursive deep sizeof() function as shown below:

import sys
def sizeof(obj):
    size = sys.getsizeof(obj)
    if isinstance(obj, dict): return size + sum(map(sizeof, obj.keys())) + sum(map(sizeof, obj.values()))
    if isinstance(obj, (list, tuple, set, frozenset)): return size + sum(map(sizeof, obj))
    return size

You can also find this function in the nifty toolbox, together with many other useful one-liners:

https://github.com/mwojnars/nifty/blob/master/util.py


Answer 10

If you don't need the exact size of the object, but only a rough idea of how big it is, one quick (and dirty) way is to let the program run, sleep for an extended period of time, and check the memory usage (e.g. Mac's Activity Monitor) of this particular Python process. This is effective when you are trying to find the size of one single large object in a Python process. For example, I recently wanted to check the memory usage of a new data structure and compare it with that of Python's set data structure. First I wrote the elements (words from a large public-domain book) to a set, then checked the size of the process, and then did the same thing with the other data structure. I found out that the Python process with a set was taking twice as much memory as with the new data structure. Again, you can't say exactly that the memory used by the process equals the size of the object, but as the object gets large, the approximation gets closer, since the memory consumed by the rest of the process becomes negligible compared to the object you are trying to monitor.
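
The same idea can be scripted with psutil instead of an external activity monitor (a sketch; RSS is process-wide, so the result is only approximate):

import os
import psutil

proc = psutil.Process(os.getpid())
before = proc.memory_info().rss          # resident set size before the allocation
words = {str(i) for i in range(10**6)}   # stand-in for "words from a large book"
after = proc.memory_info().rss
print(f'the set grew the process by roughly {after - before} bytes')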


Answer 11

You can make use of sys.getsizeof() as shown below to determine the size of an object:

import sys

str1 = "one"
int_element = 5
print("Memory size of '" + str1 + "' = " + str(sys.getsizeof(str1)) + " bytes")
print("Memory size of '" + str(int_element) + "' = " + str(sys.getsizeof(int_element)) + " bytes")

Answer 12

I use this trick… It may not be accurate for small objects, but I think it's much more accurate for a complex object (like a pygame surface) than sys.getsizeof().

import pygame as pg
import os
import psutil
import time


process = psutil.Process(os.getpid())
pg.init()    
vocab = ['hello', 'me', 'you', 'she', 'he', 'they', 'we',
         'should', 'why?', 'necessarily', 'do', 'that']

font = pg.font.SysFont("monospace", 100, True)

dct = {}

newMem = process.memory_info().rss  # don't mind this line
Str = f'store ' + f'Nothing \tsurface use about '.expandtabs(15) + \
      f'0\t bytes'.expandtabs(9)  # don't mind this assignment too

usedMem = process.memory_info().rss

for word in vocab:
    dct[word] = font.render(word, True, pg.Color("#000000"))

    time.sleep(0.1)  # wait a moment

    # get total used memory of this script:
    newMem = process.memory_info().rss
    Str = f'store ' + f'{word}\tsurface use about '.expandtabs(15) + \
          f'{newMem - usedMem}\t bytes'.expandtabs(9)

    print(Str)
    usedMem = newMem

On my Windows 10 machine, with Python 3.7.3, the output is:

store hello          surface use about 225280    bytes
store me             surface use about 61440     bytes
store you            surface use about 94208     bytes
store she            surface use about 81920     bytes
store he             surface use about 53248     bytes
store they           surface use about 114688    bytes
store we             surface use about 57344     bytes
store should         surface use about 172032    bytes
store why?           surface use about 110592    bytes
store necessarily    surface use about 311296    bytes
store do             surface use about 57344     bytes
store that           surface use about 110592    bytes
