Tag Archives: memory

Memory size of Python data structures

Question: Memory size of Python data structures

Is there a reference for the memory size of Python data structures on 32- and 64-bit platforms?

If not, it would be nice to have this on SO. The more exhaustive the better! So how many bytes are used by the following Python structures (depending on the len and the content type when relevant)?

  • int
  • float
  • reference
  • str
  • unicode string
  • tuple
  • list
  • dict
  • set
  • array.array
  • numpy.array
  • deque
  • new-style classes object
  • old-style classes object
  • … and everything I am forgetting!

(For containers that keep only references to other objects, we obviously do not want to count the size of the items themselves, since they might be shared.)

Furthermore, is there a way to get the memory used by an object at runtime (recursively or not)?


Answer 0

The recommendation from an earlier question on this was to use sys.getsizeof(), quoting:

>>> import sys
>>> x = 2
>>> sys.getsizeof(x)
14
>>> sys.getsizeof(sys.getsizeof)
32
>>> sys.getsizeof('this')
38
>>> sys.getsizeof('this also')
48

You could take this approach:

>>> import sys
>>> import decimal
>>> 
>>> d = {
...     "int": 0,
...     "float": 0.0,
...     "dict": dict(),
...     "set": set(),
...     "tuple": tuple(),
...     "list": list(),
...     "str": "a",
...     "unicode": u"a",
...     "decimal": decimal.Decimal(0),
...     "object": object(),
... }
>>> for k, v in sorted(d.iteritems()):
...     print k, sys.getsizeof(v)
...
decimal 40
dict 140
float 16
int 12
list 36
object 8
set 116
str 25
tuple 28
unicode 28

2012-09-30

python 2.7 (linux, 32-bit):

decimal 36
dict 136
float 16
int 12
list 32
object 8
set 112
str 22
tuple 24
unicode 32

python 3.3 (linux, 32-bit)

decimal 52
dict 144
float 16
int 14
list 32
object 8
set 112
str 26
tuple 24
unicode 26

2016-08-01

OSX, Python 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin

decimal 80
dict 280
float 24
int 24
list 72
object 16
set 232
str 38
tuple 56
unicode 52
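
For reference, the snippet above can be rendered for Python 3 as in the sketch below (a sketch only: exact byte counts vary by platform and Python version, and the unicode entry is replaced by bytes since str is already Unicode in Python 3):

import sys
import decimal

d = {
    "int": 0,
    "float": 0.0,
    "dict": dict(),
    "set": set(),
    "tuple": tuple(),
    "list": list(),
    "str": "a",
    "bytes": b"a",
    "decimal": decimal.Decimal(0),
    "object": object(),
}

# dict.iteritems() and the print statement no longer exist in Python 3
for k, v in sorted(d.items()):
    print(k, sys.getsizeof(v))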

Answer 1

I’ve been happily using pympler for such tasks. It’s compatible with many versions of Python — the asizeof module in particular goes back to 2.2!

For example, using hughdbrown’s example but with from pympler import asizeof at the start and print asizeof.asizeof(v) at the end, I see (system Python 2.5 on MacOSX 10.5):

$ python pymp.py 
set 120
unicode 32
tuple 32
int 16
decimal 152
float 16
list 40
object 0
dict 144
str 32

Clearly there is some approximation here, but I’ve found it very useful for footprint analysis and tuning.
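
For concreteness, the modified script (pymp.py, matching the command shown above) might look like the following sketch, assuming pympler is installed:

# pymp.py -- hughdbrown's dictionary of sample objects, measured with
# pympler's asizeof instead of sys.getsizeof
import decimal
from pympler import asizeof

d = {
    "int": 0,
    "float": 0.0,
    "dict": dict(),
    "set": set(),
    "tuple": tuple(),
    "list": list(),
    "str": "a",
    "unicode": u"a",
    "decimal": decimal.Decimal(0),
    "object": object(),
}
for k, v in d.items():
    print(k, asizeof.asizeof(v))   # prints a tuple on Python 2, two values on Python 3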


Answer 2

These answers all collect shallow size information. I suspect that visitors to this question will end up here looking to answer the question, “How big is this complex object in memory?”

There’s a great answer here: https://goshippo.com/blog/measure-real-size-any-python-object/

The punchline:

import sys

def get_size(obj, seen=None):
    """Recursively finds size of objects"""
    size = sys.getsizeof(obj)
    if seen is None:
        seen = set()
    obj_id = id(obj)
    if obj_id in seen:
        return 0
    # Important mark as seen *before* entering recursion to gracefully handle
    # self-referential objects
    seen.add(obj_id)
    if isinstance(obj, dict):
        size += sum([get_size(v, seen) for v in obj.values()])
        size += sum([get_size(k, seen) for k in obj.keys()])
    elif hasattr(obj, '__dict__'):
        size += get_size(obj.__dict__, seen)
    elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)):
        size += sum([get_size(i, seen) for i in obj])
    return size

Used like so:

In [1]: get_size(1)
Out[1]: 24

In [2]: get_size([1])
Out[2]: 104

In [3]: get_size([[1]])
Out[3]: 184

If you want to understand Python’s memory model more deeply, there’s a great article here that has a similar “total size” snippet of code as part of a longer explanation: https://code.tutsplus.com/tutorials/understand-how-much-memory-your-python-objects-use--cms-25609


Answer 3

Try the memory_profiler package:

Line #    Mem usage  Increment   Line Contents
==============================================
     3                           @profile
     4      5.97 MB    0.00 MB   def my_func():
     5     13.61 MB    7.64 MB       a = [1] * (10 ** 6)
     6    166.20 MB  152.59 MB       b = [2] * (2 * 10 ** 7)
     7     13.61 MB -152.59 MB       del b
     8     13.61 MB    0.00 MB       return a
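
The table above is produced by decorating a function with @profile and running it under the profiler; a minimal sketch (assuming the memory_profiler package is installed) looks like this:

# example.py -- run directly, or with: python -m memory_profiler example.py
from memory_profiler import profile

@profile
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a

if __name__ == '__main__':
    my_func()   # the line-by-line memory report is printed when the function returns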

Answer 4

You can also use the guppy module.

>>> from guppy import hpy; hp=hpy()
>>> hp.heap()
Partition of a set of 25853 objects. Total size = 3320992 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  11731  45   929072  28    929072  28 str
     1   5832  23   469760  14   1398832  42 tuple
     2    324   1   277728   8   1676560  50 dict (no owner)
     3     70   0   216976   7   1893536  57 dict of module
     4    199   1   210856   6   2104392  63 dict of type
     5   1627   6   208256   6   2312648  70 types.CodeType
     6   1592   6   191040   6   2503688  75 function
     7    199   1   177008   5   2680696  81 type
     8    124   0   135328   4   2816024  85 dict of class
     9   1045   4    83600   3   2899624  87 __builtin__.wrapper_descriptor
<90 more rows. Type e.g. '_.more' to view.>

And:

>>> hp.iso(1, [1], "1", (1,), {1:1}, None)
Partition of a set of 6 objects. Total size = 560 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0      1  17      280  50       280  50 dict (no owner)
     1      1  17      136  24       416  74 list
     2      1  17       64  11       480  86 tuple
     3      1  17       40   7       520  93 str
     4      1  17       24   4       544  97 int
     5      1  17       16   3       560 100 types.NoneType

Answer 5

One can also make use of the tracemalloc module from the Python standard library. It seems to work well for objects whose class is implemented in C (unlike Pympler, for instance).
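
A minimal sketch of using tracemalloc to see where memory is being allocated (standard library, Python 3.4+):

import tracemalloc

tracemalloc.start()

data = [str(i) * 10 for i in range(100000)]   # allocate something sizeable

current, peak = tracemalloc.get_traced_memory()
print('current: %d bytes, peak: %d bytes' % (current, peak))

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:3]:   # top 3 allocation sites
    print(stat)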


Answer 6

When you use the dir([object]) built-in function, you can see that the object has a __sizeof__ method, which you can call directly:

>>> a = -1
>>> a.__sizeof__()
24

Python subprocess.Popen "OSError: [Errno 12] Cannot allocate memory"

Question: Python subprocess.Popen "OSError: [Errno 12] Cannot allocate memory"

Note: This question was originally asked here but the bounty time expired even though an acceptable answer was not actually found. I am re-asking this question including all details provided in the original question.

A python script is running a set of class functions every 60 seconds using the sched module:

# sc is a sched.scheduler instance
sc.enter(60, 1, self.doChecks, (sc, False))

The script is running as a daemonised process using the code here.

A number of class methods that are called as part of doChecks use the subprocess module to call system functions in order to get system statistics:

ps = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE).communicate()[0]

This runs fine for a period of time before the entire script crashing with the following error:

File "/home/admin/sd-agent/checks.py", line 436, in getProcesses
File "/usr/lib/python2.4/subprocess.py", line 533, in __init__
File "/usr/lib/python2.4/subprocess.py", line 835, in _get_handles
OSError: [Errno 12] Cannot allocate memory

The output of free -m on the server once the script has crashed is:

$ free -m
                  total       used       free     shared     buffers    cached
Mem:                894        345        549          0          0          0
-/+ buffers/cache:  345        549
Swap:                 0          0          0

The server is running CentOS 5.3. I am unable to reproduce on my own CentOS boxes nor with any other user reporting the same problem.

I have tried a number of things to debug this as suggested in the original question:

  1. Logging the output of free -m before and after the Popen call. There is no significant change in memory usage i.e. memory is not gradually being used up as the script runs.

  2. I added close_fds=True to the Popen call but this made no difference – the script still crashed with the same error. Suggested here and here.

  3. I checked the rlimits which showed (-1, -1) on both RLIMIT_DATA and RLIMIT_AS as suggested here.

  4. An article suggested the having no swap space might be the cause but swap is actually available on demand (according to the web host) and this was also suggested as a bogus cause here.

  5. The processes are being closed because that is the behaviour of using .communicate() as backed up by the Python source code and comments here.

The entire checks file can be found on GitHub here, with the getProcesses function defined from line 442. It is called by doChecks() starting at line 520.

The script was run with strace with the following output before the crash:

recv(4, "Total Accesses: 516662\nTotal kBy"..., 234, 0) = 234
gettimeofday({1250893252, 887805}, NULL) = 0
write(3, "2009-08-21 17:20:52,887 - checks"..., 91) = 91
gettimeofday({1250893252, 888362}, NULL) = 0
write(3, "2009-08-21 17:20:52,888 - checks"..., 74) = 74
gettimeofday({1250893252, 888897}, NULL) = 0
write(3, "2009-08-21 17:20:52,888 - checks"..., 67) = 67
gettimeofday({1250893252, 889184}, NULL) = 0
write(3, "2009-08-21 17:20:52,889 - checks"..., 81) = 81
close(4)                                = 0
gettimeofday({1250893252, 889591}, NULL) = 0
write(3, "2009-08-21 17:20:52,889 - checks"..., 63) = 63
pipe([4, 5])                            = 0
pipe([6, 7])                            = 0
fcntl64(7, F_GETFD)                     = 0
fcntl64(7, F_SETFD, FD_CLOEXEC)         = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7f12708) = -1 ENOMEM (Cannot allocate memory)
write(2, "Traceback (most recent call last"..., 35) = 35
open("/usr/bin/sd-agent/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/bin/sd-agent/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python24.zip/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/plat-linux2/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python2.4/lib-tk/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/lib-dynload/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/site-packages/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
write(2, "  File \"/usr/bin/sd-agent/agent."..., 52) = 52
open("/home/admin/sd-agent/daemon.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/bin/sd-agent/daemon.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python24.zip/daemon.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/daemon.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/plat-linux2/daemon.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python2.4/lib-tk/daemon.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/lib-dynload/daemon.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/site-packages/daemon.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
write(2, "  File \"/home/admin/sd-agent/dae"..., 60) = 60
open("/usr/bin/sd-agent/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/bin/sd-agent/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python24.zip/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/plat-linux2/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python2.4/lib-tk/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/lib-dynload/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/site-packages/agent.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
write(2, "  File \"/usr/bin/sd-agent/agent."..., 54) = 54
open("/usr/lib/python2.4/sched.py", O_RDONLY|O_LARGEFILE) = 8
write(2, "  File \"/usr/lib/python2.4/sched"..., 55) = 55
fstat64(8, {st_mode=S_IFREG|0644, st_size=4054, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7d28000
read(8, "\"\"\"A generally useful event sche"..., 4096) = 4054
write(2, "    ", 4)                     = 4
write(2, "void = action(*argument)\n", 25) = 25
close(8)                                = 0
munmap(0xb7d28000, 4096)                = 0
open("/usr/bin/sd-agent/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/bin/sd-agent/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python24.zip/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/plat-linux2/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python2.4/lib-tk/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/lib-dynload/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/site-packages/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
write(2, "  File \"/usr/bin/sd-agent/checks"..., 60) = 60
open("/usr/bin/sd-agent/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/bin/sd-agent/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python24.zip/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/plat-linux2/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory)
open("/usr/lib/python2.4/lib-tk/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/lib-dynload/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/lib/python2.4/site-packages/checks.py", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
write(2, "  File \"/usr/bin/sd-agent/checks"..., 64) = 64
open("/usr/lib/python2.4/subprocess.py", O_RDONLY|O_LARGEFILE) = 8
write(2, "  File \"/usr/lib/python2.4/subpr"..., 65) = 65
fstat64(8, {st_mode=S_IFREG|0644, st_size=39931, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7d28000
read(8, "# subprocess - Subprocesses with"..., 4096) = 4096
read(8, "lso, the newlines attribute of t"..., 4096) = 4096
read(8, "code < 0:\n        print >>sys.st"..., 4096) = 4096
read(8, "alse does not exist on 2.2.0\ntry"..., 4096) = 4096
read(8, " p2cread\n        # c2pread    <-"..., 4096) = 4096
write(2, "    ", 4)                     = 4
write(2, "errread, errwrite)\n", 19)    = 19
close(8)                                = 0
munmap(0xb7d28000, 4096)                = 0
open("/usr/lib/python2.4/subprocess.py", O_RDONLY|O_LARGEFILE) = 8
write(2, "  File \"/usr/lib/python2.4/subpr"..., 71) = 71
fstat64(8, {st_mode=S_IFREG|0644, st_size=39931, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7d28000
read(8, "# subprocess - Subprocesses with"..., 4096) = 4096
read(8, "lso, the newlines attribute of t"..., 4096) = 4096
read(8, "code < 0:\n        print >>sys.st"..., 4096) = 4096
read(8, "alse does not exist on 2.2.0\ntry"..., 4096) = 4096
read(8, " p2cread\n        # c2pread    <-"..., 4096) = 4096
read(8, "table(self, handle):\n           "..., 4096) = 4096
read(8, "rrno using _sys_errlist (or siml"..., 4096) = 4096
read(8, " p2cwrite = None, None\n         "..., 4096) = 4096
write(2, "    ", 4)                     = 4
write(2, "self.pid = os.fork()\n", 21)  = 21
close(8)                                = 0
munmap(0xb7d28000, 4096)                = 0
write(2, "OSError", 7)                  = 7
write(2, ": ", 2)                       = 2
write(2, "[Errno 12] Cannot allocate memor"..., 33) = 33
write(2, "\n", 1)                       = 1
unlink("/var/run/sd-agent.pid")         = 0
close(3)                                = 0
munmap(0xb7e0d000, 4096)                = 0
rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x589978}, {0xb89a60, [], SA_RESTORER, 0x589978}, 8) = 0
brk(0xa022000)                          = 0xa022000
exit_group(1)                           = ?

Answer 0

As a general rule (i.e. in vanilla kernels), fork/clone failures with ENOMEM occur specifically because of either an honest to God out-of-memory condition (dup_mm, dup_task_struct, alloc_pid, mpol_dup, mm_init etc. croak), or because security_vm_enough_memory_mm failed you while enforcing the overcommit policy.

Start by checking the vmsize of the process that failed to fork, at the time of the fork attempt, and then compare to the amount of free memory (physical and swap) as it relates to the overcommit policy (plug the numbers in.)

In your particular case, note that Virtuozzo has additional checks in overcommit enforcement. Moreover, I’m not sure how much control you truly have, from within your container, over swap and overcommit configuration (in order to influence the outcome of the enforcement.)

Now, in order to actually move forward I’d say you’re left with two options:

  • switch to a larger instance, or
  • put some coding effort into more effectively controlling your script’s memory footprint

NOTE that the coding effort may be all for naught if it turns out that it’s not you, but some other guy collocated in a different instance on the same server as you running amock.

Memory-wise, we already know that subprocess.Popen uses fork/clone under the hood, meaning that every time you call it you’re requesting once more as much memory as Python is already eating up, i.e. in the hundreds of additional MB, all in order to then exec a puny 10kB executable such as free or ps. In the case of an unfavourable overcommit policy, you’ll soon see ENOMEM.

Alternatives to fork that do not have this parent-page-tables copy problem are vfork and posix_spawn. But if you do not feel like rewriting chunks of subprocess.Popen in terms of vfork/posix_spawn, consider using subprocess.Popen only once, at the beginning of your script (when Python’s memory footprint is minimal), to spawn a shell script that then runs free/ps/sleep and whatever else in a loop parallel to your script; poll the script’s output or read it synchronously, possibly from a separate thread if you have other stuff to take care of asynchronously — do your data crunching in Python but leave the forking to the subordinate process.

HOWEVER, in your particular case you can skip invoking ps and free altogether; that information is readily available to you in Python directly from procfs, whether you choose to access it yourself or via existing libraries and/or packages. If ps and free were the only utilities you were running, then you can do away with subprocess.Popen completely.
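
As an illustration of that last point, here is a minimal sketch of pulling similar numbers straight out of procfs without forking anything (Linux only; this is not the poster's code, just an assumed shape of such a helper):

import os

def meminfo():
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, _, rest = line.partition(':')
            info[key] = int(rest.split()[0])
    return info

def process_list():
    """Return (pid, cmdline) pairs by walking /proc, roughly what `ps` reports."""
    procs = []
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/cmdline' % pid, 'rb') as f:
                cmd = f.read().replace(b'\0', b' ').strip()
            procs.append((int(pid), cmd))
        except (IOError, OSError):   # the process may have exited meanwhile
            pass
    return procs

print(meminfo()['MemFree'], 'kB free')
print(len(process_list()), 'processes')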

Finally, whatever you do as far as subprocess.Popen is concerned, if your script leaks memory you will still hit the wall eventually. Keep an eye on it, and check for memory leaks.


Answer 1

Looking at the output of free -m it seems to me that you actually do not have swap memory available. I am not sure if in Linux the swap always will be available automatically on demand, but I was having the same problem and none of the answers here really helped me. Adding some swap memory however, fixed the problem in my case so since this might help other people facing the same problem, I post my answer on how to add a 1GB swap (on Ubuntu 12.04 but it should work similarly for other distributions.)

You can first check if there is any swap memory enabled.

$sudo swapon -s

if it is empty, it means you don’t have any swap enabled. To add a 1GB swap:

$sudo dd if=/dev/zero of=/swapfile bs=1024 count=1024k
$sudo mkswap /swapfile
$sudo swapon /swapfile

Add the following line to the fstab to make the swap permanent.

$sudo vim /etc/fstab

     /swapfile       none    swap    sw      0       0 

Source and more information can be found here.


Answer 2

swap may not be the red herring previously suggested. How big is the python process in question just before the ENOMEM?

Under kernel 2.6, /proc/sys/vm/swappiness controls how aggressively the kernel will turn to swap, and the overcommit* files control how much and how precisely the kernel may apportion memory with a wink and a nod. Like your Facebook relationship status, it’s complicated.

…but swap is actually available on demand (according to the web host)…

but not according to the output of your free(1) command, which shows no swap space recognized by your server instance. Now, your web host may certainly know much more than I about this topic, but virtual RHEL/CentOS systems I’ve used have reported swap available to the guest OS.

Adapting Red Hat KB Article 15252:

A Red Hat Enterprise Linux 5 system will run just fine with no swap space at all as long as the sum of anonymous memory and system V shared memory is less than about 3/4 the amount of RAM. …. Systems with 4GB of ram or less [are recommended to have] a minimum of 2GB of swap space.

Compare your /proc/sys/vm settings to a plain CentOS 5.3 installation. Add a swap file. Ratchet down swappiness and see if you live any longer.


Answer 3

For an easy fix, you could

echo 1 > /proc/sys/vm/overcommit_memory

if your’re sure that your system has enough memory. See Linux over commit heuristic.


Answer 4

I continue to suspect that your customer/user has some kernel module or driver loaded which is interfering with the clone() system call (perhaps some obscure security enhancement, something like LIDS but more obscure?) or is somehow filling up some of the kernel data structures that are necessary for fork()/clone() to operate (process table, page tables, file descriptor tables, etc).

Here’s the relevant portion of the fork(2) man page:

ERRORS
       EAGAIN fork() cannot allocate sufficient memory to copy the parent's page tables and allocate a task  structure  for  the
              child.

       EAGAIN It  was not possible to create a new process because the caller's RLIMIT_NPROC resource limit was encountered.  To
              exceed this limit, the process must have either the CAP_SYS_ADMIN or the CAP_SYS_RESOURCE capability.

       ENOMEM fork() failed to allocate the necessary kernel structures because memory is tight.

I suggest having the user try this after booting into a stock, generic kernel and with only a minimal set of modules and drivers loaded (minimum necessary to run your application/script). From there, assuming it works in that configuration, they can perform a binary search between that and the configuration which exhibits the issue. This is standard sysadmin troubleshooting 101.

The relevant line in your strace is:

clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7f12708) = -1 ENOMEM (Cannot allocate memory)

… I know others have talked about swap and memory availability (and I would recommend that you set up at least a small swap partition, ironically even if it’s on a RAM disk … the code paths through the Linux kernel when it has even a tiny bit of swap available have been exercised far more extensively than those (exception handling paths) in which there is zero swap available).

However I suspect that this is still a red herring.

The fact that free is reporting 0 (ZERO) memory in use by the cache and buffers is very disturbing. I suspect that the free output … and possibly your application issue here, are caused by some proprietary kernel module which is interfering with the memory allocation in some way.

According to the man pages for fork()/clone() the fork() system call should return EAGAIN if your call would cause a resource limit violation (RLIMIT_NPROC) … however, it doesn’t say if EAGAIN is to be returned by other RLIMIT* violations. In any event if your target/host has some sort of weird Vormetric or other security settings (or even if your process is running under some weird SELinux policy) then it might be causing this -ENOMEM failure.

It’s pretty unlikely to be a normal run-of-the-mill Linux/UNIX issue. You’ve got something non-standard going on there.


Answer 5

Have you tried using:

(status,output) = commands.getstatusoutput("ps aux")

I thought this had fixed the exact same problem for me. But then my process ended up getting killed instead of failing to spawn, which is even worse..

After some testing I found that this only occurred on older versions of python: it happens with 2.6.5 but not with 2.7.2

My search had led me here: python-close_fds-issue, but unsetting close_fds had not solved the issue. It is still well worth a read.

I found that python was leaking file descriptors by just keeping an eye on it:

watch "ls /proc/$PYTHONPID/fd | wc -l"

Like you, I do want to capture the command’s output, and I do want to avoid OOM errors… but it looks like the only way is for people to use a less buggy version of Python. Not ideal…


Answer 6

munmap(0xb7d28000, 4096)                = 0
write(2, "OSError", 7)                  = 7

I’ve seen sloppy code that looks like this:

serrno = errno;
some_syscall(...);
if (serrno != errno)
    /* sound alarm: CATASTROPHIC ERROR !!! */

You should check to see if this is what is happening in the Python code. Errno is only valid if the preceding system call failed.

Edited to add:

You don’t say how long this process lives. Possible consumers of memory

  • forked processes
  • unused data structures
  • shared libraries
  • memory mapped files

Answer 7

Maybe you can simply

$ sudo bash -c "echo vm.overcommit_memory=1 >> /etc/sysctl.conf"
$ sudo sysctl -p

It works for my case.

Reference: https://github.com/openai/gym/issues/110#issuecomment-220672405


How do I release memory used by a pandas DataFrame?

Question: How do I release memory used by a pandas DataFrame?

I have a really large csv file that I opened in pandas as follows….

import pandas
df = pandas.read_csv('large_txt_file.txt')

Once I do this my memory usage increases by 2GB, which is expected because this file contains millions of rows. My problem comes when I need to release this memory. I ran….

del df

However, my memory usage did not drop. Is this the wrong approach to release memory used by a pandas data frame? If it is, what is the proper way?


Answer 0

Reducing memory usage in Python is difficult, because Python does not actually release memory back to the operating system. If you delete objects, then the memory is available to new Python objects, but not free()‘d back to the system (see this question).

If you stick to numeric numpy arrays, those are freed, but boxed objects are not.

>>> import os, psutil, numpy as np
>>> def usage():
...     process = psutil.Process(os.getpid())
...     return process.get_memory_info()[0] / float(2 ** 20)
... 
>>> usage() # initial memory usage
27.5 

>>> arr = np.arange(10 ** 8) # create a large array without boxing
>>> usage()
790.46875
>>> del arr
>>> usage()
27.52734375 # numpy just free()'d the array

>>> arr = np.arange(10 ** 8, dtype='O') # create lots of objects
>>> usage()
3135.109375
>>> del arr
>>> usage()
2372.16796875  # numpy frees the array, but python keeps the heap big

Reducing the Number of Dataframes

Python keeps our memory at its high watermark, but we can reduce the total number of dataframes we create. When modifying your dataframe, prefer inplace=True, so you don’t create copies.

Another common gotcha is holding on to copies of previously created dataframes in ipython:

In [1]: import pandas as pd

In [2]: df = pd.DataFrame({'foo': [1,2,3,4]})

In [3]: df + 1
Out[3]: 
   foo
0    2
1    3
2    4
3    5

In [4]: df + 2
Out[4]: 
   foo
0    3
1    4
2    5
3    6

In [5]: Out # Still has all our temporary DataFrame objects!
Out[5]: 
{3:    foo
 0    2
 1    3
 2    4
 3    5, 4:    foo
 0    3
 1    4
 2    5
 3    6}

You can fix this by typing %reset Out to clear your history. Alternatively, you can adjust how much history ipython keeps with ipython --cache-size=5 (default is 1000).

Reducing Dataframe Size

Wherever possible, avoid using object dtypes.

>>> df.dtypes
foo    float64 # 8 bytes per value
bar      int64 # 8 bytes per value
baz     object # at least 48 bytes per value, often more

Values with an object dtype are boxed, which means the numpy array just contains a pointer and you have a full Python object on the heap for every value in your dataframe. This includes strings.

Whilst numpy supports fixed-size strings in arrays, pandas does not (it’s caused user confusion). This can make a significant difference:

>>> import numpy as np
>>> arr = np.array(['foo', 'bar', 'baz'])
>>> arr.dtype
dtype('S3')
>>> arr.nbytes
9

>>> import sys; import pandas as pd
>>> s = pd.Series(['foo', 'bar', 'baz'])
>>> s.dtype
dtype('O')
>>> sum(sys.getsizeof(x) for x in s)
120

You may want to avoid using string columns, or find a way of representing string data as numbers.
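
One possible way to represent string data as numbers is pd.factorize; this is just an illustrative sketch (keep the labels array if the original strings are needed later):

import pandas as pd

s = pd.Series(['foo', 'bar', 'baz', 'foo', 'bar'])
codes, labels = pd.factorize(s)    # integer codes plus the small array of unique labels
s_num = pd.Series(codes)           # 8 bytes per row instead of a boxed Python string
print(s_num.dtype, list(labels))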

If you have a dataframe that contains many repeated values (NaN is very common), then you can use a sparse data structure to reduce memory usage:

>>> df1.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 39681584 entries, 0 to 39681583
Data columns (total 1 columns):
foo    float64
dtypes: float64(1)
memory usage: 605.5 MB

>>> df1.shape
(39681584, 1)

>>> df1.foo.isnull().sum() * 100. / len(df1)
20.628483479893344 # so 20% of values are NaN

>>> df1.to_sparse().info()
<class 'pandas.sparse.frame.SparseDataFrame'>
Int64Index: 39681584 entries, 0 to 39681583
Data columns (total 1 columns):
foo    float64
dtypes: float64(1)
memory usage: 543.0 MB

Viewing Memory Usage

You can view the memory usage (docs):

>>> df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 39681584 entries, 0 to 39681583
Data columns (total 14 columns):
...
dtypes: datetime64[ns](1), float64(8), int64(1), object(4)
memory usage: 4.4+ GB

As of pandas 0.17.1, you can also do df.info(memory_usage='deep') to see memory usage including objects.


Answer 1

As noted in the comments, there are some things to try: gc.collect (@EdChum) may clear stuff, for example. At least from my experience, these things sometimes work and often don’t.

There is one thing that always works, however, because it is done at the OS, not language, level.

Suppose you have a function that creates an intermediate huge DataFrame, and returns a smaller result (which might also be a DataFrame):

def huge_intermediate_calc(something):
    ...
    huge_df = pd.DataFrame(...)
    ...
    return some_aggregate

Then if you do something like

import multiprocessing

result = multiprocessing.Pool(1).map(huge_intermediate_calc, [something_])[0]

Then the function is executed in a different process. When that process completes, the OS reclaims all the resources it used. There’s really nothing Python, pandas, or the garbage collector could do to stop that.


Answer 2

This solves the problem of releasing the memory for me!!!

import gc
import pandas as pd

del [[df_1,df_2]]
gc.collect()
df_1=pd.DataFrame()
df_2=pd.DataFrame()

The dataframes are explicitly set to empty in the statements above.

First, the references to the dataframes are deleted (del [[df_1, df_2]]), so the objects are no longer reachable from Python; the garbage collector (gc.collect()) then reclaims them; finally, the names are explicitly rebound to empty DataFrames.

How the garbage collector works is explained in more detail at https://stackify.com/python-garbage-collection/


Answer 3

del df will not release the object if there are any other references to the df at the time of deletion. So you need to delete all references to it for del df to release the memory.

So all the instances bound to df should be deleted to trigger garbage collection.

Use objgraph to check which references are holding onto the objects.
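
A minimal sketch of doing that with objgraph (assumes the objgraph package is installed; the variable names are made up for the example):

import gc
import objgraph
import pandas as pd

df = pd.DataFrame({'foo': [1, 2, 3, 4]})
another_ref = df            # a second reference keeps the object alive

del df
gc.collect()

# The DataFrame is still reachable through another_ref, so it still shows up:
print(objgraph.by_type('DataFrame'))

# Optionally render what refers to it (requires graphviz):
# objgraph.show_backrefs(objgraph.by_type('DataFrame'), filename='df_refs.png')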


Answer 4

It seems there is an issue with glibc that affects the memory allocation in Pandas: https://github.com/pandas-dev/pandas/issues/2659

The monkey patch detailed on this issue has resolved the problem for me:

# monkeypatches.py

# Solving memory leak problem in pandas
# https://github.com/pandas-dev/pandas/issues/2659#issuecomment-12021083
import sys
import pandas as pd
from ctypes import cdll, CDLL
try:
    cdll.LoadLibrary("libc.so.6")
    libc = CDLL("libc.so.6")
    libc.malloc_trim(0)
except (OSError, AttributeError):
    libc = None

__old_del = getattr(pd.DataFrame, '__del__', None)

def __new_del(self):
    if __old_del:
        __old_del(self)
    libc.malloc_trim(0)

if libc:
    print('Applying monkeypatch for pd.DataFrame.__del__', file=sys.stderr)
    pd.DataFrame.__del__ = __new_del
else:
    print('Skipping monkeypatch for pd.DataFrame.__del__: libc or malloc_trim() not found', file=sys.stderr)

What is the difference between contiguous and non-contiguous arrays?

Question: What is the difference between contiguous and non-contiguous arrays?

In the numpy manual about the reshape() function, it says

>>> a = np.zeros((10, 2))
# A transpose make the array non-contiguous
>>> b = a.T
# Taking a view makes it possible to modify the shape without modifying the
# initial object.
>>> c = b.view()
>>> c.shape = (20)
AttributeError: incompatible shape for a non-contiguous array

My questions are:

  1. What are contiguous and non-contiguous arrays? Is it similar to a contiguous memory block in C, as in the question "What is a contiguous memory block?"
  2. Is there any performance difference between these two? When should we use one or the other?
  3. Why does transpose make the array non-contiguous?
  4. Why does c.shape = (20) throw an error incompatible shape for a non-contiguous array?

Thanks for your answer!


回答 0

连续数组只是存储在不间断内存块中的数组:要访问数组中的下一个值,我们只需移至下一个内存地址。

考虑2D数组arr = np.arange(12).reshape(3,4)。看起来像这样:

(原回答此处为示意图,略)

在计算机的内存中,arr的值是这样存储的:

(原回答此处为示意图,略)

这意味着arr是C连续数组,因为各行被存储为连续的内存块:下一个内存地址保存的是该行中的下一个值。如果要向下移动一列,我们只需要跳过三个块(例如,从0跳到4意味着我们跳过了1、2和3)。

用arr.T转置数组意味着C连续性丢失,因为相邻的行条目不再位于相邻的内存地址中。但是,arr.T是Fortran连续的,因为各列位于连续的内存块中:

(原回答此处为示意图,略)


从性能角度来看,访问彼此相邻的内存地址通常比访问更"分散"的地址更快(从RAM中获取一个值时,CPU可能会顺带取出并缓存许多相邻地址)。这意味着对连续数组的操作通常会更快。

由于arr是C连续的内存布局,按行操作通常比按列操作快。例如,您通常会发现

np.sum(arr, axis=1) # sum the rows

快于:

np.sum(arr, axis=0) # sum the columns

同样,对于Fortran连续数组,对列的操作将稍快一些。


最后,为什么不能通过分配新形状来展平Fortran连续数组?

>>> arr2 = arr.T
>>> arr2.shape = 12
AttributeError: incompatible shape for a non-contiguous array

为了做到这一点,NumPy必须像这样把arr.T的各行拼在一起:

(原回答此处为示意图,略)

(直接设置shape属性时假定C顺序,即NumPy会尝试按行执行该操作。)

这是不可能的。对于任何一个轴,NumPy都必须有一个恒定的步幅长度(要移动的字节数)才能到达数组的下一个元素。以这种方式展平arr.T将需要在内存中来回跳跃才能取出数组的连续值。

如果我们改为写arr2.reshape(12),NumPy会把arr2的值复制到新的内存块中(因为对于该形状,它无法返回指向原始数据的视图)。

A contiguous array is just an array stored in an unbroken block of memory: to access the next value in the array, we just move to the next memory address.

Consider the 2D array arr = np.arange(12).reshape(3,4). It looks like this:

(diagram from the original answer omitted)

In the computer’s memory, the values of arr are stored like this:

(diagram from the original answer omitted)

This means arr is a C contiguous array because the rows are stored as contiguous blocks of memory: the next memory address holds the next value in that row. If we want to move down a column, we just need to jump over three blocks (e.g. to jump from 0 to 4 means we skip over 1, 2 and 3).

Transposing the array with arr.T means that C contiguity is lost because adjacent row entries are no longer in adjacent memory addresses. However, arr.T is Fortran contiguous since the columns are in contiguous blocks of memory:

(diagram from the original answer omitted)


Performance-wise, accessing memory addresses which are next to each other is very often faster than accessing addresses which are more “spread out” (fetching a value from RAM could entail a number of neighbouring addresses being fetched and cached for the CPU.) This means that operations over contiguous arrays will often be quicker.

As a consequence of C contiguous memory layout, row-wise operations are usually faster than column-wise operations. For example, you’ll typically find that

np.sum(arr, axis=1) # sum the rows

is slightly faster than:

np.sum(arr, axis=0) # sum the columns

Similarly, operations on columns will be slightly faster for Fortran contiguous arrays.


Finally, why can’t we flatten the Fortran contiguous array by assigning a new shape?

>>> arr2 = arr.T
>>> arr2.shape = 12
AttributeError: incompatible shape for a non-contiguous array

In order for this to be possible NumPy would have to put the rows of arr.T together like this:

(diagram from the original answer omitted)

(Setting the shape attribute directly assumes C order – i.e. NumPy tries to perform the operation row-wise.)

This is impossible to do. For any axis, NumPy needs to have a constant stride length (the number of bytes to move) to get to the next element of the array. Flattening arr.T in this way would require skipping forwards and backwards in memory to retrieve consecutive values of the array.

If we wrote arr2.reshape(12) instead, NumPy would copy the values of arr2 into a new block of memory (since it can’t return a view on to the original data for this shape).
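
A quick way to check this yourself is a small sketch like the following (np.shares_memory is used to show whether a copy was made):

import numpy as np

arr = np.arange(12).reshape(3, 4)
print(arr.flags['C_CONTIGUOUS'], arr.T.flags['F_CONTIGUOUS'])   # True True

flat = arr.T.reshape(12)                  # works, but has to copy the data
print(np.shares_memory(arr, flat))        # False -> a copy, not a view

flat_view = arr.reshape(12)               # C-contiguous input: no copy needed
print(np.shares_memory(arr, flat_view))   # True -> still a view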


回答 1

也许这个包含12个不同数组值的示例会有所帮助:

In [207]: x=np.arange(12).reshape(3,4).copy()

In [208]: x.flags
Out[208]: 
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  ...
In [209]: x.T.flags
Out[209]: 
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : False
  ...

C顺序的值按照它们生成时的顺序排列,转置后的则不是:

In [212]: x.reshape(12,)   # same as x.ravel()
Out[212]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])

In [213]: x.T.reshape(12,)
Out[213]: array([ 0,  4,  8,  1,  5,  9,  2,  6, 10,  3,  7, 11])

您可以同时获得两者的一维视图

In [214]: x1=x.T

In [217]: x.shape=(12,)

x的形状也是可以更改的。

In [220]: x1.shape=(12,)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-220-cf2b1a308253> in <module>()
----> 1 x1.shape=(12,)

AttributeError: incompatible shape for a non-contiguous array

但是转置后的数组的形状无法更改。数据在内存中仍然是0,1,2,3,4...的顺序,无法以一维数组的形式按0,4,8...访问。

但是x1的一个副本是可以更改的:

In [227]: x2=x1.copy()

In [228]: x2.flags
Out[228]: 
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  ...
In [229]: x2.shape=(12,)

查看strides(步幅)也许也有帮助。步幅是要到达下一个值必须前进多远(以字节为单位)。对于2d数组,会有2个步幅值:

In [233]: x=np.arange(12).reshape(3,4).copy()

In [234]: x.strides
Out[234]: (16, 4)

要到达下一行,需要前进16个字节;到下一列只需4个字节。

In [235]: x1.strides
Out[235]: (4, 16)

转置只是切换步幅的顺序。下一行只有4个字节,即下一个数字。

In [236]: x.shape=(12,)

In [237]: x.strides
Out[237]: (4,)

改变形状也会改变步幅-一次仅通过缓冲区4个字节。

In [238]: x2=x1.copy()

In [239]: x2.strides
Out[239]: (12, 4)

即使x2看起来像x1,它也有自己的数据缓冲区,其值以不同的顺序排列。现在,下一列是4字节,而下一行是12(3 * 4)。

In [240]: x2.shape=(12,)

In [241]: x2.strides
Out[241]: (4,)

和x一样,把形状改为一维会使步幅变为(4,)。

而对于x1,其数据按0,1,2,...的顺序排列,不存在能给出0,4,8...顺序的一维步幅。

__array_interface__ 是显示数组信息的另一种有用方法:

In [242]: x1.__array_interface__
Out[242]: 
{'strides': (4, 16),
 'typestr': '<i4',
 'shape': (4, 3),
 'version': 3,
 'data': (163336056, False),
 'descr': [('', '<i4')]}

x1的数据缓冲区地址与x相同,因为它与x共享数据。x2则有不同的缓冲区地址。

您也可以尝试给copy和reshape命令添加order='F'参数。

Maybe this example with 12 different array values will help:

In [207]: x=np.arange(12).reshape(3,4).copy()

In [208]: x.flags
Out[208]: 
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  ...
In [209]: x.T.flags
Out[209]: 
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : False
  ...

The C order values are in the order that they were generated in. The transposed ones are not

In [212]: x.reshape(12,)   # same as x.ravel()
Out[212]: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])

In [213]: x.T.reshape(12,)
Out[213]: array([ 0,  4,  8,  1,  5,  9,  2,  6, 10,  3,  7, 11])

You can get 1d views of both

In [214]: x1=x.T

In [217]: x.shape=(12,)

the shape of x can also be changed.

In [220]: x1.shape=(12,)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-220-cf2b1a308253> in <module>()
----> 1 x1.shape=(12,)

AttributeError: incompatible shape for a non-contiguous array

But the shape of the transpose cannot be changed. The data is still in the 0,1,2,3,4... order, which can’t be accessed as 0,4,8... in a 1d array.

But a copy of x1 can be changed:

In [227]: x2=x1.copy()

In [228]: x2.flags
Out[228]: 
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  ...
In [229]: x2.shape=(12,)

Looking at strides might also help. A stride is how far (in bytes) it has to step to get to the next value. For a 2d array, there will be 2 stride values:

In [233]: x=np.arange(12).reshape(3,4).copy()

In [234]: x.strides
Out[234]: (16, 4)

To get to the next row, step 16 bytes, next column only 4.

In [235]: x1.strides
Out[235]: (4, 16)

Transpose just switches the order of the strides. The next row is only 4 bytes- i.e. the next number.

In [236]: x.shape=(12,)

In [237]: x.strides
Out[237]: (4,)

Changing the shape also changes the strides – just step through the buffer 4 bytes at a time.

In [238]: x2=x1.copy()

In [239]: x2.strides
Out[239]: (12, 4)

Even though x2 looks just like x1, it has its own data buffer, with the values in a different order. The next column is now 4 bytes over, while the next row is 12 (3*4).

In [240]: x2.shape=(12,)

In [241]: x2.strides
Out[241]: (4,)

And as with x, changing the shape to 1d reduces the strides to (4,).

For x1, with data in the 0,1,2,... order, there isn’t a 1d stride that would give 0,4,8....

__array_interface__ is another useful way of displaying array information:

In [242]: x1.__array_interface__
Out[242]: 
{'strides': (4, 16),
 'typestr': '<i4',
 'shape': (4, 3),
 'version': 3,
 'data': (163336056, False),
 'descr': [('', '<i4')]}

The x1 data buffer address will be same as for x, with which it shares the data. x2 has a different buffer address.

You could also experiment with adding an order='F' parameter to the copy and reshape commands.
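
For instance, a small sketch of that experiment (the exact stride numbers assume 4-byte integers):

In [243]: y = np.arange(12, dtype=np.int32).reshape(3, 4).copy(order='F')

In [244]: y.flags['F_CONTIGUOUS']
Out[244]: True

In [245]: y.strides     # 4 bytes to the next row, 12 to the next column
Out[245]: (4, 12)

In [246]: y.reshape(12, order='F')   # walk the buffer column by column
Out[246]: array([ 0,  4,  8,  1,  5,  9,  2,  6, 10,  3,  7, 11], dtype=int32)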


如何清除ipython中的变量?

问题:如何清除ipython中的变量?

有时,我会在同一个ipython会话中重新运行脚本,当变量没有被清除时就会遇到意外的结果。如何清除所有变量?是否可以在每次调用魔术命令%run时强制执行这一操作?

谢谢

Sometimes I rerun a script within the same ipython session and I get bad surprises when variables haven’t been cleared. How do I clear all variables? And is it possible to force this somehow every time I invoke the magic command %run?

Thanks


回答 0

%reset 似乎清除定义的变量。

%reset seems to clear defined variables.


回答 1

@ErdemKAYA评论后编辑。

要删除变量,请使用magic命令:

%reset_selective <regular_expression>

从命名空间中删除的变量是与给定的<regular_expression>匹配的那些变量。

因此

%reset_selective -f a 

将删除所有名称中包含a的变量。

相反,仅擦除a而不是aa

In: a, aa = 1, 2
In: %reset_selective -f "^a$"
In: a  # raise NameError
In: aa  # returns 2

另请参阅%reset_selective?以获取更多示例;regex教程可参考https://regexone.com/。

要擦除命名空间中的所有变量,请参见:

%reset?

EDITED after @ErdemKAYA comment.

To erase a variable, use the magic command:

%reset_selective <regular_expression>

The variables that are erased from the namespace are the one matching the given <regular_expression>.

Therefore

%reset_selective -f a 

will erase all the variables containing an a.

Instead, to erase only a and not aa:

In: a, aa = 1, 2
In: %reset_selective -f "^a$"
In: a  # raise NameError
In: aa  # returns 2

see as well %reset_selective? for more examples and https://regexone.com/ for a regex tutorial.

To erase all the variables in the namespace see:

%reset?

回答 2

在iPython中,您可以删除单个变量,如下所示:

del x

In iPython you can remove a single variable like this:

del x

回答 3

我试过了

%reset -f

并清除所有变量和内容而无提示。-f在不提示yes / no的情况下对给定命令执行强制操作。

希望这会有所帮助.. :)

I tried

%reset -f

and cleared all the variables and contents without prompt. -f does the force action on the given command without prompting for yes/no.

Wish this helps.. :)


回答 4

每次重新运行脚本时,将以下行添加到新脚本将清除所有变量:

from IPython import get_ipython
get_ipython().magic('reset -sf') 

为了使生活更轻松,您可以将它们添加到默认模板中。

在Spyder中: Tools>Preferences>Editor>Edit template

Adding the following lines to a new script will clear all variables each time you rerun the script:

from IPython import get_ipython
get_ipython().magic('reset -sf') 

To make life easy, you can add them to your default template.

In Spyder: Tools>Preferences>Editor>Edit template


回答 5

除了前面提到的方法。您还可以使用命令del删除多个变量

del variable1,variable2

Apart from the methods mentioned earlier. You can also use the command del to remove multiple variables

del variable1,variable2

回答 6

控制台面板中的退出选项还将清除变量资源管理器中的所有变量

***请注意,您将失去在控制台面板中运行的所有代码

An quit option in the Console Panel will also clear all variables in variable explorer

*** Note that you will be losing all the code which you have run in the Console Panel


PyTorch中的“视图”方法如何工作?

问题:PyTorch中的“视图”方法如何工作?

我对以下代码片段中的view()方法感到困惑。

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool  = nn.MaxPool2d(2,2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1   = nn.Linear(16*5*5, 120)
        self.fc2   = nn.Linear(120, 84)
        self.fc3   = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()

我的困惑是关于以下几行。

x = x.view(-1, 16*5*5)

tensor.view()函数有什么作用?我已经在很多地方看到了它的用法,但是我不明白它是如何解释其参数的。

如果我把负值作为参数传给view()函数,会发生什么?例如,如果我调用tensor_variable.view(1, 1, -1)会怎样?

谁能用一些例子解释一下view()函数的主要原理?

I am confused about the method view() in the following code snippet.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool  = nn.MaxPool2d(2,2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1   = nn.Linear(16*5*5, 120)
        self.fc2   = nn.Linear(120, 84)
        self.fc3   = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16*5*5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()

My confusion is regarding the following line.

x = x.view(-1, 16*5*5)

What does tensor.view() function do? I have seen its usage in many places, but I can’t understand how it interprets its parameters.

What happens if I give negative values as parameters to the view() function? For example, what happens if I call, tensor_variable.view(1, 1, -1)?

Can anyone explain the main principle of view() function with some examples?


回答 0

视图功能旨在重塑张量。

说你有张量

import torch
a = torch.range(1, 16)

a是具有16个元素(从1到16(包括))的张量。如果要重塑该张量以使其成为4 x 4张量,则可以使用

a = a.view(4, 4)

现在a将是一个4 x 4的张量。请注意,重塑后元素总数必须保持不变。把张量a重塑为3 x 5的张量是不合适的。

参数-1是什么意思?

如果在某些情况下您不知道想要多少行,但确定了列数,则可以用-1来指定行数。(请注意,您可以将其推广到更多维的张量,但只能有一个轴的值是-1)。这是一种告诉库的方法:"给我一个具有这么多列的张量,由你来计算出实现这一点所需的行数"。

这可以在上面给出的神经网络代码中看到。在forward函数中的x = self.pool(F.relu(self.conv2(x)))这一行之后,您会得到一个深度为16的特征图。您必须把它展平才能交给全连接层。因此,您告诉pytorch把得到的张量重塑为特定的列数,并让它自己决定行数。

与numpy类比,view类似于numpy的reshape函数。

The view function is meant to reshape the tensor.

Say you have a tensor

import torch
a = torch.range(1, 16)

a is a tensor that has 16 elements from 1 to 16(included). If you want to reshape this tensor to make it a 4 x 4 tensor then you can use

a = a.view(4, 4)

Now a will be a 4 x 4 tensor. Note that after the reshape the total number of elements need to remain the same. Reshaping the tensor a to a 3 x 5 tensor would not be appropriate.

What is the meaning of parameter -1?

If there is any situation that you don’t know how many rows you want but are sure of the number of columns, then you can specify this with a -1. (Note that you can extend this to tensors with more dimensions. Only one of the axis value can be -1). This is a way of telling the library: “give me a tensor that has these many columns and you compute the appropriate number of rows that is necessary to make this happen”.

This can be seen in the neural network code that you have given above. After the line x = self.pool(F.relu(self.conv2(x))) in the forward function, you will have a 16 depth feature map. You have to flatten this to give it to the fully connected layer. So you tell pytorch to reshape the tensor you obtained to have specific number of columns and tell it to decide the number of rows by itself.

Drawing a similarity between numpy and pytorch, view is similar to numpy’s reshape function.


回答 1

让我们做一些例子,从简单到困难。

  1. view方法返回的张量与self张量具有相同的数据(这意味着返回的张量具有相同数量的元素),但形状不同。例如:

    a = torch.arange(1, 17)  # a's shape is (16,)
    
    a.view(4, 4) # output below
      1   2   3   4
      5   6   7   8
      9  10  11  12
     13  14  15  16
    [torch.FloatTensor of size 4x4]
    
    a.view(2, 2, 4) # output below
    (0 ,.,.) = 
    1   2   3   4
    5   6   7   8
    
    (1 ,.,.) = 
     9  10  11  12
    13  14  15  16
    [torch.FloatTensor of size 2x2x4]
  2. 假设-1不是参数之一,那么把各参数相乘时,结果必须等于张量中的元素数量。如果您执行a.view(3, 3),它将引发RuntimeError,因为形状(3 x 3)不适用于具有16个元素的输入。换句话说:3 x 3不等于16,而是9。

  3. 您可以把-1用作传给函数的参数之一,但只能使用一次。该方法只是替您完成如何填充该维度的计算。例如a.view(2, -1, 4)等价于a.view(2, 2, 4)。[16 / (2 x 4) = 2]

  4. 请注意,返回的张量共享相同的数据。如果您在“视图”中进行了更改,那么您正在更改原始张量的数据:

    b = a.view(4, 4)
    b[0, 2] = 2
    a[2] == 3.0
    False
  5. 现在来看一个更复杂的用例。文档中说,每个新的视图维度必须是原始维度的子空间,或者只能跨越满足如下类似连续性条件的维度d, d + 1, …, d + k:对所有 i = 0, …, k - 1,有 stride[i] = stride[i + 1] x size[i + 1]。否则,需要先调用contiguous()才能对张量使用view。例如:

    a = torch.rand(5, 4, 3, 2) # size (5, 4, 3, 2)
    a_t = a.permute(0, 2, 3, 1) # size (5, 3, 2, 4)
    
    # The commented line below will raise a RuntimeError, because one dimension
    # spans across two contiguous subspaces
    # a_t.view(-1, 4)
    
    # instead do:
    a_t.contiguous().view(-1, 4)
    
    # To see why the first one does not work and the second does,
    # compare a.stride() and a_t.stride()
    a.stride() # (24, 6, 2, 1)
    a_t.stride() # (24, 2, 1, 6)

    请注意,对于a_t,因为24!= 2 x 3,所以stride [0]!= stride [1] x size [1]

Let’s do some examples, from simpler to more difficult.

  1. The view method returns a tensor with the same data as the self tensor (which means that the returned tensor has the same number of elements), but with a different shape. For example:

    a = torch.arange(1, 17)  # a's shape is (16,)
    
    a.view(4, 4) # output below
      1   2   3   4
      5   6   7   8
      9  10  11  12
     13  14  15  16
    [torch.FloatTensor of size 4x4]
    
    a.view(2, 2, 4) # output below
    (0 ,.,.) = 
    1   2   3   4
    5   6   7   8
    
    (1 ,.,.) = 
     9  10  11  12
    13  14  15  16
    [torch.FloatTensor of size 2x2x4]
    
  2. Assuming that -1 is not one of the parameters, when you multiply them together, the result must be equal to the number of elements in the tensor. If you do: a.view(3, 3), it will raise a RuntimeError because shape (3 x 3) is invalid for input with 16 elements. In other words: 3 x 3 does not equal 16 but 9.

  3. You can use -1 as one of the parameters that you pass to the function, but only once. All that happens is that the method will do the math for you on how to fill that dimension. For example a.view(2, -1, 4) is equivalent to a.view(2, 2, 4). [16 / (2 x 4) = 2]

  4. Notice that the returned tensor shares the same data. If you make a change in the “view” you are changing the original tensor’s data:

    b = a.view(4, 4)
    b[0, 2] = 2
    a[2] == 3.0
    False
    
  5. Now, for a more complex use case. The documentation says that each new view dimension must either be a subspace of an original dimension, or only span d, d + 1, …, d + k that satisfy the following contiguity-like condition that for all i = 0, …, k – 1, stride[i] = stride[i + 1] x size[i + 1]. Otherwise, contiguous() needs to be called before the tensor can be viewed. For example:

    a = torch.rand(5, 4, 3, 2) # size (5, 4, 3, 2)
    a_t = a.permute(0, 2, 3, 1) # size (5, 3, 2, 4)
    
    # The commented line below will raise a RuntimeError, because one dimension
    # spans across two contiguous subspaces
    # a_t.view(-1, 4)
    
    # instead do:
    a_t.contiguous().view(-1, 4)
    
    # To see why the first one does not work and the second does,
    # compare a.stride() and a_t.stride()
    a.stride() # (24, 6, 2, 1)
    a_t.stride() # (24, 2, 1, 6)
    

    Notice that for a_t, stride[0] != stride[1] x size[1] since 24 != 2 x 3


回答 2

torch.Tensor.view()

简而言之,torch.Tensor.view()受numpy.ndarray.reshape()或numpy.reshape()启发,只要新形状与原始张量的形状兼容,它就会创建张量的一个新视图。

让我们通过一个具体的例子来详细了解这一点。

In [43]: t = torch.arange(18) 

In [44]: t 
Out[44]: 
tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17])

对于这个形状为(18,)的张量t,只能为以下形状创建新视图:

(1, 18) 或等效的 (1, -1) 或 (-1, 18)
(2, 9) 或等效的 (2, -1) 或 (-1, 9)
(3, 6) 或等效的 (3, -1) 或 (-1, 6)
(6, 3) 或等效的 (6, -1) 或 (-1, 3)
(9, 2) 或等效的 (9, -1) 或 (-1, 2)
(18, 1) 或等效的 (18, -1) 或 (-1, 1)

正如从上面的形状元组可以看到的,形状元组中各元素的乘积(例如2*9、3*6等)必须始终等于原始张量中元素的总数(在我们的例子中是18)。

另一件要注意的事情是,我们在每个形状元组的某一个位置使用了-1。通过使用-1,我们不必自己进行计算,而是把任务交给PyTorch,在创建新视图时由它算出该位置应有的值。需要注意的重要一点是,我们只能在形状元组中使用一个-1,其余的值必须由我们显式提供,否则PyTorch会抛出RuntimeError来抱怨:

RuntimeError:只能推断一个维度

因此,使用上述所有形状,PyTorch都会返回原始张量t的一个新视图。这基本上意味着,它只是针对所请求的每个新视图更改张量的步幅信息。

下面是一些示例,说明每个新视图如何改变张量的步幅。

# stride of our original tensor `t`
In [53]: t.stride() 
Out[53]: (1,)

现在,我们来看看各个新视图的步幅:

# shape (1, 18)
In [54]: t1 = t.view(1, -1)
# stride tensor `t1` with shape (1, 18)
In [55]: t1.stride() 
Out[55]: (18, 1)

# shape (2, 9)
In [56]: t2 = t.view(2, -1)
# stride of tensor `t2` with shape (2, 9)
In [57]: t2.stride()       
Out[57]: (9, 1)

# shape (3, 6)
In [59]: t3 = t.view(3, -1) 
# stride of tensor `t3` with shape (3, 6)
In [60]: t3.stride() 
Out[60]: (6, 1)

# shape (6, 3)
In [62]: t4 = t.view(6,-1)
# stride of tensor `t4` with shape (6, 3)
In [63]: t4.stride() 
Out[63]: (3, 1)

# shape (9, 2)
In [65]: t5 = t.view(9, -1) 
# stride of tensor `t5` with shape (9, 2)
In [66]: t5.stride()
Out[66]: (2, 1)

# shape (18, 1)
In [68]: t6 = t.view(18, -1)
# stride of tensor `t6` with shape (18, 1)
In [69]: t6.stride()
Out[69]: (1, 1)

这就是view()函数的魔力。只要新视图的形状与原始形状兼容,它就只是为每个新视图更改(原始)张量的步幅。

从步幅元组中还可以观察到另一件有趣的事情:步幅元组中第0个位置的元素的值等于形状元组中第1个位置的元素的值。

In [74]: t3.shape 
Out[74]: torch.Size([3, 6])
                        |
In [75]: t3.stride()    |
Out[75]: (6, 1)         |
          |_____________|

这是因为:

In [76]: t3 
Out[76]: 
tensor([[ 0,  1,  2,  3,  4,  5],
        [ 6,  7,  8,  9, 10, 11],
        [12, 13, 14, 15, 16, 17]])

步幅(6, 1)表示:沿着第0个维度从一个元素走到下一个元素,我们必须跳过6步(即从0走到6需要走6步);但沿着第1个维度从一个元素走到下一个元素,只需要走一步(例如从2到3)。

因此,步幅信息是执行计算时如何从内存中访问元素的核心。


torch.reshape()

只要新形状与原始张量的形状兼容,此函数就会返回一个视图,其行为与使用torch.Tensor.view()完全相同;否则,它将返回一个副本。

但是,torch.reshape()的注意事项中警告:

连续的输入以及具有兼容步幅的输入可以在不复制的情况下进行重塑,但不应依赖它究竟是复制还是返回视图的行为。

torch.Tensor.view()

Simply put, torch.Tensor.view() which is inspired by numpy.ndarray.reshape() or numpy.reshape(), creates a new view of the tensor, as long as the new shape is compatible with the shape of the original tensor.

Let’s understand this in detail using a concrete example.

In [43]: t = torch.arange(18) 

In [44]: t 
Out[44]: 
tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17])

With this tensor t of shape (18,), new views can only be created for the following shapes:

(1, 18) or equivalently (1, -1) or (-1, 18)
(2, 9) or equivalently (2, -1) or (-1, 9)
(3, 6) or equivalently (3, -1) or (-1, 6)
(6, 3) or equivalently (6, -1) or (-1, 3)
(9, 2) or equivalently (9, -1) or (-1, 2)
(18, 1) or equivalently (18, -1) or (-1, 1)

As we can already observe from the above shape tuples, the multiplication of the elements of the shape tuple (e.g. 2*9, 3*6 etc.) must always be equal to the total number of elements in the original tensor (18 in our example).

Another thing to observe is that we used a -1 in one of the places in each of the shape tuples. By using a -1, we are being lazy in doing the computation ourselves and rather delegate the task to PyTorch to do calculation of that value for the shape when it creates the new view. One important thing to note is that we can only use a single -1 in the shape tuple. The remaining values should be explicitly supplied by us. Else PyTorch will complain by throwing a RuntimeError:

RuntimeError: only one dimension can be inferred

So, with all of the above mentioned shapes, PyTorch will always return a new view of the original tensor t. This basically means that it just changes the stride information of the tensor for each of the new views that are requested.

Below are some examples illustrating how the strides of the tensors are changed with each new view.

# stride of our original tensor `t`
In [53]: t.stride() 
Out[53]: (1,)

Now, we will see the strides for the new views:

# shape (1, 18)
In [54]: t1 = t.view(1, -1)
# stride tensor `t1` with shape (1, 18)
In [55]: t1.stride() 
Out[55]: (18, 1)

# shape (2, 9)
In [56]: t2 = t.view(2, -1)
# stride of tensor `t2` with shape (2, 9)
In [57]: t2.stride()       
Out[57]: (9, 1)

# shape (3, 6)
In [59]: t3 = t.view(3, -1) 
# stride of tensor `t3` with shape (3, 6)
In [60]: t3.stride() 
Out[60]: (6, 1)

# shape (6, 3)
In [62]: t4 = t.view(6,-1)
# stride of tensor `t4` with shape (6, 3)
In [63]: t4.stride() 
Out[63]: (3, 1)

# shape (9, 2)
In [65]: t5 = t.view(9, -1) 
# stride of tensor `t5` with shape (9, 2)
In [66]: t5.stride()
Out[66]: (2, 1)

# shape (18, 1)
In [68]: t6 = t.view(18, -1)
# stride of tensor `t6` with shape (18, 1)
In [69]: t6.stride()
Out[69]: (1, 1)

So that’s the magic of the view() function. It just changes the strides of the (original) tensor for each of the new views, as long as the shape of the new view is compatible with the original shape.

Another interesting thing one might observe from the strides tuples is that the value of the element in the 0th position is equal to the value of the element in the 1st position of the shape tuple.

In [74]: t3.shape 
Out[74]: torch.Size([3, 6])
                        |
In [75]: t3.stride()    |
Out[75]: (6, 1)         |
          |_____________|

This is because:

In [76]: t3 
Out[76]: 
tensor([[ 0,  1,  2,  3,  4,  5],
        [ 6,  7,  8,  9, 10, 11],
        [12, 13, 14, 15, 16, 17]])

the stride (6, 1) says that to go from one element to the next element along the 0th dimension, we have to jump or take 6 steps. (i.e. to go from 0 to 6, one has to take 6 steps.) But to go from one element to the next element in the 1st dimension, we just need only one step (for e.g. to go from 2 to 3).

Thus, the strides information is at the heart of how the elements are accessed from memory for performing the computation.


torch.reshape()

This function would return a view and is exactly the same as using torch.Tensor.view() as long as the new shape is compatible with the shape of the original tensor. Otherwise, it will return a copy.

However, the notes of torch.reshape() warns that:

contiguous inputs and inputs with compatible strides can be reshaped without copying, but one should not depend on the copying vs. viewing behavior.
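
A small sketch of that difference (assuming a reasonably recent PyTorch build):

import torch

t = torch.arange(18).view(2, 9)
tt = t.t()                        # transpose -> non-contiguous

# tt.view(-1)                     # would raise a RuntimeError (strides not compatible)
flat = tt.reshape(-1)             # works, but silently copies the data
print(flat.data_ptr() == tt.data_ptr())   # False -> it is a copy

flat2 = t.reshape(-1)             # contiguous input: reshape returns a view
print(flat2.data_ptr() == t.data_ptr())   # True -> same storage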


回答 3

我发现x.view(-1, 16 * 5 * 5)等价于x.flatten(1),其中参数1表示展平过程从第1个维度开始(不展平"样本"维度)。如您所见,后一种用法在语义上更清晰,也更易于使用,因此我更喜欢flatten()!

I figured it out that x.view(-1, 16 * 5 * 5) is equivalent to x.flatten(1), where the parameter 1 indicates the flatten process starts from the 1st dimension(not flattening the ‘sample’ dimension) As you can see, the latter usage is semantically more clear and easier to use, so I prefer flatten().
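
For example (a small sketch with a hypothetical batch of feature maps):

import torch

x = torch.randn(8, 16, 5, 5)          # hypothetical batch of 8 feature maps
print(x.flatten(1).shape)             # torch.Size([8, 400]) - keeps the batch dim
print(x.view(-1, 16 * 5 * 5).shape)   # torch.Size([8, 400]) - same result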


回答 4

参数-1是什么意思?

您可以把-1理解为动态推断的参数个数占位,或者"任意值"。正因为如此,view()中只能有一个参数是-1。

如果您调用x.view(-1,1),它会根据x中的元素数量输出形状为[anything, 1]的张量。例如:

import torch
x = torch.tensor([1, 2, 3, 4])
print(x,x.shape)
print("...")
print(x.view(-1,1), x.view(-1,1).shape)
print(x.view(1,-1), x.view(1,-1).shape)

将输出:

tensor([1, 2, 3, 4]) torch.Size([4])
...
tensor([[1],
        [2],
        [3],
        [4]]) torch.Size([4, 1])
tensor([[1, 2, 3, 4]]) torch.Size([1, 4])

What is the meaning of parameter -1?

You can read -1 as dynamic number of parameters or “anything”. Because of that there can be only one parameter -1 in view().

If you ask x.view(-1,1) this will output tensor shape [anything, 1] depending on the number of elements in x. For example:

import torch
x = torch.tensor([1, 2, 3, 4])
print(x,x.shape)
print("...")
print(x.view(-1,1), x.view(-1,1).shape)
print(x.view(1,-1), x.view(1,-1).shape)

Will output:

tensor([1, 2, 3, 4]) torch.Size([4])
...
tensor([[1],
        [2],
        [3],
        [4]]) torch.Size([4, 1])
tensor([[1, 2, 3, 4]]) torch.Size([1, 4])

回答 5

weights.reshape(a, b) 将返回一个新的张量,其数据与weights相同,大小为(a, b),它会把数据复制到内存的另一部分。

weights.resize_(a, b)返回具有不同形状的同一个张量。但是,如果新形状导致的元素数量少于原始张量,则某些元素会从张量中去掉(但不会从内存中删除)。如果新形状导致的元素数量多于原始张量,则新元素在内存中是未初始化的。

weights.view(a, b) 将返回一个新的张量,其数据与weights相同,大小为(a, b)

weights.reshape(a, b) will return a new tensor with the same data as weights with size (a, b) as in it copies the data to another part of memory.

weights.resize_(a, b) returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory.

weights.view(a, b) will return a new tensor with the same data as weights with size (a, b)
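
A small sketch of those three behaviours:

import torch

w = torch.arange(6.)

v = w.view(2, 3)        # view: shares storage with w
v[0, 0] = 100.
print(w[0])             # tensor(100.) - the change is visible through w as well

r = w.reshape(3, 2)     # same data, new shape (a view here; a copy if non-contiguous)

t = torch.arange(6.)
t.resize_(2, 2)         # in-place; only the first 4 elements are kept
print(t)                # tensor([[0., 1.], [2., 3.]])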


回答 6

我真的很喜欢@Jadiel de Armas的例子。

我想对.view(…)中元素的排序方式补充一点见解

  • 对于形状为(a,b,c)的张量,其元素的顺序由一个编号系统决定:第一位有a个数字,第二位有b个数字,第三位有c个数字。
  • .view(…)返回的新Tensor中的元素映射会保留原始Tensor的这一顺序

I really liked @Jadiel de Armas examples.

I would like to add a small insight to how elements are ordered for .view(…)

  • For a Tensor with shape (a,b,c), the order of it’s elements are determined by a numbering system: where the first digit has a numbers, second digit has b numbers and third digit has c numbers.
  • The mapping of the elements in the new Tensor returned by .view(…) preserves this order of the original Tensor.
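
A small sketch illustrating that the linear (row-major) order of the elements is preserved across views:

import torch

a = torch.arange(6).view(2, 3)   # [[0, 1, 2], [3, 4, 5]]
b = a.view(3, 2)                 # [[0, 1], [2, 3], [4, 5]]
print(a.flatten().tolist())      # [0, 1, 2, 3, 4, 5]
print(b.flatten().tolist())      # [0, 1, 2, 3, 4, 5] - same underlying order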

回答 7

让我们尝试通过以下示例了解视图:

    a=torch.range(1,16)

print(a)

    tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10., 11., 12., 13., 14.,
            15., 16.])

print(a.view(-1,2))

    tensor([[ 1.,  2.],
            [ 3.,  4.],
            [ 5.,  6.],
            [ 7.,  8.],
            [ 9., 10.],
            [11., 12.],
            [13., 14.],
            [15., 16.]])

print(a.view(2,-1,4))   #3d tensor

    tensor([[[ 1.,  2.,  3.,  4.],
             [ 5.,  6.,  7.,  8.]],

            [[ 9., 10., 11., 12.],
             [13., 14., 15., 16.]]])
print(a.view(2,-1,2))

    tensor([[[ 1.,  2.],
             [ 3.,  4.],
             [ 5.,  6.],
             [ 7.,  8.]],

            [[ 9., 10.],
             [11., 12.],
             [13., 14.],
             [15., 16.]]])

print(a.view(4,-1,2))

    tensor([[[ 1.,  2.],
             [ 3.,  4.]],

            [[ 5.,  6.],
             [ 7.,  8.]],

            [[ 9., 10.],
             [11., 12.]],

            [[13., 14.],
             [15., 16.]]])

把-1作为参数值是一种简便的方法:在3d的情况下,只要我们知道y、z的值,就能让系统算出x的值,反过来也一样;对于2d同理,只要知道y的值就能算出x,反之亦然。

Let’s try to understand view by the following examples:

    a=torch.range(1,16)

print(a)

    tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10., 11., 12., 13., 14.,
            15., 16.])

print(a.view(-1,2))

    tensor([[ 1.,  2.],
            [ 3.,  4.],
            [ 5.,  6.],
            [ 7.,  8.],
            [ 9., 10.],
            [11., 12.],
            [13., 14.],
            [15., 16.]])

print(a.view(2,-1,4))   #3d tensor

    tensor([[[ 1.,  2.,  3.,  4.],
             [ 5.,  6.,  7.,  8.]],

            [[ 9., 10., 11., 12.],
             [13., 14., 15., 16.]]])
print(a.view(2,-1,2))

    tensor([[[ 1.,  2.],
             [ 3.,  4.],
             [ 5.,  6.],
             [ 7.,  8.]],

            [[ 9., 10.],
             [11., 12.],
             [13., 14.],
             [15., 16.]]])

print(a.view(4,-1,2))

    tensor([[[ 1.,  2.],
             [ 3.,  4.]],

            [[ 5.,  6.],
             [ 7.,  8.]],

            [[ 9., 10.],
             [11., 12.]],

            [[13., 14.],
             [15., 16.]]])

-1 as an argument value is an easy way to compute the value of say x provided we know values of y, z or the other way round in case of 3d and for 2d again an easy way to compute the value of say x provided we know values of y or vice versa..


如何使用pandas读取较大的csv文件?

问题:如何使用pandas读取较大的csv文件?

我试图在熊猫中读取较大的csv文件(大约6 GB),但出现内存错误:

MemoryError                               Traceback (most recent call last)
<ipython-input-58-67a72687871b> in <module>()
----> 1 data=pd.read_csv('aphro.csv',sep=';')

...

MemoryError: 

有什么帮助吗?

I am trying to read a large csv file (aprox. 6 GB) in pandas and i am getting a memory error:

MemoryError                               Traceback (most recent call last)
<ipython-input-58-67a72687871b> in <module>()
----> 1 data=pd.read_csv('aphro.csv',sep=';')

...

MemoryError: 

Any help on this?


回答 0

该错误表明机器没有足够的内存来一次把整个CSV读入DataFrame。假设您并不需要一次把整个数据集都放在内存中,避免该问题的一种方法是分块处理CSV(通过指定chunksize参数):

chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)

chunksize参数指定每个块的行数。(当然,最后一块可能少于chunksize行。)

The error shows that the machine does not have enough memory to read the entire CSV into a DataFrame at one time. Assuming you do not need the entire dataset in memory all at one time, one way to avoid the problem would be to process the CSV in chunks (by specifying the chunksize parameter):

chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)

The chunksize parameter specifies the number of rows per chunk. (The last chunk may contain fewer than chunksize rows, of course.)
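
For example, if the per-chunk processing is a filter, the filtered pieces can be collected and concatenated at the end (a sketch; the column name and threshold are made up):

import pandas as pd

chunksize = 10 ** 6
pieces = []
for chunk in pd.read_csv(filename, chunksize=chunksize):
    # keep only the rows we care about (hypothetical column/threshold)
    pieces.append(chunk[chunk['value'] > 0])

data = pd.concat(pieces, ignore_index=True)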


回答 1

分块不一定总是解决此问题的第一站。

  1. 文件是否由于重复的非数字数据或不需要的列而变大?

    如果是这样,您有时可以通过读取列作为类别并通过pd.read_csv usecols参数选择所需的列来节省大量内存。

  2. 您的工作流程是否需要切片,操作,导出?

    如果是这样,则可以使用dask.dataframe进行切片,执行计算并迭代导出。分块由dask静默执行,它也支持pandas API的一个子集。

  3. 如果所有其他方法均失败,请通过块逐行读取。

    作为最后手段,可以通过熊猫或csv库分块逐行读取。

Chunking shouldn’t always be the first port of call for this problem.

  1. Is the file large due to repeated non-numeric data or unwanted columns?

    If so, you can sometimes see massive memory savings by reading in columns as categories and selecting required columns via pd.read_csv usecols parameter (see the sketch after this list).

  2. Does your workflow require slicing, manipulating, exporting?

    If so, you can use dask.dataframe to slice, perform your calculations and export iteratively. Chunking is performed silently by dask, which also supports a subset of pandas API.

  3. If all else fails, read line by line via chunks.

    Chunk via pandas or via csv library as a last resort.
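
A sketch of option 1 above (the file and column names are hypothetical):

import pandas as pd

df = pd.read_csv(
    'big.csv',                          # hypothetical file
    usecols=['id', 'country', 'value'], # read only the columns you need
    dtype={'country': 'category'},      # repeated strings stored as a category
)
print(df.memory_usage(deep=True))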


回答 2

我这样进行:

chunks=pd.read_table('aphro.csv',chunksize=1000000,sep=';',\
       names=['lat','long','rf','date','slno'],index_col='slno',\
       header=None,parse_dates=['date'])

df=pd.DataFrame()
%time df=pd.concat(chunk.groupby(['lat','long',chunk['date'].map(lambda x: x.year)])['rf'].agg(['sum']) for chunk in chunks)

I proceeded like this:

chunks=pd.read_table('aphro.csv',chunksize=1000000,sep=';',\
       names=['lat','long','rf','date','slno'],index_col='slno',\
       header=None,parse_dates=['date'])

df=pd.DataFrame()
%time df=pd.concat(chunk.groupby(['lat','long',chunk['date'].map(lambda x: x.year)])['rf'].agg(['sum']) for chunk in chunks)

回答 3

对于大数据,我建议您使用库“ dask”,
例如:

# Dataframes implement the Pandas API
import dask.dataframe as dd
df = dd.read_csv('s3://.../2018-*-*.csv')

您可以从此处阅读更多文档。

另一个很好的选择是使用modin,因为所有功能都与pandas相同,但它利用了dask等分布式数据框架库。

For large data l recommend you use the library “dask”
e.g:

# Dataframes implement the Pandas API
import dask.dataframe as dd
df = dd.read_csv('s3://.../2018-*-*.csv')

You can read more from the documentation here.

Another great alternative would be to use modin because all the functionality is identical to pandas yet it leverages on distributed dataframe libraries such as dask.


回答 4

上面的答案已经很好地回答了这个问题。无论如何,如果您需要把所有数据都放在内存中,可以看看bcolz。它会压缩内存中的数据,我用它的体验非常好,但它缺少许多熊猫的功能。

编辑:我得到的压缩率大约是原始大小的1/10,当然这取决于数据的类型。缺少的重要功能是聚合。

The above answer already satisfies the topic. Anyway, if you need all the data in memory, have a look at bcolz. It compresses the data in memory. I have had really good experience with it, but it is missing a lot of pandas features.

Edit: I got compression rates at around 1/10 of the original size, I think, of course depending on the kind of data. Important features missing were aggregates.


回答 5

您可以分块读取数据,并把每个块保存为pickle文件。

import pandas as pd 
import pickle

in_path = "" #Path where the large file is
out_path = "" #Path to save the pickle files to
chunk_size = 400000 #size of chunks relies on your available memory
separator = "~"

reader = pd.read_csv(in_path,sep=separator,chunksize=chunk_size, 
                    low_memory=False)    


for i, chunk in enumerate(reader):
    out_file = out_path + "/data_{}.pkl".format(i+1)
    with open(out_file, "wb") as f:
        pickle.dump(chunk,f,pickle.HIGHEST_PROTOCOL)

下一步,读取这些pickle文件,并把每一个追加到所需的数据框中。

import glob
pickle_path = "" #Same Path as out_path i.e. where the pickle files are

data_p_files=[]
for name in glob.glob(pickle_path + "/data_*.pkl"):
   data_p_files.append(name)


df = pd.DataFrame([])
for i in range(len(data_p_files)):
    df = df.append(pd.read_pickle(data_p_files[i]),ignore_index=True)

You can read in the data as chunks and save each chunk as pickle.

import pandas as pd 
import pickle

in_path = "" #Path where the large file is
out_path = "" #Path to save the pickle files to
chunk_size = 400000 #size of chunks relies on your available memory
separator = "~"

reader = pd.read_csv(in_path,sep=separator,chunksize=chunk_size, 
                    low_memory=False)    


for i, chunk in enumerate(reader):
    out_file = out_path + "/data_{}.pkl".format(i+1)
    with open(out_file, "wb") as f:
        pickle.dump(chunk,f,pickle.HIGHEST_PROTOCOL)

In the next step you read in the pickles and append each pickle to your desired dataframe.

import glob
pickle_path = "" #Same Path as out_path i.e. where the pickle files are

data_p_files=[]
for name in glob.glob(pickle_path + "/data_*.pkl"):
   data_p_files.append(name)


df = pd.DataFrame([])
for i in range(len(data_p_files)):
    df = df.append(pd.read_pickle(data_p_files[i]),ignore_index=True)

回答 6

函数read_csv和read_table几乎相同。但是,在程序中使用函数read_table时,必须分配定界符“,”。

def get_from_action_data(fname, chunk_size=100000):
    reader = pd.read_csv(fname, header=0, iterator=True)
    chunks = []
    loop = True
    while loop:
        try:
            chunk = reader.get_chunk(chunk_size)[["user_id", "type"]]
            chunks.append(chunk)
        except StopIteration:
            loop = False
            print("Iteration is stopped")

    df_ac = pd.concat(chunks, ignore_index=True)
    return df_ac

The function read_csv and read_table is almost the same. But you must assign the delimiter “,” when you use the function read_table in your program.

def get_from_action_data(fname, chunk_size=100000):
    reader = pd.read_csv(fname, header=0, iterator=True)
    chunks = []
    loop = True
    while loop:
        try:
            chunk = reader.get_chunk(chunk_size)[["user_id", "type"]]
            chunks.append(chunk)
        except StopIteration:
            loop = False
            print("Iteration is stopped")

    df_ac = pd.concat(chunks, ignore_index=True)
    return df_ac

回答 7

解决方案1:

使用大数据的熊猫

解决方案2:

TextFileReader = pd.read_csv(path, chunksize=1000)  # the number of rows per chunk

dfList = []
for df in TextFileReader:
    dfList.append(df)

df = pd.concat(dfList,sort=False)

Solution 1:

Using pandas with large data

Solution 2:

TextFileReader = pd.read_csv(path, chunksize=1000)  # the number of rows per chunk

dfList = []
for df in TextFileReader:
    dfList.append(df)

df = pd.concat(dfList,sort=False)

回答 8

下面是一个示例:

chunkTemp = []
queryTemp = []
query = pd.DataFrame()

for chunk in pd.read_csv(file, header=0, chunksize=<your_chunksize>, iterator=True, low_memory=False):

    #REPLACING BLANK SPACES AT COLUMNS' NAMES FOR SQL OPTIMIZATION
    chunk = chunk.rename(columns = {c: c.replace(' ', '') for c in chunk.columns})

    #YOU CAN EITHER: 
    #1)BUFFER THE CHUNKS IN ORDER TO LOAD YOUR WHOLE DATASET 
    chunkTemp.append(chunk)

    #2)DO YOUR PROCESSING OVER A CHUNK AND STORE THE RESULT OF IT
    query = chunk[chunk[<column_name>].str.startswith(<some_pattern>)]   
    #BUFFERING PROCESSED DATA
    queryTemp.append(query)

#!  NEVER DO pd.concat OR pd.DataFrame() INSIDE A LOOP
print("Database: CONCATENATING CHUNKS INTO A SINGLE DATAFRAME")
chunk = pd.concat(chunkTemp)
print("Database: LOADED")

#CONCATENATING PROCESSED DATA
query = pd.concat(queryTemp)
print(query)

Here follows an example:

chunkTemp = []
queryTemp = []
query = pd.DataFrame()

for chunk in pd.read_csv(file, header=0, chunksize=<your_chunksize>, iterator=True, low_memory=False):

    #REPLACING BLANK SPACES AT COLUMNS' NAMES FOR SQL OPTIMIZATION
    chunk = chunk.rename(columns = {c: c.replace(' ', '') for c in chunk.columns})

    #YOU CAN EITHER: 
    #1)BUFFER THE CHUNKS IN ORDER TO LOAD YOUR WHOLE DATASET 
    chunkTemp.append(chunk)

    #2)DO YOUR PROCESSING OVER A CHUNK AND STORE THE RESULT OF IT
    query = chunk[chunk[<column_name>].str.startswith(<some_pattern>)]   
    #BUFFERING PROCESSED DATA
    queryTemp.append(query)

#!  NEVER DO pd.concat OR pd.DataFrame() INSIDE A LOOP
print("Database: CONCATENATING CHUNKS INTO A SINGLE DATAFRAME")
chunk = pd.concat(chunkTemp)
print("Database: LOADED")

#CONCATENATING PROCESSED DATA
query = pd.concat(queryTemp)
print(query)

回答 9

您可以尝试sframe,它的语法与pandas相同,但允许您处理大于RAM的文件。

You can try sframe, that have the same syntax as pandas but allows you to manipulate files that are bigger than your RAM.


回答 10

如果您使用熊猫将大文件读入块中,然后逐行产生,这就是我所做的

import pandas as pd

def chunck_generator(filename, header=False,chunk_size = 10 ** 5):
   for chunk in pd.read_csv(filename,delimiter=',', iterator=True, chunksize=chunk_size, parse_dates=[1] ): 
        yield (chunk)

def _generator( filename, header=False,chunk_size = 10 ** 5):
    chunk = chunck_generator(filename, header=False,chunk_size = 10 ** 5)
    for row in chunk:
        yield row

if __name__ == "__main__":
    filename = r'file.csv'
    generator = _generator(filename=filename)
    while True:
        print(next(generator))

If you use pandas read large file into chunk and then yield row by row, here is what I have done

import pandas as pd

def chunck_generator(filename, header=False,chunk_size = 10 ** 5):
   for chunk in pd.read_csv(filename,delimiter=',', iterator=True, chunksize=chunk_size, parse_dates=[1] ): 
        yield (chunk)

def _generator( filename, header=False,chunk_size = 10 ** 5):
    chunk = chunck_generator(filename, header=False,chunk_size = 10 ** 5)
    for row in chunk:
        yield row

if __name__ == "__main__":
    filename = r'file.csv'
    generator = _generator(filename=filename)
    while True:
        print(next(generator))

回答 11

我想根据已经提供的大多数潜在解决方案做出更全面的回答。我还想指出另一种可能有助于阅读过程的潜在帮助。

选项1:dtypes

“dtypes”是一个非常强大的参数,可用于减轻read方法的内存压力。请参阅这个和这个答案。熊猫默认会尝试推断数据的dtypes。

参照数据结构,存储的每个数据都会进行内存分配。在基本级别上,请参考以下值(下表说明了C编程语言的值):

The maximum value of UNSIGNED CHAR = 255                                    
The minimum value of SHORT INT = -32768                                     
The maximum value of SHORT INT = 32767                                      
The minimum value of INT = -2147483648                                      
The maximum value of INT = 2147483647                                       
The minimum value of CHAR = -128                                            
The maximum value of CHAR = 127                                             
The minimum value of LONG = -9223372036854775808                            
The maximum value of LONG = 9223372036854775807

请参阅页面以查看NumPy和C类型之间的匹配。

假设您有一个由数字组成的整数数组。您可以在理论上和实践上都进行分配,比如说16位整数类型的数组,但是您分配的内存将比实际存储该数组所需的更多。为防止这种情况,您可以dtype在上设置选项read_csv。您不希望将数组项存储为长整数,而实际上可以使用8位整数(np.int8np.uint8)来使它们适合。

观察以下dtype映射。

资料来源:https://pbpython.com/pandas_dtypes.html

您可以把dtype参数以{column: type}字典的形式传给pandas的read方法。

import numpy as np
import pandas as pd

df_dtype = {
        "column_1": int,
        "column_2": str,
        "column_3": np.int16,
        "column_4": np.uint8,
        ...
        "column_n": np.float32
}

df = pd.read_csv('path/to/file', dtype=df_dtype)

选项2:大块读取

逐块读取数据使您可以访问内存中的部分数据,并且可以对数据进行预处理,并保留处理后的数据而不是原始数据。如果将此选项与第一个dtypes结合使用会更好。

我想指出该过程的“熊猫食谱”部分,您可以在这里找到它。注意那两个部分;

选项3:Dask

Dask是在Dask网站上定义为的框架:

Dask为分析提供高级并行性,从而为您喜欢的工具提供大规模性能

它的诞生是为了覆盖熊猫无法到达的必要部分。Dask是一个功能强大的框架,通过以分布式方式处理它,可以使您访问更多数据。

您可以使用dask对整个数据进行预处理,Dask负责分块部分,因此与熊猫不同,您只需定义处理步骤,让Dask去完成工作。在通过compute和/或persist显式触发之前,Dask不会真正执行这些计算(有关两者的差异,请参见此处的答案)。

其他援助(想法)

  • 为数据设计的ETL流。仅保留原始数据中需要的内容。
    • 首先,使用Dask或PySpark之类的框架将ETL应用于整个数据,然后导出处理后的数据。
    • 然后查看处理后的数据是否可以整体容纳在内存中。
  • 考虑增加RAM。
  • 考虑在云平台上使用该数据。

I want to make a more comprehensive answer based on most of the potential solutions that are already provided. I also want to point out one more potential aid that may help the reading process.

Option 1: dtypes

“dtypes” is a pretty powerful parameter that you can use to reduce the memory pressure of read methods. See this and this answer. Pandas, by default, tries to infer the dtypes of the data.

Referring to data structures, for every piece of data stored, a memory allocation takes place. At a basic level, refer to the values below (the table below illustrates values for the C programming language):

The maximum value of UNSIGNED CHAR = 255                                    
The minimum value of SHORT INT = -32768                                     
The maximum value of SHORT INT = 32767                                      
The minimum value of INT = -2147483648                                      
The maximum value of INT = 2147483647                                       
The minimum value of CHAR = -128                                            
The maximum value of CHAR = 127                                             
The minimum value of LONG = -9223372036854775808                            
The maximum value of LONG = 9223372036854775807

Refer to this page to see the matching between NumPy and C types.

Let’s say you have an array of integers of digits. You can both theoretically and practically assign, say array of 16-bit integer type, but you would then allocate more memory than you actually need to store that array. To prevent this, you can set dtype option on read_csv. You do not want to store the array items as long integer where actually you can fit them with 8-bit integer (np.int8 or np.uint8).

Observe the following dtype map.

Source: https://pbpython.com/pandas_dtypes.html

You can pass dtype parameter as a parameter on pandas methods as dict on read like {column: type}.

import numpy as np
import pandas as pd

df_dtype = {
        "column_1": int,
        "column_2": str,
        "column_3": np.int16,
        "column_4": np.uint8,
        ...
        "column_n": np.float32
}

df = pd.read_csv('path/to/file', dtype=df_dtype)

Option 2: Read by Chunks

Reading the data in chunks allows you to access a part of the data in-memory, and you can apply preprocessing on your data and preserve the processed data rather than raw data. It’d be much better if you combine this option with the first one, dtypes.

I want to point out the pandas cookbook sections for that process, where you can find it here. Note those two sections there;

Option 3: Dask

Dask is a framework that is defined in Dask’s website as:

Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love

It was born to cover the necessary parts where pandas cannot reach. Dask is a powerful framework that allows you much more data access by processing it in a distributed way.

You can use dask to preprocess your data as a whole, Dask takes care of the chunking part, so unlike pandas you can just define your processing steps and let Dask do the work. Dask does not apply the computations before it is explicitly pushed by compute and/or persist (see the answer here for the difference).
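
A small sketch of that lazy style (the file and column names follow the earlier answers and are only illustrative):

import dask.dataframe as dd

ddf = dd.read_csv('aphro.csv', sep=';')    # lazy: nothing is read yet
result = ddf.groupby('lat')['rf'].mean()   # still lazy: only builds a task graph
print(result.compute())                    # only now is the CSV actually processed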

Other Aids (Ideas)

  • ETL flow designed for the data. Keeping only what is needed from the raw data.
    • First, apply ETL to whole data with frameworks like Dask or PySpark, and export the processed data.
    • Then see if the processed data can be fit in the memory as a whole.
  • Consider increasing your RAM.
  • Consider working with that data on a cloud platform.

回答 12

除了上述答案之外,对于那些想要处理CSV然后导出为csv、parquet或SQL的用户来说,d6tstack是另一个不错的选择。您可以加载多个文件,并且它能处理数据架构的变化(增加/删除的列)。分块的核外(out-of-core)处理支持已经内置。

def apply(dfg):
    # do stuff
    return dfg

c = d6tstack.combine_csv.CombinerCSV(['bigfile.csv'], apply_after_read=apply, sep=',', chunksize=1e6)

# or
c = d6tstack.combine_csv.CombinerCSV(glob.glob('*.csv'), apply_after_read=apply, chunksize=1e6)

# output to various formats, automatically chunked to reduce memory consumption
c.to_csv_combine(filename='out.csv')
c.to_parquet_combine(filename='out.pq')
c.to_psql_combine('postgresql+psycopg2://usr:pwd@localhost/db', 'tablename') # fast for postgres
c.to_mysql_combine('mysql+mysqlconnector://usr:pwd@localhost/db', 'tablename') # fast for mysql
c.to_sql_combine('postgresql+psycopg2://usr:pwd@localhost/db', 'tablename') # slow but flexible

In addition to the answers above, for those who want to process CSV and then export to csv, parquet or SQL, d6tstack is another good option. You can load multiple files and it deals with data schema changes (added/removed columns). Chunked out of core support is already built in.

def apply(dfg):
    # do stuff
    return dfg

c = d6tstack.combine_csv.CombinerCSV(['bigfile.csv'], apply_after_read=apply, sep=',', chunksize=1e6)

# or
c = d6tstack.combine_csv.CombinerCSV(glob.glob('*.csv'), apply_after_read=apply, chunksize=1e6)

# output to various formats, automatically chunked to reduce memory consumption
c.to_csv_combine(filename='out.csv')
c.to_parquet_combine(filename='out.pq')
c.to_psql_combine('postgresql+psycopg2://usr:pwd@localhost/db', 'tablename') # fast for postgres
c.to_mysql_combine('mysql+mysqlconnector://usr:pwd@localhost/db', 'tablename') # fast for mysql
c.to_sql_combine('postgresql+psycopg2://usr:pwd@localhost/db', 'tablename') # slow but flexible

回答 13

如果有人仍在寻找这样的东西,我发现这个叫做modin的新库可以提供帮助。它使用可以帮助读取的分布式计算。这是一篇很好的文章,比较了它与熊猫的功能。它基本上使用与熊猫相同的功能。

import modin.pandas as pd
pd.read_csv(CSV_FILE_NAME)

In case someone is still looking for something like this, I found that this new library called modin can help. It uses distributed computing that can help with the read. Here’s a nice article comparing its functionality with pandas. It essentially uses the same functions as pandas.

import modin.pandas as pd
pd.read_csv(CSV_FILE_NAME)

回答 14

在使用chunksize选项之前,如果想先确认要写在@unutbu提到的分块for循环里的处理函数是否正确,可以简单地使用nrows选项。

small_df = pd.read_csv(filename, nrows=100)

一旦确定过程块已准备就绪,就可以将其放入整个数据帧的块循环中。

Before using the chunksize option, if you want to be sure about the process function that you want to write inside the chunking for-loop (as mentioned by @unutbu), you can simply use the nrows option.

small_df = pd.read_csv(filename, nrows=100)

Once you are sure that the process block is ready, you can put that in the chunking for loop for the entire dataframe.
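
Putting the two steps together might look like this (a sketch; process stands for whatever function you are developing):

import pandas as pd

# 1) develop and debug the processing function on a small sample
small_df = pd.read_csv(filename, nrows=100)
process(small_df)

# 2) once it works, run it over the whole file in chunks
for chunk in pd.read_csv(filename, chunksize=10 ** 6):
    process(chunk)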


如何剖析Python中的内存使用情况?

问题:如何剖析Python中的内存使用情况?

最近,我对算法产生了兴趣,并通过编写一个简单的实现,然后以各种方式对其进行了优化来开始探索它们。

我已经熟悉了用于分析运行时间的标准Python模块(对于大多数情况,我发现IPython中的timeit魔术函数就足够了),但我也对内存使用情况感兴趣,这样我就可以同时探索相应的权衡(例如,缓存先前计算结果的表与按需重新计算它们的成本)。是否有一个模块可以为我剖析给定函数的内存使用情况?

I’ve recently become interested in algorithms and have begun exploring them by writing a naive implementation and then optimizing it in various ways.

I’m already familiar with the standard Python module for profiling runtime (for most things I’ve found the timeit magic function in IPython to be sufficient), but I’m also interested in memory usage so I can explore those tradeoffs as well (e.g. the cost of caching a table of previously computed values versus recomputing them as needed). Is there a module that will profile the memory usage of a given function for me?


回答 0

在这里已经回答了这个问题:Python memory profiler

基本上,您可以执行以下操作(引用自Guppy-PE):

>>> from guppy import hpy; h=hpy()
>>> h.heap()
Partition of a set of 48477 objects. Total size = 3265516 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  25773  53  1612820  49   1612820  49 str
     1  11699  24   483960  15   2096780  64 tuple
     2    174   0   241584   7   2338364  72 dict of module
     3   3478   7   222592   7   2560956  78 types.CodeType
     4   3296   7   184576   6   2745532  84 function
     5    401   1   175112   5   2920644  89 dict of class
     6    108   0    81888   3   3002532  92 dict (no owner)
     7    114   0    79632   2   3082164  94 dict of type
     8    117   0    51336   2   3133500  96 type
     9    667   1    24012   1   3157512  97 __builtin__.wrapper_descriptor
<76 more rows. Type e.g. '_.more' to view.>
>>> h.iso(1,[],{})
Partition of a set of 3 objects. Total size = 176 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0      1  33      136  77       136  77 dict (no owner)
     1      1  33       28  16       164  93 list
     2      1  33       12   7       176 100 int
>>> x=[]
>>> h.iso(x).sp
 0: h.Root.i0_modules['__main__'].__dict__['x']
>>> 

This one has been answered already here: Python memory profiler

Basically you do something like that (cited from Guppy-PE):

>>> from guppy import hpy; h=hpy()
>>> h.heap()
Partition of a set of 48477 objects. Total size = 3265516 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  25773  53  1612820  49   1612820  49 str
     1  11699  24   483960  15   2096780  64 tuple
     2    174   0   241584   7   2338364  72 dict of module
     3   3478   7   222592   7   2560956  78 types.CodeType
     4   3296   7   184576   6   2745532  84 function
     5    401   1   175112   5   2920644  89 dict of class
     6    108   0    81888   3   3002532  92 dict (no owner)
     7    114   0    79632   2   3082164  94 dict of type
     8    117   0    51336   2   3133500  96 type
     9    667   1    24012   1   3157512  97 __builtin__.wrapper_descriptor
<76 more rows. Type e.g. '_.more' to view.>
>>> h.iso(1,[],{})
Partition of a set of 3 objects. Total size = 176 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0      1  33      136  77       136  77 dict (no owner)
     1      1  33       28  16       164  93 list
     2      1  33       12   7       176 100 int
>>> x=[]
>>> h.iso(x).sp
 0: h.Root.i0_modules['__main__'].__dict__['x']
>>> 

回答 1

Python 3.4包含一个新模块:tracemalloc。它提供有关哪些代码分配最多内存的详细统计信息。这是显示分配内存的前三行的示例。

from collections import Counter
import linecache
import os
import tracemalloc

def display_top(snapshot, key_type='lineno', limit=3):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)

    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace "/path/to/module/file.py" with "module/file.py"
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print("#%s: %s:%s: %.1f KiB"
              % (index, filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print('    %s' % line)

    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))


tracemalloc.start()

counts = Counter()
fname = '/usr/share/dict/american-english'
with open(fname) as words:
    words = list(words)
    for word in words:
        prefix = word[:3]
        counts[prefix] += 1
print('Top prefixes:', counts.most_common(3))

snapshot = tracemalloc.take_snapshot()
display_top(snapshot)

结果如下:

Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)]
Top 3 lines
#1: scratches/memory_test.py:37: 6527.1 KiB
    words = list(words)
#2: scratches/memory_test.py:39: 247.7 KiB
    prefix = word[:3]
#3: scratches/memory_test.py:40: 193.0 KiB
    counts[prefix] += 1
4 other: 4.3 KiB
Total allocated size: 6972.1 KiB

什么时候内存泄漏不是泄漏?

当计算结束时仍保留内存时,该示例非常有用,但是有时您拥有分配大量内存然后释放所有内存的代码。从技术上讲,这不是内存泄漏,但是它使用的内存比您想象的要多。释放所有内存时如何跟踪?如果是您的代码,则可能可以添加一些调试代码以在运行时拍摄快照。如果没有,您可以在主线程运行时启动后台线程来监视内存使用情况。

这是前面的示例,其中所有代码都已移入count_prefixes()函数中。该函数返回时,将释放所有内存。我还添加了一些sleep()调用来模拟长时间运行的计算。

from collections import Counter
import linecache
import os
import tracemalloc
from time import sleep


def count_prefixes():
    sleep(2)  # Start up time.
    counts = Counter()
    fname = '/usr/share/dict/american-english'
    with open(fname) as words:
        words = list(words)
        for word in words:
            prefix = word[:3]
            counts[prefix] += 1
            sleep(0.0001)
    most_common = counts.most_common(3)
    sleep(3)  # Shut down time.
    return most_common


def main():
    tracemalloc.start()

    most_common = count_prefixes()
    print('Top prefixes:', most_common)

    snapshot = tracemalloc.take_snapshot()
    display_top(snapshot)


def display_top(snapshot, key_type='lineno', limit=3):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)

    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace "/path/to/module/file.py" with "module/file.py"
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print("#%s: %s:%s: %.1f KiB"
              % (index, filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print('    %s' % line)

    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))


main()

当我运行该版本时,内存使用已从6MB减少到4KB,因为该函数在完成时会释放其所有内存。

Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)]
Top 3 lines
#1: collections/__init__.py:537: 0.7 KiB
    self.update(*args, **kwds)
#2: collections/__init__.py:555: 0.6 KiB
    return _heapq.nlargest(n, self.items(), key=_itemgetter(1))
#3: python3.6/heapq.py:569: 0.5 KiB
    result = [(key(elem), i, elem) for i, elem in zip(range(0, -n, -1), it)]
10 other: 2.2 KiB
Total allocated size: 4.0 KiB

现在,这是受另一个答案启发的版本,该答案启动了另一个线程来监视内存使用情况。

from collections import Counter
import linecache
import os
import tracemalloc
from datetime import datetime
from queue import Queue, Empty
from resource import getrusage, RUSAGE_SELF
from threading import Thread
from time import sleep

def memory_monitor(command_queue: Queue, poll_interval=1):
    tracemalloc.start()
    old_max = 0
    snapshot = None
    while True:
        try:
            command_queue.get(timeout=poll_interval)
            if snapshot is not None:
                print(datetime.now())
                display_top(snapshot)

            return
        except Empty:
            max_rss = getrusage(RUSAGE_SELF).ru_maxrss
            if max_rss > old_max:
                old_max = max_rss
                snapshot = tracemalloc.take_snapshot()
                print(datetime.now(), 'max RSS', max_rss)


def count_prefixes():
    sleep(2)  # Start up time.
    counts = Counter()
    fname = '/usr/share/dict/american-english'
    with open(fname) as words:
        words = list(words)
        for word in words:
            prefix = word[:3]
            counts[prefix] += 1
            sleep(0.0001)
    most_common = counts.most_common(3)
    sleep(3)  # Shut down time.
    return most_common


def main():
    queue = Queue()
    poll_interval = 0.1
    monitor_thread = Thread(target=memory_monitor, args=(queue, poll_interval))
    monitor_thread.start()
    try:
        most_common = count_prefixes()
        print('Top prefixes:', most_common)
    finally:
        queue.put('stop')
        monitor_thread.join()


def display_top(snapshot, key_type='lineno', limit=3):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)

    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace "/path/to/module/file.py" with "module/file.py"
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print("#%s: %s:%s: %.1f KiB"
              % (index, filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print('    %s' % line)

    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))


main()

resource模块使您可以检查当前内存使用情况,并从峰值内存使用情况中保存快照。队列让主线程告诉内存监视器线程何时打印其报告并关闭。运行时,它显示list()调用正在使用的内存:

2018-05-29 10:34:34.441334 max RSS 10188
2018-05-29 10:34:36.475707 max RSS 23588
2018-05-29 10:34:36.616524 max RSS 38104
2018-05-29 10:34:36.772978 max RSS 45924
2018-05-29 10:34:36.929688 max RSS 46824
2018-05-29 10:34:37.087554 max RSS 46852
Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)]
2018-05-29 10:34:56.281262
Top 3 lines
#1: scratches/scratch.py:36: 6527.0 KiB
    words = list(words)
#2: scratches/scratch.py:38: 16.4 KiB
    prefix = word[:3]
#3: scratches/scratch.py:39: 10.1 KiB
    counts[prefix] += 1
19 other: 10.8 KiB
Total allocated size: 6564.3 KiB

如果您使用的是Linux,则可能会发现/proc/self/statm比该resource模块更有用。

Python 3.4 includes a new module: tracemalloc. It provides detailed statistics about which code is allocating the most memory. Here’s an example that displays the top three lines allocating memory.

from collections import Counter
import linecache
import os
import tracemalloc

def display_top(snapshot, key_type='lineno', limit=3):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)

    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace "/path/to/module/file.py" with "module/file.py"
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print("#%s: %s:%s: %.1f KiB"
              % (index, filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print('    %s' % line)

    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))


tracemalloc.start()

counts = Counter()
fname = '/usr/share/dict/american-english'
with open(fname) as words:
    words = list(words)
    for word in words:
        prefix = word[:3]
        counts[prefix] += 1
print('Top prefixes:', counts.most_common(3))

snapshot = tracemalloc.take_snapshot()
display_top(snapshot)

And here are the results:

Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)]
Top 3 lines
#1: scratches/memory_test.py:37: 6527.1 KiB
    words = list(words)
#2: scratches/memory_test.py:39: 247.7 KiB
    prefix = word[:3]
#3: scratches/memory_test.py:40: 193.0 KiB
    counts[prefix] += 1
4 other: 4.3 KiB
Total allocated size: 6972.1 KiB

When is a memory leak not a leak?

That example is great when the memory is still being held at the end of the calculation, but sometimes you have code that allocates a lot of memory and then releases it all. It’s not technically a memory leak, but it’s using more memory than you think it should. How can you track memory usage when it all gets released? If it’s your code, you can probably add some debugging code to take snapshots while it’s running. If not, you can start a background thread to monitor memory usage while the main thread runs.

Here’s the previous example where the code has all been moved into the count_prefixes() function. When that function returns, all the memory is released. I also added some sleep() calls to simulate a long-running calculation.

from collections import Counter
import linecache
import os
import tracemalloc
from time import sleep


def count_prefixes():
    sleep(2)  # Start up time.
    counts = Counter()
    fname = '/usr/share/dict/american-english'
    with open(fname) as words:
        words = list(words)
        for word in words:
            prefix = word[:3]
            counts[prefix] += 1
            sleep(0.0001)
    most_common = counts.most_common(3)
    sleep(3)  # Shut down time.
    return most_common


def main():
    tracemalloc.start()

    most_common = count_prefixes()
    print('Top prefixes:', most_common)

    snapshot = tracemalloc.take_snapshot()
    display_top(snapshot)


def display_top(snapshot, key_type='lineno', limit=3):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)

    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace "/path/to/module/file.py" with "module/file.py"
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print("#%s: %s:%s: %.1f KiB"
              % (index, filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print('    %s' % line)

    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))


main()

When I run that version, the memory usage has gone from 6MB down to 4KB, because the function released all its memory when it finished.

Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)]
Top 3 lines
#1: collections/__init__.py:537: 0.7 KiB
    self.update(*args, **kwds)
#2: collections/__init__.py:555: 0.6 KiB
    return _heapq.nlargest(n, self.items(), key=_itemgetter(1))
#3: python3.6/heapq.py:569: 0.5 KiB
    result = [(key(elem), i, elem) for i, elem in zip(range(0, -n, -1), it)]
10 other: 2.2 KiB
Total allocated size: 4.0 KiB

Now here’s a version inspired by another answer that starts a second thread to monitor memory usage.

from collections import Counter
import linecache
import os
import tracemalloc
from datetime import datetime
from queue import Queue, Empty
from resource import getrusage, RUSAGE_SELF
from threading import Thread
from time import sleep

def memory_monitor(command_queue: Queue, poll_interval=1):
    tracemalloc.start()
    old_max = 0
    snapshot = None
    while True:
        try:
            command_queue.get(timeout=poll_interval)
            if snapshot is not None:
                print(datetime.now())
                display_top(snapshot)

            return
        except Empty:
            max_rss = getrusage(RUSAGE_SELF).ru_maxrss
            if max_rss > old_max:
                old_max = max_rss
                snapshot = tracemalloc.take_snapshot()
                print(datetime.now(), 'max RSS', max_rss)


def count_prefixes():
    sleep(2)  # Start up time.
    counts = Counter()
    fname = '/usr/share/dict/american-english'
    with open(fname) as words:
        words = list(words)
        for word in words:
            prefix = word[:3]
            counts[prefix] += 1
            sleep(0.0001)
    most_common = counts.most_common(3)
    sleep(3)  # Shut down time.
    return most_common


def main():
    queue = Queue()
    poll_interval = 0.1
    monitor_thread = Thread(target=memory_monitor, args=(queue, poll_interval))
    monitor_thread.start()
    try:
        most_common = count_prefixes()
        print('Top prefixes:', most_common)
    finally:
        queue.put('stop')
        monitor_thread.join()


def display_top(snapshot, key_type='lineno', limit=3):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)

    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace "/path/to/module/file.py" with "module/file.py"
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print("#%s: %s:%s: %.1f KiB"
              % (index, filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print('    %s' % line)

    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))


main()

The resource module lets you check the current memory usage, and save the snapshot from the peak memory usage. The queue lets the main thread tell the memory monitor thread when to print its report and shut down. When it runs, it shows the memory being used by the list() call:

2018-05-29 10:34:34.441334 max RSS 10188
2018-05-29 10:34:36.475707 max RSS 23588
2018-05-29 10:34:36.616524 max RSS 38104
2018-05-29 10:34:36.772978 max RSS 45924
2018-05-29 10:34:36.929688 max RSS 46824
2018-05-29 10:34:37.087554 max RSS 46852
Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)]
2018-05-29 10:34:56.281262
Top 3 lines
#1: scratches/scratch.py:36: 6527.0 KiB
    words = list(words)
#2: scratches/scratch.py:38: 16.4 KiB
    prefix = word[:3]
#3: scratches/scratch.py:39: 10.1 KiB
    counts[prefix] += 1
19 other: 10.8 KiB
Total allocated size: 6564.3 KiB

If you’re on Linux, you may find /proc/self/statm more useful than the resource module.
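
For reference, here is a minimal sketch (not part of the original answer) that reads /proc/self/statm directly; its fields are reported in pages (size, resident, shared, text, lib, data, dt), so they are converted here with the page size from resource.getpagesize():

import resource

def statm_kib():
    # /proc/self/statm fields are counts of pages, not bytes
    page_kib = resource.getpagesize() / 1024
    with open('/proc/self/statm') as f:
        size, resident, shared = (int(x) for x in f.read().split()[:3])
    return {'vsize_kib': size * page_kib,
            'rss_kib': resident * page_kib,
            'shared_kib': shared * page_kib}

print(statm_kib())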


回答 2

如果只想查看对象的内存使用情况,(回答其他问题

有一个名为 Pympler 的软件包,其中包含 asizeof 模块。

用法如下:

from pympler import asizeof
asizeof.asizeof(my_object)

与 sys.getsizeof 不同,它适用于您自己创建的对象。

>>> asizeof.asizeof(tuple('bcd'))
200
>>> asizeof.asizeof({'foo': 'bar', 'baz': 'bar'})
400
>>> asizeof.asizeof({})
280
>>> asizeof.asizeof({'foo':'bar'})
360
>>> asizeof.asizeof('foo')
40
>>> asizeof.asizeof(Bar())
352
>>> asizeof.asizeof(Bar().__dict__)
280
>>> help(asizeof.asizeof)
Help on function asizeof in module pympler.asizeof:

asizeof(*objs, **opts)
    Return the combined size in bytes of all objects passed as positional arguments.

If you only want to look at the memory usage of an object, (answer to other question)

There is a module called Pympler which contains the asizeof module.

Use as follows:

from pympler import asizeof
asizeof.asizeof(my_object)

Unlike sys.getsizeof, it works for your self-created objects.

>>> asizeof.asizeof(tuple('bcd'))
200
>>> asizeof.asizeof({'foo': 'bar', 'baz': 'bar'})
400
>>> asizeof.asizeof({})
280
>>> asizeof.asizeof({'foo':'bar'})
360
>>> asizeof.asizeof('foo')
40
>>> asizeof.asizeof(Bar())
352
>>> asizeof.asizeof(Bar().__dict__)
280
>>> help(asizeof.asizeof)
Help on function asizeof in module pympler.asizeof:

asizeof(*objs, **opts)
    Return the combined size in bytes of all objects passed as positional arguments.

回答 3

披露:

  • 仅适用于Linux
  • 报告当前进程整体使用的内存,而不是其中单个函数使用的内存

但由于它的简单性,它很不错:

import resource
def using(point=""):
    usage=resource.getrusage(resource.RUSAGE_SELF)
    return '''%s: usertime=%s systime=%s mem=%s mb
           '''%(point,usage[0],usage[1],
                usage[2]/1024.0 )

只需在您想查看内存情况的位置插入 using("Label") 即可。例如

print(using("before"))
wrk = ["wasting mem"] * 1000000
print(using("after"))

>>> before: usertime=2.117053 systime=1.703466 mem=53.97265625 mb
>>> after: usertime=2.12023 systime=1.70708 mem=60.8828125 mb

Disclosure:

  • Applicable on Linux only
  • Reports memory used by the current process as a whole, not individual functions within

But nice because of its simplicity:

import resource
def using(point=""):
    usage=resource.getrusage(resource.RUSAGE_SELF)
    return '''%s: usertime=%s systime=%s mem=%s mb
           '''%(point,usage[0],usage[1],
                usage[2]/1024.0 )

Just insert using("Label") where you want to see what’s going on. For example

print(using("before"))
wrk = ["wasting mem"] * 1000000
print(using("after"))

>>> before: usertime=2.117053 systime=1.703466 mem=53.97265625 mb
>>> after: usertime=2.12023 systime=1.70708 mem=60.8828125 mb

回答 4

在我看来,既然已接受的答案以及投票数第二高的答案都存在一些问题,所以我想再提供一个基于Ihor B.答案的答案,并进行了一些微小但重要的修改。

该解决方案允许您以两种方式进行分析:一种是用 profile 函数包装某个函数调用并调用它,另一种是用 @profile 装饰器装饰您的函数/方法。

当您想分析某些第三方代码而不改动其源代码时,第一种技术很有用;而第二种技术则更“干净”一些,当您不介意修改想要分析的函数/方法的源代码时效果更好。

我还修改了输出,以便获得RSS,VMS和共享内存。我不太关心“之前”和“之后”的值,只关心增量,所以我删除了那些值(如果您要与Ihor B.的答案进行比较)。

分析代码

# profile.py
import time
import os
import psutil
import inspect


def elapsed_since(start):
    #return time.strftime("%H:%M:%S", time.gmtime(time.time() - start))
    elapsed = time.time() - start
    if elapsed < 1:
        return str(round(elapsed*1000,2)) + "ms"
    if elapsed < 60:
        return str(round(elapsed, 2)) + "s"
    if elapsed < 3600:
        return str(round(elapsed/60, 2)) + "min"
    else:
        return str(round(elapsed / 3600, 2)) + "hrs"


def get_process_memory():
    process = psutil.Process(os.getpid())
    mi = process.memory_info()
    return mi.rss, mi.vms, mi.shared


def format_bytes(bytes):
    if abs(bytes) < 1000:
        return str(bytes)+"B"
    elif abs(bytes) < 1e6:
        return str(round(bytes/1e3,2)) + "kB"
    elif abs(bytes) < 1e9:
        return str(round(bytes / 1e6, 2)) + "MB"
    else:
        return str(round(bytes / 1e9, 2)) + "GB"


def profile(func, *args, **kwargs):
    def wrapper(*args, **kwargs):
        rss_before, vms_before, shared_before = get_process_memory()
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        rss_after, vms_after, shared_after = get_process_memory()
        print("Profiling: {:>20}  RSS: {:>8} | VMS: {:>8} | SHR {"
              ":>8} | time: {:>8}"
            .format("<" + func.__name__ + ">",
                    format_bytes(rss_after - rss_before),
                    format_bytes(vms_after - vms_before),
                    format_bytes(shared_after - shared_before),
                    elapsed_time))
        return result
    if inspect.isfunction(func):
        return wrapper
    elif inspect.ismethod(func):
        return wrapper(*args,**kwargs)

用法示例,假设上面的代码另存为profile.py

from profile import profile
from time import sleep
from sklearn import datasets # Just an example of 3rd party function call


# Method 1
run_profiling = profile(datasets.load_digits)
data = run_profiling()

# Method 2
@profile
def my_function():
    # do some stuff
    a_list = []
    for i in range(1,100000):
        a_list.append(i)
    return a_list


res = my_function()

这将导致输出类似于以下内容:

Profiling:        <load_digits>  RSS:   5.07MB | VMS:   4.91MB | SHR  73.73kB | time:  89.99ms
Profiling:        <my_function>  RSS:   1.06MB | VMS:   1.35MB | SHR       0B | time:   8.43ms

重要的最后几点注意事项:

  1. 请记住,这种剖析方法仅是近似的,因为计算机上可能会发生许多其他事情。由于垃圾收集和其他因素,增量甚至可能为零。
  2. 由于某些未知的原因,非常短的函数调用(例如1或2毫秒)显示的内存使用量为零。我怀疑这是硬件/操作系统(在装有Linux的普通笔记本电脑上测试过)在内存统计信息更新频率方面的某种限制。
  3. 为了使示例简单,我没有使用任何函数参数,但它们应该按预期工作,即用 profile(my_function, arg) 来分析 my_function(arg)

Since the accepted answer and also the next highest voted answer have, in my opinion, some problems, I’d like to offer one more answer that is based closely on Ihor B.’s answer with some small but important modifications.

This solution allows you to run profiling on either by wrapping a function call with the profile function and calling it, or by decorating your function/method with the @profile decorator.

The first technique is useful when you want to profile some third-party code without messing with its source, whereas the second technique is a bit “cleaner” and works better when you don’t mind modifying the source of the function/method you want to profile.

I’ve also modified the output, so that you get RSS, VMS, and shared memory. I don’t care much about the “before” and “after” values, but only the delta, so I removed those (if you’re comparing to Ihor B.’s answer).

Profiling code

# profile.py
import time
import os
import psutil
import inspect


def elapsed_since(start):
    #return time.strftime("%H:%M:%S", time.gmtime(time.time() - start))
    elapsed = time.time() - start
    if elapsed < 1:
        return str(round(elapsed*1000,2)) + "ms"
    if elapsed < 60:
        return str(round(elapsed, 2)) + "s"
    if elapsed < 3600:
        return str(round(elapsed/60, 2)) + "min"
    else:
        return str(round(elapsed / 3600, 2)) + "hrs"


def get_process_memory():
    process = psutil.Process(os.getpid())
    mi = process.memory_info()
    return mi.rss, mi.vms, mi.shared


def format_bytes(bytes):
    if abs(bytes) < 1000:
        return str(bytes)+"B"
    elif abs(bytes) < 1e6:
        return str(round(bytes/1e3,2)) + "kB"
    elif abs(bytes) < 1e9:
        return str(round(bytes / 1e6, 2)) + "MB"
    else:
        return str(round(bytes / 1e9, 2)) + "GB"


def profile(func, *args, **kwargs):
    def wrapper(*args, **kwargs):
        rss_before, vms_before, shared_before = get_process_memory()
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        rss_after, vms_after, shared_after = get_process_memory()
        print("Profiling: {:>20}  RSS: {:>8} | VMS: {:>8} | SHR {"
              ":>8} | time: {:>8}"
            .format("<" + func.__name__ + ">",
                    format_bytes(rss_after - rss_before),
                    format_bytes(vms_after - vms_before),
                    format_bytes(shared_after - shared_before),
                    elapsed_time))
        return result
    if inspect.isfunction(func):
        return wrapper
    elif inspect.ismethod(func):
        return wrapper(*args,**kwargs)

Example usage, assuming the above code is saved as profile.py:

from profile import profile
from time import sleep
from sklearn import datasets # Just an example of 3rd party function call


# Method 1
run_profiling = profile(datasets.load_digits)
data = run_profiling()

# Method 2
@profile
def my_function():
    # do some stuff
    a_list = []
    for i in range(1,100000):
        a_list.append(i)
    return a_list


res = my_function()

This should result in output similar to the below:

Profiling:        <load_digits>  RSS:   5.07MB | VMS:   4.91MB | SHR  73.73kB | time:  89.99ms
Profiling:        <my_function>  RSS:   1.06MB | VMS:   1.35MB | SHR       0B | time:   8.43ms

A couple of important final notes:

  1. Keep in mind, this method of profiling is only going to be approximate, since lots of other stuff might be happening on the machine. Due to garbage collection and other factors, the deltas might even be zero.
  2. For some unknown reason, very short function calls (e.g. 1 or 2 ms) show up with zero memory usage. I suspect this is some limitation of the hardware/OS (tested on basic laptop with Linux) on how often memory statistics are updated.
  3. To keep the examples simple, I didn’t use any function arguments, but they should work as one would expect, i.e. profile(my_function, arg) to profile my_function(arg)

回答 5

下面是一个简单的函数装饰器,它可以跟踪进程在函数调用之前和之后消耗的内存量,以及两者之间的差值:

import time
import os
import psutil


def elapsed_since(start):
    return time.strftime("%H:%M:%S", time.gmtime(time.time() - start))


def get_process_memory():
    process = psutil.Process(os.getpid())
    # 与下方英文版本一致,使用 psutil 的 memory_info()(旧的 get_memory_info() 在新版 psutil 中已不可用)
    return process.memory_info().rss


def profile(func):
    def wrapper(*args, **kwargs):
        mem_before = get_process_memory()
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        mem_after = get_process_memory()
        print("{}: memory before: {:,}, after: {:,}, consumed: {:,}; exec time: {}".format(
            func.__name__,
            mem_before, mem_after, mem_after - mem_before,
            elapsed_time))
        return result
    return wrapper

这是我的博客,描述了所有详细信息。(已归档的链接

Below is a simple function decorator which allows to track how much memory the process consumed before the function call, after the function call, and what is the difference:

import time
import os
import psutil
 
 
def elapsed_since(start):
    return time.strftime("%H:%M:%S", time.gmtime(time.time() - start))
 
 
def get_process_memory():
    process = psutil.Process(os.getpid())
    mem_info = process.memory_info()
    return mem_info.rss
 
 
def profile(func):
    def wrapper(*args, **kwargs):
        mem_before = get_process_memory()
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        mem_after = get_process_memory()
        print("{}: memory before: {:,}, after: {:,}, consumed: {:,}; exec time: {}".format(
            func.__name__,
            mem_before, mem_after, mem_after - mem_before,
            elapsed_time))
        return result
    return wrapper
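
For example, a minimal usage sketch of the decorator above (the decorated function is just a placeholder):

@profile
def build_big_list():
    return [i * i for i in range(1000000)]

data = build_big_list()
# prints one line with the function name, the memory before and after the call
# (in bytes), the consumed delta, and the elapsed time as HH:MM:SS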

Here is my blog which describes all the details. (archived link)


回答 6

也许有帮助:
< 参见其他 >

pip install gprof2dot
sudo apt-get install graphviz

gprof2dot -f pstats profile_for_func1_001 | dot -Tpng -o profile.png

def profileit(name):
    """
    @profileit("profile_for_func1_001")
    """
    def inner(func):
        def wrapper(*args, **kwargs):
            prof = cProfile.Profile()
            retval = prof.runcall(func, *args, **kwargs)
            # Note use of name from outer scope
            prof.dump_stats(name)
            return retval
        return wrapper
    return inner

@profileit("profile_for_func1_001")
def func1(...)

maybe it help:
<see additional>

pip install gprof2dot
sudo apt-get install graphviz

gprof2dot -f pstats profile_for_func1_001 | dot -Tpng -o profile.png

def profileit(name):
    """
    @profileit("profile_for_func1_001")
    """
    def inner(func):
        def wrapper(*args, **kwargs):
            prof = cProfile.Profile()
            retval = prof.runcall(func, *args, **kwargs)
            # Note use of name from outer scope
            prof.dump_stats(name)
            return retval
        return wrapper
    return inner

@profileit("profile_for_func1_001")
def func1(...)

回答 7

一个简单的示例,使用 memory_profiler 计算一段代码/一个函数的内存使用量,同时返回该函数的结果:

import memory_profiler as mp

def fun(n):
    tmp = []
    for i in range(n):
        tmp.extend(list(range(i*i)))
    return "XXXXX"

在运行代码之前计算内存使用量,然后在代码执行期间计算最大使用量:

start_mem = mp.memory_usage(max_usage=True)
res = mp.memory_usage(proc=(fun, [100]), max_usage=True, retval=True) 
print('start mem', start_mem)
print('max mem', res[0][0])
print('used mem', res[0][0]-start_mem)
print('fun output', res[1])

计算函数运行期间各采样点的内存使用情况:

res = mp.memory_usage((fun, [100]), interval=.001, retval=True)
print('min mem', min(res[0]))
print('max mem', max(res[0]))
print('used mem', max(res[0])-min(res[0]))
print('fun output', res[1])

致谢:@skeept

A simple example to calculate the memory usage of a block of code / a function using memory_profiler, while returning the result of the function:

import memory_profiler as mp

def fun(n):
    tmp = []
    for i in range(n):
        tmp.extend(list(range(i*i)))
    return "XXXXX"

calculate memory usage before running the code then calculate max usage during the code:

start_mem = mp.memory_usage(max_usage=True)
res = mp.memory_usage(proc=(fun, [100]), max_usage=True, retval=True) 
print('start mem', start_mem)
print('max mem', res[0][0])
print('used mem', res[0][0]-start_mem)
print('fun output', res[1])

calculate usage in sampling points while running function:

res = mp.memory_usage((fun, [100]), interval=.001, retval=True)
print('min mem', min(res[0]))
print('max mem', max(res[0]))
print('used mem', max(res[0])-min(res[0]))
print('fun output', res[1])

Credits: @skeept


如何在Python中显式释放内存?

问题:如何在Python中显式释放内存?

我编写了一个Python程序,该程序作用于大型输入文件,以创建代表三角形的数百万个对象。该算法是:

  1. 读取输入文件
  2. 处理文件并创建一个三角形列表,以其顶点表示
  3. 以OFF格式输出顶点:顶点列表,后跟三角形列表。三角形由顶点列表中的索引表示

在打印出三角形之前先打印出完整的顶点列表的OFF要求意味着在将输出写入文件之前,必须将三角形的列表保留在内存中。同时,由于列表的大小,我遇到了内存错误。

告诉Python我不再需要某些数据并且可以释放它们的最佳方法是什么?

I wrote a Python program that acts on a large input file to create a few million objects representing triangles. The algorithm is:

  1. read an input file
  2. process the file and create a list of triangles, represented by their vertices
  3. output the vertices in the OFF format: a list of vertices followed by a list of triangles. The triangles are represented by indices into the list of vertices

The requirement of OFF that I print out the complete list of vertices before I print out the triangles means that I have to hold the list of triangles in memory before I write the output to file. In the meanwhile I’m getting memory errors because of the sizes of the lists.

What is the best way to tell Python that I no longer need some of the data, and it can be freed?


回答 0

根据 Python 官方文档,您可以使用 gc.collect() 强制垃圾回收器释放未被引用的内存。例如:

import gc
gc.collect()

According to Python Official Documentation, you can force the Garbage Collector to release unreferenced memory with gc.collect(). Example:

import gc
gc.collect()

回答 1

不幸的是(取决于您的 Python 版本和发行版),某些类型的对象使用“空闲列表”。这是一种简洁的局部优化,但可能导致内存碎片:越来越多的内存被“专门留给”某一特定类型的对象,因而无法回到“公共池”中供其他用途使用。

要确保大量但临时的内存使用在结束后确实把所有资源返还给系统,唯一真正可靠的方法是把这些操作放到一个子进程中:由它完成占用大量内存的工作,然后终止。在这种情况下,操作系统会完成它的工作,并乐意回收子进程可能吞掉的所有资源。幸运的是,multiprocessing 模块使这种操作(过去相当痛苦)在现代版本的 Python 中不再那么糟糕。

在您的用例中,让子进程累积一些结果并确保这些结果对主进程可用的最佳方法,似乎是使用半临时文件(我所说的“半临时”,指的不是那种关闭后会自动消失的文件,而是您用完后再显式删除的普通文件)。

Unfortunately (depending on your version and release of Python) some types of objects use “free lists” which are a neat local optimization but may cause memory fragmentation, specifically by making more and more memory “earmarked” for only objects of a certain type and thereby unavailable to the “general fund”.

The only really reliable way to ensure that a large but temporary use of memory DOES return all resources to the system when it’s done, is to have that use happen in a subprocess, which does the memory-hungry work then terminates. Under such conditions, the operating system WILL do its job, and gladly recycle all the resources the subprocess may have gobbled up. Fortunately, the multiprocessing module makes this kind of operation (which used to be rather a pain) not too bad in modern versions of Python.

In your use case, it seems that the best way for the subprocesses to accumulate some results and yet ensure those results are available to the main process is to use semi-temporary files (by semi-temporary I mean, NOT the kind of files that automatically go away when closed, just ordinary files that you explicitly delete when you’re all done with them).
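
As a minimal sketch of that advice (not from the original answer; the input path and the work done are placeholders), the memory-hungry step can run in a multiprocessing worker that writes its result to an ordinary file and then exits, at which point the operating system reclaims everything the worker allocated:

import multiprocessing
import tempfile

def build_triangles(input_path, out_path):
    # placeholder for the memory-hungry work; everything allocated here
    # is returned to the OS when this worker process exits
    with open(out_path, 'w') as out:
        out.write('triangles derived from %s\n' % input_path)

if __name__ == '__main__':
    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.close()
    worker = multiprocessing.Process(target=build_triangles,
                                     args=('input.data', tmp.name))
    worker.start()
    worker.join()  # the worker's memory is gone once it has exited
    with open(tmp.name) as f:
        print(f.read())  # the parent only reads the (small) result file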


回答 2

del 语句可能有用,但 IIRC 它并不能保证释放内存。相关文档在这里……以及为什么内存没有被释放的解释在这里。

我听说Linux和Unix类型系统上的人们分叉python进程来做一些工作,获得结果然后杀死它。

本文对Python垃圾收集器进行了说明,但我认为缺乏内存控制是托管内存的缺点

The del statement might be of use, but IIRC it isn’t guaranteed to free the memory. The docs are here … and a why it isn’t released is here.

I have heard people on Linux and Unix-type systems forking a python process to do some work, getting results and then killing it.

This article has notes on the Python garbage collector, but I think lack of memory control is the downside to managed memory


回答 3

Python是垃圾回收的,因此,如果减小列表的大小,它将回收内存。您还可以使用“ del”语句完全摆脱变量:

biglist = [blah,blah,blah]
#...
del biglist

Python is garbage-collected, so if you reduce the size of your list, it will reclaim memory. You can also use the “del” statement to get rid of a variable completely:

biglist = [blah,blah,blah]
#...
del biglist

回答 4

您不能显式释放内存。您需要做的是确保您不保留对对象的引用。然后将对它们进行垃圾回收,从而释放内存。

对于您的情况,当您需要大型列表时,通常需要重新组织代码,通常使用生成器/迭代器。这样,您根本就不需要在内存中存储大型列表。

http://www.prasannatech.net/2009/07/introduction-python-generators.html

You can’t explicitly free memory. What you need to do is to make sure you don’t keep references to objects. They will then be garbage collected, freeing the memory.

In your case, when you need large lists, you typically need to reorganize the code, typically using generators/iterators instead. That way you don’t need to have the large lists in memory at all.

http://www.prasannatech.net/2009/07/introduction-python-generators.html
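
As a small illustration of that advice (not part of the original answer), a generator produces items one at a time instead of materializing the whole list in memory:

# list version: holds every value in memory at once
def squares_list(n):
    return [i * i for i in range(n)]

# generator version: yields one value at a time, so memory stays flat
def squares_gen(n):
    for i in range(n):
        yield i * i

print(sum(squares_gen(10000000)))  # never builds the full list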


回答 5

(del 可以是您的朋友,因为当没有其他引用指向某个对象时,它会把该对象标记为可删除。不过,CPython 解释器通常会保留这块内存供以后使用,因此您的操作系统可能看不到“已释放”的内存。)

通过使用更紧凑的数据结构,也许您一开始就不会遇到任何内存问题。因此,数字列表的存储效率比标准array模块或第三方numpy模块使用的格式低得多。通过将顶点放在NumPy 3xN数组中并将三角形放在N元素数组中,可以节省内存。

(del can be your friend, as it marks objects as being deletable when there no other references to them. Now, often the CPython interpreter keeps this memory for later use, so your operating system might not see the “freed” memory.)

Maybe you would not run into any memory problem in the first place by using a more compact structure for your data. Thus, lists of numbers are much less memory-efficient than the format used by the standard array module or the third-party numpy module. You would save memory by putting your vertices in a NumPy 3xN array and your triangles in an N-element array.
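
For illustration, a rough sketch of that compact layout (assuming NumPy is installed; the exact byte counts vary by platform and Python version):

import sys
import numpy as np

n = 100000
# plain Python: a list of [x, y, z] float lists
py_vertices = [[0.0, 1.0, 2.0] for _ in range(n)]
# compact: one contiguous 3xN float64 array
np_vertices = np.zeros((3, n), dtype=np.float64)

print(sys.getsizeof(py_vertices))  # size of the outer list only, not the inner lists
print(np_vertices.nbytes)          # 3 * n * 8 bytes of actual vertex data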


回答 6

从文件读取图(graph)时,我遇到了类似的问题。处理过程包括计算一个 200 000×200 000 的浮点矩阵(一次一行),它无法全部放入内存。尝试在两次计算之间使用 gc.collect() 释放内存,解决了问题中与内存相关的部分,但带来了性能问题:我不知道为什么,但即使使用的内存量保持不变,每次调用 gc.collect() 都比前一次花费更多时间,因此垃圾收集很快就占用了大部分计算时间。

为了同时解决内存和性能问题,我改用了在某处读到的一个多线程技巧(抱歉,我已经找不到相关的帖子了)。以前,我在一个大的 for 循环中读取文件的每一行并进行处理,同时每隔一段时间运行一次 gc.collect() 来释放内存。现在,我改为调用一个函数,由它在新线程中读取并处理文件的一个块。线程结束后,内存会自动释放,也不会出现那个奇怪的性能问题。

实际上它是这样的:

from dask import delayed  # this module wraps the multithreading
def f(storage, index, chunk_size):  # the processing function
    # read the chunk of size chunk_size starting at index in the file
    # process it using data in storage if needed
    # append data needed for further computations  to storage 
    return storage

partial_result = delayed([])  # put into the delayed() the constructor for your data structure
# I personally use "delayed(nx.Graph())" since I am creating a networkx Graph
chunk_size = 100  # ideally you want this as big as possible while still enabling the computations to fit in memory
for index in range(0, len(file), chunk_size):
    # we indicate to dask that we want to apply f to the parameters partial_result, index, chunk_size
    partial_result = delayed(f)(partial_result, index, chunk_size)

    # no computations are done yet !
    # dask will spawn a thread to run f(partial_result, index, chunk_size) once we call partial_result.compute()
    # passing the previous "partial_result" variable in the parameters assures a chunk will only be processed after the previous one is done
    # it also allows you to use the results of the processing of the previous chunks in the file if needed

# this launches all the computations
result = partial_result.compute()

# one thread is spawned for each "delayed" one at a time to compute its result
# dask then closes the thread, which solves the memory freeing issue
# the strange performance issue with gc.collect() is also avoided

I had a similar problem in reading a graph from a file. The processing included the computation of a 200 000×200 000 float matrix (one line at a time) that did not fit into memory. Trying to free the memory between computations using gc.collect() fixed the memory-related aspect of the problem but it resulted in performance issues: I don’t know why but even though the amount of used memory remained constant, each new call to gc.collect() took some more time than the previous one. So quite quickly the garbage collecting took most of the computation time.

To fix both the memory and performance issues I switched to the use of a multithreading trick I read once somewhere (I’m sorry, I cannot find the related post anymore). Before I was reading each line of the file in a big for loop, processing it, and running gc.collect() every once and a while to free memory space. Now I call a function that reads and processes a chunk of the file in a new thread. Once the thread ends, the memory is automatically freed without the strange performance issue.

Practically it works like this:

from dask import delayed  # this module wraps the multithreading
def f(storage, index, chunk_size):  # the processing function
    # read the chunk of size chunk_size starting at index in the file
    # process it using data in storage if needed
    # append data needed for further computations  to storage 
    return storage

partial_result = delayed([])  # put into the delayed() the constructor for your data structure
# I personally use "delayed(nx.Graph())" since I am creating a networkx Graph
chunk_size = 100  # ideally you want this as big as possible while still enabling the computations to fit in memory
for index in range(0, len(file), chunk_size):
    # we indicate to dask that we want to apply f to the parameters partial_result, index, chunk_size
    partial_result = delayed(f)(partial_result, index, chunk_size)

    # no computations are done yet !
    # dask will spawn a thread to run f(partial_result, index, chunk_size) once we call partial_result.compute()
    # passing the previous "partial_result" variable in the parameters assures a chunk will only be processed after the previous one is done
    # it also allows you to use the results of the processing of the previous chunks in the file if needed

# this launches all the computations
result = partial_result.compute()

# one thread is spawned for each "delayed" one at a time to compute its result
# dask then closes the thread, which solves the memory freeing issue
# the strange performance issue with gc.collect() is also avoided

回答 7

其他人已经发布了一些方法,可以“哄骗” Python 解释器释放内存(或者从一开始就避免内存问题)。您应该先试试他们的想法。不过,我觉得有必要直接回答您的问题。

实际上并没有直接告诉 Python 释放内存的方法。事实是,如果您想要这么底层的控制,就必须用 C 或 C++ 编写扩展。

也就是说,有一些工具可以帮助您:

Others have posted some ways that you might be able to “coax” the Python interpreter into freeing the memory (or otherwise avoid having memory problems). Chances are you should try their ideas out first. However, I feel it important to give you a direct answer to your question.

There isn’t really any way to directly tell Python to free memory. The fact of that matter is that if you want that low a level of control, you’re going to have to write an extension in C or C++.

That said, there are some tools to help with this:


回答 8

如果您不关心顶点重用,则可以有两个输出文件-一个用于顶点,一个用于三角形。完成后,将三角形文件附加到顶点文件。

If you don’t care about vertex reuse, you could have two output files–one for vertices and one for triangles. Then append the triangle file to the vertex file when you are done.
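
A minimal sketch of that two-file approach (the file names, the record format, and the placeholder geometry are illustrative, not from the original answer):

# write vertices and triangles to separate files while processing
with open('vertices.tmp', 'w') as vf, open('triangles.tmp', 'w') as tf:
    for i in range(4):  # placeholder geometry
        vf.write('%f %f %f\n' % (i, i, i))
    tf.write('3 0 1 2\n3 0 2 3\n')

# then stitch them together: header, vertex list, triangle list
with open('model.off', 'w') as out, \
     open('vertices.tmp') as vf, open('triangles.tmp') as tf:
    out.write('OFF\n4 2 0\n')  # vertex/face/edge counts are known by now
    out.write(vf.read())       # vertices first
    out.write(tf.read())       # then append the triangle file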


如何确定Python中对象的大小?

问题:如何确定Python中对象的大小?

我想知道如何在Python中获取对象的大小,例如字符串,整数等。

相关问题:Python列表(元组)中每个元素有多少个字节?

我使用的 XML 文件包含指定值大小的 size 字段。我必须解析这个 XML 并编写相应的代码。当我想更改某个特定字段的值时,我会检查该值的 size 字段。在这里,我想比较我将要输入的新值的大小是否与 XML 中的相同。我需要检查新值的大小。如果是字符串,我可以用它的长度。但如果是 int、float 等,我就不知道该怎么办了。

I want to know how to get size of objects like a string, integer, etc. in Python.

Related question: How many bytes per element are there in a Python list (tuple)?

I am using an XML file which contains size fields that specify the size of value. I must parse this XML and do my coding. When I want to change the value of a particular field, I will check the size field of that value. Here I want to compare whether the new value that I’m going to enter is of the same size as in XML. I need to check the size of new value. In case of a string I can say it’s the length. But in case of int, float, etc. I am confused.


回答 0

只需使用 sys 模块中定义的 sys.getsizeof 函数即可。

sys.getsizeof(object[, default])

返回对象的大小(以字节为单位)。该对象可以是任何类型的对象。所有内置对象都将返回正确的结果,但是对于第三方扩展,这不一定成立,因为它是特定于实现的。

default 参数允许定义一个值:如果对象类型没有提供检索其大小的方法、从而会引发 TypeError,则返回该值。

getsizeof 会调用对象的 __sizeof__ 方法;如果对象由垃圾收集器管理,还会加上额外的垃圾收集器开销。

用法示例,在python 3.0中:

>>> import sys
>>> x = 2
>>> sys.getsizeof(x)
24
>>> sys.getsizeof(sys.getsizeof)
32
>>> sys.getsizeof('this')
38
>>> sys.getsizeof('this also')
48

如果您使用的是 2.6 之前的 Python、没有 sys.getsizeof,则可以改用这个扩展模块,不过我从未用过它。

Just use the sys.getsizeof function defined in the sys module.

sys.getsizeof(object[, default]):

Return the size of an object in bytes. The object can be any type of object. All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific.

The default argument allows to define a value which will be returned if the object type does not provide means to retrieve the size and would cause a TypeError.

getsizeof calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.

Usage example, in python 3.0:

>>> import sys
>>> x = 2
>>> sys.getsizeof(x)
24
>>> sys.getsizeof(sys.getsizeof)
32
>>> sys.getsizeof('this')
38
>>> sys.getsizeof('this also')
48

If you are in python < 2.6 and don’t have sys.getsizeof you can use this extensive module instead. Never used it though.


回答 1

如何确定Python中对象的大小?

答案“仅使用sys.getsizeof”不是一个完整的答案。

该答案对内置对象直接适用,但它没有考虑这些对象可能包含的内容,具体来说,就是自定义对象、元组、列表、字典和集合等类型所包含的内容。它们可以相互包含实例,也可以包含数字、字符串和其他对象。

更完整的答案

使用 Anaconda 发行版中的 64 位 Python 3.6 和 sys.getsizeof,我确定了以下对象的最小大小。请注意,set 和 dict 会预分配空间,因此空对象要等到超过某个设定的数量后才会再次增长(该数量可能因语言的具体实现而异):

Python 3:

Empty
Bytes  type        scaling notes
28     int         +4 bytes about every 30 powers of 2
37     bytes       +1 byte per additional byte
49     str         +1-4 per additional character (depending on max width)
48     tuple       +8 per additional item
64     list        +8 for each additional
224    set         5th increases to 736; 21st, 2272; 85th, 8416; 341st, 32992
240    dict        6th increases to 368; 22nd, 1184; 43rd, 2280; 86th, 4704; 171st, 9320
136    func def    does not include default args and other attrs
1056   class def   no slots 
56     class inst  has a __dict__ attr, same scaling as dict above
888    class def   with slots
16     __slots__   seems to store in mutable tuple-like structure
                   first slot grows to 48, and so on.

您如何解读这些数字?假设您有一个包含 10 个元素的 set。如果每个元素各占 100 字节,那么整个数据结构有多大?set 本身是 736 字节,因为它已经扩容过一次,涨到了 736 字节;然后再加上各元素的大小,总计 1736 字节。

有关函数和类定义的一些警告:

请注意,每个类定义都有一个用于类属性的代理 __dict__ 结构(48 字节)。每个 slot 在类定义中都有一个描述符(类似于 property)。

带 slot 的实例从第一个元素的 48 个字节开始,之后每增加一个元素就增加 8 个字节。只有空的 slotted 对象才是 16 个字节,而一个没有任何数据的实例意义不大。

此外,每个函数定义都有代码对象(可能是文档字符串)和其他可能的属性,甚至是__dict__

还要注意,我们之所以使用 sys.getsizeof(),是因为我们关心的是边际空间使用量,其中包括对象的垃圾回收开销,正如文档所述:

getsizeof() 会调用对象的 __sizeof__ 方法;如果对象由垃圾收集器管理,还会加上额外的垃圾收集器开销。

还要注意,调整列表的大小(例如重复添加到列表中)会使它们预先分配空间,类似于集合和字典。从listobj.c源代码

    /* This over-allocates proportional to the list size, making room
     * for additional growth.  The over-allocation is mild, but is
     * enough to give linear-time amortized behavior over a long
     * sequence of appends() in the presence of a poorly-performing
     * system realloc().
     * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
     * Note: new_allocated won't overflow because the largest possible value
     *       is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t.
     */
    new_allocated = (size_t)newsize + (newsize >> 3) + (newsize < 9 ? 3 : 6);

历史数据

Python 2.7 的分析结果,已通过 guppy.hpy 和 sys.getsizeof 确认:

Bytes  type        empty + scaling notes
24     int         NA
28     long        NA
37     str         + 1 byte per additional character
52     unicode     + 4 bytes per additional character
56     tuple       + 8 bytes per additional item
72     list        + 32 for first, 8 for each additional
232    set         sixth item increases to 744; 22nd, 2280; 86th, 8424
280    dict        sixth item increases to 1048; 22nd, 3352; 86th, 12568 *
120    func def    does not include default args and other attrs
64     class inst  has a __dict__ attr, same scaling as dict above
16     __slots__   class with slots has no dict, seems to store in 
                   mutable tuple-like structure.
904    class def   has a proxy __dict__ structure for class attrs
104    old class   makes sense, less stuff, has real dict though.

请注意,字典(而非集合)在Python 3.6中得到了更紧凑的表示形式

我认为在 64 位机器上,每个附加元素的引用占 8 个字节是很合理的。这 8 个字节指向所包含元素在内存中的位置。如果我没记错的话,Python 2 中 unicode 的每个字符固定占 4 个字节;但在 Python 3 中,str 变成了宽度等于字符最大宽度的 unicode。

(有关插槽的更多信息,请参见此答案

更完整的功能

我们需要一个函数,它能搜索列表、元组、集合、字典、obj.__dict__ 和 obj.__slots__ 中的元素,以及我们可能尚未想到的其他内容。

我们希望依靠 gc.get_referents 来完成这一搜索,因为它在 C 层面运行(因此非常快)。缺点是 get_referents 可能返回冗余的成员,因此我们需要确保不会重复计算。

类,模块和函数是单例-它们在内存中存在一次。我们对它们的大小不太感兴趣,因为我们对此无能为力-它们是程序的一部分。因此,如果碰巧引用了它们,我们将避免计算它们。

我们将使用类型的黑名单,因此我们不将整个程序包括在我们的大小计数中。

import sys
from types import ModuleType, FunctionType
from gc import get_referents

# Custom objects know their class.
# Function objects seem to know way too much, including modules.
# Exclude modules as well.
BLACKLIST = type, ModuleType, FunctionType


def getsize(obj):
    """sum size of object & members."""
    if isinstance(obj, BLACKLIST):
        raise TypeError('getsize() does not take argument of type: '+ str(type(obj)))
    seen_ids = set()
    size = 0
    objects = [obj]
    while objects:
        need_referents = []
        for obj in objects:
            if not isinstance(obj, BLACKLIST) and id(obj) not in seen_ids:
                seen_ids.add(id(obj))
                size += sys.getsizeof(obj)
                need_referents.append(obj)
        objects = get_referents(*need_referents)
    return size

与下面的白名单函数形成对比的是:大多数对象都知道如何为垃圾回收的目的遍历自身(当我们想知道某些对象在内存中有多“昂贵”时,这大致就是我们要找的功能;gc.get_referents 正是利用了它)。但是,如果我们不小心,这种度量的范围会比我们预期的大得多。

例如,函数对创建它们的模块非常了解。

另一个对比点是,作为字典键的字符串通常会被驻留(interned),因此不会重复。检查 id(key) 也能让我们避免重复计数,下一节就会这么做。而黑名单方案则干脆跳过了对字符串键的计数。

白名单类型,递归访问者(旧的实现)

为了不依赖 gc 模块、自己覆盖其中的大多数类型,我编写了下面这个递归函数,尝试估算大多数 Python 对象的大小,包括大多数内置类型、collections 模块中的类型以及自定义类型(不论有没有 slot)。

这种功能可以对要计算内存使用情况的类型进行更细粒度的控制,但存在将类型排除在外的危险:

import sys
from numbers import Number
from collections import Set, Mapping, deque

try: # Python 2
    zero_depth_bases = (basestring, Number, xrange, bytearray)
    iteritems = 'iteritems'
except NameError: # Python 3
    zero_depth_bases = (str, bytes, Number, range, bytearray)
    iteritems = 'items'

def getsize(obj_0):
    """Recursively iterate to sum size of object & members."""
    _seen_ids = set()
    def inner(obj):
        obj_id = id(obj)
        if obj_id in _seen_ids:
            return 0
        _seen_ids.add(obj_id)
        size = sys.getsizeof(obj)
        if isinstance(obj, zero_depth_bases):
            pass # bypass remaining control flow and return
        elif isinstance(obj, (tuple, list, Set, deque)):
            size += sum(inner(i) for i in obj)
        elif isinstance(obj, Mapping) or hasattr(obj, iteritems):
            size += sum(inner(k) + inner(v) for k, v in getattr(obj, iteritems)())
        # Check for custom object instances - may subclass above too
        if hasattr(obj, '__dict__'):
            size += inner(vars(obj))
        if hasattr(obj, '__slots__'): # can have __slots__ with __dict__
            size += sum(inner(getattr(obj, s)) for s in obj.__slots__ if hasattr(obj, s))
        return size
    return inner(obj_0)

我相当随意地测试了它(我应该对其进行单元测试):

>>> getsize(['a', tuple('bcd'), Foo()])
344
>>> getsize(Foo())
16
>>> getsize(tuple('bcd'))
194
>>> getsize(['a', tuple('bcd'), Foo(), {'foo': 'bar', 'baz': 'bar'}])
752
>>> getsize({'foo': 'bar', 'baz': 'bar'})
400
>>> getsize({})
280
>>> getsize({'foo':'bar'})
360
>>> getsize('foo')
40
>>> class Bar():
...     def baz():
...         pass
>>> getsize(Bar())
352
>>> getsize(Bar().__dict__)
280
>>> sys.getsizeof(Bar())
72
>>> getsize(Bar.__dict__)
872
>>> sys.getsizeof(Bar.__dict__)
280

这个实现对类定义和函数定义的处理并不完整,因为我们没有追踪它们的所有属性;但由于它们在进程的内存中应该只存在一次,它们的大小实际上并不太重要。

How do I determine the size of an object in Python?

The answer, “Just use sys.getsizeof” is not a complete answer.

That answer does work for builtin objects directly, but it does not account for what those objects may contain, specifically, what types, such as custom objects, tuples, lists, dicts, and sets contain. They can contain instances of each other, as well as numbers, strings and other objects.

A More Complete Answer

Using 64 bit Python 3.6 from the Anaconda distribution, with sys.getsizeof, I have determined the minimum size of the following objects, and note that sets and dicts preallocate space so empty ones don’t grow again until after a set amount (which may vary by implementation of the language):

Python 3:

Empty
Bytes  type        scaling notes
28     int         +4 bytes about every 30 powers of 2
37     bytes       +1 byte per additional byte
49     str         +1-4 per additional character (depending on max width)
48     tuple       +8 per additional item
64     list        +8 for each additional
224    set         5th increases to 736; 21st, 2272; 85th, 8416; 341st, 32992
240    dict        6th increases to 368; 22nd, 1184; 43rd, 2280; 86th, 4704; 171st, 9320
136    func def    does not include default args and other attrs
1056   class def   no slots 
56     class inst  has a __dict__ attr, same scaling as dict above
888    class def   with slots
16     __slots__   seems to store in mutable tuple-like structure
                   first slot grows to 48, and so on.

How do you interpret this? Well say you have a set with 10 items in it. If each item is 100 bytes each, how big is the whole data structure? The set is 736 itself because it has sized up one time to 736 bytes. Then you add the size of the items, so that’s 1736 bytes in total

Some caveats for function and class definitions:

Note each class definition has a proxy __dict__ (48 bytes) structure for class attrs. Each slot has a descriptor (like a property) in the class definition.

Slotted instances start out with 48 bytes on their first element, and increase by 8 each additional. Only empty slotted objects have 16 bytes, and an instance with no data makes very little sense.

Also, each function definition has code objects, maybe docstrings, and other possible attributes, even a __dict__.

Also note that we use sys.getsizeof() because we care about the marginal space usage, which includes the garbage collection overhead for the object, from the docs:

getsizeof() calls the object’s __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.

Also note that resizing lists (e.g. repetitively appending to them) causes them to preallocate space, similarly to sets and dicts. From the listobj.c source code:

    /* This over-allocates proportional to the list size, making room
     * for additional growth.  The over-allocation is mild, but is
     * enough to give linear-time amortized behavior over a long
     * sequence of appends() in the presence of a poorly-performing
     * system realloc().
     * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
     * Note: new_allocated won't overflow because the largest possible value
     *       is PY_SSIZE_T_MAX * (9 / 8) + 6 which always fits in a size_t.
     */
    new_allocated = (size_t)newsize + (newsize >> 3) + (newsize < 9 ? 3 : 6);
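
As a small illustration of that growth pattern (exact numbers depend on the Python version and platform), you can watch sys.getsizeof jump only at the appends where the list over-allocates:

import sys

lst = []
last = sys.getsizeof(lst)
print(0, last)
for _ in range(32):
    lst.append(None)
    size = sys.getsizeof(lst)
    if size != last:  # the size only changes when the list reallocates
        print(len(lst), size)
        last = size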

Historical data

Python 2.7 analysis, confirmed with guppy.hpy and sys.getsizeof:

Bytes  type        empty + scaling notes
24     int         NA
28     long        NA
37     str         + 1 byte per additional character
52     unicode     + 4 bytes per additional character
56     tuple       + 8 bytes per additional item
72     list        + 32 for first, 8 for each additional
232    set         sixth item increases to 744; 22nd, 2280; 86th, 8424
280    dict        sixth item increases to 1048; 22nd, 3352; 86th, 12568 *
120    func def    does not include default args and other attrs
64     class inst  has a __dict__ attr, same scaling as dict above
16     __slots__   class with slots has no dict, seems to store in 
                   mutable tuple-like structure.
904    class def   has a proxy __dict__ structure for class attrs
104    old class   makes sense, less stuff, has real dict though.

Note that dictionaries (but not sets) got a more compact representation in Python 3.6

I think 8 bytes per additional item to reference makes a lot of sense on a 64 bit machine. Those 8 bytes point to the place in memory the contained item is at. The 4 bytes are fixed width for unicode in Python 2, if I recall correctly, but in Python 3, str becomes a unicode of width equal to the max width of the characters.

(And for more on slots, see this answer )

A More Complete Function

We want a function that searches the elements in lists, tuples, sets, dicts, obj.__dict__‘s, and obj.__slots__, as well as other things we may not have yet thought of.

We want to rely on gc.get_referents to do this search because it works at the C level (making it very fast). The downside is that get_referents can return redundant members, so we need to ensure we don’t double count.

Classes, modules, and functions are singletons – they exist one time in memory. We’re not so interested in their size, as there’s not much we can do about them – they’re a part of the program. So we’ll avoid counting them if they happen to be referenced.

We’re going to use a blacklist of types so we don’t include the entire program in our size count.

import sys
from types import ModuleType, FunctionType
from gc import get_referents

# Custom objects know their class.
# Function objects seem to know way too much, including modules.
# Exclude modules as well.
BLACKLIST = type, ModuleType, FunctionType


def getsize(obj):
    """sum size of object & members."""
    if isinstance(obj, BLACKLIST):
        raise TypeError('getsize() does not take argument of type: '+ str(type(obj)))
    seen_ids = set()
    size = 0
    objects = [obj]
    while objects:
        need_referents = []
        for obj in objects:
            if not isinstance(obj, BLACKLIST) and id(obj) not in seen_ids:
                seen_ids.add(id(obj))
                size += sys.getsizeof(obj)
                need_referents.append(obj)
        objects = get_referents(*need_referents)
    return size

To contrast this with the following whitelisted function, most objects know how to traverse themselves for the purposes of garbage collection (which is approximately what we’re looking for when we want to know how expensive in memory certain objects are. This functionality is used by gc.get_referents.) However, this measure is going to be much more expansive in scope than we intended if we are not careful.

For example, functions know quite a lot about the modules they are created in.

Another point of contrast is that strings that are keys in dictionaries are usually interned so they are not duplicated. Checking for id(key) will also allow us to avoid counting duplicates, which we do in the next section. The blacklist solution skips counting keys that are strings altogether.

Whitelisted Types, Recursive visitor (old implementation)

To cover most of these types myself, instead of relying on the gc module, I wrote this recursive function to try to estimate the size of most Python objects, including most builtins, types in the collections module, and custom types (slotted and otherwise).

This sort of function gives much more fine-grained control over the types we’re going to count for memory usage, but has the danger of leaving types out:

import sys
from numbers import Number
from collections import Set, Mapping, deque

try: # Python 2
    zero_depth_bases = (basestring, Number, xrange, bytearray)
    iteritems = 'iteritems'
except NameError: # Python 3
    zero_depth_bases = (str, bytes, Number, range, bytearray)
    iteritems = 'items'

def getsize(obj_0):
    """Recursively iterate to sum size of object & members."""
    _seen_ids = set()
    def inner(obj):
        obj_id = id(obj)
        if obj_id in _seen_ids:
            return 0
        _seen_ids.add(obj_id)
        size = sys.getsizeof(obj)
        if isinstance(obj, zero_depth_bases):
            pass # bypass remaining control flow and return
        elif isinstance(obj, (tuple, list, Set, deque)):
            size += sum(inner(i) for i in obj)
        elif isinstance(obj, Mapping) or hasattr(obj, iteritems):
            size += sum(inner(k) + inner(v) for k, v in getattr(obj, iteritems)())
        # Check for custom object instances - may subclass above too
        if hasattr(obj, '__dict__'):
            size += inner(vars(obj))
        if hasattr(obj, '__slots__'): # can have __slots__ with __dict__
            size += sum(inner(getattr(obj, s)) for s in obj.__slots__ if hasattr(obj, s))
        return size
    return inner(obj_0)

And I tested it rather casually (I should unittest it):

>>> getsize(['a', tuple('bcd'), Foo()])
344
>>> getsize(Foo())
16
>>> getsize(tuple('bcd'))
194
>>> getsize(['a', tuple('bcd'), Foo(), {'foo': 'bar', 'baz': 'bar'}])
752
>>> getsize({'foo': 'bar', 'baz': 'bar'})
400
>>> getsize({})
280
>>> getsize({'foo':'bar'})
360
>>> getsize('foo')
40
>>> class Bar():
...     def baz():
...         pass
>>> getsize(Bar())
352
>>> getsize(Bar().__dict__)
280
>>> sys.getsizeof(Bar())
72
>>> getsize(Bar.__dict__)
872
>>> sys.getsizeof(Bar.__dict__)
280

This implementation breaks down on class definitions and function definitions because we don’t go after all of their attributes, but since they should only exist once in memory for the process, their size really doesn’t matter too much.


回答 2

Pympler 软件包的 asizeof 模块可以做到这一点。

用法如下:

from pympler import asizeof
asizeof.asizeof(my_object)

与 sys.getsizeof 不同,它适用于您自己创建的对象,甚至可以用于 numpy。

>>> asizeof.asizeof(tuple('bcd'))
200
>>> asizeof.asizeof({'foo': 'bar', 'baz': 'bar'})
400
>>> asizeof.asizeof({})
280
>>> asizeof.asizeof({'foo':'bar'})
360
>>> asizeof.asizeof('foo')
40
>>> asizeof.asizeof(Bar())
352
>>> asizeof.asizeof(Bar().__dict__)
280
>>> A = rand(10)
>>> B = rand(10000)
>>> asizeof.asizeof(A)
176
>>> asizeof.asizeof(B)
80096

正如提到的

通过设置选项 code=True,可以把类、函数、方法、模块等对象的(字节)码大小也包含进来。

如果您需要其他有关实时数据的视图,Pympler的

muppy 模块用于对 Python 应用程序进行在线监视,而 Class Tracker 模块则提供对所选 Python 对象生命周期的离线分析。

The Pympler package’s asizeof module can do this.

Use as follows:

from pympler import asizeof
asizeof.asizeof(my_object)

Unlike sys.getsizeof, it works for your self-created objects. It even works with numpy.

>>> asizeof.asizeof(tuple('bcd'))
200
>>> asizeof.asizeof({'foo': 'bar', 'baz': 'bar'})
400
>>> asizeof.asizeof({})
280
>>> asizeof.asizeof({'foo':'bar'})
360
>>> asizeof.asizeof('foo')
40
>>> asizeof.asizeof(Bar())
352
>>> asizeof.asizeof(Bar().__dict__)
280
>>> A = rand(10)
>>> B = rand(10000)
>>> asizeof.asizeof(A)
176
>>> asizeof.asizeof(B)
80096

As mentioned,

The (byte)code size of objects like classes, functions, methods, modules, etc. can be included by setting option code=True.

And if you need other views on live data, Pympler’s

module muppy is used for on-line monitoring of a Python application and module Class Tracker provides off-line analysis of the lifetime of selected Python objects.
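
For instance, a minimal sketch of the muppy side (assuming Pympler is installed; the printed summary naturally depends on whatever your interpreter has allocated at that moment):

from pympler import muppy, summary

all_objects = muppy.get_objects()        # every object currently tracked in the process
sum1 = summary.summarize(all_objects)    # aggregate count and total size per type
summary.print_(sum1)                     # print the largest types first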


回答 3

对于 numpy 数组,getsizeof 不起作用。对我来说,由于某种原因它总是返回 40:

from pylab import *
from sys import getsizeof
A = rand(10)
B = rand(10000)

然后(在ipython中):

In [64]: getsizeof(A)
Out[64]: 40

In [65]: getsizeof(B)
Out[65]: 40

令人高兴的是:

In [66]: A.nbytes
Out[66]: 80

In [67]: B.nbytes
Out[67]: 80000

For numpy arrays, getsizeof doesn’t work – for me it always returns 40 for some reason:

from pylab import *
from sys import getsizeof
A = rand(10)
B = rand(10000)

Then (in ipython):

In [64]: getsizeof(A)
Out[64]: 40

In [65]: getsizeof(B)
Out[65]: 40

Happily, though:

In [66]: A.nbytes
Out[66]: 80

In [67]: B.nbytes
Out[67]: 80000
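
As a small sketch of why nbytes is the number you usually want: it is simply the element size times the element count, i.e. the data buffer only, not the ndarray header (newer NumPy releases may also make sys.getsizeof account for the buffer, so your numbers can differ from the 40 shown above):

import sys
import numpy as np

B = np.random.rand(10000)
print(B.nbytes)               # 80000: 10000 float64 elements * 8 bytes each
print(B.itemsize * B.size)    # the same number, computed by hand
print(sys.getsizeof(B))       # the ndarray object; on recent NumPy this may include the buffer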

回答 4

这可能比看起来要复杂,具体取决于您想如何计算。例如,如果您有一个整数列表,您想要的是仅保存这些整数引用的列表本身的大小(即只计列表,不计其中的内容),还是要把实际指向的数据也包括进来?在后一种情况下,您需要处理重复引用,以及当两个对象都引用同一个对象时如何避免重复计数。

您可能想看看其中一种python内存分析器,例如pysizer,看看它们是否满足您的需求。

This can be more complicated than it looks, depending on how you want to count things. For instance, if you have a list of ints, do you want the size of the list containing the references to the ints (i.e. the list only, not what is contained in it), or do you want to include the actual data pointed to? In the latter case you need to deal with duplicate references, and with preventing double-counting when two objects contain references to the same object.

You may want to take a look at one of the python memory profilers, such as pysizer to see if they meet your needs.


回答 5

正如 Raymond Hettinger 在此宣布的,Python 3.8(2019 年第一季度)将更改 sys.getsizeof 的某些结果:

在64位版本中,Python容器要小8字节。

tuple ()   48 -> 40
list  []   64 -> 56
set   ()  224 -> 216
dict  {}  240 -> 232

这是在问题 33597 以及 Inada Naoki(methane)围绕 Compact PyGC_Head 和 PR 7043 开展的工作之后的结果。

这个想法将 PyGC_Head 的大小减少到两个机器字。

目前,PyGC_Head 占用三个机器字:gc_prev、gc_next 和 gc_refcnt。

  • gc_refcnt 在回收时用于试探性删除(trial deletion)。
  • gc_prev 用于跟踪和取消跟踪。

因此,如果我们能在试探性删除期间避免跟踪/取消跟踪,gc_prev 和 gc_refcnt 就可以共享同一块内存空间。

参见 commit d5c875b:

从 PyGC_Head 中移除了一个 Py_ssize_t 成员。
所有被 GC 跟踪的对象(例如元组、列表、字典)的大小都减少了 4 或 8 个字节。

Python 3.8 (Q1 2019) will change some of the results of sys.getsizeof, as announced here by Raymond Hettinger:

Python containers are 8 bytes smaller on 64-bit builds.

tuple ()   48 -> 40
list  []   64 -> 56
set   ()  224 -> 216
dict  {}  240 -> 232

This comes after issue 33597 and Inada Naoki (methane)'s work around Compact PyGC_Head, and PR 7043.

This idea reduces PyGC_Head size to two words.

Currently, PyGC_Head takes three words: gc_prev, gc_next, and gc_refcnt.

  • gc_refcnt is used when collecting, for trial deletion.
  • gc_prev is used for tracking and untracking.

So if we can avoid tracking/untracking while trial deletion, gc_prev and gc_refcnt can share same memory space.

See commit d5c875b:

Removed one Py_ssize_t member from PyGC_Head.
All GC tracked objects (e.g. tuple, list, dict) size is reduced 4 or 8 bytes.
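
A quick way to see which side of this change your interpreter is on (a tiny sketch; the values quoted are for 64-bit CPython and will differ on other builds):

import sys
for obj in ((), [], set(), {}):
    print(type(obj).__name__, sys.getsizeof(obj))
# 3.7, 64-bit: 48 / 64 / 224 / 240 -> 3.8+, 64-bit: 40 / 56 / 216 / 232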


回答 6

我本人多次遇到此问题,于是写了一个小函数(受 @aaron-hall 的回答启发)和相应的测试,来实现我原本期望 sys.getsizeof 做到的事情:

https://github.com/bosswissam/pysize

如果您对背景故事感兴趣,请看这里。

编辑:为方便参考,附上下面的代码。要查看最新代码,请查看 github 链接。

    import sys

    def get_size(obj, seen=None):
        """Recursively finds size of objects"""
        size = sys.getsizeof(obj)
        if seen is None:
            seen = set()
        obj_id = id(obj)
        if obj_id in seen:
            return 0
        # Important mark as seen *before* entering recursion to gracefully handle
        # self-referential objects
        seen.add(obj_id)
        if isinstance(obj, dict):
            size += sum([get_size(v, seen) for v in obj.values()])
            size += sum([get_size(k, seen) for k in obj.keys()])
        elif hasattr(obj, '__dict__'):
            size += get_size(obj.__dict__, seen)
        elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)):
            size += sum([get_size(i, seen) for i in obj])
        return size

Having run into this problem many times myself, I wrote up a small function (inspired by @aaron-hall’s answer) & tests that does what I would have expected sys.getsizeof to do:

https://github.com/bosswissam/pysize

If you’re interested in the backstory, here it is

EDIT: Attaching the code below for easy reference. To see the most up-to-date code, please check the github link.

    import sys

    def get_size(obj, seen=None):
        """Recursively finds size of objects"""
        size = sys.getsizeof(obj)
        if seen is None:
            seen = set()
        obj_id = id(obj)
        if obj_id in seen:
            return 0
        # Important mark as seen *before* entering recursion to gracefully handle
        # self-referential objects
        seen.add(obj_id)
        if isinstance(obj, dict):
            size += sum([get_size(v, seen) for v in obj.values()])
            size += sum([get_size(k, seen) for k in obj.keys()])
        elif hasattr(obj, '__dict__'):
            size += get_size(obj.__dict__, seen)
        elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)):
            size += sum([get_size(i, seen) for i in obj])
        return size
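
A quick usage sketch (the data below is made up for illustration, and no specific byte counts are claimed since they differ per build): the gap to the shallow sys.getsizeof appears as soon as containers nest.

    import sys

    data = {"words": ["alpha", "beta"], "nested": {"x": list(range(100))}}
    print(sys.getsizeof(data))   # shallow: only the outer dict object
    print(get_size(data))        # deep: keys, values, and nested containers too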

回答 7

这是我根据先前的答案编写的一个快速脚本,用于列出所有变量的大小

import sys

for i in dir():
    print(i, sys.getsizeof(eval(i)))

Here is a quick script I wrote based on the previous answers to list sizes of all variables

import sys

for i in dir():
    print(i, sys.getsizeof(eval(i)))

回答 8

您可以序列化对象以得出与对象大小密切相关的度量:

import pickle

## let o be the object, whose size you want to measure
size_estimate = len(pickle.dumps(o))

如果您要测量无法被 pickle 序列化的对象(例如因为其中包含 lambda 表达式),cloudpickle 可以作为一种解决方案。

You can serialize the object to derive a measure that is closely related to the size of the object:

import pickle

## let o be the object, whose size you want to measure
size_estimate = len(pickle.dumps(o))

If you want to measure objects that cannot be pickled (e.g. because of lambda expressions) cloudpickle can be a solution.
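
For example, a minimal sketch of the cloudpickle fallback (assuming pip install cloudpickle; note that either way this measures the serialized form, which only approximates the in-memory size):

import cloudpickle

double = lambda x: x * 2                        # the stdlib pickle refuses lambdas
size_estimate = len(cloudpickle.dumps(double))  # cloudpickle serializes it anyway
print(size_estimate)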


回答 9

如果不想包含链接(嵌套)对象的大小,请使用sys.getsizeof()

但是,如果您要计算嵌套在列表,字典,集合,元组中的子对象(通常这就是您要查找的内容),请使用递归的deep sizeof()函数,如下所示:

import sys
def sizeof(obj):
    size = sys.getsizeof(obj)
    if isinstance(obj, dict): return size + sum(map(sizeof, obj.keys())) + sum(map(sizeof, obj.values()))
    if isinstance(obj, (list, tuple, set, frozenset)): return size + sum(map(sizeof, obj))
    return size

您还可以在 nifty 工具箱中找到此函数,以及许多其他有用的单行代码:

https://github.com/mwojnars/nifty/blob/master/util.py

Use sys.getsizeof() if you DON’T want to include sizes of linked (nested) objects.

However, if you want to count sub-objects nested in lists, dicts, sets, tuples – and usually THIS is what you’re looking for – use the recursive deep sizeof() function as shown below:

import sys
def sizeof(obj):
    size = sys.getsizeof(obj)
    if isinstance(obj, dict): return size + sum(map(sizeof, obj.keys())) + sum(map(sizeof, obj.values()))
    if isinstance(obj, (list, tuple, set, frozenset)): return size + sum(map(sizeof, obj))
    return size

You can also find this function in the nifty toolbox, together with many other useful one-liners:

https://github.com/mwojnars/nifty/blob/master/util.py


回答 10

如果您不需要对象的精确大小,只想大致知道它有多大,一种快速(但粗糙)的方法是让程序运行起来,休眠较长一段时间,然后检查这个 Python 进程的内存使用情况(例如用 Mac 的活动监视器)。当您想了解 Python 进程中单个大对象的大小时,这种方法很有效。例如,我最近想检查一个新数据结构的内存使用情况,并将其与 Python 的 set 数据结构进行比较。我先把元素(一本大型公版书中的单词)写入一个 set,检查进程的大小,然后对另一个数据结构做同样的事情。我发现持有 set 的 Python 进程占用的内存是新数据结构的两倍。再次强调,您并不能精确地说进程使用的内存就等于对象的大小;但随着对象变大,进程其余部分消耗的内存相对于被监视对象而言可以忽略不计,这个估计就会越来越接近。

If you don’t need the exact size of the object but roughly to know how big it is, one quick (and dirty) way is to let the program run, sleep for an extended period of time, and check the memory usage (ex: Mac’s activity monitor) by this particular python process. This would be effective when you are trying to find the size of one single large object in a python process. For example, I recently wanted to check the memory usage of a new data structure and compare it with that of Python’s set data structure. First I wrote the elements (words from a large public domain book) to a set, then checked the size of the process, and then did the same thing with the other data structure. I found out the Python process with a set is taking twice as much memory as the new data structure. Again, you wouldn’t be able to exactly say the memory used by the process is equal to the size of the object. As the size of the object gets large, this becomes close as the memory consumed by the rest of the process becomes negligible compared to the size of the object you are trying to monitor.
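
A scriptable variant of the same idea (a rough sketch assuming psutil is installed; 'book.txt' is a made-up stand-in, and the delta also picks up allocator overhead and anything else allocated in between):

import os
import psutil

proc = psutil.Process(os.getpid())
before = proc.memory_info().rss                   # resident set size before building
words = set(open('book.txt').read().split())      # the large object we want to gauge
after = proc.memory_info().rss                    # resident set size after building
print(f"roughly {after - before} bytes attributable to the set")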


回答 11

您可以按如下方式使用 sys.getsizeof() 来确定对象的大小

import sys
str1 = "one"
int_element = 5
print("Memory size of '"+str1+"' = "+str(sys.getsizeof(str1))+ " bytes")
print("Memory size of '"+ str(int_element)+"' = "+str(sys.getsizeof(int_element))+ " bytes")

You can make use of sys.getsizeof() as shown below to determine the size of an object

import sys
str1 = "one"
int_element = 5
print("Memory size of '"+str1+"' = "+str(sys.getsizeof(str1))+ " bytes")
print("Memory size of '"+ str(int_element)+"' = "+str(sys.getsizeof(int_element))+ " bytes")

回答 12

我使用这个技巧……它在小对象上可能不太准确,但我认为对于复杂对象(例如 pygame 的 Surface)来说,它比 sys.getsizeof() 准确得多

import pygame as pg
import os
import psutil
import time


process = psutil.Process(os.getpid())
pg.init()    
vocab = ['hello', 'me', 'you', 'she', 'he', 'they', 'we',
         'should', 'why?', 'necessarily', 'do', 'that']

font = pg.font.SysFont("monospace", 100, True)

dct = {}

newMem = process.memory_info().rss  # don't mind this line
Str = f'store ' + f'Nothing \tsurface use about '.expandtabs(15) + \
      f'0\t bytes'.expandtabs(9)  # don't mind this assignment too

usedMem = process.memory_info().rss

for word in vocab:
    dct[word] = font.render(word, True, pg.Color("#000000"))

    time.sleep(0.1)  # wait a moment

    # get total used memory of this script:
    newMem = process.memory_info().rss
    Str = f'store ' + f'{word}\tsurface use about '.expandtabs(15) + \
          f'{newMem - usedMem}\t bytes'.expandtabs(9)

    print(Str)
    usedMem = newMem

在我的Windows 10(python 3.7.3)上,输出为:

store hello          surface use about 225280    bytes
store me             surface use about 61440     bytes
store you            surface use about 94208     bytes
store she            surface use about 81920     bytes
store he             surface use about 53248     bytes
store they           surface use about 114688    bytes
store we             surface use about 57344     bytes
store should         surface use about 172032    bytes
store why?           surface use about 110592    bytes
store necessarily    surface use about 311296    bytes
store do             surface use about 57344     bytes
store that           surface use about 110592    bytes

I use this trick… It may not be accurate on small objects, but I think it's much more accurate for a complex object (like a pygame Surface) than sys.getsizeof()

import pygame as pg
import os
import psutil
import time


process = psutil.Process(os.getpid())
pg.init()    
vocab = ['hello', 'me', 'you', 'she', 'he', 'they', 'we',
         'should', 'why?', 'necessarily', 'do', 'that']

font = pg.font.SysFont("monospace", 100, True)

dct = {}

newMem = process.memory_info().rss  # don't mind this line
Str = f'store ' + f'Nothing \tsurface use about '.expandtabs(15) + \
      f'0\t bytes'.expandtabs(9)  # don't mind this assignment too

usedMem = process.memory_info().rss

for word in vocab:
    dct[word] = font.render(word, True, pg.Color("#000000"))

    time.sleep(0.1)  # wait a moment

    # get total used memory of this script:
    newMem = process.memory_info().rss
    Str = f'store ' + f'{word}\tsurface use about '.expandtabs(15) + \
          f'{newMem - usedMem}\t bytes'.expandtabs(9)

    print(Str)
    usedMem = newMem

On my windows 10, python 3.7.3, the output is:

store hello          surface use about 225280    bytes
store me             surface use about 61440     bytes
store you            surface use about 94208     bytes
store she            surface use about 81920     bytes
store he             surface use about 53248     bytes
store they           surface use about 114688    bytes
store we             surface use about 57344     bytes
store should         surface use about 172032    bytes
store why?           surface use about 110592    bytes
store necessarily    surface use about 311296    bytes
store do             surface use about 57344     bytes
store that           surface use about 110592    bytes