Which Python memory profiler is recommended? [closed]

I want to know the memory usage of my Python application, and specifically I want to know which code blocks/portions or objects are consuming the most memory. A Google search shows that a commercial option is Python Memory Validator (Windows only).

The open-source ones are PySizer and Heapy.

I haven't tried any of them, so I wanted to know which one is best, considering that it:

  1. Gives the most details.

  2. Requires the fewest (ideally no) changes to my code.


Answer 0

Heapy is quite simple to use. At some point in your code, you have to write the following:

from guppy import hpy
h = hpy()
print(h.heap())

This gives you some output like this:

Partition of a set of 132527 objects. Total size = 8301532 bytes.
Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
0  35144  27  2140412  26   2140412  26 str
1  38397  29  1309020  16   3449432  42 tuple
2    530   0   739856   9   4189288  50 dict (no owner)

You can also find out where objects are referenced from and get statistics about that, but the documentation on that part is a bit sparse.
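For example, something along these lines can show which kinds of objects hold references to the largest group in the partition above (a minimal sketch; heap[0] selects row 0 of the partition and byrcs re-classifies those objects by their referrers):

from guppy import hpy

h = hpy()
heap = h.heap()

# Row 0 of the partition is the kind using the most memory (str in the output above).
biggest = heap[0]

# Classify those objects by the kind of object that refers to them.
print(biggest.byrcs)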

There is a graphical browser as well, written in Tk.


Answer 1

Since nobody has mentioned it, I'll point to my own module, memory_profiler, which can print a line-by-line report of memory usage and works on Unix and Windows (it needs psutil on the latter). The output is not very detailed, but the goal is to give you an overview of where the code is consuming the most memory rather than an exhaustive analysis of allocated objects.

After decorating your function with @profile and running your code with the -m memory_profiler flag, it will print a line-by-line report like this:

Line #    Mem usage  Increment   Line Contents
==============================================
     3                           @profile
     4      5.97 MB    0.00 MB   def my_func():
     5     13.61 MB    7.64 MB       a = [1] * (10 ** 6)
     6    166.20 MB  152.59 MB       b = [2] * (2 * 10 ** 7)
     7     13.61 MB -152.59 MB       del b
     8     13.61 MB    0.00 MB       return a
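As a concrete illustration, a minimal script set up for this might look like the sketch below (the function body is just an example workload; the explicit import is believed to be optional when the script is launched via -m memory_profiler, which makes @profile available on its own):

# example.py -- minimal script instrumented for memory_profiler
from memory_profiler import profile

@profile
def my_func():
    a = [1] * (10 ** 6)        # allocate a ~1M-element list
    b = [2] * (2 * 10 ** 7)    # allocate a much larger list
    del b                      # release it again
    return a

if __name__ == '__main__':
    my_func()

Running python -m memory_profiler example.py then prints a table like the one above.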

Answer 2

I recommend Dowser. It is very easy to set up, and it requires zero changes to your code. You can view counts of objects of each type over time, view a list of live objects, and view references to live objects, all from a simple web interface.

# memdebug.py

import cherrypy
import dowser

def start(port):
    cherrypy.tree.mount(dowser.Root())
    cherrypy.config.update({
        'environment': 'embedded',
        'server.socket_port': port
    })
    cherrypy.server.quickstart()
    cherrypy.engine.start(blocking=False)

You import memdebug, and call memdebug.start. That’s all.
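A minimal usage sketch, assuming the memdebug module above and an arbitrarily chosen port (Dowser's pages are then served by the embedded CherryPy server, mounted at the root URL):

# somewhere in your application's startup code
import memdebug

memdebug.start(8080)

# ... the rest of your program runs as usual; while it does,
# open http://localhost:8080/ in a browser to watch object counts.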

I haven’t tried PySizer or Heapy. I would appreciate others’ reviews.

UPDATE

The above code is for CherryPy 2.X. In CherryPy 3.X, the server.quickstart method has been removed and engine.start does not take the blocking flag. So if you are using CherryPy 3.X:

# memdebug.py

import cherrypy
import dowser

def start(port):
    cherrypy.tree.mount(dowser.Root())
    cherrypy.config.update({
        'environment': 'embedded',
        'server.socket_port': port
    })
    cherrypy.engine.start()

Answer 3

Consider the objgraph library (see http://www.lshift.net/blog/2008/11/14/tracing-python-memory-leaks for an example use case).
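A minimal sketch of the kind of thing objgraph can do (MyLeakyClass is just a placeholder for whatever type you suspect is leaking, and show_backrefs needs Graphviz installed to render the image):

import objgraph

# Print the ten most common object types currently alive
objgraph.show_most_common_types(limit=10)

# Pick a suspected leaking object and draw the references keeping it alive
suspects = objgraph.by_type('MyLeakyClass')
if suspects:
    objgraph.show_backrefs(suspects[:1], filename='backrefs.png')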


Answer 4

Muppy is (yet another) memory usage profiler for Python. The focus of this toolset is on identifying memory leaks.

Muppy tries to help developers identify memory leaks in Python applications. It enables tracking of memory usage at runtime and identification of objects that are leaking. Additionally, it provides tools to locate the source of objects that were not released.
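Muppy is nowadays distributed as part of the Pympler package; assuming that packaging, a minimal snapshot-and-summary sketch could look like this:

from pympler import muppy, summary

# Snapshot of all objects the interpreter currently knows about
all_objects = muppy.get_objects()

# Group them by type and print counts and total sizes
objects_summary = summary.summarize(all_objects)
summary.print_(objects_summary)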


Answer 5

I’m developing a memory profiler for Python called memprof:

http://jmdana.github.io/memprof/

It allows you to log and plot the memory usage of your variables during the execution of the decorated methods. You just have to import the library using:

from memprof import memprof

And decorate your method using:

@memprof

Example plots can be seen on the project page, which is hosted on GitHub:

https://github.com/jmdana/memprof
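Putting those two pieces together, a minimal decorated script could look like the sketch below (the function body is only an illustrative workload; running the script logs the memory usage of the local variables so it can then be plotted as described on the project page):

from memprof import memprof

@memprof
def build_lists():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a

if __name__ == '__main__':
    build_lists()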


Answer 6

I found meliae to be much more functional than Heapy or PySizer. If you happen to be running a WSGI web app, then Dozer is a nice middleware wrapper around Dowser.
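For meliae, the usual workflow is to dump a snapshot from the running process and analyze it afterwards; a minimal sketch (the file name is arbitrary):

# In the running process: write a JSON dump of every live object
from meliae import scanner
scanner.dump_all_objects('memory-dump.json')

# Later, in a separate session: load the dump and summarize it by type
from meliae import loader
om = loader.load('memory-dump.json')
print(om.summarize())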


Answer 7

Also try the pytracemalloc project, which provides memory usage per Python line number.

EDIT (2014/04): It now has a Qt GUI to analyze snapshots.
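pytracemalloc was later merged into the standard library as the tracemalloc module in Python 3.4; a minimal sketch of attributing allocations to line numbers with that stdlib API:

import tracemalloc

tracemalloc.start()

# ... run the code whose allocations you want to attribute to lines ...
data = [bytearray(1000) for _ in range(10000)]

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:5]:
    print(stat)   # file:line, total size, allocation count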