Tag Archives: unit-testing

How to print to the console in pytest?

Question: How to print to the console in pytest?

I’m trying to use TDD (test-driven development) with pytest. pytest will not print to the console when I use print.

I am using pytest my_tests.py to run it.

The documentation seems to say that it should work by default: http://pytest.org/latest/capture.html

But:

import myapplication as tum

class TestBlogger:

    @classmethod
    def setup_class(self):
        self.user = "alice"
        self.b = tum.Blogger(self.user)
        print "This should be printed, but it won't be!"

    def test_inherit(self):
        assert issubclass(tum.Blogger, tum.Site)
        links = self.b.get_links(posts)
        print len(links)   # This won't print either.

Nothing gets printed to my standard output console (just the normal progress and how many tests passed/failed).

And the script that I’m testing contains print statements:

class Blogger(Site):
    def get_links(self, posts):
        print len(posts)   # It won't get printed in the test.

In the unittest module, everything gets printed by default, which is exactly what I need. However, I wish to use pytest for other reasons.

Does anyone know how to make the print statements get shown?


Answer 0

By default, py.test captures everything written to standard out so that it can control how it prints it out. If it didn’t do this, it would spew out a lot of text without the context of which test printed that text.

However, if a test fails, it will include a section in the resulting report that shows what was printed to standard out in that particular test.

For example,

def test_good():
    for i in range(1000):
        print(i)

def test_bad():
    print('this should fail!')
    assert False

Results in the following output:

>>> py.test tmp.py
============================= test session starts ==============================
platform darwin -- Python 2.7.6 -- py-1.4.20 -- pytest-2.5.2
plugins: cache, cov, pep8, xdist
collected 2 items

tmp.py .F

=================================== FAILURES ===================================
___________________________________ test_bad ___________________________________

    def test_bad():
        print('this should fail!')
>       assert False
E       assert False

tmp.py:7: AssertionError
------------------------------- Captured stdout --------------------------------
this should fail!
====================== 1 failed, 1 passed in 0.04 seconds ======================

Note the Captured stdout section.

If you would like to see print statements as they are executed, you can pass the -s flag to py.test. However, note that this can sometimes be difficult to parse.

>>> py.test tmp.py -s
============================= test session starts ==============================
platform darwin -- Python 2.7.6 -- py-1.4.20 -- pytest-2.5.2
plugins: cache, cov, pep8, xdist
collected 2 items

tmp.py 0
1
2
3
... and so on ...
997
998
999
.this should fail!
F

=================================== FAILURES ===================================
___________________________________ test_bad ___________________________________

    def test_bad():
        print('this should fail!')
>       assert False
E       assert False

tmp.py:7: AssertionError
====================== 1 failed, 1 passed in 0.02 seconds ======================

Answer 1

Using the -s option will print the output of all functions, which may be too much.

If you need particular output, the doc page you mentioned offers a few suggestions:

  1. Insert assert False, "dumb assert to make PyTest print my stuff" at the end of your function, and you will see your output due to the failed test.

  2. You have a special object passed to you by PyTest (capsys), and you can write the output into a file to inspect later, like

    def test_good1(capsys):
        for i in range(5):
            print i
        out, err = capsys.readouterr()
        open("err.txt", "w").write(err)
        open("out.txt", "w").write(out)
    

    You can open the out and err files in a separate tab and let your editor refresh them automatically for you, or do a simple py.test; cat out.txt shell command to run your tests.

That is a rather hackish way to do things, but it may be just the thing you need: after all, TDD means you mess with stuff and leave it clean and silent when it’s ready :-).


Answer 2

Short Answer

Use the -s option:

pytest -s

Detailed Answer

From the docs:

During test execution any output sent to stdout and stderr is captured. If a test or a setup method fails, its corresponding captured output will usually be shown along with the failure traceback.

pytest has the option --capture=method, where method is the per-test capturing method and can be one of the following: fd, sys or no. pytest also has the option -s, which is a shortcut for --capture=no; this is the option that will allow you to see your print statements in the console.

pytest --capture=no     # show print statements in console
pytest -s               # equivalent to previous command

Setting capturing methods or disabling capturing

There are two ways in which pytest can perform capturing:

  1. file descriptor (FD) level capturing (default): All writes going to the operating system file descriptors 1 and 2 will be captured.

  2. sys level capturing: Only writes to Python files sys.stdout and sys.stderr will be captured. No capturing of writes to filedescriptors is performed.

pytest -s            # disable all capturing
pytest --capture=sys # replace sys.stdout/stderr with in-mem files
pytest --capture=fd  # also point filedescriptors 1 and 2 to temp file
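
To make the fd/sys difference concrete, here is a minimal sketch (the test name and strings are illustrative): a plain print goes through sys.stdout, so both methods capture it, while a raw os.write to file descriptor 1 bypasses sys.stdout and is only intercepted by fd-level capturing.

import os

def test_capture_levels():
    # Goes through sys.stdout: captured by both --capture=sys and --capture=fd.
    print("written via sys.stdout")
    # Bypasses sys.stdout entirely; only --capture=fd intercepts this.
    os.write(1, b"written via file descriptor 1\n")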

Answer 3

I needed to print an important warning about skipped tests exactly when PyTest muted literally everything.

I didn’t want to fail a test to send a signal, so I did a hack as follows:

def test_2_YellAboutBrokenAndMutedTests():
    import sys      # needed for the sys.stdout check below
    import atexit
    def report():
        print C_patch.tidy_text("""
In silent mode PyTest breaks low level stream structure I work with, so
I cannot test if my functionality work fine. I skipped corresponding tests.
Run `py.test -s` to make sure everything is tested.""")
    if sys.stdout != sys.__stdout__:
        atexit.register(report)

The atexit module allows me to print the message after PyTest has released the output streams. The output looks as follows:

============================= test session starts ==============================
platform linux2 -- Python 2.7.3, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /media/Storage/henaro/smyth/Alchemist2-git/sources/C_patch, inifile: 
collected 15 items 

test_C_patch.py .....ssss....s.

===================== 10 passed, 5 skipped in 0.15 seconds =====================
In silent mode PyTest breaks low level stream structure I work with, so
I cannot test if my functionality work fine. I skipped corresponding tests.
Run `py.test -s` to make sure everything is tested.
~/.../sources/C_patch$

The message is printed even when PyTest is in silent mode, and is not printed if you run things with py.test -s, so everything is tested nicely already.


Answer 4

According to the pytest docs, pytest --capture=sys should work. If you want to capture standard out inside a test, refer to the capsys fixture.
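
For example, a minimal sketch of the capsys fixture in use (assuming a pytest version where readouterr() returns an object with out and err attributes):

def test_greeting(capsys):
    print("hello")
    captured = capsys.readouterr()   # snapshot of the captured stdout/stderr so far
    assert captured.out == "hello\n"
    assert captured.err == ""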


Answer 5

I originally came here to find out how to make PyTest print in VSCode’s console while running/debugging the unit test from there. This can be done with the following launch.json configuration, given .venv as the virtual environment folder:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "PyTest",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "pythonPath": "${config:python.pythonPath}",
            "module": "pytest",
            "args": [
                "-sv"
            ],
            "cwd": "${workspaceRoot}",
            "env": {},
            "envFile": "${workspaceRoot}/.venv",
            "debugOptions": [
                "WaitOnAbnormalExit",
                "WaitOnNormalExit",
                "RedirectOutput"
            ]
        }
    ]
}

Python unit tests in Jenkins?

Question: Python unit tests in Jenkins?

How do you get Jenkins to execute python unittest cases? Is it possible to get JUnit-style XML output from the built-in unittest package?


Answer 0

Sample tests:

tests.py:

# tests.py

import random
try:
    import unittest2 as unittest
except ImportError:
    import unittest

class SimpleTest(unittest.TestCase):
    @unittest.skip("demonstrating skipping")
    def test_skipped(self):
        self.fail("shouldn't happen")

    def test_pass(self):
        self.assertEqual(10, 7 + 3)

    def test_fail(self):
        self.assertEqual(11, 7 + 3)

JUnit with pytest

run the tests with:

py.test --junitxml results.xml tests.py

results.xml:

<?xml version="1.0" encoding="utf-8"?>
<testsuite errors="0" failures="1" name="pytest" skips="1" tests="2" time="0.097">
    <testcase classname="tests.SimpleTest" name="test_fail" time="0.000301837921143">
        <failure message="test failure">self = &lt;tests.SimpleTest testMethod=test_fail&gt;

    def test_fail(self):
&gt;       self.assertEqual(11, 7 + 3)
E       AssertionError: 11 != 10

tests.py:16: AssertionError</failure>
    </testcase>
    <testcase classname="tests.SimpleTest" name="test_pass" time="0.000109910964966"/>
    <testcase classname="tests.SimpleTest" name="test_skipped" time="0.000164031982422">
        <skipped message="demonstrating skipping" type="pytest.skip">/home/damien/test-env/lib/python2.6/site-packages/_pytest/unittest.py:119: Skipped: demonstrating skipping</skipped>
    </testcase>
</testsuite>

JUnit with nose

run the tests with:

nosetests --with-xunit

nosetests.xml:

<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="nosetests" tests="3" errors="0" failures="1" skip="1">
    <testcase classname="tests.SimpleTest" name="test_fail" time="0.000">
        <failure type="exceptions.AssertionError" message="11 != 10">
            <![CDATA[Traceback (most recent call last):
File "/opt/python-2.6.1/lib/python2.6/site-packages/unittest2-0.5.1-py2.6.egg/unittest2/case.py", line 340, in run
testMethod()
File "/home/damien/tests.py", line 16, in test_fail
self.assertEqual(11, 7 + 3)
File "/opt/python-2.6.1/lib/python2.6/site-packages/unittest2-0.5.1-py2.6.egg/unittest2/case.py", line 521, in assertEqual
assertion_func(first, second, msg=msg)
File "/opt/python-2.6.1/lib/python2.6/site-packages/unittest2-0.5.1-py2.6.egg/unittest2/case.py", line 514, in _baseAssertEqual
raise self.failureException(msg)
AssertionError: 11 != 10
]]>
        </failure>
    </testcase>
    <testcase classname="tests.SimpleTest" name="test_pass" time="0.000"></testcase>
    <testcase classname="tests.SimpleTest" name="test_skipped" time="0.000">
        <skipped type="nose.plugins.skip.SkipTest" message="demonstrating skipping">
            <![CDATA[SkipTest: demonstrating skipping
]]>
        </skipped>
    </testcase>
</testsuite>

JUnit with nose2

You would need to use the nose2.plugins.junitxml plugin. You can configure nose2 with a config file like you would normally do, or with the --plugin command-line option.

run the tests with:

nose2 --plugin nose2.plugins.junitxml --junit-xml tests

nose2-junit.xml:

<testsuite errors="0" failures="1" name="nose2-junit" skips="1" tests="3" time="0.001">
  <testcase classname="tests.SimpleTest" name="test_fail" time="0.000126">
    <failure message="test failure">Traceback (most recent call last):
  File "/Users/damien/Work/test2/tests.py", line 18, in test_fail
    self.assertEqual(11, 7 + 3)
AssertionError: 11 != 10
</failure>
  </testcase>
  <testcase classname="tests.SimpleTest" name="test_pass" time="0.000095" />
  <testcase classname="tests.SimpleTest" name="test_skipped" time="0.000058">
    <skipped />
  </testcase>
</testsuite>

JUnit with unittest-xml-reporting

Append the following to tests.py

if __name__ == '__main__':
    import xmlrunner
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output='test-reports'))

run the tests with:

python tests.py

test-reports/TEST-SimpleTest-20131001140629.xml:

<?xml version="1.0" ?>
<testsuite errors="1" failures="0" name="SimpleTest-20131001140629" tests="3" time="0.000">
    <testcase classname="SimpleTest" name="test_pass" time="0.000"/>
    <testcase classname="SimpleTest" name="test_fail" time="0.000">
        <error message="11 != 10" type="AssertionError">
<![CDATA[Traceback (most recent call last):
  File "tests.py", line 16, in test_fail
    self.assertEqual(11, 7 + 3)
AssertionError: 11 != 10
]]>     </error>
    </testcase>
    <testcase classname="SimpleTest" name="test_skipped" time="0.000">
        <skipped message="demonstrating skipping" type="skip"/>
    </testcase>
    <system-out>
<![CDATA[]]>    </system-out>
    <system-err>
<![CDATA[]]>    </system-err>
</testsuite>

Answer 1

I would second using nose. Basic XML reporting is now built in. Just use the --with-xunit command line option and it will produce a nosetests.xml file. For example:

nosetests --with-xunit

Then add a “Publish JUnit test result report” post build action, and fill in the “Test report XMLs” field with nosetests.xml (assuming that you ran nosetests in $WORKSPACE).


Answer 2

You can install the unittest-xml-reporting package to add a test runner that generates XML reports to the built-in unittest module.

We use pytest, which has XML output built in (it’s a command line option).

Either way, executing the unit tests can be done by running a shell command.


Answer 3

I used nosetests. There are add-ons to output the XML for Jenkins.


Answer 4

When using buildout we use collective.xmltestreport to produce JUnit-style XML output; perhaps its source code or the module itself could be of help.


Answer 5

python -m pytest --junit-xml=pytest_unit.xml source_directory/test/unit || true # tests may fail

Run this as a shell step from Jenkins; you can get the report in pytest_unit.xml as an artifact.


Assert that a function/method was not called, using Mock

Question: Assert that a function/method was not called, using Mock

I’m using the Mock library to test my application, but I want to assert that some function was not called. Mock docs talk about methods like mock.assert_called_with and mock.assert_called_once_with, but I didn’t find anything like mock.assert_not_called or anything else related to verifying that a mock was NOT called.

I could go with something like the following, though it seems neither cool nor pythonic:

def test_something():
    # some actions
    with patch('something') as my_var:
        try:
            # args are not important. func should never be called in this test
            my_var.assert_called_with(some, args)
        except AssertionError:
            pass  # this error being raised means it's ok
    # other stuff

Any ideas how to accomplish this?


Answer 0

This should work for your case:

assert not my_var.called, 'method should not have been called'

Sample:

>>> mock=Mock()
>>> mock.a()
<Mock name='mock.a()' id='4349129872'>
>>> assert not mock.b.called, 'b was called and should not have been'
>>> assert not mock.a.called, 'a was called and should not have been'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError: a was called and should not have been

Answer 1

Though an old question, I would like to add that the current mock library (the backport of unittest.mock) supports the assert_not_called method.

Just upgrade yours:

pip install mock --upgrade
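
A quick sketch of how it reads in a test (unittest.mock in the standard library behaves the same way as the backport; the test body is a placeholder):

from mock import Mock   # or: from unittest.mock import Mock

def test_not_called():
    my_var = Mock()
    # ... exercise the code that receives my_var but must not call it ...
    my_var.assert_not_called()   # raises AssertionError if my_var was called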


Answer 2

You can check the called attribute, but if your assertion fails, the next thing you’ll want to know is something about the unexpected call, so you may as well arrange for that information to be displayed from the start. Using unittest, you can check the contents of call_args_list instead:

self.assertItemsEqual(my_var.call_args_list, [])

When it fails, it gives a message like this:

AssertionError: Element counts were not equal:
First has 0, Second has 1:  call('first argument', 4)

Answer 3

When your test class inherits from unittest.TestCase, you can simply use methods like:

  • assertTrue
  • assertFalse
  • assertEqual

and similar (you can find the rest in the Python documentation).

In your example we can simply assert that the mock_method.called property is False, which means that the method was not called.

import unittest
from unittest import mock

import my_module

class A(unittest.TestCase):
    def setUp(self):
        self.message = "Method should not be called. Called {times} times!"

    @mock.patch("my_module.method_to_mock")
    def test(self, mock_method):
        my_module.method_to_mock()

        self.assertFalse(mock_method.called,
                         self.message.format(times=mock_method.call_count))

Answer 4

With python >= 3.5 you can use mock_object.assert_not_called().


Answer 5

Judging from other answers, no one except @rob-kennedy talked about the call_args_list.

It’s a powerful tool with which you can implement the exact opposite of MagicMock.assert_called_with().

call_args_list is a list of call objects. Each call object represents a call made on a mocked callable.

>>> from unittest.mock import MagicMock
>>> m = MagicMock()
>>> m.call_args_list
[]
>>> m(42)
<MagicMock name='mock()' id='139675158423872'>
>>> m.call_args_list
[call(42)]
>>> m(42, 30)
<MagicMock name='mock()' id='139675158423872'>
>>> m.call_args_list
[call(42), call(42, 30)]

Consuming a call object is easy, since you can compare it with a tuple of length 2 where the first component is a tuple containing all the positional arguments of the related call, while the second component is a dictionary of the keyword arguments.

>>> ((42,),) in m.call_args_list
True
>>> m(42, foo='bar')
<MagicMock name='mock()' id='139675158423872'>
>>> ((42,), {'foo': 'bar'}) in m.call_args_list
True
>>> m(foo='bar')
<MagicMock name='mock()' id='139675158423872'>
>>> ((), {'foo': 'bar'}) in m.call_args_list
True

So, a way to address the specific problem of the OP is

def test_something():
    with patch('something') as my_var:
        assert ((some, args),) not in my_var.call_args_list

Note that this way, instead of just checking if a mocked callable has been called, via MagicMock.called, you can now check if it has been called with a specific set of arguments.

That’s useful. Say you want to test a function that takes a list and calls another function, compute(), for each value of the list, but only if the value satisfies a specific condition.

You can now mock compute, and test if it has been called on some value but not on others.
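
A sketch of that scenario, with hypothetical process() and compute() helpers (the even-number condition is made up for illustration, and patching '__main__.compute' assumes everything lives in the executed script):

from unittest.mock import patch

def compute(value):
    raise NotImplementedError   # the real implementation is irrelevant in the test

def process(values):
    # Calls compute() only for the values that satisfy the condition.
    return [compute(v) for v in values if v % 2 == 0]

def test_compute_called_selectively():
    with patch('__main__.compute') as compute_mock:
        process([1, 2, 3, 4])
        assert ((2,),) in compute_mock.call_args_list      # called with 2
        assert ((1,),) not in compute_mock.call_args_list  # never called with 1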


How to run Django’s test database only in memory?

Question: How to run Django’s test database only in memory?

My Django unit tests take a long time to run, so I’m looking for ways to speed that up. I’m considering installing an SSD, but I know that has its downsides too. Of course, there are things I could do with my code, but I’m looking for a structural fix. Even running a single test is slow since the database needs to be rebuilt / South-migrated every time. So here’s my idea…

Since I know the test database will always be quite small, why can’t I just configure the system to always keep the entire test database in RAM? Never touch the disk at all. How do I configure this in Django? I’d prefer to keep using MySQL since that’s what I use in production, but if SQLite 3 or something else makes this easy, I’d go that way.

Does SQLite or MySQL have an option to run entirely in memory? It should be possible to configure a RAM disk and then configure the test database to store its data there, but I’m not sure how to tell Django / MySQL to use a different data directory for a certain database, especially since it keeps getting erased and recreated each run. (I’m on a Mac FWIW.)


Answer 0

If you set your database engine to sqlite3 when you run your tests, Django will use an in-memory database.

I’m using code like this in my settings.py to set the engine to sqlite when running my tests:

if 'test' in sys.argv:
    DATABASE_ENGINE = 'sqlite3'

Or in Django 1.2:

if 'test' in sys.argv:
    DATABASES['default'] = {'ENGINE': 'sqlite3'}

And finally in Django 1.3 and 1.4:

if 'test' in sys.argv:
    DATABASES['default'] = {'ENGINE': 'django.db.backends.sqlite3'}

(The full path to the backend isn’t strictly necessary with Django 1.3, but makes the setting forward compatible.)

You can also add the following line, in case you are having problems with South migrations:

    SOUTH_TESTS_MIGRATE = False

Answer 1

I usually create a separate settings file for tests and use it in the test command, e.g.

python manage.py test --settings=mysite.test_settings myapp

It has two benefits:

  1. You don’t have to check for test or any such magic word in sys.argv; test_settings.py can simply be

    from settings import *
    
    # make tests faster
    SOUTH_TESTS_MIGRATE = False
    DATABASES['default'] = {'ENGINE': 'django.db.backends.sqlite3'}
    

    Or you can further tweak it for your needs, cleanly separating test settings from production settings.

  2. Another benefit is that you can run tests with the production database engine instead of sqlite3, avoiding subtle bugs; so while developing, use

    python manage.py test --settings=mysite.test_settings myapp
    

    and before committing code run once

    python manage.py test myapp
    

    just to be sure that all tests are really passing.


Answer 2

MySQL supports a storage engine called “MEMORY”, which you can configure in your database config (settings.py) as such:

DATABASES = {
    'default': {
        # ... your usual MySQL ENGINE / NAME settings here ...
        'USER': 'root',                      # Not used with sqlite3.
        'PASSWORD': '',                  # Not used with sqlite3.
        'OPTIONS': {
            "init_command": "SET storage_engine=MEMORY",
        },
    }
}

Note that the MEMORY storage engine doesn’t support blob / text columns, so if you’re using django.db.models.TextField this won’t work for you.


Answer 3

I can’t answer your main question, but there are a couple of things that you can do to speed things up.

Firstly, make sure that your MySQL database is set up to use InnoDB. Then it can use transactions to rollback the state of the db before each test, which in my experience has led to a massive speed-up. You can pass a database init command in your settings.py (Django 1.2 syntax):

DATABASES = {
    'default': {
            'ENGINE':'django.db.backends.mysql',
            'HOST':'localhost',
            'NAME':'mydb',
            'USER':'whoever',
            'PASSWORD':'whatever',
            'OPTIONS':{"init_command": "SET storage_engine=INNODB" } 
        }
    }

Secondly, you don’t need to run the South migrations each time. Set SOUTH_TESTS_MIGRATE = False in your settings.py and the database will be created with plain syncdb, which will be much quicker than running through all the historic migrations.


Answer 4

You can do two tweaks:

  • use transactional tables: the initial fixture state will be restored using a database rollback after every TestCase.
  • put your database data dir on a ramdisk: you will gain a lot as far as database creation is concerned, and running tests will be faster too.

I’m using both tricks and I’m quite happy.

How to set up it for MySQL on Ubuntu:

$ sudo service mysql stop
$ sudo cp -pRL /var/lib/mysql /dev/shm/mysql

$ vim /etc/mysql/my.cnf
# datadir = /dev/shm/mysql
$ sudo service mysql start

Beware: this is just for testing; after a reboot your in-memory database is lost!


Answer 5

Another approach: have another instance of MySQL running in a tmpfs that uses a RAM disk. Instructions in this blog post: Speeding up MySQL for testing in Django.

Advantages:

  • You use exactly the same database that your production server uses
  • no need to change your default mysql configuration

Answer 6

Extending Anurag’s answer, I simplified the process by creating the same test_settings and adding the following to manage.py:

if len(sys.argv) > 1 and sys.argv[1] == "test":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.test_settings")
else:
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

This seems cleaner, since sys is already imported and manage.py is only used via the command line, so there’s no need to clutter up the settings.


Answer 7

Use the line below in your settings.py:

DATABASES['default']['ENGINE'] = 'django.db.backends.sqlite3'

Python: mocking a function from an imported module

Question: Python: mocking a function from an imported module

I want to understand how to @patch a function from an imported module.

This is where I am so far.

app/mocking.py:

from app.my_module import get_user_name

def test_method():
  return get_user_name()

if __name__ == "__main__":
  print "Starting Program..."
  test_method()

app/my_module/__init__.py:

def get_user_name():
  return "Unmocked User"

test/mock-test.py:

import unittest
from mock import patch
from app.mocking import test_method

def mock_get_user():
  return "Mocked This Silly"

@patch('app.my_module.get_user_name')
class MockingTestTestCase(unittest.TestCase):

  def test_mock_stubs(self, mock_method):
    mock_method.return_value = 'Mocked This Silly'
    ret = test_method()
    self.assertEqual(ret, 'Mocked This Silly')

if __name__ == '__main__':
  unittest.main()

This does not work as I would expect. The “patched” module simply returns the unmocked value of get_user_name. How do I mock methods from other packages that I am importing into a namespace under test?


Answer 0

When you are using the patch decorator from the unittest.mock package, you are not patching the namespace the module is imported from (in this case app.my_module.get_user_name); you are patching it in the namespace under test, app.mocking.get_user_name.

To do the above with Mock try something like the below:

from mock import patch
from app.mocking import test_method 

class MockingTestTestCase(unittest.TestCase):

    @patch('app.mocking.get_user_name')
    def test_mock_stubs(self, test_patch):
        test_patch.return_value = 'Mocked This Silly'
        ret = test_method()
        self.assertEqual(ret, 'Mocked This Silly')

The standard library documentation includes a useful section describing this.


Answer 1

While Matti John’s answer solves your issue (and helped me too, thanks!), I would, however, suggest localizing the replacement of the original ‘get_user_name’ function with the mocked one. This will allow you to control when the function is replaced and when it isn’t. Also, this will allow you to make several replacements in the same test. In order to do so, use the ‘with’ statement in a pretty similar manner:

from mock import patch

class MockingTestTestCase(unittest.TestCase):

    def test_mock_stubs(self):
        with patch('app.mocking.get_user_name', return_value = 'Mocked This Silly'):
            ret = test_method()
            self.assertEqual(ret, 'Mocked This Silly')

__init__ for unittest.TestCase

Question: __init__ for unittest.TestCase

I’d like to add a couple of things to what the unittest.TestCase class does upon being initialized but I can’t figure out how to do it.

Right now I’m doing this:

#filename test.py

import tempfile
import unittest

class TestingClass(unittest.TestCase):

    def __init__(self):
        self.gen_stubs()

    def gen_stubs(self):
        # Create a couple of tempfiles/dirs etc etc.
        self.tempdir = tempfile.mkdtemp()
        # more stuff here

I’d like all the stubs to be generated only once for this entire set of tests. I can’t use setUpClass() because I’m working on Python 2.4 (I haven’t been able to get that working on python 2.7 either).

What am I doing wrong here?

I get this error:

 `TypeError: __init__() takes 1 argument (2 given)` 

…and other errors when I move all of the stub code into __init__ and run it with the command python -m unittest -v test.


Answer 0

Try this:

class TestingClass(unittest.TestCase):

    def __init__(self, *args, **kwargs):
        super(TestingClass, self).__init__(*args, **kwargs)
        self.gen_stubs()

You are overriding the TestCase‘s __init__, so you might want to let the base class handle the arguments for you.


Answer 1

Just wanted to add some clarifications about overriding the __init__ function of unittest.TestCase: the function will be called before each test method in your test class. Please note that if you want to add some expensive computation that should be performed once before running all test methods, use the setUpClass classmethod:

@classmethod
def setUpClass(cls):
    cls.attribute1 = some_expensive_computation()

This function will be called once before all test methods of the class. See setUp for a method that is called before each test method.


Answer 2

Install unittest2 and use that package’s unittest.

import unittest2 

and then use setUpModule / tearDownModule or setUpClass / tearDownClass for special initialization logic.

More info: http://www.voidspace.org.uk/python/articles/unittest2.shtml

Also, most likely you are creating an integration test rather than a unit test. Choose a good name for the tests to differentiate them, or put them in a different container module.


Unittest setUp/tearDown for multiple tests

Question: Unittest setUp/tearDown for multiple tests

Is there a function that is fired at the beginning/end of a scenario of tests? The functions setUp and tearDown are fired before/after every single test.

I typically would like to have this:

class TestSequenceFunctions(unittest.TestCase):

    def setUpScenario(self):
        start() #launched at the beginning, once

    def test_choice(self):
        element = random.choice(self.seq)
        self.assertTrue(element in self.seq)

    def test_sample(self):
        with self.assertRaises(ValueError):
            random.sample(self.seq, 20)
        for element in random.sample(self.seq, 5):
            self.assertTrue(element in self.seq)

    def tearDownScenario(self):
        end() #launched at the end, once

For now, this setup and teardown code lives in unit tests spread across all my scenarios (each containing many tests): one is the first test, the other is the last test.


Answer 0

As of 2.7 (per the documentation) you get setUpClass and tearDownClass which execute before and after the tests in a given class are run, respectively. Alternatively, if you have a group of them in one file, you can use setUpModule and tearDownModule (documentation).

Otherwise your best bet is probably going to be to create your own derived TestSuite and override run(). All other calls would be handled by the parent, and run would call your setup and teardown code around a call up to the parent’s run method.
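
A minimal sketch of those 2.7 hooks (the fixture value is a placeholder):

import unittest

def setUpModule():
    print('module-level setup, runs once before any test in this module')

def tearDownModule():
    print('module-level teardown, runs once after all tests in this module')

class TestScenario(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.scenario = 'expensive shared fixture'   # built once per class

    @classmethod
    def tearDownClass(cls):
        cls.scenario = None

    def test_uses_scenario(self):
        self.assertEqual(self.scenario, 'expensive shared fixture')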


Answer 1

I have the same scenario; for me the setUpClass and tearDownClass methods work perfectly:

import unittest

class Test(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls._connection = createExpensiveConnectionObject()

    @classmethod
    def tearDownClass(cls):
        cls._connection.destroy()

Answer 2

For python 2.5, and when working with pydev, it’s a bit hard. It appears that pydev doesn’t use the test suite, but finds all individual test cases and runs them all separately.

My solution for this was using a class variable like this:

class TestCase(unittest.TestCase):
    runCount = 0

    def setUpClass(self):
        pass # overridden in actual testcases

    def run(self, result=None):
        if type(self).runCount == 0:
            self.setUpClass()

        super(TestCase, self).run(result)
        type(self).runCount += 1

With this trick, when you inherit from this TestCase (instead of from the original unittest.TestCase), you’ll also inherit the runCount of 0. Then in the run method, the runCount of the child testcase is checked and incremented. This leaves the runCount variable for this class at 0.

This means the setUpClass will only be run once per class and not once per instance.

I don’t have a tearDownClass method yet, but I guess something could be done using that counter.


Answer 3

Here is an example: 3 test methods access a shared resource, which is created once, not per test.

import unittest
import random

class TestSimulateLogistics(unittest.TestCase):

    shared_resource = None

    @classmethod
    def setUpClass(cls):
        cls.shared_resource = random.randint(1, 100)

    @classmethod
    def tearDownClass(cls):
        cls.shared_resource = None

    def test_1(self):
        print('test 1:', self.shared_resource)

    def test_2(self):
        print('test 2:', self.shared_resource)

    def test_3(self):
        print('test 3:', self.shared_resource)


Mocking a function to raise an Exception to test an except block

Question: Mocking a function to raise an Exception to test an except block

I have a function (foo) which calls another function (bar). If invoking bar() raises an HttpError, I want to handle it specially if the status code is 404, otherwise re-raise.

I am trying to write some unit tests around this foo function, mocking out the call to bar(). Unfortunately, I am unable to get the mocked call to bar() to raise an Exception which is caught by my except block.

Here is my code which illustrates my problem:

import unittest
import mock
from apiclient.errors import HttpError


class FooTests(unittest.TestCase):
    @mock.patch('my_tests.bar')
    def test_foo_shouldReturnResultOfBar_whenBarSucceeds(self, barMock):
        barMock.return_value = True
        result = foo()
        self.assertTrue(result)  # passes

    @mock.patch('my_tests.bar')
    def test_foo_shouldReturnNone_whenBarRaiseHttpError404(self, barMock):
        barMock.side_effect = HttpError(mock.Mock(return_value={'status': 404}), 'not found')
        result = foo()
        self.assertIsNone(result)  # fails, test raises HttpError

    @mock.patch('my_tests.bar')
    def test_foo_shouldRaiseHttpError_whenBarRaiseHttpErrorNot404(self, barMock):
        barMock.side_effect = HttpError(mock.Mock(return_value={'status': 500}), 'error')
        with self.assertRaises(HttpError):  # passes
            foo()

def foo():
    try:
        result = bar()
        return result
    except HttpError as error:
        if error.resp.status == 404:
            print '404 - %s' % error.message
            return None
        raise

def bar():
    raise NotImplementedError()

I followed the Mock docs which say that you should set the side_effect of a Mock instance to an Exception class to have the mocked function raise the error.

I also looked at some other related StackOverflow Q&As, and it looks like I am doing the same thing they are doing to cause an Exception to be raised by their mock.

Why is setting the side_effect of barMock not causing the expected Exception to be raised? If I am doing something weird, how should I go about testing logic in my except block?


Answer 0

Your mock is raising the exception just fine, but the error.resp.status value is missing. Rather than use return_value, just tell Mock that status is an attribute:

barMock.side_effect = HttpError(mock.Mock(status=404), 'not found')

Additional keyword arguments to Mock() are set as attributes on the resulting object.

I put your foo and bar definitions in a my_tests module, added an HttpError class so I could use it too, and your test can then be run successfully:

>>> from my_tests import foo, HttpError
>>> import mock
>>> with mock.patch('my_tests.bar') as barMock:
...     barMock.side_effect = HttpError(mock.Mock(status=404), 'not found')
...     result = foo()
... 
404 - 
>>> result is None
True

You can even see the print '404 - %s' % error.message line run, but I think you wanted to use error.content there instead; that’s the attribute HttpError() sets from the second argument, at any rate.
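
For reference, here is a self-contained sketch of the fixed tests. The HttpError class below is a stand-in (an assumption: just enough shape to run without apiclient installed), and the sketch targets Python 3, where mock ships as unittest.mock:

import unittest
from unittest import mock

class HttpError(Exception):
    # Stand-in for apiclient's HttpError: the response object is exposed
    # as .resp and the body as .content (an assumption for this sketch).
    def __init__(self, resp, content):
        super().__init__(content)
        self.resp = resp
        self.content = content

def bar():
    raise NotImplementedError()

def foo():
    try:
        return bar()
    except HttpError as error:
        if error.resp.status == 404:
            print('404 - %s' % error.content)
            return None
        raise

class FooTests(unittest.TestCase):
    @mock.patch('%s.bar' % __name__)
    def test_foo_returns_none_on_404(self, bar_mock):
        bar_mock.side_effect = HttpError(mock.Mock(status=404), 'not found')
        self.assertIsNone(foo())

    @mock.patch('%s.bar' % __name__)
    def test_foo_reraises_other_errors(self, bar_mock):
        bar_mock.side_effect = HttpError(mock.Mock(status=500), 'error')
        with self.assertRaises(HttpError):
            foo()

if __name__ == '__main__':
    unittest.main()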


Mocking a class: Mock() or patch()?

Question: Mocking a class: Mock() or patch()?

I am using mock with Python and was wondering which of those two approaches is better (read: more pythonic).

Method one: Just create a mock object and use that. The code looks like:

def test_one (self):
    mock = Mock()
    mock.method.return_value = True 
    self.sut.something(mock)  # This should call mock.method; we then check the result.
    self.assertTrue(mock.method.called)

Method two: Use patch to create a mock. The code looks like:

@patch("MyClass")
def test_two (self, mock):
    instance = mock.return_value
    instance.method.return_value = True
    self.sut.something(instance)  # This should call mock.method; we then check the result.
    self.assertTrue(instance.method.called)

Both methods do the same thing. I am unsure of the differences.

Could anyone enlighten me?


Answer 0

mock.patch is a very different creature from mock.Mock: patch replaces the class with a mock object and lets you work with the mock instance. Take a look at this snippet:

>>> class MyClass(object):
...   def __init__(self):
...     print 'Created MyClass@{0}'.format(id(self))
... 
>>> def create_instance():
...   return MyClass()
... 
>>> x = create_instance()
Created MyClass@4299548304
>>> 
>>> @mock.patch('__main__.MyClass')
... def create_instance2(MyClass):
...   MyClass.return_value = 'foo'
...   return create_instance()
... 
>>> i = create_instance2()
>>> i
'foo'
>>> def create_instance():
...   print MyClass
...   return MyClass()
...
>>> create_instance2()
<mock.Mock object at 0x100505d90>
'foo'
>>> create_instance()
<class '__main__.MyClass'>
Created MyClass@4300234128
<__main__.MyClass object at 0x100505d90>

patch replaces MyClass in a way that allows you to control the usage of the class in functions that you call. Once you patch a class, references to the class are completely replaced by the mock instance.

mock.patch is usually used when you are testing something that creates a new instance of a class inside of the test. mock.Mock instances are clearer and are preferred. If your self.sut.something method created an instance of MyClass instead of receiving an instance as a parameter, then mock.patch would be appropriate here.
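
As a hedged aside (not from the original answer): patch also accepts autospec=True, which makes the replacement mirror the real class's attributes and call signatures, so a typo in the test fails loudly instead of passing silently. A minimal sketch along the lines of the snippet above, for Python 3:

from unittest import mock

class MyClass(object):
    def method(self):
        return False

with mock.patch('__main__.MyClass', autospec=True) as MyClassMock:
    instance = MyClassMock.return_value
    instance.method.return_value = True
    assert MyClass().method() is True   # the patched class is in effect
    try:
        instance.no_such_method         # autospec rejects names the real class lacks
    except AttributeError:
        print('autospec caught the typo')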


Answer 1

I’ve got a YouTube video on this.

Short answer: Use mock when you’re passing in the thing that you want mocked, and patch if you’re not. Of the two, mock is strongly preferred because it means you’re writing code with proper dependency injection.

Silly example:

# Use a mock to test this.
def my_custom_tweeter(twitter_api, sentence):
    sentence = sentence.replace('cks', 'x')   # We're cool and hip.
    twitter_api.send(sentence)

# Use patch to mock out twitter_api. You have to patch the Twitter() module/class
# and have it return a mock. Much uglier, but sometimes necessary.
def my_badly_written_tweeter(sentence):
    twitter_api = Twitter(user="XXX", password="YYY")
    sentence = sentence.replace('cks', 'x')
    twitter_api.send(sentence)
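
A hedged sketch of how the two styles look in the tests themselves, assuming the functions above live in a hypothetical tweeter module next to a Twitter class:

import unittest
from unittest import mock

import tweeter  # hypothetical module holding the two functions above

class TweeterTests(unittest.TestCase):
    def test_custom_tweeter(self):
        # The dependency is injected, so a plain Mock is enough.
        api = mock.Mock()
        tweeter.my_custom_tweeter(api, 'rocks')
        api.send.assert_called_once_with('rox')

    @mock.patch('tweeter.Twitter')
    def test_badly_written_tweeter(self, TwitterMock):
        # The instance is created inside the function, so we must patch.
        instance = TwitterMock.return_value
        tweeter.my_badly_written_tweeter('rocks')
        instance.send.assert_called_once_with('rox')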

Outputting data from a unit test in Python

Question: Outputting data from a unit test in Python

If I’m writing unit tests in python (using the unittest module), is it possible to output data from a failed test, so I can examine it to help deduce what caused the error? I am aware of the ability to create a customized message, which can carry some information, but sometimes you might deal with more complex data, that can’t easily be represented as a string.

For example, suppose you had a class Foo, and were testing a method bar, using data from a list called testdata:

class TestBar(unittest.TestCase):
    def runTest(self):
        for t1, t2 in testdata:
            f = Foo(t1)
            self.assertEqual(f.bar(t2), 2)

If the test failed, I might want to output t1, t2 and/or f, to see why this particular data resulted in a failure. By output, I mean that the variables can be accessed like any other variables, after the test has been run.


Answer 0

Very late answer for someone that, like me, comes here looking for a simple and quick answer.

In Python 2.7 you could use an additional parameter msg to add information to the error message like this:

self.assertEqual(f.bar(t2), 2, msg='{0}, {1}'.format(t1, t2))

Official docs here
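
On Python 3.4 and later there is also a built-in way to get this per-iteration context: wrap each iteration in self.subTest() and unittest prints the supplied values with every failure, and keeps iterating instead of stopping at the first bad pair. A minimal sketch, reusing Foo and testdata from the question:

import unittest

class TestBar(unittest.TestCase):
    def runTest(self):
        for t1, t2 in testdata:
            with self.subTest(t1=t1, t2=t2):
                f = Foo(t1)
                self.assertEqual(f.bar(t2), 2)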


Answer 1

We use the logging module for this.

For example:

import sys
import unittest
import logging

class SomeTest(unittest.TestCase):
    def testSomething(self):
        log = logging.getLogger("SomeTest.testSomething")
        log.debug("this= %r", self.this)
        log.debug("that= %r", self.that)
        # etc.
        self.assertEqual(3.14, pi)

if __name__ == "__main__":
    logging.basicConfig(stream=sys.stderr)
    logging.getLogger("SomeTest.testSomething").setLevel(logging.DEBUG)
    unittest.main()

That allows us to turn on debugging for specific tests which we know are failing and for which we want additional debugging information.

My preferred method, however, isn’t to spend a lot of time on debugging, but spend it writing more fine-grained tests to expose the problem.


Answer 2

You can use simple print statements, or any other way of writing to stdout. You can also invoke the Python debugger anywhere in your tests.

If you use nose to run your tests (which I recommend), it will collect the stdout for each test and only show it to you if the test failed, so you don’t have to live with the cluttered output when the tests pass.

nose also has switches to automatically show variables mentioned in asserts, or to invoke the debugger on failed tests. For example -s (--nocapture) prevents the capture of stdout.


Answer 3

I don't think this is quite what you're looking for; there's no way to display the values of variables from tests that don't fail, but this may help you get closer to outputting the results the way you want.

You can use the TestResult object returned by TestRunner.run() for results analysis and processing, particularly TestResult.errors and TestResult.failures.

About the TestResult object:

http://docs.python.org/library/unittest.html#id3

And some code to point you in the right direction:

>>> import random
>>> import unittest
>>>
>>> class TestSequenceFunctions(unittest.TestCase):
...     def setUp(self):
...         self.seq = range(5)
...     def testshuffle(self):
...         # make sure the shuffled sequence does not lose any elements
...         random.shuffle(self.seq)
...         self.seq.sort()
...         self.assertEqual(self.seq, range(10))
...     def testchoice(self):
...         element = random.choice(self.seq)
...         error_test = 1/0
...         self.assert_(element in self.seq)
...     def testsample(self):
...         self.assertRaises(ValueError, random.sample, self.seq, 20)
...         for element in random.sample(self.seq, 5):
...             self.assert_(element in self.seq)
...
>>> suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
>>> testResult = unittest.TextTestRunner(verbosity=2).run(suite)
testchoice (__main__.TestSequenceFunctions) ... ERROR
testsample (__main__.TestSequenceFunctions) ... ok
testshuffle (__main__.TestSequenceFunctions) ... FAIL

======================================================================
ERROR: testchoice (__main__.TestSequenceFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<stdin>", line 11, in testchoice
ZeroDivisionError: integer division or modulo by zero

======================================================================
FAIL: testshuffle (__main__.TestSequenceFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<stdin>", line 8, in testshuffle
AssertionError: [0, 1, 2, 3, 4] != [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

----------------------------------------------------------------------
Ran 3 tests in 0.031s

FAILED (failures=1, errors=1)
>>>
>>> testResult.errors
[(<__main__.TestSequenceFunctions testMethod=testchoice>, 'Traceback (most recent call last):\n  File "<stdin>"
, line 11, in testchoice\nZeroDivisionError: integer division or modulo by zero\n')]
>>>
>>> testResult.failures
[(<__main__.TestSequenceFunctions testMethod=testshuffle>, 'Traceback (most recent call last):\n  File "<stdin>
", line 8, in testshuffle\nAssertionError: [0, 1, 2, 3, 4] != [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n')]
>>>

Answer 4

Another option – start a debugger where the test fails.

Try running your tests with Testoob (it will run your unittest suite without changes); you can use the --debug command-line switch to open a debugger when a test fails.

Here's a terminal session on Windows:

C:\work> testoob tests.py --debug
F
Debugging for failure in test: test_foo (tests.MyTests.test_foo)
> c:\python25\lib\unittest.py(334)failUnlessEqual()
-> (msg or '%r != %r' % (first, second))
(Pdb) up
> c:\work\tests.py(6)test_foo()
-> self.assertEqual(x, y)
(Pdb) l
  1     from unittest import TestCase
  2     class MyTests(TestCase):
  3       def test_foo(self):
  4         x = 1
  5         y = 2
  6  ->     self.assertEqual(x, y)
[EOF]
(Pdb)
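
If you would rather stay with the standard library, a similar effect can be sketched with pdb's post-mortem hook in a custom result class (this is a sketch, not part of Testoob; written for Python 3):

import pdb
import unittest

class PostMortemResult(unittest.TextTestResult):
    # Drop into the debugger at the frame where the test failed or errored;
    # err is the (exc_type, exc_value, traceback) triple unittest hands us.
    def addFailure(self, test, err):
        super().addFailure(test, err)
        pdb.post_mortem(err[2])

    def addError(self, test, err):
        super().addError(test, err)
        pdb.post_mortem(err[2])

if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=PostMortemResult))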

Answer 5

The method I use is really simple. I just log it as a warning so it will actually show up.

import logging
import unittest

class TestBar(unittest.TestCase):
    def runTest(self):

        # this line is important
        logging.basicConfig()
        log = logging.getLogger("LOG")

        for t1, t2 in testdata:   # testdata and Foo as defined in the question
            f = Foo(t1)
            self.assertEqual(f.bar(t2), 2)
            log.warning(t1)

Answer 6

I think I might have been overthinking this. One way I've come up with that does the job is simply to have a global variable that accumulates the diagnostic data.

Something like this:

log1 = dict()
class TestBar(unittest.TestCase):
    def runTest(self):
        for t1, t2 in testdata:
            f = Foo(t1)
            if f.bar(t2) != 2:
                log1["TestBar.runTest"] = (f, t1, t2)
                self.fail("f.bar(t2) != 2")

Thanks for the replies. They have given me some alternative ideas for how to record information from unit tests in python.


Answer 7

Use logging:

import unittest
import logging
import inspect
import os

logging_level = logging.INFO

try:
    log_file = os.environ["LOG_FILE"]
except KeyError:
    log_file = None

def logger(stack=None):
    if not hasattr(logger, "initialized"):
        logging.basicConfig(filename=log_file, level=logging_level)
        logger.initialized = True
    if not stack:
        stack = inspect.stack()
    name = stack[1][3]
    try:
        name = stack[1][0].f_locals["self"].__class__.__name__ + "." + name
    except KeyError:
        pass
    return logging.getLogger(name)

def todo(msg):
    logger(inspect.stack()).warning("TODO: {}".format(msg))

def get_pi():
    logger().info("sorry, I know only three digits")
    return 3.14

class Test(unittest.TestCase):

    def testName(self):
        todo("use a better get_pi")
        pi = get_pi()
        logger().info("pi = {}".format(pi))
        todo("check more digits in pi")
        self.assertAlmostEqual(pi, 3.14)
        logger().debug("end of this test")
        pass

Usage:

# LOG_FILE=/tmp/log python3 -m unittest LoggerDemo
.
----------------------------------------------------------------------
Ran 1 test in 0.047s

OK
# cat /tmp/log
WARNING:Test.testName:TODO: use a better get_pi
INFO:get_pi:sorry, I know only three digits
INFO:Test.testName:pi = 3.14
WARNING:Test.testName:TODO: check more digits in pi

If you do not set LOG_FILE, logging goes to stderr.


Answer 8

You can use the logging module for that.

So in the unit test code, use:

import logging as log

def test_foo(self):
    log.debug("Some debug message.")
    log.info("Some info message.")
    log.warning("Some warning message.")
    log.error("Some error message.")

By default, warnings and errors go to stderr, so they should be visible on the console.

To customize logs (such as formatting), try the following sample:

# Set up the logger ("args" here comes from, e.g., argparse)
if args.verbose or args.debug:
    level = logging.INFO if args.verbose else logging.DEBUG
    root = logging.getLogger()
    root.setLevel(level)
    ch = logging.StreamHandler(sys.stdout)
    ch.setLevel(level)
    ch.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(name)s: %(message)s'))
    root.addHandler(ch)
else:
    logging.basicConfig(stream=sys.stderr)

Answer 9

What I do in these cases is put some log.debug() messages in my application. Since the default logging level is WARNING, such messages don't show up during normal execution.

Then, in the unit tests, I change the logging level to DEBUG so that those messages are shown while the tests run.

import logging

log = logging.getLogger(__name__)

log.debug("Some messages to be shown just when debugging or unittesting")

In the unittests:

# Set log level
loglevel = logging.DEBUG
logging.basicConfig(level=loglevel)



See a full example:

This is daikiri.py, a basic class that implements a Daikiri with its name and price. There is a method make_discount() that returns the price of that specific daikiri after applying a given discount:

import logging

log = logging.getLogger(__name__)

class Daikiri(object):
    def __init__(self, name, price):
        self.name = name
        self.price = price

    def make_discount(self, percentage):
        log.debug("Deducting discount...")  # I want to see this message
        return self.price * percentage

Then, I create a unittest test_daikiri.py that checks its usage:

import unittest
import logging
from .daikiri import Daikiri


class TestDaikiri(unittest.TestCase):
    def setUp(self):
        # Changing log level to DEBUG
        loglevel = logging.DEBUG
        logging.basicConfig(level=loglevel)

        self.mydaikiri = Daikiri("cuban", 25)

    def test_drop_price(self):
        new_price = self.mydaikiri.make_discount(0)
        self.assertEqual(new_price, 0)

if __name__ == "__main__":
    unittest.main()

So when I execute it I get the log.debug messages:

$ python -m test_daikiri
DEBUG:daikiri:Deducting discount...
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
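
One caveat with configuring logging in setUp: logging.basicConfig() is a no-op once the root logger already has handlers, so the level set there only takes effect the first time it runs; on Python 3.8+ you can pass force=True to reconfigure unconditionally.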

Answer 10

inspect.trace will let you get local variables after an exception has been thrown. You can then wrap the unit tests with a decorator like the following one to save off those local variables for examination during the post mortem.

import random
import unittest
import inspect


def store_result(f):
    """
    Store the results of a test
    On success, store the return value.
    On failure, store the local variables where the exception was thrown.
    """
    def wrapped(self):
        if 'results' not in self.__dict__:
            self.results = {}
        # If a test throws an exception, store local variables in results:
        try:
            result = f(self)
        except Exception:
            self.results[f.__name__] = {'success':False, 'locals':inspect.trace()[-1][0].f_locals}
            raise  # bare raise preserves the original traceback
        self.results[f.__name__] = {'success':True, 'result':result}
        return result
    return wrapped

def suite_results(suite):
    """
    Get all the results from a test suite
    """
    ans = {}
    for test in suite:
        if 'results' in test.__dict__:
            ans.update(test.results)
    return ans

# Example:
class TestSequenceFunctions(unittest.TestCase):

    def setUp(self):
        self.seq = range(10)

    @store_result
    def test_shuffle(self):
        # make sure the shuffled sequence does not lose any elements
        random.shuffle(self.seq)
        self.seq.sort()
        self.assertEqual(self.seq, range(10))
        # should raise an exception for an immutable sequence
        self.assertRaises(TypeError, random.shuffle, (1,2,3))
        return {1:2}

    @store_result
    def test_choice(self):
        element = random.choice(self.seq)
        self.assertTrue(element in self.seq)
        return {7:2}

    @store_result
    def test_sample(self):
        x = 799
        with self.assertRaises(ValueError):
            random.sample(self.seq, 20)
        for element in random.sample(self.seq, 5):
            self.assertTrue(element in self.seq)
        return {1:99999}


suite = unittest.TestLoader().loadTestsFromTestCase(TestSequenceFunctions)
unittest.TextTestRunner(verbosity=2).run(suite)

from pprint import pprint
pprint(suite_results(suite))

The last line will print the returned values where the test succeeded and the local variables, in this case x, when it fails:

{'test_choice': {'result': {7: 2}, 'success': True},
 'test_sample': {'locals': {'self': <__main__.TestSequenceFunctions testMethod=test_sample>,
                            'x': 799},
                 'success': False},
 'test_shuffle': {'result': {1: 2}, 'success': True}}

Have fun :-)


Answer 11

How about catching the exception generated by the assertion failure? In your except block you could output the data however and wherever you want, then re-raise the exception when you're done. The test runner probably wouldn't know the difference.

Disclaimer: I haven't tried this with Python's unittest framework, but I have with other unit test frameworks.
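
A minimal sketch of that idea with unittest (t1 and t2 are stand-ins for whatever data you want to dump):

import unittest

class TestBar(unittest.TestCase):
    def test_bar(self):
        t1, t2 = 'spam', 'eggs'
        try:
            self.assertEqual(t1, t2)
        except AssertionError:
            # Output the context, then re-raise so the runner
            # still records the failure.
            print('failing inputs: t1=%r, t2=%r' % (t1, t2))
            raise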


Answer 12

Admitting that I haven't tried it, testfixtures' logging feature looks quite useful…
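
For reference, a hedged sketch of how testfixtures' LogCapture is typically used (untried here, in the same spirit as the answer above):

import logging
from testfixtures import LogCapture

with LogCapture() as captured:
    logging.getLogger().info('a message')

# check() asserts on (logger name, level, message) tuples
captured.check(('root', 'INFO', 'a message'))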


Answer 13

Expanding on @F.C.'s answer, this works quite well for me:

class MyTest(unittest.TestCase):
    def messenger(self, message):
        try:
            self.assertEqual(1, 2, msg=message)
        except AssertionError as e:      
            print "\nMESSENGER OUTPUT: %s" % str(e),