Tag Archives: unit-testing

How to assert output with nosetests/unittest in Python?

Question: How to assert output with nosetests/unittest in Python?

I’m writing tests for a function like the next one:

def foo():
    print 'hello world!'

So when I want to test this function the code will be like this:

import sys
from foomodule import foo
def test_foo():
    foo()
    output = sys.stdout.getline().strip() # because stdout is a StringIO instance
    assert output == 'hello world!'

But if I run nosetests with the -s parameter, the test crashes. How can I catch the output with the unittest or nose module?


Answer 0

I use this context manager to capture output. It ultimately uses the same technique as some of the other answers by temporarily replacing sys.stdout. I prefer the context manager because it wraps all the bookkeeping into a single function, so I don’t have to re-write any try-finally code, and I don’t have to write setup and teardown functions just for this.

import sys
from contextlib import contextmanager
from StringIO import StringIO

@contextmanager
def captured_output():
    new_out, new_err = StringIO(), StringIO()
    old_out, old_err = sys.stdout, sys.stderr
    try:
        sys.stdout, sys.stderr = new_out, new_err
        yield sys.stdout, sys.stderr
    finally:
        sys.stdout, sys.stderr = old_out, old_err

Use it like this:

with captured_output() as (out, err):
    foo()
# This can go inside or outside the `with` block
output = out.getvalue().strip()
self.assertEqual(output, 'hello world!')

Furthermore, since the original output state is restored upon exiting the with block, we can set up a second capture block in the same function as the first one, which isn’t possible using setup and teardown functions, and gets wordy when writing try-finally blocks manually. That ability came in handy when the goal of a test was to compare the results of two functions relative to each other rather than to some precomputed value.
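
A minimal sketch of that two-capture pattern, assuming a hypothetical second function bar() whose output should match foo()’s:

with captured_output() as (out1, err1):
    foo()
with captured_output() as (out2, err2):
    bar()  # hypothetical second function under comparison
self.assertEqual(out1.getvalue(), out2.getvalue())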


Answer 1

If you really want to do this, you can reassign sys.stdout for the duration of the test.

def test_foo():
    import sys
    from foomodule import foo
    from StringIO import StringIO

    saved_stdout = sys.stdout
    try:
        out = StringIO()
        sys.stdout = out
        foo()
        output = out.getvalue().strip()
        assert output == 'hello world!'
    finally:
        sys.stdout = saved_stdout

If I were writing this code, however, I would prefer to pass an optional out parameter to the foo function.

def foo(out=sys.stdout):
    out.write("hello, world!")

Then the test is much simpler:

def test_foo():
    from foomodule import foo
    from StringIO import StringIO

    out = StringIO()
    foo(out=out)
    output = out.getvalue().strip()
    assert output == 'hello world!'

Answer 2

Since version 2.7, you no longer need to reassign sys.stdout; this is provided through the buffer flag. Moreover, it is the default behavior of nosetests.

Here is a sample that fails in a non-buffered context:

import sys
import unittest

def foo():
    print 'hello world!'

class Case(unittest.TestCase):
    def test_foo(self):
        foo()
        if not hasattr(sys.stdout, "getvalue"):
            self.fail("need to run in buffered mode")
        output = sys.stdout.getvalue().strip() # because stdout is a StringIO instance
        self.assertEquals(output,'hello world!')

You can set the buffer through the unit2 command-line flags -b, --buffer, or in the unittest.main options. The opposite is achieved through the nosetests flag --nocapture.

if __name__=="__main__":   
    assert not hasattr(sys.stdout, "getvalue")
    unittest.main(module=__name__, buffer=True, exit=False)
    #.
    #----------------------------------------------------------------------
    #Ran 1 test in 0.000s
    #
    #OK
    assert not hasattr(sys.stdout, "getvalue")

    unittest.main(module=__name__, buffer=False)
    #hello world!
    #F
    #======================================================================
    #FAIL: test_foo (__main__.Case)
    #----------------------------------------------------------------------
    #Traceback (most recent call last):
    #  File "test_stdout.py", line 15, in test_foo
    #    self.fail("need to run in buffered mode")
    #AssertionError: need to run in buffered mode
    #
    #----------------------------------------------------------------------
    #Ran 1 test in 0.002s
    #
    #FAILED (failures=1)

Answer 3

A lot of these answers failed for me because you can’t from StringIO import StringIO in Python 3. Here’s a minimal working snippet based on @naxa’s comment and the Python Cookbook.

from io import StringIO
from unittest.mock import patch

with patch('sys.stdout', new=StringIO()) as fakeOutput:
    print('hello world')
    self.assertEqual(fakeOutput.getvalue().strip(), 'hello world')

Answer 4

In Python 3.4+ you can use contextlib.redirect_stdout() and StringIO(). Here’s the modification to your code:

import contextlib
from io import StringIO
from foomodule import foo

def test_foo():
    temp_stdout = StringIO()
    with contextlib.redirect_stdout(temp_stdout):
        foo()
    output = temp_stdout.getvalue().strip()
    assert output == 'hello world!'
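
contextlib also provides redirect_stderr() (added in Python 3.5) for the same pattern on stderr. A minimal sketch, assuming a hypothetical function that writes to sys.stderr:

import contextlib
from io import StringIO

def test_warns():
    temp_stderr = StringIO()
    with contextlib.redirect_stderr(temp_stderr):
        warn_about_something()  # hypothetical function that prints to sys.stderr
    assert 'warning' in temp_stderr.getvalue()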

Answer 5

I’m only just learning Python and found myself struggling with a problem similar to the one above, with unit tests for methods that produce output. My passing unit test for the foo module above ended up looking like this:

import sys
import unittest
from foo import foo
from StringIO import StringIO

class FooTest (unittest.TestCase):
    def setUp(self):
        self.held, sys.stdout = sys.stdout, StringIO()

    def test_foo(self):
        foo()
        self.assertEqual(sys.stdout.getvalue(),'hello world!\n')
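
One caveat: this setUp swaps out sys.stdout but never restores it, so output stays redirected after the test finishes. A tearDown along these lines (a sketch reusing self.held from the setUp above) restores it:

    def tearDown(self):
        sys.stdout = self.held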

Answer 6

Writing tests often shows us a better way to write our code. Similar to Shane’s answer, I’d like to suggest yet another way of looking at this. Do you really want to assert that your program outputted a certain string, or just that it constructed a certain string for output? This becomes easier to test, since we can probably assume that the Python print statement does its job correctly.

def foo_msg():
    return 'hello world'

def foo():
    print foo_msg()

Then your test is very simple:

def test_foo_msg():
    assert 'hello world' == foo_msg()

Of course, if you really have a need to test your program’s actual output, then feel free to disregard. :)


Answer 7

Based on Rob Kennedy’s answer, I wrote a class-based version of the context manager to buffer the output.

Usage is like:

with OutputBuffer() as bf:
    print('hello world')
assert bf.out == 'hello world\n'

Here’s the implementation:

from io import StringIO
import sys


class OutputBuffer(object):

    def __init__(self):
        self.stdout = StringIO()
        self.stderr = StringIO()

    def __enter__(self):
        self.original_stdout, self.original_stderr = sys.stdout, sys.stderr
        sys.stdout, sys.stderr = self.stdout, self.stderr
        return self

    def __exit__(self, exception_type, exception, traceback):
        sys.stdout, sys.stderr = self.original_stdout, self.original_stderr

    @property
    def out(self):
        return self.stdout.getvalue()

    @property
    def err(self):
        return self.stderr.getvalue()

Answer 8

Or consider using pytest, which has built-in support for asserting stdout and stderr. See the docs:

def test_myoutput(capsys): # or use "capfd" for fd-level
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"
    print("next")
    captured = capsys.readouterr()
    assert captured.out == "next\n"
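
Applied to the question’s function, the same fixture looks like this (a sketch assuming the foomodule from the question):

from foomodule import foo

def test_foo(capsys):
    foo()
    captured = capsys.readouterr()
    assert captured.out.strip() == 'hello world!'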

Answer 9

Both n611x007 and Noumenon already suggested using unittest.mock, but this answer adapts Acumenus’s to show how you can easily wrap unittest.TestCase methods to interact with a mocked stdout.

import io
import unittest
import unittest.mock

msg = "Hello World!"


# function we will be testing
def foo():
    print(msg, end="")


# create a decorator which wraps a TestCase method and pass it a mocked
# stdout object
mock_stdout = unittest.mock.patch('sys.stdout', new_callable=io.StringIO)


class MyTests(unittest.TestCase):

    @mock_stdout
    def test_foo(self, stdout):
        # run the function whose output we want to test
        foo()
        # get its output from the mocked stdout
        actual = stdout.getvalue()
        expected = msg
        self.assertEqual(actual, expected)

Answer 10

Building on all the awesome answers in this thread, this is how I solved it. I wanted to keep it as stock as possible. I augmented the unit test mechanism using setUp() to capture sys.stdout and sys.stderr, added new assert APIs to check the captured values against an expected value, and then restored sys.stdout and sys.stderr upon tearDown(). I did this to keep a unit test API similar to the built-in unittest API while still being able to unit test values printed to sys.stdout or sys.stderr.

import io
import sys
import unittest


class TestStdout(unittest.TestCase):

    # before each test, capture the sys.stdout and sys.stderr
    def setUp(self):
        self.test_out = io.StringIO()
        self.test_err = io.StringIO()
        self.original_output = sys.stdout
        self.original_err = sys.stderr
        sys.stdout = self.test_out
        sys.stderr = self.test_err

    # restore sys.stdout and sys.stderr after each test
    def tearDown(self):
        sys.stdout = self.original_output
        sys.stderr = self.original_err

    # assert that sys.stdout would be equal to expected value
    def assertStdoutEquals(self, value):
        self.assertEqual(self.test_out.getvalue().strip(), value)

    # assert that sys.stdout would not be equal to expected value
    def assertStdoutNotEquals(self, value):
        self.assertNotEqual(self.test_out.getvalue().strip(), value)

    # assert that sys.stderr would be equal to expected value
    def assertStderrEquals(self, value):
        self.assertEqual(self.test_err.getvalue().strip(), value)

    # assert that sys.stderr would not be equal to expected value
    def assertStderrNotEquals(self, value):
        self.assertNotEqual(self.test_err.getvalue().strip(), value)

    # example of unit test that can capture the printed output
    def test_print_good(self):
        print("------")

        # use assertStdoutEquals(value) to test if your
        # printed value matches your expected `value`
        self.assertStdoutEquals("------")

    # fails the test, expected different from actual!
    def test_print_bad(self):
        print("@=@=")
        self.assertStdoutEquals("@-@-")


if __name__ == '__main__':
    unittest.main()

When the unit test is run, the output is:

$ python3 -m unittest -v tests/print_test.py
test_print_bad (tests.print_test.TestStdout) ... FAIL
test_print_good (tests.print_test.TestStdout) ... ok

======================================================================
FAIL: test_print_bad (tests.print_test.TestStdout)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tests/print_test.py", line 51, in test_print_bad
    self.assertStdoutEquals("@-@-")
  File "/tests/print_test.py", line 24, in assertStdoutEquals
    self.assertEqual(self.test_out.getvalue().strip(), value)
AssertionError: '@=@=' != '@-@-'
- @=@=
+ @-@-


----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)

How do you unit test Celery tasks?

Question: How do you unit test Celery tasks?

The Celery documentation mentions testing Celery within Django but doesn’t explain how to test a Celery task if you are not using Django. How do you do this?


Answer 0

It is possible to test tasks synchronously using any unittest lib out there. I normally do 2 different test sessions when working with Celery tasks. The first one (as I’m suggesting below) is completely synchronous and should be the one that makes sure the algorithm does what it should do. The second session uses the whole system (including the broker) and makes sure I’m not having serialization issues or any other distribution or communication problem.

So:

from celery import Celery

celery = Celery()

@celery.task
def add(x, y):
    return x + y

And your test:

from nose.tools import eq_

def test_add_task():
    rst = add.apply(args=(4, 4)).get()
    eq_(rst, 8)

Hope that helps!
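
A sketch of that second, broker-backed session (it assumes a worker and a result backend are actually running, making it an integration test rather than a unit test):

def test_add_task_through_broker():
    result = add.delay(4, 4)
    assert result.get(timeout=10) == 8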


Answer 1

I use this:

with mock.patch('celeryconfig.CELERY_ALWAYS_EAGER', True, create=True):
    ...

Docs: http://docs.celeryproject.org/en/3.1/configuration.html#celery-always-eager

CELERY_ALWAYS_EAGER lets you run your tasks synchronously, and you don’t need a Celery server.


Answer 2

Depends on what exactly you want to be testing.

  • Test the task code directly. Don’t call “task.delay(…)”; just call “task(…)” from your unit tests (see the sketch below).
  • Use CELERY_ALWAYS_EAGER. This will cause your tasks to be called immediately at the point you say “task.delay(…)”, so you can test the whole path (but not any asynchronous behavior).
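
A minimal sketch of the first option, reusing the add task from the previous answer:

def test_add_directly():
    # calling the task object itself runs the task body synchronously, no broker involved
    assert add(4, 4) == 8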

Answer 3

unittest

import unittest

from myproject.myapp import celeryapp

class TestMyCeleryWorker(unittest.TestCase):

  def setUp(self):
      celeryapp.conf.update(CELERY_ALWAYS_EAGER=True)

py.test fixtures

# conftest.py
from myproject.myapp import celeryapp

@pytest.fixture(scope='module')
def celery_app(request):
    celeryapp.conf.update(CELERY_ALWAYS_EAGER=True)
    return celeryapp

# test_tasks.py
def test_some_task(celery_app):
    ...

Addendum: make send_task respect eager

from celery import current_app

def send_task(name, args=(), kwargs={}, **opts):
    # https://github.com/celery/celery/issues/581
    task = current_app.tasks[name]
    return task.apply(args, kwargs, **opts)

current_app.send_task = send_task

Answer 4

For those on Celery 4 it’s:

@override_settings(CELERY_TASK_ALWAYS_EAGER=True)

The setting names have been changed and need updating if you choose to upgrade; see

https://docs.celeryproject.org/en/latest/history/whatsnew-4.0.html?highlight=what%20is%20new#lowercase-setting-names


Answer 5

Celery 3.0开始CELERY_ALWAYS_EAGERDjango中进行设置的一种方法是:

from django.test import TestCase, override_settings

from .foo import foo_celery_task

class MyTest(TestCase):

    @override_settings(CELERY_ALWAYS_EAGER=True)
    def test_foo(self):
        self.assertTrue(foo_celery_task.delay())

As of Celery 3.0, one way to set CELERY_ALWAYS_EAGER in Django is:

from django.test import TestCase, override_settings

from .foo import foo_celery_task

class MyTest(TestCase):

    @override_settings(CELERY_ALWAYS_EAGER=True)
    def test_foo(self):
        self.assertTrue(foo_celery_task.delay())

Answer 6

Since Celery v4.0, py.test fixtures are provided to start a Celery worker just for the test; the worker is shut down when done:

def test_myfunc_is_executed(celery_session_worker):
    # celery_session_worker: <Worker: gen93553@gnpill.local (running)>
    assert myfunc.delay().wait(3)

Among other fixtures described on http://docs.celeryproject.org/en/latest/userguide/testing.html#py-test, you can change the celery default options by redefining the celery_config fixture this way:

@pytest.fixture(scope='session')
def celery_config():
    return {
        'accept_content': ['json', 'pickle'],
        'result_serializer': 'pickle',
    }

By default, the test worker uses an in-memory broker and result backend. No need to use a local Redis or RabbitMQ if not testing specific features.


Answer 7

Using pytest (see the Celery testing reference):

def test_add(celery_worker):
    mytask.delay()

If you use Flask, set the app config:

    CELERY_BROKER_URL = 'memory://'
    CELERY_RESULT_BACKEND = 'cache+memory://'

and in conftest.py

@pytest.fixture
def app():
    yield app   # Your actual Flask application

@pytest.fixture
def celery_app(app):
    from celery.contrib.testing import tasks   # need it
    yield celery_app    # Your actual Flask-Celery application

Answer 8

In my case (and I assume many others), all I wanted was to test the inner logic of a task using pytest.

TL;DR; ended up mocking everything away (OPTION 2)


Example Use Case:

proj/tasks.py

@shared_task(bind=True)
def add_task(self, a, b):
    return a+b;

tests/test_tasks.py

from proj import add_task

def test_add():
    assert add_task(1, 2) == 3, '1 + 2 should equal 3'

But since the shared_task decorator does a lot of Celery-internal logic, it isn’t really a unit test.

So, for me, there were 2 options:

OPTION 1: Separate internal logic

proj/tasks_logic.py

def internal_add(a, b):
    return a + b;

proj/tasks.py

from .tasks_logic import internal_add

@shared_task(bind=True)
def add_task(self, a, b):
    return internal_add(a, b);

This looks very odd, and other than making it less readable, it requires you to manually extract and pass attributes that are part of the request, for instance the task_id in case you need it, which makes the logic less pure.

OPTION 2: mocks
mocking away celery internals

tests/__init__.py

# noinspection PyUnresolvedReferences
from celery import shared_task

from mock import patch


def mock_signature(**kwargs):
    return {}


def mocked_shared_task(*decorator_args, **decorator_kwargs):
    def mocked_shared_decorator(func):
        func.signature = func.si = func.s = mock_signature
        return func

    return mocked_shared_decorator

patch('celery.shared_task', mocked_shared_task).start()

which then allows me to mock the request object (again, in case you need things from the request, like the id or the retries counter).

tests/test_tasks.py

from proj import add_task

class MockedRequest:
    def __init__(self, id=None):
        self.id = id or 1


class MockedTask:
    def __init__(self, id=None):
        self.request = MockedRequest(id=id)


def test_add():
    mocked_task = MockedTask(id=3)
    assert add_task(mocked_task, 1, 2) == 3, '1 + 2 should equal 3'

This solution is much more manual, but it gives me the control I need to actually unit test, without repeating myself and without losing the Celery scope.


Best way to assert numpy.array equality?

Question: Best way to assert numpy.array equality?

I want to make some unit-tests for my app, and I need to compare two arrays. Since array.__eq__ returns a new array (so TestCase.assertEqual fails), what is the best way to assert for equality?

Currently I’m using

self.assertTrue((arr1 == arr2).all())

but I don’t really like it


Answer 0

Check out the assert functions in numpy.testing, e.g.

assert_array_equal

For floating point arrays, the equality test might fail, and assert_almost_equal is more reliable.

Update

A few versions ago numpy obtained assert_allclose, which is now my favorite since it allows us to specify both absolute and relative error and doesn’t require decimal rounding as the closeness criterion.
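
A minimal usage sketch (the tolerances shown are just illustrative):

import numpy as np
from numpy.testing import assert_allclose

a = np.array([1.0, 2.0, 3.0])
b = a + 1e-9
assert_allclose(a, b, rtol=1e-7, atol=0)  # passes; raises AssertionError on mismatch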


Answer 1

I think (arr1 == arr2).all() looks pretty nice. But you could use:

numpy.allclose(arr1, arr2)

but it’s not quite the same.

An alternative, almost the same as your example is:

numpy.alltrue(arr1 == arr2)

Note that scipy.array is actually a reference to numpy.array. That makes it easier to find the documentation.


Answer 2

I find that using self.assertEqual(arr1.tolist(), arr2.tolist()) is the easiest way of comparing arrays with unittest.

I agree it’s not the prettiest solution, and it’s probably not the fastest, but it’s more uniform with the rest of your test cases, you get all the unittest error descriptions, and it’s really simple to implement.


Answer 3

Since Python 3.2 you can use assertSequenceEqual(array1.tolist(), array2.tolist()).

This has the added value of showing you the exact items in which the arrays differ.


Answer 4

In my tests I use this:

try:
    numpy.testing.assert_array_equal(arr1, arr2)
    res = True
except AssertionError as err:
    res = False
    print (err)
self.assertTrue(res)


Answer 5

np.linalg.norm(arr1 - arr2) < 1e-6
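
(A note on this one-liner: it treats the difference as a flat vector and passes when its Euclidean norm is below the arbitrary 1e-6 threshold; it assumes both arrays have the same shape and a float dtype, and unlike the numpy.testing helpers it produces no diagnostic output on failure.)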


How should I organize Python source code? [closed]

Question: How should I organize Python source code? [closed]

I’m getting started with Python (it’s high time I give it a shot), and I’m looking for some best practices.

My first project is a queue which runs command-line experiments in multiple threads. I’m starting to get a very long main.py file, and I’d like to break it up. In general, I’m looking for: How do python programmers organize multiple source files? Is there a particular structure that works for you?

My specific questions include:

  1. Should each class be in a separate file?
  2. How should I organize unit tests relative to source code?
  3. Where should I put doc comments, specifically those for command-line operation?
  4. If I use multiple directories, how do I import classes between them?

I can probably draw some of my own conclusions here by trial and error, but I’d rather start from something good.


Answer 0

The article Eric pointed to is awesome because it covers details of organising large Python code bases.

If you’ve landed here from Google and are trying to find out how to split one large source file into multiple, more manageable files, I’ll summarise the process briefly.

Assume you currently have everything in a file called main.py:

  • Create another source file in the same folder (let’s call ours utils.py for this example)
  • Move whatever classes, functions, statements, etc you need from main.py into utils.py
  • In main.py add a single line at the top: import utils

Conceptually what this does is to create a new module called utils in another source file. You can then import it wherever it’s needed.
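
A tiny sketch of the result (the helper function is just a placeholder):

# utils.py
def helper():
    return 'something useful'

# main.py
import utils

print(utils.helper())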


Answer 1

The way you should organise your code and tests is exactly the same as you would for any OO language.

Answers below are from the way I do it. It may not be right, but it works for me:

  1. Depends on how your functionality is split. For my main python app I have 1 file with classes for the entry points and then packages of different bits of functionality
  2. I use PyDev for eclipse and organise it like I would for Java.
>  Workspace
>     |
>     |-Src
>     |   |-Package1
>     |   |-Package2
>     |   |-main.py
>     |-Test
>         |-TestPackage1
>         |-TestPackage2
  3. Use DocString everywhere to keep track of everything
  4. After making sure that the relevant __init__.py files are in the folders, it’s just a simple case of from module import class (see the sketch below)
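
A minimal code-level sketch of that layout (package, module, and class names are placeholders):

# Src/Package1/__init__.py  -- can be empty; it marks Package1 as a package

# Src/Package1/module1.py
class MyClass:
    pass

# Src/main.py (with Src on sys.path)
from Package1.module1 import MyClass

print(MyClass())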

What is the difference between setUp() and setUpClass() in Python unittest?

Question: What is the difference between setUp() and setUpClass() in Python unittest?

setUp()setUpClass()Python unittest框架之间有什么区别?为什么设置会以一种方法而不是另一种方法处理?

我想了解什么设置的一部分在完成setUp()setUpClass()功能,以及与tearDown()tearDownClass()

What is the difference between setUp() and setUpClass() in the Python unittest framework? Why would setup be handled in one method over the other?

I want to understand what part of setup is done in the setUp() and setUpClass() functions, as well as with tearDown() and tearDownClass().


Answer 0

The difference manifests itself when you have more than one test method in your class. setUpClass and tearDownClass are run once for the whole class; setUp and tearDown are run before and after each test method.

For example:

class Example(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        print("setUpClass")

    def setUp(self):
        print("setUp")

    def test1(self):
        print("test1")

    def test2(self):
        print("test2")

    def tearDown(self):
        print("tearDown")

    @classmethod
    def tearDownClass(cls):
        print("tearDownClass")

When you run this test, it prints:

setUpClass
setUp
test1
tearDown
.setUp
test2
tearDown
.tearDownClass

(The dots (.) are unittest‘s default output when a test passes.) Observe that setUp and tearDown appear before and after test1 and test2, whereas setUpClass and tearDownClass appear only once, at the beginning and end of the whole test case.


Answer 1

setUp()setUpClass()Python unittest框架之间有什么区别?

主要区别(如本杰明·霍奇森(Benjamin Hodgson)的回答中所述)是setUpClass仅一次调用,即在所有测试之前,而setUp在每次测试之前均被调用。(注意:这同样适用于其他xUnit测试框架中的等效方法,而不仅仅是Python的方法unittest。)

unittest 文档中

setUpClass()

在运行单个类中的测试之前调用的类方法。使用类作为唯一参数调用setUpClass,并且必须将其装饰为classmethod():

@classmethod
def setUpClass(cls):
    ...

和:

setUp()

调用准备测试夹具的方法。在调用测试方法之前立即调用该方法。除了AssertionError或SkipTest之外,此方法引发的任何异常都将被视为错误而不是测试失败。默认实现不执行任何操作。

为什么设置会以一种方法而不是另一种方法处理?

问题的这一部分尚未回答。根据我对Gearon回答的评论,该setUp方法适用于所有测试通用的灯具元素(以避免在每个测试中重复该代码)。我发现这通常很有用,因为删除重复项(通常)可以提高可读性并减少维护负担。

setUpClass方法适用于昂贵的元素,您只需要执行一次即可,例如打开数据库连接,在文件系统上打开临时文件,加载共享库以进行测试等。在每次测试之前进行此类操作会使速度降低测试套件太多,所以我们在所有测试之前只做一次。测试的独立性略有下降,但在某些情况下是必要的优化。可以说,在单元测试中不应该这样做,因为通常可以在不使用真实内容的情况下模拟数据库/文件系统/库/任何东西。因此,我发现这setUpClass几乎是不需要的。但是,在需要测试上述示例(或类似示例)时很有用。

What is the difference between setUp() and setUpClass() in the Python unittest framework?

The main difference (as noted in the answer by Benjamin Hodgson) is that setUpClass is called only once and that is before all the tests, while setUp is called immediately before each and every test. (NB: The same applies to the equivalent methods in other xUnit test frameworks, not just Python’s unittest.)

From the unittest documentation:

setUpClass()

A class method called before tests in an individual class are run. setUpClass is called with the class as the only argument and must be decorated as a classmethod():

@classmethod
def setUpClass(cls):
    ...

and:

setUp()

Method called to prepare the test fixture. This is called immediately before calling the test method; other than AssertionError or SkipTest, any exception raised by this method will be considered an error rather than a test failure. The default implementation does nothing.

Why would setup be handled in one method over the other?

This part of the question has not been answered yet. As per my comment in response to the answer by Gearon, the setUp method is meant for elements of the fixture that are common to all tests (to avoid duplicating that code in each test). I find this is often useful as removing duplication (usually) improves readability and reduces the maintenance burden.

The setUpClass method is for expensive elements that you would rather only have to do once, such as opening a database connection, opening a temporary file on the filesystem, loading a shared library for testing, etc. Doing such things before each test would slow down the test suite too much, so we just do it once before all the tests. This is a slight degradation in the independence of the tests but a necessary optimization in some situations. Arguably, one should not be doing such things in unit tests as it is usually possible to mock the database / filesystem / library / whatever without using the real thing. As such, I find that setUpClass is rarely needed. However, it is useful when testing the above examples (or similar) becomes necessary.
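
A typical shape for that kind of expensive, shared fixture (the database helpers here are hypothetical):

import unittest

class DatabaseTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.conn = open_test_db_connection()  # hypothetical: expensive, runs once for the class

    @classmethod
    def tearDownClass(cls):
        cls.conn.close()

    def setUp(self):
        self.cursor = self.conn.cursor()  # cheap per-test state, runs before every test

    def tearDown(self):
        self.cursor.close()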


Can a unit test assert that a method calls sys.exit()?

Question: Can a unit test assert that a method calls sys.exit()?

I have a python 2.7 method that sometimes calls

sys.exit(1) 

Is it possible to make a unit test that verifies this line of code is called when the right conditions are met?


Answer 0

Yes. sys.exit raises SystemExit, so you can check it with assertRaises:

with self.assertRaises(SystemExit):
    your_method()

Instances of SystemExit have an attribute code which is set to the proposed exit status, and the context manager returned by assertRaises has the caught exception instance as exception, so checking the exit status is easy:

with self.assertRaises(SystemExit) as cm:
    your_method()

self.assertEqual(cm.exception.code, 1)

 

sys.exit Documentation:

Exit from Python. This is implemented by raising the SystemExit exception … it is possible to intercept the exit attempt at an outer level.


Answer 1

Here’s a complete working example. In spite of Pavel’s excellent answer it took me a while to figure this out, so I’m including it here in the hope that it will be helpful.

import unittest
from glf.logtype.grinder.mapping_reader import MapReader

INCOMPLETE_MAPPING_FILE="test/data/incomplete.http.mapping"

class TestMapReader(unittest.TestCase):

    def test_get_tx_names_incomplete_mapping_file(self):
        map_reader = MapReader()
        with self.assertRaises(SystemExit) as cm:
            tx_names = map_reader.get_tx_names(INCOMPLETE_MAPPING_FILE)
        self.assertEqual(cm.exception.code, 1)

Answer 2

I found the answer to your question in the Python unit testing documentation by searching for “Testing for Exceptions”. Using your example, the unit test would look like the following:

self.assertRaises(SystemExit, your_function, arg1, arg2)

Remember to include all arguments needed to test your function.


Answer 3

As an additional note to Pavel’s excellent answer, you can also check for specific statuses if they’re provided in the function you’re testing. For example, if your_method() contained sys.exit("Error"), it would be possible to test for “Error” specifically:

with self.assertRaises(SystemExit) as cm:
    your_method()

self.assertEqual(cm.exception.code, "Error")

How can I disable logging while running unit tests in Python Django?

Question: How can I disable logging while running unit tests in Python Django?

I am using a simple unit test based test runner to test my Django application.

My application itself is configured to use a basic logger in settings.py using:

logging.basicConfig(level=logging.DEBUG)

And in my application code using:

logger = logging.getLogger(__name__)
logger.setLevel(getattr(settings, 'LOG_LEVEL', logging.DEBUG))

However, when running unittests, I’d like to disable logging so that it doesn’t clutter my test result output. Is there a simple way to turn off logging in a global way, so that the application specific loggers aren’t writing stuff out to the console when I run tests?


Answer 0

logging.disable(logging.CRITICAL)

will disable all logging calls with levels less severe than or equal to CRITICAL. Logging can be re-enabled with

logging.disable(logging.NOTSET)

Answer 1

Since you are in Django, you could add these lines to your settings.py:

import sys
import logging

if len(sys.argv) > 1 and sys.argv[1] == 'test':
    logging.disable(logging.CRITICAL)

That way you don’t have to add that line to every setUp() in your tests.

You could also do a couple of handy changes for your test needs this way.

There is another “nicer” or “cleaner” way to add specifics to your tests and that is making your own test runner.

Just create a class like this:

import logging

from django.test.simple import DjangoTestSuiteRunner
from django.conf import settings

class MyOwnTestRunner(DjangoTestSuiteRunner):
    def run_tests(self, test_labels, extra_tests=None, **kwargs):

        # Don't show logging messages while testing
        logging.disable(logging.CRITICAL)

        return super(MyOwnTestRunner, self).run_tests(test_labels, extra_tests, **kwargs)

And now add to your settings.py file:

TEST_RUNNER = "PATH.TO.PYFILE.MyOwnTestRunner"
#(for example, 'utils.mytest_runner.MyOwnTestRunner')

This lets you do one really handy modification that the other approach doesn’t: making Django test just the applications that you want. You can do that by changing the test_labels, adding this line to the test runner:

if not test_labels:
    test_labels = ['my_app1', 'my_app2', ...]

Answer 2

Is there a simple way to turn off logging in a global way, so that the application specific loggers aren’t writing stuff out to the console when I run tests?

The other answers prevent “writing stuff out to the console” by globally setting the logging infrastructure to ignore anything. This works but I find it too blunt an approach. My approach is to perform a configuration change which does only what’s needed to prevent logs to get out on the console. So I add a custom logging filter to my settings.py:

from logging import Filter

class NotInTestingFilter(Filter):

    def filter(self, record):
        # Although I normally just put this class in the settings.py
        # file, I have my reasons to load settings here. In many
        # cases, you could skip the import and just read the setting
        # from the local symbol space.
        from django.conf import settings

        # TESTING_MODE is some settings variable that tells my code
        # whether the code is running in a testing environment or
        # not. Any test runner I use will load the Django code in a
        # way that makes it True.
        return not settings.TESTING_MODE

And I configure the Django logging to use the filter:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'testing': {
            '()': NotInTestingFilter
        }
    },
    'formatters': {
        'verbose': {
            'format': ('%(levelname)s %(asctime)s %(module)s '
                       '%(process)d %(thread)d %(message)s')
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'filters': ['testing'],
            'formatter': 'verbose'
        },
    },
    'loggers': {
        'foo': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': True,
        },
    }
}

End result: when I’m testing, nothing goes to the console, but everything else stays the same.

Why Do This?

I design code that contains logging instructions that are triggered only in specific circumstances and that should output the exact data I need for diagnosis if things go wrong. Therefore I test that they do what they are supposed to do and thus completely disabling logging is not viable for me. I don’t want to find once the software is in production that what I thought would be logged is not logged.

Moreover, some test runners (Nose, for instance) will capture logs during testing and output the relevant part of the log together with a test failure. It is useful in figuring out why a test failed. If logging is completely turned off, then there’s nothing that can be captured.


Answer 3

I like Hassek’s custom test runner idea. It should be noted that DjangoTestSuiteRunner is no longer the default test runner in Django 1.6+; it has been replaced by the DiscoverRunner. For default behaviour, the test runner should be more like:

import logging

from django.test.runner import DiscoverRunner

class NoLoggingTestRunner(DiscoverRunner):
    def run_tests(self, test_labels, extra_tests=None, **kwargs):

        # disable logging below CRITICAL while testing
        logging.disable(logging.CRITICAL)

        return super(NoLoggingTestRunner, self).run_tests(test_labels, extra_tests, **kwargs)

Answer 4

I’ve found that for tests within unittest or a similar framework, the most effective way to safely disable unwanted logging in unit tests is to enable/disable it in the setUp/tearDown methods of a particular test case. This lets one target specifically where logs should be disabled. You could also do this explicitly on the logger of the class you’re testing.

import unittest
import logging

class TestMyUnitTest(unittest.TestCase):
    def setUp(self):
        logging.disable(logging.CRITICAL)

    def tearDown(self):
        logging.disable(logging.NOTSET)

Answer 5

I am using a simple method decorator to disable logging only in a particular test method.

def disable_logging(f):

    def wrapper(*args):
        logging.disable(logging.CRITICAL)
        result = f(*args)
        logging.disable(logging.NOTSET)

        return result

    return wrapper

And then I use it as in the following example:

class ScenarioTestCase(TestCase):

    @disable_logging
    def test_scenario(self):
        pass

Answer 6

There is some pretty and clean method to suspend logging in tests with unittest.mock.patch method.

foo.py:

import logging


logger = logging.getLogger(__name__)

def bar():
    logger.error('There is some error output here!')
    return True

tests.py:

from unittest import mock, TestCase
from foo import bar


class FooBarTestCase(TestCase):
    @mock.patch('foo.logger', mock.Mock())
    def test_bar(self):
        self.assertTrue(bar())

And python3 -m unittest tests will produce no logging output.
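
If the goal is to assert that logging happened rather than to silence it, unittest.TestCase.assertLogs (available since Python 3.4) is worth noting; a minimal sketch reusing foo.py from this answer:

import unittest

from foo import bar


class FooBarLoggingTestCase(unittest.TestCase):
    def test_bar_logs_error(self):
        # assertLogs captures records from the 'foo' logger and fails the
        # test if nothing at ERROR or above is emitted inside the block.
        with self.assertLogs('foo', level='ERROR') as captured:
            self.assertTrue(bar())
        self.assertIn('There is some error output here!', captured.output[0])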


回答 7

有时您需要日志,有时则不需要。我的 settings.py 中有以下代码:

import logging
import sys

if '--no-logs' in sys.argv:
    print('> Disabling logging levels of CRITICAL and below.')
    sys.argv.remove('--no-logs')
    logging.disable(logging.CRITICAL)

因此,如果使用 --no-logs 选项运行测试,CRITICAL 及以下级别的日志都会被禁用:

$ python ./manage.py tests --no-logs
> Disabling logging levels of CRITICAL and below.

如果要在持续集成流程中加快测试速度,这将非常有帮助。

Sometimes you want the logs and sometimes not. I have this code in my settings.py

import logging
import sys

if '--no-logs' in sys.argv:
    print('> Disabling logging levels of CRITICAL and below.')
    sys.argv.remove('--no-logs')
    logging.disable(logging.CRITICAL)

So if you run your tests with the --no-logs option, logging at CRITICAL and below will be disabled:

$ python ./manage.py tests --no-logs
> Disabling logging levels of CRITICAL and below.

It’s very helpful if you want to speed up the tests in your continuous integration flow.


回答 8

如果您不想在 unittest 的 setUp() 和 tearDown() 中反复打开/关闭日志(我看不出有此必要),也可以每个类只执行一次:

import unittest
import logging

class TestMyUnitTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        logging.disable(logging.CRITICAL)

    @classmethod
    def tearDownClass(cls):
        logging.disable(logging.NOTSET)

If you don’t want to repeatedly turn it on/off in setUp() and tearDown() for unittest (I don’t see the reason for that), you could just do it once per class:

import unittest
import logging

class TestMyUnitTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        logging.disable(logging.CRITICAL)

    @classmethod
    def tearDownClass(cls):
        logging.disable(logging.NOTSET)

回答 9

在我希望暂时禁用某个特定记录器的情况下,我编写了一个很有用的小上下文管理器:

from contextlib import contextmanager
import logging

@contextmanager
def disable_logger(name):
    """Temporarily disable a specific logger."""
    logger = logging.getLogger(name)
    old_value = logger.disabled
    logger.disabled = True
    try:
        yield
    finally:
        logger.disabled = old_value

然后,您可以像这样使用它:

class MyTestCase(TestCase):
    def test_something(self):
        with disable_logger('<logger name>'):
            ...  # code that causes the logger to fire

这样做的好处是,with 块结束后记录器会被重新启用(或恢复到其先前的状态)。

In cases where I wish to temporarily suppress a specific logger, I’ve written a little context manager that I’ve found useful:

from contextlib import contextmanager
import logging

@contextmanager
def disable_logger(name):
    """Temporarily disable a specific logger."""
    logger = logging.getLogger(name)
    old_value = logger.disabled
    logger.disabled = True
    try:
        yield
    finally:
        logger.disabled = old_value

You then use it like:

class MyTestCase(TestCase):
    def test_something(self):
        with disable_logger('<logger name>'):
            ...  # code that causes the logger to fire

This has the advantage that the logger is re-enabled (or set back to its prior state) once the with completes.


回答 10

您可以将其放在单元测试顶级目录的 __init__.py 文件中。这将在整个单元测试套件中全局禁用日志记录。

# tests/unit/__init__.py
import logging

logging.disable(logging.CRITICAL)

You can put this in the unit tests’ top-level __init__.py file. This will disable logging globally across the unit test suite.

# tests/unit/__init__.py
import logging

logging.disable(logging.CRITICAL)

回答 11

就我而言,我有一个专门为测试目的创建的设置文件 settings/test.py,如下所示:

from .base import *

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'test_db'
    }
}

PASSWORD_HASHERS = (
    'django.contrib.auth.hashers.MD5PasswordHasher',
)

LOGGING = {}

我在 /etc/environment 中设置了环境变量 DJANGO_SETTINGS_MODULE=settings.test。

In my case I have a settings file settings/test.py created specifically for testing purposes, here’s what it looks like:

from .base import *

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'test_db'
    }
}

PASSWORD_HASHERS = (
    'django.contrib.auth.hashers.MD5PasswordHasher',
)

LOGGING = {}

I put an environment variable DJANGO_SETTINGS_MODULE=settings.test to /etc/environment.


回答 12

如果您为测试、开发和生产使用不同的初始化模块,则可以在初始化模块中禁用或重定向任何内容。我有 local.py、test.py 和 production.py,它们都从 common.py 继承。

common.py进行包括以下代码段在内的所有主要配置:

LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
    'django.server': {
        '()': 'django.utils.log.ServerFormatter',
        'format': '[%(server_time)s] %(message)s',
    },
    'verbose': {
        'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
    },
    'simple': {
        'format': '%(levelname)s %(message)s'
    },
},
'filters': {
    'require_debug_true': {
        '()': 'django.utils.log.RequireDebugTrue',
    },
},
'handlers': {
    'django.server': {
        'level': 'INFO',
        'class': 'logging.StreamHandler',
        'formatter': 'django.server',
    },
    'console': {
        'level': 'DEBUG',
        'class': 'logging.StreamHandler',
        'formatter': 'simple'
    },
    'mail_admins': {
        'level': 'ERROR',
        'class': 'django.utils.log.AdminEmailHandler'
    }
},
'loggers': {
    'django': {
        'handlers': ['console'],
        'level': 'INFO',
        'propagate': True,
    },
    'celery.tasks': {
        'handlers': ['console'],
        'level': 'DEBUG',
        'propagate': True,
    },
    'django.server': {
        'handlers': ['django.server'],
        'level': 'INFO',
        'propagate': False,
    },
},
}

然后在 test.py 中我有这样的代码:

console_logger = Common.LOGGING.get('handlers').get('console')
console_logger['class'] = 'logging.FileHandler'
console_logger['filename'] = './unitest.log'

这用FileHandler代替了控制台处理程序,意味着仍然可以记录日志,但是我不必接触生产代码库。

If you have different initialiser modules for test, dev and production then you can disable anything or redirect it in the initialiser. I have local.py, test.py and production.py that all inherit from common.py.

common.py does all the main config, including this snippet:

LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
    'django.server': {
        '()': 'django.utils.log.ServerFormatter',
        'format': '[%(server_time)s] %(message)s',
    },
    'verbose': {
        'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
    },
    'simple': {
        'format': '%(levelname)s %(message)s'
    },
},
'filters': {
    'require_debug_true': {
        '()': 'django.utils.log.RequireDebugTrue',
    },
},
'handlers': {
    'django.server': {
        'level': 'INFO',
        'class': 'logging.StreamHandler',
        'formatter': 'django.server',
    },
    'console': {
        'level': 'DEBUG',
        'class': 'logging.StreamHandler',
        'formatter': 'simple'
    },
    'mail_admins': {
        'level': 'ERROR',
        'class': 'django.utils.log.AdminEmailHandler'
    }
},
'loggers': {
    'django': {
        'handlers': ['console'],
        'level': 'INFO',
        'propagate': True,
    },
    'celery.tasks': {
        'handlers': ['console'],
        'level': 'DEBUG',
        'propagate': True,
    },
    'django.server': {
        'handlers': ['django.server'],
        'level': 'INFO',
        'propagate': False,
    },
},
}

Then in test.py I have this:

console_logger = Common.LOGGING.get('handlers').get('console')
console_logger['class'] = 'logging.FileHandler'
console_logger['filename'] = './unitest.log'

This replaces the console handler with a FileHandler, which means I still get logging but I do not have to touch the production code base.


回答 13

如果您使用的是pytest

由于pytest捕获日志消息并仅在失败的测试中显示它们,因此您通常不希望禁用任何日志记录。相反,请使用单独的settings.py文件进行测试(例如test_settings.py),然后添加到其中:

LOGGING_CONFIG = None

这告诉 Django 完全跳过日志记录配置。LOGGING 设置将被忽略,可以从设置中删除。

使用这种方法,对于通过的测试,您将不会获得任何日志记录,对于失败的测试,您将获得所有可用的日志记录。

测试将使用 pytest 设置的日志记录来运行。您可以在 pytest 的配置(例如 tox.ini)中根据自己的喜好进行设置。要包括调试级别的日志消息,请使用 log_level = DEBUG(或相应的命令行参数)。

If you’re using pytest:

Since pytest captures log messages and only displays them for failed tests, you typically don’t want to disable any logging. Instead, use a separate settings.py file for tests (e.g., test_settings.py), and add to it:

LOGGING_CONFIG = None

This tells Django to skip configuring the logging altogether. The LOGGING setting will be ignored and can be removed from the settings.

With this approach, you don’t get any logging for passed tests, and you get all available logging for failed tests.

The tests will run using the logging that was set up by pytest. It can be configured to your liking in the pytest settings (e.g., tox.ini). To include debug level log messages, use log_level = DEBUG (or the corresponding command line argument).


Python模拟多个返回值

问题:Python模拟多个返回值

我正在使用pythons mock.patch并想更改每个调用的返回值。请注意,正在修补的函数没有输入,因此我无法根据输入更改返回值。

这是我的代码供参考。

def get_boolean_response():
    response = io.prompt('y/n').lower()
    while response not in ('y', 'n', 'yes', 'no'):
        io.echo('Not a valid input. Try again')
        response = io.prompt('y/n').lower()

    return response in ('y', 'yes')

我的测试代码:

@mock.patch('io')
def test_get_boolean_response(self, mock_io):
    #setup
    mock_io.prompt.return_value = ['x','y']
    result = operations.get_boolean_response()

    #test
    self.assertTrue(result)
    self.assertEqual(mock_io.prompt.call_count, 2)

io.prompt仅仅是“输入”的独立于平台的版本(python 2和3)。因此,最终我将尝试模拟用户的输入。我已经尝试过使用列表作为返回值,但这并不能正常工作。

您可以看到,如果返回值无效,那么我将在此处得到一个无限循环。因此,我需要一种最终更改返回值的方法,以便测试实际上完成。

(回答此问题的另一种可能方法是解释如何在单元测试中模仿用户输入)


不是这个问题的重复,主要是因为我没有能力改变输入。

关于这个问题的答案的评论之一是相同的,但是没有提供答案/评论。

I am using pythons mock.patch and would like to change the return value for each call. Here is the caveat: the function being patched has no inputs, so I can not change the return value based on the input.

Here is my code for reference.

def get_boolean_response():
    response = io.prompt('y/n').lower()
    while response not in ('y', 'n', 'yes', 'no'):
        io.echo('Not a valid input. Try again')
        response = io.prompt('y/n').lower()

    return response in ('y', 'yes')

My Test code:

@mock.patch('io')
def test_get_boolean_response(self, mock_io):
    #setup
    mock_io.prompt.return_value = ['x','y']
    result = operations.get_boolean_response()

    #test
    self.assertTrue(result)
    self.assertEqual(mock_io.prompt.call_count, 2)

io.prompt is just a platform-independent (Python 2 and 3) version of “input”. So ultimately I am trying to mock out the user’s input. I have tried using a list for the return value, but that doesn’t seem to work.

You can see that if the return value is something invalid, I will just get an infinite loop here. So I need a way to eventually change the return value, so that my test actually finishes.

(another possible way to answer this question could be to explain how I could mimic user input in a unit-test)


Not a dup of this question mainly because I do not have the ability to vary the inputs.

One of the comments of the Answer on this question is along the same lines, but no answer/comment has been provided.


回答 0

你可以将一个可迭代对象赋给 side_effect,这样每次调用 mock 时,它都会返回该可迭代对象中的下一个值:

>>> from unittest.mock import Mock
>>> m = Mock()
>>> m.side_effect = ['foo', 'bar', 'baz']
>>> m()
'foo'
>>> m()
'bar'
>>> m()
'baz'

引用Mock()文档

如果side_effect是可迭代的,则对模拟的每次调用都将返回可迭代的下一个值。

顺便说一句,测试 response is not 'y' or 'n' or 'yes' or 'no' 并不能按预期工作;您实际上是在问表达式 (response is not 'y') 是否为真,或者 'y' 是否为真(总是为真,因为非空字符串始终为真),等等。or 运算符两侧的各个表达式是相互独立求值的。请参阅《如何针对多个值测试一个变量?》

您不应该使用 is 来比较字符串。CPython 解释器在某些情况下会重用字符串对象,但这不是您应该依赖的行为。

因此,请使用:

response not in ('y', 'n', 'yes', 'no')

来代替;这将使用相等性测试(==)来确定 response 是否引用了具有相同内容(值)的字符串。

同样适用于response == 'y' or 'yes'; 使用response in ('y', 'yes')代替。

You can assign an iterable to side_effect, and the mock will return the next value in the sequence each time it is called:

>>> from unittest.mock import Mock
>>> m = Mock()
>>> m.side_effect = ['foo', 'bar', 'baz']
>>> m()
'foo'
>>> m()
'bar'
>>> m()
'baz'

Quoting the Mock() documentation:

If side_effect is an iterable then each call to the mock will return the next value from the iterable.
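
Applied to the question above, a sketch of the fixed test could look like this (assuming the io helper is imported inside a hypothetical operations module, as in the question’s code):

import unittest
from unittest import mock

import operations  # hypothetical module containing get_boolean_response


class GetBooleanResponseTest(unittest.TestCase):
    @mock.patch('operations.io')
    def test_get_boolean_response(self, mock_io):
        # prompt() returns 'x' (invalid) on the first call, then 'y' (valid).
        mock_io.prompt.side_effect = ['x', 'y']

        result = operations.get_boolean_response()

        self.assertTrue(result)
        self.assertEqual(mock_io.prompt.call_count, 2)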


当您的应用具有测试目录时,在Django中运行特定的测试用例

问题:当您的应用具有测试目录时,在Django中运行特定的测试用例

Django文档(http://docs.djangoproject.com/en/1.3/topics/testing/#running-tests)指出,您可以通过指定单个测试用例来运行它们:

$ ./manage.py test animals.AnimalTestCase

假设您将测试保存在Django应用程序的tests.py文件中。如果是这样,那么此命令将按预期工作。

我在tests目录中有针对Django应用程序的测试:

my_project/apps/my_app/
├── __init__.py
├── tests
│   ├── __init__.py
│   ├── field_tests.py
│   ├── storage_tests.py
├── urls.py
├── utils.py
└── views.py

tests/__init__.py文件具有suite()函数:

import unittest

from my_project.apps.my_app.tests import field_tests, storage_tests

def suite():
    tests_loader = unittest.TestLoader().loadTestsFromModule
    test_suites = []
    test_suites.append(tests_loader(field_tests))
    test_suites.append(tests_loader(storage_tests))
    return unittest.TestSuite(test_suites)

要运行测试,请执行以下操作:

$ ./manage.py test my_app

尝试指定单个测试用例会引发异常:

$ ./manage.py test my_app.tests.storage_tests.StorageTestCase
...
ValueError: Test label 'my_app.tests.storage_tests.StorageTestCase' should be of the form app.TestCase or app.TestCase.test_method

我尝试按照异常消息的提示去做:

$ ./manage.py test my_app.StorageTestCase
...
ValueError: Test label 'my_app.StorageTestCase' does not refer to a test

当我的测试位于多个文件中时,如何指定单个测试用例?

The Django documentation (http://docs.djangoproject.com/en/1.3/topics/testing/#running-tests) says that you can run individual test cases by specifying them:

$ ./manage.py test animals.AnimalTestCase

This assumes that you have your tests in a tests.py file in your Django application. If this is true, then this command works like expected.

I have my tests for a Django application in a tests directory:

my_project/apps/my_app/
├── __init__.py
├── tests
│   ├── __init__.py
│   ├── field_tests.py
│   ├── storage_tests.py
├── urls.py
├── utils.py
└── views.py

The tests/__init__.py file has a suite() function:

import unittest

from my_project.apps.my_app.tests import field_tests, storage_tests

def suite():
    tests_loader = unittest.TestLoader().loadTestsFromModule
    test_suites = []
    test_suites.append(tests_loader(field_tests))
    test_suites.append(tests_loader(storage_tests))
    return unittest.TestSuite(test_suites)

To run the tests I do:

$ ./manage.py test my_app

Trying to specify an individual test case raises an exception:

$ ./manage.py test my_app.tests.storage_tests.StorageTestCase
...
ValueError: Test label 'my_app.tests.storage_tests.StorageTestCase' should be of the form app.TestCase or app.TestCase.test_method

I tried to do what the exception message said:

$ ./manage.py test my_app.StorageTestCase
...
ValueError: Test label 'my_app.StorageTestCase' does not refer to a test

How do I specify an individual test case when my tests are in multiple files?


回答 0

看看 django-nose。它允许您像这样指定要运行的测试:

python manage.py test another.test:TestCase.test_method

或如注释中所述,使用以下语法:

python manage.py test another.test.TestCase.test_method

Checkout django-nose. It allows you to specify tests to run like:

python manage.py test another.test:TestCase.test_method

or as noted in comments, use the syntax:

python manage.py test another.test.TestCase.test_method

回答 1

从Django 1.6开始,您可以对要运行的元素使用完整的点符号来运行完整的测试用例或单个测试。

现在,自动测试发现会在工作目录下任何以 test 开头的文件中查找测试,因此就本问题而言,您需要重命名文件,但现在可以将它们保留在想要的目录中。如果要使用自定义文件名,可以通过选项标志 --pattern="my_pattern_*.py" 指定一个模式(默认的 Django 测试运行器支持该选项)。

因此,如果您位于 manage.py 所在的目录,并且想要运行 app/module example 下 tests.py 文件中 TestCase 子类 A 里的测试 test_a,则可以执行以下操作:

python manage.py test example.tests.A.test_a

如果您不想引入额外的依赖项,并且使用的是 Django 1.6 或更高版本,就可以这样做。

有关更多信息,请参见Django文档。

Since Django 1.6 you can run a complete test case, or single test, using the complete dot notation for the element you want to run.

Automatic test discovery will now find tests in any file that starts with test under the working directory, so addressing the question you would have to rename your files, but you can now keep them inside the directory you want. If you want to use custom file names you can specify a pattern (default Django test runner) with the option flag --pattern="my_pattern_*.py".

So if you are in your manage.py directory and want to run the test test_a inside TestCase subclass A inside a file tests.py under the app/module example you would do:

python manage.py test example.tests.A.test_a

If you don’t want to include a dependency and are in Django 1.6 or later that’s how you do it.

See the Django documentation for more information


回答 2

我自己也遇到过这个问题并找到了这个提问,以防其他人也碰到,以下是我挖掘到的东西。DjangoTestSuiteRunner 使用一个名为 build_test(label) 的方法,根据标签确定要运行哪些测试用例。研究此方法后发现,它是在 “models” 或 “tests” 模块上执行 getattr()。这意味着,如果您返回一个套件,测试运行器并不会在该套件中寻找您的测试用例,它只会在那两个模块之一中查找。

一个快速的解决方法是在 __init__.py 中直接导入测试,而不是定义套件。这使它们成为 “tests” 模块的一部分,因此 build_test(label) 可以找到它们。

对于上面的示例,tests/__init__.py应仅包含:

from field_tests import *
from storage_tests import *

这不是很优雅,当然,如果您要对套件进行一些更复杂的操作,则此方法将无法正常工作,但在这种情况下可以。

I was having this problem myself and found this question; in case anyone else comes along, here is what I dug up. The DjangoTestSuiteRunner uses a method called build_test(label) that figures out what test cases to run based on the label. Looking into this method it turns out they’re doing a getattr() on either the “models” or “tests” module. This means if you return a suite the test runner isn’t looking for your test cases in that suite, it only looks in one of those modules.

A quick work-around is to use __init__.py to import your tests directly instead of defining a suite. That makes them part of the “tests” module and so build_test(label) can find them.

For your example above, tests/__init__.py should simply contain:

from field_tests import *
from storage_tests import *

This isn’t very elegant and of course if you’re trying to do something more complicated with your suite then this won’t work, but it would for this case.


回答 3

这样应该可以工作:

python manage.py test my_app.tests.storage_tests

This should work-

python manage.py test my_app.tests.storage_tests

回答 4

我也遇到了这个问题,我没有使用 django-nose,而是参考了这个链接:http://www.pioverpi.net/2010/03/10/organizing-django-tests-into-folders/。您需要打开 __init__.py 并导入您的测试。

例如在 __init__.py 中:from unique_test_file import *

I also ran into this problem and instead of using django-nose I followed this link here: http://www.pioverpi.net/2010/03/10/organizing-django-tests-into-folders/. You need to open your __init__.py and import your tests.

Ex in __init__.py: from unique_test_file import *


回答 5

将此代码放在__init__.py中,它将导入包和子包中的所有测试类。这将允许您运行特定的测试,而无需手动导入每个文件。

import pkgutil
import unittest

for loader, module_name, is_pkg in pkgutil.walk_packages(__path__):
    module = loader.find_module(module_name).load_module(module_name)
    for name in dir(module):
        obj = getattr(module, name)
        if isinstance(obj, type) and issubclass(obj, unittest.case.TestCase):
            exec ('%s = obj' % obj.__name__)

同样,对于您的测试套件,您可以简单地使用:

def suite():   
    return unittest.TestLoader().discover("appname.tests", pattern="*.py")

现在,您需要为新测试做的就是编写它们,并确保它们位于 tests 文件夹中。不再需要繁琐的导入维护!

Put this code in your __init__.py and it will import all test classes in the package and subpackages. This will allow you to run specific tests without manually importing every file.

import pkgutil
import unittest

for loader, module_name, is_pkg in pkgutil.walk_packages(__path__):
    module = loader.find_module(module_name).load_module(module_name)
    for name in dir(module):
        obj = getattr(module, name)
        if isinstance(obj, type) and issubclass(obj, unittest.case.TestCase):
            exec ('%s = obj' % obj.__name__)

Similarly, for your test suite you can simply use:

def suite():   
    return unittest.TestLoader().discover("appname.tests", pattern="*.py")

Now all you have to do for new tests is write them and make sure they are in the tests folder. No more tedious maintenance of the imports!
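
On Python 3 the same idea works without exec, by assigning into the package’s globals(); a sketch of an alternative tests/__init__.py (my variation, using importlib instead of the deprecated loader.find_module):

import importlib
import pkgutil
import unittest

for _, module_name, _ in pkgutil.walk_packages(__path__, prefix=__name__ + '.'):
    module = importlib.import_module(module_name)
    for name in dir(module):
        obj = getattr(module, name)
        if isinstance(obj, type) and issubclass(obj, unittest.TestCase):
            # Re-export the test class so test runners can discover it here.
            globals()[name] = obj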


如何为python模块的argparse部分编写测试?[关闭]

问题:如何为python模块的argparse部分编写测试?[关闭]

我有一个使用argparse库的Python模块。如何为代码库的该部分编写测试?

I have a Python module that uses the argparse library. How do I write tests for that section of the code base?


回答 0

您应该重构代码并将解析移至函数:

def parse_args(args):
    parser = argparse.ArgumentParser(...)
    parser.add_argument...
    # ...Create your parser as you like...
    return parser.parse_args(args)

然后在main函数中,应使用以下命令调用它:

parser = parse_args(sys.argv[1:])

(其中 sys.argv 中代表脚本名称的第一个元素被去掉,以免在 CLI 操作期间被当作额外的开关传入。)

然后在测试中,您可以使用任意想要测试的参数列表来调用解析器函数:

def test_parser(self):
    parser = parse_args(['-l', '-m'])
    self.assertTrue(parser.long)
    # ...and so on.

这样,您就不必执行应用程序的代码即可测试解析器。

如果稍后需要在应用程序中更改和/或向解析器添加选项,请创建一个工厂方法:

def create_parser():
    parser = argparse.ArgumentParser(...)
    parser.add_argument...
    # ...Create your parser as you like...
    return parser

以后,您可以根据需要对其进行操作,然后进行如下测试:

class ParserTest(unittest.TestCase):
    def setUp(self):
        self.parser = create_parser()

    def test_something(self):
        parsed = self.parser.parse_args(['--something', 'test'])
        self.assertEqual(parsed.something, 'test')

You should refactor your code and move the parsing to a function:

def parse_args(args):
    parser = argparse.ArgumentParser(...)
    parser.add_argument...
    # ...Create your parser as you like...
    return parser.parse_args(args)

Then in your main function you should just call it with:

parser = parse_args(sys.argv[1:])

(where the first element of sys.argv that represents the script name is removed to not send it as an additional switch during CLI operation.)

In your tests, you can then call the parser function with whatever list of arguments you want to test it with:

def test_parser(self):
    parser = parse_args(['-l', '-m'])
    self.assertTrue(parser.long)
    # ...and so on.

This way you’ll never have to execute the code of your application just to test the parser.

If you need to change and/or add options to your parser later in your application, then create a factory method:

def create_parser():
    parser = argparse.ArgumentParser(...)
    parser.add_argument...
    # ...Create your parser as you like...
    return parser

You can later manipulate it if you want, and a test could look like:

class ParserTest(unittest.TestCase):
    def setUp(self):
        self.parser = create_parser()

    def test_something(self):
        parsed = self.parser.parse_args(['--something', 'test'])
        self.assertEqual(parsed.something, 'test')

回答 1

“argparse 部分”有点含糊,因此本答案只聚焦其中的一部分:parse_args 方法。这是与命令行交互并获取所有传入值的方法。基本上,您可以模拟 parse_args 的返回值,从而不需要真正从命令行获取值。mock 软件包可以通过 pip 安装,适用于 Python 2.6–3.2;从 3.3 版本开始,它作为 unittest.mock 成为标准库的一部分。

import argparse
try:
    from unittest import mock  # python 3.3+
except ImportError:
    import mock  # python 2.6-3.2


@mock.patch('argparse.ArgumentParser.parse_args',
            return_value=argparse.Namespace(kwarg1=value, kwarg2=value))
def test_command(mock_args):
    pass

即使某些参数没有被传递,您也必须在 Namespace 中包含命令方法的所有参数,并赋予这些参数 None 值。(请参阅文档)这种风格对于快速测试每个方法参数传入不同值的情况很有用。如果您选择模拟 Namespace 本身,以便在测试中完全摆脱对 argparse 的依赖,请确保它的行为与实际的 Namespace 类相似。

以下是使用argparse库中第一个代码段的示例。

# test_mock_argparse.py
import argparse
try:
    from unittest import mock  # python 3.3+
except ImportError:
    import mock  # python 2.6-3.2


def main():
    parser = argparse.ArgumentParser(description='Process some integers.')
    parser.add_argument('integers', metavar='N', type=int, nargs='+',
                        help='an integer for the accumulator')
    parser.add_argument('--sum', dest='accumulate', action='store_const',
                        const=sum, default=max,
                        help='sum the integers (default: find the max)')

    args = parser.parse_args()
    print(args)  # NOTE: this is how you would check what the kwargs are if you're unsure
    return args.accumulate(args.integers)


@mock.patch('argparse.ArgumentParser.parse_args',
            return_value=argparse.Namespace(accumulate=sum, integers=[1,2,3]))
def test_command(mock_args):
    res = main()
    assert res == 6, "1 + 2 + 3 = 6"


if __name__ == "__main__":
    print(main())

“argparse portion” is a bit vague so this answer focuses on one part: the parse_args method. This is the method that interacts with your command line and gets all the passed values. Basically, you can mock what parse_args returns so that it doesn’t need to actually get values from the command line. The mock package can be installed via pip for python versions 2.6-3.2. It’s part of the standard library as unittest.mock from version 3.3 onwards.

import argparse
try:
    from unittest import mock  # python 3.3+
except ImportError:
    import mock  # python 2.6-3.2


@mock.patch('argparse.ArgumentParser.parse_args',
            return_value=argparse.Namespace(kwarg1=value, kwarg2=value))
def test_command(mock_args):
    pass

You have to include all your command method’s args in Namespace even if they’re not passed. Give those args a value of None. (see the docs) This style is useful for quickly doing testing for cases where different values are passed for each method argument. If you opt to mock Namespace itself for total argparse non-reliance in your tests, make sure it behaves similarly to the actual Namespace class.

Below is an example using the first snippet from the argparse library.

# test_mock_argparse.py
import argparse
try:
    from unittest import mock  # python 3.3+
except ImportError:
    import mock  # python 2.6-3.2


def main():
    parser = argparse.ArgumentParser(description='Process some integers.')
    parser.add_argument('integers', metavar='N', type=int, nargs='+',
                        help='an integer for the accumulator')
    parser.add_argument('--sum', dest='accumulate', action='store_const',
                        const=sum, default=max,
                        help='sum the integers (default: find the max)')

    args = parser.parse_args()
    print(args)  # NOTE: this is how you would check what the kwargs are if you're unsure
    return args.accumulate(args.integers)


@mock.patch('argparse.ArgumentParser.parse_args',
            return_value=argparse.Namespace(accumulate=sum, integers=[1,2,3]))
def test_command(mock_args):
    res = main()
    assert res == 6, "1 + 2 + 3 = 6"


if __name__ == "__main__":
    print(main())

回答 2

让您的 main() 函数接受 argv 作为参数,而不是像默认情况那样让它读取 sys.argv:

# mymodule.py
import argparse
import sys


def main(args):
    parser = argparse.ArgumentParser()
    parser.add_argument('-a')
    process(**vars(parser.parse_args(args)))
    return 0


def process(a=None):
    pass

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))

然后您就可以正常测试了。

import mock

from mymodule import main


@mock.patch('mymodule.process')
def test_main(process):
    main([])
    process.assert_called_once_with(a=None)


@mock.patch('mymodule.process')
def test_main_a(process):
    main(['-a', '1'])
    process.assert_called_once_with(a='1')

Make your main() function take argv as an argument rather than letting it read from sys.argv as it will by default:

# mymodule.py
import argparse
import sys


def main(args):
    parser = argparse.ArgumentParser()
    parser.add_argument('-a')
    process(**vars(parser.parse_args(args)))
    return 0


def process(a=None):
    pass

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))

Then you can test normally.

import mock

from mymodule import main


@mock.patch('mymodule.process')
def test_main(process):
    main([])
    process.assert_called_once_with(a=None)


@mock.patch('mymodule.process')
def test_main_a(process):
    main(['-a', '1'])
    process.assert_called_once_with(a='1')

回答 3

  1. 使用 sys.argv.append() 填充参数列表,然后调用 parse(),检查结果并重复(参见列表后的示例)。
  2. 从带有您的标志和一个转储参数标志的批处理/bash 文件中调用。
  3. 将所有参数解析放在一个单独的文件中,在 if __name__ == "__main__": 中调用解析并转储/评估结果,然后从批处理/bash 文件进行测试。
  1. Populate your arg list by using sys.argv.append() and then call parse(), check the results and repeat (see the sketch after this list).
  2. Call from a batch/bash file with your flags and a dump args flag.
  3. Put all your argument parsing in a separate file and in the if __name__ == "__main__": call parse and dump/evaluate the results then test this from a batch/bash file.
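
A minimal sketch of the first approach (the parser here is illustrative):

import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument('-a', type=int)

sys.argv = ['prog']           # reset argv to a known state
sys.argv.append('-a')
sys.argv.append('1')
args = parser.parse_args()    # parse_args() reads sys.argv[1:] by default
assert args.a == 1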

回答 4

我不想修改原始的服务脚本,所以我只是在 argparse 中模拟了 sys.argv 这一部分。

from unittest.mock import patch

with patch('argparse._sys.argv', ['python', 'serve.py']):
    ...  # your test code here

如果 argparse 的实现发生变化,此方法会失效,但对于快速测试脚本来说已经足够。无论如何,在测试脚本中,敏感性远比特异性重要。

I did not want to modify the original serving script so I just mocked out the sys.argv part in argparse.

from unittest.mock import patch

with patch('argparse._sys.argv', ['python', 'serve.py']):
    ...  # your test code here

This breaks if the argparse implementation changes, but it’s enough for a quick test script. Sensitivity is much more important than specificity in test scripts anyway.


回答 5

测试解析器的一种简单方法是:

parser = ...
parser.add_argument('-a',type=int)
...
argv = '-a 1 foo'.split()  # or ['-a','1','foo']
args = parser.parse_args(argv)
assert(args.a == 1)
...

另一种方法是修改sys.argv,然后调用args = parser.parse_args()

lib/test/test_argparse.py 中有很多测试 argparse 的例子。

A simple way of testing a parser is:

parser = ...
parser.add_argument('-a',type=int)
...
argv = '-a 1 foo'.split()  # or ['-a','1','foo']
args = parser.parse_args(argv)
assert(args.a == 1)
...

Another way is to modify sys.argv, and call args = parser.parse_args()

There are lots of examples of testing argparse in lib/test/test_argparse.py


回答 6

parse_args 会抛出 SystemExit 并打印到 stderr,您可以同时捕获这两者:

import contextlib
import io
import sys

@contextlib.contextmanager
def captured_output():
    new_out, new_err = io.StringIO(), io.StringIO()
    old_out, old_err = sys.stdout, sys.stderr
    try:
        sys.stdout, sys.stderr = new_out, new_err
        yield sys.stdout, sys.stderr
    finally:
        sys.stdout, sys.stderr = old_out, old_err

def validate_args(args):
    with captured_output() as (out, err):
        try:
            parser.parse_args(args)
            return True
        except SystemExit as e:
            return False

您可以检查 stderr(使用 err.seek(0); err.read()),但通常不需要这种粒度。

现在,您可以使用 assertTrue 或任何您喜欢的测试方式:

assertTrue(validate_args(["-l", "-m"]))

另外,您可能想捕获并抛出另一个错误(而不是SystemExit):

def validate_args(args):
    with captured_output() as (out, err):
        try:
            return parser.parse_args(args)
        except SystemExit as e:
            err.seek(0)
            raise argparse.ArgumentError(err.read())

parse_args throws a SystemExit and prints to stderr, you can catch both of these:

import contextlib
import io
import sys

@contextlib.contextmanager
def captured_output():
    new_out, new_err = io.StringIO(), io.StringIO()
    old_out, old_err = sys.stdout, sys.stderr
    try:
        sys.stdout, sys.stderr = new_out, new_err
        yield sys.stdout, sys.stderr
    finally:
        sys.stdout, sys.stderr = old_out, old_err

def validate_args(args):
    with captured_output() as (out, err):
        try:
            parser.parse_args(args)
            return True
        except SystemExit as e:
            return False

You can inspect stderr (using err.seek(0); err.read()), but generally that granularity isn’t required.

Now you can use assertTrue or whichever testing you like:

assertTrue(validate_args(["-l", "-m"]))

Alternatively you might like to catch and rethrow a different error (instead of SystemExit):

def validate_args(args):
    with captured_output() as (out, err):
        try:
            return parser.parse_args(args)
        except SystemExit as e:
            err.seek(0)
            raise argparse.ArgumentError(err.read())
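
If all you need is to assert that bad arguments are rejected, you can also catch the SystemExit directly with assertRaises and skip the output capture entirely; a minimal sketch with an inline parser:

import argparse
import unittest


class BadArgsTest(unittest.TestCase):
    def setUp(self):
        self.parser = argparse.ArgumentParser()
        self.parser.add_argument('-l', action='store_true')

    def test_rejects_unknown_flag(self):
        # argparse calls sys.exit(2) on a parse error, raising SystemExit.
        with self.assertRaises(SystemExit):
            self.parser.parse_args(['--definitely-not-a-flag'])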

回答 7

将 argparse.ArgumentParser.parse_args 的结果传递给函数时,我有时会使用 namedtuple 来模拟参数以进行测试。

import unittest
from collections import namedtuple
from my_module import main

class TestMyModule(unittest.TestCase):

    args_tuple = namedtuple('args', 'arg1 arg2 arg3 arg4')

    def test_arg1(self):
        args = TestMyModule.args_tuple("age > 85", None, None, None)
        res = main(args)
        assert res == ["55289-0524", "00591-3496"], 'arg1 failed'

    def test_arg2(self):
        args = TestMyModule.args_tuple(None, [42, 69], None, None)
        res = main(args)
        assert res == [], 'arg2 failed'

if __name__ == '__main__':
    unittest.main()

When passing results from argparse.ArgumentParser.parse_args to a function, I sometimes use a namedtuple to mock arguments for testing.

import unittest
from collections import namedtuple
from my_module import main

class TestMyModule(unittest.TestCase):

    args_tuple = namedtuple('args', 'arg1 arg2 arg3 arg4')

    def test_arg1(self):
        args = TestMyModule.args_tuple("age > 85", None, None, None)
        res = main(args)
        assert res == ["55289-0524", "00591-3496"], 'arg1 failed'

    def test_arg2(self):
        args = TestMyModule.args_tuple(None, [42, 69], None, None)
        res = main(args)
        assert res == [], 'arg2 failed'

if __name__ == '__main__':
    unittest.main()
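
An alternative to the namedtuple that mirrors what parse_args actually returns is to build an argparse.Namespace directly; a sketch reusing the answer’s hypothetical my_module:

import argparse

from my_module import main  # same hypothetical module as above

# Namespace accepts arbitrary keyword attributes, just like the object
# that ArgumentParser.parse_args() returns.
args = argparse.Namespace(arg1="age > 85", arg2=None, arg3=None, arg4=None)
res = main(args)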

回答 8

为了测试 CLI(命令行界面)本身而不是命令输出,我做了类似这样的事情:

import pytest
from argparse import ArgumentParser, _StoreAction

ap = ArgumentParser(prog="cli")
ap.add_argument("cmd", choices=("spam", "ham"))
ap.add_argument("-a", "--arg", type=str, nargs="?", default=None, const=None)
...

def test_parser():
    assert isinstance(ap, ArgumentParser)
    assert isinstance(ap._actions, list)
    args = {_.dest: _ for _ in ap._actions if isinstance(_, _StoreAction)}
    
    assert args.keys() == {"cmd", "arg"}
    assert args["cmd"] == ("spam", "ham")
    assert args["arg"].type == str
    assert args["arg"].nargs == "?"
    ...

For testing the CLI (command line interface) itself, and not the command output, I did something like this:

import pytest
from argparse import ArgumentParser, _StoreAction

ap = ArgumentParser(prog="cli")
ap.add_argument("cmd", choices=("spam", "ham"))
ap.add_argument("-a", "--arg", type=str, nargs="?", default=None, const=None)
...

def test_parser():
    assert isinstance(ap, ArgumentParser)
    assert isinstance(ap._actions, list)
    args = {_.dest: _ for _ in ap._actions if isinstance(_, _StoreAction)}
    
    assert args.keys() == {"cmd", "arg"}
    assert args["cmd"] == ("spam", "ham")
    assert args["arg"].type == str
    assert args["arg"].nargs == "?"
    ...