Category Archives: Q&A

Relative imports in Python 3

Question: Relative imports in Python 3

I want to import a function from another file in the same directory.

Sometimes it works for me with from .mymodule import myfunction but sometimes I get a:

SystemError: Parent module '' not loaded, cannot perform relative import

Sometimes it works with from mymodule import myfunction, but sometimes I also get a:

SystemError: Parent module '' not loaded, cannot perform relative import

I don’t understand the logic here, and I couldn’t find any explanation. This looks completely random.

Could someone explain to me what’s the logic behind all this?


Answer 0

"unfortunately, this module needs to be inside the package, and it also needs to be runnable as a script, sometimes. Any idea how I could achieve that?"

It’s quite common to have a layout like this…

main.py
mypackage/
    __init__.py
    mymodule.py
    myothermodule.py

…with a mymodule.py like this…

#!/usr/bin/env python3

# Exported function
def as_int(a):
    return int(a)

# Test function for module  
def _test():
    assert as_int('1') == 1

if __name__ == '__main__':
    _test()

…a myothermodule.py like this…

#!/usr/bin/env python3

from .mymodule import as_int

# Exported function
def add(a, b):
    return as_int(a) + as_int(b)

# Test function for module  
def _test():
    assert add('1', '1') == 2

if __name__ == '__main__':
    _test()

…and a main.py like this…

#!/usr/bin/env python3

from mypackage.myothermodule import add

def main():
    print(add('1', '1'))

if __name__ == '__main__':
    main()

…which works fine when you run main.py or mypackage/mymodule.py, but fails with mypackage/myothermodule.py, due to the relative import…

from .mymodule import as_int

The way you’re supposed to run it is…

python3 -m mypackage.myothermodule

…but it’s somewhat verbose, and doesn’t mix well with a shebang line like #!/usr/bin/env python3.

The simplest fix for this case, assuming the name mymodule is globally unique, would be to avoid using relative imports, and just use…

from mymodule import as_int

…although, if it’s not unique, or your package structure is more complex, you’ll need to include the directory containing your package directory in PYTHONPATH, and do it like this…

from mypackage.mymodule import as_int

…or if you want it to work “out of the box”, you can frob the PYTHONPATH in code first with this…

import sys
import os

PACKAGE_PARENT = '..'
SCRIPT_DIR = os.path.dirname(os.path.realpath(os.path.join(os.getcwd(), os.path.expanduser(__file__))))
sys.path.append(os.path.normpath(os.path.join(SCRIPT_DIR, PACKAGE_PARENT)))

from mypackage.mymodule import as_int
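
As an aside, the same path manipulation can be written a bit more compactly with pathlib. This is just an equivalent sketch of the snippet above (assuming the same layout, with the boilerplate living in a module one level below the directory that contains mypackage), not a different approach:

import sys
from pathlib import Path

# Append the directory that contains 'mypackage' (the parent of this file's
# directory) to sys.path, so the absolute import below can find it.
sys.path.append(str(Path(__file__).resolve().parent.parent))

from mypackage.mymodule import as_int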

It’s kind of a pain, but there’s a clue as to why in an email written by a certain Guido van Rossum…

I’m -1 on this and on any other proposed twiddlings of the __main__ machinery. The only use case seems to be running scripts that happen to be living inside a module’s directory, which I’ve always seen as an antipattern. To make me change my mind you’d have to convince me that it isn’t.

Whether running scripts inside a package is an antipattern or not is subjective, but personally I find it really useful in a package I have which contains some custom wxPython widgets, so I can run the script for any of the source files to display a wx.Frame containing only that widget for testing purposes.


Answer 1

Explanation

From PEP 328

Relative imports use a module’s __name__ attribute to determine that module’s position in the package hierarchy. If the module’s name does not contain any package information (e.g. it is set to ‘__main__’) then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system.

At some point PEP 338 conflicted with PEP 328:

… relative imports rely on __name__ to determine the current module’s position in the package hierarchy. In a main module, the value of __name__ is always ‘__main__’, so explicit relative imports will always fail (as they only work for a module inside a package)

and to address the issue, PEP 366 introduced the top level variable __package__:

By adding a new module level attribute, this PEP allows relative imports to work automatically if the module is executed using the -m switch. A small amount of boilerplate in the module itself will allow the relative imports to work when the file is executed by name. […] When it [the attribute] is present, relative imports will be based on this attribute rather than the module __name__ attribute. […] When the main module is specified by its filename, then the __package__ attribute will be set to None. […] When the import system encounters an explicit relative import in a module without __package__ set (or with it set to None), it will calculate and store the correct value (__name__.rpartition(‘.’)[0] for normal modules and __name__ for package initialisation modules)

(emphasis mine)

If __name__ is '__main__', __name__.rpartition('.')[0] returns an empty string. This is why there is an empty string literal in the error description:

SystemError: Parent module '' not loaded, cannot perform relative import
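
You can see where that empty string comes from directly in the interpreter (the dotted name below is just an illustrative example):

>>> '__main__'.rpartition('.')[0]   # no package information -> ''
''
>>> 'mypackage.mymodule'.rpartition('.')[0]
'mypackage'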

The relevant part of CPython's PyImport_ImportModuleLevelObject function:

if (PyDict_GetItem(interp->modules, package) == NULL) {
    PyErr_Format(PyExc_SystemError,
            "Parent module %R not loaded, cannot perform relative "
            "import", package);
    goto error;
}

CPython raises this exception if it was unable to find package (the name of the package) in interp->modules (accessible as sys.modules). Since sys.modules is “a dictionary that maps module names to modules which have already been loaded”, it’s now clear that the parent module must be explicitly absolute-imported before performing relative import.

Note: The patch from the issue 18018 has added another if block, which will be executed before the code above:

if (PyUnicode_CompareWithASCIIString(package, "") == 0) {
    PyErr_SetString(PyExc_ImportError,
            "attempted relative import with no known parent package");
    goto error;
} /* else if (PyDict_GetItem(interp->modules, package) == NULL) {
    ...
*/

If package (same as above) is empty string, the error message will be

ImportError: attempted relative import with no known parent package

However, you will only see this in Python 3.6 or newer.

Solution #1: Run your script using -m

Consider a directory (which is a Python package):

.
├── package
│   ├── __init__.py
│   ├── module.py
│   └── standalone.py

All of the files in package begin with the same 2 lines of code:

from pathlib import Path
print('Running' if __name__ == '__main__' else 'Importing', Path(__file__).resolve())

I’m including these two lines only to make the order of operations obvious. We can ignore them completely, since they don’t affect the execution.

__init__.py and module.py contain only those two lines (i.e., they are effectively empty).

standalone.py additionally attempts to import module.py via relative import:

from . import module  # explicit relative import

We’re well aware that /path/to/python/interpreter package/standalone.py will fail. However, we can run the module with the -m command line option that will “search sys.path for the named module and execute its contents as the __main__ module”:

vaultah@base:~$ python3 -i -m package.standalone
Importing /home/vaultah/package/__init__.py
Running /home/vaultah/package/standalone.py
Importing /home/vaultah/package/module.py
>>> __file__
'/home/vaultah/package/standalone.py'
>>> __package__
'package'
>>> # The __package__ has been correctly set and module.py has been imported.
... # What's inside sys.modules?
... import sys
>>> sys.modules['__main__']
<module 'package.standalone' from '/home/vaultah/package/standalone.py'>
>>> sys.modules['package.module']
<module 'package.module' from '/home/vaultah/package/module.py'>
>>> sys.modules['package']
<module 'package' from '/home/vaultah/package/__init__.py'>

-m does all the importing stuff for you and automatically sets __package__, but you can also do that yourself in the next solution:

Solution #2: Set __package__ manually

Please treat it as a proof of concept rather than an actual solution. It isn’t well-suited for use in real-world code.

PEP 366 has a workaround to this problem, however, it’s incomplete, because setting __package__ alone is not enough. You’re going to need to import at least N preceding packages in the module hierarchy, where N is the number of parent directories (relative to the directory of the script) that will be searched for the module being imported.

Thus,

  1. Add the parent directory of the Nth predecessor of the current module to sys.path

  2. Remove the current file’s directory from sys.path

  3. Import the parent module of the current module using its fully-qualified name

  4. Set __package__ to the fully-qualified name from 2

  5. Perform the relative import

I’ll borrow files from the Solution #1 and add some more subpackages:

package
├── __init__.py
├── module.py
└── subpackage
    ├── __init__.py
    └── subsubpackage
        ├── __init__.py
        └── standalone.py

This time standalone.py will import module.py from the package package using the following relative import

from ... import module  # N = 3

We’ll need to precede that line with the boilerplate code, to make it work.

import sys
from pathlib import Path

if __name__ == '__main__' and __package__ is None:
    file = Path(__file__).resolve()
    parent, top = file.parent, file.parents[3]

    sys.path.append(str(top))
    try:
        sys.path.remove(str(parent))
    except ValueError: # Already removed
        pass

    import package.subpackage.subsubpackage
    __package__ = 'package.subpackage.subsubpackage'

from ... import module # N = 3

It allows us to execute standalone.py by filename:

vaultah@base:~$ python3 package/subpackage/subsubpackage/standalone.py
Running /home/vaultah/package/subpackage/subsubpackage/standalone.py
Importing /home/vaultah/package/__init__.py
Importing /home/vaultah/package/subpackage/__init__.py
Importing /home/vaultah/package/subpackage/subsubpackage/__init__.py
Importing /home/vaultah/package/module.py

A more general solution wrapped in a function can be found here. Example usage:

if __name__ == '__main__' and __package__ is None:
    import_parents(level=3) # N = 3

from ... import module
from ...module.submodule import thing

Solution #3: Use absolute imports and setuptools

The steps are –

  1. Replace explicit relative imports with equivalent absolute imports

  2. Install package to make it importable

For instance, the directory structure may be as follows

.
├── project
│   ├── package
│   │   ├── __init__.py
│   │   ├── module.py
│   │   └── standalone.py
│   └── setup.py

where setup.py is

from setuptools import setup, find_packages
setup(
    name = 'your_package_name',
    packages = find_packages(),
)

The rest of the files were borrowed from the Solution #1.

Installation will allow you to import the package regardless of your working directory (assuming there’ll be no naming issues).

We can modify standalone.py to use this advantage (step 1):

from package import module  # absolute import

Change your working directory to project and run /path/to/python/interpreter setup.py install --user (--user installs the package in your site-packages directory) (step 2):

vaultah@base:~$ cd project
vaultah@base:~/project$ python3 setup.py install --user

Let’s verify that it’s now possible to run standalone.py as a script:

vaultah@base:~/project$ python3 -i package/standalone.py
Running /home/vaultah/project/package/standalone.py
Importing /home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/__init__.py
Importing /home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/module.py
>>> module
<module 'package.module' from '/home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/module.py'>
>>> import sys
>>> sys.modules['package']
<module 'package' from '/home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/__init__.py'>
>>> sys.modules['package.module']
<module 'package.module' from '/home/vaultah/.local/lib/python3.6/site-packages/your_package_name-0.0.0-py3.6.egg/package/module.py'>

Note: If you decide to go down this route, you’d be better off using virtual environments to install packages in isolation.

Solution #4: Use absolute imports and some boilerplate code

Frankly, the installation is not necessary – you could add some boilerplate code to your script to make absolute imports work.

I’m going to borrow files from Solution #1 and change standalone.py:

  1. Add the parent directory of package to sys.path before attempting to import anything from package using absolute imports:

    import sys
    from pathlib import Path # if you haven't already done so
    file = Path(__file__).resolve()
    parent, root = file.parent, file.parents[1]
    sys.path.append(str(root))
    
    # Additionally remove the current file's directory from sys.path
    try:
        sys.path.remove(str(parent))
    except ValueError: # Already removed
        pass
    
  2. Replace the relative import by the absolute import:

    from package import module  # absolute import
    

standalone.py runs without problems:

vaultah@base:~$ python3 -i package/standalone.py
Running /home/vaultah/package/standalone.py
Importing /home/vaultah/package/__init__.py
Importing /home/vaultah/package/module.py
>>> module
<module 'package.module' from '/home/vaultah/package/module.py'>
>>> import sys
>>> sys.modules['package']
<module 'package' from '/home/vaultah/package/__init__.py'>
>>> sys.modules['package.module']
<module 'package.module' from '/home/vaultah/package/module.py'>

I feel that I should warn you: try not to do this, especially if your project has a complex structure.


As a side note, PEP 8 recommends the use of absolute imports, but states that in some scenarios explicit relative imports are acceptable:

Absolute imports are recommended, as they are usually more readable and tend to be better behaved (or at least give better error messages). […] However, explicit relative imports are an acceptable alternative to absolute imports, especially when dealing with complex package layouts where using absolute imports would be unnecessarily verbose.


Answer 2

Put this inside your package’s __init__.py file:

# For relative imports to work in Python 3.6
import os, sys; sys.path.append(os.path.dirname(os.path.realpath(__file__)))

Assuming your package is like this:

├── project
│   ├── package
│   │   ├── __init__.py
│   │   ├── module1.py
│   │   └── module2.py
│   └── setup.py

Now use regular imports in your package, like:

# in module2.py
from module1 import class1

This works in both Python 2 and 3.


Answer 3

I ran into this issue. A hack workaround is importing via an if/else block like follows:

#!/usr/bin/env python3
#myothermodule

if __name__ == '__main__':
    from mymodule import as_int
else:
    from .mymodule import as_int


# Exported function
def add(a, b):
    return as_int(a) + as_int(b)

# Test function for module  
def _test():
    assert add('1', '1') == 2

if __name__ == '__main__':
    _test()

Answer 4

Hopefully, this will be of value to someone out there – I went through half a dozen Stack Overflow posts trying to figure out relative imports similar to what's posted above. I set everything up as suggested, but I was still hitting ModuleNotFoundError: No module named 'my_module_name'

Since I was just developing locally and playing around, I hadn't created or run a setup.py file. I also apparently hadn't set my PYTHONPATH.

I realized that when I ran my code the way I had been running it when the tests were in the same directory as the module, Python couldn't find my module:

$ python3 test/my_module/module_test.py                                                                                                               2.4.0
Traceback (most recent call last):
  File "test/my_module/module_test.py", line 6, in <module>
    from my_module.module import *
ModuleNotFoundError: No module named 'my_module'

However, when I explicitly specified the path things started to work:

$ PYTHONPATH=. python3 test/my_module/module_test.py                                                                                                  2.4.0
...........
----------------------------------------------------------------------
Ran 11 tests in 0.001s

OK

So, in the event that anyone has tried a few suggestions, believes their code is structured correctly, and still finds themselves in a similar situation, try either of the following if you don't export the current directory to your PYTHONPATH:

  1. Run your code and explicitly include the path like so: $ PYTHONPATH=. python3 test/my_module/module_test.py
  2. To avoid calling PYTHONPATH=., create a setup.py file with contents like the following and run python setup.py develop to add the package to the path:
# setup.py
from setuptools import setup, find_packages

setup(
    name='sample',
    packages=find_packages()
)

Answer 5

I needed to run python3 from the main project directory to make it work.

For example, if the project has the following structure:

project_demo/
├── main.py
├── some_package/
│   ├── __init__.py
│   └── project_configs.py
└── test/
    └── test_project_configs.py

Solution

I would run python3 inside the folder project_demo/ and then do:

from some_package import project_configs

Answer 6

To obviate this problem, I devised a solution with the repackage package, which has worked for me for some time. It adds the upper directory to the lib path:

import repackage
repackage.up()
from mypackage.mymodule import myfunction

Repackage can make relative imports that work in a wide range of cases, using an intelligent strategy (inspecting the call stack).


Answer 7

If both packages are in your import path (sys.path), and the module/class you want is in example/example.py, then to access the class without a relative import, try:

from example.example import fkt

Answer 8

I think the best solution is to create a package for your module: Here is more info on how to do it.

Once you have a package you don't need to worry about relative imports; you can just do absolute imports.


Answer 9

I had a similar problem: I needed a Linux service and a CGI plugin which use common constants to cooperate. The 'natural' way to do this is to place them in the __init__.py of the package, but I cannot start the CGI plugin with the -m parameter.

My final solution was similar to Solution #2 above:

import sys
import pathlib as p
import importlib

pp = p.Path(sys.argv[0])
pack = pp.resolve().parent

pkg = importlib.import_module('__init__', package=str(pack))

The disadvantage is that you must prefix the constants (or common functions) with pkg:

print(pkg.Glob)

Find which version of a package is installed with pip

Question: Find which version of a package is installed with pip

Using pip, is it possible to figure out which version of a package is currently installed?

I know about pip install XYZ --upgrade but I am wondering if there is anything like pip info XYZ. If not, what would be the best way to tell which version I am currently using?


Answer 0

pip 1.3开始,有一个pip show命令。

$ pip show Jinja2
---
Name: Jinja2
Version: 2.7.3
Location: /path/to/virtualenv/lib/python2.7/site-packages
Requires: markupsafe

在旧版本,pip freezegrep应做的工作很好。

$ pip freeze | grep Jinja2
Jinja2==2.7.3

As of pip 1.3, there is a pip show command.

$ pip show Jinja2
---
Name: Jinja2
Version: 2.7.3
Location: /path/to/virtualenv/lib/python2.7/site-packages
Requires: markupsafe

In older versions, pip freeze and grep should do the job nicely.

$ pip freeze | grep Jinja2
Jinja2==2.7.3

Answer 1

I just sent a pull request to pip with the enhancement Hugo Tavares suggested:

(specloud as example)

$ pip show specloud

Package: specloud
Version: 0.4.4
Requires:
nose
figleaf
pinocchio

Answer 2

Pip 1.3 now also has a list command:

$ pip list
argparse (1.2.1)
pip (1.5.1)
setuptools (2.1)
wsgiref (0.1.2)

Answer 3

And with --outdated as an extra argument, you will get the Current and Latest versions of the packages you are using:

$ pip list --outdated
distribute (Current: 0.6.34 Latest: 0.7.3)
django-bootstrap3 (Current: 1.1.0 Latest: 4.3.0)
Django (Current: 1.5.4 Latest: 1.6.4)
Jinja2 (Current: 2.6 Latest: 2.8)

So combining with AdamKG's answer:

$ pip list --outdated | grep Jinja2
Jinja2 (Current: 2.6 Latest: 2.8)

Check pip-tools too: https://github.com/nvie/pip-tools


Answer 4

You can also install yolk and then run yolk -l which also gives some nice output. Here is what I get for my little virtualenv:

(venv)CWD> /space/vhosts/pyramid.xcode.com/venv/build/unittest 
project@pyramid 43> yolk -l
Chameleon       - 2.8.2        - active 
Jinja2          - 2.6          - active 
Mako            - 0.7.0        - active 
MarkupSafe      - 0.15         - active 
PasteDeploy     - 1.5.0        - active 
Pygments        - 1.5          - active 
Python          - 2.7.3        - active development (/usr/lib/python2.7/lib-dynload)
SQLAlchemy      - 0.7.6        - active 
WebOb           - 1.2b3        - active 
account         - 0.0          - active development (/space/vhosts/pyramid.xcode.com/project/account)
distribute      - 0.6.19       - active 
egenix-mx-base  - 3.2.3        - active 
ipython         - 0.12         - active 
logilab-astng   - 0.23.1       - active 
logilab-common  - 0.57.1       - active 
nose            - 1.1.2        - active 
pbkdf2          - 1.3          - active 
pip             - 1.0.2        - active 
pyScss          - 1.1.3        - active 
pycrypto        - 2.5          - active 
pylint          - 0.25.1       - active 
pyramid-debugtoolbar - 1.0.1        - active 
pyramid-tm      - 0.4          - active 
pyramid         - 1.3          - active 
repoze.lru      - 0.5          - active 
simplejson      - 2.5.0        - active 
transaction     - 1.2.0        - active 
translationstring - 1.1          - active 
venusian        - 1.0a3        - active 
waitress        - 0.8.1        - active 
wsgiref         - 0.1.2        - active development (/usr/lib/python2.7)
yolk            - 0.4.3        - active 
zope.deprecation - 3.5.1        - active 
zope.interface  - 3.8.0        - active 
zope.sqlalchemy - 0.7          - active 

Answer 5

You can use the grep command to find out.

pip show <package_name>|grep Version

Example:

pip show urllib3|grep Version

will show only the versions.

Metadata-Version: 2.0
Version: 1.12


Answer 6

The easiest way is this:

import jinja2
print jinja2.__version__
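
The snippet above is Python 2 syntax; on Python 3 the same idea looks like this (note that not every package exposes a __version__ attribute, so treat this as a convenience rather than a guarantee):

import jinja2
print(jinja2.__version__)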

Answer 7

There’s also a tool called pip-check which gives you a quick overview of all installed packages and their update status:

Haven’t used it myself; just stumbled upon it and this SO question in quick succession, and since it wasn’t mentioned…


Answer 8

On Windows, you can issue a command such as:

pip show setuptools | findstr "Version"

Output:

Version: 34.1.1

Answer 9

The Python function returning just the package version in a machine-readable format:

from importlib.metadata import version 
version('numpy')

Prior to Python 3.8:

pip install importlib-metadata 
from importlib_metadata import version
version('numpy')

The bash equivalent (here also invoked from python) would be much more complex (but more robust – see caution below):

import subprocess
def get_installed_ver(pkg_name):
    bash_str="pip freeze | grep -w %s= | awk -F '==' {'print $2'} | tr -d '\n'" %(pkg_name)
    return(subprocess.check_output(bash_str, shell=True).decode())

Sample usage:

# pkg_name="xgboost"
# pkg_name="Flask"
# pkg_name="Flask-Caching"
pkg_name="scikit-learn"

print(get_installed_ver(pkg_name))
>>> 0.22

Note that in both cases the pkg_name parameter should contain the package name in the format returned by pip freeze, and not the one used during import, e.g. scikit-learn, not sklearn, and Flask-Caching, not flask_caching.
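
As a quick illustration of that distinction on Python 3.8+ (a sketch that assumes scikit-learn is installed in the current environment):

from importlib.metadata import version

print(version('scikit-learn'))  # distribution name, as listed by 'pip freeze'
# version('sklearn')            # import name; normally raises PackageNotFoundError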

Note that while invoking pip freeze in the bash version may seem inefficient, only this method proves sufficiently robust to package-naming peculiarities and inconsistencies (e.g. underscores vs. dashes, lower vs. upper case, and abbreviations such as sklearn vs. scikit-learn).

Caution: in complex environments both variants can return surprise version numbers, inconsistent with what you can actually get during import.

One such problem arises when there are other versions of the package hidden in a user site-packages subfolder. As an illustration of the perils of using version(), here's a situation I encountered:

$ pip freeze | grep lightgbm
lightgbm==2.3.1

and

$ python -c "import lightgbm; print(lightgbm.__version__)"
2.3.1

vs.

$ python -c "from importlib_metadata import version; print(version(\"lightgbm\"))"
2.2.3

until you delete the subfolder with the old version (here 2.2.3) from the user folder (only one would normally be preserved by pip – the one installed last with the --user switch):

$ ls /home/jovyan/.local/lib/python3.7/site-packages/lightgbm*
/home/jovyan/.local/lib/python3.7/site-packages/lightgbm-2.2.3.dist-info
/home/jovyan/.local/lib/python3.7/site-packages/lightgbm-2.3.1.dist-info

Another problem is having some conda-installed packages in the same environment. If they share dependencies with your pip-installed packages, and versions of these dependencies differ, you may get downgrades of your pip-installed dependencies.

To illustrate, the latest version of numpy available on PyPI on 04-01-2020 was 1.18.0, while at the same time Anaconda's conda-forge channel had only version 1.17.3 of numpy as their latest. So when you then installed the basemap package with conda, your previously pip-installed numpy would get downgraded by conda to 1.17.3, and version 1.18.0 would become unavailable to the import function. In this case version() would be right, and pip freeze/conda list wrong:

$ python -c "from importlib_metadata import version; print(version(\"numpy\"))"
1.17.3

$ python -c "import numpy; print(numpy.__version__)"
1.17.3

$ pip freeze | grep numpy
numpy==1.18.0

$ conda list | grep numpy
numpy                     1.18.0                   pypi_0    pypi

Answer 10

pip show works in python 3.7:

pip show selenium
Name: selenium
Version: 4.0.0a3
Summary: Python bindings for Selenium
Home-page: https://github.com/SeleniumHQ/selenium/
Author: UNKNOWN
Author-email: UNKNOWN
License: Apache 2.0
Location: c:\python3.7\lib\site-packages\selenium-4.0.0a3-py3.7.egg
Requires: urllib3
Required-by:

Answer 11

To do this using Python code:

Using importlib.metadata.version

Python ≥3.8

import importlib.metadata
importlib.metadata.version('beautifulsoup4')
'4.9.1'

Python ≤3.7

(using importlib_metadata.version)

!pip install importlib-metadata

import importlib_metadata
importlib_metadata.version('beautifulsoup4')
'4.9.1'

Using pkg_resources.Distribution

import pkg_resources
pkg_resources.get_distribution('beautifulsoup4').version
'4.9.1'
pkg_resources.get_distribution('beautifulsoup4').parsed_version
<Version('4.9.1')>

Credited to comments by sinoroc and mirekphd.


Answer 12

For Windows you can

  1. Open cmd and type python, then press Enter.

  2. Type import followed by the module name (for example, import serial) and press Enter.

  3. Type the module name followed by .__version__ (for example, serial.__version__) and press Enter.

As you can see in the screenshot, I am using this method to check the version of the serial module.
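
A hypothetical session following those steps would look roughly like this (the printed version string depends entirely on what is installed):

>>> import serial
>>> serial.__version__
'3.4'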



Answer 13

The question does not mention which OS the user is on (Windows/Linux/Mac).

There are already a couple of answers that work flawlessly on Mac and Linux.

The command below can be used in case the user is trying to find the version of a Python package on Windows.

In PowerShell, use the following command:

pip list | findstr <PackageName>

Example: pip list | findstr requests

Output: requests 2.18.4


Why is [] faster than list()?

Question: Why is [] faster than list()?

I recently compared the processing speeds of [] and list() and was surprised to discover that [] runs more than three times faster than list(). I ran the same test with {} and dict() and the results were practically identical: [] and {} both took around 0.128sec / million cycles, while list() and dict() took roughly 0.428sec / million cycles each.

Why is this? Do [] and {} (and probably () and '', too) immediately pass back copies of some empty stock literal while their explicitly-named counterparts (list(), dict(), tuple(), str()) fully go about creating an object, whether or not they actually have elements?

I have no idea how these two methods differ but I’d love to find out. I couldn’t find an answer in the docs or on SO, and searching for empty brackets turned out to be more problematic than I’d expected.

I got my timing results by calling timeit.timeit("[]") and timeit.timeit("list()"), and timeit.timeit("{}") and timeit.timeit("dict()"), to compare lists and dictionaries, respectively. I’m running Python 2.7.9.

I recently discovered “Why is if True slower than if 1?” that compares the performance of if True to if 1 and seems to touch on a similar literal-versus-global scenario; perhaps it’s worth considering as well.
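
For reference, a small script reproducing the comparison described above (absolute numbers will vary with the machine and Python version):

import timeit

# timeit.timeit runs each snippet 1,000,000 times by default
for snippet in ('[]', 'list()', '{}', 'dict()'):
    print('%-8s %f' % (snippet, timeit.timeit(snippet)))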


Answer 0

因为[]{}文字语法。Python可以创建字节码仅用于创建列表或字典对象:

>>> import dis
>>> dis.dis(compile('[]', '', 'eval'))
  1           0 BUILD_LIST               0
              3 RETURN_VALUE        
>>> dis.dis(compile('{}', '', 'eval'))
  1           0 BUILD_MAP                0
              3 RETURN_VALUE        

list()dict()是单独的对象。它们的名称需要解析,必须包含堆栈以推入参数,必须存储框架以供以后检索,并且必须进行调用。这都需要更多时间。

对于空的情况,这意味着您至少要有一个LOAD_NAME(必须在全局命名空间以及__builtin__模块中进行搜索),后跟一个CALL_FUNCTION必须保留当前帧的:

>>> dis.dis(compile('list()', '', 'eval'))
  1           0 LOAD_NAME                0 (list)
              3 CALL_FUNCTION            0
              6 RETURN_VALUE        
>>> dis.dis(compile('dict()', '', 'eval'))
  1           0 LOAD_NAME                0 (dict)
              3 CALL_FUNCTION            0
              6 RETURN_VALUE        

您可以使用以下命令分别计时名称查找timeit

>>> import timeit
>>> timeit.timeit('list', number=10**7)
0.30749011039733887
>>> timeit.timeit('dict', number=10**7)
0.4215109348297119

时间差异可能是字典哈希冲突。从调用这些对象的时间中减去这些时间,然后将结果与使用文字的时间进行比较:

>>> timeit.timeit('[]', number=10**7)
0.30478692054748535
>>> timeit.timeit('{}', number=10**7)
0.31482696533203125
>>> timeit.timeit('list()', number=10**7)
0.9991960525512695
>>> timeit.timeit('dict()', number=10**7)
1.0200958251953125

因此,1.00 - 0.31 - 0.30 == 0.39每1000万次调用必须调用该对象花费了额外的几秒钟。

您可以通过将全局名称别名为本地名称来避免全局查找成本(使用timeit设置,绑定到名称的所有内容都是本地名称):

>>> timeit.timeit('_list', '_list = list', number=10**7)
0.1866450309753418
>>> timeit.timeit('_dict', '_dict = dict', number=10**7)
0.19016098976135254
>>> timeit.timeit('_list()', '_list = list', number=10**7)
0.841480016708374
>>> timeit.timeit('_dict()', '_dict = dict', number=10**7)
0.7233691215515137

但您永远无法克服这些CALL_FUNCTION成本。

Because [] and {} are literal syntax. Python can create bytecode just to create the list or dictionary objects:

>>> import dis
>>> dis.dis(compile('[]', '', 'eval'))
  1           0 BUILD_LIST               0
              3 RETURN_VALUE        
>>> dis.dis(compile('{}', '', 'eval'))
  1           0 BUILD_MAP                0
              3 RETURN_VALUE        

list() and dict() are separate objects. Their names need to be resolved, the stack has to be involved to push the arguments, the frame has to be stored to retrieve later, and a call has to be made. That all takes more time.

For the empty case, that means you have at the very least a LOAD_NAME (which has to search through the global namespace as well as the __builtin__ module) followed by a CALL_FUNCTION, which has to preserve the current frame:

>>> dis.dis(compile('list()', '', 'eval'))
  1           0 LOAD_NAME                0 (list)
              3 CALL_FUNCTION            0
              6 RETURN_VALUE        
>>> dis.dis(compile('dict()', '', 'eval'))
  1           0 LOAD_NAME                0 (dict)
              3 CALL_FUNCTION            0
              6 RETURN_VALUE        

You can time the name lookup separately with timeit:

>>> import timeit
>>> timeit.timeit('list', number=10**7)
0.30749011039733887
>>> timeit.timeit('dict', number=10**7)
0.4215109348297119

The time discrepancy there is probably a dictionary hash collision. Subtract those times from the times for calling those objects, and compare the result against the times for using literals:

>>> timeit.timeit('[]', number=10**7)
0.30478692054748535
>>> timeit.timeit('{}', number=10**7)
0.31482696533203125
>>> timeit.timeit('list()', number=10**7)
0.9991960525512695
>>> timeit.timeit('dict()', number=10**7)
1.0200958251953125

So having to call the object takes an additional 1.00 - 0.31 - 0.30 == 0.39 seconds per 10 million calls.

You can avoid the global lookup cost by aliasing the global names as locals (using a timeit setup, everything you bind to a name is a local):

>>> timeit.timeit('_list', '_list = list', number=10**7)
0.1866450309753418
>>> timeit.timeit('_dict', '_dict = dict', number=10**7)
0.19016098976135254
>>> timeit.timeit('_list()', '_list = list', number=10**7)
0.841480016708374
>>> timeit.timeit('_dict()', '_dict = dict', number=10**7)
0.7233691215515137

but you never can overcome that CALL_FUNCTION cost.


Answer 1

list() requires a global lookup and a function call but [] compiles to a single instruction. See:

Python 2.7.3
>>> import dis
>>> print dis.dis(lambda: list())
  1           0 LOAD_GLOBAL              0 (list)
              3 CALL_FUNCTION            0
              6 RETURN_VALUE        
None
>>> print dis.dis(lambda: [])
  1           0 BUILD_LIST               0
              3 RETURN_VALUE        
None

Answer 2

Because list is a function to convert, say, a string to a list object, while [] is used to create a list off the bat. Try this (it might make more sense to you):

x = "wham bam"
a = list(x)
>>> a
["w", "h", "a", "m", ...]

While

y = ["wham bam"]
>>> y
["wham bam"]

Gives you an actual list containing whatever you put in it.


Answer 3

The answers here are great, to the point and fully cover this question. I’ll drop a further step down from byte-code for those interested. I’m using the most recent repo of CPython; older versions behave similar in this regard but slight changes might be in place.

Here’s a break down of the execution for each of these, BUILD_LIST for [] and CALL_FUNCTION for list().


The BUILD_LIST instruction:

You should just view the horror:

PyObject *list =  PyList_New(oparg);
if (list == NULL)
    goto error;
while (--oparg >= 0) {
    PyObject *item = POP();
    PyList_SET_ITEM(list, oparg, item);
}
PUSH(list);
DISPATCH();

Terribly convoluted, I know. This is how simple it is:

  • Create a new list with PyList_New (this mainly allocates the memory for a new list object), oparg signalling the number of arguments on the stack. Straight to the point.
  • Check that nothing went wrong with if (list==NULL).
  • Add any arguments (in our case this isn’t executed) located on the stack with PyList_SET_ITEM (a macro).

No wonder it is fast! It’s custom-made for creating new lists, nothing else :-)

The CALL_FUNCTION instruction:

Here’s the first thing you see when you peek at the code handling CALL_FUNCTION:

PyObject **sp, *res;
sp = stack_pointer;
res = call_function(&sp, oparg, NULL);
stack_pointer = sp;
PUSH(res);
if (res == NULL) {
    goto error;
}
DISPATCH();

Looks pretty harmless, right? Well, no, unfortunately not: call_function is not a straightforward guy that will call the function immediately; it can't. Instead, it grabs the object from the stack, grabs all the arguments off the stack and then switches based on the type of the object it is dealing with.

We're calling the list type, so the object passed along to call_function is PyList_Type. CPython now has to call a generic function, _PyObject_FastCallKeywords, that handles any callable object – yay, more function calls.

This function again makes some checks for certain function types (for reasons I cannot understand) and then, after creating a dict for kwargs if required, goes on to call _PyObject_FastCallDict.

_PyObject_FastCallDict finally gets us somewhere! After performing even more checks it grabs the tp_call slot from the type of the type we've passed in, that is, it grabs type.tp_call. It then proceeds to create a tuple out of the arguments passed in with _PyStack_AsTuple and, finally, a call can be made!

tp_call, which matches type.__call__, takes over and finally creates the list object. It calls the list’s __new__, which corresponds to PyType_GenericNew, and allocates memory for it with PyType_GenericAlloc: this is actually the part where it finally catches up with PyList_New. All the previous steps are necessary to handle objects in a generic fashion.

In the end, type_call calls list.__init__ and initializes the list with any available arguments, then we return back the way we came. :-)

Finally, remember LOAD_NAME, that’s another guy that contributes here.


It’s easy to see that, when dealing with our input, Python generally has to jump through hoops in order to actually find the appropriate C function to do the job. It doesn’t have the courtesy of immediately calling it because it’s dynamic, someone might mask list (and boy do many people do) and another path must be taken.

This is where list() loses much: The exploring Python needs to do to find out what the heck it should do.

Literal syntax, on the other hand, means exactly one thing; it cannot be changed and always behaves in a pre-determined way.

Footnote: All function names are subject to change from one release to the other. The point still stands and most likely will stand in any future versions, it’s the dynamic look-up that slows things down.
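A quick way to see the gap yourself — just a rough sketch, not a rigorous benchmark; the absolute numbers vary by machine and CPython version, only the ratio matters:

import timeit

# The literal compiles to a single BUILD_LIST opcode.
print(timeit.timeit('[]', number=10000000))

# The call has to LOAD_NAME 'list' and then go through the generic call machinery.
print(timeit.timeit('list()', number=10000000))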


回答 4

为什么[]要比list()

最大的原因是Python把list()当作用户定义的函数一样对待,这意味着您可以通过把别的东西别名为list来拦截它,从而做一些不同的事情(例如使用您自己的列表子类,或者一个双端队列)。

而使用[]时,它会立即创建一个新的内置列表实例。

我的解释旨在为您提供直觉。

说明

[] 通常称为文字语法。

在语法中,这称为“列表显示”。从文档

列表显示是括在方括号中的一系列可能为空的表达式:

list_display ::=  "[" [starred_list | comprehension] "]"

列表显示将产生一个新的列表对象,其内容由表达式列表或理解列表指定。提供逗号分隔的表达式列表时,将按从左到右的顺序评估其元素,并将其按此顺序放入列表对象中。提供理解后,将根据理解产生的元素来构建列表。

简而言之,这意味着将list创建一个内置类型的对象。

不能回避这一点-这意味着Python可以尽快完成它。

另一方面,list()可以被拦截,从而不再使用内置的列表构造函数来创建内置的list对象。

例如,假设我们希望创建噪音较大的列表:

class List(list):
    def __init__(self, iterable=None):
        if iterable is None:
            super().__init__()
        else:
            super().__init__(iterable)
        print('List initialized.')

然后,我们可以在模块级的全局作用域中截取list这个名称,这样当我们创建list时,实际创建的就是我们的子类列表:

>>> list = List
>>> a_list = list()
List initialized.
>>> type(a_list)
<class '__main__.List'>

同样,我们可以将其从全局命名空间中删除

del list

并将其放在内置命名空间中:

import builtins
builtins.list = List

现在:

>>> list_0 = list()
List initialized.
>>> type(list_0)
<class '__main__.List'>

并注意列表显示无条件创建列表:

>>> list_1 = []
>>> type(list_1)
<class 'list'>

我们可能只是临时这样做,所以让我们撤销这些更改——首先从builtins中删除新的List对象:

>>> del builtins.list
>>> builtins.list
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'builtins' has no attribute 'list'
>>> list()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'list' is not defined

哦,不,我们失去了原来的踪迹。

不用担心,我们仍然可以得到list-它是列表文字的类型:

>>> builtins.list = type([])
>>> list()
[]

所以…

为什么[]要比list()

如我们所见——我们可以覆盖list——但我们无法拦截字面量形式的创建。使用list时,我们必须先进行名称查找,看看那里绑定的是什么。

然后,我们必须调用已查找的任何可调用对象。从语法上:

调用使用一系列可能为空的参数来调用可调用对象(例如,函数):

call                 ::=  primary "(" [argument_list [","] | comprehension] ")"

我们可以看到它对任何名称都具有相同的作用,而不仅仅是列表:

>>> import dis
>>> dis.dis('list()')
  1           0 LOAD_NAME                0 (list)
              2 CALL_FUNCTION            0
              4 RETURN_VALUE
>>> dis.dis('doesnotexist()')
  1           0 LOAD_NAME                0 (doesnotexist)
              2 CALL_FUNCTION            0
              4 RETURN_VALUE

而对于[],在Python字节码级别没有任何函数调用:

>>> dis.dis('[]')
  1           0 BUILD_LIST               0
              2 RETURN_VALUE

它只是直接建立列表而无需在字节码级别进行任何查找或调用。

结论

我们已经证明,利用作用域规则,用户代码可以拦截list;而list()要先查找一个可调用对象,然后再调用它。

而[]是列表显示(即字面量),因此避免了名称查找和函数调用。

Why is [] faster than list()?

The biggest reason is that Python treats list() just like a user-defined function, which means you can intercept it by aliasing something else to list and do something different (like use your own subclassed list or perhaps a deque).

It immediately creates a new instance of a builtin list with [].

My explanation seeks to give you the intuition for this.

Explanation

[] is commonly known as literal syntax.

In the grammar, this is referred to as a “list display”. From the docs:

A list display is a possibly empty series of expressions enclosed in square brackets:

list_display ::=  "[" [starred_list | comprehension] "]"

A list display yields a new list object, the contents being specified by either a list of expressions or a comprehension. When a comma-separated list of expressions is supplied, its elements are evaluated from left to right and placed into the list object in that order. When a comprehension is supplied, the list is constructed from the elements resulting from the comprehension.

In short, this means that a builtin object of type list is created.

There is no circumventing this – which means Python can do it as quickly as it may.

On the other hand, list() can be intercepted, so that it no longer creates a builtin list using the builtin list constructor.

For example, say we want our lists to be created noisily:

class List(list):
    def __init__(self, iterable=None):
        if iterable is None:
            super().__init__()
        else:
            super().__init__(iterable)
        print('List initialized.')

We could then intercept the name list on the module level global scope, and then when we create a list, we actually create our subtyped list:

>>> list = List
>>> a_list = list()
List initialized.
>>> type(a_list)
<class '__main__.List'>

Similarly we could remove it from the global namespace

del list

and put it in the builtin namespace:

import builtins
builtins.list = List

And now:

>>> list_0 = list()
List initialized.
>>> type(list_0)
<class '__main__.List'>

And note that the list display creates a list unconditionally:

>>> list_1 = []
>>> type(list_1)
<class 'list'>

We probably only do this temporarily, so let’s undo our changes – first remove the new List object from the builtins:

>>> del builtins.list
>>> builtins.list
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'builtins' has no attribute 'list'
>>> list()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'list' is not defined

Oh, no, we lost track of the original.

Not to worry, we can still get list – it’s the type of a list literal:

>>> builtins.list = type([])
>>> list()
[]

So…

Why is [] faster than list()?

As we’ve seen – we can overwrite list – but we can’t intercept the creation of the literal type. When we use list we have to do the lookups to see if anything is there.

Then we have to call whatever callable we have looked up. From the grammar:

A call calls a callable object (e.g., a function) with a possibly empty series of arguments:

call                 ::=  primary "(" [argument_list [","] | comprehension] ")"

We can see that it does the same thing for any name, not just list:

>>> import dis
>>> dis.dis('list()')
  1           0 LOAD_NAME                0 (list)
              2 CALL_FUNCTION            0
              4 RETURN_VALUE
>>> dis.dis('doesnotexist()')
  1           0 LOAD_NAME                0 (doesnotexist)
              2 CALL_FUNCTION            0
              4 RETURN_VALUE

For [] there is no function call at the Python bytecode level:

>>> dis.dis('[]')
  1           0 BUILD_LIST               0
              2 RETURN_VALUE

It simply goes straight to building the list without any lookups or calls at the bytecode level.

Conclusion

We have demonstrated that list can be intercepted with user code using the scoping rules, and that list() looks for a callable and then calls it.

Whereas [] is a list display, or a literal, and thus avoids the name lookup and function call.


如何使Python脚本独立可执行文件在没有任何依赖的情况下运行?

问题:如何使Python脚本独立可执行文件在没有任何依赖的情况下运行?

我正在构建一个Python应用程序,不想强制我的客户端安装Python和模块。

因此,有没有办法将Python脚本编译为独立的可执行文件?

I’m building a Python application and don’t want to force my clients to install Python and modules.

So, is there a way to compile a Python script to be a standalone executable?


回答 0

您可以像前面的回答那样使用py2exe,并用cython把您关键的.py文件转换为.pyc,即C编译出的文件(类似Windows下的.dll和Linux下的.so),它们比普通的.pyo和.pyc文件更难被逆向(而且还能提升性能!)

You can use py2exe as already answered and use cython to convert your key .py files in .pyc, C compiled files, like .dll in Windows and .so in linux, much harder to revert than common .pyo and .pyc files (and also gain in performance!)


回答 1

您可以使用PyInstaller将Python程序打包为独立的可执行文件。它适用于Windows,Linux和Mac。

PyInstaller快速入门

从PyPI安装PyInstaller:

pip install pyinstaller

转到程序的目录并运行:

pyinstaller yourprogram.py

这将在名为的子目录中生成捆绑包dist

有关更详细的演练,请参见手册

You can use PyInstaller to package Python programs as standalone executables. It works on Windows, Linux, and Mac.

PyInstaller Quickstart

Install PyInstaller from PyPI:

pip install pyinstaller

Go to your program’s directory and run:

pyinstaller yourprogram.py

This will generate the bundle in a subdirectory called dist.

For a more detailed walkthrough, see the manual.
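If you would rather drive PyInstaller from Python than from the command line, it also exposes a programmatic entry point. A minimal sketch (yourprogram.py stands in for your own script; available options may differ between PyInstaller versions):

import PyInstaller.__main__

# Equivalent to running: pyinstaller --onefile yourprogram.py
PyInstaller.__main__.run([
    'yourprogram.py',
    '--onefile',
])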


回答 2

您可能希望调查 Nuitka。它需要python源代码并将其转换为C ++ API调用。然后将其编译为可执行二进制文件(在Linux上为ELF)。它已经存在了几年,并且支持多种Python版本。

如果使用它,您可能还会获得性能上的改进。推荐的。

You might wish to investigate Nuitka. It takes python source code and converts it in to C++ API calls. Then it compiles into an executable binary (ELF on Linux). It has been around for a few years now and supports a wide range of Python versions.

You will probably also get a performance improvement if you use it. Recommended.


回答 3

我想汇总一些有关使用Python 2.7在Windows上创建独立可执行文件的有用信息。

我已经使用了py2exe,并且可以使用,但是我遇到了一些问题。

这最后一个原因使我尝试使用PyInstaller http://www.pyinstaller.org/

我认为,这样做更好,因为:

  • 它更易于使用。

我建议例如使用以下几行创建一个.bat文件(pyinstaller.exe必须在Windows路径中):

pyinstaller.exe --onefile MyCode.py

因此,我认为,至少对于python 2.7,更好和更简单的选择是PyInstaller。

I would like to compile some useful information about creating standalone files on windows using Python 2.7.

I have used py2exe and it works, but I had some problems.

This last reason made me try PyInstaller http://www.pyinstaller.org/ .

In my opinion, it is much better because:

  • It is easier to use.

I suggest creating a .bat file with the following lines for example (pyinstaller.exe must be in Windows Path):

pyinstaller.exe --onefile MyCode.py

So, I think that, at least for python 2.7, a better and simpler option is PyInstaller.


回答 4

是的,可以将Python脚本编译成独立的可执行文件。

WindowsLinuxMac OS XFreeBSDSolarisAIX下,可以使用PyInstaller将Python程序转换为独立的可执行文件。它是推荐的转换器之一。

py2exe将Python脚本转换为仅Windows平台上的可执行文件。

Cython是用于Python编程语言和扩展的Cython编程语言的静态编译器。

Yes, it is possible to compile Python scripts into standalone executable.

PyInstaller can be used to convert Python programs into stand-alone executables, under Windows, Linux, Mac OS X, FreeBSD, Solaris and AIX. It is one of the recommended converters.

py2exe converts Python scripts into only executable in Windows platform.

Cython is a static compiler for both the Python programming language and the extended Cython programming language.


回答 5

第三个选择是cx_Freeze,它是跨平台的。

And a third option is cx_Freeze, which is cross-platform.
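As a rough idea of what a cx_Freeze build script looks like (myscript.py and myapp are placeholders, not names from the question; cx_Freeze offers many more options for icons, includes and excludes), save the sketch below as setup.py and run python setup.py build to get the executable in a build/ directory:

from cx_Freeze import setup, Executable

setup(
    name='myapp',                             # hypothetical project name
    version='0.1',
    description='Example standalone build',
    executables=[Executable('myscript.py')],  # your entry-point script
)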


回答 6

您可能喜欢py2exe。您还将在那里找到在Linux上进行此操作的信息

you may like py2exe. you’ll also find in there infos for doing it on linux


回答 7

使用py2exe…。使用以下设置文件:

 from distutils.core import setup
 import py2exe

 from distutils.filelist import findall
 import matplotlib

 setup(
       console=['PlotMemInfo.py'],

       options={
                'py2exe': {
                'packages' : ['matplotlib'],
            'dll_excludes': ['libgdk-win32-2.0-0.dll',
                                 'libgobject-2.0-0.dll',
                 'libgdk_pixbuf-2.0-0.dll']
                          }
                },
       data_files = matplotlib.get_py2exe_datafiles()
     )

Use py2exe…. use below set up files:

 from distutils.core import setup
 import py2exe

 from distutils.filelist import findall
 import matplotlib

 setup(
       console=['PlotMemInfo.py'],

       options={
                'py2exe': {
                'packages' : ['matplotlib'],
            'dll_excludes': ['libgdk-win32-2.0-0.dll',
                                 'libgobject-2.0-0.dll',
                 'libgdk_pixbuf-2.0-0.dll']
                          }
                },
       data_files = matplotlib.get_py2exe_datafiles()
     )

回答 8

我喜欢pyinstaller-尤其是“窗口式”变体:

pyinstaller --onefile --windowed myscript.py

它将在dist /文件夹中创建一个单个* .exe文件。

I like pyinstaller – especially the “windowed” variant:

pyinstaller --onefile --windowed myscript.py

It will create one single *.exe file in a dist/ folder.


回答 9

我还建议使用 pyinstaller 以获得更好的向后兼容性,例如python 2.3-2.7
对于py2exe,您必须具有python 2.6

I also recommend pyinstaller for better backward compatibility such as python 2.3 – 2.7.
for py2exe, you have to have python 2.6


回答 10

对于Python 3.2脚本,唯一的选择是Cxfreeze。从源代码构建它,否则它将不起作用。

对于python 2.x,我建议使用pyinstaller,因为它可以将python程序打包在单个可执行文件中,这与CxFreeze也输出库的方式不同。

For Python 3.2 scripts the only choice is Cxfreeze. Build it from sources otherwise it won’t work.

For python 2.x I suggest pyinstaller as it can package a python program in a single executable, unlike CxFreeze which outputs also libraries.


回答 11

py2exe将生成您想要的exe文件,但您需要在要使用新exe的计算机上具有相同版本的MSVCR90.dll。有关更多信息,请参见http://www.py2exe.org/index.cgi/Tutorial

py2exe will make the exe file you want but you need to have the same version of MSVCR90.dll on the machine you’re going to use your new exe. See http://www.py2exe.org/index.cgi/Tutorial for more info.


回答 12

您可以在 https://wiki.python.org/moin/DistributionUtilities 找到分发工具的列表。

我使用bbfreeze,它一直运行良好(不过尚不支持python 3)。

You can find the list of distribution utilities listed @ https://wiki.python.org/moin/DistributionUtilities.

I use bbfreeze and it has been working very well (yet to have python 3 support though).


回答 13

pyinstaller yourfile.py -F --onefile

这将在Windows中创建一个独立的EXE文件。重要说明:该exe文件将在名为“ dist”的文件夹中生成。

您可以使用安装pyinstaller pip install PyInstaller

pyinstaller yourfile.py -F --onefile

This creates a standalone EXE file in Windows. IMPORTANT NOTE: The exe file will be generated in a folder named ‘dist’.

you can install pyinstaller using pip install PyInstaller


回答 14

这不完全是对python代码的打包,但现在google也有grumpy这个项目,它会把代码转译为Go。它不支持python C api,因此可能不适用于所有项目。

Not exactly a packaging of the python code, but there is now also grumpy from google, which transpiles the code to Go. It doesn’t support the python C api, so it may not work for all projects.


回答 15

使用pyinstaller时,我发现了一个更好的方法:为生成的.exe创建快捷方式,而不是使用--onefile。无论如何,程序周围可能都有一些数据文件,如果您运行的是基于网站的应用,那么您的程序还会依赖html、js、css文件。把所有这些文件搬来搬去没有意义……相反,不如把工作路径往上移一层。

为exe创建一个快捷方式,把它移到顶层,并按如下方式设置目标路径和起始位置,使相对路径指向dist\文件夹: Target: %windir%\system32\cmd.exe /c start dist\web_wrapper\web_wrapper.exe Start in: "%windir%\system32\cmd.exe /c start dist\web_wrapper\" 快捷方式可以重命名为任何名字,这里重命名为“GTFS-Manager”
现在,当我双击这个快捷方式时,就像我用python运行了该文件一样!我发现这种方法比--onefile更好,原因如下:

  1. 在onefile的情况下,Win7系统存在缺少某个.dll的问题,需要事先安装等等。而使用带多个文件的常规构建,就没有这类问题。
  2. 我的python脚本用到的所有文件(它部署了一个tornado Web服务器,需要一整个网站的文件都在原地!)都不需要移动到任何地方:我只是在顶层创建了快捷方式。
  3. 我实际上可以在ubuntu(运行python3 myfile.py)和Windows(双击快捷方式)中使用完全相同的文件夹。
  4. 我不需要为了包含数据文件等,去折腾过于复杂的.spec文件黑客手段。

哦,切记在构建后删除构建文件夹,这样可以节省大小。

Using pyinstaller, I found a better method using shortcut to the .exe rather than making --onefile. Anyways there’s probably some data files around and if you’re running a site-based app then your program depends on html, js, css files too. No point in moving all these files somewhere.. instead what if we move the working path up.

Make a shortcut to the exe, move it at top and set the target and start-in paths as specified, to have relative paths going to dist\folder: Target: %windir%\system32\cmd.exe /c start dist\web_wrapper\web_wrapper.exe Start in: "%windir%\system32\cmd.exe /c start dist\web_wrapper\" Can rename shortcut to anything so renaming to “GTFS-Manager”
Now when I double-click the shortcut, it’s as if I python-ran the file! I found this approach better than the --onefile one as:

  1. In onefile’s case, there’s a problem with a .dll missing for win7 OS which needs some prior installation etc. Yawn. With the usual build with multiple files, no such issues.
  2. All the files that my python script uses (it’s deploying a tornado web server and needs a whole freakin’ website worth of files to be there!) don’t need to be moved anywhere: I simply create the shortcut at top.
  3. I can actually use this exact same folder in ubuntu (run python3 myfile.py) and windows (double-click the shortcut).
  4. I don’t need to bother with the overly complicated hacking of .spec file to include data files etc.

Oh, remember to delete off the build folder after building, will save on size.


回答 16

使用Cython转换为c,编译并与gcc链接。另一个可能是,在c中创建核心功能(要使其难以逆转的功能),对其进行编译,然后使用python boost导入已编译的代码(此外,您还将获得更快的代码执行速度)。然后使用提到的任何工具进行分发。

Use Cython to convert to c, compile and link with gcc. Another could be, make the core functions in c (the ones you want to make hard to reverse), compile them and use python boost to import the compiled code ( plus you get a much faster code execution). then use any tool mentioned to distribute.
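A minimal sketch of the Cython route mentioned above (core.py is a hypothetical module you want to compile; requires Cython and a C compiler), built with python setup.py build_ext --inplace:

from setuptools import setup
from Cython.Build import cythonize

setup(
    # Compiles core.py to C and then to a native extension (.so / .pyd).
    ext_modules=cythonize('core.py'),
)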


回答 17

有人告诉我PyRun,https://www.egenix.com/products/python/PyRun/ 也是一种选择。


如何删除使用Python的easy_install安装的软件包?

问题:如何删除使用Python的easy_install安装的软件包?

Python easy_install使安装新软件包非常方便。但是,据我所知,它没有实现依赖项管理器的其他常见功能-列出和删除已安装的软件包。

找出已安装的软件包的最佳方法是什么,以及删除已安装软件包的首选方法是什么?如果我手动(例如,通过rm /usr/local/lib/python2.6/dist-packages/my_installed_pkg.egg类似方式)删除软件包,是否需要更新任何文件?

Python’s easy_install makes installing new packages extremely convenient. However, as far as I can tell, it doesn’t implement the other common features of a dependency manager – listing and removing installed packages.

What is the best way of finding out what’s installed, and what is the preferred way of removing installed packages? Are there any files that need to be updated if I remove packages manually (e.g. by rm /usr/local/lib/python2.6/dist-packages/my_installed_pkg.egg or similar)?


回答 0

pip是setuptools / easy_install的替代产品,提供了“卸载”命令。

根据安装说明安装pip :

$ wget https://bootstrap.pypa.io/get-pip.py
$ python get-pip.py

然后,您可以使用pip uninstall删除用easy_install安装的软件包

pip, an alternative to setuptools/easy_install, provides an “uninstall” command.

Install pip according to the installation instructions:

$ wget https://bootstrap.pypa.io/get-pip.py
$ python get-pip.py

Then you can use pip uninstall to remove packages installed with easy_install


回答 1

要卸载一个.egg,您需要rm -rf该egg(它可能是一个目录),并从site-packages/easy-install.pth中删除匹配的那一行

To uninstall an .egg you need to rm -rf the egg (it might be a directory) and remove the matching line from site-packages/easy-install.pth


回答 2

首先,您必须运行以下命令:

$ easy_install -m [PACKAGE]

它删除了程序包的所有依赖项。

然后删除该软件包的egg文件:

$ sudo rm -rf /usr/local/lib/python2.X/site-packages/[PACKAGE].egg

First you have to run this command:

$ easy_install -m [PACKAGE]

It removes all dependencies of the package.

Then remove egg file of that package:

$ sudo rm -rf /usr/local/lib/python2.X/site-packages/[PACKAGE].egg

回答 3

所有信息都在其他答案中,但没有一个同时概括了您的两个需求,或者都显得不必要地复杂:

  • 对于删除软件包的需求,请使用:

    pip uninstall <package>

    (可通过easy_install pip安装)

  • 对于“列出已安装的软件包”,需要使用以下任一方法:

    pip freeze

    要么:

    yolk -l

    它可以输出更多的软件包详细信息。

    (通过easy_install yolk或pip install yolk安装)

All the info is in the other answers, but none summarizes both your requests or seem to make things needlessly complex:

  • For your removal needs use:

    pip uninstall <package>
    

    (install using easy_install pip)

  • For your ‘list installed packages’ needs either use:

    pip freeze
    

    Or:

    yolk -l
    

    which can output more package details.

    (Install via easy_install yolk or pip install yolk)


回答 4

网上有多个来源建议了一种变通做法:先用-m选项重新安装该软件包,然后只删除lib/中的.egg文件和bin/中的二进制文件。另外,关于setuptools这个问题的讨论可以在python bug跟踪器上找到,即setuptools issue 21。

编辑:将链接添加到python bugtracker。

There are several sources on the net suggesting a hack by reinstalling the package with the -m option and then just removing the .egg file in lib/ and the binaries in bin/. Also, discussion about this setuptools issue can be found on the python bug tracker as setuptools issue 21.

Edit: Added the link to the python bugtracker.


回答 5

如果问题严重困扰您,您可以考虑使用virtualenv。它允许您创建一个封装python库的环境。您在此处而不是在全局site-packages目录中安装软件包。您在该环境中运行的所有脚本都可以访问这些程序包(也可以选择全局程序包)。在评估不确定/是否需要全局安装的软件包时,我经常使用此工具。如果您决定不需要该软件包,那么将虚拟环境吹走就很容易了。它很容易使用。制作一个新的环境:

$>virtualenv /path/to/your/new/ENV

virtualenv会在新环境中为您安装setuptools,因此您可以执行以下操作:

$>ENV/bin/easy_install

您甚至可以创建自己的bootstrap脚本来设置新环境。这样,只用一条命令,您就可以创建一个新的虚拟环境,例如默认安装了python 2.6、psycopg2和django(如果需要,您还可以安装特定于该环境的python版本)。

If the problem is a serious-enough annoyance to you, you might consider virtualenv. It allows you to create an environment that encapsulates python libraries. You install packages there rather than in the global site-packages directory. Any scripts you run in that environment have access to those packages (and optionally, your global ones as well). I use this a lot when evaluating packages that I am not sure I want/need to install globally. If you decide you don’t need the package, it’s easy enough to just blow that virtual environment away. It’s pretty easy to use. Make a new env:

$>virtualenv /path/to/your/new/ENV

virtualenv installs setuptools for you in the new environment, so you can do:

$>ENV/bin/easy_install

You can even create your own bootstrap scripts that set up your new environment. So, with one command, you can create a new virtual env with, say, python 2.6, psycopg2 and django installed by default (you can install an env-specific version of python if you want).


回答 6

官方说明(?):http://peak.telecommunity.com/DevCenter/EasyInstall#uninstalling-packages

如果您用其他版本替换了软件包,则可以通过删除PackageName-versioninfo.egg文件或目录(位于安装目录中)来删除不需要的软件包。

如果要删除软件包的当前安装版本(或软件包的所有版本),则应首先运行:

easy_install -mxN PackageName

这样可以确保Python不会继续搜索您打算删除的软件包。完成此操作后,您可以安全地删除.egg文件或目录以及要删除的所有脚本。

Official(?) instructions: http://peak.telecommunity.com/DevCenter/EasyInstall#uninstalling-packages

If you have replaced a package with another version, then you can just delete the package(s) you don’t need by deleting the PackageName-versioninfo.egg file or directory (found in the installation directory).

If you want to delete the currently installed version of a package (or all versions of a package), you should first run:

easy_install -mxN PackageName

This will ensure that Python doesn’t continue to search for a package you’re planning to remove. After you’ve done this, you can safely delete the .egg files or directories, along with any scripts you wish to remove.


回答 7

尝试

$ easy_install -m [PACKAGE]

然后

$ rm -rf .../python2.X/site-packages/[PACKAGE].egg

try

$ easy_install -m [PACKAGE]

then

$ rm -rf .../python2.X/site-packages/[PACKAGE].egg

回答 8

要列出已安装的Python软件包,可以使用yolk -l。不过,您需要先使用easy_install yolk

To list installed Python packages, you can use yolk -l. You’ll need to use easy_install yolk first though.


回答 9

在尝试卸载随时间推移而安装的许多随机Python软件包时遇到了这个问题。

使用此线程中的信息,这是我想到的:

cat package_list | xargs -n1 sudo pip uninstall -y

package_list是在virtualenv中从pip freeze的输出经过(awk)清理得到的。

要删除几乎所有的Python软件包:

yolk -l | cut -f 1 -d " " | grep -v "setuptools|pip|ETC.." | xargs -n1 pip uninstall -y

Came across this question, while trying to uninstall the many random Python packages installed over time.

Using information from this thread, this is what I came up with:

cat package_list | xargs -n1 sudo pip uninstall -y

The package_list is cleaned up (awk) from a pip freeze in a virtualenv.

To remove almost all Python packages:

yolk -l | cut -f 1 -d " " | grep -v "setuptools|pip|ETC.." | xargs -n1 pip uninstall -y

回答 10

我在MacOS X Leopard 10.6.blah上遇到了同样的问题。

解决方案是确保您正在调用MacPorts Python:

sudo port install python26
sudo port install python_select
sudo python_select python26
sudo port install py26-mysql

希望这可以帮助。

I ran into the same problem on my MacOS X Leopard 10.6.blah.

Solution is to make sure you’re calling the MacPorts Python:

sudo port install python26
sudo port install python_select
sudo python_select python26
sudo port install py26-mysql

Hope this helps.


回答 11

对我而言,仅删除easy-install.pth这个文件就起作用了,其余的只是重新pip install django==1.3.7

For me, only deleting this file worked: easy-install.pth; the rest was just pip install django==1.3.7


回答 12

这对我有用。它与先前的答案类似,但软件包所在的路径不同。

  1. sudo easy_install -m
  2. sudo rm -rf /Library/Python/2.7/site-packages/.egg

平台:MacOS High Sierra版本10.13.3

This worked for me. It’s similar to previous answers but the path to the packages is different.

  1. sudo easy_install -m
  2. sudo rm -rf /Library/Python/2.7/site-packages/.egg

Platform: MacOS High Sierra version 10.13.3


Python2中的dict.items()和dict.iteritems()有什么区别?

问题:Python2中的dict.items()和dict.iteritems()有什么区别?

dict.items()和dict.iteritems()之间有什么实际适用的区别吗?

Python文档

dict.items():返回字典的(键,值)对列表的副本

dict.iteritems():在字典的(键,值)对上返回迭代器

如果我运行下面的代码,每个似乎都返回对同一对象的引用。我缺少任何细微的差异吗?

#!/usr/bin/python

d={1:'one',2:'two',3:'three'}
print 'd.items():'
for k,v in d.items():
   if d[k] is v: print '\tthey are the same object' 
   else: print '\tthey are different'

print 'd.iteritems():'   
for k,v in d.iteritems():
   if d[k] is v: print '\tthey are the same object' 
   else: print '\tthey are different'   

输出:

d.items():
    they are the same object
    they are the same object
    they are the same object
d.iteritems():
    they are the same object
    they are the same object
    they are the same object

Are there any applicable differences between dict.items() and dict.iteritems()?

From the Python docs:

dict.items(): Return a copy of the dictionary’s list of (key, value) pairs.

dict.iteritems(): Return an iterator over the dictionary’s (key, value) pairs.

If I run the code below, each seems to return a reference to the same object. Are there any subtle differences that I am missing?

#!/usr/bin/python

d={1:'one',2:'two',3:'three'}
print 'd.items():'
for k,v in d.items():
   if d[k] is v: print '\tthey are the same object' 
   else: print '\tthey are different'

print 'd.iteritems():'   
for k,v in d.iteritems():
   if d[k] is v: print '\tthey are the same object' 
   else: print '\tthey are different'   

Output:

d.items():
    they are the same object
    they are the same object
    they are the same object
d.iteritems():
    they are the same object
    they are the same object
    they are the same object

回答 0

这是演变的一部分。

最初,Python items()构建了一个真正的元组列表,并将其返回。这可能会占用大量额外的内存。

然后,一般将生成器引入该语言,然后将该方法重新实现为名为的迭代器-生成器方法iteritems()。保留原始版本是为了向后兼容。

Python 3的更改之一是 items()现在返回迭代器,并且列表从未完全构建。该iteritems()方法也消失了,因为items()在Python 3中的工作方式与viewitems()在Python 2.7中一样。

It’s part of an evolution.

Originally, Python items() built a real list of tuples and returned that. That could potentially take a lot of extra memory.

Then, generators were introduced to the language in general, and that method was reimplemented as an iterator-generator method named iteritems(). The original remains for backwards compatibility.

One of Python 3’s changes is that items() now return iterators, and a list is never fully built. The iteritems() method is also gone, since items() in Python 3 works like viewitems() in Python 2.7.


回答 1

dict.items()返回由2元组组成的列表([(key, value), (key, value), ...]),而dict.iteritems()是一个生成2元组的生成器。前者最初占用更多的空间和时间,但之后访问每个元素很快;后者最初占用较少的空间和时间,但生成每个元素时会多花一点时间。

dict.items() returns a list of 2-tuples ([(key, value), (key, value), ...]), whereas dict.iteritems() is a generator that yields 2-tuples. The former takes more space and time initially, but accessing each element is fast, whereas the second takes less space and time initially, but a bit more time in generating each element.
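A rough way to see this difference yourself on Python 2 (exact sizes vary by build, but the list grows with the dict while the iterator stays tiny):

import sys

d = dict.fromkeys(range(100000))

print(type(d.items()), sys.getsizeof(d.items()))          # a real list, hundreds of KB
print(type(d.iteritems()), sys.getsizeof(d.iteritems()))  # an iterator, a few dozen bytes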


回答 2

在Py2.x中

dict.items()、dict.keys()和dict.values()这些命令返回字典的(k, v)对、键和值的列表的副本。如果被复制的列表非常大,这可能会占用大量内存。

dict.iteritems()、dict.iterkeys()和dict.itervalues()这些命令返回遍历字典的(k, v)对、键和值的迭代器。

dict.viewitems()、dict.viewkeys()和dict.viewvalues()这些命令返回视图对象,它们可以反映字典的变化。(也就是说,如果您在字典中del掉一项或添加一个(k,v)对,视图对象会同时自动更新。)

$ python2.7

>>> d = {'one':1, 'two':2}
>>> type(d.items())
<type 'list'>
>>> type(d.keys())
<type 'list'>
>>> 
>>> 
>>> type(d.iteritems())
<type 'dictionary-itemiterator'>
>>> type(d.iterkeys())
<type 'dictionary-keyiterator'>
>>> 
>>> 
>>> type(d.viewitems())
<type 'dict_items'>
>>> type(d.viewkeys())
<type 'dict_keys'>

在Py3.x中

在Py3.x中,事情更干净了,因为只有dict.items()、dict.keys()和dict.values()可用,它们返回的就是视图对象,与Py2.x中的dict.viewitems()一样。

但是,就像@lvc指出的那样,视图对象与迭代器并不相同,因此,如果要在Py3.x中得到一个迭代器,可以使用iter(dictview):

$ python3.3

>>> d = {'one':'1', 'two':'2'}
>>> type(d.items())
<class 'dict_items'>
>>>
>>> type(d.keys())
<class 'dict_keys'>
>>>
>>>
>>> ii = iter(d.items())
>>> type(ii)
<class 'dict_itemiterator'>
>>>
>>> ik = iter(d.keys())
>>> type(ik)
<class 'dict_keyiterator'>

In Py2.x

The commands dict.items(), dict.keys() and dict.values() return a copy of the dictionary’s list of (k, v) pair, keys and values. This could take a lot of memory if the copied list is very large.

The commands dict.iteritems(), dict.iterkeys() and dict.itervalues() return an iterator over the dictionary’s (k, v) pair, keys and values.

The commands dict.viewitems(), dict.viewkeys() and dict.viewvalues() return the view objects, which can reflect the dictionary’s changes. (I.e. if you del an item or add a (k,v) pair in the dictionary, the view object can automatically change at the same time.)

$ python2.7

>>> d = {'one':1, 'two':2}
>>> type(d.items())
<type 'list'>
>>> type(d.keys())
<type 'list'>
>>> 
>>> 
>>> type(d.iteritems())
<type 'dictionary-itemiterator'>
>>> type(d.iterkeys())
<type 'dictionary-keyiterator'>
>>> 
>>> 
>>> type(d.viewitems())
<type 'dict_items'>
>>> type(d.viewkeys())
<type 'dict_keys'>

While in Py3.x

In Py3.x, things are more clean, since there are only dict.items(), dict.keys() and dict.values() available, which return the view objects just as dict.viewitems() in Py2.x did.

But

Just as @lvc noted, view object isn’t the same as iterator, so if you want to return an iterator in Py3.x, you could use iter(dictview) :

$ python3.3

>>> d = {'one':'1', 'two':'2'}
>>> type(d.items())
<class 'dict_items'>
>>>
>>> type(d.keys())
<class 'dict_keys'>
>>>
>>>
>>> ii = iter(d.items())
>>> type(ii)
<class 'dict_itemiterator'>
>>>
>>> ik = iter(d.keys())
>>> type(ik)
<class 'dict_keyiterator'>
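A small sketch of the “view reflects changes” behaviour described above (Python 2.7 shown; on Python 3, d.items() already returns such a view):

d = {'one': 1}
items = d.viewitems()        # on Python 3: items = d.items()

d['two'] = 2                 # mutate the dict after creating the view
print(('two', 2) in items)   # True -- the view sees the new pair without being rebuilt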

回答 3

您问:“ dict.items()和dict.iteritems()之间是否有适用的区别”

这可能会有所帮助(对于Python 2.x):

>>> d={1:'one',2:'two',3:'three'}
>>> type(d.items())
<type 'list'>
>>> type(d.iteritems())
<type 'dictionary-itemiterator'>

您将看到d.items()返回键,值对的元组列表,并d.iteritems()返回一个字典迭代器。

清单d.items()是可切片的:

>>> l1=d.items()[0]
>>> l1
(1, 'one')   # an unordered value!

但是没有__iter__方法:

>>> next(d.items())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: list object is not an iterator

作为迭代器,d.iteritems()不可切片:

>>> i1=d.iteritems()[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'dictionary-itemiterator' object is not subscriptable

但是确实有__iter__

>>> next(d.iteritems())
(1, 'one')               # an unordered value!

因此,物品本身是相同的-运送物品的容器是不同的。一个是列表,另一个是迭代器(取决于Python版本…)

因此,dict.items()和dict.iteritems()之间的适用差异与列表和迭代器之间的适用差异相同。

You asked: ‘Are there any applicable differences between dict.items() and dict.iteritems()’

This may help (for Python 2.x):

>>> d={1:'one',2:'two',3:'three'}
>>> type(d.items())
<type 'list'>
>>> type(d.iteritems())
<type 'dictionary-itemiterator'>

You can see that d.items() returns a list of tuples of the key, value pairs and d.iteritems() returns a dictionary-itemiterator.

As a list, d.items() is slice-able:

>>> l1=d.items()[0]
>>> l1
(1, 'one')   # an unordered value!

But would not have an __iter__ method:

>>> next(d.items())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: list object is not an iterator

As an iterator, d.iteritems() is not slice-able:

>>> i1=d.iteritems()[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'dictionary-itemiterator' object is not subscriptable

But does have __iter__:

>>> next(d.iteritems())
(1, 'one')               # an unordered value!

So the items themselves are the same — the container delivering the items is different. One is a list, the other an iterator (depending on the Python version…)

So the applicable differences between dict.items() and dict.iteritems() are the same as the applicable differences between a list and an iterator.


回答 4

dict.items()返回元组的列表,而dict.iteritems()返回遍历字典中(key, value)元组的迭代器对象。元组是一样的,但容器不同。

dict.items()基本上将所有字典复制到列表中。尝试使用下面的代码的执行时间比较dict.items()dict.iteritems()。您将看到差异。

import timeit

d = {i:i*2 for i in xrange(10000000)}  
start = timeit.default_timer() #more memory intensive
for key,value in d.items():
    tmp = key + value #do something like print
t1 = timeit.default_timer() - start

start = timeit.default_timer()
for key,value in d.iteritems(): #less memory intensive
    tmp = key + value
t2 = timeit.default_timer() - start

在我的机器上输出:

Time with d.items(): 9.04773592949
Time with d.iteritems(): 2.17707300186

这清楚地表明这dictionary.iteritems()是非常有效的。

dict.items() return list of tuples, and dict.iteritems() return iterator object of tuple in dictionary as (key,value). The tuples are the same, but container is different.

dict.items() basically copies all dictionary into list. Try using following code to compare the execution times of the dict.items() and dict.iteritems(). You will see the difference.

import timeit

d = {i:i*2 for i in xrange(10000000)}  
start = timeit.default_timer() #more memory intensive
for key,value in d.items():
    tmp = key + value #do something like print
t1 = timeit.default_timer() - start

start = timeit.default_timer()
for key,value in d.iteritems(): #less memory intensive
    tmp = key + value
t2 = timeit.default_timer() - start

Output in my machine:

Time with d.items(): 9.04773592949
Time with d.iteritems(): 2.17707300186

This clearly shows that dictionary.iteritems() is much more efficient.


回答 5

如果你有

dict = {key1:value1, key2:value2, key3:value3,...}

Python 2中dict.items()复制每个元组并返回字典中的元组列表,即[(key1,value1), (key2,value2), ...]。这意味着整个字典将被复制到包含元组的新列表中

dict = {i: i * 2 for i in xrange(10000000)}  
# Slow and memory hungry.
for key, value in dict.items():
    print(key,":",value)

dict.iteritems()返回字典项迭代器。返回的项的值也相同,即(key1,value1), (key2,value2), ...,但这不是列表。这只是字典项迭代器对象。这意味着更少的内存使用量(减少了50%)。

  • 列出为可变快照: d.items() -> list(d.items())
  • 迭代器对象: d.iteritems() -> iter(d.items())

元组是相同的。您比较了每个中的元组,因此您得到相同的元组。

dict = {i: i * 2 for i in xrange(10000000)}  
# More memory efficient.
for key, value in dict.iteritems():
    print(key,":",value)

Python 3中dict.items()返回迭代器对象。dict.iteritems()已删除,因此不再有问题。

If you have

dict = {key1:value1, key2:value2, key3:value3,...}

In Python 2, dict.items() copies each tuples and returns the list of tuples in dictionary i.e. [(key1,value1), (key2,value2), ...]. Implications are that the whole dictionary is copied to new list containing tuples

dict = {i: i * 2 for i in xrange(10000000)}  
# Slow and memory hungry.
for key, value in dict.items():
    print(key,":",value)

dict.iteritems() returns the dictionary item iterator. The value of the item returned is also the same i.e. (key1,value1), (key2,value2), ..., but this is not a list. This is only dictionary item iterator object. That means less memory usage (50% less).

  • Lists as mutable snapshots: d.items() -> list(d.items())
  • Iterator objects: d.iteritems() -> iter(d.items())

The tuples are the same. You compared tuples in each so you get same.

dict = {i: i * 2 for i in xrange(10000000)}  
# More memory efficient.
for key, value in dict.iteritems():
    print(key,":",value)

In Python 3, dict.items() returns iterator object. dict.iteritems() is removed so there is no more issue.


回答 6

dict.iteritems在Python3.x中已经不存在了,因此请使用iter(dict.items())来获得相同的输出和同样的内存分配方式

dict.iteritems is gone in Python3.x, so use iter(dict.items()) to get the same output and memory allocation


回答 7

如果您想要一种同时适用于Python 2和3的方式来迭代字典的项对,可以尝试如下操作:

DICT_ITER_ITEMS = (lambda d: d.iteritems()) if hasattr(dict, 'iteritems') else (lambda d: iter(d.items()))

像这样使用它:

for key, value in DICT_ITER_ITEMS(myDict):
    # Do something with 'key' and/or 'value'.

If you want a way to iterate the item pairs of a dictionary that works with both Python 2 and 3, try something like this:

DICT_ITER_ITEMS = (lambda d: d.iteritems()) if hasattr(dict, 'iteritems') else (lambda d: iter(d.items()))

Use it like this:

for key, value in DICT_ITER_ITEMS(myDict):
    # Do something with 'key' and/or 'value'.
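If you already depend on the six compatibility library, it ships an equivalent helper, so you do not have to roll your own (a sketch, assuming six is installed):

import six

my_dict = {'a': 1, 'b': 2}

# six.iteritems() calls dict.iteritems() on Python 2 and dict.items() on Python 3.
for key, value in six.iteritems(my_dict):
    print(key, value)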

回答 8

dict.iteritems():给您一个迭代器。您可以在循环外的其他模式中使用迭代器。

student = {"name": "Daniel", "student_id": 2222}

for key,value in student.items():
    print(key,value)

('student_id', 2222)
('name', 'Daniel')

for key,value in student.iteritems():
    print(key,value)

('student_id', 2222)
('name', 'Daniel')

studentIterator = student.iteritems()

print(studentIterator.next())
('student_id', 2222)

print(studentIterator.next())
('name', 'Daniel')

dict.iteritems(): gives you an iterator. You may use the iterator in other patterns outside of the loop.

student = {"name": "Daniel", "student_id": 2222}

for key,value in student.items():
    print(key,value)

('student_id', 2222)
('name', 'Daniel')

for key,value in student.iteritems():
    print(key,value)

('student_id', 2222)
('name', 'Daniel')

studentIterator = student.iteritems()

print(studentIterator.next())
('student_id', 2222)

print(studentIterator.next())
('name', 'Daniel')

回答 9

python 2中的dict.iteritems()与python 3中的dict.items()等效。

dict.iteritems() in python 2 is equivalent to dict.items() in python 3.


在Python中以相反顺序遍历列表

问题:在Python中以相反顺序遍历列表

这样我就可以从len(collection)开始,在collection[0]结束。

我还希望能够访问循环索引。

So I can start from len(collection) and end in collection[0].

I also want to be able to access the loop index.


回答 0

使用内置reversed()功能:

>>> a = ["foo", "bar", "baz"]
>>> for i in reversed(a):
...     print(i)
... 
baz
bar
foo

要同时访问原始索引,请先在列表上使用enumerate(),然后将结果传递给reversed():

>>> for i, e in reversed(list(enumerate(a))):
...     print(i, e)
... 
2 baz
1 bar
0 foo

由于enumerate()返回的是生成器,而生成器不能被反转,因此需要先将其转换为list。

Use the built-in reversed() function:

>>> a = ["foo", "bar", "baz"]
>>> for i in reversed(a):
...     print(i)
... 
baz
bar
foo

To also access the original index, use enumerate() on your list before passing it to reversed():

>>> for i, e in reversed(list(enumerate(a))):
...     print(i, e)
... 
2 baz
1 bar
0 foo

Since enumerate() returns a generator and generators can’t be reversed, you need to convert it to a list first.


回答 1

你可以做:

for item in my_list[::-1]:
    print item

(或者您想要在for循环中执行的任何操作。)

[::-1]切片反转for循环在列表中(但不会实际修改列表的“永久”)。

You can do:

for item in my_list[::-1]:
    print item

(Or whatever you want to do in the for loop.)

The [::-1] slice reverses the list in the for loop (but won’t actually modify your list “permanently”).


回答 2

如果您需要循环索引,并且不想遍历整个列表两次,或者不想使用额外的内存,则可以编写一个生成器。

def reverse_enum(L):
   for index in reversed(xrange(len(L))):
      yield index, L[index]

L = ['foo', 'bar', 'bas']
for index, item in reverse_enum(L):
   print index, item

If you need the loop index, and don’t want to traverse the entire list twice, or use extra memory, I’d write a generator.

def reverse_enum(L):
   for index in reversed(xrange(len(L))):
      yield index, L[index]

L = ['foo', 'bar', 'bas']
for index, item in reverse_enum(L):
   print index, item

回答 3

可以这样完成:

for i in range(len(collection)-1, -1, -1):
    print collection[i]

    # python 3 请使用 print(collection[i])

因此,您的猜测非常接近 :) 虽然写法有点别扭,但它基本上是在说:从len(collection)减1开始,一直走到-1之前(不含-1),步长为-1。

Fyi,该help功能非常有用,因为它使您可以从Python控制台查看文档,例如:

help(range)

It can be done like this:

for i in range(len(collection)-1, -1, -1):
    print collection[i]

    # print(collection[i]) for python 3.

So your guess was pretty close :) A little awkward but it’s basically saying: start with 1 less than len(collection), keep going until you get to just before -1, by steps of -1.

Fyi, the help function is very useful as it lets you view the docs for something from the Python console, eg:

help(range)


回答 4

reversed内置功能非常方便:

for item in reversed(sequence):

文档的反向解释它的局限性。

对于必须带着索引反向遍历序列的情况(例如,进行会改变序列长度的就地修改),我在自己的codeutil模块中定义了这个函数:

import itertools
def reversed_enumerate(sequence):
    return itertools.izip(
        reversed(xrange(len(sequence))),
        reversed(sequence),
    )

这避免了创建序列的副本。显然,这些reversed限制仍然适用。

The reversed builtin function is handy:

for item in reversed(sequence):

The documentation for reversed explains its limitations.

For the cases where I have to walk a sequence in reverse along with the index (e.g. for in-place modifications changing the sequence length), I have this function defined in my codeutil module:

import itertools
def reversed_enumerate(sequence):
    return itertools.izip(
        reversed(xrange(len(sequence))),
        reversed(sequence),
    )

This one avoids creating a copy of the sequence. Obviously, the reversed limitations still apply.
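On Python 3, where izip and xrange no longer exist, an equivalent lazy helper could look like this sketch (zip and range are already lazy there, so no copy is made either):

def reversed_enumerate(sequence):
    return zip(reversed(range(len(sequence))), reversed(sequence))

for index, item in reversed_enumerate(['foo', 'bar', 'baz']):
    print(index, item)   # 2 baz / 1 bar / 0 foo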


回答 5

无需重新创建新列表,可以通过建立索引来实现:

>>> foo = ['1a','2b','3c','4d']
>>> for i in range(len(foo)):
...     print foo[-(i+1)]
...
4d
3c
2b
1a
>>>

要么

>>> length = len(foo)
>>> for i in range(length):
...     print foo[length-i-1]
...
4d
3c
2b
1a
>>>

How about this: without recreating a new list, you can do it by indexing:

>>> foo = ['1a','2b','3c','4d']
>>> for i in range(len(foo)):
...     print foo[-(i+1)]
...
4d
3c
2b
1a
>>>

OR

>>> length = len(foo)
>>> for i in range(length):
...     print foo[length-i-1]
...
4d
3c
2b
1a
>>>

回答 6

>>> l = ["a","b","c","d"]
>>> l.reverse()
>>> l
['d', 'c', 'b', 'a']

要么

>>> print l[::-1]
['d', 'c', 'b', 'a']
>>> l = ["a","b","c","d"]
>>> l.reverse()
>>> l
['d', 'c', 'b', 'a']

OR

>>> print l[::-1]
['d', 'c', 'b', 'a']

回答 7

我喜欢单行生成器的方法:

((i, sequence[i]) for i in reversed(xrange(len(sequence))))

I like the one-liner generator approach:

((i, sequence[i]) for i in reversed(xrange(len(sequence))))

回答 8

另外,您也可以使用range或count函数,如下:

a = ["foo", "bar", "baz"]
for i in range(len(a)-1, -1, -1):
    print(i, a[i])

3 baz
2 bar
1 foo

您还可以按以下方式使用itertools中的count:

a = ["foo", "bar", "baz"]
from itertools import count, takewhile

def larger_than_0(x):
    return x > 0

for x in takewhile(larger_than_0, count(3, -1)):
    print(x, a[x-1])

3 baz
2 bar
1 foo

Also, you could use either “range” or “count” functions. As follows:

a = ["foo", "bar", "baz"]
for i in range(len(a)-1, -1, -1):
    print(i, a[i])

3 baz
2 bar
1 foo

You could also use “count” from itertools as following:

a = ["foo", "bar", "baz"]
from itertools import count, takewhile

def larger_than_0(x):
    return x > 0

for x in takewhile(larger_than_0, count(3, -1)):
    print(x, a[x-1])

3 baz
2 bar
1 foo

回答 9

list.reverse()照常使用,然后进行迭代。

http://docs.python.org/tutorial/datastructures.html

Use list.reverse() and then iterate as you normally would.

http://docs.python.org/tutorial/datastructures.html


回答 10

没有导入的方法:

for i in range(1,len(arr)+1):
    print(arr[-i])

要么

for i in arr[::-1]:
    print(i)

An approach with no imports:

for i in range(1,len(arr)+1):
    print(arr[-i])

or

for i in arr[::-1]:
    print(i)

回答 11

def reverse(spam):
    k = []
    for i in spam:
        k.insert(0,i)
    return "".join(k)
def reverse(spam):
    k = []
    for i in spam:
        k.insert(0,i)
    return "".join(k)

回答 12

不管怎样,您也可以这样做,非常简单。

a = [1, 2, 3, 4, 5, 6, 7]
for x in xrange(len(a)):
    x += 1
    print a[-x]

For whatever it’s worth, you can do it like this too. Very simple.

a = [1, 2, 3, 4, 5, 6, 7]
for x in xrange(len(a)):
    x += 1
    print a[-x]

回答 13

在python 3中,一种富有表达力的实现reverse(enumerate(collection))的方式:

zip(reversed(range(len(collection))), reversed(collection))

在python 2:

izip(reversed(xrange(len(collection))), reversed(collection))

我不确定为什么我们没有快捷方式,例如:

def reversed_enumerate(collection):
    return zip(reversed(range(len(collection))), reversed(collection))

还是为什么我们没有 reversed_range()

An expressive way to achieve reverse(enumerate(collection)) in python 3:

zip(reversed(range(len(collection))), reversed(collection))

in python 2:

izip(reversed(xrange(len(collection))), reversed(collection))

I’m not sure why we don’t have a shorthand for this, eg.:

def reversed_enumerate(collection):
    return zip(reversed(range(len(collection))), reversed(collection))

or why we don’t have reversed_range()


回答 14

如果您需要索引并且列表很小,最易读的方法是像已接受的答案所说的那样使用reversed(list(enumerate(your_list)))。但这会创建列表的副本,因此,如果列表占用了很大一部分内存,您就得用len()-1减去enumerate(reversed())返回的索引。

如果您只需要执行一次:

a = ['b', 'd', 'c', 'a']

for index, value in enumerate(reversed(a)):
    index = len(a)-1 - index

    do_something(index, value)

或者,如果您需要多次执行此操作,则应使用生成器:

def enumerate_reversed(lyst):
    for index, value in enumerate(reversed(lyst)):
        index = len(lyst)-1 - index
        yield index, value

for index, value in enumerate_reversed(a):
    do_something(index, value)

If you need the index and your list is small, the most readable way is to do reversed(list(enumerate(your_list))) like the accepted answer says. But this creates a copy of your list, so if your list is taking up a large portion of your memory you’ll have to subtract the index returned by enumerate(reversed()) from len()-1.

If you just need to do it once:

a = ['b', 'd', 'c', 'a']

for index, value in enumerate(reversed(a)):
    index = len(a)-1 - index

    do_something(index, value)

or if you need to do this multiple times you should use a generator:

def enumerate_reversed(lyst):
    for index, value in enumerate(reversed(lyst)):
        index = len(lyst)-1 - index
        yield index, value

for index, value in enumerate_reversed(a):
    do_something(index, value)

回答 15

reverse函数在这里很方便:

myArray = [1,2,3,4]
myArray.reverse()
for x in myArray:
    print x

the reverse function comes in handy here:

myArray = [1,2,3,4]
myArray.reverse()
for x in myArray:
    print x

回答 16

您还可以使用while循环:

i = len(collection)-1
while i>=0:
    value = collection[i]
    index = i
    i-=1

You can also use a while loop:

i = len(collection)-1
while i>=0:
    value = collection[i]
    index = i
    i-=1

回答 17

您可以在普通的for循环中使用负索引:

>>> collection = ["ham", "spam", "eggs", "baked beans"]
>>> for i in range(1, len(collection) + 1):
...     print(collection[-i])
... 
baked beans
eggs
spam
ham

要访问索引,就好像您正在遍历集合的反向副本一样,请使用i - 1

>>> for i in range(1, len(collection) + 1):
...     print(i-1, collection[-i])
... 
0 baked beans
1 eggs
2 spam
3 ham

要访问原始的非反向索引,请使用len(collection) - i

>>> for i in range(1, len(collection) + 1):
...     print(len(collection)-i, collection[-i])
... 
3 baked beans
2 eggs
1 spam
0 ham

You can use a negative index in an ordinary for loop:

>>> collection = ["ham", "spam", "eggs", "baked beans"]
>>> for i in range(1, len(collection) + 1):
...     print(collection[-i])
... 
baked beans
eggs
spam
ham

To access the index as though you were iterating forward over a reversed copy of the collection, use i - 1:

>>> for i in range(1, len(collection) + 1):
...     print(i-1, collection[-i])
... 
0 baked beans
1 eggs
2 spam
3 ham

To access the original, un-reversed index, use len(collection) - i:

>>> for i in range(1, len(collection) + 1):
...     print(len(collection)-i, collection[-i])
... 
3 baked beans
2 eggs
1 spam
0 ham

回答 18

如果您不介意索引为负,则可以执行以下操作:

>>> a = ["foo", "bar", "baz"]
>>> for i in range(len(a)):
...     print(~i, a[~i])
-1 baz
-2 bar
-3 foo

If you don’t mind the index being negative, you can do:

>>> a = ["foo", "bar", "baz"]
>>> for i in range(len(a)):
...     print(~i, a[~i])
-1 baz
-2 bar
-3 foo

回答 19

我认为最优雅的方法是使用下面的生成器来组合enumerate和reversed:

(-(ri+1), val) for ri, val in enumerate(reversed(foo))

它会生成enumerate迭代器的逆序版本

例:

foo = [1,2,3]
bar = [3,6,9]
[
    bar[i] - val
    for i, val in ((-(ri+1), val) for ri, val in enumerate(reversed(foo)))
]

结果:

[6, 4, 2]

I think the most elegant way is to transform enumerate and reversed using the following generator

(-(ri+1), val) for ri, val in enumerate(reversed(foo))

which generates the reverse of the enumerate iterator

Example:

foo = [1,2,3]
bar = [3,6,9]
[
    bar[i] - val
    for i, val in ((-(ri+1), val) for ri, val in enumerate(reversed(foo)))
]

Result:

[6, 4, 2]

回答 20

其他答案都很好,但如果您想用列表推导式的风格来写:

collection = ['a','b','c']
[item for item in reversed( collection ) ]

The other answers are good, but if you want to do it in a list-comprehension style:

collection = ['a','b','c']
[item for item in reversed( collection ) ]

回答 21

要使用负索引,请执行以下操作:从-1开始,然后在每次迭代后退-1。

>>> a = ["foo", "bar", "baz"]
>>> for i in range(-1, -1*(len(a)+1), -1):
...     print i, a[i]
... 
-1 baz
-2 bar
-3 foo

To use negative indices: start at -1 and step back by -1 at each iteration.

>>> a = ["foo", "bar", "baz"]
>>> for i in range(-1, -1*(len(a)+1), -1):
...     print i, a[i]
... 
-1 baz
-2 bar
-3 foo

回答 22

一个简单的方法:

n = int(input())
arr = list(map(int, input().split()))

for i in reversed(range(0, n)):
    print("%d %d" %(i, arr[i]))

A simple way :

n = int(input())
arr = list(map(int, input().split()))

for i in reversed(range(0, n)):
    print("%d %d" %(i, arr[i]))

回答 23

input_list = ['foo','bar','baz']
for i in range(-1,-len(input_list)-1,-1):
    print(input_list[i])

我认为这也是一种简单的做法……从末尾开始读取,并不断递减,直到列表的长度;由于“末尾”那个索引永远不会被执行到,所以还额外加上了-1

input_list = ['foo','bar','baz']
for i in range(-1,-len(input_list)-1,-1):
    print(input_list[i])

I think this one is also a simple way to do it… read from the end and keep decrementing till the length of the list; since we never execute the “end” index, -1 is added as well


回答 24

假设任务是在列表中找到满足某些条件的最后一个元素(即向后看时的第一个元素),我得到以下数字:

>>> min(timeit.repeat('for i in xrange(len(xs)-1,-1,-1):\n    if 128 == xs[i]: break', setup='xs, n = range(256), 0', repeat=8))
4.6937971115112305
>>> min(timeit.repeat('for i in reversed(xrange(0, len(xs))):\n    if 128 == xs[i]: break', setup='xs, n = range(256), 0', repeat=8))
4.809093952178955
>>> min(timeit.repeat('for i, x in enumerate(reversed(xs), 1):\n    if 128 == x: break', setup='xs, n = range(256), 0', repeat=8))
4.931743860244751
>>> min(timeit.repeat('for i, x in enumerate(xs[::-1]):\n    if 128 == x: break', setup='xs, n = range(256), 0', repeat=8))
5.548468112945557
>>> min(timeit.repeat('for i in xrange(len(xs), 0, -1):\n    if 128 == xs[i - 1]: break', setup='xs, n = range(256), 0', repeat=8))
6.286104917526245
>>> min(timeit.repeat('i = len(xs)\nwhile 0 < i:\n    i -= 1\n    if 128 == xs[i]: break', setup='xs, n = range(256), 0', repeat=8))
8.384078979492188

因此,最丑陋的选择xrange(len(xs)-1,-1,-1)是最快的。

Assuming task is to find last element that satisfies some condition in a list (i.e. first when looking backwards), I’m getting following numbers:

>>> min(timeit.repeat('for i in xrange(len(xs)-1,-1,-1):\n    if 128 == xs[i]: break', setup='xs, n = range(256), 0', repeat=8))
4.6937971115112305
>>> min(timeit.repeat('for i in reversed(xrange(0, len(xs))):\n    if 128 == xs[i]: break', setup='xs, n = range(256), 0', repeat=8))
4.809093952178955
>>> min(timeit.repeat('for i, x in enumerate(reversed(xs), 1):\n    if 128 == x: break', setup='xs, n = range(256), 0', repeat=8))
4.931743860244751
>>> min(timeit.repeat('for i, x in enumerate(xs[::-1]):\n    if 128 == x: break', setup='xs, n = range(256), 0', repeat=8))
5.548468112945557
>>> min(timeit.repeat('for i in xrange(len(xs), 0, -1):\n    if 128 == xs[i - 1]: break', setup='xs, n = range(256), 0', repeat=8))
6.286104917526245
>>> min(timeit.repeat('i = len(xs)\nwhile 0 < i:\n    i -= 1\n    if 128 == xs[i]: break', setup='xs, n = range(256), 0', repeat=8))
8.384078979492188

So, the ugliest option xrange(len(xs)-1,-1,-1) is the fastest.


回答 25

您可以使用生成器:

li = [1,2,3,4,5,6]
len_li = len(li)
gen = (len_li-1-i for i in range(len_li))

最后:

for i in gen:
    print(li[i])

希望对您有帮助。

you can use a generator:

li = [1,2,3,4,5,6]
len_li = len(li)
gen = (len_li-1-i for i in range(len_li))

finally:

for i in gen:
    print(li[i])

Hope this helps you.


使用典型的测试目录结构运行unittest

问题:使用典型的测试目录结构运行unittest

即使是一个简单的Python模块,最常见的目录结构似乎也是将单元测试分成各自的test目录:

new_project/
    antigravity/
        antigravity.py
    test/
        test_antigravity.py
    setup.py
    etc.

例如,请参见此Python项目howto

我的问题很简单:实际运行这些测试的通常方法是什么?我怀疑这对除我之外的所有人都显而易见,但您不能直接在test目录下运行python test_antigravity.py,因为其中的import antigravity会失败——该模块不在路径上。

我知道我可以修改PYTHONPATH,以及其他与搜索路径有关的技巧,但我不敢相信那是最简单的方法——如果您是开发人员还好,但如果用户只是想检查测试是否通过,就不能指望他们这么做。

另一种选择是将测试文件复制到另一个目录中,但似乎有点愚蠢,并且错过了将它们放在一个单独目录中的意义。

那么,如果您刚刚将源代码下载到我的新项目中,将如何运行单元测试?我希望有一个答案让我对用户说:“要运行单元测试,请执行X。”

The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:

new_project/
    antigravity/
        antigravity.py
    test/
        test_antigravity.py
    setup.py
    etc.

for example see this Python project howto.

My question is simply What’s the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can’t just run python test_antigravity.py from the test directory as its import antigravity will fail as the module is not on the path.

I know I could modify PYTHONPATH and other search path related tricks, but I can’t believe that’s the simplest way – it’s fine if you’re the developer but not realistic to expect your users to use if they just want to check the tests are passing.

The other alternative is just to copy the test file into the other directory, but it seems a bit dumb and misses the point of having them in a separate directory to start with.

So, if you had just downloaded the source to my new project how would you run the unit tests? I’d prefer an answer that would let me say to my users: “To run the unit tests do X.”


回答 0

我认为最好的解决方案是使用unittest命令行界面,它会把目录添加到sys.path中,这样您就不必自己动手了(这是在TestLoader类中完成的)。

例如,对于这样的目录结构:

new_project
├── antigravity.py
└── test_antigravity.py

您可以运行:

$ cd new_project
$ python -m unittest test_antigravity

对于像您这样的目录结构:

new_project
├── antigravity
   ├── __init__.py         # make it a package
   └── antigravity.py
└── test
    ├── __init__.py         # also make test a package
    └── test_antigravity.py

test包内的测试模块中,您可以antigravity照常导入包及其模块:

# import the package
import antigravity

# import the antigravity module
from antigravity import antigravity

# or an object inside the antigravity module
from antigravity.antigravity import my_object

运行一个测试模块:

要运行单个测试模块,在这种情况下test_antigravity.py

$ cd new_project
$ python -m unittest test.test_antigravity

只需以导入模块的相同方式引用测试模块即可。

运行单个测试用例或测试方法:

您也可以运行一个TestCase或单个测试方法:

$ python -m unittest test.test_antigravity.GravityTestCase
$ python -m unittest test.test_antigravity.GravityTestCase.test_method

运行所有测试:

您还可以使用测试发现,它会为您发现并运行所有测试,这些测试必须是名为test*.py的模块或软件包(可以用-p, --pattern标志更改该模式):

$ cd new_project
$ python -m unittest discover
$ # Also works without discover for Python 3
$ # as suggested by @Burrito in the comments
$ python -m unittest

这将运行test包中的所有test*.py模块。

The best solution in my opinion is to use the unittest command line interface which will add the directory to the sys.path so you don’t have to (done in the TestLoader class).

For example for a directory structure like this:

new_project
├── antigravity.py
└── test_antigravity.py

You can just run:

$ cd new_project
$ python -m unittest test_antigravity

For a directory structure like yours:

new_project
├── antigravity
│   ├── __init__.py         # make it a package
│   └── antigravity.py
└── test
    ├── __init__.py         # also make test a package
    └── test_antigravity.py

And in the test modules inside the test package, you can import the antigravity package and its modules as usual:

# import the package
import antigravity

# import the antigravity module
from antigravity import antigravity

# or an object inside the antigravity module
from antigravity.antigravity import my_object

Running a single test module:

To run a single test module, in this case test_antigravity.py:

$ cd new_project
$ python -m unittest test.test_antigravity

Just reference the test module the same way you import it.

Running a single test case or test method:

Also you can run a single TestCase or a single test method:

$ python -m unittest test.test_antigravity.GravityTestCase
$ python -m unittest test.test_antigravity.GravityTestCase.test_method

Running all tests:

You can also use test discovery which will discover and run all the tests for you, they must be modules or packages named test*.py (can be changed with the -p, --pattern flag):

$ cd new_project
$ python -m unittest discover
$ # Also works without discover for Python 3
$ # as suggested by @Burrito in the comments
$ python -m unittest

This will run all the test*.py modules inside the test package.


回答 1

对用户来说,最简单的解决方案是提供一个可执行脚本(runtests.py或类似名字),由它来引导必要的测试环境,包括在需要时将您的项目根目录临时添加到sys.path。这不需要用户设置环境变量,类似下面这样的代码在引导脚本中就可以很好地工作:

import sys, os

sys.path.insert(0, os.path.dirname(__file__))

这样,您对用户的指示就可以像“ python runtests.py” 一样简单。

当然,如果您真正需要的路径就是os.path.dirname(__file__),那么根本不需要把它添加到sys.path;Python始终会把当前运行脚本所在的目录放在sys.path的开头,因此根据您的目录结构,也许只需要把runtests.py脚本放在正确的位置即可。

此外,Python 2.7+中的unittest模块(针对Python 2.6及更早版本反向移植为unittest2)现在内置了测试发现功能,因此如果您需要的是自动测试发现,就不再需要nose:给用户的说明可以简单到一句python -m unittest discover

The simplest solution for your users is to provide an executable script (runtests.py or some such) which bootstraps the necessary test environment, including, if needed, adding your root project directory to sys.path temporarily. This doesn’t require users to set environment variables, something like this works fine in a bootstrap script:

import sys, os

sys.path.insert(0, os.path.dirname(__file__))

Then your instructions to your users can be as simple as “python runtests.py“.

Of course, if the path you need really is os.path.dirname(__file__), then you don’t need to add it to sys.path at all; Python always puts the directory of the currently running script at the beginning of sys.path, so depending on your directory structure, just locating your runtests.py at the right place might be all that’s needed.

Also, the unittest module in Python 2.7+ (which is backported as unittest2 for Python 2.6 and earlier) now has test discovery built-in, so nose is no longer necessary if you want automated test discovery: your user instructions can be as simple as python -m unittest discover.
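
A minimal sketch of such a runtests.py, assuming the test package layout from the question (the verbosity level and exit-code handling are just choices, not part of the original answer):

    #!/usr/bin/env python
    import os
    import sys
    import unittest

    # Make the project root importable no matter where the script is invoked from.
    here = os.path.dirname(os.path.abspath(__file__))
    sys.path.insert(0, here)

    if __name__ == '__main__':
        # Mirror "python -m unittest discover": find test_*.py under test/ and run them.
        suite = unittest.defaultTestLoader.discover(os.path.join(here, 'test'), top_level_dir=here)
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        sys.exit(0 if result.wasSuccessful() else 1)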


回答 2

我通常会在项目目录(即同时包含源目录和test目录的那个目录)下创建一个“运行测试”脚本,用来加载我的“所有测试”套件。这通常是样板代码,因此我可以在项目之间重复使用。

run_tests.py:

import unittest
import test.all_tests
testSuite = test.all_tests.create_test_suite()
text_runner = unittest.TextTestRunner().run(testSuite)

test/all_tests.py(来自“如何在目录中运行所有Python单元测试?”)

import glob
import unittest

def create_test_suite():
    test_file_strings = glob.glob('test/test_*.py')
    module_strings = ['test.'+str[5:len(str)-3] for str in test_file_strings]
    suites = [unittest.defaultTestLoader.loadTestsFromName(name) \
              for name in module_strings]
    testSuite = unittest.TestSuite(suites)
    return testSuite

通过此设置,您确实可以在测试模块中直接import antigravity。缺点是您需要更多的支持代码才能只执行某个特定的测试……我每次都直接运行全部测试。

I generally create a “run tests” script in the project directory (the one that is common to both the source directory and test) that loads my “All Tests” suite. This is usually boilerplate code, so I can reuse it from project to project.

run_tests.py:

import unittest
import test.all_tests
testSuite = test.all_tests.create_test_suite()
text_runner = unittest.TextTestRunner().run(testSuite)

test/all_tests.py (from How do I run all Python unit tests in a directory?)

import glob
import unittest

def create_test_suite():
    test_file_strings = glob.glob('test/test_*.py')
    module_strings = ['test.'+str[5:len(str)-3] for str in test_file_strings]
    suites = [unittest.defaultTestLoader.loadTestsFromName(name) \
              for name in module_strings]
    testSuite = unittest.TestSuite(suites)
    return testSuite

With this setup, you can indeed just include antigravity in your test modules. The downside is you would need more support code to execute a particular test… I just run them all every time.


回答 3

从您链接到的文章:

创建一个test_modulename.py文件,并将您的unittest测试放入其中。由于测试模块与代码位于不同的目录中,因此您可能需要将模块的父目录添加到PYTHONPATH中才能运行它们:

$ cd /path/to/googlemaps

$ export PYTHONPATH=$PYTHONPATH:/path/to/googlemaps/googlemaps

$ python test/test_googlemaps.py

最后,Python还有一个更流行的单元测试框架(就是这么重要!):nose。nose可以帮助简化和扩展内置的unittest框架(例如,它可以自动找到您的测试代码并为您设置PYTHONPATH),但标准Python发行版中并未包含它。

也许您应该按照文章的提示看一下nose

From the article you linked to:

Create a test_modulename.py file and put your unittest tests in it. Since the test modules are in a separate directory from your code, you may need to add your module’s parent directory to your PYTHONPATH in order to run them:

$ cd /path/to/googlemaps

$ export PYTHONPATH=$PYTHONPATH:/path/to/googlemaps/googlemaps

$ python test/test_googlemaps.py

Finally, there is one more popular unit testing framework for Python (it’s that important!), nose. nose helps simplify and extend the builtin unittest framework (it can, for example, automagically find your test code and setup your PYTHONPATH for you), but it is not included with the standard Python distribution.

Perhaps you should look at nose as it suggests?


回答 4

我也遇到过同样的问题,单元测试放在单独的文件夹中。根据上述建议,我把源代码的绝对路径添加到了sys.path

以下解决方案的好处是,无需先切换到测试目录即可直接运行test/test_yourmodule.py文件:

import sys, os
testdir = os.path.dirname(__file__)
srcdir = '../antigravity'
sys.path.insert(0, os.path.abspath(os.path.join(testdir, srcdir)))

import antigravity
import unittest

I had the same problem, with a separate unit tests folder. From the mentioned suggestions I add the absolute source path to sys.path.

The benefit of the following solution is, that one can run the file test/test_yourmodule.py without changing at first into the test-directory:

import sys, os
testdir = os.path.dirname(__file__)
srcdir = '../antigravity'
sys.path.insert(0, os.path.abspath(os.path.join(testdir, srcdir)))

import antigravity
import unittest

回答 5

如果您运行“python setup.py develop”,该软件包就会出现在路径中。但您可能不想这样做,因为这可能会污染系统的Python安装,这也是virtualenvbuildout之类工具存在的原因。

if you run “python setup.py develop” then the package will be in the path. But you may not want to do that because you could infect your system python installation, which is why tools like virtualenv and buildout exist.
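
A minimal setup.py sketch that would make "python setup.py develop" (or the equivalent pip install -e .) work for the antigravity layout above; the name, version and setuptools usage are assumptions for illustration, not part of the original answer:

    # setup.py -- placeholder metadata, adjust for your own project
    from setuptools import setup, find_packages

    setup(
        name='antigravity',
        version='0.1',
        packages=find_packages(exclude=['test', 'test.*']),
    )

As the answer notes, run this inside a virtualenv (or similar) so the editable install does not touch the system Python.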


回答 6

Python unittest模块的解决方案/示例

给出以下项目结构:

ProjectName
 ├── project_name
 |    ├── models
 |    |    └── thing_1.py
 |    └── __main__.py
 └── test
      ├── models
      |    └── test_thing_1.py
      └── __main__.py

您可以在根目录下用python project_name运行项目,这会调用ProjectName/project_name/__main__.py


要用python test运行测试(实际执行的是ProjectName/test/__main__.py),您需要执行以下操作:

1)通过添加__init__.py文件把test/models目录变成一个包。这使得子目录中的测试用例可以从父级test目录访问。

# ProjectName/test/models/__init__.py

from .test_thing_1 import Thing1TestCase        

2)修改test/__main__.py中的系统路径,使其包含project_name目录。

# ProjectName/test/__main__.py

import sys
import unittest

sys.path.append('../project_name')

loader = unittest.TestLoader()
testSuite = loader.discover('test')
testRunner = unittest.TextTestRunner(verbosity=2)
testRunner.run(testSuite)

现在,您可以在测试中成功地从project_name导入内容。

# ProjectName/test/models/test_thing_1.py    

import unittest
from project_name.models import Thing1  # this doesn't work without 'sys.path.append' per step 2 above

class Thing1TestCase(unittest.TestCase):

    def test_thing_1_init(self):
        thing_id = 'ABC'
        thing1 = Thing1(thing_id)
        self.assertEqual(thing_id, thing1.id)

Solution/Example for Python unittest module

Given the following project structure:

ProjectName
 ├── project_name
 |    ├── models
 |    |    └── thing_1.py
 |    └── __main__.py
 └── test
      ├── models
      |    └── test_thing_1.py
      └── __main__.py

You can run your project from the root directory with python project_name, which calls ProjectName/project_name/__main__.py.


To run your tests with python test, effectively running ProjectName/test/__main__.py, you need to do the following:

1) Turn your test/models directory into a package by adding a __init__.py file. This makes the test cases within the sub directory accessible from the parent test directory.

# ProjectName/test/models/__init__.py

from .test_thing_1 import Thing1TestCase        

2) Modify your system path in test/__main__.py to include the project_name directory.

# ProjectName/test/__main__.py

import sys
import unittest

sys.path.append('../project_name')

loader = unittest.TestLoader()
testSuite = loader.discover('test')
testRunner = unittest.TextTestRunner(verbosity=2)
testRunner.run(testSuite)

Now you can successfully import things from project_name in your tests.

# ProjectName/test/models/test_thing_1.py    

import unittest
from project_name.models import Thing1  # this doesn't work without 'sys.path.append' per step 2 above

class Thing1TestCase(unittest.TestCase):

    def test_thing_1_init(self):
        thing_id = 'ABC'
        thing1 = Thing1(thing_id)
        self.assertEqual(thing_id, thing1.id)

回答 7

使用setup.py develop让您的工作目录成为已安装Python环境的一部分,然后再运行测试。

Use setup.py develop to make your working directory be part of the installed Python environment, then run the tests.


回答 8

如果您使用VS Code,并且您的测试与项目位于同一级别,那么开箱即用地运行和调试代码是行不通的。您可以做的是修改launch.json文件:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "pythonPath": "${config:python.pythonPath}",
            "program": "${file}",
            "cwd": "${workspaceRoot}",
            "env": {},
            "envFile": "${workspaceRoot}/.env",
            "debugOptions": [
                "WaitOnAbnormalExit",
                "WaitOnNormalExit",
                "RedirectOutput"
            ]
        }    
    ]
}

这里关键的一行是envFile

"envFile": "${workspaceRoot}/.env",

在项目的根目录中添加.env文件

在您的.env文件内部,添加指向项目根目录的路径。这会临时把

PYTHONPATH=C:\YOUR\PYTHON\PROJECT\ROOT_DIRECTORY

添加到项目的路径中,这样您就可以在VS Code中调试单元测试了。

If you use VS Code and your tests are located on the same level as your project then running and debug your code doesn’t work out of the box. What you can do is change your launch.json file:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "pythonPath": "${config:python.pythonPath}",
            "program": "${file}",
            "cwd": "${workspaceRoot}",
            "env": {},
            "envFile": "${workspaceRoot}/.env",
            "debugOptions": [
                "WaitOnAbnormalExit",
                "WaitOnNormalExit",
                "RedirectOutput"
            ]
        }    
    ]
}

The key line here is envFile

"envFile": "${workspaceRoot}/.env",

In the root of your project add .env file

Inside of your .env file add path to the root of your project. This will temporarily add

PYTHONPATH=C:\YOUR\PYTHON\PROJECT\ROOT_DIRECTORY

path to your project and you will be able to use debug unit tests from VS Code


回答 9

我注意到,如果您从“src”目录运行unittest命令行界面,导入无需任何修改即可正常工作。

python -m unittest discover -s ../test

如果要将其放入项目目录中的批处理文件中,可以执行以下操作:

setlocal & cd src & python -m unittest discover -s ../test

I noticed that if you run the unittest command line interface from your “src” directory, then imports work correctly without modification.

python -m unittest discover -s ../test

If you want to put that in a batch file in your project directory, you can do this:

setlocal & cd src & python -m unittest discover -s ../test

回答 10

这个问题困扰了我很长时间。我最近选择了以下目录结构:

project_path
├── Makefile
├── src
│   ├── script_1.py
│   ├── script_2.py
│   └── script_3.py
└── tests
    ├── __init__.py
    ├── test_script_1.py
    ├── test_script_2.py
    └── test_script_3.py

在test文件夹的__init__.py脚本中,编写以下代码:

import os
import sys
PROJECT_PATH = os.getcwd()
SOURCE_PATH = os.path.join(
    PROJECT_PATH,"src"
)
sys.path.append(SOURCE_PATH)

对于共享项目而言,Makefile非常重要,因为它保证脚本以正确的方式运行。这是我放进Makefile的命令:

run_tests:
    python -m unittest discover .

Makefile之所以重要,不仅是因为它运行的命令,还因为它从哪里运行这个命令。如果您cd进入tests目录再执行python -m unittest discover .,它将无法正常工作,因为tests中的__init__.py脚本会调用os.getcwd(),它会指向错误的绝对路径(该路径被追加到sys.path中,于是您就找不到源文件夹了)。由于discover会找到所有测试,这些脚本仍会运行,但不会正确运行。所以Makefile的作用就是让您不必去记住这个问题。

我真的很喜欢这种方法,因为我不必触摸src文件夹,单元测试或环境变量,并且一切运行都非常顺利。

让我知道你们是否喜欢它。

希望能有所帮助,

I’ve had the same problem for a long time. What I recently chose is the following directory structure:

project_path
├── Makefile
├── src
│   ├── script_1.py
│   ├── script_2.py
│   └── script_3.py
└── tests
    ├── __init__.py
    ├── test_script_1.py
    ├── test_script_2.py
    └── test_script_3.py

and in the __init__.py script of the test folder, I write the following:

import os
import sys
PROJECT_PATH = os.getcwd()
SOURCE_PATH = os.path.join(
    PROJECT_PATH,"src"
)
sys.path.append(SOURCE_PATH)

Super important for sharing the project is the Makefile, because it enforces running the scripts properly. Here is the command that I put in the Makefile:

run_tests:
    python -m unittest discover .

The Makefile is important not just because of the command it runs but also because of where it runs it from. If you would cd in tests and do python -m unittest discover ., it wouldn’t work because the init script in unit_tests calls os.getcwd(), which would then point to the incorrect absolute path (that would be appended to sys.path and you would be missing your source folder). The scripts would run since discover finds all the tests, but they wouldn’t run properly. So the Makefile is there to avoid having to remember this issue.

I really like this approach because I don’t have to touch my src folder, my unit tests or my environment variables and everything runs smoothly.

Let me know if you guys like it.

Hope that helps,


回答 11

以下是我的项目结构:

ProjectFolder:
 - project:
     - __init__.py
     - item.py
 - tests:
     - test_item.py

我发现最好在setUp()方法中进行导入:

import unittest
import sys    

class ItemTest(unittest.TestCase):

    def setUp(self):
        sys.path.insert(0, "../project")
        from project import item
        # further setup using this import

    def test_item_props(self):
        # do my assertions

if __name__ == "__main__":
    unittest.main()

Following is my project structure:

ProjectFolder:
 - project:
     - __init__.py
     - item.py
 - tests:
     - test_item.py

I found it better to import in the setUp() method:

import unittest
import sys    

class ItemTest(unittest.TestCase):

    def setUp(self):
        sys.path.insert(0, "../project")
        from project import item
        # further setup using this import

    def test_item_props(self):
        # do my assertions

if __name__ == "__main__":
    unittest.main()

回答 12

实际运行测试的通常方法是什么

我使用Python 3.6.2

cd new_project

pytest test/test_antigravity.py

要安装pytestsudo pip install pytest

我没有设置任何路径变量,并且导入不会因相同的“测试”项目结构而失败。

我像下面这样把if __name__ == '__main__'这部分代码注释掉了:

test_antigravity.py

import antigravity

class TestAntigravity(unittest.TestCase):

    def test_something(self):

        # ... test stuff here


# if __name__ == '__main__':
# 
#     if __package__ is None:
# 
#         import something
#         sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
#         from .. import antigravity
# 
#     else:
# 
#         from .. import antigravity
# 
#     unittest.main()

What’s the usual way of actually running the tests

I use Python 3.6.2

cd new_project

pytest test/test_antigravity.py

To install pytest: sudo pip install pytest

I didn’t set any path variable and my imports are not failing with the same “test” project structure.

I commented out this stuff: if __name__ == '__main__' like this:

test_antigravity.py

import antigravity

class TestAntigravity(unittest.TestCase):

    def test_something(self):

        # ... test stuff here


# if __name__ == '__main__':
# 
#     if __package__ is None:
# 
#         import something
#         sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
#         from .. import antigravity
# 
#     else:
# 
#         from .. import antigravity
# 
#     unittest.main()

回答 13

可以使用一个包装器(wrapper)来运行选定的测试或所有测试。

例如:

./run_tests antigravity/*.py

或者使用glob通配符(tests/**/*.py)递归运行所有测试(通过shopt -s globstar启用)。

包装器基本上可以使用argparse来解析参数,例如:

parser = argparse.ArgumentParser()
parser.add_argument('files', nargs='*')

然后加载所有测试:

for filename in args.files:
    exec(open(filename).read())

然后将它们添加到您的测试套件中(使用inspect):

alltests = unittest.TestSuite()
for name, obj in inspect.getmembers(sys.modules[__name__]):
    if inspect.isclass(obj) and name.startswith("FooTest"):
        alltests.addTest(unittest.makeSuite(obj))

并运行它们:

result = unittest.TextTestRunner(verbosity=2).run(alltests)

查看示例以获取更多详细信息。

另请参阅:如何在目录中运行所有Python单元测试?

It’s possible to use wrapper which runs selected or all tests.

For instance:

./run_tests antigravity/*.py

or to run all tests recursively use globbing (tests/**/*.py) (enable by shopt -s globstar).

The wrapper can basically use argparse to parse the arguments like:

parser = argparse.ArgumentParser()
parser.add_argument('files', nargs='*')

Then load all the tests:

for filename in args.files:
    exec(open(filename).read())

then add them into your test suite (using inspect):

alltests = unittest.TestSuite()
for name, obj in inspect.getmembers(sys.modules[__name__]):
    if inspect.isclass(obj) and name.startswith("FooTest"):
        alltests.addTest(unittest.makeSuite(obj))

and run them:

result = unittest.TextTestRunner(verbosity=2).run(alltests)

Check this example for more details.

See also: How to run all Python unit tests in a directory?
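
Assembled into a single file, such a wrapper might look like the sketch below; the FooTest class-name prefix comes from the snippets above and is an assumption about how your test classes are named, and loadTestsFromTestCase is used here as the non-deprecated equivalent of makeSuite:

    #!/usr/bin/env python
    # run_tests: execute the given test files and run every FooTest* class found in them.
    import argparse
    import inspect
    import sys
    import unittest

    parser = argparse.ArgumentParser()
    parser.add_argument('files', nargs='*')
    args = parser.parse_args()

    # Execute each test file in this module's namespace so its classes become visible here.
    for filename in args.files:
        exec(open(filename).read())

    alltests = unittest.TestSuite()
    for name, obj in inspect.getmembers(sys.modules[__name__]):
        if inspect.isclass(obj) and name.startswith("FooTest"):
            alltests.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(obj))

    result = unittest.TextTestRunner(verbosity=2).run(alltests)
    sys.exit(0 if result.wasSuccessful() else 1)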


回答 14

Python 3+

对@Pierre的回答作一点补充

使用这样的unittest目录结构:

new_project
├── antigravity
│   ├── __init__.py         # make it a package
│   └── antigravity.py
└── test
    ├── __init__.py         # also make test a package
    └── test_antigravity.py

要运行测试模块test_antigravity.py

$ cd new_project
$ python -m unittest test.test_antigravity

或者运行单个TestCase

$ python -m unittest test.test_antigravity.GravityTestCase

务必不要忘记__init__.py文件,即使是空文件也要有,否则将无法工作。

Python 3+

Adding to @Pierre

Using unittest directory structure like this:

new_project
├── antigravity
│   ├── __init__.py         # make it a package
│   └── antigravity.py
└── test
    ├── __init__.py         # also make test a package
    └── test_antigravity.py

To run the test module test_antigravity.py:

$ cd new_project
$ python -m unittest test.test_antigravity

Or a single TestCase

$ python -m unittest test.test_antigravity.GravityTestCase

Mandatory don’t forget the __init__.py even if empty otherwise will not work.


回答 15

不借助一些“巫术”,您是无法从父目录导入的。这里是另一种至少适用于Python 3.6的方法。

首先,创建一个包含以下内容的文件test/context.py

import sys
import os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

然后在文件test/test_antigravity.py中进行以下导入:

import unittest
try:
    import context
except ModuleNotFoundError:
    import test.context    
import antigravity

请注意,使用这个try-except子句的原因是

  • 使用“python test_antigravity.py”运行时,import test.context会失败,并且
  • 在new_project目录中使用“python -m unittest”运行时,import context会失败。

有了这个技巧,他们俩都可以工作。

现在,您可以使用以下命令运行测试目录中的所有测试文件:

$ pwd
/projects/new_project
$ python -m unittest

或使用以下命令运行单个测试文件:

$ cd test
$ python test_antigravity.py

好吧,这并不比直接把context.py的内容写进test_antigravity.py漂亮多少,但也许好那么一点。欢迎提出建议。

You can’t import from the parent directory without some voodoo. Here’s yet another way that works with at least Python 3.6.

First, have a file test/context.py with the following content:

import sys
import os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

Then have the following import in the file test/test_antigravity.py:

import unittest
try:
    import context
except ModuleNotFoundError:
    import test.context    
import antigravity

Note that the reason for this try-except clause is that

  • import test.context fails when run with “python test_antigravity.py” and
  • import context fails when run with “python -m unittest” from the new_project directory.

With this trickery they both work.

Now you can run all the test files within test directory with:

$ pwd
/projects/new_project
$ python -m unittest

or run an individual test file with:

$ cd test
$ python test_antigravity.py

Ok, it’s not much prettier than having the content of context.py within test_antigravity.py, but maybe a little. Suggestions are welcome.


回答 16

如果测试目录中有多个目录,则必须在每个目录中添加一个__init__.py文件。

/home/johndoe/snakeoil
└── test
    ├── __init__.py        
    └── frontend
        └── __init__.py
        └── test_foo.py
    └── backend
        └── __init__.py
        └── test_bar.py

然后要一次运行所有测试,请运行:

python -m unittest discover -s /home/johndoe/snakeoil/test -t /home/johndoe/snakeoil

来源:python -m unittest -h

  -s START, --start-directory START
                        Directory to start discovery ('.' default)
  -t TOP, --top-level-directory TOP
                        Top level directory of project (defaults to start
                        directory)

If you have multiple directories in your test directory, then you have to add to each directory an __init__.py file.

/home/johndoe/snakeoil
└── test
    ├── __init__.py        
    └── frontend
        └── __init__.py
        └── test_foo.py
    └── backend
        └── __init__.py
        └── test_bar.py

Then to run every test at once, run:

python -m unittest discover -s /home/johndoe/snakeoil/test -t /home/johndoe/snakeoil

Source: python -m unittest -h

  -s START, --start-directory START
                        Directory to start discovery ('.' default)
  -t TOP, --top-level-directory TOP
                        Top level directory of project (defaults to start
                        directory)

回答 17

无论您当前处于哪个工作目录,此BASH脚本都可以从文件系统中的任何位置执行python unittest测试目录。

当留在./src./example工作目录中并且需要快速的单元测试时,这很有用:

#!/bin/bash

this_program="$0"
dirname="`dirname $this_program`"
readlink="`readlink -e $dirname`"

python -m unittest discover -s "$readlink"/test -v

在生产环境中,无需用test/__init__.py文件来增加您的包/内存开销。

This BASH script will execute the python unittest test directory from anywhere in the file system, no matter what working directory you are in.

This is useful when staying in the ./src or ./example working directory and you need a quick unit test:

#!/bin/bash

this_program="$0"
dirname="`dirname $this_program`"
readlink="`readlink -e $dirname`"

python -m unittest discover -s "$readlink"/test -v

No need for a test/__init__.py file to burden your package/memory-overhead during production.


回答 18

这样一来,您就可以从任何位置运行测试脚本,而不必从命令行中弄乱系统变量。

这会把主项目文件夹添加到python路径中,其位置是相对于脚本本身、而不是相对于当前工作目录来确定的。

import sys, os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.realpath(__file__))))

将其添加到所有测试脚本的顶部。这样会将主项目文件夹添加到系统路径,因此从那里开始工作的所有模块导入现在都可以工作。而且从哪里运行测试都没有关系。

您显然可以更改project_path_hack文件以匹配您的主项目文件夹位置。

This way will let you run the test scripts from wherever you want without messing around with system variables from the command line.

This adds the main project folder to the python path, with the location found relative to the script itself, not relative to the current working directory.

import sys, os

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.realpath(__file__))))

Add that to the top of all your test scripts. That will add the main project folder to the system path, so any module imports that work from there will now work. And it doesn’t matter where you run the tests from.

You can obviously change the project_path_hack file to match your main project folder location.


回答 19

如果您正在寻找纯命令行的解决方案:

基于以下目录结构(已一般化为带有专门源目录的布局):

new_project/
    src/
        antigravity.py
    test/
        test_antigravity.py

Windows:(在new_project中)

$ set PYTHONPATH=%PYTHONPATH%;%cd%\src
$ python -m unittest discover -s test

如果要在批处理for循环中使用它,请参见这个问题。

Linux:(在new_project中)

$ export PYTHONPATH=$PYTHONPATH:$(pwd)/src  [I think - please edit this answer if you are a Linux user and you know this]
$ python -m unittest discover -s test

使用这种方法,还可以根据需要向PYTHONPATH添加更多目录。

If you are looking for a command line-only solution:

Based on the following directory structure (generalized with a dedicated source directory):

new_project/
    src/
        antigravity.py
    test/
        test_antigravity.py

Windows: (in new_project)

$ set PYTHONPATH=%PYTHONPATH%;%cd%\src
$ python -m unittest discover -s test

See this question if you want to use this in a batch for-loop.

Linux: (in new_project)

$ export PYTHONPATH=$PYTHONPATH:$(pwd)/src  [I think - please edit this answer if you are a Linux user and you know this]
$ python -m unittest discover -s test

With this approach, it is also possible to add more directories to the PYTHONPATH if necessary.


回答 20

您真的应该使用pip工具。

使用pip install -e .以开发模式安装您的软件包。这是pytest推荐的一种非常好的做法(请参阅其良好实践文档,在那里您还可以找到两种可以遵循的项目布局)。

You should really use the pip tool.

Use pip install -e . to install your package in development mode. This is a very good practice, recommended by pytest (see their good practices documentation, where you can also find two project layouts to follow).
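
A minimal command sequence under that workflow, assuming the project root contains a setup.py (or pyproject.toml) describing the package, such as the sketch shown after 回答 5 above:

    $ cd new_project
    $ pip install -e .              # editable install; imports resolve without sys.path hacks
    $ python -m pytest test         # or: python -m unittest discover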


关于捕获任何异常

问题:关于捕获任何异常

我如何编写一个try/ except块来捕获所有异常?

How can I write a try/except block that catches all exceptions?


回答 0

您可以,但您可能不应该:

try:
    do_something()
except:
    print "Caught it!"

但是,这也会捕获像KeyboardInterrupt这样的异常,而您通常并不希望那样,对吗?除非您立即重新引发该异常,参见docs中的以下示例:

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except IOError as (errno, strerror):
    print "I/O error({0}): {1}".format(errno, strerror)
except ValueError:
    print "Could not convert data to an integer."
except:
    print "Unexpected error:", sys.exc_info()[0]
    raise

You can but you probably shouldn’t:

try:
    do_something()
except:
    print "Caught it!"

However, this will also catch exceptions like KeyboardInterrupt and you usually don’t want that, do you? Unless you re-raise the exception right away – see the following example from the docs:

try:
    f = open('myfile.txt')
    s = f.readline()
    i = int(s.strip())
except IOError as (errno, strerror):
    print "I/O error({0}): {1}".format(errno, strerror)
except ValueError:
    print "Could not convert data to an integer."
except:
    print "Unexpected error:", sys.exc_info()[0]
    raise

回答 1

除了裸露的except:子句(就像其他人说的那样,您不应该使用),您可以简单地捕获Exception

import traceback
import logging

try:
    whatever()
except Exception as e:
    logging.error(traceback.format_exc())
    # Logs the error appropriately. 

通常,只有当您想在程序终止之前处理其他任何未捕获的异常时,才会考虑在代码的最外层这样做。

except Exception相对于裸露的except的优势在于,有少数异常它不会捕获,最明显的是KeyboardInterruptSystemExit:如果您捕获并吞掉了这些异常,那么任何人都将很难退出您的脚本。

Apart from a bare except: clause (which as others have said you shouldn’t use), you can simply catch Exception:

import traceback
import logging

try:
    whatever()
except Exception as e:
    logging.error(traceback.format_exc())
    # Logs the error appropriately. 

You would normally only ever consider doing this at the outermost level of your code if for example you wanted to handle any otherwise uncaught exceptions before terminating.

The advantage of except Exception over the bare except is that there are a few exceptions that it wont catch, most obviously KeyboardInterrupt and SystemExit: if you caught and swallowed those then you could make it hard for anyone to exit your script.
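
As a small illustration of that last point (the failing function body here is only a placeholder), an except Exception handler catches ordinary errors while still letting Ctrl-C and sys.exit() through:

    import logging
    import traceback

    def whatever():
        raise ValueError("something went wrong")  # placeholder failure

    try:
        whatever()
    except Exception:
        # ValueError is caught and logged here, but KeyboardInterrupt (Ctrl-C)
        # and SystemExit (sys.exit()) would not be, because they derive from
        # BaseException rather than Exception.
        logging.error(traceback.format_exc())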


回答 2

您可以像这样处理一般的异常:

try:
    a = 2/0
except Exception as e:
    print e.__doc__
    print e.message

You can do this to handle general exceptions

try:
    a = 2/0
except Exception as e:
    print e.__doc__
    print e.message

回答 3

要捕获所有可能的异常,请捕获BaseException。它位于Exception层次结构的顶部:

Python 3:https://docs.python.org/3.5/library/exceptions.html#exception-hierarchy

Python 2.7:https://docs.python.org/2.7/library/exceptions.html#exception-hierarchy

try:
    something()
except BaseException as error:
    print('An exception occurred: {}'.format(error))

但是,正如其他人所提到的那样,通常仅在特定情况下才需要此功能。

To catch all possible exceptions, catch BaseException. It’s on top of the Exception hierarchy:

Python 3: https://docs.python.org/3.5/library/exceptions.html#exception-hierarchy

Python 2.7: https://docs.python.org/2.7/library/exceptions.html#exception-hierarchy

try:
    something()
except BaseException as error:
    print('An exception occurred: {}'.format(error))

But as other people mentioned, you would usually not need this, only for specific cases.
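
A quick interpreter check makes the difference between the two visible:

    >>> issubclass(KeyboardInterrupt, Exception)
    False
    >>> issubclass(KeyboardInterrupt, BaseException)
    True
    >>> issubclass(ValueError, Exception)
    True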


回答 4

非常简单的示例,类似于此处找到的示例:

http://docs.python.org/tutorial/errors.html#defining-clean-up-actions

如果您想捕获所有异常,请把您的全部代码放进“try:”语句里,替换掉示例中的'print "Performing an action which may throw an exception."'这一行。

try:
    print "Performing an action which may throw an exception."
except Exception, error:
    print "An exception was thrown!"
    print str(error)
else:
    print "Everything looks great!"
finally:
    print "Finally is called directly after executing the try statement whether an exception is thrown or not."

在上面的示例中,您将按以下顺序查看输出:

1)执行可能会引发异常的动作。

2)无论是否引发异常,finally都会在try语句执行后被直接调用。

3)“引发了异常!” 或“一切看起来都很棒!” 取决于是否引发异常。

希望这可以帮助!

Very simple example, similar to the one found here:

http://docs.python.org/tutorial/errors.html#defining-clean-up-actions

If you’re attempting to catch ALL exceptions, then put all your code within the “try:” statement, in place of ‘print “Performing an action which may throw an exception.”‘.

try:
    print "Performing an action which may throw an exception."
except Exception, error:
    print "An exception was thrown!"
    print str(error)
else:
    print "Everything looks great!"
finally:
    print "Finally is called directly after executing the try statement whether an exception is thrown or not."

In the above example, you’d see output in this order:

1) Performing an action which may throw an exception.

2) Finally is called directly after executing the try statement whether an exception is thrown or not.

3) “An exception was thrown!” or “Everything looks great!” depending on whether an exception was thrown.

Hope this helps!


回答 5

有多种方法可以做到这一点,特别是在Python 3.0及更高版本中

方法1

这是最简单的方法,但不推荐使用,因为您无法知道究竟是哪一行代码实际引发了异常:

def bad_method():
    try:
        sqrt = 0**-1
    except Exception as e:
        print(e)

bad_method()

方法2

建议使用此方法,因为它提供了有关每个异常的更多详细信息。这包括:

  • 您代码的行号
  • 文件名
  • 实际错误更详细

唯一的缺点是需要导入traceback

import traceback

def bad_method():
    try:
        sqrt = 0**-1
    except Exception:
        traceback.print_exc()

bad_method()

There are multiple ways to do this in particular with Python 3.0 and above

Approach 1

This is simple approach but not recommended because you would not know exactly which line of code is actually throwing the exception:

def bad_method():
    try:
        sqrt = 0**-1
    except Exception as e:
        print(e)

bad_method()

Approach 2

This approach is recommended because it provides more detail about each exception. It includes:

  • Line number for your code
  • File name
  • The actual error in more verbose way

The only drawback is traceback needs to be imported.

import traceback

def bad_method():
    try:
        sqrt = 0**-1
    except Exception:
        traceback.print_exc()

bad_method()

回答 6

我刚刚发现了这个小技巧,可以在Python 2.7中检测异常的名称。有时我已经在代码中处理了特定的异常,因此需要检查该名称是否在已处理的异常列表中。

try:
    raise IndexError #as test error
except Exception as e:
    excepName = type(e).__name__ # returns the name of the exception

I’ve just found out this little trick for testing exception names in Python 2.7. Sometimes I have handled specific exceptions in the code, so I needed a test to see if that name is within a list of handled exceptions.

try:
    raise IndexError #as test error
except Exception as e:
    excepName = type(e).__name__ # returns the name of the exception
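
A small follow-up sketch of the check described above; the handled list is only an illustrative assumption:

    handled = ['IndexError', 'KeyError']  # names your code already deals with explicitly

    try:
        raise IndexError  # as test error
    except Exception as e:
        excepName = type(e).__name__  # returns the name of the exception
        if excepName in handled:
            print("already handled: " + excepName)
        else:
            raise  # unknown exception, let it propagate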

回答 7

try:
    whatever()
except:
    # this will catch any exception or error

值得一提的是,这不是正确的Python编码。这还将捕获许多您可能不想捕获的错误。

try:
    whatever()
except:
    # this will catch any exception or error

It is worth mentioning this is not proper Python coding. This will catch also many errors you might not want to catch.


如何在Python中更改工作目录?

问题:如何在Python中更改工作目录?

cd 是用于更改工作目录的shell命令。

如何在Python中更改当前的工作目录?

cd is the shell command to change the working directory.

How do I change the current working directory in Python?


回答 0

您可以使用以下命令更改工作目录:

import os

os.chdir(path)

使用此方法时,有两个最佳实践:

  1. 在无效路径上捕获异常(WindowsError,OSError)。如果抛出异常,请不要执行任何递归操作,尤其是破坏性操作。它们将沿旧路径而不是新路径运行。
  2. 完成后,返回到旧目录。可以通过将chdir调用包装在上下文管理器中以异常安全的方式完成,就像Brian M. Hunt在他的答案中所做的那样。

在子进程中更改当前工作目录不会更改父进程中的当前工作目录。对Python解释器来说也是如此。您不能使用os.chdir()去更改调用进程的CWD。

You can change the working directory with:

import os

os.chdir(path)

There are two best practices to follow when using this method:

  1. Catch the exception (WindowsError, OSError) on invalid path. If the exception is thrown, do not perform any recursive operations, especially destructive ones. They will operate on the old path and not the new one.
  2. Return to your old directory when you’re done. This can be done in an exception-safe manner by wrapping your chdir call in a context manager, like Brian M. Hunt did in his answer.

Changing the current working directory in a subprocess does not change the current working directory in the parent process. This is true of the Python interpreter as well. You cannot use os.chdir() to change the CWD of the calling process.
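
A minimal sketch of both practices combined (the target path is a placeholder):

    import os

    saved = os.getcwd()
    try:
        os.chdir('/path/to/work/in')  # placeholder path
        # ... do work relative to the new directory ...
    except OSError:
        # Invalid path: we are still in `saved`, so avoid any recursive or
        # destructive operations here.
        print("could not change the working directory")
    finally:
        os.chdir(saved)  # always return to the old directory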


回答 1

这是一个用于更改工作目录的上下文管理器示例。它比其他地方提到的ActiveState版本简单,但足以完成工作。

上下文管理器: cd

import os

class cd:
    """Context manager for changing the current working directory"""
    def __init__(self, newPath):
        self.newPath = os.path.expanduser(newPath)

    def __enter__(self):
        self.savedPath = os.getcwd()
        os.chdir(self.newPath)

    def __exit__(self, etype, value, traceback):
        os.chdir(self.savedPath)

或者尝试下面这个使用ContextManager的更简洁的等效写法(示例如下)。

import subprocess # just to call an arbitrary command e.g. 'ls'

# enter the directory like this:
with cd("~/Library"):
   # we are in ~/Library
   subprocess.call("ls")

# outside the context manager we are back wherever we started.

Here’s an example of a context manager to change the working directory. It is simpler than an ActiveState version referred to elsewhere, but this gets the job done.

Context Manager: cd

import os

class cd:
    """Context manager for changing the current working directory"""
    def __init__(self, newPath):
        self.newPath = os.path.expanduser(newPath)

    def __enter__(self):
        self.savedPath = os.getcwd()
        os.chdir(self.newPath)

    def __exit__(self, etype, value, traceback):
        os.chdir(self.savedPath)

Or try the more concise equivalent(below), using ContextManager.

Example

import subprocess # just to call an arbitrary command e.g. 'ls'

# enter the directory like this:
with cd("~/Library"):
   # we are in ~/Library
   subprocess.call("ls")

# outside the context manager we are back wherever we started.

回答 2

我会这样使用os.chdir

os.chdir("/path/to/change/to")

顺便说一句,如果您需要弄清楚当前路径,请使用os.getcwd()

更多信息参见这里

I would use os.chdir like this:

os.chdir("/path/to/change/to")

By the way, if you need to figure out your current path, use os.getcwd().

More here
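
For example (the path is a placeholder):

    import os

    print(os.getcwd())              # the directory you started in
    os.chdir("/path/to/change/to")  # placeholder path
    print(os.getcwd())              # now reflects the new working directory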


回答 3

cd() 使用生成器和装饰器很容易编写。

from contextlib import contextmanager
import os

@contextmanager
def cd(newdir):
    prevdir = os.getcwd()
    os.chdir(os.path.expanduser(newdir))
    try:
        yield
    finally:
        os.chdir(prevdir)

然后,即使在引发异常之后,也将还原目录:

os.chdir('/home')

with cd('/tmp'):
    # ...
    raise Exception("There's no place like home.")
# Directory is now back to '/home'.

cd() is easy to write using a generator and a decorator.

from contextlib import contextmanager
import os

@contextmanager
def cd(newdir):
    prevdir = os.getcwd()
    os.chdir(os.path.expanduser(newdir))
    try:
        yield
    finally:
        os.chdir(prevdir)

Then, the directory is reverted even after an exception is thrown:

os.chdir('/home')

with cd('/tmp'):
    # ...
    raise Exception("There's no place like home.")
# Directory is now back to '/home'.

回答 4

如果您使用的是相对较新版本的Python,也可以使用一个上下文管理器,比如这个:

from __future__ import with_statement
from grizzled.os import working_directory

with working_directory(path_to_directory):
    # code in here occurs within the directory

# code here is in the original directory

更新

如果您喜欢自己动手:

import os
from contextlib import contextmanager

@contextmanager
def working_directory(directory):
    owd = os.getcwd()
    try:
        os.chdir(directory)
        yield directory
    finally:
        os.chdir(owd)

If you’re using a relatively new version of Python, you can also use a context manager, such as this one:

from __future__ import with_statement
from grizzled.os import working_directory

with working_directory(path_to_directory):
    # code in here occurs within the directory

# code here is in the original directory

UPDATE

If you prefer to roll your own:

import os
from contextlib import contextmanager

@contextmanager
def working_directory(directory):
    owd = os.getcwd()
    try:
        os.chdir(directory)
        yield directory
    finally:
        os.chdir(owd)

回答 5

正如其他人已经指出的那样,以上所有解决方案只会更改当前进程的工作目录。当您退出回到Unix shell时,这个更改就会丢失。如果实在别无他法,您可以用下面这个可怕的hack在Unix上更改父shell的目录:

def quote_against_shell_expansion(s):
    import pipes
    return pipes.quote(s)

def put_text_back_into_terminal_input_buffer(text):
    # use of this means that it only works in an interactive session
    # (and if the user types while it runs they could insert characters between the characters in 'text'!)
    import fcntl, termios
    for c in text:
        fcntl.ioctl(1, termios.TIOCSTI, c)

def change_parent_process_directory(dest):
    # the horror
    put_text_back_into_terminal_input_buffer("cd "+quote_against_shell_expansion(dest)+"\n")

As already pointed out by others, all the solutions above only change the working directory of the current process. This is lost when you exit back to the Unix shell. If desperate you can change the parent shell directory on Unix with this horrible hack:

def quote_against_shell_expansion(s):
    import pipes
    return pipes.quote(s)

def put_text_back_into_terminal_input_buffer(text):
    # use of this means that it only works in an interactive session
    # (and if the user types while it runs they could insert characters between the characters in 'text'!)
    import fcntl, termios
    for c in text:
        fcntl.ioctl(1, termios.TIOCSTI, c)

def change_parent_process_directory(dest):
    # the horror
    put_text_back_into_terminal_input_buffer("cd "+quote_against_shell_expansion(dest)+"\n")

回答 6

os.chdir() 是正确的方法。

os.chdir() is the right way.


回答 7

os.chdir()cd的Python版本。

os.chdir() is the Pythonic version of cd.


回答 8

import os

abs_path = 'C://a/b/c'
rel_path = './folder'

os.chdir(abs_path)
os.chdir(rel_path)

os.chdir(abs_path)和os.chdir(rel_path)都可以使用,使用相对路径时无需调用os.getcwd()。

import os

abs_path = 'C://a/b/c'
rel_path = './folder'

os.chdir(abs_path)
os.chdir(rel_path)

You can use both with os.chdir(abs_path) or os.chdir(rel_path), there’s no need to call os.getcwd() to use a relative path.


回答 9

沿着Brian指出的方向更进一步,基于sh(1.0.8+):

from sh import cd, ls

cd('/tmp')
print ls()

Further into direction pointed out by Brian and based on sh (1.0.8+)

from sh import cd, ls

cd('/tmp')
print ls()

回答 10

如果您想执行类似“cd ..”的操作,只需键入:

os.chdir("..")

它与Windows cmd中的cd..相同。当然,必须先import os(例如,把它作为代码的第一行)。

If You would like to perform something like “cd..” option, just type:

os.chdir("..")

it is the same as in Windows cmd: cd.. Of course import os is neccessary (e.g type it as 1st line of your code)


回答 11

如果您使用spyder并且喜欢GUI,只需单击屏幕右上角的文件夹按钮,浏览到您想用作当前目录的文件夹/目录。完成后,转到spyder IDE窗口中的文件浏览器选项卡,就可以看到其中的所有文件/文件夹。要检查当前工作目录,请转到spyder IDE的控制台,然后键入

pwd

它将打印与之前选择的路径相同的路径。

If you use spyder and love GUI, you can simply click on the folder button on the upper right corner of your screen and navigate through folders/directories you want as current directory. After doing so you can go to the file explorer tab of the window in spyder IDE and you can see all the files/folders present there. to check your current working directory go to the console of spyder IDE and simply type

pwd

it will print the the same path as you have selected before.


回答 12

更改脚本所在进程的当前目录很简单。我认为问题实际上是如何更改调用python脚本的那个命令窗口的当前目录,这非常困难。Windows中的bat脚本或Bash shell中的Bash脚本可以用普通的cd命令做到这一点,因为shell本身就是解释器。在Windows和Linux中,Python都只是一个程序,而任何程序都不能直接更改其父进程的环境。不过,把一个简单的shell脚本与完成大部分复杂工作的Python脚本结合起来,就可以达到预期的效果。例如,为了制作一个带有遍历历史记录(可以后退/前进/选择重访)的扩展cd命令,我编写了一个相对复杂的Python脚本,并由一个简单的bat脚本调用。遍历列表存储在文件中,目标目录位于第一行。当python脚本返回时,bat脚本读取文件的第一行并将其作为cd的参数。完整的bat脚本(为简洁起见省略了注释)如下:

if _%1 == _. goto cdDone
if _%1 == _? goto help
if /i _%1 NEQ _-H goto doCd
:help
echo d.bat and dSup.py 2016.03.05. Extended chdir.
echo -C = clear traversal list.
echo -B or nothing = backward (to previous dir).
echo -F or - = forward (to next dir).
echo -R = remove current from list and return to previous.
echo -S = select from list.
echo -H, -h, ? = help.
echo . = make window title current directory.
echo Anything else = target directory.
goto done

:doCd
%~dp0dSup.py %1
for /F %%d in ( %~dp0dSupList ) do (
    cd %%d
    if errorlevel 1 ( %~dp0dSup.py -R )
    goto cdDone
)
:cdDone
title %CD%
:done

python脚本dSup.py是:

import sys, os, msvcrt

def indexNoCase ( slist, s ) :
    for idx in range( len( slist )) :
        if slist[idx].upper() == s.upper() :
            return idx
    raise ValueError

# .........main process ...................
if len( sys.argv ) < 2 :
    cmd = 1 # No argument defaults to -B, the most common operation
elif sys.argv[1][0] == '-':
    if len(sys.argv[1]) == 1 :
        cmd = 2 # '-' alone defaults to -F, second most common operation.
    else :
        cmd = 'CBFRS'.find( sys.argv[1][1:2].upper())
else :
    cmd = -1
    dir = os.path.abspath( sys.argv[1] ) + '\n'

# cmd is -1 = path, 0 = C, 1 = B, 2 = F, 3 = R, 4 = S

fo = open( os.path.dirname( sys.argv[0] ) + '\\dSupList', mode = 'a+t' )
fo.seek( 0 )
dlist = fo.readlines( -1 )
if len( dlist ) == 0 :
    dlist.append( os.getcwd() + '\n' ) # Prime new directory list with current.

if cmd == 1 : # B: move backward, i.e. to previous
    target = dlist.pop(0)
    dlist.append( target )
elif cmd == 2 : # F: move forward, i.e. to next
    target = dlist.pop( len( dlist ) - 1 )
    dlist.insert( 0, target )
elif cmd == 3 : # R: remove current from list. This forces cd to previous, a
                # desireable side-effect
    dlist.pop( 0 )
elif cmd == 4 : # S: select from list
# The current directory (dlist[0]) is included essentially as ESC.
    for idx in range( len( dlist )) :
        print( '(' + str( idx ) + ')', dlist[ idx ][:-1])
    while True :
        inp = msvcrt.getche()
        if inp.isdigit() :
            inp = int( inp )
            if inp < len( dlist ) :
                print( '' ) # Print the newline we didn't get from getche.
                break
        print( ' is out of range' )
# Select 0 means the current directory and the list is not changed. Otherwise
# the selected directory is moved to the top of the list. This can be done by
# either rotating the whole list until the selection is at the head or pop it
# and insert it to 0. It isn't obvious which would be better for the user but
# since pop-insert is simpler, it is used.
    if inp > 0 :
        dlist.insert( 0, dlist.pop( inp ))

elif cmd == -1 : # -1: dir is the requested new directory.
# If it is already in the list then remove it before inserting it at the head.
# This takes care of both the common case of it having been recently visited
# and the less common case of user mistakenly requesting current, in which
# case it is already at the head. Deleting and putting it back is a trivial
# inefficiency.
    try:
        dlist.pop( indexNoCase( dlist, dir ))
    except ValueError :
        pass
    dlist = dlist[:9] # Control list length by removing older dirs (should be
                      # no more than one).
    dlist.insert( 0, dir ) 

fo.truncate( 0 )
if cmd != 0 : # C: clear the list
    fo.writelines( dlist )

fo.close()
exit(0)

Changing the current directory of the script process is trivial. I think the question is actually how to change the current directory of the command window from which a python script is invoked, which is very difficult. A Bat script in Windows or a Bash script in a Bash shell can do this with an ordinary cd command because the shell itself is the interpreter. In both Windows and Linux Python is a program and no program can directly change its parent’s environment. However the combination of a simple shell script with a Python script doing most of the hard stuff can achieve the desired result. For example, to make an extended cd command with traversal history for backward/forward/select revisit, I wrote a relatively complex Python script invoked by a simple bat script. The traversal list is stored in a file, with the target directory on the first line. When the python script returns, the bat script reads the first line of the file and makes it the argument to cd. The complete bat script (minus comments for brevity) is:

if _%1 == _. goto cdDone
if _%1 == _? goto help
if /i _%1 NEQ _-H goto doCd
:help
echo d.bat and dSup.py 2016.03.05. Extended chdir.
echo -C = clear traversal list.
echo -B or nothing = backward (to previous dir).
echo -F or - = forward (to next dir).
echo -R = remove current from list and return to previous.
echo -S = select from list.
echo -H, -h, ? = help.
echo . = make window title current directory.
echo Anything else = target directory.
goto done

:doCd
%~dp0dSup.py %1
for /F %%d in ( %~dp0dSupList ) do (
    cd %%d
    if errorlevel 1 ( %~dp0dSup.py -R )
    goto cdDone
)
:cdDone
title %CD%
:done

The python script, dSup.py is:

import sys, os, msvcrt

def indexNoCase ( slist, s ) :
    for idx in range( len( slist )) :
        if slist[idx].upper() == s.upper() :
            return idx
    raise ValueError

# .........main process ...................
if len( sys.argv ) < 2 :
    cmd = 1 # No argument defaults to -B, the most common operation
elif sys.argv[1][0] == '-':
    if len(sys.argv[1]) == 1 :
        cmd = 2 # '-' alone defaults to -F, second most common operation.
    else :
        cmd = 'CBFRS'.find( sys.argv[1][1:2].upper())
else :
    cmd = -1
    dir = os.path.abspath( sys.argv[1] ) + '\n'

# cmd is -1 = path, 0 = C, 1 = B, 2 = F, 3 = R, 4 = S

fo = open( os.path.dirname( sys.argv[0] ) + '\\dSupList', mode = 'a+t' )
fo.seek( 0 )
dlist = fo.readlines( -1 )
if len( dlist ) == 0 :
    dlist.append( os.getcwd() + '\n' ) # Prime new directory list with current.

if cmd == 1 : # B: move backward, i.e. to previous
    target = dlist.pop(0)
    dlist.append( target )
elif cmd == 2 : # F: move forward, i.e. to next
    target = dlist.pop( len( dlist ) - 1 )
    dlist.insert( 0, target )
elif cmd == 3 : # R: remove current from list. This forces cd to previous, a
                # desireable side-effect
    dlist.pop( 0 )
elif cmd == 4 : # S: select from list
# The current directory (dlist[0]) is included essentially as ESC.
    for idx in range( len( dlist )) :
        print( '(' + str( idx ) + ')', dlist[ idx ][:-1])
    while True :
        inp = msvcrt.getche()
        if inp.isdigit() :
            inp = int( inp )
            if inp < len( dlist ) :
                print( '' ) # Print the newline we didn't get from getche.
                break
        print( ' is out of range' )
# Select 0 means the current directory and the list is not changed. Otherwise
# the selected directory is moved to the top of the list. This can be done by
# either rotating the whole list until the selection is at the head or pop it
# and insert it to 0. It isn't obvious which would be better for the user but
# since pop-insert is simpler, it is used.
    if inp > 0 :
        dlist.insert( 0, dlist.pop( inp ))

elif cmd == -1 : # -1: dir is the requested new directory.
# If it is already in the list then remove it before inserting it at the head.
# This takes care of both the common case of it having been recently visited
# and the less common case of user mistakenly requesting current, in which
# case it is already at the head. Deleting and putting it back is a trivial
# inefficiency.
    try:
        dlist.pop( indexNoCase( dlist, dir ))
    except ValueError :
        pass
    dlist = dlist[:9] # Control list length by removing older dirs (should be
                      # no more than one).
    dlist.insert( 0, dir ) 

fo.truncate( 0 )
if cmd != 0 : # C: clear the list
    fo.writelines( dlist )

fo.close()
exit(0)