Tag archive: shell

How to call a shell script from Python code?

Question: How to call a shell script from Python code?


Answer 0

The subprocess module will help you out.

Blatantly trivial example:

>>> import subprocess
>>> subprocess.call(['sh', './test.sh']) # Thanks @Jim Dennis for suggesting the []
0 
>>> 

Where test.sh is a simple shell script and 0 is its return value for this run.
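On Python 3.5 and later, the same call is usually written with subprocess.run, which also makes it easy to capture the script’s output. A minimal self-contained sketch (it creates a throwaway test.sh so the example can run anywhere; capture_output needs Python 3.7+):

```python
import subprocess

# Create a tiny test.sh so the example is self-contained.
with open('test.sh', 'w') as f:
    f.write('#!/bin/sh\necho hello\n')

# capture_output=True collects stdout/stderr; text=True decodes
# them from bytes to str.
result = subprocess.run(['sh', './test.sh'], capture_output=True, text=True)
print(result.returncode)  # 0 on success
print(result.stdout)      # hello
```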


Answer 1

There are some ways using os.popen() (deprecated) or the whole subprocess module, but this approach

import os
os.system(command)

is one of the easiest.
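Note that on POSIX systems the value os.system returns is the raw wait status, not the command’s exit code itself; a small sketch of decoding it:

```python
import os

# os.system returns the raw wait status on POSIX; os.WEXITSTATUS
# extracts the command's actual exit code from it.
status = os.system('exit 7')        # stand-in for any shell command
exit_code = os.WEXITSTATUS(status)
print(exit_code)  # 7
```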


Answer 2

In case you want to pass some parameters to your shell script, you can use the method shlex.split():

import subprocess
import shlex
subprocess.call(shlex.split('./test.sh param1 param2'))

with test.sh in the same folder:

#!/bin/sh
echo $1
echo $2
exit 0

Outputs:

$ python test.py 
param1
param2

Answer 3

import os
import sys

Assuming test.sh is the shell script that you would want to execute

os.system("sh test.sh")

Answer 4

Use the subprocess module as mentioned above.

I use it like this:

subprocess.call(["notepad"])

Answer 5

I’m running Python 3.5 and subprocess.call(['./test.sh']) doesn’t work for me.

I’ll give you three solutions, depending on what you want to do with the output.

1 – Call the script. You will see its output in your terminal; output is a number (the return code).

import subprocess 
output = subprocess.call(['test.sh'])

2 – Call the script and capture its output and errors as strings. You don’t see the output in your terminal unless you print(stdout). shell=True as an argument to Popen doesn’t work for me.

import subprocess
from subprocess import Popen, PIPE

session = subprocess.Popen(['test.sh'], stdout=PIPE, stderr=PIPE)
stdout, stderr = session.communicate()

if stderr:
    raise Exception("Error "+str(stderr))

3 – Call the script and dump its output into temp.txt via a temp_file handle, then read it back:

import subprocess
with open("temp.txt", 'w') as temp_file:
    subprocess.call(['./test.sh'], stdout=temp_file)
with open("temp.txt", 'r') as file:
    output = file.read()
print(output)

Don’t forget to take a look at the subprocess docs.


Answer 6

The subprocess module is a good way to launch subprocesses. You can use it to call shell commands like this:

subprocess.call(["ls","-l"]);
#basic syntax
#subprocess.call(args, *)

You can see its documentation here.

If your script is written in a .sh file or is a long string, then you can use the os.system function. It is fairly simple and easy to call:

import os
os.system("your command here")
# or
os.system('sh file.sh')

This command will run the script once, to completion, and block until it exits.


Answer 7

In case the script takes multiple arguments:

#!/usr/bin/python

import subprocess
output = subprocess.call(["./test.sh","xyz","1234"])
print output

The output will give the status code: 0 if the script runs successfully, otherwise a non-zero integer.

podname=xyz  serial=1234
0

Below is the test.sh shell script.

#!/bin/bash

podname=$1
serial=$2
echo "podname=$podname  serial=$serial"

Answer 8

Subprocess is good, but some people may like scriptine better. Scriptine has a more high-level set of methods, like shell.call(args), path.rename(new_name) and path.move(src, dst). Scriptine is based on subprocess and others.

Two drawbacks of scriptine:

  • The current documentation could be more comprehensive, even though it is sufficient.
  • Unlike subprocess, the scriptine package is currently not installed by default.

Answer 9

I know this is an old question, but I stumbled upon it recently and it ended up misguiding me, since the subprocess API has changed since Python 3.5.

The new way to execute external scripts is with the run function, which runs the command described by args, waits for the command to complete, and then returns a CompletedProcess instance.

import subprocess

subprocess.run(['./test.sh'])
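If you also want a non-zero exit status from the script to surface as an error, run accepts check=True, which raises CalledProcessError. A minimal sketch, using sh -c 'exit 3' as a stand-in for a failing script:

```python
import subprocess

# check=True turns a non-zero exit status into an exception
# instead of a silently ignored return code.
try:
    subprocess.run(['sh', '-c', 'exit 3'], check=True)
    failed_code = None
except subprocess.CalledProcessError as err:
    failed_code = err.returncode
    print('script failed with exit code', failed_code)  # 3
```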

Answer 10

If your shell script file does not have execute permission, you can run it through the interpreter in the following way.

import subprocess
subprocess.run(['/bin/bash', './test.sh'])

Answer 11

Please try the following code:

Import Execute 

Execute("zbx_control.sh")

Executing multi-line statements in a one-line command line?

Question: Executing multi-line statements in a one-line command line?

I’m using Python with -c to execute a one-liner loop, i.e.:

$ python -c "for r in range(10): print 'rob'"

This works fine. However, if I import a module before the for loop, I get a syntax error:

$ python -c "import sys; for r in range(10): print 'rob'"
  File "<string>", line 1
    import sys; for r in range(10): print 'rob'
              ^
SyntaxError: invalid syntax

Any idea how this can be fixed?

It’s important to me to have this as a one-liner so that I can include it in a Makefile.


Answer 0

you could do

echo -e "import sys\nfor r in range(10): print 'rob'" | python

or w/out pipes:

python -c "exec(\"import sys\nfor r in range(10): print 'rob'\")"

or

(echo "import sys" ; echo "for r in range(10): print 'rob'") | python

or @SilentGhost’s answer / @Crast’s answer


Answer 1

this style can be used in makefiles too (and in fact it is used quite often).

python - <<EOF
import sys
for r in range(3): print 'rob'
EOF

or

python - <<-EOF
    import sys
    for r in range(3): print 'rob'
EOF

in the latter case, leading tab characters are removed too (so some structured indentation can be achieved)

instead of EOF, any marker word that does not appear at the beginning of a line inside the here document can be used (see also here documents in the bash manpage or here).


Answer 2

The issue is not actually with the import statement, it’s with anything being before the for loop. Or more specifically, anything appearing before an inlined block.

For example, these all work:

python -c "import sys; print 'rob'"
python -c "import sys; sys.stdout.write('rob\n')"

If import being a statement were an issue, this would work, but it doesn’t:

python -c "__import__('sys'); for r in range(10): print 'rob'"

For your very basic example, you could rewrite it as this:

python -c "import sys; map(lambda x: sys.stdout.write('rob%d\n' % x), range(10))"

However, lambdas can only execute expressions, not statements or multiple statements, so you may still be unable to do the thing you want to do. However, between generator expressions, list comprehension, lambdas, sys.stdout.write, the “map” builtin, and some creative string interpolation, you can do some powerful one-liners.

The question is, how far do you want to go, and at what point is it not better to write a small .py file which your makefile executes instead?


Answer 3


– To make this answer work with Python 3.x as well, print is called as a function: in 3.x, only print('foo') works, whereas 2.x also accepts print 'foo'.
– For a cross-platform perspective that includes Windows, see kxr’s helpful answer.

In bash, ksh, or zsh:

Use an ANSI C-quoted string ($'...'), which allows using \n to represent newlines that are expanded to actual newlines before the string is passed to python:

python -c $'import sys\nfor r in range(10): print("rob")'

Note the \n between the import and for statements to effect a line break.

To pass shell-variable values to such a command, it is safest to use arguments and access them via sys.argv inside the Python script:

name='rob' # value to pass to the Python script
python -c $'import sys\nfor r in range(10): print(sys.argv[1])' "$name"

See below for a discussion of the pros and cons of using an (escape sequence-preprocessed) double-quoted command string with embedded shell-variable references.

To work safely with $'...' strings:

  • Double \ instances in your original source code.
    • \<char> sequences – such as \n in this case, but also the usual suspects such as \t, \r, \b – are expanded by $'...' (see man printf for the supported escapes)
  • Escape ' instances as \'.

If you must remain POSIX-compliant:

Use printf with a command substitution:

python -c "$(printf %b 'import sys\nfor r in range(10): print("rob")')"

To work safely with this type of string:

  • Double \ instances in your original source code.
    • \<char> sequences – such as \n in this case, but also the usual suspects such as \t, \r, \b – are expanded by printf (see man printf for the supported escape sequences).
  • Pass a single-quoted string to printf %b and escape embedded single quotes as '\'' (sic).

    • Using single quotes protects the string’s contents from interpretation by the shell.

      • That said, for short Python scripts (as in this case) you can use a double-quoted string to incorporate shell variable values into your scripts – as long as you’re aware of the associated pitfalls (see next point); e.g., the shell expands $HOME to the current user’s home dir. in the following command:

        • python -c "$(printf %b "import sys\nfor r in range(10): print('rob is $HOME')")"
      • However, the generally preferred approach is to pass values from the shell via arguments, and access them via sys.argv in Python; the equivalent of the above command is:

        • python -c "$(printf %b 'import sys\nfor r in range(10): print("rob is " + sys.argv[1])')" "$HOME"
    • While using a double-quoted string is more convenient – it allows you to use embedded single quotes unescaped and embedded double quotes as \" – it also makes the string subject to interpretation by the shell, which may or may not be the intent; $ and ` characters in your source code that are not meant for the shell may cause a syntax error or alter the string unexpectedly.

      • Additionally, the shell’s own \ processing in double-quoted strings can get in the way; for instance, to get Python to produce literal output ro\b, you must pass ro\\b to it; with a '...' shell string and doubled \ instances, we get:
        python -c "$(printf %b 'import sys\nprint("ro\\\\bs")')" # ok: 'ro\bs'
        By contrast, this does not work as intended with a "..." shell string:
        python -c "$(printf %b "import sys\nprint('ro\\\\bs')")" # !! INCORRECT: 'rs'
        The shell interprets both "\b" and "\\b" as literal \b, requiring a dizzying number of additional \ instances to achieve the desired effect:
        python -c "$(printf %b "import sys\nprint('ro\\\\\\\\bs')")"

To pass the code via stdin rather than -c:

Note: I’m focusing on single-line solutions here; xorho’s answer shows how to use a multi-line here-document – be sure to quote the delimiter, however; e.g., <<'EOF', unless you explicitly want the shell to expand the string up front (which comes with the caveats noted above).


In bash, ksh, or zsh:

Combine an ANSI C-quoted string ($'...') with a here-string (<<<...):

python - <<<$'import sys\nfor r in range(10): print("rob")'

- tells python explicitly to read from stdin (which it does by default). - is optional in this case, but if you also want to pass arguments to the scripts, you do need it to disambiguate the argument from a script filename:

python - 'rob' <<<$'import sys\nfor r in range(10): print(sys.argv[1])'

If you must remain POSIX-compliant:

Use printf as above, but with a pipeline so as to pass its output via stdin:

printf %b 'import sys\nfor r in range(10): print("rob")' | python

With an argument:

printf %b 'import sys\nfor r in range(10): print(sys.argv[1])' | python - 'rob'

Answer 4

Any idea how this can be fixed?

Your problem is created by the fact that Python statements, separated by ;, are only allowed to be “small statements”, which are all one-liners. From the grammar file in the Python docs:

stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
             import_stmt | global_stmt | nonlocal_stmt | assert_stmt)

Compound statements can’t be included on the same line with other statements via semicolons – so doing this with the -c flag becomes very inconvenient.

When demonstrating Python while in a bash shell environment, I find it very useful to include compound statements. The only simple way of doing this reliably is with heredocs (a posix shell thing).

Heredocs

Use a heredoc (created with <<) and Python’s command line interface option, -:

$ python - <<-"EOF"
        import sys                    # 1 tab indent
        for r in range(10):           # 1 tab indent
            print('rob')              # 1 tab indent and 4 spaces
EOF

Adding the - after << (the <<-) allows you to use tabs to indent (Stackoverflow converts tabs to spaces, so I’ve indented 8 spaces to emphasize this). The leading tabs will be stripped.

You can do it without the tabs with just <<:

$ python - << "EOF"
import sys
for r in range(10):
    print('rob')
EOF

Putting quotes around EOF prevents parameter and arithmetic expansion. This makes the heredoc more robust.

Bash multiline strings

If you use double-quotes, you’ll get shell-expansion:

$ python -c "
> import sys
> for p in '$PATH'.split(':'):
>     print(p)
> "
/usr/sbin
/usr/bin
/sbin
/bin
...

To avoid shell expansion use single-quotes:

$ python -c '
> import sys
> for p in "$PATH".split(":"):
>     print(p)
> '
$PATH

Note that we need to swap the quote characters on the literals in Python – we basically can’t use quote character being interpreted by BASH. We can alternate them though, like we can in Python – but this already looks quite confusing, which is why I don’t recommend this:

$ python -c '
import sys
for p in "'"$PATH"'".split(":"):
    print(p)
'
/usr/sbin
/usr/bin
/sbin
/bin
...

Critique of the accepted answer (and others)

This is not very readable:

echo -e "import sys\nfor r in range(10): print 'rob'" | python

Not very readable, and additionally difficult to debug in the case of an error:

python -c "exec(\"import sys\\nfor r in range(10): print 'rob'\")"

Perhaps a bit more readable, but still quite ugly:

(echo "import sys" ; echo "for r in range(10): print 'rob'") | python

You’ll have a bad time if you have " characters in your Python:

$ python -c "import sys
> for r in range(10): print 'rob'"

Don’t abuse map or list comprehensions to get for-loops:

python -c "import sys; map(lambda x: sys.stdout.write('rob%d\n' % x), range(10))"

These are all sad and bad. Don’t do them.


Answer 5

just press Return and type the rest on the next line:

user@host:~$ python -c "import sys
> for r in range(10): print 'rob'"
rob
rob
...

Answer 6

$ python2.6 -c "import sys; [sys.stdout.write('rob\n') for r in range(10)]"

Works fine. Use “[ ]” to inline your for loop.


Answer 7

The problem is not with the import statement. The problem is that the control flow statements don’t work inlined in a python command. Replace that import statement with any other statement and you’ll see the same problem.

Think about it: python can’t possibly inline everything. It uses indentation to group control-flow.


Answer 8

If your system is Posix.2 compliant it should supply the printf utility:

$ printf "print 'zap'\nfor r in range(3): print 'rob'" | python
zap
rob
rob
rob

Answer 9

single/double quotes and backslash everywhere:

$ python -c 'exec("import sys\nfor i in range(10): print \"bob\"")'

Much better:

$ python -c '
> import sys
> for i in range(10):
>   print "bob"
> '

Answer 10

(answered Nov 23 ’10 at 19:48) I’m not really a big Pythoner – but I found this syntax once, forgot where from, so I thought I’d document it:

if you use sys.stdout.write instead of print (the difference being, sys.stdout.write takes arguments as a function, in parenthesis – whereas print doesn’t), then for a one-liner, you can get away with inverting the order of the command and the for, removing the semicolon, and enclosing the command in square brackets, i.e.:

python -c "import sys; [sys.stdout.write('rob\n') for r in range(10)]"

Have no idea how this syntax would be called in Python :)

Hope this helps,

Cheers!


(EDIT Tue Apr 9 20:57:30 2013) Well, I think I finally found what these square brackets in one-liners are about; they are “list comprehensions” (apparently); first note this in Python 2.7:

$ STR=abc
$ echo $STR | python -c "import sys,re; a=(sys.stdout.write(line) for line in sys.stdin); print a"
<generator object <genexpr> at 0xb771461c>

So the command in round brackets/parenthesis is seen as a “generator object”; if we “iterate” through it by calling next() – then the command inside the parenthesis will be executed (note the “abc” in the output):

$ echo $STR | python -c "import sys,re; a=(sys.stdout.write(line) for line in sys.stdin); a.next() ; print a"
abc
<generator object <genexpr> at 0xb777b734>

If we now use square brackets – note that we don’t need to call next() to have the command execute, it executes immediately upon assignment; however, later inspection reveals that a is None:

$ echo $STR | python -c "import sys,re; a=[sys.stdout.write(line) for line in sys.stdin]; print a"
abc
[None]

This doesn’t leave much info to look for, for the square brackets case – but I stumbled upon this page which I think explains:

Python Tips And Tricks – First Edition – Python Tutorials | Dream.In.Code:

If you recall, the standard format of a single line generator is a kind of one line ‘for’ loop inside brackets. This will produce a ‘one-shot’ iterable object which is an object you can iterate over in only one direction and which you can’t re-use once you reach the end.

A ‘list comprehension’ looks almost the same as a regular one-line generator, except that the regular brackets – ( ) – are replaced by square brackets – [ ]. The major advantage of a list comprehension is that it produces a ‘list’, rather than a ‘one-shot’ iterable object, so that you can go back and forth through it, add elements, sort, etc.

And indeed it is a list – it’s just that its first element becomes None as soon as it is executed:

$ echo $STR | python -c "import sys,re; print [sys.stdout.write(line) for line in sys.stdin].__class__"
abc
<type 'list'>
$ echo $STR | python -c "import sys,re; print [sys.stdout.write(line) for line in sys.stdin][0]"
abc
None

List comprehensions are otherwise documented in 5. Data Structures: 5.1.4. List Comprehensions — Python v2.7.4 documentation as “List comprehensions provide a concise way to create lists”; presumably, that’s where the limited “executability” of lists comes into play in one-liners.

Well, hope I’m not terribly too off the mark here …

EDIT2: and here is a one-liner command line with two non-nested for-loops; both enclosed within “list comprehension” square brackets:

$ echo $STR | python -c "import sys,re; a=[sys.stdout.write(line) for line in sys.stdin]; b=[sys.stdout.write(str(x)) for x in range(2)] ; print a ; print b"
abc
01[None]
[None, None]

Notice that the second “list” b now has two elements, since its for loop explicitly ran twice; however, the result of sys.stdout.write() in both cases was (apparently) None.


Answer 11

This variant is most portable for putting multi-line scripts on command-line on Windows and *nix, py2/3, without pipes:

python -c "exec(\"import sys \nfor r in range(10): print('rob') \")"

(None of the other examples seen here so far did so)

Neat on Windows is:

python -c exec"""import sys \nfor r in range(10): print 'rob' """
python -c exec("""import sys \nfor r in range(10): print('rob') """)

Neat on bash/*nix is:

python -c $'import sys \nfor r in range(10): print("rob")'

This function turns any multiline-script into a portable command-one-liner:

def py2cmdline(script):
    exs = 'exec(%r)' % re.sub('\r\n|\r', '\n', script.rstrip())
    print('python -c "%s"' % exs.replace('"', r'\"'))

Usage:

>>> py2cmdline(getcliptext())
python -c "exec('print \'AA\tA\'\ntry:\n for i in 1, 2, 3:\n  print i / 0\nexcept:\n print \"\"\"longer\nmessage\"\"\"')"

Input was:

print 'AA   A'
try:
 for i in 1, 2, 3:
  print i / 0
except:
 print """longer
message"""

Answer 12

This script provides a Perl-like command-line interface:

Pyliner – Script to run arbitrary Python code on the command line (Python recipe)


Answer 13

When I needed to do this, I use

python -c "$(echo -e "import sys\nsys.stdout.write('Hello World!\\\n')")"

Note the triple backslash for the newline in the sys.stdout.write statement.


Answer 14

I wanted a solution with the following properties:

  1. Readable
  2. Reads stdin to process the output of other tools

Neither requirement is met by the other answers, so here’s how to read stdin while doing everything on the command line:

grep special_string -r | sort | python3 <(cat <<EOF
import sys
for line in sys.stdin:
    tokens = line.split()
    if len(tokens) == 4:
        print("%-45s %7.3f    %s    %s" % (tokens[0], float(tokens[1]), tokens[2], tokens[3]))
EOF
)

回答 15

还有一种写法:sys.stdout.write 返回 None,这会让列表保持为空

cat somefile.log | python -c "import sys;[line for line in sys.stdin if sys.stdout.write(line*2)]"

there is one more option, sys.stdout.write returns None, which keep the list empty

cat somefile.log|python -c "import sys;[line for line in sys.stdin if sys.stdout.write(line*2)]"

回答 16

如果您不想占用 stdin,并希望模拟出传入 “python cmdfile.py” 的效果,可以在 bash shell 中这样做:

$ python  <(printf "word=raw_input('Enter word: ')\nimport sys\nfor i in range(5):\n    print(word)")

如您所见,它允许您使用stdin读取输入数据。在内部,shell为输入命令内容创建临时文件。

If you don’t want to touch stdin and simulate as if you had passed “python cmdfile.py”, you can do the following from a bash shell:

$ python  <(printf "word=raw_input('Enter word: ')\nimport sys\nfor i in range(5):\n    print(word)")

As you can see, it allows you to use stdin for reading input data. Internally the shell creates the temporary file for the input command contents.


在python shell中按箭头键时看到转义字符

问题:在python shell中按箭头键时看到转义字符

在交互式python shell之类的shell中,通常可以使用箭头键在当前行中移动或获取先前的命令(使用向上箭头)等。

但是,当我 ssh 到另一台机器并在那里启动 python 后,我得到如下会话:

>>> import os 
>>> ^[[A    

最后一个字符来自上箭头。或者,使用左箭头:

>>> impor^[[D

我怎样才能解决这个问题?

在常规bash中,箭头键可以正常工作。奇怪的行为只是在交互式python(或perl等)shell中。

In shells like the interactive python shell, you can usually use the arrow keys to move around in the current line or get previous commands (with arrow-up) etc.

But after I ssh into another machine and start python there, I get sessions like:

>>> import os 
>>> ^[[A    

where the last character comes from arrow-up. Or, using arrow-left:

>>> impor^[[D

How can I fix this?

In the regular bash, arrow keys work fine. The weird behavior is just in the interactive python (or perl etc.) shell.


回答 0

似乎未启用 readline。检查 PYTHONSTARTUP 变量是否已定义;对我而言,它指向 /etc/pythonstart,该文件会在进入交互模式之前由 python 进程执行,从而设置好 readline/历史记录处理。

感谢 @chown,这里是有关此内容的文档:http://docs.python.org/2/tutorial/interactive.html

Looks like readline is not enabled. Check if PYTHONSTARTUP variable is defined, for me it points to /etc/pythonstart and that file is executed by the python process before going interactive, which setups readline/history handling.

Thanks to @chown here is the docs on this: http://docs.python.org/2/tutorial/interactive.html
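To make the mechanism above concrete, here is a hedged sketch of what a PYTHONSTARTUP file might contain (the path ~/.pythonstartup and the history location are illustrative assumptions, not the actual contents of /etc/pythonstart):

```python
# Sketch of a PYTHONSTARTUP file: enables readline key bindings and a
# persistent command history for the interactive interpreter.
import atexit
import os
import readline
import rlcompleter  # noqa: F401 -- importing it registers the tab completer

readline.parse_and_bind("tab: complete")

# Illustrative history location; any writable path would do.
history_path = os.path.expanduser("~/.python_history")
if os.path.exists(history_path):
    readline.read_history_file(history_path)
atexit.register(readline.write_history_file, history_path)
```

Pointing the PYTHONSTARTUP environment variable at such a file (e.g. `export PYTHONSTARTUP=~/.pythonstartup`) makes the interpreter run it before going interactive.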


回答 1

我已经通过安装readline软件包解决了这个问题:

pip install readline

I’ve solved this issue by installing readline package:

pip install readline

回答 2

在 OS X 上,我遇到的是另一个问题。

使用系统自带的 python shell 时按键没有问题,但在 virtualenv 中有问题。我尝试过重新安装/升级 virtualenv 和 readline,都没有解决。

当我在有问题的 python shell 中尝试 import readline 时,收到以下错误消息:

ImportError: dlopen(/Users/raptor/.virtualenvs/bottle/lib/python2.7/lib-dynload/readline.so, 2): Library not loaded: /usr/local/opt/readline/lib/libreadline.6.dylib
Referenced from: /Users/raptor/.virtualenvs/bottle/lib/python2.7/lib-dynload/readline.so
Reason: image not found

因为有/usr/local/opt/readline/lib/libreadline.7.dylib但没有libreadline.6.dylib,所以我做了一个符号链接:

ln -s libreadline.7.dylib libreadline.6.dylib

问题已解决!

On OS X, I had a different problem.

When I used the system python shell, the keys were no problem, but there was a problem in virtualenv. I tried to reinstall/upgrade virtualenv/readline and nothing fixed it.

When I tried to import readline in the problematic python shell, I got this error message:

ImportError: dlopen(/Users/raptor/.virtualenvs/bottle/lib/python2.7/lib-dynload/readline.so, 2): Library not loaded: /usr/local/opt/readline/lib/libreadline.6.dylib
Referenced from: /Users/raptor/.virtualenvs/bottle/lib/python2.7/lib-dynload/readline.so
Reason: image not found

Since there is /usr/local/opt/readline/lib/libreadline.7.dylib but not libreadline.6.dylib, I made a symbolic link:

ln -s libreadline.7.dylib libreadline.6.dylib

Problem has been solved!


回答 3

在 OS X 上,Xcode 更新有时会弄坏 readline。解决方法:

brew uninstall readline
brew upgrade python3
brew install readline
pip3 install readline

如果问题仍然存在,请尝试用 pip 删除 readline,再用 easy_install 安装它:

pip3 uninstall readline
easy_install readline

On OS X, Xcode updates sometimes break readline. Solution:

brew uninstall readline
brew upgrade python3
brew install readline
pip3 install readline

If the problem still persists, try to remove readline using pip and install it using easy_install:

pip3 uninstall readline
easy_install readline

回答 4

在OS X上,使用python 3.5和virtualenv

$ pip install gnureadline

在解释器中执行以下操作:

import gnureadline

现在,箭头键应该可以正常工作。


附加信息…

请注意,自 2015 年 10 月 1 日起,readline 已被弃用(来源:https://github.com/ludwigschwardt/python-readline)

改用 gnureadline(请参阅:https://github.com/ludwigschwardt/python-gnureadline)

在 python 3.5 下,如果我安装的是 readline 而不是 gnureadline,则在解释器中尝试导入后会收到错误:

>>> import readline
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: dlopen(/Users/pi/tmp/python-readline-test/.venv/lib/python3.5/readline.so, 2): Library not loaded: /usr/local/opt/readline/lib/libreadline.6.dylib
  Referenced from: /Users/pi/tmp/python-readline-test/.venv/lib/python3.5/readline.so
  Reason: image not found

On OS X, using python 3.5 and virtualenv

$ pip install gnureadline

In the interpreter do:

import gnureadline

Now arrow keys should work properly.


Additional information…

Note that as of Oct 1, 2015 – readline has been DEPRECATED (source https://github.com/ludwigschwardt/python-readline)

Use gnureadline instead (see: https://github.com/ludwigschwardt/python-gnureadline)

If I install readline instead of gnureadline using python 3.5, I receive errors after attempting to import it in the interpreter:

>>> import readline
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: dlopen(/Users/pi/tmp/python-readline-test/.venv/lib/python3.5/readline.so, 2): Library not loaded: /usr/local/opt/readline/lib/libreadline.6.dylib
  Referenced from: /Users/pi/tmp/python-readline-test/.venv/lib/python3.5/readline.so
  Reason: image not found

回答 5

我最近遇到了这个问题。在查阅了大量关于 pip install readline(在 mac osx 上不可用)和 pip install gnureadline 的资料但仍不满意之后,下面是我现在的设置,它可以在任何 python 控制台中使用箭头键:

  1. 使用 pip install gnureadline 安装 gnureadline

现在您可以执行 import gnureadline,箭头键应该就能按预期工作。要让它们自动生效,请执行以下步骤:

  1. 创建(或追加到)文件 ~/.startup.py:import gnureadline
  2. 追加到文件 ~/.bash_profile:export PYTHONSTARTUP=~/.startup.py

有一件事在我之前的设置中可行、现在却不行:在 pdb.set_trace() 时自动导入 gnureadline。如果有人对这个问题有好的解决方案,我将不胜感激。

I have run into this issue recently and after reading a lot about pip install readline (does not work for mac osx) and pip install gnureadline and not being satisfied, this is now my setup which enables using arrow keys in any python console:

  1. install gnureadline using pip install gnureadline

now you can either do import gnureadline and arrow keys should work as expected. To make them work automatically follow the following steps:

  1. create (or append to) file ~/.startup.py: import gnureadline
  2. append to file ~/.bash_profile: export PYTHONSTARTUP=~/.startup.py

One thing that does not work, but did in my previous setup is: automatic import of gnureadline on pdb.set_trace(). If anyone has a good solution to this problem I would be grateful for a comment.


回答 6

  1. 安装readline-devel软件包。
  2. 用readline模块重新编译python
  3. 答对了!
  1. install readline-devel package.
  2. recompile python with readline module
  3. Bingo!

回答 7

我在 Ubuntu 16.04 LTS 上遇到了 Python 3.6.x 的 shell 历史记录(tab/方向键命令)问题。

Python 3.6.x是从源代码安装的。

对我有效的解决办法是按 user12345 所说,用以下命令安装模块 “gnureadline”:

sudo pip3.6 install gnureadline

:)

I had problems with shell history (tab/arrow commands) of Python 3.6.x on Ubuntu 16.04 LTS.

Python 3.6.x was installed from source.

What solved it for me was installing the module “gnureadline” as said by user12345, using this command line:

sudo pip3.6 install gnureadline

:)


回答 8

这是在ubuntu 12.04 for python 3.3中为我工作的步骤。

1)打开终端并输入 sudo apt-get install libreadline-dev

2)从http://www.python.org/ftp/python/3.3.2/Python-3.3.2.tar.xz下载python 3.3.2的源文件

3)解压缩并导航到Shell中的Python-3.3.2 /目录

4)执行以下命令:

./configure
make
make test
sudo make install

Here are the steps which worked for me in ubuntu 12.04 for python 3.3.

1) open terminal and write sudo apt-get install libreadline-dev

2) download the source file of python 3.3.2 from http://www.python.org/ftp/python/3.3.2/Python-3.3.2.tar.xz

3) extract it and navigate to the Python-3.3.2/ directory in a shell

4) execute the following command:

./configure
make
make test
sudo make install

回答 9

将 Mac 升级到 High Sierra 后受到了影响,下面的做法为我成功解决了问题:

brew unlink python
xcode-select --install
brew install python

Was impacted after upgrading Mac to High Sierra, this successfully resolved it for me:

brew unlink python
xcode-select --install
brew install python

回答 10

在CentOS上,我通过

yum install readline-devel

然后重新编译python 3.4。

在OpenSUSE上,我通过

pip3 install readline

按照Valerio Crini的回答。

也许“ pip3 install readline”是一个通用解决方案。尚未在我的CentOS上尝试过。

On CentOS, I fix this by

yum install readline-devel

and then recompile python 3.4.

On OpenSUSE, I fix this by

pip3 install readline

following Valerio Crini’s answer.

Perhaps “pip3 install readline” is a general solution. Haven’t tried on my CentOS.


回答 11

我通过以下操作解决了这个问题:

  • 百胜安装readline-devel
  • 点安装readline

    • 我在这里遇到另一个错误:

      gcc: readline/libreadline.a: No such file or directory

      gcc: readline/libhistory.a: No such file or directory

      我通过安装 patch 解决了这个问题:

      yum install patch

之后,我成功运行了 pip install readline,我 python shell 中的转义字符问题随之解决。

仅供参考,我正在使用RedHat

I fixed this by doing the following:

  • yum install readline-devel
  • pip install readline

    • I encountered another error here:

      gcc: readline/libreadline.a: No such file or directory

      gcc: readline/libhistory.a: No such file or directory

      I fixed this by installing patch:

      yum install patch

After that I managed to run pip install readline successfully which solved the escape characters in my python shell.

FYI, I’m using RedHat


回答 12

如果使用Anaconda Python,则可以通过运行以下命令解决此问题:

conda install readline

为我工作!

If you use Anaconda Python, you can fix this by running:

conda install readline

Worked for me!


回答 13

对于使用conda的用户,从conda-forge频道安装readline软件包将解决此问题:

conda install -c conda-forge readline=6.2

For those using conda, installing the readline package from conda-forge channel will fix the problem:

conda install -c conda-forge readline=6.2

回答 14

您是否使用-t参数调用ssh 来告诉ssh为您分配虚拟终端?

从手册页:

-t
强制伪tty分配。这可用于在远程计算机上执行任意基于屏幕的程序,这可能非常有用,例如在实现菜单服务时。即使ssh没有本地tty,多个-t选项也会强制tty分配。

另外,您可能还必须按照另一篇文章中的建议,在服务器上正确设置TERM环境变量。

Did you call ssh with the -t parameter to tell ssh to allocate a virtual terminal for you?

From the man page:

-t
Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.

Additionally you may also have to set the TERM environment variable on the server correctly as suggested in another post.


回答 15

在 Mac OS X Mojave 10.14.6 上,经历了各种通过 brew 的历史安装之后,我用以下方法解决了此问题:

brew reinstall python2

鉴于每个人的安装情形各不相同,可能并没有万灵药。我也尝试过上面的方法,所以起作用的可能是几种答案的组合。Brew 默认使用 python3,因此如果您安装过 python2 软件包,它也需要重新安装。

On Mac OS X Mojave 10.14.6 with various historical installs via brew I solved this with:

brew reinstall python2

There is likely no magic bullet given everyone has a different install scenario. I tried the above as well so it may have been a combination of a few of the answers. Brew defaults to python3 so if you installed the python2 package it also needs to be reinstalled.


回答 16

这些答案在两个不同版本的 Ubuntu 上对我都不起作用。对我有效(但并非真正修复)的办法,是用 rlwrap(可在 ubuntu 软件仓库中找到)包装我的 python 调用:

rlwrap python mycode.py

None of these answers worked for me on two different version of Ubuntu. What worked for me, but isn’t a true fix, is wrapping my python code in a call to rlwrap (available in the ubuntu repositories):

rlwrap python mycode.py


回答 17

您是否尝试过使用其他 SSH 客户端?某些 SSH 客户端为不同的远程进程内置了特殊的键映射。我在 emacs 上经常遇到这个问题。

您正在使用什么客户端?我建议尝试使用Putty和SecureCRT来比较它们的行为。

Have you tried using a different SSH client? Some SSH clients have special, built-in keymappings for different remote processes. I ran into this one a lot with emacs.

What client are you using? I’d recommend trying Putty and SecureCRT to compare their behavior.


回答 18

readline 模块已被弃用,在最新的 python 版本中,它会在 python shell 里执行 quit() 或 exit() 时导致无效指针(invalid pointer)错误。请改用 pip install gnureadline。

readline module has been deprecated which will cause invalid pointer error in latest python versions when executing quit() or exit() in python shell. pip install gnureadline instead


回答 19

您的环境变量 $TERM 在 [a] 一切正常时和 [b] 出问题时分别是如何设置的?环境设置通常是解决此类问题的关键。

How’s your env variable $TERM set [a] when things work fine and [b] when they don’t? Env settings are often the key to such problems.


回答 20

尝试在服务器上运行一个键码(key code)库。如果不起作用,请尝试下载一个具有读键(read-key)能力的库。

Try getting a key code library running on the server. If that does not work try to download a library with read-key ability.


回答 21

我试图在 Ubuntu 14.0 上构建 Python 2.7。您将需要 libreadline-dev。但是,如果从 apt-get 获取它,当前版本是 6.3,该版本与 Python 2.7 不兼容(不确定 Python 3)。例如,早期版本 readline 中定义的数据类型 “Function” 和 “CPPFunction” 在 6.3 中已被删除,如此处所报告:

https://github.com/yyuu/pyenv/issues/126

也就是说,您需要获取早期版本 readline 的源代码。我从 apt-get 安装了 libreadline 5.2 作为库,并获取了 5.2 版的源代码以取得头文件,把它们放到 /usr/include 中。

终于问题解决了。

I was trying build Python 2.7 on Ubuntu 14.0. You will need libreadline-dev. However, if you get it from apt-get, the current version is 6.3, which is incompatible with Python 2.7 (not sure about Python 3). For example, the data type “Function” and “CPPFunction”, which were defined in previous versions of readline has been removed in 6.3, as reported here:

https://github.com/yyuu/pyenv/issues/126

That is to say you need to get the source code of an earlier version of readline. I installed libreadline 5.2 from apt-get for the library, and get the source code of 5.2 for the header files. Put them in /usr/include.

Finally the issue has been resolved.


回答 22

在MacOsx上,我通过重新安装readline修复了此问题

brew reinstall readline

On MacOsx, I fixed this by reinstalling readline

brew reinstall readline

子流程命令的实时输出

问题:子流程命令的实时输出

我正在使用 python 脚本作为流体力学代码的驱动程序。到了运行模拟的时候,我使用 subprocess.Popen 运行该代码,并把 stdout 和 stderr 的输出收集到 subprocess.PIPE 中,这样我就可以打印输出信息(并保存到日志文件),并检查其中是否有错误。问题是,我不知道代码运行到了什么进度。如果直接从命令行运行它,它会输出当前迭代次数、当前时间、下一时间步长等信息。

有没有办法既存储输出(用于日志记录和错误检查),又产生实时流输出?

我的代码的相关部分:

ret_val = subprocess.Popen( run_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True )
output, errors = ret_val.communicate()
log_file.write(output)
print output
if( ret_val.returncode ):
    print "RUN failed\n\n%s\n\n" % (errors)
    success = False

if( errors ): log_file.write("\n\n%s\n\n" % errors)

最初,我是把 run_command 通过管道传给 tee,以便把一份副本直接写入日志文件,同时流仍直接输出到终端,但那样的话(据我所知)我无法保存任何错误。


编辑:

临时解决方案:

ret_val = subprocess.Popen( run_command, stdout=log_file, stderr=subprocess.PIPE, shell=True )
while not ret_val.poll():
    log_file.flush()

然后,在另一个终端中运行 tail -f log.txt(假设 log_file = 'log.txt')。

I’m using a python script as a driver for a hydrodynamics code. When it comes time to run the simulation, I use subprocess.Popen to run the code, collect the output from stdout and stderr into a subprocess.PIPE — then I can print (and save to a log-file) the output information, and check for any errors. The problem is, I have no idea how the code is progressing. If I run it directly from the command line, it gives me output about what iteration its at, what time, what the next time-step is, etc.

Is there a way to both store the output (for logging and error checking), and also produce a live-streaming output?

The relevant section of my code:

ret_val = subprocess.Popen( run_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True )
output, errors = ret_val.communicate()
log_file.write(output)
print output
if( ret_val.returncode ):
    print "RUN failed\n\n%s\n\n" % (errors)
    success = False

if( errors ): log_file.write("\n\n%s\n\n" % errors)

Originally I was piping the run_command through tee so that a copy went directly to the log-file, and the stream still output directly to the terminal — but that way I can’t store any errors (to my knowlege).


Edit:

Temporary solution:

ret_val = subprocess.Popen( run_command, stdout=log_file, stderr=subprocess.PIPE, shell=True )
while not ret_val.poll():
    log_file.flush()

then, in another terminal, run tail -f log.txt (s.t. log_file = 'log.txt').


回答 0

您可以通过两种方法执行此操作:一种是基于 read 或 readline 函数创建一个迭代器,然后:

import subprocess
import sys
with open('test.log', 'w') as f:  # replace 'w' with 'wb' for Python 3
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
    for c in iter(lambda: process.stdout.read(1), ''):  # replace '' with b'' for Python 3
        sys.stdout.write(c)
        f.write(c)

要么

import subprocess
import sys
with open('test.log', 'w') as f:  # replace 'w' with 'wb' for Python 3
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
    for line in iter(process.stdout.readline, ''):  # replace '' with b'' for Python 3
        sys.stdout.write(line)
        f.write(line)

或者,您可以创建 reader 和 writer 两个文件对象:把 writer 传给 Popen,再从 reader 中读取:

import io
import time
import subprocess
import sys

filename = 'test.log'
with io.open(filename, 'wb') as writer, io.open(filename, 'rb', 1) as reader:
    process = subprocess.Popen(command, stdout=writer)
    while process.poll() is None:
        sys.stdout.write(reader.read())
        time.sleep(0.5)
    # Read the remaining
    sys.stdout.write(reader.read())

这样,数据既会写入 test.log,也会输出到标准输出。

文件方法的唯一优点是您的代码不会阻塞。因此,您可以在此期间做任何想做的事情,并随时以非阻塞的方式从 reader 读取。而使用 PIPE 时,read 和 readline 函数会分别阻塞,直到有一个字符或一行被写入管道为止。

You have two ways of doing this, either by creating an iterator from the read or readline functions and do:

import subprocess
import sys
with open('test.log', 'w') as f:  # replace 'w' with 'wb' for Python 3
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
    for c in iter(lambda: process.stdout.read(1), ''):  # replace '' with b'' for Python 3
        sys.stdout.write(c)
        f.write(c)

or

import subprocess
import sys
with open('test.log', 'w') as f:  # replace 'w' with 'wb' for Python 3
    process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
    for line in iter(process.stdout.readline, ''):  # replace '' with b'' for Python 3
        sys.stdout.write(line)
        f.write(line)

Or you can create a reader and a writer file. Pass the writer to the Popen and read from the reader

import io
import time
import subprocess
import sys

filename = 'test.log'
with io.open(filename, 'wb') as writer, io.open(filename, 'rb', 1) as reader:
    process = subprocess.Popen(command, stdout=writer)
    while process.poll() is None:
        sys.stdout.write(reader.read())
        time.sleep(0.5)
    # Read the remaining
    sys.stdout.write(reader.read())

This way you will have the data written in the test.log as well as on the standard output.

The only advantage of the file approach is that your code doesn’t block. So you can do whatever you want in the meantime and read whenever you want from the reader in a non-blocking way. When you use PIPE, read and readline functions will block until either one character is written to the pipe or a line is written to the pipe respectively.
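As a small hedged sketch of the blocking behavior described above (this is an illustration assuming Python 3, not code from the answer): a child process emits one line every 0.2 s, and each readline() call blocks until a full line has been written to the pipe, then returns it.

```python
import subprocess
import sys

# Hypothetical child program: prints one line every 0.2 s.
child_code = (
    "import time\n"
    "for i in range(3):\n"
    "    print('tick', i, flush=True)\n"
    "    time.sleep(0.2)\n"
)
proc = subprocess.Popen([sys.executable, "-c", child_code],
                        stdout=subprocess.PIPE, text=True)
lines = []
# readline() blocks until a line arrives; '' signals EOF in text mode.
for line in iter(proc.stdout.readline, ""):
    lines.append(line.rstrip())
proc.wait()
print(lines)  # ['tick 0', 'tick 1', 'tick 2']
```

Each line becomes available to the parent roughly as the child prints it, which is what makes this pattern suitable for live-streaming output.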


回答 1

执行摘要(或 “tl;dr” 版本):最多只有一个 subprocess.PIPE 时很容易,否则就很难。

现在可能是时候解释一下 subprocess.Popen 是如何工作的了。

(注意:这是针对Python 2.x的,尽管3.x相似;并且我对Windows变体很模糊。我对POSIX的了解要好得多。)

Popen功能需要同时处理零到三个I / O流。分别以stdinstdout和表示stderr

您可以提供:

  • None,表示您不想重定向该流。它将照常继承这些流。请注意,至少在 POSIX 系统上,这并不意味着它会使用 Python 的 sys.stdout,而是使用 Python 实际的标准输出;参见文末演示。
  • 一个 int 值。这是一个“原始”文件描述符(至少在 POSIX 上)。(附带说明:PIPE 和 STDOUT 在内部实际上也是 int,但它们是“不可能的”描述符 -1 和 -2。)
  • 一个流,实际上是任何具有 fileno 方法的对象。Popen 会使用 stream.fileno() 找到该流的描述符,然后按 int 值的方式处理。
  • subprocess.PIPE,表示 Python 应该创建一个管道。
  • subprocess.STDOUT(仅适用于 stderr):告诉 Python 使用与 stdout 相同的描述符。只有当您为 stdout 提供了(非 None 的)值时才有意义,而且只有在设置 stdout=subprocess.PIPE 时才需要。(否则,您可以直接提供与 stdout 相同的参数,例如 Popen(..., stdout=stream, stderr=stream)。)

最简单的情况(无管道)

如果不进行任何重定向(将三者都保留为默认值 None,或显式传入 None),Popen 会非常轻松:它只需要派生出子进程并让它运行。或者,如果您重定向到非 PIPE 的目标(一个 int,或某个流的 fileno()),也仍然很容易,因为所有工作都由操作系统完成:Python 只需要派生子进程,并把子进程的 stdin、stdout 和/或 stderr 连接到所提供的文件描述符。

仍然很容易的情况:一个管道

如果仅重定向一个流,Popen 处理起来仍然很简单。让我们一次只看一个流。

假设你想提供一些 stdin,但让 stdout 和 stderr 保持不重定向,或指向某个文件描述符。作为父进程,您的 Python 程序只需要用 write() 向管道发送数据。您可以自己执行此操作,例如:

proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
proc.stdin.write('here, have some data\n') # etc

或者您可以把 stdin 数据传给 proc.communicate(),它会替您执行上面所示的 stdin.write。由于没有输出返回,communicate() 只剩一项实际工作:替您关闭管道。(如果不调用 proc.communicate(),则必须调用 proc.stdin.close() 来关闭管道,这样子进程才知道不会再有数据过来。)

假设你想捕获 stdout,但不去动 stdin 和 stderr。同样很容易:只需调用 proc.stdout.read()(或等效方法),直到没有更多输出为止。由于 proc.stdout 是普通的 Python I/O 流,您可以在其上使用所有常规写法,例如:

for line in proc.stdout:

或者,您也可以使用 proc.communicate(),它会替您完成 read()。

如果只想捕获 stderr,做法与 stdout 相同。

在事情变难之前,还有一个技巧。假设您要捕获 stdout,同时也捕获 stderr,但让 stderr 与 stdout 共用同一个管道:

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

在这种情况下,subprocess 会“作弊”!好吧,它必须这样做,所以也算不上真正的作弊:它在启动子进程时,把子进程的 stdout 和 stderr 都导入同一个管道描述符,该描述符回馈给父进程(Python)。在父进程这边,同样只有一个用于读取输出的管道描述符。所有 “stderr” 输出都会出现在 proc.stdout 中;如果调用 proc.communicate(),stderr 结果(元组中的第二个值)将是 None,而不是字符串。

困难情况:两个或更多管道

当您要使用至少两个管道时,所有问题都会出现。实际上,subprocess 的代码本身就有这样一段:

def communicate(self, input=None):
    ...
    # Optimization: If we are only using one pipe, or no pipe at
    # all, using select() or threads is unnecessary.
    if [self.stdin, self.stdout, self.stderr].count(None) >= 2:

但是,可惜,在这里,我们至少制作了两个(也许三个)不同的管道,因此count(None)返回值为1或0。我们必须用困难的方式做事。

在 Windows 上,它使用 threading.Thread 来累积 self.stdout 和 self.stderr 的结果,并让父线程传送 self.stdin 的输入数据(然后关闭管道)。

在 POSIX 上,它在 poll 可用时使用 poll,否则使用 select,来累积输出并传送 stdin 输入。所有这些都运行在(单个)父进程/线程中。

这里需要线程或 poll/select 来避免死锁。例如,假设我们已将全部三个流重定向到三个单独的管道。再假设管道里能塞进的数据量有一个很小的上限,超过之后写入进程会被挂起,等待读取进程从另一端“清空”管道。为了说明起见,我们把这个上限设为一个字节。(事实上机制就是如此,只不过上限远大于一个字节。)

如果父进程(Python)试图写入多个字节,比如把 'go\n' 写入 proc.stdin,那么第一个字节能写进去,第二个字节就会导致 Python 进程挂起,等待子进程读走第一个字节、清空管道。

与此同时,假设子进程决定打印一句友好的问候 “Hello! Don't Panic!”。H 进入了它的 stdout 管道,但 e 导致它挂起,等待其父进程读走那个 H、清空 stdout 管道。

现在我们卡住了:Python 进程在睡眠,等着把 “go” 说完;子进程也在睡眠,等着把 “Hello! Don't Panic!” 说完。

subprocess.Popen代码避免了线程化或选择/轮询的问题。当字节可以通过管道时,它们就会通过。如果不能,则只有一个线程(而不是整个进程)必须进入睡眠状态;或者,在选择/轮询的情况下,Python进程同时等待“可以写入”或“可用数据”,然后写入该进程的stdin仅在有空间时,并且仅在数据准备就绪时读取其stdout和/或stderr。一旦发送了所有标准输入数据(如果有的话)并且所有标准输出和/或标准错误数据都已存储,则该proc.communicate()代码(实际上_communicate是处理多毛案件的地方)返回。

如果你想在两个不同的管道上同时读取 stdout 和 stderr(无论是否重定向 stdin),同样需要避免死锁。这里的死锁情形有所不同:它发生在你正从 stdout 读取数据、而子进程向 stderr 写入很长内容时(或者反过来),但死锁仍然存在。


演示

我之前承诺演示:未重定向时,python 子进程写入的是底层的标准输出,而不是 sys.stdout。下面是代码:

from cStringIO import StringIO
import os
import subprocess
import sys

def show1():
    print 'start show1'
    save = sys.stdout
    sys.stdout = StringIO()
    print 'sys.stdout being buffered'
    proc = subprocess.Popen(['echo', 'hello'])
    proc.wait()
    in_stdout = sys.stdout.getvalue()
    sys.stdout = save
    print 'in buffer:', in_stdout

def show2():
    print 'start show2'
    save = sys.stdout
    sys.stdout = open(os.devnull, 'w')
    print 'after redirect sys.stdout'
    proc = subprocess.Popen(['echo', 'hello'])
    proc.wait()
    sys.stdout = save

show1()
show2()

运行时:

$ python out.py
start show1
hello
in buffer: sys.stdout being buffered

start show2
hello

请注意,如果添加 stdout=sys.stdout,第一个例程会失败,因为 StringIO 对象没有 fileno;如果添加 stdout=sys.stdout,第二个例程会漏掉 hello,因为 sys.stdout 已被重定向到 os.devnull。

(如果重定向的是 Python 的文件描述符 1,子进程会跟随该重定向。open(os.devnull, 'w') 调用产生的流,其 fileno() 大于 2。)

Executive Summary (or “tl;dr” version): it’s easy when there’s at most one subprocess.PIPE, otherwise it’s hard.

It may be time to explain a bit about how subprocess.Popen does its thing.

(Caveat: this is for Python 2.x, although 3.x is similar; and I’m quite fuzzy on the Windows variant. I understand the POSIX stuff much better.)

The Popen function needs to deal with zero-to-three I/O streams, somewhat simultaneously. These are denoted stdin, stdout, and stderr as usual.

You can provide:

  • None, indicating that you don’t want to redirect the stream. It will inherit these as usual instead. Note that on POSIX systems, at least, this does not mean it will use Python’s sys.stdout, just Python’s actual stdout; see demo at end.
  • An int value. This is a “raw” file descriptor (in POSIX at least). (Side note: PIPE and STDOUT are actually ints internally, but are “impossible” descriptors, -1 and -2.)
  • A stream—really, any object with a fileno method. Popen will find the descriptor for that stream, using stream.fileno(), and then proceed as for an int value.
  • subprocess.PIPE, indicating that Python should create a pipe.
  • subprocess.STDOUT (for stderr only): tell Python to use the same descriptor as for stdout. This only makes sense if you provided a (non-None) value for stdout, and even then, it is only needed if you set stdout=subprocess.PIPE. (Otherwise you can just provide the same argument you provided for stdout, e.g., Popen(..., stdout=stream, stderr=stream).)

The easiest cases (no pipes)

If you redirect nothing (leave all three as the default None value or supply explicit None), Popen has it quite easy. It just needs to spin off the subprocess and let it run. Or, if you redirect to a non-PIPE—an int or a stream’s fileno()—it’s still easy, as the OS does all the work. Python just needs to spin off the subprocess, connecting its stdin, stdout, and/or stderr to the provided file descriptors.

The still-easy case: one pipe

If you redirect only one stream, Popen still has things pretty easy. Let’s pick one stream at a time and watch.

Suppose you want to supply some stdin, but let stdout and stderr go un-redirected, or go to a file descriptor. As the parent process, your Python program simply needs to use write() to send data down the pipe. You can do this yourself, e.g.:

proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
proc.stdin.write('here, have some data\n') # etc

or you can pass the stdin data to proc.communicate(), which then does the stdin.write shown above. There is no output coming back so communicate() has only one other real job: it also closes the pipe for you. (If you don’t call proc.communicate() you must call proc.stdin.close() to close the pipe, so that the subprocess knows there is no more data coming through.)
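As a minimal sketch of that communicate() pattern (assuming Python 3; the child program here is a hypothetical one-liner, not from the answer): one call writes the stdin data, closes the pipe, and collects the child's stdout.

```python
import subprocess
import sys

# Child reads one line from stdin and echoes it uppercased.
proc = subprocess.Popen(
    [sys.executable, "-c", "print(input().upper())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# communicate() does the stdin.write, closes proc.stdin, and reads stdout.
out, _ = proc.communicate("hello\n")
print(out.strip())  # HELLO
```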

Suppose you want to capture stdout but leave stdin and stderr alone. Again, it’s easy: just call proc.stdout.read() (or equivalent) until there is no more output. Since proc.stdout() is a normal Python I/O stream you can use all the normal constructs on it, like:

for line in proc.stdout:

or, again, you can use proc.communicate(), which simply does the read() for you.

If you want to capture only stderr, it works the same as with stdout.

There’s one more trick before things get hard. Suppose you want to capture stdout, and also capture stderr but on the same pipe as stdout:

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

In this case, subprocess “cheats”! Well, it has to do this, so it’s not really cheating: it starts the subprocess with both its stdout and its stderr directed into the (single) pipe-descriptor that feeds back to its parent (Python) process. On the parent side, there’s again only a single pipe-descriptor for reading the output. All the “stderr” output shows up in proc.stdout, and if you call proc.communicate(), the stderr result (second value in the tuple) will be None, not a string.

The hard cases: two or more pipes

The problems all come about when you want to use at least two pipes. In fact, the subprocess code itself has this bit:

def communicate(self, input=None):
    ...
    # Optimization: If we are only using one pipe, or no pipe at
    # all, using select() or threads is unnecessary.
    if [self.stdin, self.stdout, self.stderr].count(None) >= 2:

But, alas, here we’ve made at least two, and maybe three, different pipes, so the count(None) returns either 1 or 0. We must do things the hard way.

On Windows, this uses threading.Thread to accumulate results for self.stdout and self.stderr, and has the parent thread deliver self.stdin input data (and then close the pipe).

On POSIX, this uses poll if available, otherwise select, to accumulate output and deliver stdin input. All this runs in the (single) parent process/thread.

Threads or poll/select are needed here to avoid deadlock. Suppose, for instance, that we’ve redirected all three streams to three separate pipes. Suppose further that there’s a small limit on how much data can be stuffed into to a pipe before the writing process is suspended, waiting for the reading process to “clean out” the pipe from the other end. Let’s set that small limit to a single byte, just for illustration. (This is in fact how things work, except that the limit is much bigger than one byte.)

If the parent (Python) process tries to write several bytes—say, 'go\n'—to proc.stdin, the first byte goes in and then the second causes the Python process to suspend, waiting for the subprocess to read the first byte, emptying the pipe.

Meanwhile, suppose the subprocess decides to print a friendly “Hello! Don’t Panic!” greeting. The H goes into its stdout pipe, but the e causes it to suspend, waiting for its parent to read that H, emptying the stdout pipe.

Now we’re stuck: the Python process is asleep, waiting to finish saying “go”, and the subprocess is also asleep, waiting to finish saying “Hello! Don’t Panic!”.

The subprocess.Popen code avoids this problem with threading-or-select/poll. When bytes can go over the pipes, they go. When they can’t, only a thread (not the whole process) has to sleep—or, in the case of select/poll, the Python process waits simultaneously for “can write” or “data available”, writes to the process’s stdin only when there is room, and reads its stdout and/or stderr only when data are ready. The proc.communicate() code (actually _communicate where the hairy cases are handled) returns once all stdin data (if any) have been sent and all stdout and/or stderr data have been accumulated.

If you want to read both stdout and stderr on two different pipes (regardless of any stdin redirection), you will need to avoid deadlock too. The deadlock scenario here is different—it occurs when the subprocess writes something long to stderr while you’re pulling data from stdout, or vice versa—but it’s still there.
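The safe way to handle the two-pipe case described above is to let communicate() do the threads-or-poll/select work. A hedged sketch (Python 3 assumed; the child program is an illustrative stand-in): the child writes far more than a pipe buffer holds on both streams, yet communicate() drains both without deadlocking.

```python
import subprocess
import sys

# Child writes 100000 bytes to each of stdout and stderr -- more than a
# typical pipe buffer (often 64 KiB), so naive sequential reads could hang.
child_code = (
    "import sys\n"
    "sys.stdout.write('x' * 100000)\n"
    "sys.stderr.write('y' * 100000)\n"
)
proc = subprocess.Popen([sys.executable, "-c", child_code],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        text=True)

# communicate() services both pipes concurrently, avoiding the deadlock.
out, err = proc.communicate()
print(len(out), len(err))  # 100000 100000
```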


The Demo

I promised to demonstrate that, un-redirected, Python subprocesses write to the underlying stdout, not sys.stdout. So, here is some code:

from cStringIO import StringIO
import os
import subprocess
import sys

def show1():
    print 'start show1'
    save = sys.stdout
    sys.stdout = StringIO()
    print 'sys.stdout being buffered'
    proc = subprocess.Popen(['echo', 'hello'])
    proc.wait()
    in_stdout = sys.stdout.getvalue()
    sys.stdout = save
    print 'in buffer:', in_stdout

def show2():
    print 'start show2'
    save = sys.stdout
    sys.stdout = open(os.devnull, 'w')
    print 'after redirect sys.stdout'
    proc = subprocess.Popen(['echo', 'hello'])
    proc.wait()
    sys.stdout = save

show1()
show2()

When run:

$ python out.py
start show1
hello
in buffer: sys.stdout being buffered

start show2
hello

Note that the first routine will fail if you add stdout=sys.stdout, as a StringIO object has no fileno. The second will omit the hello if you add stdout=sys.stdout since sys.stdout has been redirected to os.devnull.

(If you redirect Python’s file-descriptor-1, the subprocess will follow that redirection. The open(os.devnull, 'w') call produces a stream whose fileno() is greater than 2.)


回答 2

我们还可以使用默认的文件迭代器来读取stdout,而不是使用带有readline()的iter构造。

import subprocess
import sys
process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
for line in process.stdout:
    sys.stdout.write(line)

We can also use the default file iterator for reading stdout instead of using iter construct with readline().

import subprocess
import sys
process = subprocess.Popen(your_command, stdout=subprocess.PIPE)
for line in process.stdout:
    sys.stdout.write(line)

回答 3

如果您可以使用第三方库,则可以使用类似 sarge 的东西(披露:我是它的维护者)。该库允许以非阻塞方式访问子进程的输出流,它构建在 subprocess 模块之上。

If you’re able to use third-party libraries, You might be able to use something like sarge (disclosure: I’m its maintainer). This library allows non-blocking access to output streams from subprocesses – it’s layered over the subprocess module.


回答 4

解决方案1:实时并发记录stdoutstderr

一个简单的解决方案,可以同时逐行实时地同时将stdout和stderr 记录到日志文件中。

import subprocess as sp
from concurrent.futures import ThreadPoolExecutor


def log_popen_pipe(p, stdfile):

    with open("mylog.txt", "w") as f:

        while p.poll() is None:
            f.write(stdfile.readline())
            f.flush()

        # Write the rest from the buffer
        f.write(stdfile.read())


with sp.Popen(["ls"], stdout=sp.PIPE, stderr=sp.PIPE, text=True) as p:

    with ThreadPoolExecutor(2) as pool:
        r1 = pool.submit(log_popen_pipe, p, p.stdout)
        r2 = pool.submit(log_popen_pipe, p, p.stderr)
        r1.result()
        r2.result()

解决方案2:一个 read_popen_pipes() 函数,允许您实时并发地迭代两个管道(stdout/stderr)

import subprocess as sp
from queue import Queue, Empty
from concurrent.futures import ThreadPoolExecutor


def enqueue_output(file, queue):
    for line in iter(file.readline, ''):
        queue.put(line)
    file.close()


def read_popen_pipes(p):

    with ThreadPoolExecutor(2) as pool:
        q_stdout, q_stderr = Queue(), Queue()

        pool.submit(enqueue_output, p.stdout, q_stdout)
        pool.submit(enqueue_output, p.stderr, q_stderr)

        while True:

            if p.poll() is not None and q_stdout.empty() and q_stderr.empty():
                break

            out_line = err_line = ''

            try:
                out_line = q_stdout.get_nowait()
                err_line = q_stderr.get_nowait()
            except Empty:
                pass

            yield (out_line, err_line)

# The function in use:

with sp.Popen(my_cmd, stdout=sp.PIPE, stderr=sp.PIPE, text=True) as p:

    for out_line, err_line in read_popen_pipes(p):
        print(out_line, end='')
        print(err_line, end='')

    p.poll()

Solution 1: Log stdout AND stderr concurrently in realtime

A simple solution which logs both stdout AND stderr concurrently, line-by-line in realtime into a log file.

import subprocess as sp
from concurrent.futures import ThreadPoolExecutor


def log_popen_pipe(p, stdfile):

    with open("mylog.txt", "w") as f:

        while p.poll() is None:
            f.write(stdfile.readline())
            f.flush()

        # Write the rest from the buffer
        f.write(stdfile.read())


with sp.Popen(["ls"], stdout=sp.PIPE, stderr=sp.PIPE, text=True) as p:

    with ThreadPoolExecutor(2) as pool:
        r1 = pool.submit(log_popen_pipe, p, p.stdout)
        r2 = pool.submit(log_popen_pipe, p, p.stderr)
        r1.result()
        r2.result()

Solution 2: A function read_popen_pipes() that allows you to iterate over both pipes (stdout/stderr), concurrently in realtime

import subprocess as sp
from queue import Queue, Empty
from concurrent.futures import ThreadPoolExecutor


def enqueue_output(file, queue):
    for line in iter(file.readline, ''):
        queue.put(line)
    file.close()


def read_popen_pipes(p):

    with ThreadPoolExecutor(2) as pool:
        q_stdout, q_stderr = Queue(), Queue()

        pool.submit(enqueue_output, p.stdout, q_stdout)
        pool.submit(enqueue_output, p.stderr, q_stderr)

        while True:

            if p.poll() is not None and q_stdout.empty() and q_stderr.empty():
                break

            out_line = err_line = ''

            try:
                out_line = q_stdout.get_nowait()
                err_line = q_stderr.get_nowait()
            except Empty:
                pass

            yield (out_line, err_line)

# The function in use:

with sp.Popen(["ls"], stdout=sp.PIPE, stderr=sp.PIPE, text=True) as p:

    for out_line, err_line in read_popen_pipes(p):
        print(out_line, end='')
        print(err_line, end='')

    p.poll()


回答 5

一个好的但“重量级”的解决方案是使用 Twisted,参见底部。

如果您只愿意接受标准输出,则应该遵循以下原则:

import subprocess
import sys
popenobj = subprocess.Popen(["ls", "-Rl"], stdout=subprocess.PIPE)
while not popenobj.poll():
   stdoutdata = popenobj.stdout.readline()
   if stdoutdata:
      sys.stdout.write(stdoutdata)
   else:
      break
print "Return code", popenobj.returncode

(如果使用 read(),它会尝试读取整个“文件”,这并没有用;我们这里真正需要的是能读取管道中当前所有数据的方法)

一个人也可以尝试通过线程来解决这个问题,例如:

import subprocess
import sys
import threading

popenobj = subprocess.Popen("ls", stdout=subprocess.PIPE, shell=True)

def stdoutprocess(o):
   while True:
      stdoutdata = o.stdout.readline()
      if stdoutdata:
         sys.stdout.write(stdoutdata)
      else:
         break

t = threading.Thread(target=stdoutprocess, args=(popenobj,))
t.start()
popenobj.wait()
t.join()
print "Return code", popenobj.returncode

现在,我们可以通过两个线程来添加stderr。

但是请注意,subprocess 文档不建议直接使用这些文件对象,而建议使用 communicate()(主要是出于对死锁的担忧,我认为上面的代码没有这个问题);这些解决方案都有点笨拙,因此 subprocess 模块似乎还不足以胜任这项工作(另请参见:http://www.python.org/dev/peps/pep-3145/),我们需要看看其他方案。

一个更复杂的解决方案是使用Twisted,如下所示:https://twistedmatrix.com/documents/11.1.0/core/howto/process.html

在 Twisted 中的做法是:使用 reactor.spawnProcess() 创建进程,并提供一个 ProcessProtocol 来异步处理输出。Twisted 的示例 Python 代码在这里:https://twistedmatrix.com/documents/11.1.0/core/howto/listings/process/process.py

A good but “heavyweight” solution is to use Twisted – see the bottom.

If you’re willing to live with only stdout something along those lines should work:

import subprocess
import sys
popenobj = subprocess.Popen(["ls", "-Rl"], stdout=subprocess.PIPE)
while not popenobj.poll():
   stdoutdata = popenobj.stdout.readline()
   if stdoutdata:
      sys.stdout.write(stdoutdata)
   else:
      break
print "Return code", popenobj.returncode

(If you use read() it tries to read the entire “file” which isn’t useful, what we really could use here is something that reads all the data that’s in the pipe right now)

One might also try to approach this with threading, e.g.:

import subprocess
import sys
import threading

popenobj = subprocess.Popen("ls", stdout=subprocess.PIPE, shell=True)

def stdoutprocess(o):
   while True:
      stdoutdata = o.stdout.readline()
      if stdoutdata:
         sys.stdout.write(stdoutdata)
      else:
         break

t = threading.Thread(target=stdoutprocess, args=(popenobj,))
t.start()
popenobj.wait()
t.join()
print "Return code", popenobj.returncode

Now we could potentially add stderr as well by having two threads.

Note however the subprocess docs discourage using these files directly and recommends to use communicate() (mostly concerned with deadlocks which I think isn’t an issue above) and the solutions are a little klunky so it really seems like the subprocess module isn’t quite up to the job (also see: http://www.python.org/dev/peps/pep-3145/ ) and we need to look at something else.

A more involved solution is to use Twisted as shown here: https://twistedmatrix.com/documents/11.1.0/core/howto/process.html

The way you do this with Twisted is to create your process using reactor.spawnProcess() and providing a ProcessProtocol that then processes output asynchronously. The Twisted sample Python code is here: https://twistedmatrix.com/documents/11.1.0/core/howto/listings/process/process.py


回答 6

除了所有这些答案之外,一种简单的方法还可以如下:

process = subprocess.Popen(your_command, stdout=subprocess.PIPE)

while process.stdout.readable():
    line = process.stdout.readline()

    if not line:
        break

    print(line.strip())

只要流可读就循环读取,一旦得到空结果就停止。

这里的关键是:只要还有输出,readline() 就返回一行(末尾带有 \n);到达真正的末尾时则返回空字符串。

希望这对某人有帮助。

In addition to all these answer, one simple approach could also be as follows:

process = subprocess.Popen(your_command, stdout=subprocess.PIPE)

while process.stdout.readable():
    line = process.stdout.readline()

    if not line:
        break

    print(line.strip())

Loop through the readable stream as long as it’s readable and if it gets an empty result, stop.

The key here is that readline() returns a line (with \n at the end) as long as there’s an output and empty if it’s really at the end.

Hope this helps someone.
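To see the readline-until-empty pattern in action, here is a minimal runnable sketch; the two-line interpreter call is a made-up stand-in for a real command:

```python
import subprocess
import sys

# Stand-in command: the interpreter printing two lines.
process = subprocess.Popen(
    [sys.executable, '-c', "print('a'); print('b')"],
    stdout=subprocess.PIPE,
    text=True,
)
collected = []
while True:
    line = process.stdout.readline()
    if not line:  # empty string is only returned at end-of-stream
        break
    collected.append(line.strip())
process.wait()
```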


回答 7

基于以上所有内容,我建议您对版本进行略微修改(python3):

  • while 循环调用 readline(对我来说,建议的 iter 解决方案似乎会永远阻塞;Python 3,Windows 7)
  • 经过结构化处理,因此在 poll 返回非 None 之后,不需要重复处理已读数据
  • 将stderr传递到stdout,以便读取两个输出
  • 添加了代码以获取cmd的退出值。

码:

import subprocess
import time
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, universal_newlines=True)
while True:
    rd = proc.stdout.readline()
    print(rd, end='')  # and whatever you want to do...
    if not rd:  # EOF
        returncode = proc.poll()
        if returncode is not None:
            break
        time.sleep(0.1)  # cmd closed stdout, but not exited yet

# You may want to check on ReturnCode here

Based on all the above I suggest a slightly modified version (python3):

  • while loop calling readline (The iter solution suggested seemed to block forever for me – Python 3, Windows 7)
  • structered so handling of read data does not need to be duplicated after poll returns not-None
  • stderr piped into stdout so both output outputs are read
  • Added code to get exit value of cmd.

Code:

import subprocess
import time
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, universal_newlines=True)
while True:
    rd = proc.stdout.readline()
    print(rd, end='')  # and whatever you want to do...
    if not rd:  # EOF
        returncode = proc.poll()
        if returncode is not None:
            break
        time.sleep(0.1)  # cmd closed stdout, but not exited yet

# You may want to check on ReturnCode here
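A quick runnable check of the loop above, with a hypothetical stand-in for cmd that writes to both streams (stderr merged into stdout as in the answer):

```python
import subprocess
import sys

# Made-up stand-in for cmd: writes one line to stdout and one to stderr.
proc = subprocess.Popen(
    [sys.executable, '-c',
     "import sys; print('out'); sys.stderr.write('err\\n')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # merge stderr into the stdout pipe
    universal_newlines=True,
)
seen = []
while True:
    rd = proc.stdout.readline()
    if not rd and proc.poll() is not None:  # EOF and process exited
        break
    if rd:
        seen.append(rd.strip())
```

Note the relative order of the two lines is not guaranteed (the child's stdout is block-buffered when piped, stderr is not), which is exactly why merging the streams is convenient here.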

回答 8

看起来行缓冲输出适合您的情况,那么类似下面的代码可能就够用了。(注意:未经测试。)这只会实时提供子进程的标准输出。如果您想同时实时获得 stderr 和 stdout,则必须使用 select 进行更复杂的操作。

proc = subprocess.Popen(run_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
while proc.poll() is None:
    line = proc.stdout.readline()
    print line
    log_file.write(line + '\n')
# Might still be data on stdout at this point.  Grab any
# remainder.
for line in proc.stdout.read().split('\n'):
    print line
    log_file.write(line + '\n')
# Do whatever you want with proc.stderr here...

It looks like line-buffered output will work for you, in which case something like the following might suit. (Caveat: it’s untested.) This will only give the subprocess’s stdout in real time. If you want to have both stderr and stdout in real time, you’ll have to do something more complex with select.

proc = subprocess.Popen(run_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
while proc.poll() is None:
    line = proc.stdout.readline()
    print line
    log_file.write(line + '\n')
# Might still be data on stdout at this point.  Grab any
# remainder.
for line in proc.stdout.read().split('\n'):
    print line
    log_file.write(line + '\n')
# Do whatever you want with proc.stderr here...

回答 9

为什么不直接将 stdout 设置为 sys.stdout?而且,如果还需要输出到日志,则可以简单地覆盖 f 的 write 方法。

import sys
import subprocess

class SuperFile(open.__class__):

    def write(self, data):
        sys.stdout.write(data)
        super(SuperFile, self).write(data)

f = SuperFile("log.txt","w+")       
process = subprocess.Popen(command, stdout=f, stderr=f)

Why not set stdout directly to sys.stdout? And if you need to output to a log as well, then you can simply override the write method of f.

import sys
import subprocess

class SuperFile(open.__class__):

    def write(self, data):
        sys.stdout.write(data)
        super(SuperFile, self).write(data)

f = SuperFile("log.txt","w+")       
process = subprocess.Popen(command, stdout=f, stderr=f)

回答 10

我尝试过的上述所有解决方案,要么无法分离 stderr 和 stdout 输出(多个管道),要么在 OS 管道缓冲区已满时永远阻塞;当您运行的命令输出太快时就会发生后一种情况(subprocess 手册中关于 poll() 有相应警告)。我找到的唯一可靠方法是通过 select,但这是仅限 posix 的解决方案:

import subprocess
import sys
import os
import select
from errno import EINTR
# returns command exit status, stdout text, stderr text
# rtoutput: show realtime output while running
def run_script(cmd,rtoutput=0):
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    poller = select.poll()
    poller.register(p.stdout, select.POLLIN)
    poller.register(p.stderr, select.POLLIN)

    coutput=''
    cerror=''
    fdhup={}
    fdhup[p.stdout.fileno()]=0
    fdhup[p.stderr.fileno()]=0
    while sum(fdhup.values()) < len(fdhup):
        try:
            r = poller.poll(1)
        except select.error, err:
            if err.args[0] != EINTR:
                raise
            r=[]
        for fd, flags in r:
            if flags & (select.POLLIN | select.POLLPRI):
                c = os.read(fd, 1024)
                if rtoutput:
                    sys.stdout.write(c)
                    sys.stdout.flush()
                if fd == p.stderr.fileno():
                    cerror+=c
                else:
                    coutput+=c
            else:
                fdhup[fd]=1
    return p.poll(), coutput.strip(), cerror.strip()

All of the above solutions I tried failed either to separate stderr and stdout output, (multiple pipes) or blocked forever when the OS pipe buffer was full which happens when the command you are running outputs too fast (there is a warning for this on python poll() manual of subprocess). The only reliable way I found was through select, but this is a posix-only solution:

import subprocess
import sys
import os
import select
from errno import EINTR
# returns command exit status, stdout text, stderr text
# rtoutput: show realtime output while running
def run_script(cmd,rtoutput=0):
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    poller = select.poll()
    poller.register(p.stdout, select.POLLIN)
    poller.register(p.stderr, select.POLLIN)

    coutput=''
    cerror=''
    fdhup={}
    fdhup[p.stdout.fileno()]=0
    fdhup[p.stderr.fileno()]=0
    while sum(fdhup.values()) < len(fdhup):
        try:
            r = poller.poll(1)
        except select.error, err:
            if err.args[0] != EINTR:
                raise
            r=[]
        for fd, flags in r:
            if flags & (select.POLLIN | select.POLLPRI):
                c = os.read(fd, 1024)
                if rtoutput:
                    sys.stdout.write(c)
                    sys.stdout.flush()
                if fd == p.stderr.fileno():
                    cerror+=c
                else:
                    coutput+=c
            else:
                fdhup[fd]=1
    return p.poll(), coutput.strip(), cerror.strip()
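For Python 3, the same idea can be sketched with the higher-level selectors module (still POSIX-only when selecting on pipes; the child command here is a made-up stand-in):

```python
import os
import selectors
import subprocess
import sys

# Stand-in child: writes one byte to stdout and one to stderr.
proc = subprocess.Popen(
    [sys.executable, '-c',
     "import sys; sys.stdout.write('o'); sys.stderr.write('e')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
sel = selectors.DefaultSelector()
sel.register(proc.stdout, selectors.EVENT_READ, 'stdout')
sel.register(proc.stderr, selectors.EVENT_READ, 'stderr')
out, err, open_pipes = b'', b'', 2
while open_pipes:
    for key, _ in sel.select():
        chunk = os.read(key.fd, 1024)
        if not chunk:  # empty read means EOF on this pipe
            sel.unregister(key.fileobj)
            open_pipes -= 1
        elif key.data == 'stdout':
            out += chunk
        else:
            err += chunk
proc.wait()
```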

回答 11

与先前的答案类似,但以下解决方案在使用 Python3 的 Windows 上为我提供了一种实时打印并写入日志文件的通用方法(getting-realtime-output-using-python):

def print_and_log(command, logFile):
    with open(logFile, 'wb') as f:
        command = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)

        while True:
            output = command.stdout.readline()
            if not output and command.poll() is not None:
                f.close()
                break
            if output:
                f.write(output)
                print(str(output.strip(), 'utf-8'), flush=True)
        return command.poll()

Similar to previous answers but the following solution worked for for me on windows using Python3 to provide a common method to print and log in realtime (getting-realtime-output-using-python):

def print_and_log(command, logFile):
    with open(logFile, 'wb') as f:
        command = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)

        while True:
            output = command.stdout.readline()
            if not output and command.poll() is not None:
                f.close()
                break
            if output:
                f.write(output)
                print(str(output.strip(), 'utf-8'), flush=True)
        return command.poll()

回答 12

我认为 subprocess.communicate 方法有点误导:它实际上是把输出填充到您在 subprocess.Popen 中指定的 stdout 和 stderr 里。

但是,从您提供给 subprocess.Popen 的 stdout 和 stderr 参数的 subprocess.PIPE 中读取数据,最终会填满 OS 管道缓冲区并使您的应用死锁(特别是当您有多个必须使用 subprocess 的进程/线程时)。

我提出的解决方案是为 stdout 和 stderr 提供文件,并读取这些文件的内容,而不是从会死锁的 PIPE 中读取。这些文件可以是 tempfile.NamedTemporaryFile(),在 subprocess.communicate 写入它们的同时也可以读取。

以下是示例用法:

        try:
            with ProcessRunner(('python', 'task.py'), env=os.environ.copy(), seconds_to_wait=0.01) as process_runner:
                for out in process_runner:
                    print(out)
        except ProcessError as e:
            print(e.error_message)
            raise

下面是可直接使用的源代码,并附有我能提供的尽可能多的注释来解释它的作用:

如果您使用的是python 2,请确保首先从pypi 安装最新版本的subprocess32软件包。


import os
import sys
import threading
import time
import tempfile
import logging

if os.name == 'posix' and sys.version_info[0] < 3:
    # Support python 2
    import subprocess32 as subprocess
else:
    # Get latest and greatest from python 3
    import subprocess

logger = logging.getLogger(__name__)


class ProcessError(Exception):
    """Base exception for errors related to running the process"""


class ProcessTimeout(ProcessError):
    """Error that will be raised when the process execution will exceed a timeout"""


class ProcessRunner(object):
    def __init__(self, args, env=None, timeout=None, bufsize=-1, seconds_to_wait=0.25, **kwargs):
        """
        Constructor facade to subprocess.Popen that receives parameters which are more specifically required for the
        Process Runner. This is a class that should be used as a context manager - and that provides an iterator
        for reading captured output from subprocess.communicate in near realtime.

        Example usage:


        try:
            with ProcessRunner(('python', task_file_path), env=os.environ.copy(), seconds_to_wait=0.01) as process_runner:
                for out in process_runner:
                    print(out)
        except ProcessError as e:
            print(e.error_message)
            raise

        :param args: same as subprocess.Popen
        :param env: same as subprocess.Popen
        :param timeout: same as subprocess.communicate
        :param bufsize: same as subprocess.Popen
        :param seconds_to_wait: time to wait between each readline from the temporary file
        :param kwargs: same as subprocess.Popen
        """
        self._seconds_to_wait = seconds_to_wait
        self._process_has_timed_out = False
        self._timeout = timeout
        self._process_done = False
        self._std_file_handle = tempfile.NamedTemporaryFile()
        self._process = subprocess.Popen(args, env=env, bufsize=bufsize,
                                         stdout=self._std_file_handle, stderr=self._std_file_handle, **kwargs)
        self._thread = threading.Thread(target=self._run_process)
        self._thread.daemon = True

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._thread.join()
        self._std_file_handle.close()

    def __iter__(self):
        # read all output from stdout file that subprocess.communicate fills
        with open(self._std_file_handle.name, 'r') as stdout:
            # while process is alive, keep reading data
            while not self._process_done:
                out = stdout.readline()
                out_without_trailing_whitespaces = out.rstrip()
                if out_without_trailing_whitespaces:
                    # yield stdout data without trailing \n
                    yield out_without_trailing_whitespaces
                else:
                    # if there is nothing to read, then please wait a tiny little bit
                    time.sleep(self._seconds_to_wait)

            # this is a hack: terraform seems to write to buffer after process has finished
            out = stdout.read()
            if out:
                yield out

        if self._process_has_timed_out:
            raise ProcessTimeout('Process has timed out')

        if self._process.returncode != 0:
            raise ProcessError('Process has failed')

    def _run_process(self):
        try:
            # Start gathering information (stdout and stderr) from the opened process
            self._process.communicate(timeout=self._timeout)
            # Graceful termination of the opened process
            self._process.terminate()
        except subprocess.TimeoutExpired:
            self._process_has_timed_out = True
            # Force termination of the opened process
            self._process.kill()

        self._process_done = True

    @property
    def return_code(self):
        return self._process.returncode


I think that the subprocess.communicate method is a bit misleading: it actually fills the stdout and stderr that you specify in the subprocess.Popen.

Yet, reading from the subprocess.PIPE that you can provide to the subprocess.Popen‘s stdout and stderr parameters will eventually fill up OS pipe buffers and deadlock your app (especially if you’ve multiple processes/threads that must use subprocess).

My proposed solution is to provide the stdout and stderr with files – and read the files’ content instead of reading from the deadlocking PIPE. These files can be tempfile.NamedTemporaryFile() – which can also be accessed for reading while they’re being written into by subprocess.communicate.

Below is a sample usage:

        try:
            with ProcessRunner(('python', 'task.py'), env=os.environ.copy(), seconds_to_wait=0.01) as process_runner:
                for out in process_runner:
                    print(out)
        except ProcessError as e:
            print(e.error_message)
            raise

And this is the source code which is ready to be used with as many comments as I could provide to explain what it does:

If you’re using python 2, please make sure to first install the latest version of the subprocess32 package from pypi.


import os
import sys
import threading
import time
import tempfile
import logging

if os.name == 'posix' and sys.version_info[0] < 3:
    # Support python 2
    import subprocess32 as subprocess
else:
    # Get latest and greatest from python 3
    import subprocess

logger = logging.getLogger(__name__)


class ProcessError(Exception):
    """Base exception for errors related to running the process"""


class ProcessTimeout(ProcessError):
    """Error that will be raised when the process execution will exceed a timeout"""


class ProcessRunner(object):
    def __init__(self, args, env=None, timeout=None, bufsize=-1, seconds_to_wait=0.25, **kwargs):
        """
        Constructor facade to subprocess.Popen that receives parameters which are more specifically required for the
        Process Runner. This is a class that should be used as a context manager - and that provides an iterator
        for reading captured output from subprocess.communicate in near realtime.

        Example usage:


        try:
            with ProcessRunner(('python', task_file_path), env=os.environ.copy(), seconds_to_wait=0.01) as process_runner:
                for out in process_runner:
                    print(out)
        except ProcessError as e:
            print(e.error_message)
            raise

        :param args: same as subprocess.Popen
        :param env: same as subprocess.Popen
        :param timeout: same as subprocess.communicate
        :param bufsize: same as subprocess.Popen
        :param seconds_to_wait: time to wait between each readline from the temporary file
        :param kwargs: same as subprocess.Popen
        """
        self._seconds_to_wait = seconds_to_wait
        self._process_has_timed_out = False
        self._timeout = timeout
        self._process_done = False
        self._std_file_handle = tempfile.NamedTemporaryFile()
        self._process = subprocess.Popen(args, env=env, bufsize=bufsize,
                                         stdout=self._std_file_handle, stderr=self._std_file_handle, **kwargs)
        self._thread = threading.Thread(target=self._run_process)
        self._thread.daemon = True

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._thread.join()
        self._std_file_handle.close()

    def __iter__(self):
        # read all output from stdout file that subprocess.communicate fills
        with open(self._std_file_handle.name, 'r') as stdout:
            # while process is alive, keep reading data
            while not self._process_done:
                out = stdout.readline()
                out_without_trailing_whitespaces = out.rstrip()
                if out_without_trailing_whitespaces:
                    # yield stdout data without trailing \n
                    yield out_without_trailing_whitespaces
                else:
                    # if there is nothing to read, then please wait a tiny little bit
                    time.sleep(self._seconds_to_wait)

            # this is a hack: terraform seems to write to buffer after process has finished
            out = stdout.read()
            if out:
                yield out

        if self._process_has_timed_out:
            raise ProcessTimeout('Process has timed out')

        if self._process.returncode != 0:
            raise ProcessError('Process has failed')

    def _run_process(self):
        try:
            # Start gathering information (stdout and stderr) from the opened process
            self._process.communicate(timeout=self._timeout)
            # Graceful termination of the opened process
            self._process.terminate()
        except subprocess.TimeoutExpired:
            self._process_has_timed_out = True
            # Force termination of the opened process
            self._process.kill()

        self._process_done = True

    @property
    def return_code(self):
        return self._process.returncode
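The core trick of the class above, reading a named temporary file while the subprocess writes into it, can be sketched in a few self-contained lines (the child command is a made-up stand-in; reopening the temp file by name works on POSIX):

```python
import subprocess
import sys
import tempfile

# Give Popen a temp file for its output, then read that file by name.
tmp = tempfile.NamedTemporaryFile()
proc = subprocess.Popen(
    [sys.executable, '-c', "print('from child')"],
    stdout=tmp, stderr=subprocess.STDOUT,
)
proc.wait()
with open(tmp.name) as reader:
    content = reader.read()
tmp.close()  # deletes the temporary file
```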




回答 13

这是我在一个项目中使用的类。它将子进程的输出重定向到日志。起初,我尝试简单地重写 write 方法,但这行不通,因为子进程永远不会调用它(重定向发生在文件描述符层面)。因此,我使用了自己的管道,类似于 subprocess 模块内部的做法。这样做的好处是把所有日志/打印逻辑封装在适配器中,您只需将记录器实例传递给 Popen:subprocess.Popen("/path/to/binary", stderr = LogAdapter("foo"))

class LogAdapter(threading.Thread):

    def __init__(self, logname, level = logging.INFO):
        super().__init__()
        self.log = logging.getLogger(logname)
        self.readpipe, self.writepipe = os.pipe()

        logFunctions = {
            logging.DEBUG: self.log.debug,
            logging.INFO: self.log.info,
            logging.WARN: self.log.warn,
            logging.ERROR: self.log.warn,
        }

        try:
            self.logFunction = logFunctions[level]
        except KeyError:
            self.logFunction = self.log.info

    def fileno(self):
        #when fileno is called this indicates the subprocess is about to fork => start thread
        self.start()
        return self.writepipe

    def finished(self):
       """If the write-filedescriptor is not closed this thread will
       prevent the whole program from exiting. You can use this method
       to clean up after the subprocess has terminated."""
       os.close(self.writepipe)

    def run(self):
        inputFile = os.fdopen(self.readpipe)

        while True:
            line = inputFile.readline()

            if len(line) == 0:
                #no new data was added
                break

            self.logFunction(line.strip())

如果您不需要日志记录而只想使用 print(),显然可以删除大部分代码,让这个类更简短。您还可以为其添加 __enter__ 和 __exit__ 方法,并在 __exit__ 中调用 finished,这样就可以方便地把它当作上下文管理器使用。

Here is a class which I’m using in one of my projects. It redirects output of a subprocess to the log. At first I tried simply overwriting the write-method but that doesn’t work as the subprocess will never call it (redirection happens on filedescriptor level). So I’m using my own pipe, similar to how it’s done in the subprocess-module. This has the advantage of encapsulating all logging/printing logic in the adapter and you can simply pass instances of the logger to Popen: subprocess.Popen("/path/to/binary", stderr = LogAdapter("foo"))

class LogAdapter(threading.Thread):

    def __init__(self, logname, level = logging.INFO):
        super().__init__()
        self.log = logging.getLogger(logname)
        self.readpipe, self.writepipe = os.pipe()

        logFunctions = {
            logging.DEBUG: self.log.debug,
            logging.INFO: self.log.info,
            logging.WARN: self.log.warn,
            logging.ERROR: self.log.warn,
        }

        try:
            self.logFunction = logFunctions[level]
        except KeyError:
            self.logFunction = self.log.info

    def fileno(self):
        #when fileno is called this indicates the subprocess is about to fork => start thread
        self.start()
        return self.writepipe

    def finished(self):
       """If the write-filedescriptor is not closed this thread will
       prevent the whole program from exiting. You can use this method
       to clean up after the subprocess has terminated."""
       os.close(self.writepipe)

    def run(self):
        inputFile = os.fdopen(self.readpipe)

        while True:
            line = inputFile.readline()

            if len(line) == 0:
                #no new data was added
                break

            self.logFunction(line.strip())

If you don’t need logging but simply want to use print() you can obviously remove large portions of the code and keep the class shorter. You could also expand it by an __enter__ and __exit__ method and call finished in __exit__ so that you could easily use it as context.
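Stripped of the logging machinery, the underlying pipe trick looks roughly like this (a sketch; the child command is a made-up stand-in):

```python
import os
import subprocess
import sys
import threading

# Hand Popen the write end of our own pipe and consume lines in a thread.
read_fd, write_fd = os.pipe()
lines = []

def reader():
    with os.fdopen(read_fd) as f:
        for line in f:
            lines.append(line.strip())

t = threading.Thread(target=reader)
t.start()
proc = subprocess.Popen(
    [sys.executable, '-c', "import sys; sys.stderr.write('oops\\n')"],
    stderr=write_fd,  # Popen accepts a raw file descriptor here
)
proc.wait()
os.close(write_fd)  # close our copy so the reader thread sees EOF
t.join()
```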


回答 14

没有一种 Pythonic 解决方案对我有效。事实证明,proc.stdout.read() 或类似调用可能会永远阻塞。

因此,我这样使用tee

subprocess.run('./my_long_running_binary 2>&1 | tee -a my_log_file.txt && exit ${PIPESTATUS}', shell=True, check=True, executable='/bin/bash')

如果您已经在使用 shell=True,此解决方案会非常方便。

${PIPESTATUS}捕获整个命令链的成功状态(仅在Bash中可用)。如果我省略&& exit ${PIPESTATUS},则它将始终返回零,因为tee从不失败。

unbuffer可能需要立即将每行打印到终端中,而不是等待太久直到“管道缓冲区”填满。但是,unbuffer吞没了assert(SIG Abort)的退出状态。

2>&1 还将stderror记录到文件中。

None of the Pythonic solutions worked for me. It turned out that proc.stdout.read() or similar may block forever.

Therefore, I use tee like this:

subprocess.run('./my_long_running_binary 2>&1 | tee -a my_log_file.txt && exit ${PIPESTATUS}', shell=True, check=True, executable='/bin/bash')

This solution is convenient if you are already using shell=True.

${PIPESTATUS} captures the success status of the entire command chain (only available in Bash). If I omitted the && exit ${PIPESTATUS}, then this would always return zero since tee never fails.

unbuffer might be necessary for printing each line immediately into the terminal, instead of waiting way too long until the “pipe buffer” gets filled. However, unbuffer swallows the exit status of assert (SIG Abort)…

2>&1 also logs stderror to the file.



如何在Python中实现常见的bash习惯用法?[关闭]

问题:如何在Python中实现常见的bash习惯用法?[关闭]

目前,我通过一堆不记得的AWK,sed,Bash和一小部分Perl对文本文件进行操作。

我见过提到python在这种情况下有好处的几个地方。如何使用Python替换Shell脚本,AWK,sed和朋友?

I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl.

I’ve seen mentioned a few places that python is good for this kind of thing. How can I use Python to replace shell scripting, AWK, sed and friends?


回答 0

任何外壳程序都有几套功能。

  • 基本的Linux / Unix命令。所有这些都可以通过子流程库获得。对于执行所有外部命令,这并不总是最好的首选。还要查看shutil中的一些命令,这些命令是独立的Linux命令,但是您可以直接在Python脚本中实现。os库中还有另一批Linux命令。您可以在Python中更简单地完成这些操作。

    还有(额外的好处!):更快速。外壳程序中的每个单独的 Linux 命令(除少数例外)都会派生一个子进程。通过使用 Python 的 shutil 和 os 模块,您无需派生子进程。

  • 外壳环境功能。这包括设置命令环境的内容(当前目录和环境变量以及诸如此类)。您可以直接从Python轻松地对此进行管理。

  • Shell 编程功能。这包括所有的进程状态码检查、各种逻辑命令(if、while、for 等)、test 命令及其所有亲属,以及函数定义等内容。在 Python 中,这一切都容易得多。这是摆脱 bash 改用 Python 的巨大胜利之一。

  • 互动功能。这包括命令历史记录之类的东西。编写 shell 脚本不需要这些;它们仅用于人机交互,而不用于脚本编写。

  • Shell 文件管理功能。这包括重定向和管道。这比较棘手。其中大部分可以通过 subprocess 来完成。但是一些在 shell 中很容易的操作在 Python 中就不那么愉快了。具体来说就是像 (a | b; c ) | something >result 这样的东西。它并行运行两个进程(a 的输出作为 b 的输入),然后是第三个进程;该序列的输出又与 something 并行运行,最终输出被收集到名为 result 的文件中。用任何其他语言表达这些都很复杂。

特定程序(awk,sed,grep等)通常可以重写为Python模块。不要太过分。替换您需要的内容并发展您的“ grep”模块。不要以编写替换“ grep”的Python模块开始。

最好的事情是您可以分步执行此操作。

  1. 用 Python 替换 AWK 和 PERL,其他一切保持不变。
  2. 看一下用Python替换GREP。这可能会稍微复杂一些,但是您的GREP版本可以根据您的处理需求进行定制。
  3. 看一下用基于 os.walk 的 Python 循环来代替 FIND。这是一个很大的胜利,因为您不会派生那么多子进程。
  4. 看一下用Python脚本替换常见的shell逻辑(循环,决策等)。

Any shell has several sets of features.

  • The Essential Linux/Unix commands. All of these are available through the subprocess library. This isn’t always the best first choice for doing all external commands. Look also at shutil for some commands that are separate Linux commands, but you could probably implement directly in your Python scripts. Another huge batch of Linux commands are in the os library; you can do these more simply in Python.

    And — bonus! — more quickly. Each separate Linux command in the shell (with a few exceptions) forks a subprocess. By using Python shutil and os modules, you don’t fork a subprocess.

  • The shell environment features. This includes stuff that sets a command’s environment (current directory and environment variables and what-not). You can easily manage this from Python directly.

  • The shell programming features. This is all the process status code checking, the various logic commands (if, while, for, etc.), the test command and all of its relatives. The function definition stuff. This is all much, much easier in Python. This is one of the huge victories in getting rid of bash and doing it in Python.

  • Interaction features. This includes command history and what-not. You don’t need this for writing shell scripts. This is only for human interaction, and not for script-writing.

  • The shell file management features. This includes redirection and pipelines. This is trickier. Much of this can be done with subprocess. But some things that are easy in the shell are unpleasant in Python. Specifically stuff like (a | b; c ) | something >result. This runs two processes in parallel (with output of a as input to b), followed by a third process. The output from that sequence is run in parallel with something and the output is collected into a file named result. That’s just complex to express in any other language.
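The parallel-pipe case described above can still be expressed with subprocess, just more verbosely. Here is a minimal sketch under simplifying assumptions: a short `printf | grep > result.txt` pipeline stands in for the full `(a | b; c) | something >result` example.

```python
import subprocess

# Sketch of the shell pipeline: printf 'apple\nbanana\n' | grep an > result.txt
# Two Popen objects connected by a pipe, with the final stdout sent to a file.
with open("result.txt", "w") as out:
    p1 = subprocess.Popen(["printf", "apple\\nbanana\\n"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["grep", "an"], stdin=p1.stdout, stdout=out)
    p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
    p2.wait()
```

After running, `result.txt` contains the single matching line `banana`.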

Specific programs (awk, sed, grep, etc.) can often be rewritten as Python modules. Don’t go overboard. Replace what you need and evolve your “grep” module. Don’t start out writing a Python module that replaces “grep”.

The best thing is that you can do this in steps.

  1. Replace AWK and PERL with Python. Leave everything else alone.
  2. Look at replacing GREP with Python. This can be a bit more complex, but your version of GREP can be tailored to your processing needs.
  3. Look at replacing FIND with Python loops that use os.walk. This is a big win because you don’t spawn as many processes.
  4. Look at replacing common shell logic (loops, decisions, etc.) with Python scripts.
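Step 3 above, replacing FIND with `os.walk`, can be sketched in a few lines. The helper name `find_files` and the throwaway demo directory are illustrative assumptions, not part of the answer:

```python
import fnmatch
import os
import tempfile

def find_files(root, pattern):
    """Rough equivalent of `find root -name pattern`, without spawning a process."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in fnmatch.filter(filenames, pattern):
            matches.append(os.path.join(dirpath, name))
    return matches

# Demo on a throwaway directory containing one .py file and one .txt file
demo = tempfile.mkdtemp()
open(os.path.join(demo, "a.py"), "w").close()
open(os.path.join(demo, "b.txt"), "w").close()
found = find_files(demo, "*.py")
```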

回答 1

当然是 :)

看一下这些库,这些库可以帮助您不再编写Shell脚本(Plumbum的座右铭)。

另外,如果你想用基于Python的东西替换awk、sed和grep,我建议 pyp 项目

“ Pyed Piper”或pyp是类似于awk或sed的linux命令行文本操作工具,但是它使用标准的python字符串和列表方法以及自定义功能,这些功能在激烈的生产环境中可以快速生成结果。

Yes, of course :)

Take a look at these libraries which help you Never write shell scripts again (Plumbum’s motto).

Also, if you want to replace awk, sed and grep with something Python based then I recommend pyp

“The Pyed Piper”, or pyp, is a linux command line text manipulation tool similar to awk or sed, but which uses standard python string and list methods as well as custom functions evolved to generate fast results in an intense production environment.


回答 2

我刚刚发现了如何结合bash和ipython的最佳部分。到目前为止,对我来说,这比使用子流程等更舒服。您可以轻松地复制现有bash脚本的大部分内容,例如以python方式添加错误处理:)这是我的结果:

#!/usr/bin/env ipython3

# *** How to have the most comfort scripting experience of your life ***
# ######################################################################
#
# … by using ipython for scripting combined with subcommands from bash!
#
# 1. echo "#!/usr/bin/env ipython3" > scriptname.ipy    # creates new ipy-file
#
# 2. chmod +x scriptname.ipy                            # make it executable
#
# 3. starting with line 2, write normal python or do some of
#    the ! magic of ipython, so that you can use unix commands
#    within python and even assign their output to a variable via
#    var = !cmd1 | cmd2 | cmd3                          # enjoy ;)
#
# 4. run via ./scriptname.ipy - if it fails with recognizing % and !
#    but parses raw python fine, please check again for the .ipy suffix

# ugly example, please go and find more in the wild
files = !ls *.* | grep "y"
for file in files:
  !echo $file | grep "p"
# sorry for this nonsense example ;)

请参阅IPython文档中有关系统shell命令以及将IPython用作系统shell的部分。

I just discovered how to combine the best parts of bash and ipython. Up to now this seems more comfortable to me than using subprocess and so on. You can easily copy big parts of existing bash scripts and e.g. add error handling in the python way :) And here is my result:

#!/usr/bin/env ipython3

# *** How to have the most comfort scripting experience of your life ***
# ######################################################################
#
# … by using ipython for scripting combined with subcommands from bash!
#
# 1. echo "#!/usr/bin/env ipython3" > scriptname.ipy    # creates new ipy-file
#
# 2. chmod +x scriptname.ipy                            # make it executable
#
# 3. starting with line 2, write normal python or do some of
#    the ! magic of ipython, so that you can use unix commands
#    within python and even assign their output to a variable via
#    var = !cmd1 | cmd2 | cmd3                          # enjoy ;)
#
# 4. run via ./scriptname.ipy - if it fails with recognizing % and !
#    but parses raw python fine, please check again for the .ipy suffix

# ugly example, please go and find more in the wild
files = !ls *.* | grep "y"
for file in files:
  !echo $file | grep "p"
# sorry for this nonsense example ;)

See IPython docs on system shell commands and using it as a system shell.


回答 3

从2015年和Python 3.4发行版开始,现在可以通过以下网址获得相当完整的用户交互shell:http : //xon.sh/https://github.com/scopatz/xonsh

演示视频中没有展示管道的使用,但在默认的shell模式下是支持管道的。

Xonsh(’conch’)会非常努力地模仿bash,因此您已经获得了肌肉记忆的东西,例如

env | uniq | sort -r | grep PATH

要么

my-web-server 2>&1 | my-log-sorter

仍然可以正常工作。

该教程篇幅很长,似乎涵盖了人们通常希望在ash或bash提示符下看到的大量功能:

  • 编译、求值和执行!
  • 命令历史记录和制表符补全
  • 使用 ? 和 ?? 获取帮助和超级帮助
  • 别名和自定义提示符
  • 执行命令和/或 *.xsh 脚本(也可以导入)
  • 环境变量,包括使用 ${} 查找
  • 输入/输出重定向和组合
  • 后台作业和作业控制
  • 嵌套子进程、管道和协同进程
  • 存在命令时为子进程模式,否则为Python模式
  • 使用 $() 捕获子进程输出,使用 $[] 不捕获输出,使用 @() 进行Python求值
  • 使用 * 进行文件名通配,或使用反引号进行正则表达式文件名通配

As of 2015 and Python 3.4’s release, there’s now a reasonably complete user-interactive shell available at: http://xon.sh/ or https://github.com/scopatz/xonsh

The demonstration video does not show pipes being used, but they ARE supported when in the default shell mode.

Xonsh (‘conch’) tries very hard to emulate bash, so things you’ve already gained muscle memory for, like

env | uniq | sort -r | grep PATH

or

my-web-server 2>&1 | my-log-sorter

will still work fine.

The tutorial is quite lengthy and seems to cover a significant amount of the functionality someone would generally expect at an ash or bash prompt:

  • Compiles, Evaluates, & Executes!
  • Command History and Tab Completion
  • Help & Superhelp with ? & ??
  • Aliases & Customized Prompts
  • Executes Commands and/or *.xsh Scripts which can also be imported
  • Environment Variables including Lookup with ${}
  • Input/Output Redirection and Combining
  • Background Jobs & Job Control
  • Nesting Subprocesses, Pipes, and Coprocesses
  • Subprocess-mode when a command exists, Python-mode otherwise
  • Captured Subprocess with $(), Uncaptured Subprocess with $[], Python Evaluation with @()
  • Filename Globbing with * or Regular Expression Filename Globbing with Backticks

回答 4

  • 如果要使用Python作为外壳,为什么不看看IPython?交互式学习语言也很好。
  • 如果您进行大量的文本操作,并且将Vim用作文本编辑器,则还可以直接在python中为Vim编写插件。只需在Vim中输入“:help python”,然后按照说明进行操作或查看此演示文稿即可。编写可直接在编辑器中使用的函数是如此简单和强大!
  • If you want to use Python as a shell, why not have a look at IPython ? It is also good to learn interactively the language.
  • If you do a lot of text manipulation, and if you use Vim as a text editor, you can also directly write plugins for Vim in python. just type “:help python” in Vim and follow the instructions or have a look at this presentation. It is so easy and powerful to write functions that you will use directly in your editor!

回答 5

最初有sh,sed和awk(以及find,grep和…)。这很好。但是awk可能是一个奇怪的小野兽,如果您不经常使用它,将很难记住。然后,伟大的骆驼创造了Perl。Perl是系统管理员的梦想。就像在类固醇上编写外壳脚本一样。文本处理(包括正则表达式)只是该语言的一部分。然后它变得丑陋了。人们试图用Perl进行大型应用程序。现在,请不要误会我的意思,Perl可以是一个应用程序,但是如果您不太谨慎的话,它可能(可以!)看起来像一团糟。然后就是所有这些平面数据业务。这足以使程序员发疯。

输入Python,Ruby等。这些确实是非常好的通用语言。它们支持文本处理,并且做得很好(尽管在语言的基本核心中可能并不紧密地缠在一起)。但是它们也可以很好地扩展,并且到最后仍然具有漂亮的代码。他们还开发了相当庞大的社区,其中有大量的图书馆可以满足大多数需求。

现在,对Perl的许多负面影响只是一个见解,当然有些人可以编写非常简洁的Perl,但是由于许多人抱怨创建混淆代码太容易了,因此您知道其中有些道理。真正的问题就变成了,您是否打算将这种语言用于比简单的bash脚本替换更多的事情。如果没有,请学习更多Perl。另一方面,如果您想要一种语言,并且随着您想做更多的事情而发展,那么我建议使用Python或Ruby。

无论哪种方式,祝您好运!

In the beginning there was sh, sed, and awk (and find, and grep, and…). It was good. But awk can be an odd little beast and hard to remember if you don’t use it often. Then the great camel created Perl. Perl was a system administrator’s dream. It was like shell scripting on steroids. Text processing, including regular expressions were just part of the language. Then it got ugly… People tried to make big applications with Perl. Now, don’t get me wrong, Perl can be an application, but it can (can!) look like a mess if you’re not really careful. Then there is all this flat data business. It’s enough to drive a programmer nuts.

Enter Python, Ruby, et al. These are really very good general purpose languages. They support text processing, and do it well (though perhaps not as tightly entwined in the basic core of the language). But they also scale up very well, and still have nice looking code at the end of the day. They also have developed pretty hefty communities with plenty of libraries for most anything.

Now, much of the negativeness towards Perl is a matter of opinion, and certainly some people can write very clean Perl, but with this many people complaining about it being too easy to create obfuscated code, you know some grain of truth is there. The question really becomes then, are you ever going to use this language for more than simple bash script replacements. If not, learn some more Perl.. it is absolutely fantastic for that. If, on the other hand, you want a language that will grow with you as you want to do more, may I suggest Python or Ruby.

Either way, good luck!


回答 6

我建议使用很棒的在线书籍《Dive Into Python》。这就是我最初学习语言的方式。

除了教给您语言的基本结构和大量有用的数据结构外,它还有一章很好的文件处理章节,随后的一章是正则表达式等等。

I suggest the awesome online book Dive Into Python. It’s how I learned the language originally.

Beyond teaching you the basic structure of the language, and a whole lot of useful data structures, it has a good chapter on file handling and subsequent chapters on regular expressions and more.


回答 7

添加到先前的答案:检查pexpect模块以处理交互式命令(adduser,passwd等)

Adding to previous answers: check the pexpect module for dealing with interactive commands (adduser, passwd etc.)


回答 8

我喜欢Python的原因之一是,它比POSIX工具标准化得多。使用那些工具时,我必须再三检查每一部分是否与其他操作系统兼容;在Linux系统上编写的程序在BSD系统或OSX上可能无法以相同方式工作。使用Python,我只需要检查目标系统是否具有足够现代的Python版本。

更好的是,使用标准Python编写的程序甚至可以在Windows上运行!

One reason I love Python is that it is much better standardized than the POSIX tools. I have to double and triple check that each bit is compatible with other operating systems. A program written on a Linux system might not work the same on a BSD system or OSX. With Python, I just have to check that the target system has a sufficiently modern version of Python.

Even better, a program written in standard Python will even run on Windows!


回答 9

我将根据经验给出我的看法:

对于外壳:

  • Shell可以很容易地产生只读代码:写完之后,当您回头再看它时,会完全想不起来自己当初做了什么。这种情况很容易发生。
  • shell可以使用管道在一行中进行大量文本处理,拆分等操作。
  • 当集成不同编程语言中的程序调用时,它是最好的粘合语言。

对于python:

  • 如果要包括Windows的可移植性,请使用python。
  • 当您必须要操作的不仅仅是文本(例如数字集合)时,python可能会更好。为此,我建议使用python。

我通常会为大多数事情选择bash,但是当我必须跨Windows边界进行操作时,我只会使用python。

I will give here my opinion based on experience:

For shell:

  • shell can very easily spawn read-only code. Write it and when you come back to it, you will never figure out what you did again. It’s very easy to accomplish this.
  • shell can do A LOT of text processing, splitting, etc in one line with pipes.
  • it is the best glue language when it comes to integrate the call of programs in different programming languages.

For python:

  • if you want portability to windows included, use python.
  • python can be better when you must manipulate just more than text, such as collections of numbers. For this, I recommend python.

I usually choose bash for most of the things, but when I have something that must cross windows boundaries, I just use python.


回答 10

pythonpy是一种工具,可使用python语法轻松访问awk和sed的许多功能:

$ echo me2 | py -x 're.sub("me", "you", x)'
you2

pythonpy is a tool that provides easy access to many of the features from awk and sed, but using python syntax:

$ echo me2 | py -x 're.sub("me", "you", x)'
you2
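The same substitution needs only the standard library; the following is a rough sketch of the per-line transformation that `py -x` applies (the function name is illustrative):

```python
import re

def transform_line(x):
    """Mirror of the pyp example: substitute 'me' with 'you' in one line."""
    return re.sub("me", "you", x)

print(transform_line("me2"))  # prints: you2
```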

回答 11

我建立了半长的shell脚本(300-500行)和Python代码,它们具有相似的功能。当执行许多外部命令时,我发现该外壳更易于使用。当有大量的文本操作时,Perl也是一个不错的选择。

I have built semi-long shell scripts (300-500 lines) and Python code which does similar functionality. When many external commands are being executed, I find the shell is easier to use. Perl is also a good option when there is lots of text manipulation.


回答 12

在研究此主题时,我发现了这个概念验证代码(通过 http://jlebar.com/2010/2/1/Replacing_Bash.html 上的注释),它可以使您“使用简洁的语法在Python中编写类似shell的管道,并在有意义的地方利用现有的系统工具”:

for line in sh("cat /tmp/junk2") | cut(d=',',f=1) | 'sort' | uniq:
    sys.stdout.write(line)

While researching this topic, I found this proof-of-concept code (via a comment at http://jlebar.com/2010/2/1/Replacing_Bash.html) that lets you “write shell-like pipelines in Python using a terse syntax, and leveraging existing system tools where they make sense”:

for line in sh("cat /tmp/junk2") | cut(d=',',f=1) | 'sort' | uniq:
    sys.stdout.write(line)
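Without that helper library, a similar `cut -d, -f1 | sort | uniq` pipeline can be approximated in plain Python; the sample input below is an assumption standing in for the contents of /tmp/junk2:

```python
def cut_sort_uniq(lines, delim=",", field=0):
    """Approximate `cut -d, -f1 | sort | uniq` over an iterable of lines."""
    return sorted({line.rstrip("\n").split(delim)[field] for line in lines})

# Example input standing in for the contents of /tmp/junk2
for value in cut_sort_uniq(["b,1\n", "a,2\n", "b,3\n"]):
    print(value)
```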

回答 13

最好的选择是专门针对您的问题的工具。如果正在处理文本文件,则Sed,Awk和Perl是最有竞争力的竞争者。Python是一种通用的动态语言。与任何通用语言一样,文件处理也受支持,但这并不是其核心目的。如果我特别需要动态语言,我会考虑使用Python或Ruby。

简而言之,请非常好地学习Sed和Awk,以及带有* nix风格的所有其他好东西(所有Bash内置,grep,tr等)。如果您对文本文件处理感兴趣,那么您已经在使用正确的东西。

Your best bet is a tool that is specifically geared towards your problem. If it’s processing text files, then Sed, Awk and Perl are the top contenders. Python is a general-purpose dynamic language. As with any general purpose language, there’s support for file-manipulation, but that isn’t what it’s core purpose is. I would consider Python or Ruby if I had a requirement for a dynamic language in particular.

In short, learn Sed and Awk really well, plus all the other goodies that come with your flavour of *nix (All the Bash built-ins, grep, tr and so forth). If it’s text file processing you’re interested in, you’re already using the right stuff.


回答 14

您可以在ShellPy库中使用python代替bash 。

这是一个从Github下载Python用户的化身的示例:

import json
import os
import tempfile

# get the api answer with curl
answer = `curl https://api.github.com/users/python
# syntactic sugar for checking returncode of executed process for zero
if answer:
    answer_json = json.loads(answer.stdout)
    avatar_url = answer_json['avatar_url']

    destination = os.path.join(tempfile.gettempdir(), 'python.png')

    # execute curl once again, this time to get the image
    result = `curl {avatar_url} > {destination}
    if result:
        # if there were no problems show the file
        p`ls -l {destination}
    else:
        print('Failed to download avatar')

    print('Avatar downloaded')
else:
    print('Failed to access github api')

如您所见,重音符(`)符号内的所有表达式都在shell中执行。并且在Python代码中,您可以捕获此执行的结果并对其执行操作。例如:

log = `git log --pretty=oneline --grep='Create'

该行将首先在shell中执行 git log --pretty=oneline --grep='Create',然后将结果分配给 log 变量。结果具有以下属性:

stdout:执行进程的标准输出的全部文本

stderr:执行进程的标准错误输出的全部文本

returncode:执行的返回码

这是库的一般概述,可在此处找到带有示例的更详细描述。

You can use python instead of bash with the ShellPy library.

Here is an example that downloads avatar of Python user from Github:

import json
import os
import tempfile

# get the api answer with curl
answer = `curl https://api.github.com/users/python
# syntactic sugar for checking returncode of executed process for zero
if answer:
    answer_json = json.loads(answer.stdout)
    avatar_url = answer_json['avatar_url']

    destination = os.path.join(tempfile.gettempdir(), 'python.png')

    # execute curl once again, this time to get the image
    result = `curl {avatar_url} > {destination}
    if result:
        # if there were no problems show the file
        p`ls -l {destination}
    else:
        print('Failed to download avatar')

    print('Avatar downloaded')
else:
    print('Failed to access github api')

As you can see, all expressions inside of grave accent ( ` ) symbol are executed in shell. And in Python code, you can capture results of this execution and perform actions on it. For example:

log = `git log --pretty=oneline --grep='Create'

This line will first execute git log --pretty=oneline --grep='Create' in shell and then assign the result to the log variable. The result has the following properties:

stdout the whole text from stdout of the executed process

stderr the whole text from stderr of the executed process

returncode returncode of the execution

This is general overview of the library, more detailed description with examples can be found here.


回答 15

如果您的文本文件操作通常是一次性的,可能是在shell提示符下完成的,那么您将无法从python得到更好的结果。

另一方面,如果通常您需要一遍又一遍地执行相同(或类似)的任务,并且必须编写脚本来执行,那么python很棒,而且您可以轻松地创建自己的库(用shell脚本也能做到,但比较麻烦)。

一个非常简单的例子来让人感觉。

import popen2
stdout_text, stdin_text=popen2.popen2("your-shell-command-here")
for line in stdout_text:
  if line.startswith("#"):
    pass
  else:
    jobID=int(line.split(",")[0].split()[1].lstrip("<").rstrip(">"))
    # do something with jobID

还要检查sys和getopt模块,它们是您首先需要的。

If your textfile manipulation usually is one-time, possibly done on the shell-prompt, you will not get anything better from python.

On the other hand, if you usually have to do the same (or similar) task over and over, and you have to write your scripts for doing that, then python is great – and you can easily create your own libraries (you can do that with shell scripts too, but it’s more cumbersome).

A very simple example to get a feeling.

import popen2
stdout_text, stdin_text=popen2.popen2("your-shell-command-here")
for line in stdout_text:
  if line.startswith("#"):
    pass
  else:
    jobID=int(line.split(",")[0].split()[1].lstrip("<").rstrip(">"))
    # do something with jobID

Check also sys and getopt module, they are the first you will need.
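Note that popen2 was removed in Python 3. Below is a rough modern equivalent of the loop above, using subprocess.run; the echo command and the sample line format are illustrative assumptions in place of the real shell command:

```python
import subprocess

# Run a command whose output contains lines like "Job <42>, queued"
# (popen2.popen2 is replaced by subprocess.run in Python 3).
proc = subprocess.run(["echo", "Job <42>, queued"], capture_output=True, text=True)
job_id = None
for line in proc.stdout.splitlines():
    if line.startswith("#"):
        continue  # skip comment/header lines, as in the original snippet
    job_id = int(line.split(",")[0].split()[1].lstrip("<").rstrip(">"))
```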


回答 16

我已经在PyPI上发布了一个软件包:ez
使用pip install ez安装它。

它封装了shell中的常用命令,而且很好的一点是,我的库基本使用与shell相同的语法。例如,cp(source, destination) 可以同时处理文件和文件夹!(它是 shutil.copy 和 shutil.copytree 的包装,并决定何时使用哪一个)。更妙的是,它可以像R一样支持向量化!

另一个示例:不需要 os.walk,使用 fls(path, regex) 即可递归查找文件并用正则表达式过滤,它返回带有或不带有完整路径的文件列表

最后一个例子:您可以将它们组合起来以编写非常简单的脚本:
files = fls('.','py$'); cp(files, myDir)

一定要检查一下!我花了数百个小时来编写/改进它!

I have published a package on PyPI: ez.
Use pip install ez to install it.

It has packed common commands in shell and nicely my lib uses basically the same syntax as shell. e.g., cp(source, destination) can handle both file and folder! (wrapper of shutil.copy shutil.copytree and it decides when to use which one). Even more nicely, it can support vectorization like R!

Another example: no os.walk, use fls(path, regex) to recursively find files and filter with regular expression and it returns a list of files with or without fullpath

Final example: you can combine them to write very simply scripts:
files = fls('.','py$'); cp(files, myDir)

Definitely check it out! It has cost me hundreds of hours to write/improve it!


如何保存Python交互式会话?

问题:如何保存Python交互式会话?

我发现自己经常使用Python的解释器来处理数据库、文件等,基本上是对半结构化数据进行大量手动格式化。我没有按我所希望的那样经常正确地保存和整理有用的片段。有没有一种方法可以保存我输入到shell中的内容(数据库连接、变量赋值、小的for循环和逻辑片段),即交互式会话的某种历史记录?如果我使用类似 script 的东西,则会收到过多的标准输出噪音。我并不需要pickle所有对象,不过如果有能做到这一点的解决方案,那也可以。理想情况下,我只想留下一个脚本,它能像我交互式创建的那样运行,然后我可以删除不需要的部分。有没有这样的软件包,或者DIY的方法?

更新:我对这些软件包的质量和实用性感到非常惊讶。对于那些类似的痒:

  • IPython - 早就应该使用它了,大体上正是我心中所想的那种
  • reinteract - 非常令人印象深刻,我想了解有关可视化的更多信息,它在这方面似乎会大放异彩。有点像一个内联渲染图形的 gtk/gnome 桌面应用程序,可以想象成 shell + 图形计算器 + 迷你 eclipse 的混合体。源代码分发在此处:http://www.reinteract.org/trac/wiki/GettingIt 。可以在Ubuntu上很好地构建,也可以集成到gnome桌面,还有Windows和Mac安装程序。
  • bpython-非常酷,有很多不错的功能,自动完成(!),倒带,一键保存到文件,缩进,做得很好。Python源代码发行版从sourceforge中提取了两个依赖项。

我被说服了,这些工具真的填补了解释器和编辑器之间的空白。

I find myself frequently using Python’s interpreter to work with databases, files, etc — basically a lot of manual formatting of semi-structured data. I don’t properly save and clean up the useful bits as often as I would like. Is there a way to save my input into the shell (db connections, variable assignments, little for loops and bits of logic) — some history of the interactive session? If I use something like script I get too much stdout noise. I don’t really need to pickle all the objects — though if there is a solution that does that, it would be OK. Ideally I would just be left with a script that ran as the one I created interactively, and I could just delete the bits I didn’t need. Is there a package that does this, or a DIY approach?

UPDATE: I am really amazed at the quality and usefulness of these packages. For those with a similar itch:

  • IPython — should have been using this for ages, kind of what I had in mind
  • reinteract — very impressive, I want to learn more about visualization and this seems like it will shine there. Sort of a gtk/gnome desktop app that renders graphs inline. Imagine a hybrid shell + graphing calculator + mini eclipse. Source distribution here: http://www.reinteract.org/trac/wiki/GettingIt . Built fine on Ubuntu, integrates into gnome desktop, Windows and Mac installers too.
  • bpython — extremely cool, lots of nice features, autocomplete(!), rewind, one keystroke save to file, indentation, well done. Python source distribution, pulled a couple of dependencies from sourceforge.

I am converted, these really fill a need between interpreter and editor.


回答 0

如果您喜欢使用交互式会话,则IPython非常有用。例如,对于您的用例,有一个 %save magic 命令,您只需输入 %save my_useful_session 10-20 23,即可将输入行10至20以及23保存到 my_useful_session.py(为此,每行都以其编号作为前缀)。

此外,文档指出:

此函数对输入范围使用与%history相同的语法,然后将这些行保存到您指定的文件名中。

例如,这允许引用较旧的会话,例如

%save current_session ~0/
%save previous_session ~1/

观看演示页面的视频,以快速了解这些功能。

IPython is extremely useful if you like using interactive sessions. For example for your use-case there is the %save magic command, you just input %save my_useful_session 10-20 23 to save input lines 10 to 20 and 23 to my_useful_session.py (to help with this, every line is prefixed by its number).

Furthermore, the documentation states:

This function uses the same syntax as %history for input ranges, then saves the lines to the filename you specify.

This allows for example, to reference older sessions, such as

%save current_session ~0/
%save previous_session ~1/

Look at the videos on the presentation page to get a quick overview of the features.


回答 1

http://www.andrewhjon.es/save-interactive-python-session-history

import readline
readline.write_history_file('/home/ahj/history')

http://www.andrewhjon.es/save-interactive-python-session-history

import readline
readline.write_history_file('/home/ahj/history')

回答 2

有一种方法可以做到:将文件存储在 ~/.pystartup 中

# Add auto-completion and a stored history file of commands to your Python
# interactive interpreter. Requires Python 2.0+, readline. Autocomplete is
# bound to the Esc key by default (you can change it - see readline docs).
#
# Store the file in ~/.pystartup, and set an environment variable to point
# to it:  "export PYTHONSTARTUP=/home/user/.pystartup" in bash.
#
# Note that PYTHONSTARTUP does *not* expand "~", so you have to put in the
# full path to your home directory.

import atexit
import os
import readline
import rlcompleter

historyPath = os.path.expanduser("~/.pyhistory")

def save_history(historyPath=historyPath):
    import readline
    readline.write_history_file(historyPath)

if os.path.exists(historyPath):
    readline.read_history_file(historyPath)

atexit.register(save_history)
del os, atexit, readline, rlcompleter, save_history, historyPath

然后在您的shell中设置 PYTHONSTARTUP 环境变量(例如在 ~/.bashrc 中):

export PYTHONSTARTUP=$HOME/.pystartup

您还可以添加以下内容以免费获取自动完成功能:

readline.parse_and_bind('tab: complete')

请注意,这仅适用于* nix系统。由于readline仅在Unix平台上可用。

There is a way to do it. Store the file in ~/.pystartup

# Add auto-completion and a stored history file of commands to your Python
# interactive interpreter. Requires Python 2.0+, readline. Autocomplete is
# bound to the Esc key by default (you can change it - see readline docs).
#
# Store the file in ~/.pystartup, and set an environment variable to point
# to it:  "export PYTHONSTARTUP=/home/user/.pystartup" in bash.
#
# Note that PYTHONSTARTUP does *not* expand "~", so you have to put in the
# full path to your home directory.

import atexit
import os
import readline
import rlcompleter

historyPath = os.path.expanduser("~/.pyhistory")

def save_history(historyPath=historyPath):
    import readline
    readline.write_history_file(historyPath)

if os.path.exists(historyPath):
    readline.read_history_file(historyPath)

atexit.register(save_history)
del os, atexit, readline, rlcompleter, save_history, historyPath

and then set the environment variable PYTHONSTARTUP in your shell (e.g. in ~/.bashrc):

export PYTHONSTARTUP=$HOME/.pystartup

You can also add this to get autocomplete for free:

readline.parse_and_bind('tab: complete')

Please note that this will only work on *nix systems. As readline is only available in Unix platform.


回答 3

如果使用的是IPython,则可以使用魔术函数 %history 和 -f 参数将以前的所有命令保存到文件中,例如:

%history -f /tmp/history.py

If you are using IPython you can save to a file all your previous commands using the magic function %history with the -f parameter, p.e:

%history -f /tmp/history.py

回答 4

安装Ipython并通过运行以下命令打开Ipython会话后:

ipython

从命令行中,只需运行以下Ipython’magic’命令以自动记录整个Ipython会话:

%logstart

这将创建一个唯一命名的.py文件,并存储您的会话,以供以后用作交互式Ipython会话或在您选择的脚本中使用。

After installing Ipython, and opening an Ipython session by running the command:

ipython

from your command line, just run the following Ipython ‘magic’ command to automatically log your entire Ipython session:

%logstart

This will create a uniquely named .py file and store your session for later use as an interactive Ipython session or for use in the script(s) of your choosing.


回答 5

同样,reinteract 为您提供了类似于笔记本的Python会话界面。

Also, reinteract gives you a notebook-like interface to a Python session.


回答 6

除了IPython,类似的实用程序bpython还具有“将您输入的代码保存到文件中”的功能

In addition to IPython, a similar utility bpython has a “save the code you’ve entered to a file” feature


回答 7

我必须努力寻找答案,我对iPython环境非常陌生。

这会工作

如果您的iPython会话如下所示

In [1] : import numpy as np
....
In [135]: counter=collections.Counter(mapusercluster[3])
In [136]: counter
Out[136]: Counter({2: 700, 0: 351, 1: 233})

如果您想要保存第1行到第135行,就在同一ipython会话中使用此命令

In [137]: %save test.py 1-135

这会将所有python语句保存在当前目录(启动ipython的位置)的test.py文件中。

I had to struggle to find an answer, I was very new to iPython environment.

This will work

If your iPython session looks like this

In [1] : import numpy as np
....
In [135]: counter=collections.Counter(mapusercluster[3])
In [136]: counter
Out[136]: Counter({2: 700, 0: 351, 1: 233})

You want to save lines from 1 till 135 then on the same ipython session use this command

In [137]: %save test.py 1-135

This will save all your python statements in test.py file in your current directory ( where you initiated the ipython).


回答 8

有%history魔术可用于打印和保存输入历史记录(以及可选的输出)。

要将当前会话存储到名为 my_history.py 的文件中:

>>> %hist -f my_history.py

历史记录:IPython既存储您输入的命令,又存储它产生的结果。您可以使用上下箭头键轻松查看以前的命令,或者以更复杂的方式访问历史记录。

您可以使用%history magic函数来检查过去的输入和输出。先前会话的输入历史记录保存在数据库中,并且可以配置IPython来保存输出历史记录。

其他几个魔术功能也可以使用您的输入历史记录,包括%edit,%rerun,%recall,%macro,%save和%pastebin。您可以使用标准格式来引用行:

%pastebin 3 18-20 ~1/1-5

这将取当前会话的第3行和第18至20行,以及上一会话的第1-5行。

请参阅 %history? 以获取文档字符串和更多示例。

另外,请确保探索 %store magic 的功能,它可在IPython中实现变量的轻量级持久性。

在IPython的数据库中存储变量,别名和宏。

d = {'a': 1, 'b': 2}
%store d  # stores the variable
del d

%store -r d  # Refresh the variable from IPython's database.
>>> d
{'a': 1, 'b': 2}

要在启动时自动恢复存储的变量,请在 ipython_config.py 中指定 c.StoreMagic.autorestore = True。

There is %history magic for printing and saving the input history (and optionally the output).

To store your current session to a file named my_history.py:

>>> %hist -f my_history.py

History: IPython stores both the commands you enter, and the results it produces. You can easily go through previous commands with the up- and down-arrow keys, or access your history in more sophisticated ways.

You can use the %history magic function to examine past input and output. Input history from previous sessions is saved in a database, and IPython can be configured to save output history.

Several other magic functions can use your input history, including %edit, %rerun, %recall, %macro, %save and %pastebin. You can use a standard format to refer to lines:

%pastebin 3 18-20 ~1/1-5

This will take line 3 and lines 18 to 20 from the current session, and lines 1-5 from the previous session.

See %history? for the Docstring and more examples.

Also, be sure to explore the capabilities of %store magic for lightweight persistence of variables in IPython.

Stores variables, aliases and macros in IPython’s database.

d = {'a': 1, 'b': 2}
%store d  # stores the variable
del d

%store -r d  # Refresh the variable from IPython's database.
>>> d
{'a': 1, 'b': 2}

To autorestore stored variables on startup, specify c.StoreMagic.autorestore = True in ipython_config.py.


回答 9

再补充一个建议:Spyder


它具有历史记录日志变量资源管理器。如果您使用过MatLab,那么您将看到相似之处。

Just putting another suggesting in the bowl: Spyder


It has History log and Variable explorer. If you have worked with MatLab, then you’ll see the similarities.


回答 10

就Linux而言,可以使用 script 命令来记录整个会话。它是util-linux软件包的一部分,因此在大多数Linux系统上都应可用。您可以创建一个调用 script -c python 的别名或函数,会话将被保存到 typescript 文件中。例如,这是一个这样的文件的重印。

$ cat typescript                                                                                                      
Script started on Sat 14 May 2016 08:30:08 AM MDT
Python 2.7.6 (default, Jun 22 2015, 17:58:13) 
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> print 'Hello Pythonic World'
Hello Pythonic World
>>> 

Script done on Sat 14 May 2016 08:30:42 AM MDT

这里的一个小缺点是,script 会记录所有内容,甚至包括换行符和您敲退格键的过程等。因此您可能希望使用 col 来清理输出(请参阅Unix&Linux Stackexchange上的这篇文章)。

As far as Linux goes, one can use script command to record the whole session. It is part of util-linux package so should be on most Linux systems . You can create and alias or function that will call script -c python and that will be saved to a typescript file. For instance, here’s a reprint of one such file.

$ cat typescript                                                                                                      
Script started on Sat 14 May 2016 08:30:08 AM MDT
Python 2.7.6 (default, Jun 22 2015, 17:58:13) 
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> print 'Hello Pythonic World'
Hello Pythonic World
>>> 

Script done on Sat 14 May 2016 08:30:42 AM MDT

Small disadvantage here is that the script records everything , even line-feeds, whenever you hit backspaces , etc. So you may want to use col to clean up the output (see this post on Unix&Linux Stackexchange) .


回答 11

%history 命令很棒,但不幸的是,它不会让您保存通过 %paste 粘贴到会话中的内容。为此,我认为您必须在一开始就执行 %logstart(尽管我尚未确认这是否有效)。

我喜欢做的是

%history -o -n -p -f filename.txt

它将在每个输入之前保存输出、行号和 '>>>'(分别对应 o、n 和 p 选项)。在此处查看 %history 的文档。

The %history command is awesome, but unfortunately it won’t let you save things that were %paste ‘d into the sesh. To do that I think you have to do %logstart at the beginning (although I haven’t confirmed this works).

What I like to do is

%history -o -n -p -f filename.txt

which will save the output, line numbers, and ‘>>>’ before each input (o, n, and p options). See the docs for %history here.


回答 12

还有另一种选择:pyslices。在“wxpython 2.8 文档演示和工具”中,有一个名为“pyslices”的开源程序。

您可以像编辑器一样使用它,它还支持像控制台一样的用法:像交互式解释器一样执行每一行并立即回显。

当然,所有代码块和每个块的结果将自动记录到txt文件中。

结果记录在相应的代码块后面。很方便。

切片的概述

There is another option: pyslice. In the “wxpython 2.8 docs demos and tools”, there is an open source program named “pyslices”.

You can use it like an editor, and it also supports console-style use, executing each line like an interactive interpreter with immediate echo.

Of course, all the blocks of code and the results of each block will be recorded into a txt file automatically.

The results are logged just behind the corresponding block of code. Very convenient.

The overview of pyslices


回答 13

如果使用bpython,则默认情况下所有命令历史记录都保存到~/.pythonhist

要保存命令以供以后重用,可以将它们复制到python脚本文件中:

$ cp ~/.pythonhist mycommands.py

然后编辑该文件以将其清理并放在Python路径(全局或虚拟环境的站点包,当前目录,*。pth中提及或其他方式)下。

要将命令包括到您的shell中,只需从保存的文件中导入它们:

>>> from mycommands import *

If you use bpython, all your command history is by default saved to ~/.pythonhist.

To save the commands for later reusage you can copy them to a python script file:

$ cp ~/.pythonhist mycommands.py

Then edit that file to clean it up and put it under Python path (global or virtual environment’s site-packages, current directory, mentioning in *.pth, or some other way).

To include the commands into your shell, just import them from the saved file:

>>> from mycommands import *

回答 14

一些评论询问如何一次性保存所有IPython输入。对于IPython中的%save magic,可以如下所示以编程方式保存所有命令,以避免出现提示消息,也避免手动指定输入编号:

currentLine = len(In)-1
%save -f my_session 1-$currentLine

-f选项用于强制替换文件,而len(In)-1给出IPython中当前的输入提示编号,从而允许您以编程方式保存整个会话。

Some comments were asking how to save all of the IPython inputs at once. For the %save magic in IPython, you can save all of the commands programmatically as shown below, to avoid the prompt message and also to avoid specifying the input numbers:

currentLine = len(In)-1
%save -f my_session 1-$currentLine

The -f option forces file replacement, and len(In)-1 gives the current input prompt number in IPython, allowing you to save the whole session programmatically.


回答 15

对于那些使用spacemacsipython附带的用户python-layer,由于在后台运行恒定的自动完成命令,例如save,魔术会产生很多不需要的输出:

len(all_suffixes)
';'.join(__PYTHON_EL_get_completions('''len'''))
';'.join(__PYTHON_EL_get_completions('''all_substa'''))
len(all_substantives_w_suffixes)
';'.join(__PYTHON_EL_get_completions('''len'''))
';'.join(__PYTHON_EL_get_completions('''all'''))
';'.join(__PYTHON_EL_get_completions('''all_'''))
';'.join(__PYTHON_EL_get_completions('''all_w'''))
';'.join(__PYTHON_EL_get_completions('''all_wo'''))
';'.join(__PYTHON_EL_get_completions('''all_wor'''))
';'.join(__PYTHON_EL_get_completions('''all_word'''))
';'.join(__PYTHON_EL_get_completions('''all_words'''))
len(all_words_w_logograms)
len(all_verbs)

为了避免这种情况,只需像平时保存其他任何文件一样保存ipython缓冲区即可: spc f s

For those using spacemacs and the ipython that comes with python-layer, the save magic creates a lot of unwanted output, because of the constant auto-completion commands working in the background, such as:

len(all_suffixes)
';'.join(__PYTHON_EL_get_completions('''len'''))
';'.join(__PYTHON_EL_get_completions('''all_substa'''))
len(all_substantives_w_suffixes)
';'.join(__PYTHON_EL_get_completions('''len'''))
';'.join(__PYTHON_EL_get_completions('''all'''))
';'.join(__PYTHON_EL_get_completions('''all_'''))
';'.join(__PYTHON_EL_get_completions('''all_w'''))
';'.join(__PYTHON_EL_get_completions('''all_wo'''))
';'.join(__PYTHON_EL_get_completions('''all_wor'''))
';'.join(__PYTHON_EL_get_completions('''all_word'''))
';'.join(__PYTHON_EL_get_completions('''all_words'''))
len(all_words_w_logograms)
len(all_verbs)

To avoid this just save the ipython buffer like you normally save any other: spc f s


回答 16

我想提出另一种在Linux上通过tmux维护python会话的方法。您运行tmux,将自己附加到您打开的会话中(如果直接打开后未附加)。执行python并在上面执行任何操作。然后脱离会话。从tmux会话中分离不会关闭该会话。会话保持打开状态。

这种方法的优点: 您可以从任何其他设备连接到此会话(以防万一您可以SSH电脑)

此方法的缺点: 在您实际退出python解释器之前,此方法不会释放打开的python会话所使用的资源。

I’d like to suggest another way to maintain a python session, through tmux on Linux. You run tmux, attach to the session you opened (if not attached directly after opening it), execute python, and do whatever you are doing in it. Then detach from the session. Detaching from a tmux session does not close the session; the session remains open.

Pros of this method: you can attach to this session from any other device (provided you can SSH into your PC).

Cons of this method: this method does not relinquish the resources used by the opened python session until you actually exit the python interpreter.


回答 17

要在XUbuntu上保存输入和输出

  1. 在XWindows中,从Xfce终端应用程序运行iPython
  2. Terminal在顶部菜单栏中单击,然后在save contents下拉菜单中查找

我发现这可以保存输入和输出,一直到打开终端时一直返回。这不是ipython特有的,并且可以与ssh会话或从终端窗口运行的其他任务一起使用。

To save input and output on XUbuntu:

  1. In XWindows, run iPython from the Xfce terminal app
  2. click Terminal in the top menu bar and look for save contents in the dropdown

I find this saves the input and output, going all the way back to when I opened the terminal. This is not ipython specific, and would work with ssh sessions or other tasks run from the terminal window.


运行shell命令并捕获输出

问题:运行shell命令并捕获输出

我想编写一个函数,它将执行shell命令并以字符串形式返回其输出,无论它是错误还是成功消息。我只想获得与命令行相同的结果。

能做到这一点的代码示例是什么?

例如:

def run_command(cmd):
    # ??????

print run_command('mysqladmin create test -uroot -pmysqladmin12')
# Should output something like:
# mysqladmin: CREATE DATABASE failed; error: 'Can't create database 'test'; database exists'

I want to write a function that will execute a shell command and return its output as a string, no matter, is it an error or success message. I just want to get the same result that I would have gotten with the command line.

What would be a code example that would do such a thing?

For example:

def run_command(cmd):
    # ??????

print run_command('mysqladmin create test -uroot -pmysqladmin12')
# Should output something like:
# mysqladmin: CREATE DATABASE failed; error: 'Can't create database 'test'; database exists'

回答 0

这个问题的答案取决于您使用的Python版本。最简单的方法是使用以下subprocess.check_output功能:

>>> subprocess.check_output(['ls', '-l'])
b'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

check_output运行一个仅以参数作为输入的程序。1它原样返回打印到stdout的结果。如果您需要向stdin写入输入,请跳至run或Popen部分。如果要执行复杂的Shell命令,请参阅此答案末尾关于shell=True的注释。

check_output功能适用于仍在广泛使用的几乎所有版本的Python(2.7+)。2但对于较新的版本,不再推荐使用此方法。

现代版本的Python(3.5或更高版本): run

如果您使用的是Python 3.5或更高版本,并且不需要向后兼容,则建议使用新的run函数。它为subprocess模块提供了一个非常通用的高级API。要捕获程序的输出,请将subprocess.PIPE标志传递给stdout关键字参数,然后访问返回的CompletedProcess对象的stdout属性:

>>> import subprocess
>>> result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
>>> result.stdout
b'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

返回值是一个bytes对象,因此,如果需要正确的字符串,则需要decode它。假设被调用的进程返回一个UTF-8编码的字符串:

>>> result.stdout.decode('utf-8')
'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

所有这些都可以压缩为单线:

>>> subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout.decode('utf-8')
'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

如果要将输入传递给进程的stdin,请将一个bytes对象传递给input关键字参数:

>>> cmd = ['awk', 'length($0) > 5']
>>> input = 'foo\nfoofoo\n'.encode('utf-8')
>>> result = subprocess.run(cmd, stdout=subprocess.PIPE, input=input)
>>> result.stdout.decode('utf-8')
'foofoo\n'

您可以通过传递stderr=subprocess.PIPE(捕获到result.stderr)或stderr=subprocess.STDOUT(捕获到result.stdout常规输出)来捕获错误。如果不关心安全性,您还可以shell=True按照下面的说明通过传递来运行更复杂的Shell命令。

与旧的处理方式相比,这仅增加了一点复杂性。但是我认为值得这样做:现在,您仅需使用该run功能就可以完成几乎所有需要做的事情。

旧版本的Python(2.7-3.4): check_output

如果您使用的是旧版本的Python,或者需要适度的向后兼容性,则可以使用上面简要介绍的check_output函数。它自Python 2.7开始提供。

subprocess.check_output(*popenargs, **kwargs)  

它采用与Popen(请参见下文)相同的参数,并返回一个包含程序输出的字符串。该答案的开头有一个更详细的用法示例。在Python 3.5及更高版本中,check_output等效于run使用check=Truestdout=PIPE,仅返回stdout属性。

您可以通过传递stderr=subprocess.STDOUT确保错误信息包含在返回的输出中,但在某些Python版本中,向check_output传递stderr=subprocess.PIPE可能引起死锁。如果不关心安全性,您还可以按照下面的说明通过传递shell=True来运行更复杂的Shell命令。

如果您需要从stderr读取输出或向进程传递输入,check_output将无法胜任。在这种情况下,请参见下面的Popen示例。

复杂的应用程序和Python的旧版(2.6及以下版本): Popen

如果需要深入的向后兼容性,或者需要比check_output提供的功能更复杂的功能,则必须直接使用Popen对象,这些对象封装了用于子流程的低级API。

Popen构造函数接受一个不带参数的单个命令,或者一个列表:列表的第一项是命令,后面跟任意数量的参数,每个参数作为列表中的单独一项。shlex.split可以帮助将字符串解析为格式正确的列表。Popen对象还接受许多用于进程IO管理和底层配置的参数。

要发送输入和捕获输出,communicate几乎总是首选方法。如:

output = subprocess.Popen(["mycmd", "myarg"], 
                          stdout=subprocess.PIPE).communicate()[0]

要么

>>> import subprocess
>>> p = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE, 
...                                    stderr=subprocess.PIPE)
>>> out, err = p.communicate()
>>> print out
.
..
foo

如果设置stdin=PIPEcommunicate还允许您通过以下方式将数据传递到流程stdin

>>> cmd = ['awk', 'length($0) > 5']
>>> p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
...                           stderr=subprocess.PIPE,
...                           stdin=subprocess.PIPE)
>>> out, err = p.communicate('foo\nfoofoo\n')
>>> print out
foofoo

请注意Aaron Hall的回答,它表明在某些系统上,您可能需要将stdout、stderr和stdin全部设置为PIPE(或DEVNULL),communicate才能正常工作。

在极少数情况下,您可能需要复杂的实时输出捕获。Vartec的答案提出了一条前进的道路,但是communicate如果不谨慎使用,则其他方法都容易出现死锁。

与上述所有功能一样,当不考虑安全性时,可以通过传递shell=True来运行更复杂的Shell命令。

笔记

1.运行shell命令:shell=True参数

通常,对runcheck_outputPopen构造函数的每次调用都会执行一个程序。这意味着没有花哨的bash风格的管道。如果要运行复杂的Shell命令,则可以传递shell=True,这三个功能都支持。

但是,这样做会引起安全问题。如果您要做的不仅仅是轻脚本编写,那么最好单独调用每个进程,并将每个进程的输出作为输入通过以下方式传递给下一个进程:

run(cmd, [stdout=etc...], input=other_output)

要么

Popen(cmd, [stdout=etc...]).communicate(other_output)

直接连接管道的诱惑力很强;抵抗它。否则,您很可能会遇到僵局,或者不得不执行类似此类的骇人行为。

2. Unicode注意事项

check_output在Python 2中返回一个字符串,但在Python 3中返回一个bytes对象。如果您还没有花时间学习unicode,那么值得花一点时间了解一下。

The answer to this question depends on the version of Python you’re using. The simplest approach is to use the subprocess.check_output function:

>>> subprocess.check_output(['ls', '-l'])
b'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

check_output runs a single program that takes only arguments as input.1 It returns the result exactly as printed to stdout. If you need to write input to stdin, skip ahead to the run or Popen sections. If you want to execute complex shell commands, see the note on shell=True at the end of this answer.

The check_output function works on almost all versions of Python still in wide use (2.7+).2 But for more recent versions, it is no longer the recommended approach.

Modern versions of Python (3.5 or higher): run

If you’re using Python 3.5 or higher, and do not need backwards compatibility, the new run function is recommended. It provides a very general, high-level API for the subprocess module. To capture the output of a program, pass the subprocess.PIPE flag to the stdout keyword argument. Then access the stdout attribute of the returned CompletedProcess object:

>>> import subprocess
>>> result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
>>> result.stdout
b'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

The return value is a bytes object, so if you want a proper string, you’ll need to decode it. Assuming the called process returns a UTF-8-encoded string:

>>> result.stdout.decode('utf-8')
'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

This can all be compressed to a one-liner:

>>> subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout.decode('utf-8')
'total 0\n-rw-r--r--  1 memyself  staff  0 Mar 14 11:04 files\n'

If you want to pass input to the process’s stdin, pass a bytes object to the input keyword argument:

>>> cmd = ['awk', 'length($0) > 5']
>>> input = 'foo\nfoofoo\n'.encode('utf-8')
>>> result = subprocess.run(cmd, stdout=subprocess.PIPE, input=input)
>>> result.stdout.decode('utf-8')
'foofoo\n'

You can capture errors by passing stderr=subprocess.PIPE (capture to result.stderr) or stderr=subprocess.STDOUT (capture to result.stdout along with regular output). When security is not a concern, you can also run more complex shell commands by passing shell=True as described in the notes below.
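As a quick sketch of the stderr options above (assuming a Unix-like system, where ls reports a missing path on stderr; the path name here is just an illustration):

```python
import subprocess

# A path that should not exist, so `ls` fails and writes to stderr
result = subprocess.run(
    ['ls', 'no_such_file_hopefully'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
print(result.returncode != 0)  # True: non-zero exit status
print(result.stderr != b'')    # True: the error message went to stderr
print(result.stdout == b'')    # True: nothing was written to stdout
```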

This adds just a bit of complexity, compared to the old way of doing things. But I think it’s worth the payoff: now you can do almost anything you need to do with the run function alone.

Older versions of Python (2.7-3.4): check_output

If you are using an older version of Python, or need modest backwards compatibility, you can probably use the check_output function as briefly described above. It has been available since Python 2.7.

subprocess.check_output(*popenargs, **kwargs)  

It takes the same arguments as Popen (see below), and returns a string containing the program’s output. The beginning of this answer has a more detailed usage example. In Python 3.5 and greater, check_output is equivalent to executing run with check=True and stdout=PIPE, and returning just the stdout attribute.

You can pass stderr=subprocess.STDOUT to ensure that error messages are included in the returned output — but in some versions of Python passing stderr=subprocess.PIPE to check_output can cause deadlocks. When security is not a concern, you can also run more complex shell commands by passing shell=True as described in the notes below.

If you need to pipe from stderr or pass input to the process, check_output won’t be up to the task. See the Popen examples below in that case.

Complex applications & legacy versions of Python (2.6 and below): Popen

If you need deep backwards compatibility, or if you need more sophisticated functionality than check_output provides, you’ll have to work directly with Popen objects, which encapsulate the low-level API for subprocesses.

The Popen constructor accepts either a single command without arguments, or a list containing a command as its first item, followed by any number of arguments, each as a separate item in the list. shlex.split can help parse strings into appropriately formatted lists. Popen objects also accept a host of different arguments for process IO management and low-level configuration.

To send input and capture output, communicate is almost always the preferred method. As in:

output = subprocess.Popen(["mycmd", "myarg"], 
                          stdout=subprocess.PIPE).communicate()[0]

Or

>>> import subprocess
>>> p = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE, 
...                                    stderr=subprocess.PIPE)
>>> out, err = p.communicate()
>>> print out
.
..
foo

If you set stdin=PIPE, communicate also allows you to pass data to the process via stdin:

>>> cmd = ['awk', 'length($0) > 5']
>>> p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
...                           stderr=subprocess.PIPE,
...                           stdin=subprocess.PIPE)
>>> out, err = p.communicate('foo\nfoofoo\n')
>>> print out
foofoo

Note Aaron Hall’s answer, which indicates that on some systems, you may need to set stdout, stderr, and stdin all to PIPE (or DEVNULL) to get communicate to work at all.

In some rare cases, you may need complex, real-time output capturing. Vartec‘s answer suggests a way forward, but methods other than communicate are prone to deadlocks if not used carefully.

As with all the above functions, when security is not a concern, you can run more complex shell commands by passing shell=True.

Notes

1. Running shell commands: the shell=True argument

Normally, each call to run, check_output, or the Popen constructor executes a single program. That means no fancy bash-style pipes. If you want to run complex shell commands, you can pass shell=True, which all three functions support.

However, doing so raises security concerns. If you’re doing anything more than light scripting, you might be better off calling each process separately, and passing the output from each as an input to the next, via

run(cmd, [stdout=etc...], input=other_output)

Or

Popen(cmd, [stdout=etc...]).communicate(other_output)

The temptation to directly connect pipes is strong; resist it. Otherwise, you’ll likely see deadlocks or have to do hacky things like this.

2. Unicode considerations

check_output returns a string in Python 2, but a bytes object in Python 3. It’s worth taking a moment to learn about unicode if you haven’t already.
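As a footnote to the unicode point: on Python 3.6 and later the decode step can be skipped by asking run for text output directly (a sketch; the encoding parameter was added in 3.6):

```python
import subprocess

# With encoding set, result.stdout is a str rather than a bytes object
result = subprocess.run(['echo', 'hello'], stdout=subprocess.PIPE,
                        encoding='utf-8')
print(type(result.stdout).__name__)  # str
print(result.stdout.strip())         # hello
```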


回答 1

这很容易,但仅适用于Unix(包括Cygwin)和Python2.7。

import commands
print commands.getstatusoutput('wc -l file')

它返回带有(return_value,output)的元组。

对于适用于Python2和Python3的解决方案,请改用subprocess模块:

from subprocess import Popen, PIPE
output = Popen(["date"],stdout=PIPE)
response = output.communicate()
print response

This is way easier, but only works on Unix (including Cygwin) and Python2.7.

import commands
print commands.getstatusoutput('wc -l file')

It returns a tuple with the (return_value, output).

For a solution that works in both Python2 and Python3, use the subprocess module instead:

from subprocess import Popen, PIPE
output = Popen(["date"],stdout=PIPE)
response = output.communicate()
print response
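On Python 3, subprocess.getstatusoutput offers the same (status, output) tuple interface as the old commands module (a sketch; note that it runs the command through a shell, so don't pass untrusted input):

```python
import subprocess

# Returns (exit_status, output), with the trailing newline stripped
status, output = subprocess.getstatusoutput('echo hello')
print(status)  # 0
print(output)  # hello
```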

回答 2

像这样:

def runProcess(exe):    
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while(True):
        # returns None while subprocess is running
        retcode = p.poll() 
        line = p.stdout.readline()
        yield line
        if retcode is not None:
            break

请注意,我正在将stderr重定向到stdout,它可能并非您想要的,但我也想要错误消息。

此函数逐行产生(通常,您必须等待子进程完成才能获得整体输出)。

对于您的情况,用法是:

for line in runProcess('mysqladmin create test -uroot -pmysqladmin12'.split()):
    print line,

Something like that:

def runProcess(exe):    
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while(True):
        # returns None while subprocess is running
        retcode = p.poll() 
        line = p.stdout.readline()
        yield line
        if retcode is not None:
            break

Note, that I’m redirecting stderr to stdout, it might not be exactly what you want, but I want error messages also.

This function yields line by line as they come (normally you’d have to wait for subprocess to finish to get the output as a whole).

For your case the usage would be:

for line in runProcess('mysqladmin create test -uroot -pmysqladmin12'.split()):
    print line,

回答 3

Vartec的答案无法读取所有行,因此我制作了一个可以读取的版本:

def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

用法与接受的答案相同:

command = 'mysqladmin create test -uroot -pmysqladmin12'.split()
for line in run_command(command):
    print(line)

Vartec’s answer doesn’t read all lines, so I made a version that did:

def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

Usage is the same as the accepted answer:

command = 'mysqladmin create test -uroot -pmysqladmin12'.split()
for line in run_command(command):
    print(line)
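To see what the iterator yields, here is a self-contained sketch using a trivial command (echo is assumed to be available):

```python
import subprocess

def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

# Each iteration yields one raw line of output, as bytes
lines = list(run_command(['echo', 'hello']))
print(lines)  # [b'hello\n']
```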

回答 4

这是一个棘手超级简单的解决方案,可在许多情况下使用:

import os
os.system('sample_cmd > tmp')
print open('tmp', 'r').read()

使用命令的输出创建一个临时文件(这里是tmp),您可以从中读取所需的输出。

注释中的额外说明:如果是一次性作业,则可以删除tmp文件。如果您需要多次执行此操作,则无需删除tmp。

os.remove('tmp')

This is a tricky but super simple solution which works in many situations:

import os
os.system('sample_cmd > tmp')
print open('tmp', 'r').read()

A temporary file(here is tmp) is created with the output of the command and you can read from it your desired output.

Extra note from the comments: You can remove the tmp file in the case of one-time job. If you need to do this several times, there is no need to delete the tmp.

os.remove('tmp')
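A slightly safer variant of the same trick uses the tempfile module, so concurrent runs don't clobber each other's tmp file (a sketch; the echoed string is just an illustration):

```python
import os
import subprocess
import tempfile

# Redirect the command's stdout into a uniquely named temporary file
with tempfile.NamedTemporaryFile(mode='w+', delete=False) as tmp:
    subprocess.call('echo sample output', shell=True, stdout=tmp)

with open(tmp.name) as f:
    print(f.read().strip())  # sample output

os.remove(tmp.name)  # one-time job: clean up afterwards
```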

回答 5

我遇到了同样的问题,但是想出了一种非常简单的方法:

import subprocess
output = subprocess.getoutput("ls -l")
print(output)

希望能帮上忙

注意:此解决方案特定于Python3,因为subprocess.getoutput()在Python2中不起作用。

I had the same problem but figured out a very simple way of doing this:

import subprocess
output = subprocess.getoutput("ls -l")
print(output)

Hope it helps out

Note: This solution is Python3 specific as subprocess.getoutput() doesn’t work in Python2


回答 6

您可以使用以下命令来运行任何shell命令。我在ubuntu上使用过它们。

import os
os.popen('your command here').read()

注意:自python 2.6起不推荐使用。现在,您必须使用subprocess.Popen。以下是示例

import subprocess

p = subprocess.Popen("Your command", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")

You can use following commands to run any shell command. I have used them on ubuntu.

import os
os.popen('your command here').read()

Note: This is deprecated since python 2.6. Now you must use subprocess.Popen. Below is the example

import subprocess

p = subprocess.Popen("Your command", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")

回答 7

效果可能因环境而异。我在Windows上的Python 2.6.5中尝试了@senderle对Vartec解决方案的改写,但遇到了错误,其他解决方案也都不起作用。我的错误是:WindowsError: [Error 6] The handle is invalid。

我发现必须将PIPE分配给每个句柄才能使其返回我期望的输出-以下内容对我有用。

import subprocess

def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    return subprocess.Popen(cmd, 
                            stdout=subprocess.PIPE, 
                            stderr=subprocess.PIPE, 
                            stdin=subprocess.PIPE).communicate()

并像这样调用([0]获取元组的第一个元素stdout):

run_command('tracert 11.1.0.1')[0]

学习更多之后,我相信我需要这些管道参数,因为我正在使用不同句柄的自定义系统上工作,因此必须直接控制所有std。

要停止控制台弹出窗口(在Windows中),请执行以下操作:

def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    # instantiate a startupinfo obj:
    startupinfo = subprocess.STARTUPINFO()
    # set the use show window flag, might make conditional on being in Windows:
    startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    # pass as the startupinfo keyword argument:
    return subprocess.Popen(cmd,
                            stdout=subprocess.PIPE, 
                            stderr=subprocess.PIPE, 
                            stdin=subprocess.PIPE, 
                            startupinfo=startupinfo).communicate()

run_command('tracert 11.1.0.1')

Your Mileage May Vary, I attempted @senderle’s spin on Vartec’s solution in Windows on Python 2.6.5, but I was getting errors, and no other solutions worked. My error was: WindowsError: [Error 6] The handle is invalid.

I found that I had to assign PIPE to every handle to get it to return the output I expected – the following worked for me.

import subprocess

def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    return subprocess.Popen(cmd, 
                            stdout=subprocess.PIPE, 
                            stderr=subprocess.PIPE, 
                            stdin=subprocess.PIPE).communicate()

and call like this, ([0] gets the first element of the tuple, stdout):

run_command('tracert 11.1.0.1')[0]

After learning more, I believe I need these pipe arguments because I’m working on a custom system that uses different handles, so I had to directly control all the std’s.

To stop console popups (with Windows), do this:

def run_command(cmd):
    """given shell command, returns communication tuple of stdout and stderr"""
    # instantiate a startupinfo obj:
    startupinfo = subprocess.STARTUPINFO()
    # set the use show window flag, might make conditional on being in Windows:
    startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    # pass as the startupinfo keyword argument:
    return subprocess.Popen(cmd,
                            stdout=subprocess.PIPE, 
                            stderr=subprocess.PIPE, 
                            stdin=subprocess.PIPE, 
                            startupinfo=startupinfo).communicate()

run_command('tracert 11.1.0.1')

回答 8

我遇到的是同一个问题的一个略有不同的变体,需求如下:

  1. 当STDOUT消息在STDOUT缓冲区中累积时(即实时)捕获并返回它们。
    • @vartec通过使用生成器和
      上面的’yield’ 关键字以Python方式解决了这个问题
  2. 打印所有STDOUT行(即使在可以完全读取STDOUT缓冲区之前退出进程
  3. 不要浪费CPU周期以高频率轮询进程
  4. 检查子流程的返回码
  5. 如果得到非零错误返回码,则打印STDERR(与STDOUT分开)。

我结合并调整了先前的答案,以得出以下结论:

import subprocess
from time import sleep

def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         shell=True)
    # Read stdout from subprocess until the buffer is empty !
    for line in iter(p.stdout.readline, b''):
        if line: # Don't print blank lines
            yield line
    # This ensures the process has completed, AND sets the 'returncode' attr
    while p.poll() is None:                                                                                                                                        
        sleep(.1) #Don't waste CPU-cycles
    # Empty STDERR buffer
    err = p.stderr.read()
    if p.returncode != 0:
       # The run_command() function is responsible for logging STDERR 
       print("Error: " + str(err))

此代码将与以前的答案相同地执行:

for line in run_command(cmd):
    print(line)

I had a slightly different flavor of the same problem with the following requirements:

  1. Capture and return STDOUT messages as they accumulate in the STDOUT buffer (i.e. in realtime).
    • @vartec solved this Pythonically with his use of generators and the ‘yield’
      keyword above
  2. Print all STDOUT lines (even if process exits before STDOUT buffer can be fully read)
  3. Don’t waste CPU cycles polling the process at high-frequency
  4. Check the return code of the subprocess
  5. Print STDERR (separate from STDOUT) if we get a non-zero error return code.

I’ve combined and tweaked previous answers to come up with the following:

import subprocess
from time import sleep

def run_command(command):
    p = subprocess.Popen(command,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         shell=True)
    # Read stdout from subprocess until the buffer is empty !
    for line in iter(p.stdout.readline, b''):
        if line: # Don't print blank lines
            yield line
    # This ensures the process has completed, AND sets the 'returncode' attr
    while p.poll() is None:                                                                                                                                        
        sleep(.1) #Don't waste CPU-cycles
    # Empty STDERR buffer
    err = p.stderr.read()
    if p.returncode != 0:
       # The run_command() function is responsible for logging STDERR 
       print("Error: " + str(err))

This code would be executed the same as previous answers:

for line in run_command(cmd):
    print(line)

回答 9

为subprocess拆分初始命令可能会很棘手且麻烦。

可以使用shlex.split()来帮助自己。

样例命令

git log -n 5 --since "5 years ago" --until "2 year ago"

编码

from subprocess import check_output
from shlex import split

res = check_output(split('git log -n 5 --since "5 years ago" --until "2 year ago"'))
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'

如果不使用shlex.split(),代码看起来如下:

res = check_output([
    'git', 
    'log', 
    '-n', 
    '5', 
    '--since', 
    '5 years ago', 
    '--until', 
    '2 year ago'
])
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'

Splitting the initial command for the subprocess might be tricky and cumbersome.

Use shlex.split() to help yourself out.

Sample command

git log -n 5 --since "5 years ago" --until "2 year ago"

The code

from subprocess import check_output
from shlex import split

res = check_output(split('git log -n 5 --since "5 years ago" --until "2 year ago"'))
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'

Without shlex.split() the code would look as follows

res = check_output([
    'git', 
    'log', 
    '-n', 
    '5', 
    '--since', 
    '5 years ago', 
    '--until', 
    '2 year ago'
])
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
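The quoting behaviour is easy to verify directly (a small sketch):

```python
from shlex import split

# Double-quoted arguments survive as single list items
args = split('git log -n 5 --since "5 years ago" --until "2 year ago"')
print(args)
# ['git', 'log', '-n', '5', '--since', '5 years ago', '--until', '2 year ago']
```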

回答 10

如果您需要在多个文件上运行一个shell命令,那么这对我就成功了。

import os
import subprocess

# Define a function for running commands and capturing stdout line by line
# (Modified from Vartec's solution because it wasn't printing all lines)
def runProcess(exe):    
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

# Get all filenames in working directory
for filename in os.listdir('./'):
    # This command will be run on each file
    cmd = 'nm ' + filename

    # Run the command and capture the output line by line.
    for line in runProcess(cmd.split()):
        # Eliminate leading and trailing whitespace
        line.strip()
        # Split the output 
        output = line.split()

        # Filter the output and print relevant lines
        if len(output) > 2:
            if ((output[2] == 'set_program_name')):
                print filename
                print line

编辑:刚刚看到了采纳J.F. Sebastian建议的Max Persson的解决方案,已将其纳入。

If you need to run a shell command on multiple files, this did the trick for me.

import os
import subprocess

# Define a function for running commands and capturing stdout line by line
# (Modified from Vartec's solution because it wasn't printing all lines)
def runProcess(exe):    
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return iter(p.stdout.readline, b'')

# Get all filenames in working directory
for filename in os.listdir('./'):
    # This command will be run on each file
    cmd = 'nm ' + filename

    # Run the command and capture the output line by line.
    for line in runProcess(cmd.split()):
        # Eliminate leading and trailing whitespace
        line.strip()
        # Split the output 
        output = line.split()

        # Filter the output and print relevant lines
        if len(output) > 2:
            if ((output[2] == 'set_program_name')):
                print filename
                print line

Edit: Just saw Max Persson’s solution with J.F. Sebastian’s suggestion. Went ahead and incorporated that.


回答 11

根据@senderle,如果您像我一样使用python3.6:

def sh(cmd, input=""):
    rst = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, input=input.encode("utf-8"))
    assert rst.returncode == 0, rst.stderr.decode("utf-8")
    return rst.stdout.decode("utf-8")
sh("ls -a")

就像您在bash中运行命令一样

According to @senderle, if you use python3.6 like me:

def sh(cmd, input=""):
    rst = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, input=input.encode("utf-8"))
    assert rst.returncode == 0, rst.stderr.decode("utf-8")
    return rst.stdout.decode("utf-8")
sh("ls -a")

Will act exactly like you run the command in bash


回答 12

如果您使用subprocess python模块,则可以分别处理命令的STDOUT、STDERR和返回码。您可以看到一个完整的命令调用程序实现示例。当然,您可以根据需要用try..except对其进行扩展。

下面的函数返回STDOUT,STDERR和Return代码,因此您可以在其他脚本中处理它们。

import subprocess

def command_caller(command=None):
    sp = subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
    out, err = sp.communicate()
    if sp.returncode:
        print(
            "Return code: %(ret_code)s Error message: %(err_msg)s"
            % {"ret_code": sp.returncode, "err_msg": err}
            )
    return sp.returncode, out, err

If you use the subprocess python module, you are able to handle the STDOUT, STDERR and return code of command separately. You can see an example for the complete command caller implementation. Of course you can extend it with try..except if you want.

The below function returns the STDOUT, STDERR and Return code so you can handle them in the other script.

import subprocess

def command_caller(command=None):
    sp = subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
    out, err = sp.communicate()
    if sp.returncode:
        print(
            "Return code: %(ret_code)s Error message: %(err_msg)s"
            % {"ret_code": sp.returncode, "err_msg": err}
            )
    return sp.returncode, out, err
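A hypothetical usage of such a wrapper, repeating the definition so the sketch runs standalone (the echo command is just an illustration):

```python
import subprocess

def command_caller(command=None):
    sp = subprocess.Popen(command, stderr=subprocess.PIPE,
                          stdout=subprocess.PIPE, shell=False)
    out, err = sp.communicate()
    if sp.returncode:
        print("Return code: %(ret_code)s Error message: %(err_msg)s"
              % {"ret_code": sp.returncode, "err_msg": err})
    return sp.returncode, out, err

ret, out, err = command_caller(['echo', 'hi'])
print(ret)  # 0
print(out)  # b'hi\n'
```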

回答 13

例如,execute('ls -ahl') 区分了三/四种可能的返回情况以及操作系统平台:

  1. 无输出,但运行成功
  2. 输出空行,运行成功
  3. 运行失败
  4. 输出一些东西,成功运行

功能如下

def execute(cmd, output=True, DEBUG_MODE=False):
    """Executes a bash command.
    (cmd, output=True)
    output: whether to print shell output to screen; only affects screen display, does not affect returned values
    return: ...regardless of output=True/False...
            returns shell output as a list, with each element a line of string (whitespace stripped on both sides) from output
            could be
            [], ie, len()=0 --> no output;
            [''] --> output empty line;
            None --> error occurred, see below

            if an error occurs, returns None (ie, is None) and prints the error message to screen
    """
    if not DEBUG_MODE:
        print "Command: " + cmd

    # https://stackoverflow.com/a/40139101/2292993
    def _execute_cmd(cmd):
        if os.name == 'nt' or platform.system() == 'Windows':
            # set stdin, out, err all to PIPE to get results (other than None) after run the Popen() instance
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        else:
            # Use bash; the default is sh
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, executable="/bin/bash")

        # the Popen() instance starts running once instantiated (??)
        # additionally, communicate(), or poll() and wait process to terminate
        # communicate() accepts optional input as stdin to the pipe (requires setting stdin=subprocess.PIPE above), return out, err as tuple
        # if communicate(), the results are buffered in memory

        # Read stdout from subprocess until the buffer is empty !
        # if error occurs, the stdout is '', which means the below loop is essentially skipped
        # A prefix of 'b' or 'B' is ignored in Python 2; 
        # it indicates that the literal should become a bytes literal in Python 3 
        # (e.g. when code is automatically converted with 2to3).
        # return iter(p.stdout.readline, b'')
        for line in iter(p.stdout.readline, b''):
            # # Windows has \r\n, Unix has \n, Old mac has \r
            # if line not in ['','\n','\r','\r\n']: # Don't print blank lines
                yield line
        while p.poll() is None:                                                                                                                                        
            sleep(.1) #Don't waste CPU-cycles
        # Empty STDERR buffer
        err = p.stderr.read()
        if p.returncode != 0:
            # responsible for logging STDERR 
            print("Error: " + str(err))
            yield None

    out = []
    for line in _execute_cmd(cmd):
        # error did not occur earlier
        if line is not None:
            # trailing comma to avoid a newline (by print itself) being printed
            if output: print line,
            out.append(line.strip())
        else:
            # error occured earlier
            out = None
    return out
else:
    print "Simulation! The command is " + cmd
    print ""

eg, execute(‘ls -ahl’) differentiated three/four possible returns and OS platforms:

  1. no output, but run successfully
  2. output empty line, run successfully
  3. run failed
  4. output something, run successfully

function below

# Python 2 code; requires:
# import os, platform, subprocess
# from time import sleep

def execute(cmd, output=True, DEBUG_MODE=False):
    """Executes a bash command.
    (cmd, output=True)
    output: whether to print shell output to screen; only affects screen display, not the returned values
    return: ...regardless of output=True/False...
            returns shell output as a list where each element is a line of string (whitespace stripped on both sides)
            could be
            [], ie, len()=0 --> no output;
            [''] --> output empty line;
            None --> error occurred, see below

            if an error occurs, returns None (ie, is None) and prints the error message to screen
    """
    if not DEBUG_MODE:
        print "Command: " + cmd

        # https://stackoverflow.com/a/40139101/2292993
        def _execute_cmd(cmd):
            if os.name == 'nt' or platform.system() == 'Windows':
                # set stdout and stderr to PIPE to get results (other than None) after running the Popen() instance
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
            else:
                # Use bash; the default is sh
                p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, executable="/bin/bash")

            # the Popen() instance starts running once instantiated;
            # then communicate(), or poll() and wait for the process to terminate.
            # communicate() accepts optional input as stdin to the pipe (requires stdin=subprocess.PIPE above) and returns (out, err) as a tuple;
            # with communicate(), the results are buffered in memory.

            # Read stdout from the subprocess until the buffer is empty!
            # If an error occurs, stdout is '', which means the loop below is essentially skipped.
            # A prefix of 'b' or 'B' is ignored in Python 2;
            # it indicates that the literal should become a bytes literal in Python 3
            # (e.g. when code is automatically converted with 2to3).
            for line in iter(p.stdout.readline, b''):
                # Windows has \r\n, Unix has \n, old Mac has \r
                yield line
            while p.poll() is None:
                sleep(.1)  # Don't waste CPU cycles
            # Empty the STDERR buffer
            err = p.stderr.read()
            if p.returncode != 0:
                # responsible for logging STDERR
                print("Error: " + str(err))
                yield None

        out = []
        for line in _execute_cmd(cmd):
            if line is not None:
                # trailing comma to avoid print adding its own newline
                if output: print line,
                out.append(line.strip())
            else:
                # an error occurred earlier
                out = None
        return out
    else:
        print "Simulation! The command is " + cmd
        print ""

回答 14

可以将输出重定向到文本文件,然后将其读回。

import subprocess
import os
import tempfile

def execute_to_file(command):
    """
    Execute the command and redirect its output to a tempfile,
    then read the tempfile back.
    Useful for processes that spawn child processes.
    """
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    temp_file.close()
    path = temp_file.name
    command = command + " > " + path
    proc = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
    if proc.stderr:
        # the command failed; clean up and return None
        os.unlink(path)
        return None
    with open(path, 'r') as f:
        data = f.read()
    os.unlink(path)
    return data

if __name__ == "__main__":
    path = "Somepath"
    command = 'ecls.exe /files ' + path
    print(execute_to_file(command))

The output can be redirected to a text file and then read it back.

import subprocess
import os
import tempfile

def execute_to_file(command):
    """
    Execute the command and redirect its output to a tempfile,
    then read the tempfile back.
    Useful for processes that spawn child processes.
    """
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    temp_file.close()
    path = temp_file.name
    command = command + " > " + path
    proc = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
    if proc.stderr:
        # the command failed; clean up and return None
        os.unlink(path)
        return None
    with open(path, 'r') as f:
        data = f.read()
    os.unlink(path)
    return data

if __name__ == "__main__":
    path = "Somepath"
    command = 'ecls.exe /files ' + path
    print(execute_to_file(command))
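If the temp-file round trip is not required, `subprocess.run` (Python 3.5+) can capture stdout directly; a minimal sketch with a hypothetical `execute_capture` helper, assuming a POSIX shell:

```python
import subprocess

def execute_capture(command):
    """Run a shell command and return its stdout, or None on failure."""
    proc = subprocess.run(command, shell=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          universal_newlines=True)
    if proc.returncode != 0:
        return None
    return proc.stdout

data = execute_capture("echo captured")
print(data)  # captured
```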

回答 15

刚刚写了一个小的bash脚本来使用curl做到这一点

https://gist.github.com/harish2704/bfb8abece94893c53ce344548ead8ba5

#!/usr/bin/env bash

# Usage: gdrive_dl.sh <url>

urlBase='https://drive.google.com'
fCookie=tmpcookies

curl="curl -L -b $fCookie -c $fCookie"
confirm(){
    $curl "$1" | grep jfk-button-action | sed -e 's/.*jfk-button-action" href="\(\S*\)".*/\1/' -e 's/\&amp;/\&/g'
}

$curl -O -J "${urlBase}$(confirm $1)"

just wrote a small bash script to do this using curl

https://gist.github.com/harish2704/bfb8abece94893c53ce344548ead8ba5

#!/usr/bin/env bash

# Usage: gdrive_dl.sh <url>

urlBase='https://drive.google.com'
fCookie=tmpcookies

curl="curl -L -b $fCookie -c $fCookie"
confirm(){
    $curl "$1" | grep jfk-button-action | sed -e 's/.*jfk-button-action" href="\(\S*\)".*/\1/' -e 's/\&amp;/\&/g'
}

$curl -O -J "${urlBase}$(confirm $1)"

我应该在Python脚本中放置#!(shebang)吗?它应该采用什么形式?

问题:我应该在Python脚本中放置#!(shebang)吗?它应该采用什么形式?

我应该把shebang放到我的Python脚本中吗?以什么形式?

#!/usr/bin/env python 

要么

#!/usr/local/bin/python

这些同样便携吗?最常用哪种形式?

注:Tornado项目使用了shebang。另一方面,Django项目没有。

Should I put the shebang in my Python scripts? In what form?

#!/usr/bin/env python 

or

#!/usr/local/bin/python

Are these equally portable? Which form is used most?

Note: the tornado project uses the shebang. On the other hand the Django project doesn’t.


回答 0

任何脚本中的shebang行决定了脚本能否像独立可执行文件一样运行,而无需事先在终端中键入python,或在文件管理器中双击它(配置正确时)。它不是必需的,但通常会放在那里,这样当有人在编辑器中打开文件时,能立即知道自己在看什么。不过,您使用哪一行shebang很重要。

Python 3脚本的正确用法是:

#!/usr/bin/env python3

这默认指向3.x的最新版本。对于Python 2.7.latest,请用python2代替python3。

不应使用以下内容(除了极少数情况下,您正在编写与Python 2.x和3.x兼容的代码):

#!/usr/bin/env python

PEP 394中给出这些建议的原因是,python在不同系统上可能指向python2或python3。目前它在大多数发行版中指向python2,但这在某个时候可能会改变。

另外,请勿使用:

#!/usr/local/bin/python

“在这种情况下,python可能安装在/usr/bin/python或/bin/python,上述#!将失败。”

“#!/ usr / bin / env python”与“#!/ usr / local / bin / python”

The shebang line in any script determines the script’s ability to be executed like a standalone executable without typing python beforehand in the terminal or when double clicking it in a file manager (when configured properly). It isn’t necessary but generally put there so when someone sees the file opened in an editor, they immediately know what they’re looking at. However, which shebang line you use IS important.

Correct usage for Python 3 scripts is:

#!/usr/bin/env python3

This defaults to version 3.latest. For Python 2.7.latest use python2 in place of python3.

The following should NOT be used (except for the rare case that you are writing code which is compatible with both Python 2.x and 3.x):

#!/usr/bin/env python

The reason for these recommendations, given in PEP 394, is that python can refer either to python2 or python3 on different systems. It currently refers to python2 on most distributions, but that is likely to change at some point.

Also, DO NOT Use:

#!/usr/local/bin/python

“python may be installed at /usr/bin/python or /bin/python in those cases, the above #! will fail.”

“#!/usr/bin/env python” vs “#!/usr/local/bin/python”


回答 1

这实际上只是一个品味问题。添加shebang意味着人们可以根据需要直接调用脚本(假设它被标记为可执行文件);省略它只是意味着必须手动调用python。

无论哪种方式,运行该程序的最终结果都不会受到影响。这只是手段的选择。

It’s really just a matter of taste. Adding the shebang means people can invoke the script directly if they want (assuming it’s marked as executable); omitting it just means python has to be invoked manually.

The end result of running the program isn’t affected either way; it’s just options of the means.


回答 2

我应该把shebang放到我的Python脚本中吗?

将shebang放入Python脚本中以指示:

  • 该模块可以作为脚本运行
  • 它只能在python2或python3上运行,还是与Python 2/3都兼容
  • 在POSIX上,如果要直接运行脚本而不显式调用python可执行文件,则shebang是必需的

这些同样便携吗?最常用哪种形式?

如果您手动编写shebang,请始终使用#!/usr/bin/env python,除非有特殊原因不使用它。即使在Windows(Python启动器)上也能理解这种形式。

注意:已安装的脚本应使用特定的python可执行文件,例如/usr/bin/python或/home/me/.virtualenvs/project/bin/python。如果某个工具在您于Shell中激活virtualenv后就失效,那是很糟糕的。幸运的是,在大多数情况下,正确的shebang会由setuptools或您的发行版打包工具自动创建(在Windows上,setuptools可以自动生成.exe包装器脚本)。

换句话说,如果脚本在源代码检出中,则可能会看到#!/usr/bin/env python。如果已安装,则shebang是特定python可执行文件的路径,例如#!/usr/local/bin/python(注意:后一类路径不应手动编写)。

要选择在shebang中应该使用python、python2还是python3,请参见PEP 394(类Unix系统中的“python”命令):

  • python应仅用于与Python 2和3源兼容的脚本的shebang行。

  • 为准备Python默认版本的最终更改,仅支持Python 2的脚本应更新为与Python 3源兼容,否则应在shebang行中使用python2。

Should I put the shebang in my Python scripts?

Put a shebang into a Python script to indicate:

  • this module can be run as a script
  • whether it can be run only on python2, python3 or is it Python 2/3 compatible
  • on POSIX, it is necessary if you want to run the script directly without invoking python executable explicitly

Are these equally portable? Which form is used most?

If you write a shebang manually then always use #!/usr/bin/env python unless you have a specific reason not to use it. This form is understood even on Windows (Python launcher).

Note: installed scripts should use a specific python executable e.g., /usr/bin/python or /home/me/.virtualenvs/project/bin/python. It is bad if some tool breaks if you activate a virtualenv in your shell. Luckily, the correct shebang is created automatically in most cases by setuptools or your distribution package tools (on Windows, setuptools can generate wrapper .exe scripts automatically).

In other words, if the script is in a source checkout then you will probably see #!/usr/bin/env python. If it is installed then the shebang is a path to a specific python executable such as #!/usr/local/bin/python (NOTE: you should not write the paths from the latter category manually).

To choose whether you should use python, python2, or python3 in the shebang, see PEP 394 – The “python” Command on Unix-Like Systems:

  • python should be used in the shebang line only for scripts that are source compatible with both Python 2 and 3.

  • in preparation for an eventual change in the default version of Python, Python 2 only scripts should either be updated to be source compatible with Python 3 or else to use python2 in the shebang line.
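The `$PATH` search that `#!/usr/bin/env python3` performs can be reproduced in Python with `shutil.which` (a sketch; the printed path depends on what is installed):

```python
import shutil

# env-style lookup: the first matching executable on $PATH, or None if absent
interp = shutil.which("python3")
print(interp)
```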


回答 3

如果您有多个版本的Python,并且脚本需要在特定版本下运行,那么在直接执行脚本时,she-bang可以确保使用正确的版本,例如:

#!/usr/bin/python2.7

请注意,脚本仍然可以通过完整的Python命令行或通过import运行,在这种情况下,she-bang会被忽略。但是对于直接运行的脚本,这是使用she-bang的一个不错的理由。

#!/usr/bin/env python 通常是更好的方法,但这在特殊情况下会有所帮助。

通常,最好建立一个Python虚拟环境,在这种情况下,泛型#!/usr/bin/env python将为virtualenv标识正确的Python实例。

If you have more than one version of Python and the script needs to run under a specific version, the she-bang can ensure the right one is used when the script is executed directly, for example:

#!/usr/bin/python2.7

Note the script could still be run via a complete Python command line, or via import, in which case the she-bang is ignored. But for scripts run directly, this is a decent reason to use the she-bang.

#!/usr/bin/env python is generally the better approach, but this helps with special cases.

Usually it would be better to establish a Python virtual environment, in which case the generic #!/usr/bin/env python would identify the correct instance of Python for the virtualenv.


回答 4

如果脚本旨在作为可执行文件,则应添加shebang。您还应该使用会把shebang修改为正确值的安装软件来安装脚本,使其能在目标平台上运行。distutils和Distribute就是这样的例子。

You should add a shebang if the script is intended to be executable. You should also install the script with an installing software that modifies the shebang to something correct so it will work on the target platform. Examples of this is distutils and Distribute.


回答 5

shebang的目的是让脚本在您要从外壳执行脚本时识别解释器类型。通常,并非总是如此,您可以通过从外部提供解释器来执行脚本。用法示例:python-x.x script.py

即使您没有shebang声明符,这也将起作用。

第一种形式之所以更“便携”,是因为/usr/bin/env会使用您的PATH声明,其中包含系统可执行文件所在的所有位置。

注意:Tornado并不严格使用shebang,而Django则严格不使用。这取决于您执行应用程序主函数的方式。

另外:这一点并不因Python而异。

The purpose of shebang is for the script to recognize the interpreter type when you want to execute the script from the shell. Mostly, and not always, you execute scripts by supplying the interpreter externally. Example usage: python-x.x script.py

This will work even if you don’t have a shebang declarator.

Why first one is more “portable” is because, /usr/bin/env contains your PATH declaration which accounts for all the destinations where your system executables reside.

NOTE: Tornado doesn’t strictly use shebangs, and Django strictly doesn’t. It varies with how you are executing your application’s main function.

ALSO: It doesn’t vary with Python.


回答 6

有时,如果答案不是很清楚(我的意思是,你无法决定是或否),那么它并不太重要,你可以忽略这个问题,直到答案变得清楚。

#!唯一目的是为了启动脚本。Django会自行加载并使用源。不需要决定使用哪种解释器。这样,#!这里实际上没有任何意义。

通常,如果它是一个模块并且不能用作脚本,则无需使用#!。另一方面,模块源通常包含if __name__ == '__main__': ...至少一些琐碎的功能测试。然后#!再次有意义。

使用#!的一个好理由是当您同时使用Python 2和Python 3脚本时:它们必须由不同版本的Python解释。这样,您必须记住在手动启动脚本(文件内没有#!)时该使用哪个python。如果混合使用这类脚本,最好在文件内使用#!,将它们设为可执行,并作为可执行文件启动(chmod …)。

使用MS-Windows时,#!直到最近才有意义。Python 3.3引入了Windows Python启动器(py.exe和pyw.exe),该启动器读取#!行,检测已安装的Python版本并使用正确或明确需要的Python版本。由于扩展可以与程序相关联,因此在Windows中可以获得与基于Unix的系统中的execute标志类似的行为。

Sometimes, if the answer is not very clear (I mean you cannot decide if yes or no), then it does not matter too much, and you can ignore the problem until the answer is clear.

The #! only purpose is for launching the script. Django loads the sources on its own and uses them. It never needs to decide what interpreter should be used. This way, the #! actually makes no sense here.

Generally, if it is a module and cannot be used as a script, there is no need for using the #!. On the other hand, a module source often contains if __name__ == '__main__': ... with at least some trivial testing of the functionality. Then the #! makes sense again.

One good reason for using #! is when you use both Python 2 and Python 3 scripts — they must be interpreted by different versions of Python. This way, you have to remember what python must be used when launching the script manually (without the #! inside). If you have a mixture of such scripts, it is a good idea to use the #! inside, make them executable, and launch them as executables (chmod …).

When using MS-Windows, the #! had no sense — until recently. Python 3.3 introduces a Windows Python Launcher (py.exe and pyw.exe) that reads the #! line, detects the installed versions of Python, and uses the correct or explicitly wanted version of Python. As the extension can be associated with a program, you can get similar behaviour in Windows as with execute flag in Unix-based systems.


回答 7

当我最近在Windows 7上安装Python 3.6.1时,它还安装了Windows的Python启动器,该应用程序应该可以处理shebang行。但是,我发现Python Launcher并没有做到这一点:shebang行被忽略,并且始终使用Python 2.7.13(除非我使用py -3执行脚本)。

要解决此问题,我必须编辑Windows注册表项HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Python.File\shell\open\command。该注册表项仍保留着值

"C:\Python27\python.exe" "%1" %*

这是我之前Python 2.7安装遗留下来的。我将此注册表项的值修改为

"C:\Windows\py.exe" "%1" %*

并且Python Launcher shebang行处理如上所述。

When I installed Python 3.6.1 on Windows 7 recently, it also installed the Python Launcher for Windows, which is supposed to handle the shebang line. However, I found that the Python Launcher did not do this: the shebang line was ignored and Python 2.7.13 was always used (unless I executed the script using py -3).

To fix this, I had to edit the Windows registry key HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Python.File\shell\open\command. This still had the value

"C:\Python27\python.exe" "%1" %*

from my earlier Python 2.7 installation. I modified this registry key value to

"C:\Windows\py.exe" "%1" %*

and the Python Launcher shebang line processing worked as described above.


回答 8

如果您安装了不同的模块,并且需要使用特定的python安装,那么shebang乍看之下会受到限制。但是,您可以使用下面的技巧,让脚本先作为shell脚本被调用,再由它选择python。在我看来这非常灵活:

#!/bin/sh
#
# Choose the python we need. Explanation:
# a) '''\' translates to \ in shell, and starts a python multi-line string
# b) "" strings are treated as string concat by python, shell ignores them
# c) "true" command ignores its arguments
# d) exit before the ending ''' so the shell reads no further
# e) reset docstrings to ignore the multiline comment code
#
"true" '''\'
PREFERRED_PYTHON=/Library/Frameworks/Python.framework/Versions/2.7/bin/python
ALTERNATIVE_PYTHON=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3
FALLBACK_PYTHON=python3

if [ -x $PREFERRED_PYTHON ]; then
    echo Using preferred python $PREFERRED_PYTHON
    exec $PREFERRED_PYTHON "$0" "$@"
elif [ -x $ALTERNATIVE_PYTHON ]; then
    echo Using alternative python $ALTERNATIVE_PYTHON
    exec $ALTERNATIVE_PYTHON "$0" "$@"
else
    echo Using fallback python $FALLBACK_PYTHON
    exec python3 "$0" "$@"
fi
exit 127
'''

__doc__ = """What this file does"""
print(__doc__)
import platform
print(platform.python_version())

或许更好的办法是,促进跨多个python脚本的代码重用:

#!/bin/bash
"true" '''\'; source $(cd $(dirname ${BASH_SOURCE[@]}) &>/dev/null && pwd)/select.sh; exec $CHOSEN_PYTHON "$0" "$@"; exit 127; '''

然后select.sh具有:

PREFERRED_PYTHON=/Library/Frameworks/Python.framework/Versions/2.7/bin/python
ALTERNATIVE_PYTHON=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3
FALLBACK_PYTHON=python3

if [ -x $PREFERRED_PYTHON ]; then
    CHOSEN_PYTHON=$PREFERRED_PYTHON
elif [ -x $ALTERNATIVE_PYTHON ]; then
    CHOSEN_PYTHON=$ALTERNATIVE_PYTHON
else
    CHOSEN_PYTHON=$FALLBACK_PYTHON
fi

If you have different modules installed and need to use a specific python install, then shebang appears to be limited at first. However, you can do tricks like the below to allow the shebang to be invoked first as a shell script and then choose python. This is very flexible imo:

#!/bin/sh
#
# Choose the python we need. Explanation:
# a) '''\' translates to \ in shell, and starts a python multi-line string
# b) "" strings are treated as string concat by python, shell ignores them
# c) "true" command ignores its arguments
# d) exit before the ending ''' so the shell reads no further
# e) reset docstrings to ignore the multiline comment code
#
"true" '''\'
PREFERRED_PYTHON=/Library/Frameworks/Python.framework/Versions/2.7/bin/python
ALTERNATIVE_PYTHON=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3
FALLBACK_PYTHON=python3

if [ -x $PREFERRED_PYTHON ]; then
    echo Using preferred python $PREFERRED_PYTHON
    exec $PREFERRED_PYTHON "$0" "$@"
elif [ -x $ALTERNATIVE_PYTHON ]; then
    echo Using alternative python $ALTERNATIVE_PYTHON
    exec $ALTERNATIVE_PYTHON "$0" "$@"
else
    echo Using fallback python $FALLBACK_PYTHON
    exec python3 "$0" "$@"
fi
exit 127
'''

__doc__ = """What this file does"""
print(__doc__)
import platform
print(platform.python_version())

Or better yet, perhaps, to facilitate code reuse across multiple python scripts:

#!/bin/bash
"true" '''\'; source $(cd $(dirname ${BASH_SOURCE[@]}) &>/dev/null && pwd)/select.sh; exec $CHOSEN_PYTHON "$0" "$@"; exit 127; '''

and then select.sh has:

PREFERRED_PYTHON=/Library/Frameworks/Python.framework/Versions/2.7/bin/python
ALTERNATIVE_PYTHON=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3
FALLBACK_PYTHON=python3

if [ -x $PREFERRED_PYTHON ]; then
    CHOSEN_PYTHON=$PREFERRED_PYTHON
elif [ -x $ALTERNATIVE_PYTHON ]; then
    CHOSEN_PYTHON=$ALTERNATIVE_PYTHON
else
    CHOSEN_PYTHON=$FALLBACK_PYTHON
fi

回答 9

答:仅当您计划使其成为命令行可执行脚本时。

步骤如下:

首先,验证要使用的适当的shebang字符串:

which python

从中获取输出,并在第一行中将其添加(带有shebang#!)。

在我的系统上,它的响应如下:

$which python
/usr/bin/python

因此,您的shebang将如下所示:

#!/usr/bin/python

保存后,它仍将像以前一样运行,因为python会将第一行视为注释。

python filename.py

要使其成为命令,请将其复制以删除.py扩展名。

cp filename.py filename

告诉文件系统这将是可执行的:

chmod +x filename

要测试它,请使用:

./filename

最佳实践是将其移动到$PATH中的某个位置,这样只需键入文件名本身即可运行。

sudo cp filename /usr/sbin

这样,它将可以在任何地方使用(文件名前没有./)

Answer: Only if you plan to make it a command-line executable script.

Here is the procedure:

Start off by verifying the proper shebang string to use:

which python

Take the output from that and add it (with the shebang #!) in the first line.

On my system it responds like so:

$which python
/usr/bin/python

So your shebang will look like:

#!/usr/bin/python

After saving, it will still run as before since python will see that first line as a comment.

python filename.py

To make it a command, copy it to drop the .py extension.

cp filename.py filename

Tell the file system that this will be executable:

chmod +x filename

To test it, use:

./filename

Best practice is to move it somewhere in your $PATH so all you need to type is the filename itself.

sudo cp filename /usr/sbin

That way it will work everywhere (without the ./ before the filename)
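The `chmod +x` step can also be done from Python; a sketch using `os`/`stat` on a throwaway temp file:

```python
import os
import stat
import tempfile

# write a tiny script with a shebang, then mark it executable
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("#!/usr/bin/env python3\nprint('hi')\n")

mode = os.stat(path).st_mode
os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

is_executable = os.access(path, os.X_OK)
print(is_executable)  # True
os.unlink(path)
```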


回答 10

绝对路径与逻辑路径:

关于可移植性,这实际上是一个Python解释器的路径应该是绝对路径还是逻辑路径(/usr/bin/env)的问题。

在这个及其他Stack网站上遇到的一些答案只是泛泛地讨论这个问题,并没有给出支持证据,因此我在unix.stackexchange.com上对这个问题进行了一些非常、非常细致的测试和分析。与其把那个答案粘贴到这里,不如把对比较分析感兴趣的人指向它:

https://unix.stackexchange.com/a/566019/334294

作为一名Linux工程师,我的目标始终是为我的开发人员客户提供最合适、最优化的主机,因此Python环境问题是我确实需要可靠答案的问题。测试后我的看法是:在这两个选项中,shebang中的逻辑路径更好。

Absolute vs Logical Path:

This is really a question about whether the path to the Python interpreter should be absolute or Logical (/usr/bin/env) in respect to portability.

Encountering other answers on this and other Stack sites which talked about the issue in a general way without supporting proofs, I’ve performed some really, REALLY, granular testing & analysis on this very question on the unix.stackexchange.com. Rather than paste that answer here, I’ll point those interested to the comparative analysis to that answer:

https://unix.stackexchange.com/a/566019/334294

Being a Linux Engineer, my goal is always to provide the most suitable, optimized hosts for my developer clients, so the issue of Python environments was something I really needed a solid answer to. My view after the testing was that the logical path in the she-bang was the better of the (2) options.


回答 11

首先使用

which python

这将输出python解释器(二进制文件)所在的位置。

此输出可以是任何这样的

/usr/bin/python

要么

/bin/python

现在,适当选择shebang行并使用它。

概括地说,我们可以使用:

#!/usr/bin/env

要么

#!/bin/env

Use first

which python

This will give the output as the location where my python interpreter (binary) is present.

This output could be any such as

/usr/bin/python

or

/bin/python

Now appropriately select the shebang line and use it.

To generalize we can use:

#!/usr/bin/env

or

#!/bin/env

Gitsome-增强型Git/GitHub命令行界面(CLI)。GitHub和GitHub企业的官方集成:https://github.com/works-with/category/desktop-tools

一个Official Integration对于GitHub和GitHub Enterprise

为什么gitsome

Git命令行

虽然标准的Git命令行是管理基于Git的仓库的绝佳工具,但要记住以下这些的用法可能很困难:

  • 150多个瓷器和管道命令
  • 无数特定于命令的选项
  • 标签和分支等资源

Git命令行不与GitHub集成,强制您在命令行和浏览器之间切换

gitsome-具有自动完成功能的增压Git/GitHub CLI

gitsome旨在通过专注于以下方面来增强您的标准git/shell界面:

  • 提高易用性
  • 提高工作效率

深度GitHub集成

并非所有GitHub工作流都适合在终端中使用;gitsome针对的是那些适合的工作流。

gitsome包括29个可在所有Shell中使用的GitHub集成命令:

$ gh <command> [param] [options]

将gh命令与Git-Extras和hub的命令配合使用,可解锁更多GitHub集成!


带有交互式帮助的Git和GitHub自动完成程序

您可以运行可选的Shell:

 $ gitsome

以便为以下内容启用自动完成和交互式帮助:



通用自动补全程序

gitsome自动完成以下内容:

  • Shell命令
  • 文件和目录
  • 环境变量
  • 手册页
  • python

要启用其他自动完成,请查看Enabling Bash Completions部分


鱼式自动建议

gitsome支持鱼式自动建议。使用右箭头键接受建议。


Python REPL

gitsome由xonsh驱动,后者支持Python REPL。

在shell命令旁边运行Python命令:


附加内容xonsh功能可在xonsh tutorial

命令历史记录

gitsome会跟踪您输入的命令并将其存储在~/.xonsh_history.json中。使用向上和向下箭头键循环浏览命令历史记录。


可自定义的突出显示

可以控制用于突出显示的ansi颜色,方法是更新~/.gitsomeconfig文件

颜色选项包括:

'black', 'red', 'green', 'yellow',
'blue', 'magenta', 'cyan', 'white'

对于无颜色,请将值设置为Nonewhite在某些终端上可能显示为浅灰色


可用的平台

gitsome适用于Mac、Linux、Unix、Windows,以及Docker

待办事项

并非所有GitHub工作流都适合在终端中使用;gitsome针对的是那些适合的工作流。

  • 添加其他GitHub API集成

gitsome才刚刚起步。欢迎随时contribute!

索引

GitHub集成命令

安装和测试

杂项

GitHub集成命令语法

用法:

$ gh <command> [param] [options]

GitHub集成命令列表

  configure            Configure gitsome.
  create-comment       Create a comment on the given issue.
  create-issue         Create an issue.
  create-repo          Create a repo.
  emails               List all the user's registered emails.
  emojis               List all GitHub supported emojis.
  feed                 List all activity for the given user or repo.
  followers            List all followers and the total follower count.
  following            List all followed users and the total followed count.
  gitignore-template   Output the gitignore template for the given language.
  gitignore-templates  Output all supported gitignore templates.
  issue                Output detailed information about the given issue.
  issues               List all issues matching the filter.
  license              Output the license template for the given license.
  licenses             Output all supported license templates.
  me                   List information about the logged in user.
  notifications        List all notifications.
  octo                 Output an Easter egg or the given message from Octocat.
  pull-request         Output detailed information about the given pull request.
  pull-requests        List all pull requests.
  rate-limit           Output the rate limit.  Not available for Enterprise.
  repo                 Output detailed information about the given filter.
  repos                List all repos matching the given filter.
  search-issues        Search for all issues matching the given query.
  search-repos         Search for all repos matching the given query.
  starred              Output starred repos.
  trending             List trending repos for the given language.
  user                 List information about the given user.
  view                 View the given index in the terminal or a browser.

GitHub集成命令参考:COMMANDS.md

请参阅COMMANDS.md中的GitHub Integration Commands Reference,其中详细讨论了所有GitHub集成命令、参数、选项和示例。

请查看下一节,了解快速参考

GitHub集成命令快速参考

配置gitsome

要与GitHub正确集成,您必须首先配置gitsome

$ gh configure

对于GitHub Enterprise用户,使用-e/--enterprise标志:

$ gh configure -e

列表源

列出您的新闻源

$ gh feed


列出用户的活动摘要

查看您或其他用户的活动订阅源,可以选择使用分页器-p/--pager。这个分页器选项可用于许多命令。

$ gh feed donnemartin -p


列出回购的活动提要

$ gh feed donnemartin/gitsome -p


列出通知

$ gh notifications


列出拉式请求

查看您的回购的所有拉式请求:

$ gh pull-requests


过滤问题

查看您提到的所有未决问题:

$ gh issues --issue_state open --issue_filter mentioned


查看所有问题,只筛选分配给您的问题,而不考虑状态(打开、关闭):

$ gh issues --issue_state all --issue_filter assigned

有关过滤器和状态限定符的更多信息,请参阅COMMANDS.md中的gh issues参考。

过滤星级报告

$ gh starred "repo filter"


搜索问题和报告

搜索问题

搜索+1最多的问题:

$ gh search-issues "is:open is:issue sort:reactions-+1-desc" -p


搜索评论最多的问题:

$ gh search-issues "is:open is:issue sort:comments-desc" -p

使用“需要帮助”标签搜索问题:

$ gh search-issues "is:open is:issue label:\"help wanted\"" -p

搜索提及了您的用户名@donnemartin的问题:

$ gh search-issues "is:issue donnemartin is:open" -p

搜索您所有未解决的私人问题:

$ gh search-issues "is:open is:issue is:private" -p

有关查询限定符的更多信息,请访问searching issues reference

搜索报告

搜索2015年或之后创建的所有Python repos,>=1000星:

$ gh search-repos "created:>=2015-01-01 stars:>=1000 language:python" --sort stars -p


有关查询限定符的更多信息,请访问searching repos reference

列出趋势报告和开发人员

查看趋势回购:

$ gh trending [language] [-w/--weekly] [-m/--monthly] [-d/--devs] [-b/--browser]


查看趋势DEV(目前仅浏览器支持DEV):

$ gh trending [language] --devs --browser

查看内容

这个view命令

查看前面列出的通知、拉取请求、问题、回复、用户等,HTML格式适合您的终端,也可以选择在您的浏览器中查看:

$ gh view [#] [-b/--browser]


这个issue命令

查看问题:

$ gh issue donnemartin/saws/1


这个pull-request命令

查看拉取请求:

$ gh pull-request donnemartin/awesome-aws/2


设置.gitignore

列出所有可用的.gitignore模板:

$ gh gitignore-templates


设置您的.gitignore

$ gh gitignore-template Python > .gitignore


设置LICENSE

列出所有可用的LICENSE模板:

$ gh licenses


设置您的LICENSE:

$ gh license MIT > LICENSE


召唤Octocat

让Octocat说出给定的消息或一个复活节彩蛋:

$ gh octo [say]


查看配置文件

查看用户的配置文件

$ gh user octocat


查看您的个人资料

使用gh user [YOUR_USER_ID]命令查看您的个人资料,或使用以下快捷方式:

$ gh me


创建评论、问题和报告

创建评论:

$ gh create-comment donnemartin/gitsome/1 -t "hello world"

创建问题:

$ gh create-issue donnemartin/gitsome -t "title" -b "body"

创建回购:

$ gh create-repo gitsome

选项:在分页器中查看

许多gh命令支持-p/--pager选项,在分页器中显示结果(如果可用)。

用法:

$ gh <command> [param] [options] -p
$ gh <command> [param] [options] --pager

选项:在浏览器中查看

许多gh命令支持-b/--browser选项,在默认浏览器(而不是终端)中显示结果。

用法:

$ gh <command> [param] [options] -b
$ gh <command> [param] [options] --browser

请参阅COMMANDS.md有关所有GitHub集成命令、参数、选项和示例的详细列表

记住这些命令有困难吗?试试带交互式帮助的自动补全器(autocompleter with interactive help),它会指导您完成每个命令。

注意,您可以将gitsome与其他实用程序(如Git-Extras)一起使用。

安装

PIP安装


gitsome托管在PyPI上。以下命令将安装gitsome:

$ pip3 install gitsome

您还可以从GitHub源安装最新的gitsome,其中可能包含尚未推送到PyPI的更改:

$ pip3 install git+https://github.com/donnemartin/gitsome.git

如果您没有安装在virtualenv中,可能需要使用sudo运行:

$ sudo pip3 install gitsome

pip3

根据您的设置,您可能还希望使用-H标志运行pip3:

$ sudo -H pip3 install gitsome

对于大多数Linux用户来说,pip3可以通过python3-pip软件包安装。

例如,Ubuntu用户可以运行:

$ sudo apt-get install python3-pip

有关更多详细信息,请参阅这个ticket。

虚拟环境安装

您可以将Python包安装在virtualenv中,以避免依赖项或权限方面的潜在问题。

如果您是Windows用户,或者想了解有关virtualenv的更多信息,请查看这个guide。

安装virtualenvvirtualenvwrapper

$ pip3 install virtualenv
$ pip3 install virtualenvwrapper
$ export WORKON_HOME=~/.virtualenvs
$ source /usr/local/bin/virtualenvwrapper.sh

Create a gitsome virtualenv and install gitsome:

$ mkvirtualenv gitsome
$ pip3 install gitsome

If the pip install does not work, you might be running Python 2 by default. Check what version of Python you are running:

$ python --version

If the above call results in Python 2, find the path for Python 3:

$ which python3  # Python 3 path for mkvirtualenv's --python option

Install Python 3 if needed. Set the Python version when calling mkvirtualenv:

$ mkvirtualenv --python [Python 3 path from above] gitsome
$ pip3 install gitsome
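The version check above can also be scripted. The following is a minimal sketch, assuming python3 is on your PATH; it reports whether the interpreter meets gitsome's Python 3.4+ requirement:

```shell
# Query the default python3 version and check it against gitsome's
# minimum supported version (Python 3.4+).
ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
major=${ver%%.*}
minor=${ver##*.}
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 4 ]; }; then
    echo "Python $ver is OK for gitsome"
else
    echo "Python $ver is too old; install Python 3.4+" >&2
fi
```

The path printed by `which python3` can then be passed to mkvirtualenv's --python option as shown above.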

If you want to activate the gitsome virtualenv again later, run:

$ workon gitsome

To deactivate the gitsome virtualenv, run:

$ deactivate

Running as a Docker Container

You can run gitsome in a Docker container to avoid installing Python and pip3 locally. To install Docker, check out the official Docker documentation.

Once you have docker installed, you can run gitsome:

$ docker run -ti --rm mariolet/gitsome

You can use Docker volumes to let gitsome access your working directory and your local .gitsomeconfig and .gitconfig:

$ docker run -ti --rm -v $(pwd):/src/              \
   -v ${HOME}/.gitsomeconfig:/root/.gitsomeconfig  \
   -v ${HOME}/.gitconfig:/root/.gitconfig          \
   mariolet/gitsome

If you are running this command often, you will probably want to define an alias:

$ alias gitsome="docker run -ti --rm -v $(pwd):/src/              \
                  -v ${HOME}/.gitsomeconfig:/root/.gitsomeconfig  \
                  -v ${HOME}/.gitconfig:/root/.gitconfig          \
                  mariolet/gitsome"

To build the Docker image from source:

$ git clone https://github.com/donnemartin/gitsome.git
$ cd gitsome
$ docker build -t gitsome .

Starting gitsome

Once installed, run the optional gitsome autocompleter with interactive help:

$ gitsome

Running the optional gitsome shell will provide you with autocompletion, interactive help, fish-style suggestions, a Python REPL, etc.

Running the gh Commands

Run the GitHub-integrated commands:

$ gh <command> [param] [options]

Note: Running the gitsome shell is not required to execute gh commands. After installing gitsome, you can run gh commands from any shell.

Running the gh configure Command

To properly integrate with GitHub, gitsome must be properly configured:

$ gh configure

For GitHub Enterprise Users

Use the -e/--enterprise flag:

$ gh configure -e

For more details, check out the gh configure section.

Enabling Bash Completions

By default, gitsome looks at the following locations to enable bash completions.

To add additional bash completions, update your ~/.xonshrc file with the location(s) of your bash completions.

If ~/.xonshrc does not exist, create it:

$ touch ~/.xonshrc

For example, if additional completions are found in /usr/local/etc/my_bash_completion.d/completion.bash, add the following line to your ~/.xonshrc:

$BASH_COMPLETIONS.append('/usr/local/etc/my_bash_completion.d/completion.bash')

You will need to restart gitsome for the changes to take effect.

Enabling gh Tab Completions Outside of gitsome

You can run gh commands outside of the gitsome shell completer. To enable gh tab completions for this workflow, copy the gh_complete.sh file locally.

Let bash know completion is available for the gh command within your current session:

$ source /path/to/gh_complete.sh

To enable tab completion for all terminal sessions, add the following to your bashrc file:

source /path/to/gh_complete.sh

Reload your bashrc:

$ source ~/.bashrc

Tip: . is the short form of source, so you can run this instead:

$ . ~/.bashrc
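If you prefer to script the bashrc edit, the following sketch appends the source line only if it is not already present, so running it repeatedly is safe. It is a demonstration only: it writes to a temp file, and /path/to/gh_complete.sh is the same placeholder path used above; substitute ~/.bashrc and your real path in practice.

```shell
# Idempotently add the completion line to an rc file.
# Demo writes to a temp file; in real use, point rc at ~/.bashrc.
rc=$(mktemp)
line='source /path/to/gh_complete.sh'
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"   # second run adds nothing
count=$(grep -cxF "$line" "$rc")
echo "occurrences: $count"
rm -f "$rc"
```

Because grep -qxF matches the whole line literally, the entry is added exactly once no matter how many times the snippet runs.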

For Zsh Users

zsh includes a module which is compatible with bash completions.

Download the gh_complete.sh file and append the following to your .zshrc:

autoload bashcompinit
bashcompinit
source /path/to/gh_complete.sh

Reload your zshrc:

$ source ~/.zshrc

Optional: Installing PIL or Pillow

Displaying the avatar for the gh me and gh user commands requires installing the optional PIL or Pillow dependency.

Windows* and Mac:

$ pip3 install Pillow

*See the Windows Support section for limitations on avatars.

Ubuntu users, check out these instructions on askubuntu.

Supported Python Versions

  • Python 3.4
  • Python 3.5
  • Python 3.6
  • Python 3.7

gitsome is powered by xonsh, which does not currently support Python 2.x, as discussed in this ticket.

Supported Platforms

  • Mac OS X
    • Tested on OS X 10.10
  • Linux, Unix
    • Tested on Ubuntu 14.04 LTS
  • Windows
    • Tested on Windows 10

Windows Support

gitsome has been tested on Windows 10 with cmd and cmder.

Although you can use the standard Windows command prompt, you'll probably have a better experience with either cmder or conemu.

Text Only Avatars

The gh user and gh me commands will always have the -t/--text_avatar flag enabled, since img2txt does not support the ANSI avatar on Windows.

Config File

On Windows, the .gitsomeconfig file can be found in %userprofile%. For example:

C:\Users\dmartin\.gitsomeconfig

Developer Installation

If you're interested in contributing to gitsome, run the following commands:

$ git clone https://github.com/donnemartin/gitsome.git
$ cd gitsome
$ pip3 install -e .
$ pip3 install -r requirements-dev.txt
$ gitsome
$ gh <command> [param] [options]

pip3

If you get an error during installation stating that you need Python 3.4+, it could be because your pip command is configured for an older version of Python. To fix this, it is recommended to install pip3:

$ sudo apt-get install python3-pip

See this ticket for more details.

Continuous Integration

For continuous integration details, check out Travis CI.

Unit Tests and Code Coverage

Run unit tests in your active Python environment:

$ python tests/run_tests.py

Run unit tests with tox on multiple Python environments:

$ tox

Documentation

Source code documentation will soon be available on Readthedocs.org. Check out the source docstrings.

Run the following to build the docs:

$ scripts/update_docs.sh

Contributing

Contributions are welcome!

Review the Contributing Guidelines for details on how to:

  • Submit issues
  • Submit pull requests

Credits

Contact Info

Feel free to contact me to discuss any issues, questions, or comments.

My contact info can be found on my GitHub page.

License

I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer (Facebook).

Copyright 2016 Donne Martin

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.