Question: How do you start a background process in Python?


I'm trying to port a shell script to the much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.


Answer 0


Note: This answer is less current than it was when posted in 2009. Using the subprocess module, shown in other answers, is now recommended in the docs.

(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)


If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:

import os
os.spawnl(os.P_DETACH, 'some_long_running_command')

(or, alternatively, you may try the less portable os.P_NOWAIT flag).

See the documentation here.
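One caveat worth noting: os.P_DETACH exists only on Windows. On POSIX systems a similar fire-and-forget launch can be sketched with os.P_NOWAIT, which returns the child's PID immediately (the sleep command here is just a stand-in for a long-running utility):

```python
import os

# os.P_DETACH is Windows-only; on POSIX, P_NOWAIT launches the child
# and returns its PID without waiting for it to exit.
pid = os.spawnlp(os.P_NOWAIT, 'sleep', 'sleep', '1')
print('started child with pid', pid)
```

Unlike P_DETACH, a P_NOWAIT child is still attached to the parent's session, so it is a non-blocking launch rather than a full detach.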


Answer 1


While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.

Example for your case:

import subprocess
subprocess.Popen(["rm", "-r", "some.file"])

This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don’t do that if you want it to run in the background:

import subprocess
ls_output = subprocess.Popen(["sleep", "30"])
ls_output.communicate()  # Will block for 30 seconds

See the documentation here.
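To check on such a background process later without blocking, Popen.poll() can be used; a small sketch:

```python
import subprocess

proc = subprocess.Popen(["sleep", "2"])
# poll() returns None while the child is still running,
# and its exit code once it has finished.
print(proc.poll())        # None while still sleeping
returncode = proc.wait()  # block until the child exits
print(returncode)
```

This lets the parent do other work and only reap the child when convenient.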

Also, a point of clarification: “Background” as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I’ve used “background” here to refer to shell-background-like behavior.


Answer 2


You probably want the answer to “How to call an external command in Python”.

The simplest approach is to use the os.system function, e.g.:

import os
os.system("some_command &")

Basically, whatever you pass to the system function will be executed the same as if you’d passed it to the shell in a script.
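Because the "&" is interpreted by the shell, os.system returns as soon as the shell has backgrounded the command; the return value is the shell's exit status, not the command's (a sketch, POSIX only):

```python
import os

# The trailing "&" makes the shell background the command and return
# immediately; status is the shell's exit status (0 on success),
# not the exit code of the backgrounded command.
status = os.system("sleep 1 &")
print(status)
```

Note this gives you no handle on the child process, which is the main reason subprocess.Popen is usually preferred.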


Answer 3


I found this here:

On Windows (Win XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.

The solution is to pass DETACHED_PROCESS Process Creation Flag to the underlying CreateProcess function in win API. If you happen to have installed pywin32 you can import the flag from the win32process module, otherwise you should define it yourself:

DETACHED_PROCESS = 0x00000008

pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
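On Python 3.7 and later the flag is exposed (on Windows only) as subprocess.DETACHED_PROCESS, so defining the constant by hand should no longer be necessary; a sketch, guarded by a platform check:

```python
import subprocess
import sys

if sys.platform == "win32":
    # subprocess.DETACHED_PROCESS (Python 3.7+) equals 0x00000008,
    # the same flag ultimately passed to CreateProcess.
    pid = subprocess.Popen(
        [sys.executable, "longtask.py"],
        creationflags=subprocess.DETACHED_PROCESS,
    ).pid
```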

Answer 4


Use subprocess.Popen() with the close_fds=True parameter; this allows the spawned subprocess to be detached from the Python process itself and to continue running even after Python exits.

https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f

import os, time, sys, subprocess

if len(sys.argv) == 2:
    # child branch: do the long-running work
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    # parent branch: spawn a detached copy of this script and exit
    print('main begin')
    subprocess.Popen([sys.executable, os.path.realpath(__file__), '0'],
                     close_fds=True)
    print('main end')
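A related option worth noting: on POSIX, start_new_session=True (Python 3.2+) runs the child in its own session via setsid(), which detaches it more reliably from the parent's terminal and signals than close_fds alone; a minimal sketch:

```python
import subprocess

# start_new_session=True calls setsid() in the child, putting it in a
# new session and process group, so it survives the parent's exit and
# is not killed by a Ctrl-C sent to the parent's process group.
proc = subprocess.Popen(
    ["sleep", "60"],
    start_new_session=True,
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print("detached child pid:", proc.pid)
```

Redirecting the standard streams to DEVNULL keeps the child from holding the terminal open.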

Answer 5


You probably want to start investigating the os module for forking off separate processes (open an interactive session and issue help(os)). The relevant functions are fork and the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument, containing the program's name and its parameters; you may also want to define stdin, stdout, and stderr for the new process):

import os
import sys

try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)
if pid == 0:
    ## optionally use os.putenv(..) to set environment variables
    ## os.execv takes the program path plus the full argument list
    ## (args[0] is conventionally the program name)
    os.execv(args[0], args)
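Wrapped into a function, the fork-and-exec pattern above might look like this (a minimal sketch; spawn_detached is a hypothetical helper name, and a production daemon would additionally need the double-fork, setsid, and fd-redirection steps):

```python
import os
import sys


def spawn_detached(args):
    """Fork, then exec args in the child; the parent gets the child's PID."""
    pid = os.fork()
    if pid == 0:  # child
        try:
            os.execvp(args[0], args)  # replace the child with the program
        except OSError:
            os._exit(127)  # exec failed; exit without running cleanup
    return pid  # parent


if __name__ == "__main__":
    child = spawn_detached(["sleep", "1"])
    print("spawned pid", child)
```

Using os.execvp instead of os.execv lets the program be found on PATH rather than requiring an absolute path.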

Answer 6


Both capture output and run on background with threading

As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process can block.

However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, and to save their stdout both to a log file and to my own stdout.

The threading module allows us to do that.

First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously

Then:

main.py

#!/usr/bin/env python3

import os
import subprocess
import sys
import threading

def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

sleep.py

#!/usr/bin/env python3

import sys
import time

for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)

After running:

./main.py

stdout gets updated every 0.5 seconds, two lines at a time, until it contains:

0
10
1
11
2
12
3
13

and each log file contains the respective log for a given process.

Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/

Tested on Ubuntu 18.04, Python 3.6.7.

