Does "finally" always execute in Python?

Question: Does "finally" always execute in Python?

For any possible try-finally block in Python, is it guaranteed that the finally block will always be executed?

For example, let’s say I return while in an except block:

# (assume this code is inside a function, so that the bare return is legal)
try:
    1/0
except ZeroDivisionError:
    return
finally:
    print("Does this code run?")

Or maybe I re-raise an Exception:

try:
    1/0
except ZeroDivisionError:
    raise
finally:
    print("What about this code?")

Testing shows that finally does get executed for the above examples, but I imagine there are other scenarios I haven’t thought of.

Are there any scenarios in which a finally block can fail to execute in Python?


Answer 0

“Guaranteed” is a much stronger word than any implementation of finally deserves. What is guaranteed is that if execution flows out of the whole try/finally construct, it will pass through the finally to do so. What is not guaranteed is that execution will flow out of the try/finally.

  • A finally in a generator or async coroutine might never run, if the object never executes to conclusion. There are a lot of ways that could happen; here’s one:

    def gen(text):
        try:
            for line in text:
                try:
                    yield int(line)
                except:
                    # Ignore blank lines - but catch too much!
                    pass
        finally:
            print('Doing important cleanup')
    
    text = ['1', '', '2', '', '3']
    
    if any(n > 1 for n in gen(text)):
        print('Found a number')
    
    print('Oops, no cleanup.')
    

    Note that this example is a bit tricky: when the generator is garbage collected, Python attempts to run the finally block by throwing in a GeneratorExit exception, but here we catch that exception and then yield again, at which point Python prints a warning (“generator ignored GeneratorExit”) and gives up. See PEP 342 (Coroutines via Enhanced Generators) for details.

    Other ways a generator or coroutine might not execute to conclusion include if the object is just never GC’ed (yes, that’s possible, even in CPython), or if an async with awaits in __aexit__, or if the object awaits or yields in a finally block. This list is not intended to be exhaustive.

  • A finally in a daemon thread might never execute if all non-daemon threads exit first. (A minimal sketch of this case follows this list.)

  • os._exit will halt the process immediately without executing finally blocks.

  • os.fork may cause finally blocks to execute twice. As well as just the normal problems you’d expect from things happening twice, this could cause concurrent access conflicts (crashes, stalls, …) if access to shared resources is not correctly synchronized. (See the fork sketch below.)

    Since multiprocessing uses fork-without-exec to create worker processes when using the fork start method (the default on Unix), and then calls os._exit in the worker once the worker’s job is done, finally and multiprocessing interaction can be problematic (example).

  • A C-level segmentation fault will prevent finally blocks from running.
  • kill -SIGKILL will prevent finally blocks from running. SIGTERM and SIGHUP will also prevent finally blocks from running unless you install a handler to control the shutdown yourself; by default, Python does not handle SIGTERM or SIGHUP.
  • An exception in finally can prevent cleanup from completing. One particularly noteworthy case is if the user hits control-C just as we’re starting to execute the finally block. Python will raise a KeyboardInterrupt and skip every line of the finally block’s contents. (KeyboardInterrupt-safe code is very hard to write).
  • If the computer loses power, or if it hibernates and doesn’t wake up, finally blocks won’t run.
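
As promised above, here is a minimal daemon-thread sketch (an illustration added to this answer, not from the original): the only non-daemon thread finishes while the worker is still asleep, so the interpreter exits and the cleanup never runs.

import threading
import time

def worker():
    try:
        time.sleep(10)           # still sleeping when the main thread finishes
    finally:
        print('daemon cleanup')  # never printed: the process exits first

threading.Thread(target=worker, daemon=True).start()
print('main thread done')        # last non-daemon thread exits; the daemon is stopped abruptly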

The finally block is not a transaction system; it doesn’t provide atomicity guarantees or anything of the sort. Some of these examples might seem obvious, but it’s easy to forget such things can happen and rely on finally for too much.
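
And here is the fork sketch referenced above (again an added illustration, Unix-only): the finally block appears once in the source but executes once in the parent and once in the child.

import os

try:
    pid = os.fork()  # Unix only; parent and child both continue from here
    # ... work that was intended to happen once ...
finally:
    # Executes twice: once in the parent process and once in the child.
    print('finally in pid', os.getpid())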


Answer 1

Yes. Finally always wins.

The only way to defeat it is to halt execution before finally: gets a chance to execute (e.g. crash the interpreter, turn off your computer, suspend a generator forever).

I imagine there are other scenarios I haven’t thought of.

Here are a couple more you may not have thought about:

def foo():
    # finally always wins
    try:
        return 1
    finally:
        return 2

def bar():
    # even if he has to eat an unhandled exception, finally wins
    try:
        raise Exception('boom')
    finally:
        return 'no boom'

Depending on how you quit the interpreter, sometimes you can “cancel” finally, but not like this:

>>> import sys
>>> try:
...     sys.exit()
... finally:
...     print('finally wins!')
... 
finally wins!
$
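
(A side note added here: sys.exit() works by raising SystemExit, which is why the finally block above still runs; like any other exception it can even be caught.)

>>> import sys
>>> try:
...     sys.exit()
... except SystemExit:
...     print('caught SystemExit, not exiting')
... 
caught SystemExit, not exiting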

Using the precarious os._exit (this falls under “crash the interpreter” in my opinion):

>>> import os
>>> try:
...     os._exit(1)
... finally:
...     print('finally!')
... 
$

I’m currently running this code, to test if finally will still execute after the heat death of the universe:

from time import sleep

try:
    while True:
        sleep(1)
finally:
    print('done')

However, I’m still waiting on the result, so check back here later.


Answer 2

According to the Python documentation:

No matter what happened previously, the final-block is executed once the code block is complete and any raised exceptions handled. Even if there’s an error in an exception handler or the else-block and a new exception is raised, the code in the final-block is still run.

It should also be noted that if there are multiple return statements, including one in the finally block, then the finally block return is the only one that will execute.
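
A small sketch added here to illustrate that last point; the return in the finally block is the one that actually takes effect:

def which_return():
    try:
        return 'from try'
    except Exception:
        return 'from except'
    finally:
        return 'from finally'  # this return wins

print(which_return())  # prints 'from finally'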


Answer 3

Well, yes and no.

What is guaranteed is that Python will always try to execute the finally block. In the case where you return from the block or raise an uncaught exception, the finally block is executed just before actually returning or raising the exception.

(something you could easily have checked yourself simply by running the code in your question)

The only case I can imagine where the finally block will not be executed is when the Python interpreter itself crashes, for example inside C code or because of a power outage.
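
A quick illustration added here of that ordering: the return value is evaluated, then the finally block runs, and only then does the function actually return:

def order_demo():
    try:
        print('about to return')
        return 'value'
    finally:
        print('finally runs before the return completes')

print(order_demo())
# Output:
# about to return
# finally runs before the return completes
# value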


Answer 4

I found this one without using a generator function:

import multiprocessing
import time

def fun(arg):
  try:
    print("tried " + str(arg))
    time.sleep(arg)
  finally:
    print("finally cleaned up " + str(arg))
  return foo  # deliberately undefined: raises NameError after the finally block

args = [1, 2, 3]
multiprocessing.Pool().map(fun, args)

The sleep can be any code that might run for inconsistent amounts of time.

What appears to be happening here is that the first parallel process to finish leaves the try block successfully, but then attempts to return from the function a value (foo) that hasn’t been defined anywhere, which causes an exception. That exception kills the map without allowing the other processes to reach their finally blocks.

Also, if you add the line bar = bazz just after the sleep() call in the try block, then the first process to reach that line raises an exception (because bazz isn’t defined), which causes its own finally block to be run, but then kills the map, causing the other try blocks to disappear without reaching their finally blocks, and the first process not to reach its return statement, either.

What this means for Python multiprocessing is that you can’t trust the exception-handling mechanism to clean up resources in all processes if even one of the processes can have an exception. Additional signal handling or managing the resources outside the multiprocessing map call would be necessary.
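
One way to follow that advice, sketched here as an added illustration (the failing worker and the names are made up): do the cleanup in the parent process, around the map call, rather than inside the workers.

import multiprocessing
import time

def fun(arg):
    print("tried " + str(arg))
    time.sleep(arg)
    return undefined_name  # deliberate stand-in for a failure inside a worker

if __name__ == '__main__':
    pool = multiprocessing.Pool()
    try:
        pool.map(fun, [1, 2, 3])
    finally:
        # Runs in the parent even when a worker raises, unlike the per-worker
        # finally blocks in the example above.
        pool.terminate()
        pool.join()
        print("cleaned up in the parent")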


Answer 5

Addendum to the accepted answer, just to help to see how it works, with a few examples:

  • This:

     try:
         1
     except:
         print('except')
     finally:
         print('finally')
    

    will output

    finally

  •    try:
           1/0
       except:
           print('except')
       finally:
           print('finally')
    

    will output

    except
    finally
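
One more case in the same spirit, added here for completeness (not part of the original addendum): an exception that is never handled still runs the finally block before it propagates.

  •    try:
           1/0
       finally:
           print('finally')

    will output

    finally

    followed by the ZeroDivisionError traceback.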