Disabling TensorFlow debugging information

Question: Disabling TensorFlow debugging information

By debugging information I mean what TensorFlow shows in my terminal about loaded libraries and found devices etc., not Python errors.

I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:900] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties: 
name: Graphics Device
major: 5 minor: 2 memoryClockRate (GHz) 1.0885
pciBusID 0000:04:00.0
Total memory: 12.00GiB
Free memory: 11.83GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:717] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Graphics Device, pci bus id: 0000:04:00.0)
I tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:51] Creating bin of max chunk size 1.0KiB
...

Answer 0

You can disable all debugging logs using os.environ:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' 
import tensorflow as tf

Tested on tf 0.12 and 1.0.

In detail:

0 = all messages are logged (default behavior)
1 = INFO messages are not printed
2 = INFO and WARNING messages are not printed
3 = INFO, WARNING, and ERROR messages are not printed
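
If the variable might already be set in your shell, os.environ.setdefault leaves an existing value untouched; a small sketch of that variant (the level '2' is just an example):

import os
os.environ.setdefault('TF_CPP_MIN_LOG_LEVEL', '2')  # only takes effect if the variable is not already set
import tensorflow as tf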

Answer 1

2.0 Update (10/8/19): Setting TF_CPP_MIN_LOG_LEVEL should still work (see the v0.12+ update below), but there is currently an open issue (see issue #31870). If setting TF_CPP_MIN_LOG_LEVEL does not work for you (again, see below), try the following to set the log level:

import tensorflow as tf
tf.get_logger().setLevel('INFO')
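
tf.get_logger() returns a standard Python logger for the 'tensorflow' module, so the usual level names can be passed as well; for example, to keep only errors (a minimal sketch):

import tensorflow as tf
tf.get_logger().setLevel('ERROR')  # standard logging levels: DEBUG, INFO, WARNING, ERROR, CRITICAL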

In addition, please see the documentation on tf.autograph.set_verbosity, which sets the verbosity of autograph log messages. For example:

# Can also be set using the AUTOGRAPH_VERBOSITY environment variable
tf.autograph.set_verbosity(1)

v0.12+ Update (5/20/17), working through TF 2.0+:

In TensorFlow 0.12+, per this issue, you can now control logging via the environment variable TF_CPP_MIN_LOG_LEVEL; it defaults to 0 (all logs shown) but can be set to one of the following values under the Level column.

  Level | Level for Humans | Level Description                  
 -------|------------------|------------------------------------ 
  0     | DEBUG            | [Default] Print all messages       
  1     | INFO             | Filter out INFO messages           
  2     | WARNING          | Filter out INFO & WARNING messages 
  3     | ERROR            | Filter out all messages      

See the following generic OS example using Python:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any {'0', '1', '2'}
import tensorflow as tf

To be thorough, you can also set the level for the Python tf_logging module, which is used in e.g. summary ops, TensorBoard, and various estimators:

# append to lines above
tf.logging.set_verbosity(tf.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}

For 1.14 you will receive warnings if you do not switch to the v1 API as follows:

# append to lines above
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # or any {DEBUG, INFO, WARN, ERROR, FATAL}
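
The environment variable and the Python-side logger silence different sources (the C++ backend vs. the tf_logging module), so in practice the two are often combined; a minimal sketch putting them together:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # silence C++ backend logs; must be set before importing tensorflow
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)  # silence Python-side logs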


For prior versions of TensorFlow or TF-Learn logging (v0.11.x or lower):

View the page below for information on TensorFlow logging; with the new update, you are able to set the logging verbosity to DEBUG, INFO, WARN, ERROR, or FATAL. For example:

tf.logging.set_verbosity(tf.logging.ERROR)

The page additionally goes over monitors which can be used with TF-Learn models. Here is the page.

However, this doesn't block all logging (only TF-Learn). I have two solutions; one is a 'technically correct' solution (Linux) and the other involves rebuilding TensorFlow.

script -c 'python [FILENAME].py' | grep -v 'I tensorflow/'

For the other, please see this answer, which involves modifying the source and rebuilding TensorFlow.

Answer 2

I have had this problem as well (on tensorflow-0.10.0rc0), but could not fix the excessive nose-tests logging via the suggested answers.

I managed to solve it by probing directly into the tensorflow logger. Not the most correct of fixes, but it works great and only pollutes the test files which directly or indirectly import tensorflow:

# Place this before directly or indirectly importing tensorflow
import logging
logging.getLogger("tensorflow").setLevel(logging.WARNING)

Answer 3

For compatibility with TensorFlow 2.0, you can use tf.get_logger:

import logging
import tensorflow as tf
tf.get_logger().setLevel(logging.ERROR)

Answer 4

As TF_CPP_MIN_LOG_LEVEL didn't work for me, you can try:

tf.logging.set_verbosity(tf.logging.WARN)

Worked for me in TensorFlow v1.6.0.

Answer 5

The usual python3 log manager works for me with tensorflow==1.11.0:

import logging
logging.getLogger('tensorflow').setLevel(logging.INFO)

Answer 6

I solved it with this post, Cannot remove all warnings #27045, and the solution was:

import logging
logging.getLogger('tensorflow').disabled = True

Answer 7

To add some flexibility here, you can achieve more fine-grained control over the level of logging by writing a function that filters out messages however you like:

logging.getLogger('tensorflow').addFilter(my_filter_func)

where my_filter_func accepts a LogRecord object as input [LogRecord docs] and returns zero if you want the message thrown out, nonzero otherwise.
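
For instance, a minimal filter that drops every record whose message contains a given substring could look like this (the substring is purely illustrative):

import logging

def drop_containing(substring):
    def filter_record(record):
        # return 0 (drop) if the substring appears in the formatted message, 1 (keep) otherwise
        return int(substring not in record.getMessage())
    return filter_record

logging.getLogger('tensorflow').addFilter(drop_containing('deprecated'))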

Here's an example filter that only keeps every nth INFO message (Python 3 due to the use of nonlocal here):

def keep_every_nth_info(n):
    i = -1
    def filter_record(record):
        nonlocal i
        i += 1
        return int(record.levelname != 'INFO' or i % n == 0)
    return filter_record

# Example usage for TensorFlow:
logging.getLogger('tensorflow').addFilter(keep_every_nth_info(5))

All of the above assumes that TensorFlow has already set up its logging state. You can ensure this without side effects by calling tf.logging.get_verbosity() before adding a filter.

Answer 8

Yeah, I'm using tf 2.0-beta and want to enable/disable the default logging. The environment variable and methods from tf 1.X don't seem to exist anymore.

I stepped around in PDB and found this to work:

import tensorflow as tf

# close the TF2 logger
tf2logger = tf.get_logger()
tf2logger.error('Close TF2 logger handlers')
tf2logger.root.removeHandler(tf2logger.root.handlers[0])

I then add my own logger API (in this case file-based):

import logging

logtf = logging.getLogger('DST')
logtf.setLevel(logging.DEBUG)

# file handler
logfile = '/tmp/tf_s.log'
fh = logging.FileHandler(logfile)
fh.setFormatter(logging.Formatter('fh %(asctime)s %(name)s %(filename)s:%(lineno)d :%(message)s'))
logtf.addHandler(fh)
logtf.info('writing to %s', logfile)

Answer 9

For TensorFlow 2.1.0, the following code works fine:

import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)

Answer 10

If you only need to get rid of warning outputs on the screen, you might want to clear the console screen right after importing tensorflow with this simple command (in my experience it's more effective than disabling all debugging logs):

On Windows:

import os
os.system('cls')

On Linux or Mac:

import os
os.system('clear')
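
A cross-platform variant that picks the right command at runtime (a small sketch combining the two calls above):

import os
os.system('cls' if os.name == 'nt' else 'clear')  # os.name is 'nt' on Windows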
