Tag Archives: logging

Where is a complete example of logging.config.dictConfig?

Question: Where is a complete example of logging.config.dictConfig?

I’d like to use dictConfig, but the documentation is a little bit abstract. Where can I find a concrete, copy+paste-able example of the dictionary used with dictConfig?


Answer 0

How about here! The corresponding documentation reference is the configuration dictionary schema: https://docs.python.org/3/library/logging.config.html#configuration-dictionary-schema

LOGGING_CONFIG = { 
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': { 
        'standard': { 
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
    },
    'handlers': { 
        'default': { 
            'level': 'INFO',
            'formatter': 'standard',
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stdout',  # Default is stderr
        },
    },
    'loggers': { 
        '': {  # root logger
            'handlers': ['default'],
            'level': 'WARNING',
            'propagate': False
        },
        'my.packg': { 
            'handlers': ['default'],
            'level': 'INFO',
            'propagate': False
        },
        '__main__': {  # if __name__ == '__main__'
            'handlers': ['default'],
            'level': 'DEBUG',
            'propagate': False
        },
    } 
}

Usage:

# Run once at startup:
logging.config.dictConfig(LOGGING_CONFIG)

# Include in each module:
log = logging.getLogger(__name__)
log.debug("Logging is configured.")

In case you see too many logs from third-party packages, be sure to run this config using logging.config.dictConfig(LOGGING_CONFIG) before the third-party packages are imported.
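
A minimal sketch of that ordering (noisy_lib is a hypothetical third-party package name):

import logging.config

logging.config.dictConfig(LOGGING_CONFIG)  # configure first, using the dict above

import noisy_lib  # only then import the chatty third-party package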

To add additional custom info to each log message using a logging filter, consider this answer.
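
For reference, a minimal sketch of such a filter; ContextFilter and the app_context attribute are hypothetical names, not from the original answer:

import logging

class ContextFilter(logging.Filter):
    """Attach a custom attribute to every record passing through this logger."""
    def filter(self, record):
        record.app_context = 'worker-1'  # assumed custom value
        return True

logging.getLogger(__name__).addFilter(ContextFilter())
# Reference it as %(app_context)s in a formatter's format string.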


Answer 1

The accepted answer is nice! But what if one could begin with something less complex? The logging module is a very powerful thing, and the documentation is a little overwhelming, especially for a novice. But to begin with, you don't need to configure formatters and handlers. You can add them once you figure out what you want.

For example:

import logging.config

DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'loggers': {
        '': {
            'level': 'INFO',
        },
        'another.module': {
            'level': 'DEBUG',
        },
    }
}

logging.config.dictConfig(DEFAULT_LOGGING)

logging.info('Hello, log')

Answer 2

Example with Stream Handler, File Handler, Rotating File Handler and SMTP Handler

from logging.config import dictConfig

LOGGING_CONFIG = {
    'version': 1,
    'loggers': {
        '': {  # root logger
            'level': 'NOTSET',
            'handlers': ['debug_console_handler', 'info_rotating_file_handler', 'error_file_handler', 'critical_mail_handler'],
        },
        'my.package': { 
            'level': 'WARNING',
            'propagate': False,
            'handlers': ['info_rotating_file_handler', 'error_file_handler' ],
        },
    },
    'handlers': {
        'debug_console_handler': {
            'level': 'DEBUG',
            'formatter': 'info',
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stdout',
        },
        'info_rotating_file_handler': {
            'level': 'INFO',
            'formatter': 'info',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'info.log',
            'mode': 'a',
            'maxBytes': 1048576,
            'backupCount': 10
        },
        'error_file_handler': {
            'level': 'WARNING',
            'formatter': 'error',
            'class': 'logging.FileHandler',
            'filename': 'error.log',
            'mode': 'a',
        },
        'critical_mail_handler': {
            'level': 'CRITICAL',
            'formatter': 'error',
            'class': 'logging.handlers.SMTPHandler',
            'mailhost' : 'localhost',
            'fromaddr': 'monitoring@domain.com',
            'toaddrs': ['dev@domain.com', 'qa@domain.com'],
            'subject': 'Critical error with application name'
        }
    },
    'formatters': {
        'info': {
            'format': '%(asctime)s-%(levelname)s-%(name)s::%(module)s|%(lineno)s:: %(message)s'
        },
        'error': {
            'format': '%(asctime)s-%(levelname)s-%(name)s-%(process)d::%(module)s|%(lineno)s:: %(message)s'
        },
    },

}

dictConfig(LOGGING_CONFIG)

Answer 3

I found the Django v1.11.15 default config, shown below; hope it helps.

DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse',
        },
        'require_debug_true': {
            '()': 'django.utils.log.RequireDebugTrue',
        },
    },
    'formatters': {
        'django.server': {
            '()': 'django.utils.log.ServerFormatter',
            'format': '[%(server_time)s] %(message)s',
        }
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'filters': ['require_debug_true'],
            'class': 'logging.StreamHandler',
        },
        'django.server': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'django.server',
        },
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django': {
            'handlers': ['console', 'mail_admins'],
            'level': 'INFO',
        },
        'django.server': {
            'handlers': ['django.server'],
            'level': 'INFO',
            'propagate': False,
        },
    }
}

Answer 4

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import logging
import logging.handlers
from logging.config import dictConfig

logger = logging.getLogger(__name__)

DEFAULT_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
}
def configure_logging(logfile_path):
    """
    Initialize logging defaults for Project.

    :param logfile_path: path to the logfile
    :type logfile_path: string

    This function does:

    - Assign INFO and DEBUG level to logger file handler and console handler

    """
    dictConfig(DEFAULT_LOGGING)

    default_formatter = logging.Formatter(
        "[%(asctime)s] [%(levelname)s] [%(name)s] [%(funcName)s():%(lineno)s] [PID:%(process)d TID:%(thread)d] %(message)s",
        "%d/%m/%Y %H:%M:%S")

    file_handler = logging.handlers.RotatingFileHandler(logfile_path, maxBytes=10485760,backupCount=300, encoding='utf-8')
    file_handler.setLevel(logging.INFO)

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.DEBUG)

    file_handler.setFormatter(default_formatter)
    console_handler.setFormatter(default_formatter)

    logging.root.setLevel(logging.DEBUG)
    logging.root.addHandler(file_handler)
    logging.root.addHandler(console_handler)



Sample output:

[31/10/2015 22:00:33] [DEBUG] [yourmodulename] [yourfunction_name():9] [PID:61314 TID:140735248744448] this is logger information from hello module

Getting output from the logging module in IPython Notebook

Question: Getting output from the logging module in IPython Notebook

When I run the following inside IPython Notebook, I don’t see any output:

import logging
logging.basicConfig(level=logging.DEBUG)
logging.debug("test")

Anyone know how to make it so I can see the “test” message inside the notebook?


Answer 0

Try the following:

import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")

According to logging.basicConfig:

Does basic configuration for the logging system by creating a StreamHandler with a default Formatter and adding it to the root logger. The functions debug(), info(), warning(), error() and critical() will call basicConfig() automatically if no handlers are defined for the root logger.

This function does nothing if the root logger already has handlers configured for it.

It seems that IPython Notebook calls basicConfig (or sets a handler) somewhere.
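
You can check whether that is the case by inspecting the root logger's handlers (a quick diagnostic, not from the original answer):

import logging
print(logging.getLogger().handlers)
# A non-empty list means a later basicConfig() call will silently do nothing
# (unless you pass force=True on Python 3.8+).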


Answer 1

If you still want to use basicConfig, reload the logging module like this:

from importlib import reload  # Not needed in Python 2
import logging
reload(logging)
logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.DEBUG, datefmt='%I:%M:%S')

Answer 2

My understanding is that the IPython session starts up logging, so basicConfig doesn’t work. Here is the setup that works for me (I wish it did not look so gross, since I want to use it for almost all my notebooks):

import logging
logger = logging.getLogger()
fhandler = logging.FileHandler(filename='mylog.log', mode='a')
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.setLevel(logging.DEBUG)

Now when I run:

logging.error('hello!')
logging.debug('This is a debug message')
logging.info('this is an info message')
logging.warning('tbllalfhldfhd, warning.')

I get a “mylog.log” file in the same directory as my notebook that contains:

2015-01-28 09:49:25,026 - root - ERROR - hello!
2015-01-28 09:49:25,028 - root - DEBUG - This is a debug message
2015-01-28 09:49:25,029 - root - INFO - this is an info message
2015-01-28 09:49:25,032 - root - WARNING - tbllalfhldfhd, warning.

Note that if you rerun this without restarting the IPython session, it will write duplicate entries to the file, since there would then be two file handlers defined.
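
One way to avoid those duplicates when re-running the cell is to guard the addHandler call; a minimal sketch reusing the logger and fhandler defined above:

# Only attach the file handler if the root logger does not already have one
if not any(isinstance(h, logging.FileHandler) for h in logger.handlers):
    logger.addHandler(fhandler)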


Answer 3

Bear in mind that stderr is the default stream for the logging module, so in IPython and Jupyter notebooks you might not see anything unless you configure the stream to stdout:

import logging
import sys

logging.basicConfig(format='%(asctime)s | %(levelname)s : %(message)s',
                     level=logging.INFO, stream=sys.stdout)

logging.info('Hello world!')

Answer 4

What works for me now (Jupyter notebook server 5.4.1, IPython 7.0.1):

import logging
logging.basicConfig()
logger = logging.getLogger('Something')
logger.setLevel(logging.DEBUG)

Now I can use the logger to print info; otherwise, I would only see messages at the default level (logging.WARNING) or above.


Answer 5

You can configure logging by running %config Application.log_level="INFO"

For more information, see the IPython kernel options.


Answer 6

I set up a logger to write both to a file and to the notebook. It turns out that adding a file handler clears out the default stream handler.

logger = logging.getLogger()

formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# Setup file handler
fhandler  = logging.FileHandler('my.log')
fhandler.setLevel(logging.DEBUG)
fhandler.setFormatter(formatter)

# Configure stream handler for the cells
chandler = logging.StreamHandler()
chandler.setLevel(logging.DEBUG)
chandler.setFormatter(formatter)

# Add both handlers
logger.addHandler(fhandler)
logger.addHandler(chandler)
logger.setLevel(logging.DEBUG)

# Show the handlers
logger.handlers

# Log Something
logger.info("Test info")
logger.debug("Test debug")
logger.error("Test error")

Answer 7

It seems that solutions that worked for older versions of ipython/jupyter no longer work.

Here is a working solution for ipython 7.9.0 (also tested with jupyter server 6.0.2):

import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test message")

DEBUG:root:test message

How to log the source file name and line number in Python

Question: How to log the source file name and line number in Python

Is it possible to decorate/extend the python standard logging system, so that when a logging method is invoked it also logs the file and the line number where it was invoked or maybe the method that invoked it?


Answer 0

Sure, check the formatters section in the logging docs, specifically the lineno and pathname variables.

%(pathname)s Full pathname of the source file where the logging call was issued (if available).

%(filename)s Filename portion of pathname.

%(module)s Module (name portion of filename).

%(funcName)s Name of function containing the logging call.

%(lineno)d Source line number where the logging call was issued (if available).

Looks something like this:

formatter = logging.Formatter('[%(asctime)s] p%(process)s {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s','%m-%d %H:%M:%S')

Answer 1

On top of Seb’s very useful answer, here is a handy code snippet that demonstrates the logger usage with a reasonable format:

#!/usr/bin/env python
import logging

logging.basicConfig(format='%(asctime)s,%(msecs)d %(levelname)-8s [%(filename)s:%(lineno)d] %(message)s',
    datefmt='%Y-%m-%d:%H:%M:%S',
    level=logging.DEBUG)

logger = logging.getLogger(__name__)
logger.debug("This is a debug log")
logger.info("This is an info log")
logger.critical("This is critical")
logger.error("An error occurred")

Generates this output:

2017-06-06:17:07:02,158 DEBUG    [log.py:11] This is a debug log
2017-06-06:17:07:02,158 INFO     [log.py:12] This is an info log
2017-06-06:17:07:02,158 CRITICAL [log.py:13] This is critical
2017-06-06:17:07:02,158 ERROR    [log.py:14] An error occurred

Answer 2

To build on the above in a way that sends debug logging to standard out:

import logging
import sys

root = logging.getLogger()
root.setLevel(logging.DEBUG)

ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.DEBUG)
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
formatter = logging.Formatter(FORMAT)
ch.setFormatter(formatter)
root.addHandler(ch)

logging.debug("I am sent to standard out.")

Putting the above into a file called debug_logging_example.py produces the output:

[debug_logging_example.py:14 -             <module>() ] I am sent to standard out.

Then, if you want to turn off logging, comment out root.setLevel(logging.DEBUG).

For single files (e.g. class assignments) I’ve found this a far better approach than using print() statements, since it allows you to turn the debug output off in a single place before you submit.


Answer 3

For devs using PyCharm or Eclipse pydev, the following will produce a link to the source of the log statement in the console log output:

import logging, sys, os
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format='%(message)s | \'%(name)s:%(lineno)s\'')
log = logging.getLogger(os.path.basename(__file__))


log.debug("hello logging linked to source")

See Pydev source file hyperlinks in Eclipse console for longer discussion and history.


Answer 4

# your imports above ...


logging.basicConfig(
    format='%(asctime)s,%(msecs)d %(levelname)-8s '
           '[%(pathname)s:%(lineno)d in function %(funcName)s] %(message)s',
    datefmt='%Y-%m-%d:%H:%M:%S',
    level=logging.DEBUG
)

logger = logging.getLogger(__name__)

# your classes and methods below ...
# A naive sample of usage:
try:
    logger.info('Sample of info log')
    # your code here
except Exception as e:
    logger.error(e)

Unlike the other answers, this will log the full path of the file and the name of the function where the error may have occurred. This is useful if your project has more than one module and several files with the same name distributed across those modules.


How to configure logging to syslog in Python?

Question: How to configure logging to syslog in Python?

I can’t get my head around Python’s logging module. My needs are very simple: I just want to log everything to syslog. After reading documentation I came up with this simple test script:

import logging
import logging.handlers

my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

handler = logging.handlers.SysLogHandler()

my_logger.addHandler(handler)

my_logger.debug('this is debug')
my_logger.critical('this is critical')

But this script does not produce any log records in syslog. What’s wrong?


Answer 0

Change the line to this:

handler = SysLogHandler(address='/dev/log')

This works for me:

import logging
import logging.handlers

my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

handler = logging.handlers.SysLogHandler(address = '/dev/log')

my_logger.addHandler(handler)

my_logger.debug('this is debug')
my_logger.critical('this is critical')

Answer 1

You should always use the local host for logging, whether to /dev/log or to localhost through the TCP stack. This lets a fully RFC-compliant and featureful system logging daemon handle syslog. It eliminates the need for the remote daemon to be functional and provides the enhanced capabilities of syslog daemons such as rsyslog and syslog-ng. The same philosophy goes for SMTP: just hand it to the local SMTP software. In that case use 'program mode' rather than the daemon, but it's the same idea: let the more capable software handle it. Retrying, queuing, local spooling, using TCP instead of UDP for syslog, and so forth become possible. You can also [re-]configure those daemons separately from your code, as it should be.

Save your coding for your application, and let the other software do its job in concert.
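
As one concrete example of that idea, SysLogHandler can talk to the local daemon over TCP instead of the default UDP via the socktype argument (available since Python 2.7/3.2); a sketch, assuming the local daemon is configured to accept TCP connections:

import logging.handlers
import socket

# TCP connection to the local syslog daemon instead of the default UDP
handler = logging.handlers.SysLogHandler(address=('localhost', 514),
                                         socktype=socket.SOCK_STREAM)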


Answer 2

I found the syslog module to make it quite easy to get the basic logging behavior you describe:

import syslog
syslog.syslog("This is a test message")
syslog.syslog(syslog.LOG_INFO, "Test message at INFO priority")

There are other things you could do, too, but even just the first two lines of that will get you what you’ve asked for as I understand it.


Answer 3

Piecing things together from here and other places, this is what I came up with that works on Ubuntu 12.04 and CentOS 6.

Create a file in /etc/rsyslog.d/ that ends in .conf and add the following text:

local6.*        /var/log/my-logfile

Restart rsyslog; reloading did NOT seem to pick up the new log files. Maybe it only reloads existing conf files?

sudo restart rsyslog

Then you can use this test program to make sure it actually works.

import logging, sys
from logging import config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(module)s P%(process)d T%(thread)d %(message)s'
            },
        },
    'handlers': {
        'stdout': {
            'class': 'logging.StreamHandler',
            'stream': sys.stdout,
            'formatter': 'verbose',
            },
        'sys-logger6': {
            'class': 'logging.handlers.SysLogHandler',
            'address': '/dev/log',
            'facility': "local6",
            'formatter': 'verbose',
            },
        },
    'loggers': {
        'my-logger': {
            'handlers': ['sys-logger6','stdout'],
            'level': logging.DEBUG,
            'propagate': True,
            },
        }
    }

config.dictConfig(LOGGING)


logger = logging.getLogger("my-logger")

logger.debug("Debug")
logger.info("Info")
logger.warning("Warn")
logger.error("Error")
logger.critical("Critical")

Answer 4

I'm adding a little extra comment in case it helps anyone, because I found this exchange useful but needed this extra bit of info to get it all working.

To log to a specific facility using SysLogHandler, you need to specify the facility value. Say, for example, that you have defined:

local3.* /var/log/mylog

in syslog, then you’ll want to use:

handler = logging.handlers.SysLogHandler(address = ('localhost',514), facility=19)

and you also need to have syslog listening on UDP to use localhost instead of /dev/log.
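
For reference, enabling UDP reception on the rsyslog side might look like this (legacy rsyslog configuration syntax; adjust for your distribution):

# in /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514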


Answer 5

Is your syslog.conf set up to handle facility=user?

You can set the facility used by the python logger with the facility argument, something like this:

handler = logging.handlers.SysLogHandler(facility=logging.handlers.SysLogHandler.LOG_DAEMON)

Answer 6

import syslog
syslog.openlog(ident="LOG_IDENTIFIER",logoption=syslog.LOG_PID, facility=syslog.LOG_LOCAL0)
syslog.syslog('Log processing initiated...')

The above script will log to the LOCAL0 facility with our custom “LOG_IDENTIFIER”… you can use LOCAL[0-7] for local purposes.


Answer 7

From https://github.com/luismartingil/per.scripts/tree/master/python_syslog

#!/usr/bin/python
# -*- coding: utf-8 -*-

'''
Implements a new handler for the logging module which uses the pure syslog python module.

@author:  Luis Martin Gil
@year: 2013
'''
import logging
import syslog

class SysLogLibHandler(logging.Handler):
    """A logging handler that emits messages to syslog.syslog."""
    FACILITY = [syslog.LOG_LOCAL0,
                syslog.LOG_LOCAL1,
                syslog.LOG_LOCAL2,
                syslog.LOG_LOCAL3,
                syslog.LOG_LOCAL4,
                syslog.LOG_LOCAL5,
                syslog.LOG_LOCAL6,
                syslog.LOG_LOCAL7]
    def __init__(self, n):
        """ Pre. (0 <= n <= 7) """
        try:
            syslog.openlog(logoption=syslog.LOG_PID, facility=self.FACILITY[n])
        except Exception, err:
            try:
                syslog.openlog(syslog.LOG_PID, self.FACILITY[n])
            except Exception, err:
                try:
                    syslog.openlog('my_ident', syslog.LOG_PID, self.FACILITY[n])
                except:
                    raise
        # We got it
        logging.Handler.__init__(self)

    def emit(self, record):
        syslog.syslog(self.format(record))

if __name__ == '__main__':
    """ Lets play with the log class. """
    # Some variables we need
    _id = 'myproj_v2.0'
    logStr = 'debug'
    logFacilityLocalN = 1

    # Defines a logging level and logging format based on a given string key.
    LOG_ATTR = {'debug': (logging.DEBUG,
                          _id + ' %(levelname)-9s %(name)-15s %(threadName)-14s +%(lineno)-4d %(message)s'),
                'info': (logging.INFO,
                         _id + ' %(levelname)-9s %(message)s'),
                'warning': (logging.WARNING,
                            _id + ' %(levelname)-9s %(message)s'),
                'error': (logging.ERROR,
                          _id + ' %(levelname)-9s %(message)s'),
                'critical': (logging.CRITICAL,
                             _id + ' %(levelname)-9s %(message)s')}
    loglevel, logformat = LOG_ATTR[logStr]

    # Configuring the logger
    logger = logging.getLogger()
    logger.setLevel(loglevel)

    # Clearing previous logs
    logger.handlers = []

    # Setting formatters and adding handlers.
    formatter = logging.Formatter(logformat)
    handlers = []
    handlers.append(SysLogLibHandler(logFacilityLocalN))
    for h in handlers:
        h.setFormatter(formatter)
        logger.addHandler(h)

    # Yep!
    logging.debug('test debug')
    logging.info('test info')
    logging.warning('test warning')
    logging.error('test error')
    logging.critical('test critical')

Answer 8

Here’s the YAML dictConfig way, recommended for Python 3.2 and later.

In the logging config file, cfg.yml:

version: 1
disable_existing_loggers: true

formatters:
    default:
        format: "[%(process)d] %(name)s(%(funcName)s:%(lineno)s) - %(levelname)s: %(message)s"

handlers:
    syslog:
        class: logging.handlers.SysLogHandler
        level: DEBUG
        formatter: default
        address: /dev/log
        facility: local0

    rotating_file:
        class: logging.handlers.RotatingFileHandler
        level: DEBUG
        formatter: default
        filename: rotating.log
        maxBytes: 10485760 # 10MB
        backupCount: 20
        encoding: utf8

root:
    level: DEBUG
    handlers: [syslog, rotating_file]
    propagate: yes

loggers:
    main:
        level: DEBUG
        handlers: [syslog, rotating_file]
        propagate: yes

Load the config using:

log_config = yaml.safe_load(open('cfg.yml'))
logging.config.dictConfig(log_config)

This configures both syslog and a direct file. Note that /dev/log is OS-specific.
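
If portability matters, the address can be picked per platform; a hedged sketch (on macOS the local syslog socket is usually /var/run/syslog):

import sys

syslog_address = '/var/run/syslog' if sys.platform == 'darwin' else '/dev/log'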


Answer 9

I fixed it on my notebook. The rsyslog service was not listening on the socket.

I configured the line below in the /etc/rsyslog.conf file, and that solved the problem:

$SystemLogSocketName /dev/log


Answer 10

You can also add a file handler or rotating file handler to send your logs to a local file: http://docs.python.org/2/library/logging.handlers.html
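
For instance, a rotating file handler alongside syslog might be added like this (a minimal sketch; the filename and size limits are assumptions):

import logging
import logging.handlers

file_handler = logging.handlers.RotatingFileHandler(
    'myapp.log', maxBytes=1048576, backupCount=5)
logging.getLogger().addHandler(file_handler)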


How to add a custom log level to Python's logging facility

Question: How to add a custom log level to Python's logging facility

I’d like to have loglevel TRACE (5) for my application, as I don’t think that debug() is sufficient. Additionally log(5, msg) isn’t what I want. How can I add a custom loglevel to a Python logger?

I have a mylogger.py with the following content:

import logging

@property
def log(obj):
    myLogger = logging.getLogger(obj.__class__.__name__)
    return myLogger

In my code I use it in the following way:

class ExampleClass(object):
    from mylogger import log

    def __init__(self):
        '''The constructor with the logger'''
        self.log.debug("Init runs")

Now I’d like to call self.log.trace("foo bar")

Thanks in advance for your help.

Edit (Dec 8th 2016): I changed the accepted answer to pfa’s which is, IMHO, an excellent solution based on the very good proposal from Eric S.


Answer 0

@Eric S.

Eric S.’s answer is excellent, but I learned by experimentation that this will always cause messages logged at the new debug level to be printed, regardless of what the log level is set to. So if you make a new level with number 9, then even if you call setLevel(50), the lower-level messages will erroneously be printed.

To prevent that from happening, you need another line inside the “debugv” function to check if the logging level in question is actually enabled.

Fixed example that checks if the logging level is enabled:

import logging
DEBUG_LEVELV_NUM = 9 
logging.addLevelName(DEBUG_LEVELV_NUM, "DEBUGV")
def debugv(self, message, *args, **kws):
    if self.isEnabledFor(DEBUG_LEVELV_NUM):
        # Yes, logger takes its '*args' as 'args'.
        self._log(DEBUG_LEVELV_NUM, message, args, **kws) 
logging.Logger.debugv = debugv

If you look at the code for class Logger in logging.__init__.py for Python 2.7, this is what all the standard log functions do (.critical, .debug, etc.).

I apparently can’t post replies to others’ answers for lack of reputation… hopefully Eric will update his post if he sees this. =)


Answer 1

I took the avoid-seeing-“lambda” answer and had to modify where log_at_my_log_level was being added. I too saw the problem that Paul did: I don’t think this works. Don’t you need logger as the first arg in log_at_my_log_level? This worked for me:

import logging
DEBUG_LEVELV_NUM = 9 
logging.addLevelName(DEBUG_LEVELV_NUM, "DEBUGV")
def debugv(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    self._log(DEBUG_LEVELV_NUM, message, args, **kws) 
logging.Logger.debugv = debugv

Answer 2

Combining all of the existing answers with a bunch of usage experience, I think that I have come up with a list of all the things that need to be done to ensure completely seamless usage of the new level. The steps below assume that you are adding a new level TRACE with value logging.DEBUG - 5 == 5:

  1. logging.addLevelName(logging.DEBUG - 5, 'TRACE') needs to be invoked to get the new level registered internally so that it can be referenced by name.
  2. The new level needs to be added as an attribute to logging itself for consistency: logging.TRACE = logging.DEBUG - 5.
  3. A method called trace needs to be added to the logging module. It should behave just like debug, info, etc.
  4. A method called trace needs to be added to the currently configured logger class. Since this is not 100% guaranteed to be logging.Logger, use logging.getLoggerClass() instead.

All the steps are illustrated in the method below:

def addLoggingLevel(levelName, levelNum, methodName=None):
    """
    Comprehensively adds a new logging level to the `logging` module and the
    currently configured logging class.

    `levelName` becomes an attribute of the `logging` module with the value
    `levelNum`. `methodName` becomes a convenience method for both `logging`
    itself and the class returned by `logging.getLoggerClass()` (usually just
    `logging.Logger`). If `methodName` is not specified, `levelName.lower()` is
    used.

    To avoid accidental clobberings of existing attributes, this method will
    raise an `AttributeError` if the level name is already an attribute of the
    `logging` module or if the method name is already present 

    Example
    -------
    >>> addLoggingLevel('TRACE', logging.DEBUG - 5)
    >>> logging.getLogger(__name__).setLevel("TRACE")
    >>> logging.getLogger(__name__).trace('that worked')
    >>> logging.trace('so did this')
    >>> logging.TRACE
    5

    """
    if not methodName:
        methodName = levelName.lower()

    if hasattr(logging, levelName):
       raise AttributeError('{} already defined in logging module'.format(levelName))
    if hasattr(logging, methodName):
       raise AttributeError('{} already defined in logging module'.format(methodName))
    if hasattr(logging.getLoggerClass(), methodName):
       raise AttributeError('{} already defined in logger class'.format(methodName))

    # This method was inspired by the answers to Stack Overflow post
    # http://stackoverflow.com/q/2183233/2988730, especially
    # http://stackoverflow.com/a/13638084/2988730
    def logForLevel(self, message, *args, **kwargs):
        if self.isEnabledFor(levelNum):
            self._log(levelNum, message, args, **kwargs)
    def logToRoot(message, *args, **kwargs):
        logging.log(levelNum, message, *args, **kwargs)

    logging.addLevelName(levelNum, levelName)
    setattr(logging, levelName, levelNum)
    setattr(logging.getLoggerClass(), methodName, logForLevel)
    setattr(logging, methodName, logToRoot)

Answer 3

This question is rather old, but I just dealt with the same topic and found a way similar to those already mentioned, which appears a little cleaner to me. This was tested on 3.4, so I’m not sure whether the methods used exist in older versions:

from logging import getLoggerClass, addLevelName, setLoggerClass, NOTSET

VERBOSE = 5

class MyLogger(getLoggerClass()):
    def __init__(self, name, level=NOTSET):
        super().__init__(name, level)

        addLevelName(VERBOSE, "VERBOSE")

    def verbose(self, msg, *args, **kwargs):
        if self.isEnabledFor(VERBOSE):
            self._log(VERBOSE, msg, args, **kwargs)

setLoggerClass(MyLogger)

Answer 4

Who started the bad practice of using internal methods (self._log) and why is each answer based on that?! The pythonic solution would be to use self.log instead so you don’t have to mess with any internal stuff:

import logging

SUBDEBUG = 5
logging.addLevelName(SUBDEBUG, 'SUBDEBUG')

def subdebug(self, message, *args, **kws):
    self.log(SUBDEBUG, message, *args, **kws) 
logging.Logger.subdebug = subdebug

logging.basicConfig()
l = logging.getLogger()
l.setLevel(SUBDEBUG)
l.subdebug('test')
l.setLevel(logging.DEBUG)
l.subdebug('test')

Answer 5

I find it easier to create a new attribute on the logger object that passes through to the log() function. I think the logger module provides addLevelName() and log() for this very reason. Thus no subclass or new method is needed.

import logging

@property
def log(obj):
    logging.addLevelName(5, 'TRACE')
    myLogger = logging.getLogger(obj.__class__.__name__)
    setattr(myLogger, 'trace', lambda *args: myLogger.log(5, *args))
    return myLogger

Now

mylogger.trace('This is a trace message')

should work as expected.


Answer 6

While we already have plenty of correct answers, the following is, in my opinion, more pythonic:

import logging

from functools import partial, partialmethod

logging.TRACE = 5
logging.addLevelName(logging.TRACE, 'TRACE')
logging.Logger.trace = partialmethod(logging.Logger.log, logging.TRACE)
logging.trace = partial(logging.log, logging.TRACE)

If you want to use mypy on your code, it is recommended to add # type: ignore to suppress the warnings from adding attributes.


Answer 7

I think you’ll have to subclass the Logger class and add a method called trace which basically calls Logger.log with a level lower than DEBUG. I haven’t tried this but this is what the docs indicate.
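
A minimal sketch of what that subclassing approach might look like (untested here, following the docs' suggestion; TRACE and TraceLogger are assumed names):

import logging

TRACE = 5  # lower than DEBUG (10)
logging.addLevelName(TRACE, 'TRACE')

class TraceLogger(logging.Logger):
    def trace(self, msg, *args, **kwargs):
        # Delegate to the public log() method at the custom level
        self.log(TRACE, msg, *args, **kwargs)

logging.setLoggerClass(TraceLogger)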


Answer 8

创建自定义记录器的提示:

  1. 不要使用_log,使用log(您不必检查isEnabledFor
  2. 日志记录模块应该是自定义记录器的一个创建实例,因为它在中getLogger起到了一些神奇作用,因此您需要通过以下方式设置类setLoggerClass
  3. __init__如果您不存储任何内容,则无需为记录器定义类
# Lower than debug which is 10
TRACE = 5
class MyLogger(logging.Logger):
    def trace(self, msg, *args, **kwargs):
        self.log(TRACE, msg, *args, **kwargs)

调用此记录器时,请使用setLoggerClass(MyLogger)它作为默认记录器getLogger

logging.setLoggerClass(MyLogger)
log = logging.getLogger(__name__)
# ...
log.trace("something specific")

您需要在 handler 上调用 setFormatter、在 logger 上调用 addHandler,并在 handler 和 logger 自身都调用 setLevel(TRACE),才能真正看到这个低级别的 TRACE 输出

Tips for creating a custom logger:

  1. Do not use _log, use log (you don’t have to check isEnabledFor)
  2. the logging module should be the one creating the instance of the custom logger since it does some magic in getLogger, so you will need to set the class via setLoggerClass
  3. You do not need to define __init__ for the logger class if you are not storing anything
# Lower than debug which is 10
TRACE = 5
class MyLogger(logging.Logger):
    def trace(self, msg, *args, **kwargs):
        self.log(TRACE, msg, *args, **kwargs)

When calling this logger use setLoggerClass(MyLogger) to make this the default logger from getLogger

logging.setLoggerClass(MyLogger)
log = logging.getLogger(__name__)
# ...
log.trace("something specific")

You will need to call setFormatter on the handler, addHandler on the logger, and setLevel(TRACE) on both the handler and the logger itself to actually see this low-level trace output


回答 9

这对我有用:

import logging
logging.basicConfig(
    format='  %(levelname)-8.8s %(funcName)s: %(message)s',
)
logging.NOTE = 32  # positive yet important
logging.addLevelName(logging.NOTE, 'NOTE')      # new level
logging.addLevelName(logging.CRITICAL, 'FATAL') # rename existing

log = logging.getLogger(__name__)
log.note = lambda msg, *args: log._log(logging.NOTE, msg, args)
log.note('school\'s out for summer! %s', 'dude')
log.fatal('file not found.')

lambda / funcName问题已通过@marqueed指出的logger._log解决。我认为使用lambda看起来更干净一些,但是缺点是它不能接受关键字参数。我自己从来没有用过,所以没什么大不了的。

  NOTE     setup: school's out for summer! dude
  FATAL    setup: file not found.

This worked for me:

import logging
logging.basicConfig(
    format='  %(levelname)-8.8s %(funcName)s: %(message)s',
)
logging.NOTE = 32  # positive yet important
logging.addLevelName(logging.NOTE, 'NOTE')      # new level
logging.addLevelName(logging.CRITICAL, 'FATAL') # rename existing

log = logging.getLogger(__name__)
log.note = lambda msg, *args: log._log(logging.NOTE, msg, args)
log.note('school\'s out for summer! %s', 'dude')
log.fatal('file not found.')

The lambda/funcName issue is fixed with logger._log as @marqueed pointed out. I think using lambda looks a bit cleaner, but the drawback is that it can’t take keyword arguments. I’ve never used that myself, so no biggie.

  NOTE     setup: school's out for summer! dude
  FATAL    setup: file not found.

回答 10

以我的经验,这才是题主(OP)问题的完整解决方案……为了避免把 “lambda” 显示为发出消息的函数名,需要更深入一层:

import logging

MY_LEVEL_NUM = 25
logging.addLevelName(MY_LEVEL_NUM, "MY_LEVEL_NAME")
def log_at_my_log_level(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    self._log(MY_LEVEL_NUM, message, args, **kws)
# 挂到 Logger 类(而不是某个实例)上,'self' 才能正确绑定
logging.Logger.log_at_my_log_level = log_at_my_log_level

我从未尝试过使用独立的记录器类,但我认为基本思想是相同的(使用_log)。

In my experience, this is the full solution the the op’s problem… to avoid seeing “lambda” as the function in which the message is emitted, go deeper:

import logging

MY_LEVEL_NUM = 25
logging.addLevelName(MY_LEVEL_NUM, "MY_LEVEL_NAME")
def log_at_my_log_level(self, message, *args, **kws):
    # Yes, logger takes its '*args' as 'args'.
    self._log(MY_LEVEL_NUM, message, args, **kws)
# attach to the Logger class (not an instance) so 'self' binds correctly
logging.Logger.log_at_my_log_level = log_at_my_log_level

I’ve never tried working with a standalone logger class, but I think the basic idea is the same (use _log).


回答 11

在 “Mad Physicist” 的示例基础上补充一点,使文件名和行号也保持正确:

def logToRoot(message, *args, **kwargs):
    if logging.root.isEnabledFor(levelNum):
        logging.root._log(levelNum, message, args, **kwargs)

Addition to Mad Physicist's example to get the file name and line number correct:

def logToRoot(message, *args, **kwargs):
    if logging.root.isEnabledFor(levelNum):
        logging.root._log(levelNum, message, args, **kwargs)

回答 12

基于置顶的答案,我写了一个小方法,可以自动创建新的日志记录级别:

import logging

def set_custom_logging_levels(config={}):
    """
        Assign custom levels for logging
            config: is a dict, like
            {
                'EVENT_NAME': EVENT_LEVEL_NUM,
            }
        EVENT_LEVEL_NUM must not collide with a level the logging module already defines:
        logging.DEBUG       = 10
        logging.INFO        = 20
        logging.WARNING     = 30
        logging.ERROR       = 40
        logging.CRITICAL    = 50
    """
    assert isinstance(config, dict), "Configuration must be a dict"

    def get_level_func(level_name, level_num):
        def _blank(self, message, *args, **kws):
            if self.isEnabledFor(level_num):
                # Yes, logger takes its '*args' as 'args'.
                self._log(level_num, message, args, **kws) 
        _blank.__name__ = level_name.lower()
        return _blank

    for level_name, level_num in config.items():
        logging.addLevelName(level_num, level_name.upper())
        setattr(logging.Logger, level_name.lower(), get_level_func(level_name, level_num))

配置可能像这样:

new_log_levels = {
    # level_num is in the logging.INFO range, that's why it's 21, 22, etc.
    "FOO":      21,
    "BAR":      22,
}

based on pinned answer, i wrote a little method which automaticaly create new logging levels

import logging

def set_custom_logging_levels(config={}):
    """
        Assign custom levels for logging
            config: is a dict, like
            {
                'EVENT_NAME': EVENT_LEVEL_NUM,
            }
        EVENT_LEVEL_NUM must not collide with a level the logging module already defines:
        logging.DEBUG       = 10
        logging.INFO        = 20
        logging.WARNING     = 30
        logging.ERROR       = 40
        logging.CRITICAL    = 50
    """
    assert isinstance(config, dict), "Configuration must be a dict"

    def get_level_func(level_name, level_num):
        def _blank(self, message, *args, **kws):
            if self.isEnabledFor(level_num):
                # Yes, logger takes its '*args' as 'args'.
                self._log(level_num, message, args, **kws) 
        _blank.__name__ = level_name.lower()
        return _blank

    for level_name, level_num in config.items():
        logging.addLevelName(level_num, level_name.upper())
        setattr(logging.Logger, level_name.lower(), get_level_func(level_name, level_num))

config may look something like this:

new_log_levels = {
    # level_num is in the logging.INFO range, that's why it's 21, 22, etc.
    "FOO":      21,
    "BAR":      22,
}

回答 13

作为向Logger类添加额外方法的替代方法,我建议使用该Logger.log(level, msg)方法。

import logging

TRACE = 5
logging.addLevelName(TRACE, 'TRACE')
FORMAT = '%(levelname)s:%(name)s:%(lineno)d:%(message)s'


logging.basicConfig(format=FORMAT)
l = logging.getLogger()
l.setLevel(TRACE)
l.log(TRACE, 'trace message')
l.setLevel(logging.DEBUG)
l.log(TRACE, 'disabled trace message')

As alternative to adding an extra method to the Logger class I would recommend using the Logger.log(level, msg) method.

import logging

TRACE = 5
logging.addLevelName(TRACE, 'TRACE')
FORMAT = '%(levelname)s:%(name)s:%(lineno)d:%(message)s'


logging.basicConfig(format=FORMAT)
l = logging.getLogger()
l.setLevel(TRACE)
l.log(TRACE, 'trace message')
l.setLevel(logging.DEBUG)
l.log(TRACE, 'disabled trace message')

回答 14

我很困惑; 至少在python 3.5中,它可以正常工作:

import logging


TRACE = 5
"""more detail than debug"""

logging.basicConfig()
logging.addLevelName(TRACE,"TRACE")
logger = logging.getLogger('')
logger.debug("n")
logger.setLevel(logging.DEBUG)
logger.debug("y1")
logger.log(TRACE,"n")
logger.setLevel(TRACE)
logger.log(TRACE,"y2")
    

输出:

DEBUG:root:y1

TRACE:root:y2

I’m confused; with python 3.5, at least, it just works:

import logging


TRACE = 5
"""more detail than debug"""

logging.basicConfig()
logging.addLevelName(TRACE,"TRACE")
logger = logging.getLogger('')
logger.debug("n")
logger.setLevel(logging.DEBUG)
logger.debug("y1")
logger.log(TRACE,"n")
logger.setLevel(TRACE)
logger.log(TRACE,"y2")
    

output:

DEBUG:root:y1

TRACE:root:y2


回答 15

万一有人想要一种自动的方式来动态地向日志记录模块(或其副本)添加新的日志记录级别,我创建了此函数,扩展了@pfa的答案:

def add_level(log_name,custom_log_module=None,log_num=None,
                log_call=None,
                   lower_than=None, higher_than=None, same_as=None,
              verbose=True):
    '''
    Function to dynamically add a new log level to a given custom logging module.
    <custom_log_module>: the logging module. If not provided, then a copy of
        <logging> module is used
    <log_name>: the logging level name
    <log_num>: the logging level num. If not provided, then function checks
        <lower_than>,<higher_than> and <same_as>, at the order mentioned.
        One of those three parameters must hold a string of an already existent
        logging level name.
    In case a level is overwritten and <verbose> is True, then a message in WARNING
        level of the custom logging module is established.
    '''
    if custom_log_module is None:
        import imp
        custom_log_module = imp.load_module('custom_log_module',
                                            *imp.find_module('logging'))
    log_name = log_name.upper()
    def cust_log(par, message, *args, **kws):
        # Yes, logger takes its '*args' as 'args'.
        if par.isEnabledFor(log_num):
            par._log(log_num, message, args, **kws)
    # note: _levelNames is Python 2-era; Python 3.4+ splits it into _levelToName/_nameToLevel
    available_level_nums = [key for key in custom_log_module._levelNames
                            if isinstance(key,int)]

    available_levels = {key:custom_log_module._levelNames[key]
                             for key in custom_log_module._levelNames
                            if isinstance(key,str)}
    if log_num is None:
        try:
            if lower_than is not None:
                log_num = available_levels[lower_than]-1
            elif higher_than is not None:
                log_num = available_levels[higher_than]+1
            elif same_as is not None:
                log_num = available_levels[same_as]
            else:
                raise Exception('Information about the '+
                                'log_num should be provided')
        except KeyError:
            raise Exception('Non existent logging level name')
    if log_num in available_level_nums and verbose:
        custom_log_module.warn('Changing ' +
                                  custom_log_module._levelNames[log_num] +
                                  ' to '+log_name)
    custom_log_module.addLevelName(log_num, log_name)

    if log_call is None:
        log_call = log_name.lower()

    setattr(custom_log_module.Logger, log_call, cust_log)
    return custom_log_module

In case anyone wants an automated way to add a new logging level to the logging module (or a copy of it) dynamically, I have created this function, expanding @pfa’s answer:

def add_level(log_name,custom_log_module=None,log_num=None,
                log_call=None,
                   lower_than=None, higher_than=None, same_as=None,
              verbose=True):
    '''
    Function to dynamically add a new log level to a given custom logging module.
    <custom_log_module>: the logging module. If not provided, then a copy of
        <logging> module is used
    <log_name>: the logging level name
    <log_num>: the logging level num. If not provided, then function checks
        <lower_than>,<higher_than> and <same_as>, at the order mentioned.
        One of those three parameters must hold a string of an already existent
        logging level name.
    In case a level is overwritten and <verbose> is True, then a message in WARNING
        level of the custom logging module is established.
    '''
    if custom_log_module is None:
        import imp
        custom_log_module = imp.load_module('custom_log_module',
                                            *imp.find_module('logging'))
    log_name = log_name.upper()
    def cust_log(par, message, *args, **kws):
        # Yes, logger takes its '*args' as 'args'.
        if par.isEnabledFor(log_num):
            par._log(log_num, message, args, **kws)
    # note: _levelNames is Python 2-era; Python 3.4+ splits it into _levelToName/_nameToLevel
    available_level_nums = [key for key in custom_log_module._levelNames
                            if isinstance(key,int)]

    available_levels = {key:custom_log_module._levelNames[key]
                             for key in custom_log_module._levelNames
                            if isinstance(key,str)}
    if log_num is None:
        try:
            if lower_than is not None:
                log_num = available_levels[lower_than]-1
            elif higher_than is not None:
                log_num = available_levels[higher_than]+1
            elif same_as is not None:
                log_num = available_levels[same_as]
            else:
                raise Exception('Information about the '+
                                'log_num should be provided')
        except KeyError:
            raise Exception('Non existent logging level name')
    if log_num in available_level_nums and verbose:
        custom_log_module.warn('Changing ' +
                                  custom_log_module._levelNames[log_num] +
                                  ' to '+log_name)
    custom_log_module.addLevelName(log_num, log_name)

    if log_call is None:
        log_call = log_name.lower()

    setattr(custom_log_module.Logger, log_call, cust_log)
    return custom_log_module

Python / Django:登录到runserver下的控制台,登录到Apache下的文件

问题:Python / Django:登录到runserver下的控制台,登录到Apache下的文件

当我通过 manage.py runserver 运行 Django 应用时,如何像 print 那样把跟踪消息发送到控制台;而当应用运行在 Apache 下时,又如何让这些消息写入日志文件?

我回顾了Django日志记录,尽管它对高级用途的灵活性和可配置性给我留下了深刻的印象,但我仍然对如何处理简单的用例感到困惑。

How can I send trace messages to the console (like print) when I’m running my Django app under manage.py runserver, but have those messages sent to a log file when I’m running the app under Apache?

I reviewed Django logging and although I was impressed with its flexibility and configurability for advanced uses, I’m still stumped with how to handle my simple use-case.


回答 0

在 mod_wsgi 下运行时,打印到 stderr 的文本会出现在 httpd 的错误日志中。您可以直接使用 print,也可以改用 logging。

import sys

# Python 2 语法;在 Python 3 中应写作:print('Goodbye, cruel world!', file=sys.stderr)
print >>sys.stderr, 'Goodbye, cruel world!'

Text printed to stderr will show up in httpd’s error log when running under mod_wsgi. You can either use print directly, or use logging instead.

import sys

# Python 2 syntax; on Python 3 use: print('Goodbye, cruel world!', file=sys.stderr)
print >>sys.stderr, 'Goodbye, cruel world!'

回答 1

这是一个基于Django日志记录的解决方案。它使用DEBUG设置,而不是实际检查您是否在运行开发服务器,但是如果您找到一种更好的检查方法,则应该很容易进行调整。

LOGGING = {
    'version': 1,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/path/to/your/file.log',
            'formatter': 'simple'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    }
}

if DEBUG:
    # make all loggers use the console.
    for logger in LOGGING['loggers']:
        LOGGING['loggers'][logger]['handlers'] = ['console']

有关详细信息,请参见https://docs.djangoproject.com/en/dev/topics/logging/

Here’s a Django logging-based solution. It uses the DEBUG setting rather than actually checking whether or not you’re running the development server, but if you find a better way to check for that it should be easy to adapt.

LOGGING = {
    'version': 1,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
        },
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/path/to/your/file.log',
            'formatter': 'simple'
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    }
}

if DEBUG:
    # make all loggers use the console.
    for logger in LOGGING['loggers']:
        LOGGING['loggers'][logger]['handlers'] = ['console']

see https://docs.djangoproject.com/en/dev/topics/logging/ for details.


回答 2

您可以在 settings.py 文件中配置日志记录。

一个例子:

if DEBUG:
    # will output to your console
    logging.basicConfig(
        level = logging.DEBUG,
        format = '%(asctime)s %(levelname)s %(message)s',
    )
else:
    # will output to logging file
    logging.basicConfig(
        level = logging.DEBUG,
        format = '%(asctime)s %(levelname)s %(message)s',
        filename = '/my_log_file.log',
        filemode = 'a'
    )

但是,这依赖于 DEBUG 设置,也许您并不想操心它是如何设置的。关于如何判断 Django 应用是否运行在开发服务器上,请参见那个问题下的回答,以更好的方式编写该条件。编辑:上面的示例来自 Django 1.1 项目,此后 Django 的日志配置已有所变化。

You can configure logging in your settings.py file.

One example:

if DEBUG:
    # will output to your console
    logging.basicConfig(
        level = logging.DEBUG,
        format = '%(asctime)s %(levelname)s %(message)s',
    )
else:
    # will output to logging file
    logging.basicConfig(
        level = logging.DEBUG,
        format = '%(asctime)s %(levelname)s %(message)s',
        filename = '/my_log_file.log',
        filemode = 'a'
    )

However that’s dependent upon setting DEBUG, and maybe you don’t want to have to worry about how it’s set up. See this answer on How can I tell whether my Django application is running on development server or not? for a better way of writing that conditional. Edit: the example above is from a Django 1.1 project, logging configuration in Django has changed somewhat since that version.


回答 3

我用这个:

logging.conf:

[loggers]
keys=root,applog
[handlers]
keys=rotateFileHandler,rotateConsoleHandler

[formatters]
keys=applog_format,console_format

[formatter_applog_format]
format=%(asctime)s-[%(levelname)-8s]:%(message)s

[formatter_console_format]
format=%(asctime)s-%(filename)s%(lineno)d[%(levelname)s]:%(message)s

[logger_root]
level=DEBUG
handlers=rotateFileHandler,rotateConsoleHandler

[logger_applog]
level=DEBUG
handlers=rotateFileHandler
qualname=applog

[handler_rotateFileHandler]
class=handlers.RotatingFileHandler
level=DEBUG
formatter=applog_format
args=('applog.log', 'a', 10000, 9)

[handler_rotateConsoleHandler]
class=StreamHandler
level=DEBUG
formatter=console_format
args=(sys.stdout,)

testapp.py:

import logging
import logging.config

def main():
    logging.config.fileConfig('logging.conf')
    logger = logging.getLogger('applog')

    logger.debug('debug message')
    logger.info('info message')
    logger.warn('warn message')
    logger.error('error message')
    logger.critical('critical message')
    #logging.shutdown()

if __name__ == '__main__':
    main()

I use this:

logging.conf:

[loggers]
keys=root,applog
[handlers]
keys=rotateFileHandler,rotateConsoleHandler

[formatters]
keys=applog_format,console_format

[formatter_applog_format]
format=%(asctime)s-[%(levelname)-8s]:%(message)s

[formatter_console_format]
format=%(asctime)s-%(filename)s%(lineno)d[%(levelname)s]:%(message)s

[logger_root]
level=DEBUG
handlers=rotateFileHandler,rotateConsoleHandler

[logger_applog]
level=DEBUG
handlers=rotateFileHandler
qualname=applog

[handler_rotateFileHandler]
class=handlers.RotatingFileHandler
level=DEBUG
formatter=applog_format
args=('applog.log', 'a', 10000, 9)

[handler_rotateConsoleHandler]
class=StreamHandler
level=DEBUG
formatter=console_format
args=(sys.stdout,)

testapp.py:

import logging
import logging.config

def main():
    logging.config.fileConfig('logging.conf')
    logger = logging.getLogger('applog')

    logger.debug('debug message')
    logger.info('info message')
    logger.warn('warn message')
    logger.error('error message')
    logger.critical('critical message')
    #logging.shutdown()

if __name__ == '__main__':
    main()

回答 4

您可以使用tagalog(https://github.com/dorkitude/tagalog)轻松完成此操作

例如,当标准python模块写入以附加模式打开的文件对象时,App Engine模块(https://github.com/dorkitude/tagalog/blob/master/tagalog_appengine.py)会覆盖此行为,而是使用logging.INFO

要在App Engine项目中获得此行为,只需执行以下操作:

import tagalog.tagalog_appengine as tagalog
tagalog.log('whatever message', ['whatever','tags'])

您可以自己扩展模块并覆盖日志功能,而不会遇到太大困难。

You can do this pretty easily with tagalog (https://github.com/dorkitude/tagalog)

For instance, while the standard python module writes to a file object opened in append mode, the App Engine module (https://github.com/dorkitude/tagalog/blob/master/tagalog_appengine.py) overrides this behavior and instead uses logging.INFO.

To get this behavior in an App Engine project, one could simply do:

import tagalog.tagalog_appengine as tagalog
tagalog.log('whatever message', ['whatever','tags'])

You could extend the module yourself and overwrite the log function without much difficulty.


回答 5

这在我的local.py中效果很好,省去了常规日志记录的麻烦:

from .settings import *

LOGGING['handlers']['console'] = {
    'level': 'DEBUG',
    'class': 'logging.StreamHandler',
    'formatter': 'verbose'
}
LOGGING['loggers']['foo.bar'] = {
    'handlers': ['console'],
    'propagate': False,
    'level': 'DEBUG',
}

This works quite well in my local.py, saves me messing up the regular logging:

from .settings import *

LOGGING['handlers']['console'] = {
    'level': 'DEBUG',
    'class': 'logging.StreamHandler',
    'formatter': 'verbose'
}
LOGGING['loggers']['foo.bar'] = {
    'handlers': ['console'],
    'propagate': False,
    'level': 'DEBUG',
}

在python中,为什么要使用日志记录而不是print?

问题:在python中,为什么要使用日志记录而不是print?

对于复杂项目中的简单调试,是否有理由使用python记录器而不是print?那其他用例呢?是否每个都有一个公认的最佳用例(尤其是在您仅查找标准输出时)?

我一直都听说这是“最佳实践”,但我一直无法弄清原因。

For simple debugging in a complex project is there a reason to use the python logger instead of print? What about other use-cases? Is there an accepted best use-case for each (especially when you’re only looking for stdout)?

I’ve always heard that this is a “best practice” but I haven’t been able to figure out why.


回答 0

日志记录程序包具有许多有用的功能:

  • 易于查看从何处以及何时(即使是哪一行)进行记录调用。
  • 您可以同时登录文件,套接字,几乎所有内容。
  • 您可以根据严重性区分日志记录。

印刷没有这些。

另外,如果您的项目打算由其他python工具导入,则对于您的包来说,将内容打印到stdout是不好的做法,因为用户很可能不知道打印消息的来源。使用日志记录,包的用户可以选择是否要传播您工具中的日志记录消息。

The logging package has a lot of useful features:

  • Easy to see where and when (even what line no.) a logging call is being made from.
  • You can log to files, sockets, pretty much anything, all at the same time.
  • You can differentiate your logging based on severity.

Print doesn’t have any of these.

Also, if your project is meant to be imported by other python tools, it’s bad practice for your package to print things to stdout, since the user likely won’t know where the print messages are coming from. With logging, users of your package can choose whether or not they want to propagate logging messages from your tool.


回答 1

正确记录的最大优点之一是您可以对消息进行分类,然后根据需要将其打开或关闭。例如,为项目的某个部分打开调试级别的消息,而为其他部分调低级别,这样可能会很有用,以免被信息洪流淹没,并能轻松专注于您真正需要日志的那项任务。

另外,日志是可配置的。您可以轻松地对其进行过滤、发送到文件、格式化、添加时间戳,以及在全局层面上做任何其他需要的事情。打印语句则不易管理。

One of the biggest advantages of proper logging is that you can categorize messages and turn them on or off depending on what you need. For example, it might be useful to turn on debugging level messages for a certain part of the project, but tone it down for other parts, so as not to be taken over by information overload and to easily concentrate on the task for which you need logging.

Also, logs are configurable. You can easily filter them, send them to files, format them, add timestamps, and any other things you might need on a global basis. Print statements are not easily managed.


回答 2

打印语句可谓集两者之短:它把在线调试器和诊断插桩的缺点结合在了一起。您必须修改程序,却得不到更多有用的代码。

在线调试器使您可以检查正在运行的程序的状态。但是真正的调试器的好处是您不必修改源代码。调试会话之前或之后;您只需将程序加载到调试器中,告诉调试器您要查找的位置,便一切就绪。

对应用程序进行插桩可能需要预先做一些工作,以某种方式修改源代码,但由此产生的诊断输出可以包含海量细节,并且可以非常精细地打开或关闭。python 日志记录模块不仅能显示记录的消息,还能显示调用它的文件和函数、回溯(如果有)、消息发出的实际时间等等。更重要的是,诊断插桩永远不需要拆除:程序完成并投入生产后,它与添加之日一样有效、有用;而且可以把输出放进日志文件,不打扰任何人,或者调低日志级别,把除最紧急消息以外的所有消息都挡在外面。

预判对调试器的需求和用法,其实并不比在测试时使用 ipython 更难,顺便还能熟悉它用来控制内置 pdb 调试器的那些命令。

当您觉得打印语句可能比使用 pdb 更省事时(通常确实如此),您会发现,使用记录器比先加打印语句、事后再删除,更容易让程序保持在可工作的状态。

我把编辑器配置为将打印语句高亮为语法错误、将日志语句高亮为注释,因为在我看来它们就是如此。

Print statements are sort of the worst of both worlds, combining the negative aspects of an online debugger with diagnostic instrumentation. You have to modify the program, but you don’t get more useful code from it.

An online debugger allows you to inspect the state of a running program; but the nice thing about a real debugger is that you don’t have to modify the source, neither before nor after the debugging session; you just load the program into the debugger, tell the debugger where you want to look, and you’re all set.

Instrumenting the application might take some work up front, modifying the source code in some way, but the resulting diagnostic output can have enormous amounts of detail, and can be turned on or off to a very specific degree. The python logging module can show not just the message logged, but also the file and function that called it, a traceback if there was one, the actual time that the message was emitted, and so on. More than that, diagnostic instrumentation need never be removed; it’s just as valid and useful when the program is finished and in production as it was the day it was added; but it can have its output stuck in a log file where it’s not likely to annoy anyone, or the log level can be turned down to keep all but the most urgent messages out.

Anticipating the need or use for a debugger is really no harder than using ipython while you’re testing, and becoming familiar with the commands it uses to control the built-in pdb debugger.

When you find yourself thinking that a print statement might be easier than using pdb (as it often is), you’ll find that using a logger pulls your program into a much easier-to-work-on state than if you use and later remove print statements.

I have my editor configured to highlight print statements as syntax errors, and logging statements as comments, since that’s about how I regard them.


回答 3

如果使用日志记录,则负责部署的人员可以配置记录器以将其与自定义信息一起发送到自定义位置。如果仅打印,则仅此而已。

If you use logging then the person responsible for deployment can configure the logger to send it to a custom location, with custom information. If you only print, then that’s all they get.


回答 4

日志记录实质上为打印输出创建了一个可搜索的纯文本数据库,并附带其他元数据(时间戳、日志级别、行号、进程等)。

这简直是金矿:python 脚本跑完后,我可以直接对日志文件运行 egrep。我可以调整 egrep 的模式,精确挑出我感兴趣的内容而忽略其余部分。认知负担的减少,以及事后通过反复试验自由选择 egrep 模式,是对我而言最关键的好处。

tail -f mylogfile.log | egrep "key_word1|key_word2"

现在,抛出其他打印无法完成的很棒的事情(发送至套接字,设置调试级别,logrotate,添加元数据等),您有充分的理由偏爱于记录普通打印语句。

我过去倾向于使用打印语句,因为那样又懒又省事;添加日志记录需要一些样板代码,但我们有 yasnippets(emacs)、ultisnips(vim)以及其他模板工具,所以何必为了普通的打印语句而放弃日志记录呢!?

Logging essentially creates a searchable plain text database of print outputs with other meta data (timestamp, loglevel, line number, process etc.).

This is pure gold, I can run egrep over the log file after the python script has run. I can tune my egrep pattern search to pick exactly what I am interested in and ignore the rest. This reduction of cognitive load and freedom to pick my egrep pattern later on by trial and error is the key benefit for me.

tail -f mylogfile.log | egrep "key_word1|key_word2"

Now throw in other cool things that print can’t do (sending to socket, setting debug levels, logrotate, adding meta data etc.), you have every reason to prefer logging over plain print statements.

I tend to use print statements because it’s lazy and easy, and adding logging needs some boilerplate code; hey, we have yasnippets (emacs) and ultisnips (vim) and other templating tools, so why give up logging for plain print statements!?


记录来自python-requests模块的所有请求

问题:记录来自python-requests模块的所有请求

我正在使用 python 的 Requests。我需要调试一些 OAuth 活动,为此我希望记录所有正在执行的请求。我本可以用 ngrep 获得这些信息,但很遗憾,无法对 https 连接进行 grep(而 OAuth 需要 https)。

如何激活Requests正在访问的所有URL(+参数)的日志记录?

I am using python Requests. I need to debug some OAuth activity, and for that I would like it to log all requests being performed. I could get this information with ngrep, but unfortunately it is not possible to grep https connections (which are needed for OAuth)

How can I activate logging of all URLs (+ parameters) that Requests is accessing?


回答 0

底层的 urllib3 库会通过 logging 模块记录所有新建连接和 URL(但不包括 POST 正文)。对于 GET 请求,这应该足够了:

import logging

logging.basicConfig(level=logging.DEBUG)

这为您提供了最详细的日志记录选项;有关如何配置日志记录级别和目标的更多详细信息,请参见日志记录操作指南。

简短的演示:

>>> import requests
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> r = requests.get('http://httpbin.org/get?foo=bar&baz=python')
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org:80
DEBUG:urllib3.connectionpool:http://httpbin.org:80 "GET /get?foo=bar&baz=python HTTP/1.1" 200 366

根据urllib3的确切版本,将记录以下消息:

  • INFO:重定向
  • WARN:连接池已满(如果发生这种情况,通常会增加连接池的大小)
  • WARN:无法解析标头(格式无效的响应标头)
  • WARN:重试连接
  • WARN:证书与预期的主机名不匹配
  • WARN:处理分块响应时,接收到的内容长度和传输编码都包含响应
  • DEBUG:新连接(HTTP或HTTPS)
  • DEBUG:断开的连接
  • DEBUG:连接详细信息:方法,路径,HTTP版本,状态代码和响应长度
  • DEBUG:重试计数增量

这不包括标头或正文。urllib3 使用 http.client.HTTPConnection 类完成底层工作,但该类不支持日志记录,通常只能配置为打印到 stdout。不过,您可以通过在该模块中引入一个替代的 print 名称,把所有调试信息改送到 logging:

import logging
import http.client

httpclient_logger = logging.getLogger("http.client")

def httpclient_logging_patch(level=logging.DEBUG):
    """Enable HTTPConnection debug logging to the logging framework"""

    def httpclient_log(*args):
        httpclient_logger.log(level, " ".join(args))

    # mask the print() built-in in the http.client module to use
    # logging instead
    http.client.print = httpclient_log
    # enable debugging
    http.client.HTTPConnection.debuglevel = 1

调用 httpclient_logging_patch() 后,http.client 的连接会把所有调试信息输出到一个标准记录器,因此能被 logging.basicConfig() 配置的处理程序拾取:

>>> httpclient_logging_patch()
>>> r = requests.get('http://httpbin.org/get?foo=bar&baz=python')
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org:80
DEBUG:http.client:send: b'GET /get?foo=bar&baz=python HTTP/1.1\r\nHost: httpbin.org\r\nUser-Agent: python-requests/2.22.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Date: Tue, 04 Feb 2020 13:36:53 GMT
DEBUG:http.client:header: Content-Type: application/json
DEBUG:http.client:header: Content-Length: 366
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Server: gunicorn/19.9.0
DEBUG:http.client:header: Access-Control-Allow-Origin: *
DEBUG:http.client:header: Access-Control-Allow-Credentials: true
DEBUG:urllib3.connectionpool:http://httpbin.org:80 "GET /get?foo=bar&baz=python HTTP/1.1" 200 366

The underlying urllib3 library logs all new connections and URLs with the logging module, but not POST bodies. For GET requests this should be enough:

import logging

logging.basicConfig(level=logging.DEBUG)

which gives you the most verbose logging option; see the logging HOWTO for more details on how to configure logging levels and destinations.

Short demo:

>>> import requests
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> r = requests.get('http://httpbin.org/get?foo=bar&baz=python')
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org:80
DEBUG:urllib3.connectionpool:http://httpbin.org:80 "GET /get?foo=bar&baz=python HTTP/1.1" 200 366

Depending on the exact version of urllib3, the following messages are logged:

  • INFO: Redirects
  • WARN: Connection pool full (if this happens often increase the connection pool size)
  • WARN: Failed to parse headers (response headers with invalid format)
  • WARN: Retrying the connection
  • WARN: Certificate did not match expected hostname
  • WARN: Received response with both Content-Length and Transfer-Encoding, when processing a chunked response
  • DEBUG: New connections (HTTP or HTTPS)
  • DEBUG: Dropped connections
  • DEBUG: Connection details: method, path, HTTP version, status code and response length
  • DEBUG: Retry count increments

This doesn’t include headers or bodies. urllib3 uses the http.client.HTTPConnection class to do the grunt-work, but that class doesn’t support logging, it can normally only be configured to print to stdout. However, you can rig it to send all debug information to logging instead by introducing an alternative print name into that module:

import logging
import http.client

httpclient_logger = logging.getLogger("http.client")

def httpclient_logging_patch(level=logging.DEBUG):
    """Enable HTTPConnection debug logging to the logging framework"""

    def httpclient_log(*args):
        httpclient_logger.log(level, " ".join(args))

    # mask the print() built-in in the http.client module to use
    # logging instead
    http.client.print = httpclient_log
    # enable debugging
    http.client.HTTPConnection.debuglevel = 1

Calling httpclient_logging_patch() causes http.client connections to output all debug information to a standard logger, and so are picked up by logging.basicConfig():

>>> httpclient_logging_patch()
>>> r = requests.get('http://httpbin.org/get?foo=bar&baz=python')
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org:80
DEBUG:http.client:send: b'GET /get?foo=bar&baz=python HTTP/1.1\r\nHost: httpbin.org\r\nUser-Agent: python-requests/2.22.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Date: Tue, 04 Feb 2020 13:36:53 GMT
DEBUG:http.client:header: Content-Type: application/json
DEBUG:http.client:header: Content-Length: 366
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Server: gunicorn/19.9.0
DEBUG:http.client:header: Access-Control-Allow-Origin: *
DEBUG:http.client:header: Access-Control-Allow-Credentials: true
DEBUG:urllib3.connectionpool:http://httpbin.org:80 "GET /get?foo=bar&baz=python HTTP/1.1" 200 366

回答 1

您需要在 httplib 层启用调试(requests → urllib3 → httplib)。

下面是几个函数,既可以开关此功能(..._on() 和 ..._off()),也可以临时启用它:

import logging
import contextlib
try:
    from http.client import HTTPConnection # py3
except ImportError:
    from httplib import HTTPConnection # py2

def debug_requests_on():
    '''Switches on logging of the requests module.'''
    HTTPConnection.debuglevel = 1

    logging.basicConfig()
    logging.getLogger().setLevel(logging.DEBUG)
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.DEBUG)
    requests_log.propagate = True

def debug_requests_off():
    '''Switches off logging of the requests module, might be some side-effects'''
    HTTPConnection.debuglevel = 0

    root_logger = logging.getLogger()
    root_logger.setLevel(logging.WARNING)
    root_logger.handlers = []
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.WARNING)
    requests_log.propagate = False

@contextlib.contextmanager
def debug_requests():
    '''Use with 'with'!'''
    debug_requests_on()
    yield
    debug_requests_off()

演示用途:

>>> requests.get('http://httpbin.org/')
<Response [200]>

>>> debug_requests_on()
>>> requests.get('http://httpbin.org/')
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 12150
send: 'GET / HTTP/1.1\r\nHost: httpbin.org\r\nConnection: keep-alive\r\nAccept-
Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.11.1\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Server: nginx
...
<Response [200]>

>>> debug_requests_off()
>>> requests.get('http://httpbin.org/')
<Response [200]>

>>> with debug_requests():
...     requests.get('http://httpbin.org/')
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org
...
<Response [200]>

您将看到包括HEADERS和DATA在内的REQUEST,以及带有HEADERS但没有DATA的RESPONSE。唯一缺少的是没有记录的response.body。

来源

You need to enable debugging at httplib level (requestsurllib3httplib).

Here’s some functions to both toggle (..._on() and ..._off()) or temporarily have it on:

import logging
import contextlib
try:
    from http.client import HTTPConnection # py3
except ImportError:
    from httplib import HTTPConnection # py2

def debug_requests_on():
    '''Switches on logging of the requests module.'''
    HTTPConnection.debuglevel = 1

    logging.basicConfig()
    logging.getLogger().setLevel(logging.DEBUG)
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.DEBUG)
    requests_log.propagate = True

def debug_requests_off():
    '''Switches off logging of the requests module, might be some side-effects'''
    HTTPConnection.debuglevel = 0

    root_logger = logging.getLogger()
    root_logger.setLevel(logging.WARNING)
    root_logger.handlers = []
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.WARNING)
    requests_log.propagate = False

@contextlib.contextmanager
def debug_requests():
    '''Use with 'with'!'''
    debug_requests_on()
    yield
    debug_requests_off()

Demo use:

>>> requests.get('http://httpbin.org/')
<Response [200]>

>>> debug_requests_on()
>>> requests.get('http://httpbin.org/')
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org
DEBUG:requests.packages.urllib3.connectionpool:"GET / HTTP/1.1" 200 12150
send: 'GET / HTTP/1.1\r\nHost: httpbin.org\r\nConnection: keep-alive\r\nAccept-
Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.11.1\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Server: nginx
...
<Response [200]>

>>> debug_requests_off()
>>> requests.get('http://httpbin.org/')
<Response [200]>

>>> with debug_requests():
...     requests.get('http://httpbin.org/')
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org
...
<Response [200]>

You will see the REQUEST, including HEADERS and DATA, and RESPONSE with HEADERS but without DATA. The only thing missing will be the response.body which is not logged.

Source


回答 2

对于使用python 3+的用户

import requests
import logging
import http.client

http.client.HTTPConnection.debuglevel = 1

logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

For those using python 3+

import requests
import logging
import http.client

http.client.HTTPConnection.debuglevel = 1

logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

回答 3

当我尝试让 Python 日志系统(import logging)发出底层调试日志消息时,我惊讶地发现,对于下面这条调用链:

requests --> urllib3 --> http.client.HTTPConnection

实际上只有 urllib3 使用了 Python 的 logging 系统:

  • requests 没有
  • http.client.HTTPConnection 没有
  • urllib3

当然,您可以通过如下设置,从 HTTPConnection 中提取调试消息:

HTTPConnection.debuglevel = 1

但是这些输出仅通过print语句发出。为了证明这一点,只需grep Python 3.7 client.py源代码并自己查看打印语句(感谢@Yohann):

curl https://raw.githubusercontent.com/python/cpython/3.7/Lib/http/client.py | grep -A1 debuglevel

大概通过某种方式重定向 stdout,可以把它硬塞进日志系统,进而捕获到日志文件之类的地方。

选择 'urllib3' 记录器,而不是 'requests.packages.urllib3'

要通过 Python 3 的 logging 系统捕获 urllib3 的调试信息,与互联网上的许多建议相反(正如 @MikeSmith 指出的那样),用下面这种方式去拦截不会有什么效果:

log = logging.getLogger('requests.packages.urllib3')

相反,您需要:

log = logging.getLogger('urllib3')

将 urllib3 的调试信息写入日志文件

下面这段代码使用 Python 的 logging 系统,把 urllib3 的工作过程记录到日志文件中:

import requests
import logging
from http.client import HTTPConnection  # py3

# log = logging.getLogger('requests.packages.urllib3')  # useless
log = logging.getLogger('urllib3')  # works

log.setLevel(logging.DEBUG)  # needed
fh = logging.FileHandler("requests.log")
log.addHandler(fh)

requests.get('http://httpbin.org/')

结果:

Starting new HTTP connection (1): httpbin.org:80
http://httpbin.org:80 "GET / HTTP/1.1" 200 3168

启用HTTPConnection.debuglevelprint()语句

如果您设定 HTTPConnection.debuglevel = 1

from http.client import HTTPConnection  # py3
HTTPConnection.debuglevel = 1
requests.get('http://httpbin.org/')

您将获得包含更多底层细节的 print 语句输出:

send: b'GET / HTTP/1.1\r\nHost: httpbin.org\r\nUser-Agent: python- 
requests/2.22.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Access-Control-Allow-Credentials header: Access-Control-Allow-Origin 
header: Content-Encoding header: Content-Type header: Date header: ...

请记住,此输出使用的是 print 而不是 Python 的 logging 系统,因此无法用传统的 logging 流处理程序或文件处理程序捕获(尽管可以通过重定向 stdout 将输出写入文件)。

结合以上两者-将所有可能的日志记录最大化到控制台

要想获得尽可能完整的日志输出,您只能接受控制台/stdout 形式的输出,写法如下:

import requests
import logging
from http.client import HTTPConnection  # py3

log = logging.getLogger('urllib3')
log.setLevel(logging.DEBUG)

# logging from urllib3 to console
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
log.addHandler(ch)

# print statements from `http.client.HTTPConnection` to console/stdout
HTTPConnection.debuglevel = 1

requests.get('http://httpbin.org/')

提供完整的输出范围:

Starting new HTTP connection (1): httpbin.org:80
send: b'GET / HTTP/1.1\r\nHost: httpbin.org\r\nUser-Agent: python-requests/2.22.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
http://httpbin.org:80 "GET / HTTP/1.1" 200 3168
header: Access-Control-Allow-Credentials header: Access-Control-Allow-Origin 
header: Content-Encoding header: ...

When trying to get the Python logging system (import logging) to emit low level debug log messages, it surprised me to discover that given:

requests --> urllib3 --> http.client.HTTPConnection

that only urllib3 actually uses the Python logging system:

  • requests no
  • http.client.HTTPConnection no
  • urllib3 yes

Sure, you can extract debug messages from HTTPConnection by setting:

HTTPConnection.debuglevel = 1

but these outputs are merely emitted via the print statement. To prove this, simply grep the Python 3.7 client.py source code and view the print statements yourself (thanks @Yohann):

curl https://raw.githubusercontent.com/python/cpython/3.7/Lib/http/client.py | grep -A1 debuglevel

Presumably redirecting stdout in some way might work to shoe-horn stdout into the logging system and potentially capture to e.g. a log file.

Choose the ‘urllib3‘ logger not ‘requests.packages.urllib3

To capture urllib3 debug information through the Python 3 logging system, contrary to much advice on the internet, and as @MikeSmith points out, you won’t have much luck intercepting:

log = logging.getLogger('requests.packages.urllib3')

instead you need to:

log = logging.getLogger('urllib3')

Debugging urllib3 to a log file

Here is some code which logs urllib3 workings to a log file using the Python logging system:

import requests
import logging
from http.client import HTTPConnection  # py3

# log = logging.getLogger('requests.packages.urllib3')  # useless
log = logging.getLogger('urllib3')  # works

log.setLevel(logging.DEBUG)  # needed
fh = logging.FileHandler("requests.log")
log.addHandler(fh)

requests.get('http://httpbin.org/')

the result:

Starting new HTTP connection (1): httpbin.org:80
http://httpbin.org:80 "GET / HTTP/1.1" 200 3168

Enabling the HTTPConnection.debuglevel print() statements

If you set HTTPConnection.debuglevel = 1

from http.client import HTTPConnection  # py3
HTTPConnection.debuglevel = 1
requests.get('http://httpbin.org/')

you’ll get the print statement output of additional juicy low level info:

send: b'GET / HTTP/1.1\r\nHost: httpbin.org\r\nUser-Agent: python- 
requests/2.22.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Access-Control-Allow-Credentials header: Access-Control-Allow-Origin 
header: Content-Encoding header: Content-Type header: Date header: ...

Remember this output uses print and not the Python logging system, and thus cannot be captured using a traditional logging stream or file handler (though it may be possible to capture output to a file by redirecting stdout).

Combine the two above – maximise all possible logging to console

To maximise all possible logging, you must settle for console/stdout output with this:

import requests
import logging
from http.client import HTTPConnection  # py3

log = logging.getLogger('urllib3')
log.setLevel(logging.DEBUG)

# logging from urllib3 to console
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
log.addHandler(ch)

# print statements from `http.client.HTTPConnection` to console/stdout
HTTPConnection.debuglevel = 1

requests.get('http://httpbin.org/')

giving the full range of output:

Starting new HTTP connection (1): httpbin.org:80
send: b'GET / HTTP/1.1\r\nHost: httpbin.org\r\nUser-Agent: python-requests/2.22.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
http://httpbin.org:80 "GET / HTTP/1.1" 200 3168
header: Access-Control-Allow-Credentials header: Access-Control-Allow-Origin 
header: Content-Encoding header: ...

回答 4

我正在使用python 3.4,请求2.19.1:

现在应获取的记录器是 “urllib3”(而不再是 “requests.packages.urllib3”)。即使不设置 http.client.HTTPConnection.debuglevel,基本日志记录仍会发生。

I’m using python 3.4, requests 2.19.1:

‘urllib3’ is the logger to get now (no longer ‘requests.packages.urllib3’). Basic logging will still happen without setting http.client.HTTPConnection.debuglevel


回答 5

无论是网络协议调试脚本,还是应用程序的某个子系统,都希望能看到确切的请求-响应对,包括最终生效的 URL、标头、负载和状态码。而在代码各处逐个为请求插桩通常不切实际。同时,出于性能考虑,建议使用单个(或少数几个专门的)requests.Session,因此下文假定遵循了该建议。

requests 支持所谓的事件钩子(截至 2.23,实际上只有 response 钩子)。它本质上是一个事件监听器,该事件在 requests.request 返回控制权之前发出。此时请求和响应都已完全确定,因此可以记录下来。

import logging

import requests


logger = logging.getLogger('httplogger')

def logRoundtrip(response, *args, **kwargs):
    extra = {'req': response.request, 'res': response}
    logger.debug('HTTP roundtrip', extra=extra)

session = requests.Session()
session.hooks['response'].append(logRoundtrip)

基本上,这就是记录会话的所有HTTP往返的方式。

格式化HTTP往返日志记录

为了让上面的日志记录有用,可以编写一个专门的日志格式化程序,让它理解日志记录上的 req 和 res 额外字段。它看起来可能像这样:

import textwrap

class HttpFormatter(logging.Formatter):   

    def _formatHeaders(self, d):
        return '\n'.join(f'{k}: {v}' for k, v in d.items())

    def formatMessage(self, record):
        result = super().formatMessage(record)
        if record.name == 'httplogger':
            result += textwrap.dedent('''
                ---------------- request ----------------
                {req.method} {req.url}
                {reqhdrs}

                {req.body}
                ---------------- response ----------------
                {res.status_code} {res.reason} {res.url}
                {reshdrs}

                {res.text}
            ''').format(
                req=record.req,
                res=record.res,
                reqhdrs=self._formatHeaders(record.req.headers),
                reshdrs=self._formatHeaders(record.res.headers),
            )

        return result

formatter = HttpFormatter('{asctime} {levelname} {name} {message}', style='{')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logging.basicConfig(level=logging.DEBUG, handlers=[handler])

现在,如果您用该 session 发起一些请求,例如:

session.get('https://httpbin.org/user-agent')
session.get('https://httpbin.org/status/200')

stderr 上的输出将如下所示。

2020-05-14 22:10:13,224 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): httpbin.org:443
2020-05-14 22:10:13,695 DEBUG urllib3.connectionpool https://httpbin.org:443 "GET /user-agent HTTP/1.1" 200 45
2020-05-14 22:10:13,698 DEBUG httplogger HTTP roundtrip
---------------- request ----------------
GET https://httpbin.org/user-agent
User-Agent: python-requests/2.23.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive

None
---------------- response ----------------
200 OK https://httpbin.org/user-agent
Date: Thu, 14 May 2020 20:10:13 GMT
Content-Type: application/json
Content-Length: 45
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

{
  "user-agent": "python-requests/2.23.0"
}


2020-05-14 22:10:13,814 DEBUG urllib3.connectionpool https://httpbin.org:443 "GET /status/200 HTTP/1.1" 200 0
2020-05-14 22:10:13,818 DEBUG httplogger HTTP roundtrip
---------------- request ----------------
GET https://httpbin.org/status/200
User-Agent: python-requests/2.23.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive

None
---------------- response ----------------
200 OK https://httpbin.org/status/200
Date: Thu, 14 May 2020 20:10:13 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

GUI方式

当请求很多时,一个简单的 UI 加上筛选记录的手段会非常方便。下面我将展示如何使用 Chronologer(我是其作者)。

首先,需要重写上面的钩子,使其产生 logging 在网络传输时可以序列化的记录。它看起来可能像这样:

def logRoundtrip(response, *args, **kwargs): 
    extra = {
        'req': {
            'method': response.request.method,
            'url': response.request.url,
            'headers': response.request.headers,
            'body': response.request.body,
        }, 
        'res': {
            'code': response.status_code,
            'reason': response.reason,
            'url': response.url,
            'headers': response.headers,
            'body': response.text
        },
    }
    logger.debug('HTTP roundtrip', extra=extra)

session = requests.Session()
session.hooks['response'].append(logRoundtrip)

其次,必须修改日志配置,改用 logging.handlers.HTTPHandler(Chronologer 能够理解它)。

import logging.handlers

chrono = logging.handlers.HTTPHandler(
  'localhost:8080', '/api/v1/record', 'POST', credentials=('logger', ''))
handlers = [logging.StreamHandler(), chrono]
logging.basicConfig(level=logging.DEBUG, handlers=handlers)

最后,运行Chronologer实例。例如使用Docker:

docker run --rm -it -p 8080:8080 -v /tmp/db \
    -e CHRONOLOGER_STORAGE_DSN=sqlite:////tmp/db/chrono.sqlite \
    -e CHRONOLOGER_SECRET=example \
    -e CHRONOLOGER_ROLES="basic-reader query-reader writer" \
    saaj/chronologer \
    python -m chronologer -e production serve -u www-data -g www-data -m

并再次运行请求:

session.get('https://httpbin.org/user-agent')
session.get('https://httpbin.org/status/200')

流处理程序将生成:

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): httpbin.org:443
DEBUG:urllib3.connectionpool:https://httpbin.org:443 "GET /user-agent HTTP/1.1" 200 45
DEBUG:httplogger:HTTP roundtrip
DEBUG:urllib3.connectionpool:https://httpbin.org:443 "GET /status/200 HTTP/1.1" 200 0
DEBUG:httplogger:HTTP roundtrip

现在,如果您打开 http://localhost:8080/(在基本认证弹窗中用户名填 “logger”,密码留空),然后单击 “Open” 按钮,您应该会看到类似以下内容:

When you have a script, or even a subsystem of an application, for network protocol debugging, you want to see exactly what the request-response pairs are, including effective URLs, headers, payloads and status. It’s typically impractical to instrument individual requests all over the place. At the same time there are performance considerations that suggest using a single (or a few specialised) requests.Session, so the following assumes that this suggestion is followed.

requests supports so-called event hooks (as of 2.23 there’s actually only the response hook). It’s basically an event listener, and the event is emitted before returning control from requests.request. At this moment both request and response are fully defined, hence they can be logged.

import logging

import requests


logger = logging.getLogger('httplogger')

def logRoundtrip(response, *args, **kwargs):
    extra = {'req': response.request, 'res': response}
    logger.debug('HTTP roundtrip', extra=extra)

session = requests.Session()
session.hooks['response'].append(logRoundtrip)

That’s basically how to log all HTTP round-trips of a session.

Formatting HTTP round-trip log records

For the logging above to be useful there can be a specialised logging formatter that understands the req and res extras on logging records. It can look like this:

import textwrap

class HttpFormatter(logging.Formatter):   

    def _formatHeaders(self, d):
        return '\n'.join(f'{k}: {v}' for k, v in d.items())

    def formatMessage(self, record):
        result = super().formatMessage(record)
        if record.name == 'httplogger':
            result += textwrap.dedent('''
                ---------------- request ----------------
                {req.method} {req.url}
                {reqhdrs}

                {req.body}
                ---------------- response ----------------
                {res.status_code} {res.reason} {res.url}
                {reshdrs}

                {res.text}
            ''').format(
                req=record.req,
                res=record.res,
                reqhdrs=self._formatHeaders(record.req.headers),
                reshdrs=self._formatHeaders(record.res.headers),
            )

        return result

formatter = HttpFormatter('{asctime} {levelname} {name} {message}', style='{')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logging.basicConfig(level=logging.DEBUG, handlers=[handler])

Now if you do some requests using the session, like:

session.get('https://httpbin.org/user-agent')
session.get('https://httpbin.org/status/200')

The output to stderr will look as follows.

2020-05-14 22:10:13,224 DEBUG urllib3.connectionpool Starting new HTTPS connection (1): httpbin.org:443
2020-05-14 22:10:13,695 DEBUG urllib3.connectionpool https://httpbin.org:443 "GET /user-agent HTTP/1.1" 200 45
2020-05-14 22:10:13,698 DEBUG httplogger HTTP roundtrip
---------------- request ----------------
GET https://httpbin.org/user-agent
User-Agent: python-requests/2.23.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive

None
---------------- response ----------------
200 OK https://httpbin.org/user-agent
Date: Thu, 14 May 2020 20:10:13 GMT
Content-Type: application/json
Content-Length: 45
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

{
  "user-agent": "python-requests/2.23.0"
}


2020-05-14 22:10:13,814 DEBUG urllib3.connectionpool https://httpbin.org:443 "GET /status/200 HTTP/1.1" 200 0
2020-05-14 22:10:13,818 DEBUG httplogger HTTP roundtrip
---------------- request ----------------
GET https://httpbin.org/status/200
User-Agent: python-requests/2.23.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive

None
---------------- response ----------------
200 OK https://httpbin.org/status/200
Date: Thu, 14 May 2020 20:10:13 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

A GUI way

When you have a lot of queries, having a simple UI and a way to filter records comes in handy. I’ll show how to use Chronologer for that (of which I’m the author).

First, the hook has to be rewritten to produce records that logging can serialise when sending them over the wire. It can look like this:

def logRoundtrip(response, *args, **kwargs): 
    extra = {
        'req': {
            'method': response.request.method,
            'url': response.request.url,
            'headers': response.request.headers,
            'body': response.request.body,
        }, 
        'res': {
            'code': response.status_code,
            'reason': response.reason,
            'url': response.url,
            'headers': response.headers,
            'body': response.text
        },
    }
    logger.debug('HTTP roundtrip', extra=extra)

session = requests.Session()
session.hooks['response'].append(logRoundtrip)

Second, logging configuration has to be adapted to use logging.handlers.HTTPHandler (which Chronologer understands).

import logging.handlers

chrono = logging.handlers.HTTPHandler(
  'localhost:8080', '/api/v1/record', 'POST', credentials=('logger', ''))
handlers = [logging.StreamHandler(), chrono]
logging.basicConfig(level=logging.DEBUG, handlers=handlers)

Finally, run a Chronologer instance, e.g. using Docker:

docker run --rm -it -p 8080:8080 -v /tmp/db \
    -e CHRONOLOGER_STORAGE_DSN=sqlite:////tmp/db/chrono.sqlite \
    -e CHRONOLOGER_SECRET=example \
    -e CHRONOLOGER_ROLES="basic-reader query-reader writer" \
    saaj/chronologer \
    python -m chronologer -e production serve -u www-data -g www-data -m

And run the requests again:

session.get('https://httpbin.org/user-agent')
session.get('https://httpbin.org/status/200')

The stream handler will produce:

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): httpbin.org:443
DEBUG:urllib3.connectionpool:https://httpbin.org:443 "GET /user-agent HTTP/1.1" 200 45
DEBUG:httplogger:HTTP roundtrip
DEBUG:urllib3.connectionpool:https://httpbin.org:443 "GET /status/200 HTTP/1.1" 200 0
DEBUG:httplogger:HTTP roundtrip

Now if you open http://localhost:8080/ (use “logger” for the username and an empty password in the basic auth popup) and click the “Open” button, you should see something like:


Django安装程序默认日志记录

问题:Django安装程序默认日志记录

我似乎无法弄清楚如何为我的 Django 安装设置“默认”记录器。我想在 settings.py 中使用 Django 1.3 新增的 LOGGING 设置。

我看过 Django Logging 文档中的示例,但在我看来,他们只为特定的记录器设置了处理程序。在他们的示例中,他们为名为 “django”、“django.request” 和 “myproject.custom” 的记录器设置了处理程序。

我要做的只是设置一个默认的 logging.handlers.RotatingFileHandler,默认处理所有记录器。也就是说,如果我在项目中的某个地方创建了一个新模块,比如 my_app_name.my_new_module,我应该能够这样写,并让所有日志都进入轮转日志文件。

# In file './my_app_name/my_new_module.py'
import logging
logger = logging.getLogger('my_app_name.my_new_module')
logger.debug('Hello logs!') # <-- This should get logged to my RotatingFileHandler that I setup in `settings.py`!

I can’t seem to figure out how to set up a “default” logger for my Django installation. I would like to use Django 1.3’s new LOGGING setting in settings.py.

I’ve looked at the Django Logging Doc’s example, but it looks to me like they only set up handlers which will do logging for particular loggers. In the case of their example they set up handlers for the loggers named ‘django’, ‘django.request’, and ‘myproject.custom’.

All I want to do is set up a default logging.handlers.RotatingFileHandler which will handle all loggers by default. I.e., if I make a new module somewhere in my project and it is denoted by something like my_app_name.my_new_module, I should be able to do this and have all logging go to the rotating file logs.

# In file './my_app_name/my_new_module.py'
import logging
logger = logging.getLogger('my_app_name.my_new_module')
logger.debug('Hello logs!') # <-- This should get logged to my RotatingFileHandler that I setup in `settings.py`!

回答 0

弄清楚了…

您可以使用空字符串 '' 来引用并设置“全部捕获”记录器。

举例来说,在以下设置中,所有日志事件都会保存到 logs/mylog.log,只有 django.request 的日志事件会保存到 logs/django_request.log。因为我的 django.request 记录器的 'propagate' 设置为 False,这些日志事件永远不会到达“全部捕获”记录器。

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
    },
    'handlers': {
        'default': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/mylog.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
        'request_handler': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/django_request.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
    },
    'loggers': {
        '': {
            'handlers': ['default'],
            'level': 'DEBUG',
            'propagate': True
        },
        'django.request': {
            'handlers': ['request_handler'],
            'level': 'DEBUG',
            'propagate': False
        },
    }
}

Figured it out…

You set the ‘catch all’ logger by referencing it with the empty string: ''.

As an example, in the following setup I have all log events getting saved to logs/mylog.log, with the exception of django.request log events, which are saved to logs/django_request.log. Because 'propagate' is set to False for my django.request logger, those log events will never reach the ‘catch all’ logger.

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
    },
    'handlers': {
        'default': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/mylog.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
        'request_handler': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/django_request.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
    },
    'loggers': {
        '': {
            'handlers': ['default'],
            'level': 'DEBUG',
            'propagate': True
        },
        'django.request': {
            'handlers': ['request_handler'],
            'level': 'DEBUG',
            'propagate': False
        },
    }
}
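
To see the catch-all behaviour, here is a minimal sketch; it assumes the LOGGING dict above has been defined, that a logs/ directory exists, and the module names are illustrative:

import logging
import logging.config

logging.config.dictConfig(LOGGING)  # LOGGING is the dict from above

# A logger with no configuration of its own propagates up to the '' entry:
logging.getLogger('my_app_name.my_new_module').debug('ends up in logs/mylog.log')

# django.request has propagate=False, so this stays in its own file:
logging.getLogger('django.request').debug('ends up in logs/django_request.log')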

回答 1

Chris,正如您在自己的回答中所说,定义默认记录器的一种方式是使用空字符串作为它的键。

但是,我认为预期的做法是在日志配置字典的 root 键下定义一个特殊的记录器。我在 Python 文档中找到了这段说明:

root – 这是 root 记录器的配置。其配置的处理方式与其他记录器相同,只是 propagate 设置不适用。

这是将您答案中的配置改为使用 root 键后的样子:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
    },
    'handlers': {
        'default': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/mylog.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
        'request_handler': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/django_request.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
    },
    'root': {
        'handlers': ['default'],
        'level': 'DEBUG'
    },
    'loggers': {
        'django.request': {
            'handlers': ['request_handler'],
            'level': 'DEBUG',
            'propagate': False
        },
    }
}

公平地说,我看不出这两种配置在行为上有任何区别。用空字符串作为键定义记录器看起来就是在修改根记录器,因为 logging.getLogger('') 返回的正是根记录器。

我更喜欢 'root' 而不是 '' 的唯一原因是,前者明确表示在修改根记录器。如果您好奇的话:两者都定义时 'root' 会覆盖 '',因为 root 条目是最后处理的。

As you said in your answer, Chris, one option to define a default logger is to use the empty string as its key.

However, I think the intended way is to define a special logger under the root key of the logging configuration dictionary. I found this in the Python documentation:

root – this will be the configuration for the root logger. Processing of the configuration will be as for any logger, except that the propagate setting will not be applicable.

Here’s the configuration from your answer changed to use the root key:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
        },
    },
    'handlers': {
        'default': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/mylog.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
        'request_handler': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/django_request.log',
            'maxBytes': 1024*1024*5,  # 5 MB
            'backupCount': 5,
            'formatter': 'standard',
        },
    },
    'root': {
        'handlers': ['default'],
        'level': 'DEBUG'
    },
    'loggers': {
        'django.request': {
            'handlers': ['request_handler'],
            'level': 'DEBUG',
            'propagate': False
        },
    }
}

To be fair, I can’t see any difference in behaviour between the two configurations. It appears that defining a logger with an empty string key will modify the root logger, because logging.getLogger('') will return the root logger.

The only reason I prefer 'root' over '' is that it is explicit about modifying the root logger. In case you were curious, 'root' overrides '' if you define both, just because the root entry is processed last.
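
A one-liner to convince yourself that the two spellings name the same object:

import logging

# getLogger() with no argument and getLogger('') both return the root logger
assert logging.getLogger() is logging.getLogger('') is logging.root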


回答 2

import logging
logger = logging.getLogger(__name__)

然后添加:

logging.basicConfig(
    level = logging.DEBUG,
    format = '%(name)s %(levelname)s %(message)s',
)

我们可以将格式更改为:

format = '"%(levelname)s:%(name)s:%(message)s"  ',

或者

format = '%(name)s %(asctime)s %(levelname)s %(message)s',
import logging
logger = logging.getLogger(__name__)

then add:

logging.basicConfig(
    level = logging.DEBUG,
    format = '%(name)s %(levelname)s %(message)s',
)

we may change the format to:

format = '"%(levelname)s:%(name)s:%(message)s"  ',

or

format = '%(name)s %(asctime)s %(levelname)s %(message)s',
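
Put together, a runnable sketch (the logger name comes from __name__, and the sample output line is illustrative):

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(name)s %(asctime)s %(levelname)s %(message)s',
)
logger = logging.getLogger(__name__)
logger.debug('configured via basicConfig')
# e.g.: __main__ 2020-05-14 22:10:13,695 DEBUG configured via basicConfig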

回答 3

我做了一个快速示例,检查当配置字典中同时引用 root 键和空字符串 '' 记录器时,究竟使用哪份配置。

import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'fmt1': {
            'format': '[FMT1] %(asctime)-15s %(message)s',
        },
        'fmt2': {
            'format': '[FMT2] %(asctime)-15s %(message)s',
        }
    },
    'handlers': {
        'console1': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'fmt1',
        },
        'console2': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'fmt2',
        },
    },
    # First config for root logger: console1 -> fmt1
    'root': {
        'handlers': ['console1'],
        'level': 'DEBUG',
        'propagate': True,
    },
    'loggers': {
        # Second config for root logger: console2 -> fmt2
        '': {
            'handlers': ['console2'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}

logging.config.dictConfig(LOGGING)

l1 = logging.getLogger()
l2 = logging.getLogger('')
root = logging.root

l1.info("l1")
l2.info("l2")
root.info("root logger")

打印以下结果:

[FMT1] 2018-12-18 17:24:47,691 l1
[FMT1] 2018-12-18 17:24:47,691 l2
[FMT1] 2018-12-18 17:24:47,691 root logger

这表明 root 键下的配置具有最高优先级。如果删除该块,结果为:

[FMT2] 2018-12-18 17:25:43,757 l1
[FMT2] 2018-12-18 17:25:43,757 l2
[FMT2] 2018-12-18 17:25:43,757 root logger

在这两种情况下,我都通过调试确认,所有三个记录器(l1、l2 和 root)引用的是同一个记录器实例,即根记录器。

希望这能帮助到像我一样、对配置根记录器的两种不同方式感到困惑的其他人。

I made a quick sample to check what configuration is used when both root key and the empty '' logger are referenced in config dict.

import logging.config

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'fmt1': {
            'format': '[FMT1] %(asctime)-15s %(message)s',
        },
        'fmt2': {
            'format': '[FMT2] %(asctime)-15s %(message)s',
        }
    },
    'handlers': {
        'console1': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'fmt1',
        },
        'console2': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'fmt2',
        },
    },
    # First config for root logger: console1 -> fmt1
    'root': {
        'handlers': ['console1'],
        'level': 'DEBUG',
        'propagate': True,
    },
    'loggers': {
        # Second config for root logger: console2 -> fmt2
        '': {
            'handlers': ['console2'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}

logging.config.dictConfig(LOGGING)

l1 = logging.getLogger()
l2 = logging.getLogger('')
root = logging.root

l1.info("l1")
l2.info("l2")
root.info("root logger")

Prints the following result:

[FMT1] 2018-12-18 17:24:47,691 l1
[FMT1] 2018-12-18 17:24:47,691 l2
[FMT1] 2018-12-18 17:24:47,691 root logger

indicating that the configuration under the root key has the highest priority. If that block is removed, the result is:

[FMT2] 2018-12-18 17:25:43,757 l1
[FMT2] 2018-12-18 17:25:43,757 l2
[FMT2] 2018-12-18 17:25:43,757 root logger

In both cases, I was able to debug and determine that all three loggers (l1, l2 and root) referenced the same logger instance, the root logger.

Hope that will help others who, like me, were confused by the 2 different ways to configure the root logger.


使用单个文件进行Python日志记录(函数名,文件名,行号)

问题:使用单个文件进行Python日志记录(函数名,文件名,行号)

我正在尝试了解一个应用程序的工作方式。为此,我在每个函数主体的第一行插入调试语句,目的是记录该函数的名称,以及发送日志消息的位置在代码中的行号。最后,由于这个应用程序包含许多文件,我想建立单个日志文件,以便更好地了解应用程序的控制流程。

这是我所知道的:

  1. 为了获得函数名,我可以使用 function_name.__name__,但我不想写出 function_name(这样我就可以把通用的 Log.info("Message") 快速复制粘贴到所有函数的主体中)。我知道在 C 语言中可以用 __func__ 宏做到,但我不确定 Python 中怎么做。

  2. 要获取文件名和行号,我看到(并且相信)我的应用程序在使用 Python 的 locals() 函数,但其语法我并不完全清楚,例如:options = "LOG.debug('%(flag)s : %(flag_get)s' % locals())。我尝试了类似 LOG.info("My message %s" % locals()) 的写法,产生的输出类似 {'self': <__main__.Class_name object at 0x22f8cd0>}。对此有什么建议吗?

  3. 我知道如何使用 logging 并为其添加处理程序以记录到文件,但我不确定能否用单个文件,按项目中函数调用的正确顺序记录所有日志消息。

我将不胜感激任何帮助。

谢谢!

I am trying to learn how an application works. To do this, I am inserting debug commands as the first line of each function’s body, with the goal of logging the function’s name as well as the line number (within the code) where I send a message to the log output. Finally, since this application comprises many files, I want to create a single log file so that I can better understand the control flow of the application.

Here is what I know:

  1. for getting the function name, I can use function_name.__name__ but I don’t want to use function_name (so that I could rapidly copy and paste a generic Log.info("Message") in the body of all functions). I know this could be done in C using the __func__ macro but I am not sure about Python.

  2. for getting the filename and line number, I have seen that (and I believe that) my application is using the Python locals() function, but in a syntax that I am not completely aware of, e.g.: options = "LOG.debug('%(flag)s : %(flag_get)s' % locals()) and I tried using it like LOG.info("My message %s" % locals()), which produces something like {'self': <__main__.Class_name object at 0x22f8cd0>}. Any input on this please?

  3. I know how to use logging and add a handler to it to log to a file, but I am not sure if a single file can be used to record all log messages in the correct order of function calls in the project.

I would greatly appreciate any help.

Thanks!


回答 0

您这里有几个关联不大的问题。

我先从最简单的 (3) 开始。使用 logging,您可以把所有调用汇聚到单个日志文件或其他输出目标中:它们会按照在进程中发生的顺序排列。

接下来是 (2)。locals() 返回当前作用域的字典。因此,在没有其他参数的方法中,作用域里会有 self,它包含对当前实例的引用。让您困惑的技巧是把字典用作 % 运算符右侧(RHS)来做字符串格式化:"%(foo)s" % bar 会被替换为 bar["foo"] 的值。

最后,您可以使用一些自省技巧(类似 pdb 所用的那些)来记录更多信息:

def autolog(message):
    "Automatically log the current function details."
    import inspect, logging
    # Get the previous frame in the stack, otherwise it would
    # be this function!!!
    func = inspect.currentframe().f_back.f_code
    # Dump the message + the name of this function to the log.
    logging.debug("%s: %s in %s:%i" % (
        message, 
        func.co_name, 
        func.co_filename, 
        func.co_firstlineno
    ))

这会记录传入的消息,以及(调用方)函数的名称、其定义所在的文件名,以及定义在该文件中的起始行号。更多细节请参阅 inspect – Inspect live objects。

正如我在前面的评论中提到的,您还可以随时插入 import pdb; pdb.set_trace() 这一行并重新运行程序,进入 pdb 的交互式调试提示符。这使您可以逐步执行代码,并按需检查数据。

You have a few marginally related questions here.

I’ll start with the easiest: (3). Using logging you can aggregate all calls to a single log file or other output target: they will be in the order they occurred in the process.

Next up: (2). locals() provides a dict of the current scope. Thus, in a method that has no other arguments, you have self in scope, which contains a reference to the current instance. The trick that is stumping you is string formatting with a dict as the RHS of the % operator: "%(foo)s" % bar will be replaced by whatever the value of bar["foo"] is.
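
For instance, a minimal sketch of that dict-based formatting (the names flag and flag_get come from the question; everything else is made up):

import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

def check(flag, flag_get):
    # locals() is {'flag': 'verbose', 'flag_get': True} here,
    # so %(flag)s and %(flag_get)s are looked up by key
    LOG.debug('%(flag)s : %(flag_get)s' % locals())

check('verbose', True)  # DEBUG:__main__:verbose : True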

Finally, you can use some introspection tricks, similar to those used by pdb that can log more info:

def autolog(message):
    "Automatically log the current function details."
    import inspect, logging
    # Get the previous frame in the stack, otherwise it would
    # be this function!!!
    func = inspect.currentframe().f_back.f_code
    # Dump the message + the name of this function to the log.
    logging.debug("%s: %s in %s:%i" % (
        message, 
        func.co_name, 
        func.co_filename, 
        func.co_firstlineno
    ))

This will log the message passed in, plus the (calling) function name, the filename in which its definition appears, and the line in that file where the definition starts. Have a look at inspect – Inspect live objects for more details.

As I mentioned in my comment earlier, you can also drop into a pdb interactive debugging prompt at any time by inserting the line import pdb; pdb.set_trace() in, and re-running your program. This enables you to step through the code, inspecting data as you choose.
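
As a hypothetical call site (the module path in the sample output is illustrative), the helper can be exercised like this:

import logging

logging.basicConfig(level=logging.DEBUG)

def connect():
    autolog('entering')  # the helper defined above

connect()
# DEBUG:root:entering: connect in /path/to/app.py:5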


回答 1

正确的做法是使用日志模块已经提供的 funcName 变量:

import logging
logger = logging.getLogger('root')
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
logging.basicConfig(format=FORMAT)
logger.setLevel(logging.DEBUG)

然后,在任何需要的地方,只需添加:

logger.debug('your message') 

我现在正在处理的脚本的示例输出:

[invRegex.py:150 -          handleRange() ] ['[A-Z]']
[invRegex.py:155 -     handleRepetition() ] [[<__main__.CharacterRangeEmitter object at 0x10ba03050>, '{', '1', '}']]
[invRegex.py:197 -          handleMacro() ] ['\\d']
[invRegex.py:155 -     handleRepetition() ] [[<__main__.CharacterRangeEmitter object at 0x10ba03950>, '{', '1', '}']]
[invRegex.py:210 -       handleSequence() ] [[<__main__.GroupEmitter object at 0x10b9fedd0>, <__main__.GroupEmitter object at 0x10ba03ad0>]]

The correct answer for this is to use the already provided funcName variable:

import logging
logger = logging.getLogger('root')
FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
logging.basicConfig(format=FORMAT)
logger.setLevel(logging.DEBUG)

Then anywhere you want, just add:

logger.debug('your message') 

Example output from a script I’m working on right now:

[invRegex.py:150 -          handleRange() ] ['[A-Z]']
[invRegex.py:155 -     handleRepetition() ] [[<__main__.CharacterRangeEmitter object at 0x10ba03050>, '{', '1', '}']]
[invRegex.py:197 -          handleMacro() ] ['\\d']
[invRegex.py:155 -     handleRepetition() ] [[<__main__.CharacterRangeEmitter object at 0x10ba03950>, '{', '1', '}']]
[invRegex.py:210 -       handleSequence() ] [[<__main__.GroupEmitter object at 0x10b9fedd0>, <__main__.GroupEmitter object at 0x10ba03ad0>]]

回答 2

funcName、filename 和 lineno 提供的是最后执行日志记录的那个函数的信息。

如果您有记录器的包装器(例如,单例记录器),则@synthesizerpatel的答案可能对您不起作用。

要找出调用堆栈中更上层的调用者,您可以这样做:

import logging
import inspect

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class MyLogger(metaclass=Singleton):
    logger = None

    def __init__(self):
        logging.basicConfig(
            level=logging.INFO,
            format="%(asctime)s - %(threadName)s - %(message)s",
            handlers=[
                logging.StreamHandler()
            ])

        self.logger = logging.getLogger(__name__ + '.logger')

    @staticmethod
    def __get_call_info():
        stack = inspect.stack()

        # stack[1] gives previous function ('info' in our case)
        # stack[2] gives before previous function and so on

        fn = stack[2][1]
        ln = stack[2][2]
        func = stack[2][3]

        return fn, func, ln

    def info(self, message, *args):
        message = "{} - {} at line {}: {}".format(*self.__get_call_info(), message)
        self.logger.info(message, *args)

funcName, filename and lineno provide information about the last function that did the logging.

If you have a wrapper around the logger (e.g. a singleton logger), then @synthesizerpatel’s answer might not work for you.

To find out the other callers in the call stack you can do:

import logging
import inspect

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class MyLogger(metaclass=Singleton):
    logger = None

    def __init__(self):
        logging.basicConfig(
            level=logging.INFO,
            format="%(asctime)s - %(threadName)s - %(message)s",
            handlers=[
                logging.StreamHandler()
            ])

        self.logger = logging.getLogger(__name__ + '.logger')

    @staticmethod
    def __get_call_info():
        stack = inspect.stack()

        # stack[1] gives previous function ('info' in our case)
        # stack[2] gives before previous function and so on

        fn = stack[2][1]
        ln = stack[2][2]
        func = stack[2][3]

        return fn, func, ln

    def info(self, message, *args):
        message = "{} - {} at line {}: {}".format(*self.__get_call_info(), message)
        self.logger.info(message, *args)
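
A usage sketch under the same assumptions (the file path and line number in the sample output are illustrative):

def do_work():
    MyLogger().info('working')

do_work()
# 2018-12-18 17:24:47,691 - MainThread - /path/to/app.py - do_work at line 2: working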