How do I break up this long line in Python?

Question: How do I break up this long line in Python?

How would you go about formatting a long line such as this? I’d like to get it to no more than 80 characters wide:

logger.info("Skipping {0} because its thumbnail was already in our system as {1}.".format(line[indexes['url']], video.title))

Is this my best option?

url = "Skipping {0} because its thumbnail was already in our system as {1}."
logger.info(url.format(line[indexes['url']], video.title))

Answer 0

That’s a start. It’s not a bad practice to define your longer strings outside of the code that uses them. It’s a way to separate data and behavior. Your first option is to join string literals together implicitly by making them adjacent to one another:

("This is the first line of my text, "
"which will be joined to a second.")

Or with line ending continuations, which is a little more fragile, as this works:

"This is the first line of my text, " \
"which will be joined to a second."

But this doesn’t:

"This is the first line of my text, " \ 
"which will be joined to a second."

See the difference? No? Well you won’t when it’s your code either.

The downside to implicit joining is that it only works with string literals, not with strings taken from variables, so things can get a little more hairy when you refactor. Also, you can only interpolate formatting on the combined string as a whole.

Alternatively, you can join explicitly using the concatenation operator (+):

("This is the first line of my text, " + 
"which will be joined to a second.")

Explicit is better than implicit, as the zen of python says, but this creates three strings instead of one, and uses twice as much memory: there are the two you have written, plus one which is the two of them joined together, so you have to know when to ignore the zen. The upside is you can apply formatting to any of the substrings separately on each line, or to the whole lot from outside the parentheses.
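
For example, a quick sketch of that difference (the line numbers inside the strings are just illustrative):

# Explicit concatenation: each substring can be formatted on its own...
("This is line {} of my text, ".format(1) +
 "which will be joined to line {}.".format(2))

# ...while an implicitly joined literal can only be formatted as a whole.
("This is line {} of my text, "
 "which will be joined to line {}.").format(1, 2)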

Finally, you can use triple-quoted strings:

"""This is the first line of my text
which will be joined to a second."""

This is often my favorite, though its behavior is slightly different as the newline and any leading whitespace on subsequent lines will show up in your final string. You can eliminate the newline with an escaping backslash.

"""This is the first line of my text \
which will be joined to a second."""

This has the same problem as the same technique above, in that correct code only differs from incorrect code by invisible whitespace.

Which one is “best” depends on your particular situation, but the answer is not simply aesthetic, but one of subtly different behaviors.


Answer 1

Consecutive string literals are joined by the compiler, and parenthesized expressions are considered to be a single line of code:

logger.info("Skipping {0} because it's thumbnail was "
  "already in our system as {1}.".format(line[indexes['url']],
  video.title))

Answer 2

Personally I dislike hanging open blocks, so I’d format it as:

logger.info(
    'Skipping {0} because its thumbnail was already in our system as {1}.'
    .format(line[indexes['url']], video.title)
)

In general I wouldn’t struggle too hard to make code fit exactly within an 80-column line. It’s worth keeping line length down to reasonable levels, but the hard 80-character limit is a thing of the past.


Answer 3

You can use the textwrap module to break it into multiple lines:

import textwrap
s = "ABCDEFGHIJKLIMNO"   # avoid shadowing the built-in str
print("\n".join(textwrap.wrap(s, 8)))

ABCDEFGH
IJKLIMNO

From the documentation:

textwrap.wrap(text[, width[, …]])
Wraps the single paragraph in text (a string) so every line is at most width characters long. Returns a list of output lines, without final newlines.

Optional keyword arguments correspond to the instance attributes of TextWrapper, documented below. width defaults to 70.

See the TextWrapper.wrap() method for additional details on how wrap() behaves.


Answer 4

For anyone who is also trying to call .format() on a long string, and is unable to use some of the most popular string wrapping techniques without breaking the subsequent .format( call, you can do str.format("", 1, 2) instead of "".format(1, 2). This lets you break the string with whatever technique you like. For example:

logger.info("Skipping {0} because its thumbnail was already in our system as {1}.".format(line[indexes['url']], video.title))

can be

logger.info(str.format(("Skipping {0} because its thumbnail was already "
+ "in our system as {1}"), line[indexes['url']], video.title))

Otherwise, the only possibility is using line ending continuations, which I personally am not a fan of.


How to make a Python script run like a service or daemon in Linux

Question: How to make a Python script run like a service or daemon in Linux

I have written a Python script that checks a certain e-mail address and passes new e-mails to an external program. How can I get this script to execute 24/7, such as turning it into a daemon or service in Linux? Would I also need a loop that never ends in the program, or can it be done by just having the code re-executed multiple times?


Answer 0

You have two options here.

  1. Make a proper cron job that calls your script. Cron is a common name for a GNU/Linux daemon that periodically launches scripts according to a schedule you set. You add your script into a crontab or place a symlink to it into a special directory and the daemon handles the job of launching it in the background. You can read more at Wikipedia. There is a variety of different cron daemons, but your GNU/Linux system should have it already installed.

  2. Use some kind of python approach (a library, for example) for your script to be able to daemonize itself. Yes, it will require a simple event loop (where your events are timer triggering, possibly, provided by sleep function).

I wouldn’t recommend choosing option 2, because you would in fact be duplicating cron’s functionality. The Linux system paradigm is to let multiple simple tools interact to solve your problems. Unless there are additional reasons why you should make a daemon (beyond triggering periodically), choose the other approach.

Also, if you use daemonize with a loop and a crash happens, no one will check the mail after that (as pointed out by Ivan Nevostruev in comments to this answer), whereas if the script is added as a cron job, it will simply trigger again.
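
For reference, a crontab entry for option 1 might look something like this (the script path and the five-minute schedule are just examples):

# run the mail-checking script every 5 minutes
*/5 * * * * /usr/bin/python /home/user/check_mail.py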


Answer 1

Here’s a nice class that is taken from here:

#!/usr/bin/env python

import sys, os, time, atexit
from signal import SIGTERM

class Daemon:
        """
        A generic daemon class.

        Usage: subclass the Daemon class and override the run() method
        """
        def __init__(self, pidfile, stdin='/dev/null', stdout='/dev/null', stderr='/dev/null'):
                self.stdin = stdin
                self.stdout = stdout
                self.stderr = stderr
                self.pidfile = pidfile

        def daemonize(self):
                """
                do the UNIX double-fork magic, see Stevens' "Advanced
                Programming in the UNIX Environment" for details (ISBN 0201563177)
                http://www.erlenstar.demon.co.uk/unix/faq_2.html#SEC16
                """
                try:
                        pid = os.fork()
                        if pid > 0:
                                # exit first parent
                                sys.exit(0)
                except OSError, e:
                        sys.stderr.write("fork #1 failed: %d (%s)\n" % (e.errno, e.strerror))
                        sys.exit(1)

                # decouple from parent environment
                os.chdir("/")
                os.setsid()
                os.umask(0)

                # do second fork
                try:
                        pid = os.fork()
                        if pid > 0:
                                # exit from second parent
                                sys.exit(0)
                except OSError, e:
                        sys.stderr.write("fork #2 failed: %d (%s)\n" % (e.errno, e.strerror))
                        sys.exit(1)

                # redirect standard file descriptors
                sys.stdout.flush()
                sys.stderr.flush()
                si = file(self.stdin, 'r')
                so = file(self.stdout, 'a+')
                se = file(self.stderr, 'a+', 0)
                os.dup2(si.fileno(), sys.stdin.fileno())
                os.dup2(so.fileno(), sys.stdout.fileno())
                os.dup2(se.fileno(), sys.stderr.fileno())

                # write pidfile
                atexit.register(self.delpid)
                pid = str(os.getpid())
                file(self.pidfile,'w+').write("%s\n" % pid)

        def delpid(self):
                os.remove(self.pidfile)

        def start(self):
                """
                Start the daemon
                """
                # Check for a pidfile to see if the daemon already runs
                try:
                        pf = file(self.pidfile,'r')
                        pid = int(pf.read().strip())
                        pf.close()
                except IOError:
                        pid = None

                if pid:
                        message = "pidfile %s already exist. Daemon already running?\n"
                        sys.stderr.write(message % self.pidfile)
                        sys.exit(1)

                # Start the daemon
                self.daemonize()
                self.run()

        def stop(self):
                """
                Stop the daemon
                """
                # Get the pid from the pidfile
                try:
                        pf = file(self.pidfile,'r')
                        pid = int(pf.read().strip())
                        pf.close()
                except IOError:
                        pid = None

                if not pid:
                        message = "pidfile %s does not exist. Daemon not running?\n"
                        sys.stderr.write(message % self.pidfile)
                        return # not an error in a restart

                # Try killing the daemon process       
                try:
                        while 1:
                                os.kill(pid, SIGTERM)
                                time.sleep(0.1)
                except OSError, err:
                        err = str(err)
                        if err.find("No such process") > 0:
                                if os.path.exists(self.pidfile):
                                        os.remove(self.pidfile)
                        else:
                                print str(err)
                                sys.exit(1)

        def restart(self):
                """
                Restart the daemon
                """
                self.stop()
                self.start()

        def run(self):
                """
                You should override this method when you subclass Daemon. It will be called after the process has been
                daemonized by start() or restart().
                """

Answer 2

You should use the python-daemon library; it takes care of everything.

From PyPI: Library to implement a well-behaved Unix daemon process.
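
A minimal sketch of how python-daemon is typically used (main_loop is just a placeholder for your own code):

import daemon

def main_loop():
    # check the mailbox, sleep, repeat
    pass

with daemon.DaemonContext():
    main_loop()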


Answer 3

You can use fork() to detach your script from the tty and have it continue to run, like so:

import os, sys

fpid = os.fork()
if fpid != 0:
    # This is the parent: the child keeps running as the daemon (its PID is fpid)
    sys.exit(0)

Of course you also need to implement an endless loop, like

import time

while True:
    do_your_check()
    time.sleep(5)

Hope this gets you started.


Answer 4

You can also make the Python script run as a service using a shell script. First create a shell script that runs the Python script, like this (scriptname is an arbitrary name):

#!/bin/sh
script='/home/.. full path to script'
/usr/bin/python $script &

Now create a file at /etc/init.d/scriptname:

#! /bin/sh

PATH=/bin:/usr/bin:/sbin:/usr/sbin
DAEMON=/home/.. path to shell script scriptname created to run python script
PIDFILE=/var/run/scriptname.pid

test -x $DAEMON || exit 0

. /lib/lsb/init-functions

case "$1" in
  start)
     log_daemon_msg "Starting feedparser"
     start_daemon -p $PIDFILE $DAEMON
     log_end_msg $?
   ;;
  stop)
     log_daemon_msg "Stopping feedparser"
     killproc -p $PIDFILE $DAEMON
     PID=`ps x |grep feed | head -1 | awk '{print $1}'`
     kill -9 $PID       
     log_end_msg $?
   ;;
  force-reload|restart)
     $0 stop
     $0 start
   ;;
  status)
     status_of_proc -p $PIDFILE $DAEMON atd && exit 0 || exit $?
   ;;
 *)
   echo "Usage: /etc/init.d/atd {start|stop|restart|force-reload|status}"
   exit 1
  ;;
esac

exit 0

Now you can start and stop your python script using the command /etc/init.d/scriptname start or stop.


Answer 5

A simple and supported version is Daemonize.

Install it from Python Package Index (PyPI):

$ pip install daemonize

and then use it like this:

...
import os, sys
from daemonize import Daemonize
...

def main():
    # your code here
    pass

if __name__ == '__main__':
    myname = os.path.basename(sys.argv[0])
    pidfile = '/tmp/%s' % myname       # any name
    daemon = Daemonize(app=myname, pid=pidfile, action=main)
    daemon.start()

Answer 6

cron is clearly a great choice for many purposes. However, it doesn’t create a service or daemon as you requested in the OP. cron just runs jobs periodically (meaning the job starts and stops), and no more often than once a minute. There are issues with cron: for example, if a prior instance of your script is still running the next time the cron schedule comes around and launches a new instance, is that OK? cron doesn’t handle dependencies; it just tries to start a job when the schedule says to.

If you find a situation where you truly need a daemon (a process that never stops running), take a look at supervisord. It provides a simple way to wrap a normal, non-daemonized script or program and make it operate like a daemon. This is a much better way than creating a native Python daemon.
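
For reference, a supervisord program section for such a script might look roughly like this (the program name and script path are just examples):

[program:mailchecker]
command=/usr/bin/python /home/user/check_mail.py
autostart=true
autorestart=true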


Answer 7

How about using the nohup command on Linux?

I use it to run my commands on my Bluehost server.

Please advise if I am wrong.


Answer 8

If you are using a terminal (ssh or something) and you want to keep a long-running script working after you log out from the terminal, you can try screen:

apt-get install screen

Create a virtual terminal inside it (named, say, abc): screen -dmS abc

Now connect to abc: screen -r abc

Now you can run your Python script: python keep_sending_mails.py

From now on, you can close your terminal directly; the Python script will keep running rather than being shut down, since keep_sending_mails.py is a child process of the virtual screen session rather than of the terminal (ssh).

If you want to go back and check how your script is doing, you can use screen -r abc again.


Answer 9

First, read up on mail aliases. A mail alias will do this inside the mail system without you having to fool around with daemons or services or anything of the sort.

You can write a simple script that will be executed by sendmail each time a mail message is sent to a specific mailbox.

See http://www.feep.net/sendmail/tutorial/intro/aliases.html

If you really want to write a needlessly complex server, you can do this.

nohup python myscript.py &

That’s all it takes. Your script simply loops and sleeps.

import time

def do_the_work():
    # one round of polling -- checking email, whatever.
    pass

while True:
    time.sleep(600)  # 10 min.
    try:
        do_the_work()
    except:
        pass

Answer 10

I would recommend this solution. You need to subclass it and override the run method.

import sys
import os
from signal import SIGTERM
from abc import ABCMeta, abstractmethod



class Daemon(object):
    __metaclass__ = ABCMeta


    def __init__(self, pidfile):
        self._pidfile = pidfile


    @abstractmethod
    def run(self):
        pass


    def _daemonize(self):
        # decouple threads
        pid = os.fork()

        # stop first thread
        if pid > 0:
            sys.exit(0)

        # write pid into a pidfile
        with open(self._pidfile, 'w') as f:
            print >> f, os.getpid()


    def start(self):
        # if daemon is started throw an error
        if os.path.exists(self._pidfile):
            raise Exception("Daemon is already started")

        # create and switch to daemon thread
        self._daemonize()

        # run the body of the daemon
        self.run()


    def stop(self):
        # check the pidfile existing
        if os.path.exists(self._pidfile):
            # read pid from the file
            with open(self._pidfile, 'r') as f:
                pid = int(f.read().strip())

            # remove the pidfile
            os.remove(self._pidfile)

            # kill daemon
            os.kill(pid, SIGTERM)

        else:
            raise Exception("Daemon is not started")


    def restart(self):
        self.stop()
        self.start()

Answer 11

To create something that runs like a service, you can use the following approach.

The first thing you must do is install the Cement framework: Cement is a CLI framework that you can deploy your application on.

The command-line interface of the app:

interface.py

from cement.core.foundation import CementApp
from cement.core.controller import CementBaseController, expose
from YourApp import yourApp

class MyBaseController(CementBaseController):
    class Meta:
        label = 'base'
        description = "your application description"
        arguments = [
            (['-r', '--run'],
             dict(action='store_true', help='Run your application')),
            (['-v', '--version'],
             dict(action='version', version="Your app version")),
            (['-s', '--stop'],
             dict(action='store_true', help="Stop your application")),
        ]

    @expose(hide=True)
    def default(self):
        if self.app.pargs.run:
            # Start running your app from here
            self.worker = yourApp()
        if self.app.pargs.stop:
            # Stop your application
            self.worker.stop()

class App(CementApp):
    class Meta:
        label = 'Uptime'
        base_controller = 'base'
        handlers = [MyBaseController]

with App() as app:
    app.run()

The YourApp.py class:

import threading

class yourApp:
    def __init__(self):
        thread = threading.Thread(target=self.start, args=())
        thread.daemon = True
        thread.start()

    def start(self):
        # Do everything you want here
        pass

    def stop(self):
        # Do some things to stop your application
        pass

Keep in mind that your app must run in a thread in order to be daemonized.

To run the app, just do this on the command line:

python interface.py --help


Answer 12

Use whatever service manager your system offers – for example under Ubuntu use upstart. This will handle all the details for you such as start on boot, restart on crash, etc.
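
For example, an upstart job for such a script might look roughly like this (the file name, e.g. /etc/init/mailchecker.conf, and the script path are just examples):

description "mail-checking script"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec /usr/bin/python /home/user/check_mail.py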


Answer 13

Assuming that you really want your loop to run 24/7 as a background service:

For a solution that doesn’t involve injecting your code with libraries, you can simply create a service template, since you are using Linux:

[Unit]
Description = <Your service description here>
# Assuming you want to start after network interfaces are made available
After = network.target

[Service]
Type = simple
ExecStart = python <Path of the script you want to run>
# User and group to run the script as
User = <user>
Group = <group>
# Restart when there are errors
Restart = on-failure
SyslogIdentifier = <Name of logs for the service>
RestartSec = 5
TimeoutStartSec = infinity

[Install]
# Make it accessible to other users
WantedBy = multi-user.target

Place that file in your daemon service folder (usually /etc/systemd/system/), in a *.service file, and install it using the following systemctl commands (will likely require sudo privileges):

systemctl enable <service file name without .service extension>

systemctl daemon-reload

systemctl start <service file name without .service extension>

You can then check that your service is running by using the command:

systemctl | grep running

How do I retrieve the inserted id after inserting a row in SQLite using Python?

Question: How do I retrieve the inserted id after inserting a row in SQLite using Python?

How to retrieve inserted id after inserting row in SQLite using Python? I have table like this:

id INT AUTOINCREMENT PRIMARY KEY,
username VARCHAR(50),
password VARCHAR(50)

I insert a new row with example data username="test" and password="test". How do I retrieve the generated id in a transaction safe way? This is for a website solution, where two people may be inserting data at the same time. I know I can get the last read row, but I don’t think that is transaction safe. Can somebody give me some advice?


Answer 0

You could use cursor.lastrowid (see “Optional DB API Extensions”):

connection=sqlite3.connect(':memory:')
cursor=connection.cursor()
cursor.execute('''CREATE TABLE foo (id integer primary key autoincrement ,
                                    username varchar(50),
                                    password varchar(50))''')
cursor.execute('INSERT INTO foo (username,password) VALUES (?,?)',
               ('test','test'))
print(cursor.lastrowid)
# 1

If two people are inserting at the same time, as long as they are using different cursors, cursor.lastrowid will return the id for the last row that cursor inserted:

cursor.execute('INSERT INTO foo (username,password) VALUES (?,?)',
               ('blah','blah'))

cursor2=connection.cursor()
cursor2.execute('INSERT INTO foo (username,password) VALUES (?,?)',
               ('blah','blah'))

print(cursor2.lastrowid)        
# 3
print(cursor.lastrowid)
# 2

cursor.execute('INSERT INTO foo (id,username,password) VALUES (?,?,?)',
               (100,'blah','blah'))
print(cursor.lastrowid)
# 100

Note that lastrowid returns None when you insert more than one row at a time with executemany:

cursor.executemany('INSERT INTO foo (username,password) VALUES (?,?)',
               (('baz','bar'),('bing','bop')))
print(cursor.lastrowid)
# None
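
If you do need an id after an executemany call, one option is to ask SQLite itself for the rowid of the most recent insert on the connection (a quick sketch):

cursor.execute('SELECT last_insert_rowid()')
print(cursor.fetchone()[0])
# rowid of the last row inserted on this connection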

How to ignore deprecation warnings in Python

Question: How to ignore deprecation warnings in Python

I keep getting this :

DeprecationWarning: integer argument expected, got float

How do I make this message go away? Is there a way to avoid warnings in Python?


Answer 0

From the documentation of the warnings module:

 #!/usr/bin/env python -W ignore::DeprecationWarning

If you’re on Windows: pass -W ignore::DeprecationWarning as an argument to Python. Better, though, to resolve the issue by casting to int.

(Note that in Python 3.2, deprecation warnings are ignored by default.)
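
For example, the flag can also be passed directly when launching your script from the command line (your_script.py is just a placeholder name):

python -W ignore::DeprecationWarning your_script.py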


Answer 1

You should just fix your code, but just in case:

import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning) 

Answer 2

I had these:

/home/eddyp/virtualenv/lib/python2.6/site-packages/Twisted-8.2.0-py2.6-linux-x86_64.egg/twisted/persisted/sob.py:12:
DeprecationWarning: the md5 module is deprecated; use hashlib instead import os, md5, sys

/home/eddyp/virtualenv/lib/python2.6/site-packages/Twisted-8.2.0-py2.6-linux-x86_64.egg/twisted/python/filepath.py:12:
DeprecationWarning: the sha module is deprecated; use the hashlib module instead import sha

Fixed it with:

import warnings

with warnings.catch_warnings():
    warnings.filterwarnings("ignore",category=DeprecationWarning)
    import md5, sha

yourcode()

Now you still get all the other DeprecationWarnings, but not the ones caused by:

import md5, sha

Answer 3

I found the cleanest way to do this (especially on Windows) is by adding the following to C:\Python26\Lib\site-packages\sitecustomize.py:

import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)

Note that I had to create this file. Of course, change the path to python if yours is different.


Answer 4

None of these answers worked for me so I will post my way to solve this. I use the following at the beginning of my main.py script and it works fine.


Use the following as it is (copy-paste it):

def warn(*args, **kwargs):
    pass
import warnings
warnings.warn = warn

Example:

import "blabla"
import "blabla"

def warn(*args, **kwargs):
    pass
import warnings
warnings.warn = warn

# more code here...
# more code here...


Answer 5

Pass the correct arguments? :P

On a more serious note, you can pass the argument -Wi::DeprecationWarning on the command line to the interpreter to ignore the deprecation warnings.


Answer 6

Convert the argument to int. It’s as simple as

int(argument)

Answer 7

When you want to ignore warnings only in functions you can do the following.

import warnings
from functools import wraps


def ignore_warnings(f):
    @wraps(f)
    def inner(*args, **kwargs):
        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter("ignore")
            response = f(*args, **kwargs)
        return response
    return inner

@ignore_warnings
def foo(arg1, arg2):
    # write your code here without warnings
    pass

@ignore_warnings
def foo2(arg1, arg2, arg3):
    # write your code here without warnings
    pass

Just add the @ignore_warnings decorator to any function whose warnings you want to ignore.


Answer 8

Docker Solution

  • Disable ALL warnings before running the python application
  • You can disable your dockerized tests as well

ENV PYTHONWARNINGS="ignore::DeprecationWarning"
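
For context, a minimal Dockerfile fragment showing where that variable might go (image tag, file names, and paths are just examples):

FROM python:3.8-slim
ENV PYTHONWARNINGS="ignore::DeprecationWarning"
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]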

Answer 9

Try the code below if you’re using Python 3:

import sys

if not sys.warnoptions:
    import warnings
    warnings.simplefilter("ignore")

or try this…

import warnings

def fxn():
    warnings.warn("deprecated", DeprecationWarning)

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    fxn()

or try this…

import warnings
warnings.filterwarnings("ignore")

Answer 10

Python 3

Just write the lines below (easy to remember) before writing your code:

import warnings

warnings.filterwarnings("ignore")

Answer 11

If you know what you are doing, another way is to simply find the file that produces the warning (its path is shown in the warning message) and comment out the lines that generate the warnings.


Answer 12

For Python 3, just write the code below to ignore all warnings.

from warnings import filterwarnings
filterwarnings("ignore")

Answer 13

Not to beat you up about it, but you are being warned that what you are doing will likely stop working when you next upgrade Python. Convert to int and be done with it.

BTW, you can also write your own warnings handler; just assign a function that does nothing. See: How to redirect python warnings to a custom stream?


Django: Redirect to the previous page after login

Question: Django: Redirect to the previous page after login

I’m trying to build a simple website with login functionality very similar to the one here on SO. The user should be able to browse the site as an anonymous user and there will be a login link on every page. When clicking on the login link the user will be taken to the login form. After a successful login the user should be taken back to the page from where he clicked the login link in the first place. I’m guessing that I have to somehow pass the url of the current page to the view that handles the login form but I can’t really get it to work.

EDIT: I figured it out. I linked to the login form by passing the current page as a GET parameter and then used ‘next’ to redirect to that page. Thanks!

EDIT 2: My explanation did not seem to be clear, so as requested here is my code. Let’s say we are on a page foo.html and we are not logged in. Now we would like to have a link on foo.html that links to login.html. There we can log in and are then redirected back to foo.html. The link on foo.html looks like this:

      <a href='/login/?next={{ request.path }}'>Login</a> 

Now I wrote a custom login view that looks somewhat like this:

def login_view(request):
   redirect_to = request.REQUEST.get('next', '')
   if request.method=='POST':
      #create login form...
      if valid login credentials have been entered:
         return HttpResponseRedirect(redirect_to)  
   #...
   return render_to_response('login.html', locals())

And the important line in login.html:

<form method="post" action="./?next={{ redirect_to }}">

So yeah, that’s pretty much it; hope that makes it clear.


Answer 0

You do not need to make an extra view for this, the functionality is already built in.

First, each page with a login link needs to know the current path. The easiest way is to add the request context processor to settings.py (the first 4 are defaults); then the request object will be available in each request:

settings.py:

TEMPLATE_CONTEXT_PROCESSORS = (
    "django.core.context_processors.auth",
    "django.core.context_processors.debug",
    "django.core.context_processors.i18n",
    "django.core.context_processors.media",
    "django.core.context_processors.request",
)

Then, in the template where you want the Login link, add:

base.html:

<a href="{% url django.contrib.auth.views.login %}?next={{request.path}}">Login</a>

This will add a GET argument to the login page that points back to the current page.

The login template can then be as simple as this:

registration/login.html:

{% block content %}
<form method="post" action="">
  {{form.as_p}}
<input type="submit" value="Login">
</form>
{% endblock %}
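
For completeness, a rough sketch of how the built-in login view might be wired up in urls.py for a Django 1.x project of that era (the URL name and template path are assumptions):

from django.conf.urls import url
from django.contrib.auth import views as auth_views

urlpatterns = [
    url(r'^login/$', auth_views.login,
        {'template_name': 'registration/login.html'}, name='login'),
]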

What is the relationship between virtualenv and pyenv?

Question: What is the relationship between virtualenv and pyenv?

I recently learned how to use virtualenv and virtualenvwrapper in my workflow, but I’ve seen pyenv mentioned in a few guides and I can’t seem to get an understanding of what pyenv is and how it is different from or similar to virtualenv. Is pyenv a better/newer replacement for virtualenv or a complementary tool? If the latter, what does it do differently, and how do the two (and virtualenvwrapper, if applicable) work together?


Answer 0

Pyenv and virtualenv are very different tools that work in different ways to do different things:

  • Pyenv is a bash extension – will not work on Windows – that intercepts your calls to python, pip, etc., to direct them to one of several of the system python tool-chains. So you always have all the libraries that you have installed in the selected python version available – as such it is good for users who have to switch between different versions of python.

  • VirtualEnv is pure Python, so it works everywhere. It makes a copy of python and pip (optionally of a specific version) local to the activated environment, which may or may not include links to the current system tool-chain; if it does not, you can install just a known subset of libraries into that environment. As such it is almost certainly much better for testing and deployment, as you know exactly which libraries, at which versions, are used, and a global change will not impact your module.

venv python > 3.3

Note that from Python 3.3 onward there is a built in implementation of VirtualEnv called venv (with, on some installations a wrapper called pyvenv – this wrapper is deprecated in Python 3.6), which should probably be used in preference. To avoid possible issues with the wrapper it is often a good idea to use it directly by using /path/to/python3 -m venv desired/env/path or you can use the excellent py python selector on windows with py -3 -m venv desired/env/path. It will create the directory specified with desired/env/path configure and populate it appropriately. In general it is very much like using VirtualEnv.

Additional Tools

There are a number of tools that it is worth mentioning, and considering, as they can help with the use of one or more of the above:

  • VirtualEnvWrapper Manage and simplify the use and management of VirtualEnv – Cross Platform.
  • pyenv-virtualenv, installed by pyenv-installer, which gives PyEnv tools for managing and interfacing to VirtualEnv – with this you can have a base installation that includes more than one version of python and create isolated environments within each of them – Linux/OS-X. Suggested by Johann Visagie
  • PyInstaller can take your python code, possibly developed & tested under VirtualEnv, and bundle it up so that it can run on platforms that do not have your version of python installed. Note that it is not a cross compiler; you will need a Windows (virtual) machine to build Windows installs, etc., but it can be handy even where you can be sure that python will be installed but cannot be sure that the version of python and all the libraries will be compatible with your code.

Answer 1

virtualenv allows you to create a custom Python installation e.g. in a subdirectory of your project. Each of your projects can thus have their own python (or even several) under their respective virtualenv. It is perfectly fine for some/all virtualenvs to even have the same version of python (e.g. 2.7.16) without conflict – they live separately and don’t know of each other. If you want to use any of those pythons, you have to activate it (by running a script which will temporarily modify your PATH to ensure that that virtualenv’s bin/ directory comes first). From that point, calling python (or pip etc.) will invoke that virtualenv’s version until you deactivate it (which restores the PATH).

pyenv operates on a wider scale than virtualenv – it holds a register of Python installations (and can be used to install new ones) and allows you to configure which version of Python to run when you use the python command. Sounds similar but practical use is a bit different. It works by prepending its shim python script to your PATH (permanently) and then deciding which “real” python to invoke. You can even configure pyenv to call into one of your virtualenv pythons (by using the pyenv-virtualenv plugin). Python versions you install using pyenv go into its $(pyenv root)/versions/ directory (by default, pyenv root is ~/.pyenv) so are more ‘global’ than virtualenv. Ordinarily, you can’t duplicate Python versions installed through pyenv, at least doing so is not the main idea.

To create a virtualenv with a specific Python version, you need to have that version somewhere in your system (whether it’s on the PATH or not) and essentially clone it into your newly created virtualenv. Of course, one way to obtain a particular version is to install it via pyenv. Once that’s done, individual virtualenvs are free to diverge by having different modules (or versions thereof) installed into them.

In short:

  • virtualenv allows you to create local, independent python installations by cloning from existing ones
  • pyenv allows you to install different versions of python simultaneously (either system-wide or just for the local user) and then choose which of the multitude of pythons to run at any given time (including those created by virtualenv or Anaconda)
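
As a rough illustration of how the two can be combined (the version number and project name are only examples, and the last two commands assume the pyenv-virtualenv plugin mentioned in the previous answer is installed):

pyenv install 3.8.2                  # install an interpreter managed by pyenv
pyenv virtualenv 3.8.2 myproject     # create a virtualenv based on it
pyenv activate myproject             # use it; 'pyenv deactivate' to leave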

What is the simplest way to get the current GMT time in Unix timestamp format?

Question: What is the simplest way to get the current GMT time in Unix timestamp format?

Python provides different packages (datetime, time, calendar), as can be seen here, in order to deal with time. I made a big mistake by using the following to get the current GMT time: time.mktime(datetime.datetime.utcnow().timetuple())

What is a simple way to get current GMT time in Unix timestamp?


Answer 0

I would use time.time() to get a timestamp in seconds since the epoch.

import time

time.time()

Output:

1369550494.884832

For the standard CPython implementation on most platforms this will return a UTC value.
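
As a quick sanity check (my own sketch, not part of the original answer), you can compare time.time() with an explicitly UTC-based timestamp; on a correctly configured system the two should agree to within a fraction of a second:

import time
from datetime import datetime, timezone

t1 = time.time()                             # seconds since the Unix epoch
t2 = datetime.now(timezone.utc).timestamp()  # explicitly UTC-based timestamp
print(t1, t2)
print(abs(t1 - t2) < 1.0)                    # expected: True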


回答 1

import time

int(time.time()) 

输出:

1521462189
import time

int(time.time()) 

Output:

1521462189

回答 2

这有帮助吗?

from datetime import datetime
import calendar

d = datetime.utcnow()
unixtime = calendar.timegm(d.utctimetuple())
print unixtime

如何将Python UTC日期时间对象转换为UNIX时间戳

Does this help?

from datetime import datetime
import calendar

d = datetime.utcnow()
unixtime = calendar.timegm(d.utctimetuple())
print unixtime

How to convert Python UTC datetime object to UNIX timestamp


回答 3

或者只是简单地使用datetime标准模块

In [2]: from datetime import timezone, datetime
   ...: int(datetime.now(tz=timezone.utc).timestamp() * 1000)
   ...: 
Out[2]: 1514901741720

您可以根据所需的分辨率进行截断或乘法。此示例输出的是毫秒。

如果您想要标准的 Unix 时间戳(以秒为单位),请去掉 * 1000。

Or just simply using the datetime standard module

In [2]: from datetime import timezone, datetime
   ...: int(datetime.now(tz=timezone.utc).timestamp() * 1000)
   ...: 
Out[2]: 1514901741720

You can truncate or multiply depending on the resolution you want. This example is outputting millis.

If you want a proper Unix timestamp (in seconds) remove the * 1000
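
For completeness, here is a small sketch of my own (not part of the original answer) showing both resolutions side by side:

from datetime import datetime, timezone

now = datetime.now(tz=timezone.utc)
print(int(now.timestamp()))         # Unix timestamp in whole seconds
print(int(now.timestamp() * 1000))  # the same instant in milliseconds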


回答 4

python2 和 python3

最好使用时间模块

import time
int(time.time())

1573708436

您还可以使用datetime模块,但是当您使用strftime(’%s’)时,strftime会将时间转换为本地时间!

python2

from datetime import datetime
datetime.utcnow().strftime('%s')

python3

from datetime import datetime
datetime.utcnow().timestamp()

python2 and python3

It is good to use the time module:

import time
int(time.time())

1573708436

You can also use the datetime module, but when you use strftime('%s'), strftime converts the time to your local time!

python2

from datetime import datetime
datetime.utcnow().strftime('%s')

python3

from datetime import datetime
datetime.utcnow().timestamp()
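
To see the pitfall this answer warns about, here is a hedged sketch of my own; it assumes a platform whose C library supports the non-standard %s directive (e.g. Linux) and a local timezone that is not UTC:

import time
from datetime import datetime

# strftime('%s') formats the time tuple as if it were *local* time, so feeding
# it utcnow() only matches time.time() when the local timezone happens to be UTC.
print(int(time.time()))                  # the real Unix timestamp
print(datetime.now().strftime('%s'))     # local time with %s: should match the line above (platform-dependent)
print(datetime.utcnow().strftime('%s'))  # shifted by your UTC offset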

回答 5

我喜欢这种方法:

import datetime, time

dts = datetime.datetime.utcnow()
epochtime = round(time.mktime(dts.timetuple()) + dts.microsecond/1e6)

此处发布的其他方法不能保证在所有平台上都具有UTC,或者只能报告整秒。如果您想获得完整的分辨率,则可以达到微秒级。

I like this method:

import datetime, time

dts = datetime.datetime.utcnow()
epochtime = round(time.mktime(dts.timetuple()) + dts.microsecond/1e6)

The other methods posted here are either not guaranteed to give you UTC on all platforms or only report whole seconds. If you want full resolution, this works, to the micro-second.


回答 6

from datetime import datetime as dt
dt.utcnow().strftime("%s")

输出:

1544524990
from datetime import datetime as dt
dt.utcnow().strftime("%s")

Output:

1544524990

回答 7

#First Example:
from datetime import datetime, timezone    
timstamp1 =int(datetime.now(tz=timezone.utc).timestamp() * 1000)
print(timstamp1)

输出:1572878043380

#second example:
import time
timstamp2 =int(time.time())
print(timstamp2)

输出:1572878043

  • 在这里,我们可以看到第一个示例比第二个示例提供了更准确的时间。
  • 在这里,我正在使用第一个。
#First Example:
from datetime import datetime, timezone    
timstamp1 =int(datetime.now(tz=timezone.utc).timestamp() * 1000)
print(timstamp1)

Output: 1572878043380

#second example:
import time
timstamp2 =int(time.time())
print(timstamp2)

Output: 1572878043

  • Here, we can see the first example gives more accurate time than second one.
  • Here I am using the first one.

回答 8

至少在python3中,这有效:

>>> datetime.strftime(datetime.utcnow(), "%s")
'1587503279'

At least in python3, this works:

>>> datetime.strftime(datetime.utcnow(), "%s")
'1587503279'

Python-检查Word是否在字符串中

问题:Python-检查Word是否在字符串中

我正在使用Python v2,并且正在尝试找出是否可以判断字符串中是否包含单词。

我找到了一些用 .find 来判断单词是否在字符串中的资料,但有没有办法把它用在 if 语句中?我想要类似下面的代码:

if string.find(word):
    print 'success'

谢谢你的帮助。

I’m working with Python v2, and I’m trying to find out if you can tell if a word is in a string.

I have found some information about identifying if the word is in the string – using .find, but is there a way to do an IF statement. I would like to have something like the following:

if string.find(word):
    print 'success'

Thanks for any help.


回答 0

这样写有什么问题呢:

if word in mystring: 
   print 'success'

What is wrong with:

if word in mystring: 
   print 'success'
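
One caveat worth adding (my note, not part of the original answer): the in operator checks for a substring, not a whole word, so 'word' in 'sword fight' is True. If whole-word matching is what you need, a regular expression with word boundaries is one option; contains_word below is a hypothetical helper for illustration:

import re

def contains_word(text, word):
    # \b marks a word boundary, so 'word' will not match inside 'sword'
    return re.search(r'\b' + re.escape(word) + r'\b', text) is not None

print('word' in 'sword fight')               # True  - substring match
print(contains_word('sword fight', 'word'))  # False - whole-word match
print(contains_word('a word here', 'word'))  # True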

matplotlib中的日期刻度和旋转

问题:matplotlib中的日期刻度和旋转

我在尝试让 matplotlib 中的日期刻度旋转时遇到了问题。下面是一个小示例程序。如果我在最后尝试旋转刻度,刻度不会被旋转;如果我按注释“crashes”下所示的位置旋转刻度,matplotlib 就会崩溃。

仅当 x 值为日期时才会发生这种情况。如果在调用 avail_plot 时用变量 t 替换变量 dates,那么 xticks(rotation=70) 调用在 avail_plot 内部就能正常工作。

有任何想法吗?

import numpy as np
import matplotlib.pyplot as plt
import datetime as dt

def avail_plot(ax, x, y, label, lcolor):
    ax.plot(x,y,'b')
    ax.set_ylabel(label, rotation='horizontal', color=lcolor)
    ax.get_yaxis().set_ticks([])

    #crashes
    #plt.xticks(rotation=70)

    ax2 = ax.twinx()
    ax2.plot(x, [1 for a in y], 'b')
    ax2.get_yaxis().set_ticks([])
    ax2.set_ylabel('testing')

f, axs = plt.subplots(2, sharex=True, sharey=True)
t = np.arange(0.01, 5, 1)
s1 = np.exp(t)
start = dt.datetime.now()
dates=[]
for val in t:
    next_val = start + dt.timedelta(0,val)
    dates.append(next_val)
    start = next_val

avail_plot(axs[0], dates, s1, 'testing', 'green')
avail_plot(axs[1], dates, s1, 'testing2', 'red')
plt.subplots_adjust(hspace=0, bottom=0.3)
plt.yticks([0.5,],("",""))
#doesn't crash, but does not rotate the xticks
#plt.xticks(rotation=70)
plt.show()

I am having an issue trying to get my date ticks rotated in matplotlib. A small sample program is below. If I try to rotate the ticks at the end, the ticks do not get rotated. If I try to rotate the ticks as shown under the comment ‘crashes’, then matplotlib crashes.

This only happens if the x-values are dates. If I replace the variable dates with the variable t in the call to avail_plot, the xticks(rotation=70) call works just fine inside avail_plot.

Any ideas?

import numpy as np
import matplotlib.pyplot as plt
import datetime as dt

def avail_plot(ax, x, y, label, lcolor):
    ax.plot(x,y,'b')
    ax.set_ylabel(label, rotation='horizontal', color=lcolor)
    ax.get_yaxis().set_ticks([])

    #crashes
    #plt.xticks(rotation=70)

    ax2 = ax.twinx()
    ax2.plot(x, [1 for a in y], 'b')
    ax2.get_yaxis().set_ticks([])
    ax2.set_ylabel('testing')

f, axs = plt.subplots(2, sharex=True, sharey=True)
t = np.arange(0.01, 5, 1)
s1 = np.exp(t)
start = dt.datetime.now()
dates=[]
for val in t:
    next_val = start + dt.timedelta(0,val)
    dates.append(next_val)
    start = next_val

avail_plot(axs[0], dates, s1, 'testing', 'green')
avail_plot(axs[1], dates, s1, 'testing2', 'red')
plt.subplots_adjust(hspace=0, bottom=0.3)
plt.yticks([0.5,],("",""))
#doesn't crash, but does not rotate the xticks
#plt.xticks(rotation=70)
plt.show()

回答 0

如果您偏好非面向对象的方法,请把 plt.xticks(rotation=70) 移到两次 avail_plot 调用之前,例如:

plt.xticks(rotation=70)
avail_plot(axs[0], dates, s1, 'testing', 'green')
avail_plot(axs[1], dates, s1, 'testing2', 'red')

这样会在设置标签之前先设置好旋转属性。由于这里有两个坐标轴,在绘制完两个图之后 plt.xticks 会搞不清楚状况。在 plt.xticks 不起作用的那个时刻,plt.gca() 并不会返回您想要修改的坐标轴,因此作用于当前坐标轴的 plt.xticks 是行不通的。

对于不使用 plt.xticks 的面向对象方法,可以使用

plt.setp( axs[1].xaxis.get_majorticklabels(), rotation=70 )

在两次 avail_plot 调用之后。这样就可以专门在正确的坐标轴上设置旋转。

If you prefer a non-object-oriented approach, move plt.xticks(rotation=70) to right before the two avail_plot calls, eg

plt.xticks(rotation=70)
avail_plot(axs[0], dates, s1, 'testing', 'green')
avail_plot(axs[1], dates, s1, 'testing2', 'red')

This sets the rotation property before setting up the labels. Since you have two axes here, plt.xticks gets confused after you’ve made the two plots. At the point when plt.xticks doesn’t do anything, plt.gca() does not give you the axes you want to modify, and so plt.xticks, which acts on the current axes, is not going to work.

For an object-oriented approach not using plt.xticks, you can use

plt.setp( axs[1].xaxis.get_majorticklabels(), rotation=70 )

after the two avail_plot calls. This sets the rotation on the correct axes specifically.


回答 1

解决方案适用于Matplotlib 2.1+

axes 对象有一个可以更改刻度属性的方法 tick_params。它也以 axis 方法的形式存在,名为 set_tick_params。

ax.tick_params(axis='x', rotation=45)

要么

ax.xaxis.set_tick_params(rotation=45)

附带说明一下,目前被采纳的解决方案通过 plt.xticks(rotation=70) 这一命令,把有状态接口(使用 pyplot)与面向对象接口混在了一起。由于问题中的代码使用的是面向对象方法,最好自始至终坚持这种方法。不过该解决方案也确实给出了一个很好的显式写法:plt.setp( axs[1].xaxis.get_majorticklabels(), rotation=70 )。

Solution works for matplotlib 2.1+

There exists an axes method tick_params that can change tick properties. It also exists as an axis method as set_tick_params

ax.tick_params(axis='x', rotation=45)

Or

ax.xaxis.set_tick_params(rotation=45)

As a side note, the current solution mixes the stateful interface (using pyplot) with the object-oriented interface by using the command plt.xticks(rotation=70). Since the code in the question uses the object-oriented approach, it’s best to stick to that approach throughout. The solution does give a good explicit solution with plt.setp( axs[1].xaxis.get_majorticklabels(), rotation=70 )


回答 2

一个无需遍历刻度标签的简单解决方案是直接使用

fig.autofmt_xdate()

该命令自动旋转xaxis标签并调整其位置。默认值为旋转角度30°和水平对齐“向右”。但是可以在函数调用中更改它们

fig.autofmt_xdate(bottom=0.2, rotation=30, ha='right')

附加的 bottom 参数等效于设置 plt.subplots_adjust(bottom=bottom),它允许把底部的留白设置得更大,以容纳旋转后的刻度标签。

因此,基本上,这里您具有所有需要的设置,只需一个命令即可拥有一个漂亮的日期轴。

在matplotlib页面上可以找到一个很好的例子

An easy solution which avoids looping over the tick labels is to just use

fig.autofmt_xdate()

This command automatically rotates the xaxis labels and adjusts their position. The default values are a rotation angle 30° and horizontal alignment “right”. But they can be changed in the function call

fig.autofmt_xdate(bottom=0.2, rotation=30, ha='right')

The additional bottom argument is equivalent to setting plt.subplots_adjust(bottom=bottom), which allows setting the bottom axes padding to a larger value to host the rotated ticklabels.

So basically here you have all the settings you need to have a nice date axis in a single command.

A good example can be found on the matplotlib page.
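
Putting it together, here is a minimal self-contained sketch (the data and variable names are my own, not from the answer) that plots a short date series and lets autofmt_xdate handle the rotation and bottom padding:

import datetime as dt
import matplotlib.pyplot as plt

dates = [dt.datetime(2020, 1, 1) + dt.timedelta(hours=6 * i) for i in range(20)]
values = list(range(20))

fig, ax = plt.subplots()
ax.plot(dates, values)
fig.autofmt_xdate(rotation=30, ha='right')  # rotate and right-align the date labels
plt.show()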


回答 3

把 horizontalalignment 和 rotation 应用到每个刻度标签的另一种方式,是对想要更改的刻度标签做一个 for 循环:

import numpy as np
import matplotlib.pyplot as plt
import datetime as dt

now = dt.datetime.now()
hours = [now + dt.timedelta(minutes=x) for x in range(0,24*60,10)]
days = [now + dt.timedelta(days=x) for x in np.arange(0,30,1/4.)]
hours_value = np.random.random(len(hours))
days_value = np.random.random(len(days))

fig, axs = plt.subplots(2)
fig.subplots_adjust(hspace=0.75)
axs[0].plot(hours,hours_value)
axs[1].plot(days,days_value)

for label in axs[0].get_xmajorticklabels() + axs[1].get_xmajorticklabels():
    label.set_rotation(30)
    label.set_horizontalalignment("right")

这是一个示例,如果您想控制主要和次要刻度线的位置:

import numpy as np
import matplotlib as mpl
import matplotlib.dates
import matplotlib.pyplot as plt
import datetime as dt

fig, axs = plt.subplots(2)
fig.subplots_adjust(hspace=0.75)
now = dt.datetime.now()
hours = [now + dt.timedelta(minutes=x) for x in range(0,24*60,10)]
days = [now + dt.timedelta(days=x) for x in np.arange(0,30,1/4.)]

axs[0].plot(hours,np.random.random(len(hours)))
x_major_lct = mpl.dates.AutoDateLocator(minticks=2,maxticks=10, interval_multiples=True)
x_minor_lct = matplotlib.dates.HourLocator(byhour = range(0,25,1))
x_fmt = matplotlib.dates.AutoDateFormatter(x_major_lct)
axs[0].xaxis.set_major_locator(x_major_lct)
axs[0].xaxis.set_minor_locator(x_minor_lct)
axs[0].xaxis.set_major_formatter(x_fmt)
axs[0].set_xlabel("minor ticks set to every hour, major ticks start with 00:00")

axs[1].plot(days,np.random.random(len(days)))
x_major_lct = mpl.dates.AutoDateLocator(minticks=2,maxticks=10, interval_multiples=True)
x_minor_lct = matplotlib.dates.DayLocator(bymonthday = range(0,32,1))
x_fmt = matplotlib.dates.AutoDateFormatter(x_major_lct)
axs[1].xaxis.set_major_locator(x_major_lct)
axs[1].xaxis.set_minor_locator(x_minor_lct)
axs[1].xaxis.set_major_formatter(x_fmt)
axs[1].set_xlabel("minor ticks set to every day, major ticks show first day of month")
for label in axs[0].get_xmajorticklabels() + axs[1].get_xmajorticklabels():
    label.set_rotation(30)
    label.set_horizontalalignment("right")

Another way to apply horizontalalignment and rotation to each tick label is doing a for loop over the tick labels you want to change:

import numpy as np
import matplotlib.pyplot as plt
import datetime as dt

now = dt.datetime.now()
hours = [now + dt.timedelta(minutes=x) for x in range(0,24*60,10)]
days = [now + dt.timedelta(days=x) for x in np.arange(0,30,1/4.)]
hours_value = np.random.random(len(hours))
days_value = np.random.random(len(days))

fig, axs = plt.subplots(2)
fig.subplots_adjust(hspace=0.75)
axs[0].plot(hours,hours_value)
axs[1].plot(days,days_value)

for label in axs[0].get_xmajorticklabels() + axs[1].get_xmajorticklabels():
    label.set_rotation(30)
    label.set_horizontalalignment("right")

And here is an example if you want to control the location of major and minor ticks:

import numpy as np
import matplotlib as mpl
import matplotlib.dates
import matplotlib.pyplot as plt
import datetime as dt

fig, axs = plt.subplots(2)
fig.subplots_adjust(hspace=0.75)
now = dt.datetime.now()
hours = [now + dt.timedelta(minutes=x) for x in range(0,24*60,10)]
days = [now + dt.timedelta(days=x) for x in np.arange(0,30,1/4.)]

axs[0].plot(hours,np.random.random(len(hours)))
x_major_lct = mpl.dates.AutoDateLocator(minticks=2,maxticks=10, interval_multiples=True)
x_minor_lct = matplotlib.dates.HourLocator(byhour = range(0,25,1))
x_fmt = matplotlib.dates.AutoDateFormatter(x_major_lct)
axs[0].xaxis.set_major_locator(x_major_lct)
axs[0].xaxis.set_minor_locator(x_minor_lct)
axs[0].xaxis.set_major_formatter(x_fmt)
axs[0].set_xlabel("minor ticks set to every hour, major ticks start with 00:00")

axs[1].plot(days,np.random.random(len(days)))
x_major_lct = mpl.dates.AutoDateLocator(minticks=2,maxticks=10, interval_multiples=True)
x_minor_lct = matplotlib.dates.DayLocator(bymonthday = range(0,32,1))
x_fmt = matplotlib.dates.AutoDateFormatter(x_major_lct)
axs[1].xaxis.set_major_locator(x_major_lct)
axs[1].xaxis.set_minor_locator(x_minor_lct)
axs[1].xaxis.set_major_formatter(x_fmt)
axs[1].set_xlabel("minor ticks set to every day, major ticks show first day of month")
for label in axs[0].get_xmajorticklabels() + axs[1].get_xmajorticklabels():
    label.set_rotation(30)
    label.set_horizontalalignment("right")


回答 4

只需使用

ax.set_xticklabels(label_list, rotation=45)

Simply use

ax.set_xticklabels(label_list, rotation=45)
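
One caveat worth noting (my addition, not from the answer): set_xticklabels replaces the automatic labels with a fixed list, so it is safest to pin the tick positions first. A small sketch with made-up labels:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [10, 20, 15, 25])

positions = [0, 1, 2, 3]
labels = ['Mon', 'Tue', 'Wed', 'Thu']    # hypothetical labels for illustration
ax.set_xticks(positions)                 # fix the tick locations first
ax.set_xticklabels(labels, rotation=45)  # then attach the rotated labels
plt.show()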

断言对模拟方法的后续调用

问题:断言对模拟方法的后续调用

Mock 有一个很有用的 assert_called_with() 方法。但据我了解,它只检查对方法的最后一次调用。
如果我的代码连续 3 次调用被 mock 的方法,每次使用不同的参数,我该如何针对这 3 次调用及其各自的参数进行断言?

Mock has a helpful assert_called_with() method. However, as far as I understand this only checks the last call to a method.
If I have code that calls the mocked method 3 times successively, each time with different parameters, how can I assert these 3 calls with their specific parameters?


回答 0

assert_has_calls 是解决此问题的另一种方法。

从文档:

assert_has_calls(calls, any_order=False)

断言该 mock 已按指定的调用被调用过。会在 mock_calls 列表中检查这些调用。

如果 any_order 为 False(默认值),则这些调用必须是依次出现的。在指定的调用之前或之后可以有额外的调用。

如果 any_order 为 True,则这些调用可以按任意顺序出现,但它们必须全部出现在 mock_calls 中。

例:

>>> from unittest.mock import call, Mock
>>> mock = Mock(return_value=None)
>>> mock(1)
>>> mock(2)
>>> mock(3)
>>> mock(4)
>>> calls = [call(2), call(3)]
>>> mock.assert_has_calls(calls)
>>> calls = [call(4), call(2), call(3)]
>>> mock.assert_has_calls(calls, any_order=True)

来源:https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.assert_has_calls

assert_has_calls is another approach to this problem.

From the docs:

assert_has_calls (calls, any_order=False)

assert the mock has been called with the specified calls. The mock_calls list is checked for the calls.

If any_order is False (the default) then the calls must be sequential. There can be extra calls before or after the specified calls.

If any_order is True then the calls can be in any order, but they must all appear in mock_calls.

Example:

>>> from unittest.mock import call, Mock
>>> mock = Mock(return_value=None)
>>> mock(1)
>>> mock(2)
>>> mock(3)
>>> mock(4)
>>> calls = [call(2), call(3)]
>>> mock.assert_has_calls(calls)
>>> calls = [call(4), call(2), call(3)]
>>> mock.assert_has_calls(calls, any_order=True)

Source: https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.assert_has_calls


回答 1

通常,我并不关心调用的顺序,只关心它们确实发生过。在这种情况下,我会把 assert_any_call 与关于 call_count 的断言结合起来使用。

>>> import mock
>>> m = mock.Mock()
>>> m(1)
<Mock name='mock()' id='37578160'>
>>> m(2)
<Mock name='mock()' id='37578160'>
>>> m(3)
<Mock name='mock()' id='37578160'>
>>> m.assert_any_call(1)
>>> m.assert_any_call(2)
>>> m.assert_any_call(3)
>>> assert 3 == m.call_count
>>> m.assert_any_call(4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "[python path]\lib\site-packages\mock.py", line 891, in assert_any_call
    '%s call not found' % expected_string
AssertionError: mock(4) call not found

我发现这种写法比把一长串调用传给单个方法更容易阅读和理解。

如果您确实关心调用顺序,或者预期会有多次相同的调用,那么 assert_has_calls 可能更合适。

编辑

自从发布这个答案以来,我重新思考了自己的整体测试方式。值得一提的是,如果您的测试变得如此复杂,那么您可能测试得不恰当,或者存在设计问题。mock 是为在面向对象设计中测试对象之间的通信而设计的。如果您的设计不是面向对象的(例如更偏过程式或函数式),那么使用 mock 可能完全不合适。您也可能在一个方法内部做了太多事情,或者您正在测试那些最好不要去 mock 的内部细节。我是在自己的代码还不太面向对象时想出这里提到的策略的,而且我相信当时我也在测试那些本来最好不要 mock 的内部细节。

Usually, I don’t care about the order of the calls, only that they happened. In that case, I combine assert_any_call with an assertion about call_count.

>>> import mock
>>> m = mock.Mock()
>>> m(1)
<Mock name='mock()' id='37578160'>
>>> m(2)
<Mock name='mock()' id='37578160'>
>>> m(3)
<Mock name='mock()' id='37578160'>
>>> m.assert_any_call(1)
>>> m.assert_any_call(2)
>>> m.assert_any_call(3)
>>> assert 3 == m.call_count
>>> m.assert_any_call(4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "[python path]\lib\site-packages\mock.py", line 891, in assert_any_call
    '%s call not found' % expected_string
AssertionError: mock(4) call not found

I find doing it this way to be easier to read and understand than a large list of calls passed into a single method.

If you do care about order or you expect multiple identical calls, assert_has_calls might be more appropriate.

Edit

Since I posted this answer, I’ve rethought my approach to testing in general. I think it’s worth mentioning that if your test is getting this complicated, you may be testing inappropriately or have a design problem. Mocks are designed for testing inter-object communication in an object oriented design. If your design is not object oriented (as in more procedural or functional), the mock may be totally inappropriate. You may also have too much going on inside the method, or you might be testing internal details that are best left unmocked. I developed the strategy mentioned in this method when my code was not very object oriented, and I believe I was also testing internal details that would have been best left unmocked.


回答 2

您可以使用 Mock.call_args_list 属性,把参数与之前的各次方法调用进行比较。再结合 Mock.call_count 属性,应该就能让您获得完全的控制。

You can use the Mock.call_args_list attribute to compare parameters to previous method calls. That, in conjunction with the Mock.call_count attribute, should give you full control.
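
A small sketch of what that can look like (the arguments and assertions are illustrative, not from the answer):

from unittest.mock import Mock, call

m = Mock()
m(1)
m(2, key='x')
m(3)

# call_args_list records every call, in order, as `call` objects
assert m.call_count == 3
assert m.call_args_list == [call(1), call(2, key='x'), call(3)]
assert m.call_args_list[1] == call(2, key='x')  # inspect one specific call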


回答 3

我总是不得不一次又一次地看一下这个,所以这是我的答案。


断言对同一个类的不同对象的多个方法调用

假设我们有一个开销很大的类(我们想要 mock 它):

In [1]: class HeavyDuty(object):
   ...:     def __init__(self):
   ...:         import time
   ...:         time.sleep(2)  # <- Spends a lot of time here
   ...:     
   ...:     def do_work(self, arg1, arg2):
   ...:         print("Called with %r and %r" % (arg1, arg2))
   ...:  

下面是使用 HeavyDuty 类的两个实例的一些代码:

In [2]: def heavy_work():
   ...:     hd1 = HeavyDuty()
   ...:     hd1.do_work(13, 17)
   ...:     hd2 = HeavyDuty()
   ...:     hd2.do_work(23, 29)
   ...:    


现在,这是 heavy_work 函数的测试用例:

In [3]: from unittest.mock import patch, call
   ...: def test_heavy_work():
   ...:     expected_calls = [call.do_work(13, 17),call.do_work(23, 29)]
   ...:     
   ...:     with patch('__main__.HeavyDuty') as MockHeavyDuty:
   ...:         heavy_work()
   ...:         MockHeavyDuty.return_value.assert_has_calls(expected_calls)
   ...:  

我们用 MockHeavyDuty 来 mock HeavyDuty 类。要断言来自每个 HeavyDuty 实例的方法调用,必须引用 MockHeavyDuty.return_value.assert_has_calls,而不是 MockHeavyDuty.assert_has_calls。另外,在 expected_calls 列表中,我们必须指定想要断言哪个方法的调用,所以列表中的元素是对 call.do_work 的调用,而不是简单的 call。

运行该测试用例可以看到它是成功的:

In [4]: print(test_heavy_work())
None


如果我们修改heavy_work函数,则测试将失败并产生有用的错误消息:

In [5]: def heavy_work():
   ...:     hd1 = HeavyDuty()
   ...:     hd1.do_work(113, 117)  # <- call args are different
   ...:     hd2 = HeavyDuty()
   ...:     hd2.do_work(123, 129)  # <- call args are different
   ...:     

In [6]: print(test_heavy_work())
---------------------------------------------------------------------------
(traceback omitted for clarity)

AssertionError: Calls not found.
Expected: [call.do_work(13, 17), call.do_work(23, 29)]
Actual: [call.do_work(113, 117), call.do_work(123, 129)]


断言对一个函数的多次调用

与上面形成对比,下面的示例演示了如何 mock 对一个函数的多次调用:

In [7]: def work_function(arg1, arg2):
   ...:     print("Called with args %r and %r" % (arg1, arg2))

In [8]: from unittest.mock import patch, call
   ...: def test_work_function():
   ...:     expected_calls = [call(13, 17), call(23, 29)]    
   ...:     with patch('__main__.work_function') as mock_work_function:
   ...:         work_function(13, 17)
   ...:         work_function(23, 29)
   ...:         mock_work_function.assert_has_calls(expected_calls)
   ...:    

In [9]: print(test_work_function())
None


有两个主要区别。第一个是在 mock 一个函数时,我们用 call 而不是 call.some_method 来设置预期的调用。第二个是我们在 mock_work_function 上调用 assert_has_calls,而不是在 mock_work_function.return_value 上。

I always have to look this one up time and time again, so here is my answer.


Asserting multiple method calls on different objects of the same class

Suppose we have a heavy duty class (which we want to mock):

In [1]: class HeavyDuty(object):
   ...:     def __init__(self):
   ...:         import time
   ...:         time.sleep(2)  # <- Spends a lot of time here
   ...:     
   ...:     def do_work(self, arg1, arg2):
   ...:         print("Called with %r and %r" % (arg1, arg2))
   ...:  

here is some code that uses two instances of the HeavyDuty class:

In [2]: def heavy_work():
   ...:     hd1 = HeavyDuty()
   ...:     hd1.do_work(13, 17)
   ...:     hd2 = HeavyDuty()
   ...:     hd2.do_work(23, 29)
   ...:    


Now, here is a test case for the heavy_work function:

In [3]: from unittest.mock import patch, call
   ...: def test_heavy_work():
   ...:     expected_calls = [call.do_work(13, 17),call.do_work(23, 29)]
   ...:     
   ...:     with patch('__main__.HeavyDuty') as MockHeavyDuty:
   ...:         heavy_work()
   ...:         MockHeavyDuty.return_value.assert_has_calls(expected_calls)
   ...:  

We are mocking the HeavyDuty class with MockHeavyDuty. To assert method calls coming from every HeavyDuty instance we have to refer to MockHeavyDuty.return_value.assert_has_calls, instead of MockHeavyDuty.assert_has_calls. In addition, in the list of expected_calls we have to specify which method name we are interested in asserting calls for. So our list is made of calls to call.do_work, as opposed to simply call.

Exercising the test case shows us it is successful:

In [4]: print(test_heavy_work())
None


If we modify the heavy_work function, the test fails and produces a helpful error message:

In [5]: def heavy_work():
   ...:     hd1 = HeavyDuty()
   ...:     hd1.do_work(113, 117)  # <- call args are different
   ...:     hd2 = HeavyDuty()
   ...:     hd2.do_work(123, 129)  # <- call args are different
   ...:     

In [6]: print(test_heavy_work())
---------------------------------------------------------------------------
(traceback omitted for clarity)

AssertionError: Calls not found.
Expected: [call.do_work(13, 17), call.do_work(23, 29)]
Actual: [call.do_work(113, 117), call.do_work(123, 129)]


Asserting multiple calls to a function

To contrast with the above, here is an example that shows how to mock multiple calls to a function:

In [7]: def work_function(arg1, arg2):
   ...:     print("Called with args %r and %r" % (arg1, arg2))

In [8]: from unittest.mock import patch, call
   ...: def test_work_function():
   ...:     expected_calls = [call(13, 17), call(23, 29)]    
   ...:     with patch('__main__.work_function') as mock_work_function:
   ...:         work_function(13, 17)
   ...:         work_function(23, 29)
   ...:         mock_work_function.assert_has_calls(expected_calls)
   ...:    

In [9]: print(test_work_function())
None


There are two main differences. The first one is that when mocking a function we setup our expected calls using call, instead of using call.some_method. The second one is that we call assert_has_calls on mock_work_function, instead of on mock_work_function.return_value.

