标签归档:Python

Mac OS X-EnvironmentError:找不到mysql_config

问题:Mac OS X-EnvironmentError:找不到mysql_config

首先,是的,我已经看到了:

pip安装mysql-python失败,并显示EnvironmentError:找不到mysql_config

问题

我正在尝试在 Google App Engine 项目上使用 Django。但是，服务器由于以下错误无法正常启动，所以我一直无法开始：

ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb

我做了一些研究,所有这些都指出必须安装Mysql-python,因为它显然不在我的系统上。我实际上尝试卸载它,并得到了:

Cannot uninstall requirement mysql-python, not installed

每当我真正尝试通过以下方式安装时:

sudo pip install MySQL-python

我收到一条错误消息:

raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found

我已经尝试运行:

export PATH=$PATH:/usr/local/mysql/bin

但这似乎没有帮助,因为我再次运行了安装命令,但仍然失败。

有任何想法吗?

请注意,我不在virtualenv中。

First off, yeah, I’ve already seen this:

pip install mysql-python fails with EnvironmentError: mysql_config not found

The problem

I am trying to use Django on a Google App Engine project. However, I haven’t been able to get started as the server fails to start properly due to:

ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb

I did some research and it all pointed to having to install Mysql-python, as apparently it isn’t on my system. I actually tried uninstalling it and got this:

Cannot uninstall requirement mysql-python, not installed

Whenever I actually do try to install via:

sudo pip install MySQL-python

I get an error stating:

raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found

I’ve already tried running:

export PATH=$PATH:/usr/local/mysql/bin

but that didn’t seem to help, as I ran the installation command again and it still failed.

Any ideas?

Please note I’m not in a virtualenv.


回答 0

好吧,首先,让我检查一下我是否和您在同一页面上:

  • 您安装了python
  • 你做了 brew install mysql
  • 你做了 export PATH=$PATH:/usr/local/mysql/bin
  • 最后，您执行了 pip install MySQL-Python（如果使用 python 3，则是 pip3 install mysqlclient）

如果您以相同的顺序执行了所有这些步骤,但是仍然出现错误,请继续阅读,如果您没有按照这些确切的步骤尝试,那么请从头开始。

因此,您按照这些步骤进行操作,仍然遇到错误,可以尝试以下操作:

  1. 尝试从 bash 运行 which mysql_config。它很可能找不到，这也是构建过程找不到它的原因。再试试运行 locate mysql_config，看看有没有返回结果。该二进制文件的路径需要位于 shell 的 $PATH 环境变量中，或者显式写在该模块的 setup.py 文件中（假设它会在某个特定位置查找该文件）。

  2. 除了 MySQL-Python，还可以尝试使用 mysql-connector-python，它可以通过 pip install mysql-connector-python 安装。有关更多信息，请参见此处和此处。

  3. 手动找到 mysql/bin、mysql_config 和 MySQL-Python 的位置，并将它们全部添加到 $PATH 环境变量中。

  4. 如果以上步骤都失败了，可以尝试用 MacPorts 安装 mysql，这种情况下 mysql_config 文件实际上叫做 mysql_config5，并且安装后必须执行：export PATH=$PATH:/opt/local/lib/mysql5/bin。您可以在此处找到更多细节。

注意 1：我见过有人说安装 python-dev 和 libmysqlclient-dev 也有帮助，但我不知道这些软件包在 Mac OS 上是否可用。

注意2:此外,请确保尝试以root用户身份运行命令。

我的答案（除了出自我自己的思考之外）来自这些地方，也许您可以看看它们是否有帮助：1、2、3、4。

我希望我能提供帮助,并且很高兴知道其中任何一个是否有效。祝好运。

Ok, well, first of all, let me check if I am on the same page as you:

  • You installed python
  • You did brew install mysql
  • You did export PATH=$PATH:/usr/local/mysql/bin
  • And finally, you did pip install MySQL-Python (or pip3 install mysqlclient if using python 3)

If you did all those steps in the same order and you still got an error, read on to the end; if, however, you did not follow these exact steps, try following them from the very beginning.

So, you followed the steps, and you’re still getting an error; well, there are a few things you could try:

  1. Try running which mysql_config from bash. It probably won’t be found. That’s why the build isn’t finding it either. Try running locate mysql_config and see if anything comes back. The path to this binary needs to be either in your shell’s $PATH environment variable, or it needs to be explicitly in the setup.py file for the module assuming it’s looking in some specific place for that file.

  2. Instead of using MySQL-Python, try using ‘mysql-connector-python’, it can be installed using pip install mysql-connector-python. More information on this can be found here and here.

  3. Manually find the location of ‘mysql/bin’, ‘mysql_config’, and ‘MySQL-Python’, and add all these to the $PATH environment variable.

  4. If all above steps fail, then you could try installing ‘mysql’ using MacPorts, in which case the file ‘mysql_config’ would actually be called ‘mysql_config5’, and in this case, you would have to do this after installing: export PATH=$PATH:/opt/local/lib/mysql5/bin. You can find more details here.

Note1: I’ve seen some people saying that installing python-dev and libmysqlclient-dev also helped, however I do not know if these packages are available on Mac OS.

Note2: Also, make sure to try running the commands as root.

I got my answers from (besides my brain) these places (maybe you could have a look at them, to see if it would help): 1, 2, 3, 4.

I hope I helped, and would be happy to know if any of this worked, or not. Good luck.
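As a quick sanity check for option 2 above, here is a minimal sketch using mysql-connector-python; the host, user and password values are illustrative placeholders, not settings from the question:

# Minimal connectivity check with mysql-connector-python
# (installed via: pip install mysql-connector-python).
# host/user/password below are illustrative placeholders.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
print(conn.is_connected())   # True if the server accepted the connection
conn.close()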


回答 1

我一直在调试这个问题，整整 3 小时 17 分钟。特别让我恼火的是，我之前因为大学作业已经在系统上装过 sql，但 pip/pip3 却识别不到它。上面这些帖子以及我在互联网上翻过的许多其他帖子有助于弄清问题所在，但并没有真正解决问题。

回答

Pip 会在 Homebrew 目录中寻找 mysql 二进制文件，该目录相对于 Macintosh HD 位于

/usr/local/Cellar/

所以我发现这需要您进行一些更改

步骤1:如果尚未下载MySql,请下载https://dev.mysql.com/downloads/

步骤2：相对于 Macintosh HD 找到它并 cd 进入

/usr/local/mysql/bin

第3步：进入该目录后打开终端，使用您选择的文本编辑器。我本人是 Neovim 用户，所以我输入了下面的命令（Mac 并不自带 Neovim……这是另一个故事了）

nvim mysql_config

第4步：您会在大约第 112 行看到

# Create options 
libs="-L$pkglibdir"
libs="$libs -l "

改成

# Create options 
libs="-L$pkglibdir"
libs="$libs -lmysqlclient -lssl -lcrypto"

*您会注意到该文件具有只读访问权限,因此,如果您使用vim或neovim

:w !sudo tee %

步骤5:转到主目录并编辑.bash_profile文件

cd ~

然后

nvim .bash_profile

并添加

export PATH="/usr/local/mysql/bin:$PATH"

到文件然后保存

步骤6：相对于 Macintosh HD 找到 paths 文件并向其中添加内容

cd /private/etc/

然后

nvim paths

并添加

/usr/local/mysql/bin

*您会再次注意到该文件具有只读访问权限,因此,如果您使用vim或neovim

:w !sudo tee % 

然后

cd ~

然后运行以下命令重新加载配置，使更改在终端中生效

source .bash_profile

最后

pip3 install mysqlclient

然后，大功告成（voilà）。记住，这是一种氛围（vibe）。

I had been debugging this problem forever – 3 hours 17 mins. What particularly annoyed me was that I already had sql installed on my system through prior uni work but pip/pip3 wasn’t recognising it. These threads above and many others I scoured the internet for were helpful in illuminating the problem but didn’t actually solve things.

ANSWER

Pip is looking for mysql binaries in the Homebrew Directory which is located relative to Macintosh HD @

/usr/local/Cellar/

so I found that this requires you making a few changes

step 1: Download MySql if not already done so https://dev.mysql.com/downloads/

Step 2: Locate it relative to Macintosh HD and cd

/usr/local/mysql/bin

Step 3: Once there open terminal and use a text editor of choice – I’m a neovim guy myself so I typed (doesn’t automatically come with Mac… another story for another day)

nvim mysql_config

Step 4: You will see at approx line 112

# Create options 
libs="-L$pkglibdir"
libs="$libs -l "

Change to

# Create options 
libs="-L$pkglibdir"
libs="$libs -lmysqlclient -lssl -lcrypto"

*you’ll notice that this file has read-only access, so if you’re using vim or neovim

:w !sudo tee %

Step 5: Head to the home directory and edit the .bash_profile file

cd ~

Then

nvim .bash_profile

and add

export PATH="/usr/local/mysql/bin:$PATH"

to the file then save

Step 6: relative to Macintosh HD locate paths and add to it

cd /private/etc/

then

nvim paths

and add

/usr/local/mysql/bin

*you’ll again notice that this file has read-only access, so if you’re using vim or neovim

:w !sudo tee % 

then

cd ~

then refresh the terminal with your changes by running

source .bash_profile

Finally

pip3 install mysqlclient

And voilà. Remember it’s a vibe.


回答 2

如果您不想安装完整的 mysql，可以只安装客户端来解决此问题：brew install mysql-client。命令完成后，它会提示将下面这一行添加到 ~/.bash_profile：

echo 'export PATH="/usr/local/opt/mysql-client/bin:$PATH"' >> ~/.bash_profile

关闭终端并启动新终端,然后继续 pip install mysqlclient

If you don’t want to install the full mysql, we can fix this by just installing the client: brew install mysql-client. Once the command completes, it will ask you to add the line below to ~/.bash_profile:

echo 'export PATH="/usr/local/opt/mysql-client/bin:$PATH"' >> ~/.bash_profile

Close terminal and start new terminal and proceed with pip install mysqlclient
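Once pip install mysqlclient has finished, a quick way to confirm the build worked is to import it; mysqlclient exposes the classic MySQLdb module name (a minimal check, assuming the install succeeded):

# Import check after `pip install mysqlclient`
import MySQLdb
print(MySQLdb.version_info)   # e.g. (1, 4, 6, 'final', 0)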


回答 3

我在MacOS Catalina上运行Python 3.6。我的问题是我尝试安装mysqlclient==1.4.2.post1,并不断抛出mysql_config not found错误。

这是我解决问题的步骤。

  1. 使用 brew 安装 mysql-connector-c（如果已经装了 mysql，请先运行 brew unlink mysql 解除链接）- brew install mysql-connector-c
  2. 打开 mysql_config 并编辑大约第 112 行附近的内容：
# Create options 
libs="-L$pkglibdir"
libs="$libs -lmysqlclient -lssl -lcrypto"
  3. brew info openssl - 这会告诉您更多关于如何把 openssl 放进 PATH 的信息
  4. 关于步骤 3，您需要执行以下命令把 openssl 放进 PATH - echo 'export PATH="/usr/local/opt/openssl/bin:$PATH"' >> ~/.bash_profile
  5. 让编译器能找到 openssl - export LDFLAGS="-L/usr/local/opt/openssl/lib"
  6. 让编译器能找到 openssl - export CPPFLAGS="-I/usr/local/opt/openssl/include"

I am running Python 3.6 on MacOS Catalina. My issue was that I tried to install mysqlclient==1.4.2.post1 and it keeps throwing mysql_config not found error.

This is the steps I took to solve the issue.

  1. Install mysql-connector-c using brew (if you already have mysql installed, unlink it first with brew unlink mysql) – brew install mysql-connector-c
  2. Open mysql_config and edit the file around line 112:
# Create options 
libs="-L$pkglibdir"
libs="$libs -lmysqlclient -lssl -lcrypto"
  3. brew info openssl – this will give you more information on what needs to be done about putting openssl in PATH
  4. in relation to step 3, you need to do this to put openssl in PATH – echo 'export PATH="/usr/local/opt/openssl/bin:$PATH"' >> ~/.bash_profile
  5. for compilers to find openssl – export LDFLAGS="-L/usr/local/opt/openssl/lib"
  6. for compilers to find openssl – export CPPFLAGS="-I/usr/local/opt/openssl/include"

回答 4

当我安装mysqlclient时也会发生这种情况,

$ pip install mysqlclient

如用户3429036所说,

$ brew install mysql

Also this happens when I was installing mysqlclient,

$ pip install mysqlclient

As user3429036 said,

$ brew install mysql

回答 5

此答案适用于不是通过 brew、而是通过官方 .dmg/.pkg 安装 MySQL 的 MacOS 用户。该安装程序不会修改您的 PATH，从而导致一开始就出现这些问题：

  1. 所有 MySQL 命令（如 mysql、mysqladmin、mysql_config 等）都找不到，其结果是：
  2. “MySQL 偏好设置面板”不会出现在系统偏好设置中，并且
  3. 您无法安装任何与 MySQL 通信的 API，包括 mysqlclient

您需要做的是把 MySQL 的 bin 文件夹（通常是 /usr/local/mysql/bin）追加到 PATH 中，方法是在 ~/.bash_profile 文件中添加这一行：

export PATH="/usr/local/mysql/bin/:$PATH"

然后，您应该重新加载 ~/.bash_profile，使更改在当前终端会话中生效：

source ~/.bash_profile

但是，在安装 mysqlclient 之前，您需要先接受 XcodeBuild 许可证：

sudo xcodebuild -license

按照它的指示“签字画押”，之后您就应该可以顺利安装 mysqlclient 了：

pip install mysqlclient

安装完成之后，您还必须再做一件事来修复 MySQL 自带的一个运行时问题（找不到动态库 libmysqlclient.dylib）：把这一行添加到您的系统动态库路径中：

export DYLD_LIBRARY_PATH=/usr/local/mysql/lib/:$DYLD_LIBRARY_PATH

This answer is for MacOS users who did not install from brew but rather from the official .dmg/.pkg. That installer fails to edit your PATH, causing things to break out of the box:

  1. All MySQL commands like mysql, mysqladmin, mysql_config, etc cannot be found, and as a result:
  2. the “MySQL Preference Pane” fails to appear in System Preferences, and
  3. you cannot install any API that communicates with MySQL, including mysqlclient

What you have to do is append the MySQL bin folder (typically /usr/local/mysql/bin) to your PATH by adding this line to your ~/.bash_profile file:

export PATH="/usr/local/mysql/bin/:$PATH"

You should then reload your ~/.bash_profile for the change to take effect in your current Terminal session:

source ~/.bash_profile

Before installing mysqlclient, however, you need to accept the XcodeBuild license:

sudo xcodebuild -license

Follow their directions to sign away your family, after which you should be able to install mysqlclient without issue:

pip install mysqlclient

After installing that, you must do one more thing to fix a runtime bug that ships with MySQL (Dynamic Library libmysqlclient.dylib not found), by adding this line to your system dynamic libraries path:

export DYLD_LIBRARY_PATH=/usr/local/mysql/lib/:$DYLD_LIBRARY_PATH


回答 6

如果您通过 Homebrew 安装了指定版本的 mysql，则 mysql_config 会出现在这里：/usr/local/Cellar/mysql@5.6/5.6.47/bin

您可以在 /usr/local/ 目录中使用 ls 命令找到 mysql bin 的路径

/usr/local/Cellar/mysql@5.6/5.6.47/bin

像这样将路径添加到bash配置文件。

nano ~/.bash_profile

export PATH="/usr/local/Cellar/mysql@5.6/5.6.47/bin:$PATH"

If you have installed mysql using Homebrew by specifying a version then mysql_config would be present here. – /usr/local/Cellar/mysql@5.6/5.6.47/bin

you can find the path of the sql bin by using ls command in /usr/local/ directory

/usr/local/Cellar/mysql@5.6/5.6.47/bin

Add the path to bash profile like this.

nano ~/.bash_profile

export PATH="/usr/local/Cellar/mysql@5.6/5.6.47/bin:$PATH"

回答 7

在我的情况下，问题是我是在 python 虚拟环境中运行命令的，尽管我已经把 /usr/local/mysql/bin 写进了 .bash_profile 文件，但虚拟环境里并没有这个路径。只需在虚拟环境中导出该路径，对我就有效了。

另外说明一下，mysql_config 位于 bin 目录中。

The problem in my case was that I was running the command inside a python virtual environment, and it didn’t have the path to /usr/local/mysql/bin even though I had put it in the .bash_profile file. Just exporting the path in the virtual env worked for me.

For your info, mysql_config resides inside the bin directory.


回答 8

对我来说，用 brew 或 apt-get 安装也不容易，所以我从 https://dev.mysql.com/downloads/connector/python/ 下载了 mysql 并安装了它。这样我就可以在这个目录中找到 mysql_config：/usr/local/mysql/bin

下一步是:

  1. export PATH=$PATH:/usr/local/mysql/bin
  2. pip install MySQL-python==1.2.5

Installing via brew or apt-get was also not easy for me, so I downloaded mysql via https://dev.mysql.com/downloads/connector/python/ and installed it. So I can find mysql_config in this directory: /usr/local/mysql/bin

The next steps are:

  1. export PATH=$PATH:/usr/local/mysql/bin
  2. pip install MySQL-python==1.2.5

从列表或元组中明确选择项目

问题:从列表或元组中明确选择项目

我有以下Python列表(也可以是元组):

myList = ['foo', 'bar', 'baz', 'quux']

我可以说

>>> myList[0:3]
['foo', 'bar', 'baz']
>>> myList[::2]
['foo', 'baz']
>>> myList[1::2]
['bar', 'quux']

如何显式挑选那些索引没有特定模式的项目？例如，我想选择 [0,2,3]。或者，从一个有 1000 个元素的很大的列表中，我想选择 [87, 342, 217, 998, 500]。有没有什么 Python 语法可以做到这一点？类似这样：

>>> myBigList[87, 342, 217, 998, 500]

I have the following Python list (can also be a tuple):

myList = ['foo', 'bar', 'baz', 'quux']

I can say

>>> myList[0:3]
['foo', 'bar', 'baz']
>>> myList[::2]
['foo', 'baz']
>>> myList[1::2]
['bar', 'quux']

How do I explicitly pick out items whose indices have no specific patterns? For example, I want to select [0,2,3]. Or from a very big list of 1000 items, I want to select [87, 342, 217, 998, 500]. Is there some Python syntax that does that? Something that looks like:

>>> myBigList[87, 342, 217, 998, 500]

回答 0

list( myBigList[i] for i in [87, 342, 217, 998, 500] )

我用 python 2.5.2 对这些答案做了比较：

  • 19.7 微秒: [ myBigList[i] for i in [87, 342, 217, 998, 500] ]

  • 20.6 微秒: map(myBigList.__getitem__, (87, 342, 217, 998, 500))

  • 22.7 微秒: itemgetter(87, 342, 217, 998, 500)(myBigList)

  • 24.6 微秒: list( myBigList[i] for i in [87, 342, 217, 998, 500] )

请注意,在Python 3中,第1个已更改为与第4个相同。


另一种选择是一开始就使用 numpy.array，它支持用列表或 numpy.array 进行索引：

>>> import numpy
>>> myBigList = numpy.array(range(1000))
>>> myBigList[(87, 342, 217, 998, 500)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: invalid index
>>> myBigList[[87, 342, 217, 998, 500]]
array([ 87, 342, 217, 998, 500])
>>> myBigList[numpy.array([87, 342, 217, 998, 500])]
array([ 87, 342, 217, 998, 500])

元组的行为则不一样，因为它们会被当作切片来处理。

list( myBigList[i] for i in [87, 342, 217, 998, 500] )

I compared the answers with python 2.5.2:

  • 19.7 usec: [ myBigList[i] for i in [87, 342, 217, 998, 500] ]

  • 20.6 usec: map(myBigList.__getitem__, (87, 342, 217, 998, 500))

  • 22.7 usec: itemgetter(87, 342, 217, 998, 500)(myBigList)

  • 24.6 usec: list( myBigList[i] for i in [87, 342, 217, 998, 500] )

Note that in Python 3, the 1st was changed to be the same as the 4th.


Another option would be to start out with a numpy.array which allows indexing via a list or a numpy.array:

>>> import numpy
>>> myBigList = numpy.array(range(1000))
>>> myBigList[(87, 342, 217, 998, 500)]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: invalid index
>>> myBigList[[87, 342, 217, 998, 500]]
array([ 87, 342, 217, 998, 500])
>>> myBigList[numpy.array([87, 342, 217, 998, 500])]
array([ 87, 342, 217, 998, 500])

The tuple doesn’t work the same way as those are slices.


回答 1

那这个呢:

from operator import itemgetter
itemgetter(0,2,3)(myList)
('foo', 'baz', 'quux')

What about this:

from operator import itemgetter
itemgetter(0,2,3)(myList)
('foo', 'baz', 'quux')

回答 2

它不是内置的,但是如果您愿意,可以创建一个将元组作为“索引”的list的子类:

class MyList(list):

    def __getitem__(self, index):
        if isinstance(index, tuple):
            return [self[i] for i in index]
        return super(MyList, self).__getitem__(index)


seq = MyList("foo bar baaz quux mumble".split())
print seq[0]
print seq[2,4]
print seq[1::2]

输出：

foo
['baaz', 'mumble']
['bar', 'quux']

It isn’t built-in, but you can make a subclass of list that takes tuples as “indexes” if you’d like:

class MyList(list):

    def __getitem__(self, index):
        if isinstance(index, tuple):
            return [self[i] for i in index]
        return super(MyList, self).__getitem__(index)


seq = MyList("foo bar baaz quux mumble".split())
print seq[0]
print seq[2,4]
print seq[1::2]

printing

foo
['baaz', 'mumble']
['bar', 'quux']

回答 3

也许用列表推导式正合适：

L = ['a', 'b', 'c', 'd', 'e', 'f']
print [ L[index] for index in [1,3,5] ]

生成:

['b', 'd', 'f']

那是您要找的东西吗?

Maybe a list comprehension is in order:

L = ['a', 'b', 'c', 'd', 'e', 'f']
print [ L[index] for index in [1,3,5] ]

Produces:

['b', 'd', 'f']

Is that what you are looking for?


回答 4

>>> map(myList.__getitem__, (2,2,1,3))
('baz', 'baz', 'bar', 'quux')

如果您希望能够写 myList[(2,2,1,3)]，也可以创建自己的 List 类，让它的 __getitem__ 支持以元组作为参数。

>>> map(myList.__getitem__, (2,2,1,3))
('baz', 'baz', 'bar', 'quux')

You can also create your own List class which supports tuples as arguments to __getitem__ if you want to be able to do myList[(2,2,1,3)].
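Note that on Python 3, map() returns a lazy iterator rather than a list, so you would wrap the call in list() to see the values (a small sketch of the same idea):

myList = ['foo', 'bar', 'baz', 'quux']
# map() is lazy on Python 3, so materialize it with list()
print(list(map(myList.__getitem__, (2, 2, 1, 3))))
# ['baz', 'baz', 'bar', 'quux']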


回答 5

我只想指出，尽管 itemgetter 的语法看起来非常简洁，但在大型列表上执行时它有点慢。

import timeit
from operator import itemgetter
start=timeit.default_timer()
for i in range(1000000):
    itemgetter(0,2,3)(myList)
print ("Itemgetter took ", (timeit.default_timer()-start))

Itemgetter 耗时 1.065209062149279

start=timeit.default_timer()
for i in range(1000000):
    myList[0],myList[2],myList[3]
print ("Multiple slice took ", (timeit.default_timer()-start))

多次切片耗时 0.6225321444745759

I just want to point out that even though the syntax of itemgetter looks really neat, it’s kinda slow when performed on a large list.

import timeit
from operator import itemgetter
start=timeit.default_timer()
for i in range(1000000):
    itemgetter(0,2,3)(myList)
print ("Itemgetter took ", (timeit.default_timer()-start))

Itemgetter took 1.065209062149279

start=timeit.default_timer()
for i in range(1000000):
    myList[0],myList[2],myList[3]
print ("Multiple slice took ", (timeit.default_timer()-start))

Multiple slice took 0.6225321444745759


回答 6

另一个可能的解决方案:

sek=[]
L=[1,2,3,4,5,6,7,8,9,0]
for i in [2, 4, 7, 0, 3]:
   a=[L[i]]
   sek=sek+a
print (sek)

Another possible solution:

sek=[]
L=[1,2,3,4,5,6,7,8,9,0]
for i in [2, 4, 7, 0, 3]:
   a=[L[i]]
   sek=sek+a
print (sek)

回答 7

通常，当你有一个布尔 numpy 数组作为掩码（mask）时：

[mylist[i] for i in np.arange(len(mask), dtype=int)[mask]]

适用于任何序列或np.array的lambda:

subseq = lambda myseq, mask : [myseq[i] for i in np.arange(len(mask), dtype=int)[mask]]

newseq = subseq(myseq, mask)

As is often the case, when you have a boolean numpy array as a mask:

[mylist[i] for i in np.arange(len(mask), dtype=int)[mask]]

A lambda that works for any sequence or np.array:

subseq = lambda myseq, mask : [myseq[i] for i in np.arange(len(mask), dtype=int)[mask]]

newseq = subseq(myseq, mask)
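A quick usage sketch of the lambda above, with a made-up mask and sequence just for illustration:

import numpy as np

subseq = lambda myseq, mask: [myseq[i] for i in np.arange(len(mask), dtype=int)[mask]]

mask = np.array([True, False, True, False])
print(subseq(['a', 'b', 'c', 'd'], mask))   # ['a', 'c']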


如何从日期时间对象中删除pytz时区?

问题:如何从日期时间对象中删除pytz时区?

有没有一种简单的方法可以从pytz datetime对象中删除时区?
例如，在本示例中从 dt_tz 重构出 dt：

>>> import datetime
>>> import pytz
>>> dt = datetime.datetime.now()
>>> dt
datetime.datetime(2012, 6, 8, 9, 27, 32, 601000)
>>> dt_tz = pytz.utc.localize(dt)
>>> dt_tz
datetime.datetime(2012, 6, 8, 9, 27, 32, 601000, tzinfo=<UTC>)

Is there a simple way to remove the timezone from a pytz datetime object?
e.g. reconstructing dt from dt_tz in this example:

>>> import datetime
>>> import pytz
>>> dt = datetime.datetime.now()
>>> dt
datetime.datetime(2012, 6, 8, 9, 27, 32, 601000)
>>> dt_tz = pytz.utc.localize(dt)
>>> dt_tz
datetime.datetime(2012, 6, 8, 9, 27, 32, 601000, tzinfo=<UTC>)

回答 0

要从日期时间对象中删除时区(tzinfo):

# dt_tz is a datetime.datetime object
dt = dt_tz.replace(tzinfo=None)

如果您使用的是诸如arrow的库,则可以通过简单地将arrow对象转换为datetime对象来删除时区,然后执行与上述示例相同的操作。

# <Arrow [2014-10-09T10:56:09.347444-07:00]>
arrowObj = arrow.get('2014-10-09T10:56:09.347444-07:00')

# datetime.datetime(2014, 10, 9, 10, 56, 9, 347444, tzinfo=tzoffset(None, -25200))
tmpDatetime = arrowObj.datetime

# datetime.datetime(2014, 10, 9, 10, 56, 9, 347444)
tmpDatetime = tmpDatetime.replace(tzinfo=None)

你为什么要这样做？一个例子是 mysql 的 DATETIME 类型不支持时区。因此，使用 sqlalchemy 之类的 ORM 时，当您给它一个带时区的 datetime.datetime 对象插入数据库时，它会直接丢掉时区。解决方案是先把 datetime.datetime 对象转换为 UTC（这样数据库中的所有内容都是 UTC，因为无法指定时区），然后要么直接插入数据库（时区反正会被去掉），要么自己去掉时区。还要注意，您不能比较一个带时区（aware）而另一个不带时区（naive）的 datetime.datetime 对象。

##############################################################################
# MySQL example! where MySQL doesn't support timezones with its DATETIME type!
##############################################################################

arrowObj = arrow.get('2014-10-09T10:56:09.347444-07:00')

arrowDt = arrowObj.to("utc").datetime

# inserts datetime.datetime(2014, 10, 9, 17, 56, 9, 347444, tzinfo=tzutc())
insertIntoMysqlDatabase(arrowDt)

# returns datetime.datetime(2014, 10, 9, 17, 56, 9, 347444)
dbDatetimeNoTz = getFromMysqlDatabase()

# cannot compare timzeone aware and timezone naive
dbDatetimeNoTz == arrowDt # False, or TypeError on python versions before 3.3

# compare datetimes that are both aware or both naive work however
dbDatetimeNoTz == arrowDt.replace(tzinfo=None) # True

To remove a timezone (tzinfo) from a datetime object:

# dt_tz is a datetime.datetime object
dt = dt_tz.replace(tzinfo=None)

If you are using a library like arrow, then you can remove the timezone by simply converting the arrow object to a datetime object, then doing the same thing as the example above.

# <Arrow [2014-10-09T10:56:09.347444-07:00]>
arrowObj = arrow.get('2014-10-09T10:56:09.347444-07:00')

# datetime.datetime(2014, 10, 9, 10, 56, 9, 347444, tzinfo=tzoffset(None, -25200))
tmpDatetime = arrowObj.datetime

# datetime.datetime(2014, 10, 9, 10, 56, 9, 347444)
tmpDatetime = tmpDatetime.replace(tzinfo=None)

Why would you do this? One example is that mysql does not support timezones with its DATETIME type. So using ORM’s like sqlalchemy will simply remove the timezone when you give it a datetime.datetime object to insert into the database. The solution is to convert your datetime.datetime object to UTC (so everything in your database is UTC since it can’t specify timezone) then either insert it into the database (where the timezone is removed anyway) or remove it yourself. Also note that you cannot compare datetime.datetime objects where one is timezone aware and another is timezone naive.

##############################################################################
# MySQL example! where MySQL doesn't support timezones with its DATETIME type!
##############################################################################

arrowObj = arrow.get('2014-10-09T10:56:09.347444-07:00')

arrowDt = arrowObj.to("utc").datetime

# inserts datetime.datetime(2014, 10, 9, 17, 56, 9, 347444, tzinfo=tzutc())
insertIntoMysqlDatabase(arrowDt)

# returns datetime.datetime(2014, 10, 9, 17, 56, 9, 347444)
dbDatetimeNoTz = getFromMysqlDatabase()

# cannot compare timzeone aware and timezone naive
dbDatetimeNoTz == arrowDt # False, or TypeError on python versions before 3.3

# compare datetimes that are both aware or both naive work however
dbDatetimeNoTz == arrowDt.replace(tzinfo=None) # True
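If you prefer to stay in the standard library, the same aware-to-naive-UTC conversion can be done without pytz (a minimal sketch; dt_tz stands for any timezone-aware datetime):

import datetime

# dt_tz is any timezone-aware datetime; here one at UTC-7 for illustration
dt_tz = datetime.datetime(2012, 6, 8, 9, 27, 32,
                          tzinfo=datetime.timezone(datetime.timedelta(hours=-7)))

# Convert to UTC first, then drop tzinfo to get a naive UTC datetime
dt_naive_utc = dt_tz.astimezone(datetime.timezone.utc).replace(tzinfo=None)
print(dt_naive_utc)   # 2012-06-08 16:27:32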

为什么在split()结果中返回空字符串?

问题:为什么在split()结果中返回空字符串?

'/segment/segment/'.split('/') 返回 ['', 'segment', 'segment', ''] 的意义何在？

注意其中的空元素。如果要分割的分隔符恰好位于字符串的开头和末尾，那么在两端各返回一个空字符串又能给您带来什么额外的价值呢？

What is the point of '/segment/segment/'.split('/') returning ['', 'segment', 'segment', '']?

Notice the empty elements. If you’re splitting on a delimiter that happens to be at position one and at the very end of a string, what extra value does it give you to have the empty string returned from each end?


回答 0

str.split 与 str.join 互补，所以

"/".join(['', 'segment', 'segment', ''])

让您返回原始字符串。

如果没有这些空字符串，那么 join() 之后开头和结尾的 '/' 就会丢失。

str.split complements str.join, so

"/".join(['', 'segment', 'segment', ''])

gets you back the original string.

If the empty strings were not there, the first and last '/' would be missing after the join()
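A quick round trip shows why those empty strings matter:

s = '/segment/segment/'
parts = s.split('/')           # ['', 'segment', 'segment', '']
assert '/'.join(parts) == s    # the empty strings preserve the leading/trailing '/'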


回答 1

更一般而言,要删除split()结果中返回的空字符串,您可能需要查看该filter函数。

例:

filter(None, '/segment/segment/'.split('/'))

退货

['segment', 'segment']

More generally, to remove empty strings returned in split() results, you may want to look at the filter function.

Example:

f = filter(None, '/segment/segment/'.split('/'))
s_all = list(f)

returns

['segment', 'segment']

回答 2

这里有两点要考虑:

  • 期望 '/segment/segment/'.split('/') 的结果等于 ['segment', 'segment'] 是合理的，但这样会丢失信息。如果 split() 按您想要的方式工作，那么当我告诉您 a.split('/') == ['segment', 'segment'] 时，您就无法告诉我 a 原来是什么。
  • 'a//b'.split('/') 的结果应该是什么？['a', 'b'] 还是 ['a', '', 'b']？也就是说，split() 是否应该合并相邻的分隔符？如果合并，那么解析用某个字符分隔、且某些字段可能为空的数据就会非常困难。我可以肯定，有很多人在上面这种情况下确实想要结果中的空值！

最后,归结为两点:

一致性：如果 a 中有 n 个分隔符，那么 split() 之后我会得到 n+1 个值。

复杂的事情应该做得到，简单的事情应该做得轻松：如果您想忽略 split() 结果中的空字符串，随时可以这样做：

def mysplit(s, delim=None):
    return [x for x in s.split(delim) if x]

但如果不想忽略空值，也应该能够做到。

这门语言必须为 split() 选定一种定义，用例太多，没有哪种默认行为能满足所有人的需求。我认为 Python 的选择是好的，也是最合乎逻辑的。（顺便说一句，我不喜欢 C 的 strtok() 的原因之一，就是它会合并相邻的分隔符，导致很难用它做严肃的解析/分词。）

有一个例外：不带参数的 a.split() 会压缩连续的空白字符，但可以认为在那种情况下这样做是对的。如果您不想要这种行为，随时可以用 a.split(' ')。

There are two main points to consider here:

  • Expecting the result of '/segment/segment/'.split('/') to be equal to ['segment', 'segment'] is reasonable, but then this loses information. If split() worked the way you wanted, if I tell you that a.split('/') == ['segment', 'segment'], you can’t tell me what a was.
  • What should the result of 'a//b'.split('/') be? ['a', 'b']? or ['a', '', 'b']? I.e., should split() merge adjacent delimiters? If it should, then it will be very hard to parse data that’s delimited by a character where some of the fields can be empty. I am fairly sure there are many people who do want the empty values in the result for the above case!

In the end, it boils down to two things:

Consistency: if I have n delimiters, in a, I get n+1 values back after the split().

It should be possible to do complex things, and easy to do simple things: if you want to ignore empty strings as a result of the split(), you can always do:

def mysplit(s, delim=None):
    return [x for x in s.split(delim) if x]

but if one doesn’t want to ignore the empty values, one should be able to.

The language has to pick one definition of split()—there are too many different use cases to satisfy everyone’s requirement as a default. I think that Python’s choice is a good one, and is the most logical. (As an aside, one of the reasons I don’t like C’s strtok() is because it merges adjacent delimiters, making it extremely hard to do serious parsing/tokenization with it.)

There is one exception: a.split() without an argument squeezes consecutive white-space, but one can argue that this is the right thing to do in that case. If you don’t want the behavior, you can always use a.split(' ').
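A short sketch of both points, the “n delimiters, n+1 values” invariant and the mysplit helper from above:

def mysplit(s, delim=None):
    return [x for x in s.split(delim) if x]

a = '/segment/segment/'
parts = a.split('/')
assert len(parts) == a.count('/') + 1   # n delimiters -> n + 1 values
print(parts)                            # ['', 'segment', 'segment', '']
print(mysplit(a, '/'))                  # ['segment', 'segment']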


回答 3

x.split(y) 始终返回一个包含 1 + x.count(y) 个元素的列表，这是一种宝贵的规律性。正如 @gnibbler 已经指出的，它使 split 和 join 互为精确的逆运算（它们显然也应该如此）；它也精确对应了各种以分隔符连接的记录的语义（例如 csv 文件的行 [[不考虑引号问题]]、Unix 中 /etc/group 的行等等）；它还允许（如 @Roman 的回答所述）轻松检查（例如）绝对路径与相对路径（在文件路径和 URL 中），等等。

另一种看待它的方式是：您不应该在毫无收益的情况下随意丢弃信息。让 x.split(y) 等价于 x.strip(y).split(y) 能得到什么好处？当然什么也得不到。当您确实想要第二种形式时，直接写它很容易；但如果第一种形式被武断地规定为第二种的含义，那么当您想要第一种时（正如上一段指出的，这绝非罕见），就得做很多额外的工作。

但实际上，用数学规律性来思考，是您能自学到的、设计出合格 API 的最简单也最通用的方法。举个不同的例子：对于任何有效的 x 和 y，都有 x == x[:y] + x[y:]，这立刻说明了为什么切片应该不包含其中一端。您能表述的不变式断言越简单，由此产生的语义就越有可能正是您在实际使用中需要的语义，这也是“数学在描述宇宙时出奇有用”这一神秘事实的一部分。

试着为一种把前导和尾随分隔符当作特殊情况的 split 方言表述不变式试试看…… 一个反例：像 isspace 这样的字符串方法就没有做到最简单。x.isspace() 等价于 x and all(c in string.whitespace for c in x)，正是那个多余的前导 x and，让您经常不得不写 not x or x.isspace()，以找回本应设计进 is... 系列字符串方法中的那种简单性（即空字符串“是”您想要的任何东西）。这与普通人的直觉相反，也许 [[空集合就像零一样，总是让大多数人困惑 ;-)]]，但完全符合经过精心锤炼的数学常识！-)

Having x.split(y) always return a list of 1 + x.count(y) items is a precious regularity — as @gnibbler’s already pointed out it makes split and join exact inverses of each other (as they obviously should be), it also precisely maps the semantics of all kinds of delimiter-joined records (such as csv file lines [[net of quoting issues]], lines from /etc/group in Unix, and so on), it allows (as @Roman’s answer mentioned) easy checks for (e.g.) absolute vs relative paths (in file paths and URLs), and so forth.

Another way to look at it is that you shouldn’t wantonly toss information out of the window for no gain. What would be gained in making x.split(y) equivalent to x.strip(y).split(y)? Nothing, of course — it’s easy to use the second form when that’s what you mean, but if the first form was arbitrarily deemed to mean the second one, you’d have lot of work to do when you do want the first one (which is far from rare, as the previous paragraph points out).

But really, thinking in terms of mathematical regularity is the simplest and most general way you can teach yourself to design passable APIs. To take a different example, it’s very important that for any valid x and y x == x[:y] + x[y:] — which immediately indicates why one extreme of a slicing should be excluded. The simpler the invariant assertion you can formulate, the likelier it is that the resulting semantics are what you need in real life uses — part of the mystical fact that maths is very useful in dealing with the universe.

Try formulating the invariant for a split dialect in which leading and trailing delimiters are special-cased… counter-example: string methods such as isspace are not maximally simple — x.isspace() is equivalent to x and all(c in string.whitespace for c in x) — that silly leading x and is why you so often find yourself coding not x or x.isspace(), to get back to the simplicity which should have been designed into the is... string methods (whereby an empty string “is” anything you want — contrary to man-in-the-street horse-sense, maybe [[empty sets, like zero &c, have always confused most people;-)]], but fully conforming to obvious well-refined mathematical common-sense!-).
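Both invariants mentioned above are easy to check directly; a small illustrative sketch:

x = 'abcdef'
# slicing invariant: for any valid y, x == x[:y] + x[y:]
assert all(x == x[:y] + x[y:] for y in range(len(x) + 1))

# split/join invariant: y.join(x.split(y)) == x, even with leading/trailing/empty parts
for s in ('/a/b/', 'a//b', ''):
    assert '/'.join(s.split('/')) == s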


回答 4

我不确定您要寻找哪种答案?您得到三个匹配,因为您有三个定界符。如果您不想要那个空的,只需使用:

'/segment/segment/'.strip('/').split('/')

I’m not sure what kind of answer you’re looking for? You get three matches because you have three delimiters. If you don’t want that empty one, just use:

'/segment/segment/'.strip('/').split('/')

回答 5

好吧，它让您知道那里有分隔符。因此，看到 4 个结果就知道有 3 个分隔符。这让您可以利用这一信息做任何想做的事情，而不是让 Python 丢掉空元素，然后在您需要知道首尾是否有分隔符时再让您手动去检查。

一个简单的例子:假设您要检查绝对文件名和相对文件名。这样,您就可以使用拆分完成所有操作,而不必检查文件名的第一个字符是什么。

Well, it lets you know there was a delimiter there. So, seeing 4 results lets you know you had 3 delimiters. This gives you the power to do whatever you want with this information, rather than having Python drop the empty elements, and then making you manually check for starting or ending delimiters if you need to know it.

Simple example: Say you want to check for absolute vs. relative filenames. This way you can do it all with the split, without also having to check what the first character of your filename is.


回答 6

考虑以下最小示例:

>>> '/'.split('/')
['', '']

split 必须给出分隔符 '/' 之前和之后的内容，但这里没有其他字符。因此它只能给您空字符串；从技术上讲，空字符串位于 '/' 的前面和后面，因为 '' + '/' + '' == '/'。

Consider this minimal example:

>>> '/'.split('/')
['', '']

split must give you what’s before and after the delimiter '/', but there are no other characters. So it has to give you the empty string, which technically precedes and follows the '/', because '' + '/' + '' == '/'.


virtualenv的问题-无法激活

问题:virtualenv的问题-无法激活

我在项目周围创建了一个virtualenv,但是当我尝试激活它时却无法。它可能只是语法或文件夹位置,但是我现在很困惑。

您可以在下面看到，我创建了 virtualenv 并将其命名为 venv。一切看起来都没问题，然后我尝试运行 source venv/bin/activate 来激活它。

我在想这可能与我的系统路径有关,但不确定要指向什么(我确实知道如何编辑路径)。我在python 7 / Windows OS上,虚拟环境2.2.x

Processing dependencies for virtualenv
Finished processing dependencies for virtualenv

c:\testdjangoproj\mysite>virtualenv --no-site-packages venv
The --no-site-packages flag is deprecated; it is now the default behavior.
Using real prefix 'C:\\Program Files (x86)\\Python'
New python executable in venv\Scripts\python.exe
File venv\Lib\distutils\distutils.cfg exists with different content; not overwriting
Installing setuptools.................done.
Installing pip...................done.

c:\testdjangoproj\mysite>source venv/bin/activate
'source' is not recognized as an internal or external command,
operable program or batch file.

c:\testdjangoproj\mysite>source venv/bin/activate
'source' is not recognized as an internal or external command,
operable program or batch file.

c:\testdjangoproj\mysite>source mysite/bin/activate
'source' is not recognized as an internal or external command,
operable program or batch file.

c:\testdjangoproj\mysite>

I created a virtualenv around my project, but when I try to activate it I cannot. It might just be syntax or folder location, but I am stumped right now.

You can see below, I create the virtualenv and call it venv. Everything looks good, then I try to activate it by running source venv/bin/activate

I’m thinking it might just have to do with my system path, but not sure what to point it to (I do know how to edit the path). I’m on python 7 / windows os, virtual env 2.2.x

Processing dependencies for virtualenv
Finished processing dependencies for virtualenv

c:\testdjangoproj\mysite>virtualenv --no-site-packages venv
The --no-site-packages flag is deprecated; it is now the default behavior.
Using real prefix 'C:\\Program Files (x86)\\Python'
New python executable in venv\Scripts\python.exe
File venv\Lib\distutils\distutils.cfg exists with different content; not overwriting
Installing setuptools.................done.
Installing pip...................done.

c:\testdjangoproj\mysite>source venv/bin/activate
'source' is not recognized as an internal or external command,
operable program or batch file.

c:\testdjangoproj\mysite>source venv/bin/activate
'source' is not recognized as an internal or external command,
operable program or batch file.

c:\testdjangoproj\mysite>source mysite/bin/activate
'source' is not recognized as an internal or external command,
operable program or batch file.

c:\testdjangoproj\mysite>

回答 0

source 是为在Linux(或任何Posix,但不包括Windows)上运行的用户设计的shell命令。

在 Windows 上，virtualenv 会创建一个批处理文件，因此您应该改为运行 venv\Scripts\activate（参见 virtualenv 文档中关于 activate 脚本的部分）。

编辑: 这里的Windows技巧不是指定BAT扩展名:

PS C:\DEV\aProject\env\Scripts> & .\activate
(env) PS C:\DEV\aProject\env\Scripts>

source is a shell command designed for users running on Linux (or any Posix, but whatever, not Windows).

On Windows, virtualenv creates a batch file, so you should run venv\Scripts\activate instead (per the virtualenv documentation on the activate script).

Edit: The trick here for Windows is not specifying the BAT extension:

PS C:\DEV\aProject\env\Scripts> & .\activate
(env) PS C:\DEV\aProject\env\Scripts>
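Once the activate script has run, you can also confirm from Python itself that the interpreter belongs to the virtual environment. This is a minimal stdlib-only check and assumes a venv-style environment on Python 3:

import sys

# Inside an activated virtual environment, sys.prefix points at the env,
# while sys.base_prefix still points at the base Python installation.
print(sys.prefix)
print(sys.prefix != getattr(sys, "base_prefix", sys.prefix))   # True inside a venv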


回答 1

我的Windows 10机器中也遇到了同样的问题。我尝试过的步骤是:

进入 anaconda 终端

步骤 1

pip3 install -U pip virtualenv

第2步

virtualenv --system-site-packages -p python ./venv

要么

virtualenv --system-site-packages -p python3 ./venv

第三步

.\venv\Scripts\activate

您可以在 anaconda 的 Spyder 工具中键入 import tensorflow as tf 来验证。

I was also facing the same issue on my Windows 10 machine. The steps I tried were:

Go to the anaconda terminal

Step 1

pip3 install -U pip virtualenv

Step 2

virtualenv --system-site-packages -p python ./venv

or

virtualenv --system-site-packages -p python3 ./venv

Step 3

.\venv\Scripts\activate

You can check it via the Spyder tool in anaconda by typing import tensorflow as tf


回答 2

我有同样的问题。我正在使用Python 2,Windows 10和Git Bash。事实证明,您需要使用Git Bash:

 source venv/Scripts/activate

I had the same problem. I was using Python 2, Windows 10 and Git Bash. Turns out in Git Bash you need to use:

 source venv/Scripts/activate

回答 3

  1. 要进行激活，您可以通过 cd venv 进入您的 virtualenv 目录 venv。

  2. 然后在 Windows 上键入 dir（在 unix 上键入 ls）。您会看到 5 个文件夹：include、Lib、Scripts、tcl 和 60

  3. 现在键入 .\Scripts\activate 以激活您的 virtualenv venv。

您的提示符会发生变化，表明您现在正在虚拟环境中操作。它看起来类似这样：(venv)user@host:~/venv$。

并且您venv现在被激活了。

  1. For activation you can go to the venv your virtualenv directory by cd venv.

  2. Then on Windows, type dir (on unix, type ls). You will get 5 folders include, Lib, Scripts, tcl and 60

  3. Now type .\Scripts\activate to activate your virtualenv venv.

Your prompt will change to indicate that you are now operating within the virtual environment. It will look something like this (venv)user@host:~/venv$.

And your venv is activated now.


回答 4

对于 Windows，在终端中键入 C:\Users\Sid\venv\FirstProject\Scripts\activate（不带引号）。只需给出您项目中 Scripts 文件夹的位置即可。因此，该命令形如 location_of_the_Scripts_Folder\activate。

For windows, type “C:\Users\Sid\venv\FirstProject\Scripts\activate” in the terminal without quotes. Simply give the location of your Scripts folder in your project. So, the command will be location_of_the_Scripts_Folder\activate.


回答 5

确保venv在那,并按照以下命令操作。它适用于Windows 10。

转到您希望虚拟环境驻留的路径:

> cd <my_venv_path>

创建名为“ env”的虚拟环境:

> python -m venv env 

将路径添加到git ignore文件(可选):

> echo env/ >> .gitignore

激活虚拟环境:

> .\env\Scripts\activate

Ensure venv is there and just follow the commands below. It works in Windows 10.

Go to the path where you want your virtual enviroments to reside:

> cd <my_venv_path>

Create the virtual environment named “env”:

> python -m venv env 

Add the path to the git ignore file (optional):

> echo env/ >> .gitignore

Activate the virtual env:

> .\env\Scripts\activate

回答 6

在使用 git bash 的 Windows 上折腾 virtualenv 让我非常头疼，我通常最后会显式指定 python 二进制文件。

如果我的环境在 .env 中，我会通过 ./.env/Scripts/python.exe … 来调用 python，或者在 shebang 行中写 #!./.env/Scripts/python.exe；

两种方式都假设您的工作目录中包含您的 virtualenv（.env）。

I have a hell of a time using virtualenv on windows with git bash, I usually end up specifying the python binary explicitly.

If my environment is in say .env I’ll call python via ./.env/Scripts/python.exe …, or in a shebang line #!./.env/Scripts/python.exe;

Both assuming your working directory contains your virtualenv (.env).


回答 7

您可以在cygwin终端上运行source命令

You can run the source command on cygwin terminal


回答 8

如果在使用 virtualenv yourenvname 命令后看到 5 个文件夹（Include、Lib、Scripts、tcl、pip-selfcheck），请在 cmd 中把目录切换到 Scripts 文件夹，然后直接运行 activate 命令即可。

If you see the 5 folders (Include,Lib,Scripts,tcl,pip-selfcheck) after using the virtualenv yourenvname command, change directory to Scripts folder in the cmd itself and simply use “activate” command.


回答 9

使用任意 gitbash 控制台打开该文件夹。例如，使用 VisualCode 和 Gitbash 控制台程序：1) 为 Windows 安装 Gitbash

2) 使用 VisualCode IDE，右键单击项目，选择“在终端中打开”选项

3) 在 Visualcode 的终端窗口中，找到 Select -> default shell 并将其改为 Gitbash

4) 现在您的项目已经用 bash 控制台和正确的路径打开了，输入 source ./Scripts/activate

顺便说一句：. 加一个空格 = source

open the folder with any gitbash console. for example using visualCode and Gitbash console program: 1)Install Gitbash for windows

2) using the VisualCode IDE, right click over the project and choose the open-in-terminal console option

3) in the terminal window in Visualcode, look for Select -> default shell and change it to Gitbash

4)now your project is open with bash console and right path, put source ./Scripts/activate

btw : . with blank space = source


回答 10

一个小提醒：我在 Win10 的 cmd 上把斜杠方向用反了。根据 python 文档，激活命令是：C:\> <venv>\Scripts\activate.bat。而浏览目录时用的是诸如 cd .env/Scripts 这样的写法。

因此，创建 venv 时我用的是 python -m venv --copies .env，激活时用的是 .env\Scripts\activate.bat

A small reminder, but I had my slashes the wrong way on Win10 cmd. According to python documentation the activate command is: C:\> <venv>\Scripts\activate.bat When you’re browsing directories it’s e.g. cd .env/Scripts

So to create my venv I used python -m venv --copies .env and to activate .env\Scripts\activate.bat


回答 11

source命令正式用于Unix操作系统家族,您基本上不能在Windows上使用它。相反,您可以使用venv\Scripts\activate命令来激活您的虚拟环境。

source command is officially for Unix operating systems family and you can’t use it on windows basically. instead, you can use venv\Scripts\activate command to activate your virtual environment.


回答 12

如果您使用的是 Windows，请使用命令 venv\Scripts\activate（不带 source 一词）来激活虚拟环境。如果您使用的是 PowerShell，则可能需要把 Activate 首字母大写。

If you’re using Windows, use the command “venv\Scripts\activate” (without the word source) to activate the virtual environment. If you’re using PowerShell, you might need to capitalize Activate.


回答 13

如果您使用的是 Windows 操作系统，请在 Gitbash 终端中使用以下命令：$ source venv/Scripts/activate。这会帮助您进入虚拟环境。

If you are using windows OS then in Gitbash terminal use the following command $source venv/Scripts/activate. This will help you to enter the virtual environment.


回答 14

  1. 使用VS代码编辑器打开项目。
  2. 将vs代码终端中的默认shell更改为git bash。

  3. 现在您的项目已经用 bash 控制台和正确的路径打开了，在 Windows 中输入 “source venv\Scripts\activate”

  1. Open your project using VS code editor .
  2. Change the default shell in vs code terminal to git bash.

  3. now your project is open with bash console and right path, put “source venv\Scripts\activate” in Windows


回答 15

导航到您的virtualenv文件夹,例如,..\project1_env> 然后输入

source scripts/activate

例如 ..\project1_env>source scripts/activate

Navigate to your virtualenv folder eg ..\project1_env> Then type

source scripts/activate

eg ..\project1_env>source scripts/activate


回答 16

如果有像我一样的初学者跟着多个 Python 教程操作过，那么现在系统里可能有多个 Python 版本和/或多个版本的 pip/virtualenv/pipenv……

在这种情况下,列出的答案虽然很正确,但可能无济于事。

我要在您的位置尝试的第一件事是卸载并重新安装Python,然后从那里开始。

If some beginner, like me, has followed multiple Python tutorials now possible has multiple Python versions and/or multiple versions of pip/virtualenv/pipenv…

In that case, the answers listed, while many are correct, might not help.

The first thing I would try in your place is uninstall and reinstall Python and go from there.


回答 17

在Windows平台上,

您应该将此命令与在安装虚拟环境的位置指定的路径一起使用。

$ .\env\Scripts\activate 

这样,您应该能够在Windows上激活它。

In Windows platform,

you should use this command with path specified where you have installed a virtual environment.

$ .\env\Scripts\activate 

By this, You should be able to activate this on windows.


回答 18

  1. 以管理员身份打开Powershell
  2. 输入“ Set-ExecutionPolicy RemoteSigned -Force
  3. 运行“ gpedit.msc”,然后转到>管理模板> Windows组件> Windows Powershell
  4. 查找“激活脚本执行”并将其设置为“已激活”
  5. 将执行指令设置为“全部允许”
  6. 应用
  7. 刷新您的环境
  1. Open your powershell as admin
  2. Enter “Set-ExecutionPolicy RemoteSigned -Force
  3. Run “gpedit.msc” and go to >Administrative Templates>Windows Components>Windows Powershell
  4. Look for “Activate scripts execution” and set it on “Activated”
  5. Set execution directive to “Allow All”
  6. Apply
  7. Refresh your env

回答 19

如果您在Windows上使用Anaconda / miniconda,请在命令提示符下使用

conda activate <your-environmentname>

例如，peopleanalytics 是我的虚拟环境的名称，那么我会执行

conda activate peopleanalytics

In case you are using Anaconda / miniconda on windows – in your command prompt use

conda activate <your-environmentname>

e.g. peopleanalytics is the name of my virtual environment, so I say

conda activate peopleanalytics

回答 20

在 Windows 10 中，如果您已经 cd 进了项目目录，只需键入

Scripts/activate

这对我行得通:)

In Windows 10, if you have already cd’d into your project directory, just type

Scripts/activate

That works for me:)


如何在不覆盖数据的情况下(使用熊猫)写入现有的excel文件?

问题:如何在不覆盖数据的情况下(使用熊猫)写入现有的excel文件?

我使用熊猫以以下方式写入excel文件:

import pandas

writer = pandas.ExcelWriter('Masterfile.xlsx') 

data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])

writer.save()

Masterfile.xlsx 已经包含许多不同的工作表（选项卡），但其中还没有 “Main”。

Pandas 正确地写入了 “Main” 工作表，但不幸的是，它也删除了所有其他工作表。

I use pandas to write to excel file in the following fashion:

import pandas

writer = pandas.ExcelWriter('Masterfile.xlsx') 

data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])

writer.save()

Masterfile.xlsx already contains a number of different tabs. However, it does not yet contain “Main”.

Pandas correctly writes to the “Main” sheet; unfortunately, it also deletes all the other tabs.


回答 0

Pandas 文档说它对 xlsx 文件使用 openpyxl。快速浏览一下 ExcelWriter 的代码可以看出，类似下面这样的做法也许行得通：

import pandas
from openpyxl import load_workbook

book = load_workbook('Masterfile.xlsx')
writer = pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl') 
writer.book = book

## ExcelWriter for some reason uses writer.sheets to access the sheet.
## If you leave it empty it will not know that sheet Main is already there
## and will create a new sheet.

writer.sheets = dict((ws.title, ws) for ws in book.worksheets)

data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])

writer.save()

Pandas docs says it uses openpyxl for xlsx files. Quick look through the code in ExcelWriter gives a clue that something like this might work out:

import pandas
from openpyxl import load_workbook

book = load_workbook('Masterfile.xlsx')
writer = pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl') 
writer.book = book

## ExcelWriter for some reason uses writer.sheets to access the sheet.
## If you leave it empty it will not know that sheet Main is already there
## and will create a new sheet.

writer.sheets = dict((ws.title, ws) for ws in book.worksheets)

data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])

writer.save()

回答 1

这是一个辅助函数:

def append_df_to_excel(filename, df, sheet_name='Sheet1', startrow=None,
                       truncate_sheet=False, 
                       **to_excel_kwargs):
    """
    Append a DataFrame [df] to existing Excel file [filename]
    into [sheet_name] Sheet.
    If [filename] doesn't exist, then this function will create it.

    Parameters:
      filename : File path or existing ExcelWriter
                 (Example: '/path/to/file.xlsx')
      df : dataframe to save to workbook
      sheet_name : Name of sheet which will contain DataFrame.
                   (default: 'Sheet1')
      startrow : upper left cell row to dump data frame.
                 Per default (startrow=None) calculate the last row
                 in the existing DF and write to the next row...
      truncate_sheet : truncate (remove and recreate) [sheet_name]
                       before writing DataFrame to Excel file
      to_excel_kwargs : arguments which will be passed to `DataFrame.to_excel()`
                        [can be dictionary]

    Returns: None
    """
    from openpyxl import load_workbook

    # ignore [engine] parameter if it was passed
    if 'engine' in to_excel_kwargs:
        to_excel_kwargs.pop('engine')

    writer = pd.ExcelWriter(filename, engine='openpyxl')

    # Python 2.x: define [FileNotFoundError] exception if it doesn't exist 
    try:
        FileNotFoundError
    except NameError:
        FileNotFoundError = IOError


    try:
        # try to open an existing workbook
        writer.book = load_workbook(filename)

        # get the last row in the existing Excel sheet
        # if it was not specified explicitly
        if startrow is None and sheet_name in writer.book.sheetnames:
            startrow = writer.book[sheet_name].max_row

        # truncate sheet
        if truncate_sheet and sheet_name in writer.book.sheetnames:
            # index of [sheet_name] sheet
            idx = writer.book.sheetnames.index(sheet_name)
            # remove [sheet_name]
            writer.book.remove(writer.book.worksheets[idx])
            # create an empty sheet [sheet_name] using old index
            writer.book.create_sheet(sheet_name, idx)

        # copy existing sheets
        writer.sheets = {ws.title:ws for ws in writer.book.worksheets}
    except FileNotFoundError:
        # file does not exist yet, we will create it
        pass

    if startrow is None:
        startrow = 0

    # write out the new sheet
    df.to_excel(writer, sheet_name, startrow=startrow, **to_excel_kwargs)

    # save the workbook
    writer.save()

注意：对于 0.21.0 之前的 Pandas，请将 sheet_name 替换为 sheetname！

用法示例:

append_df_to_excel('d:/temp/test.xlsx', df)

append_df_to_excel('d:/temp/test.xlsx', df, header=None, index=False)

append_df_to_excel('d:/temp/test.xlsx', df, sheet_name='Sheet2', index=False)

append_df_to_excel('d:/temp/test.xlsx', df, sheet_name='Sheet2', index=False, startrow=25)

Here is a helper function:

def append_df_to_excel(filename, df, sheet_name='Sheet1', startrow=None,
                       truncate_sheet=False, 
                       **to_excel_kwargs):
    """
    Append a DataFrame [df] to existing Excel file [filename]
    into [sheet_name] Sheet.
    If [filename] doesn't exist, then this function will create it.

    Parameters:
      filename : File path or existing ExcelWriter
                 (Example: '/path/to/file.xlsx')
      df : dataframe to save to workbook
      sheet_name : Name of sheet which will contain DataFrame.
                   (default: 'Sheet1')
      startrow : upper left cell row to dump data frame.
                 Per default (startrow=None) calculate the last row
                 in the existing DF and write to the next row...
      truncate_sheet : truncate (remove and recreate) [sheet_name]
                       before writing DataFrame to Excel file
      to_excel_kwargs : arguments which will be passed to `DataFrame.to_excel()`
                        [can be dictionary]

    Returns: None

    (c) [MaxU](https://stackoverflow.com/users/5741205/maxu?tab=profile)
    """
    from openpyxl import load_workbook

    # ignore [engine] parameter if it was passed
    if 'engine' in to_excel_kwargs:
        to_excel_kwargs.pop('engine')

    writer = pd.ExcelWriter(filename, engine='openpyxl')

    # Python 2.x: define [FileNotFoundError] exception if it doesn't exist 
    try:
        FileNotFoundError
    except NameError:
        FileNotFoundError = IOError


    try:
        # try to open an existing workbook
        writer.book = load_workbook(filename)
        
        # get the last row in the existing Excel sheet
        # if it was not specified explicitly
        if startrow is None and sheet_name in writer.book.sheetnames:
            startrow = writer.book[sheet_name].max_row

        # truncate sheet
        if truncate_sheet and sheet_name in writer.book.sheetnames:
            # index of [sheet_name] sheet
            idx = writer.book.sheetnames.index(sheet_name)
            # remove [sheet_name]
            writer.book.remove(writer.book.worksheets[idx])
            # create an empty sheet [sheet_name] using old index
            writer.book.create_sheet(sheet_name, idx)
        
        # copy existing sheets
        writer.sheets = {ws.title:ws for ws in writer.book.worksheets}
    except FileNotFoundError:
        # file does not exist yet, we will create it
        pass

    if startrow is None:
        startrow = 0

    # write out the new sheet
    df.to_excel(writer, sheet_name, startrow=startrow, **to_excel_kwargs)

    # save the workbook
    writer.save()
            

NOTE: for Pandas < 0.21.0, replace sheet_name with sheetname!

Usage examples:

append_df_to_excel('d:/temp/test.xlsx', df)

append_df_to_excel('d:/temp/test.xlsx', df, header=None, index=False)

append_df_to_excel('d:/temp/test.xlsx', df, sheet_name='Sheet2', index=False)

append_df_to_excel('d:/temp/test.xlsx', df, sheet_name='Sheet2', index=False, startrow=25)

回答 2

使用 openpyxl 2.4.0 和 pandas 0.19.2 时，@ski 提出的流程变得更简单一些：

import pandas
from openpyxl import load_workbook

with pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl') as writer:
    writer.book = load_workbook('Masterfile.xlsx')
    data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])
#That's it!

With openpyxl version 2.4.0 and pandas version 0.19.2, the process @ski came up with gets a bit simpler:

import pandas
from openpyxl import load_workbook

with pandas.ExcelWriter('Masterfile.xlsx', engine='openpyxl') as writer:
    writer.book = load_workbook('Masterfile.xlsx')
    data_filtered.to_excel(writer, "Main", cols=['Diff1', 'Diff2'])
#That's it!

回答 3

从 pandas 0.24 开始，您可以使用 ExcelWriter 的 mode 关键字参数来简化此操作：

import pandas as pd

with pd.ExcelWriter('the_file.xlsx', engine='openpyxl', mode='a') as writer: 
     data_filtered.to_excel(writer) 

Starting in pandas 0.24 you can simplify this with the mode keyword argument of ExcelWriter:

import pandas as pd

with pd.ExcelWriter('the_file.xlsx', engine='openpyxl', mode='a') as writer: 
     data_filtered.to_excel(writer) 

回答 4

老问题了,但我猜有些人还在搜索这个-所以…

我觉得这个方法不错，因为所有工作表都会被加载进一个以工作表名为键、DataFrame 为值的字典中，这个字典由 pandas 通过 sheetname=None 选项创建。在把电子表格读入 dict 和把它从 dict 写回之间，添加、删除或修改工作表都很简单。对我来说，就速度和格式而言，xlsxwriter 在这个特定任务上比 openpyxl 表现更好。

注意:未来版本的熊猫(0.21.0+)将把“ sheetname”参数更改为“ sheet_name”。

# read a single or multi-sheet excel file
# (returns dict of sheetname(s), dataframe(s))
ws_dict = pd.read_excel(excel_file_path,
                        sheetname=None)

# all worksheets are accessible as dataframes.

# easy to change a worksheet as a dataframe:
mod_df = ws_dict['existing_worksheet']

# do work on mod_df...then reassign
ws_dict['existing_worksheet'] = mod_df

# add a dataframe to the workbook as a new worksheet with
# ws name, df as dict key, value:
ws_dict['new_worksheet'] = some_other_dataframe

# when done, write dictionary back to excel...
# xlsxwriter honors datetime and date formats
# (only included as example)...
with pd.ExcelWriter(excel_file_path,
                    engine='xlsxwriter',
                    datetime_format='yyyy-mm-dd',
                    date_format='yyyy-mm-dd') as writer:

    for ws_name, df_sheet in ws_dict.items():
        df_sheet.to_excel(writer, sheet_name=ws_name)

对于2013年问题中的示例:

ws_dict = pd.read_excel('Masterfile.xlsx',
                        sheetname=None)

ws_dict['Main'] = data_filtered[['Diff1', 'Diff2']]

with pd.ExcelWriter('Masterfile.xlsx',
                    engine='xlsxwriter') as writer:

    for ws_name, df_sheet in ws_dict.items():
        df_sheet.to_excel(writer, sheet_name=ws_name)

Old question, but I am guessing some people still search for this – so…

I find this method nice because all worksheets are loaded into a dictionary of sheet name and dataframe pairs, created by pandas with the sheetname=None option. It is simple to add, delete or modify worksheets between reading the spreadsheet into the dict format and writing it back from the dict. For me the xlsxwriter works better than openpyxl for this particular task in terms of speed and format.

Note: future versions of pandas (0.21.0+) will change the “sheetname” parameter to “sheet_name”.

# read a single or multi-sheet excel file
# (returns dict of sheetname(s), dataframe(s))
ws_dict = pd.read_excel(excel_file_path,
                        sheetname=None)

# all worksheets are accessible as dataframes.

# easy to change a worksheet as a dataframe:
mod_df = ws_dict['existing_worksheet']

# do work on mod_df...then reassign
ws_dict['existing_worksheet'] = mod_df

# add a dataframe to the workbook as a new worksheet with
# ws name, df as dict key, value:
ws_dict['new_worksheet'] = some_other_dataframe

# when done, write dictionary back to excel...
# xlsxwriter honors datetime and date formats
# (only included as example)...
with pd.ExcelWriter(excel_file_path,
                    engine='xlsxwriter',
                    datetime_format='yyyy-mm-dd',
                    date_format='yyyy-mm-dd') as writer:

    for ws_name, df_sheet in ws_dict.items():
        df_sheet.to_excel(writer, sheet_name=ws_name)

For the example in the 2013 question:

ws_dict = pd.read_excel('Masterfile.xlsx',
                        sheetname=None)

ws_dict['Main'] = data_filtered[['Diff1', 'Diff2']]

with pd.ExcelWriter('Masterfile.xlsx',
                    engine='xlsxwriter') as writer:

    for ws_name, df_sheet in ws_dict.items():
        df_sheet.to_excel(writer, sheet_name=ws_name)

回答 5

我知道这是一个较旧的线程,但这是您在搜索时发现的第一项,并且如果需要将图表保留在已创建的工作簿中,则上述解决方案将不起作用。在这种情况下,xlwings是一个更好的选择-它允许您写入Excel书并保留图表/图表数据。

简单的例子:

import xlwings as xw
import pandas as pd

#create DF
months = ['2017-01','2017-02','2017-03','2017-04','2017-05','2017-06','2017-07','2017-08','2017-09','2017-10','2017-11','2017-12']
value1 = [x * 5+5 for x in range(len(months))]
df = pd.DataFrame(value1, index = months, columns = ['value1'])
df['value2'] = df['value1']+5
df['value3'] = df['value2']+5

#load workbook that has a chart in it
wb = xw.Book('C:\\data\\bookwithChart.xlsx')

ws = wb.sheets['chartData']

ws.range('A1').options(index=False).value = df

wb = xw.Book('C:\\data\\bookwithChart_updated.xlsx')

xw.apps[0].quit()

I know this is an older thread, but this is the first item you find when searching, and the above solutions don’t work if you need to retain charts in a workbook that you already have created. In that case, xlwings is a better option – it allows you to write to the excel book and keeps the charts/chart data.

simple example:

import xlwings as xw
import pandas as pd

#create DF
months = ['2017-01','2017-02','2017-03','2017-04','2017-05','2017-06','2017-07','2017-08','2017-09','2017-10','2017-11','2017-12']
value1 = [x * 5+5 for x in range(len(months))]
df = pd.DataFrame(value1, index = months, columns = ['value1'])
df['value2'] = df['value1']+5
df['value3'] = df['value2']+5

#load workbook that has a chart in it
wb = xw.Book('C:\\data\\bookwithChart.xlsx')

ws = wb.sheets['chartData']

ws.range('A1').options(index=False).value = df

wb = xw.Book('C:\\data\\bookwithChart_updated.xlsx')

xw.apps[0].quit()

回答 6

在pandas 0.24中有一个更好的解决方案:

with pd.ExcelWriter(path, mode='a') as writer:
    s.to_excel(writer, sheet_name='another sheet', index=False)

之前:

之后:

因此,现在就升级你的pandas:

pip install --upgrade pandas

There is a better solution in pandas 0.24:

with pd.ExcelWriter(path, mode='a') as writer:
    s.to_excel(writer, sheet_name='another sheet', index=False)

before:

after:

so upgrade your pandas now:

pip install --upgrade pandas
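As a small, hedged sketch of how that append mode might look end to end (this assumes the target file, here 'Masterfile.xlsx' as in the question, already exists and that openpyxl is installed, since append mode needs an engine that can open the existing workbook):

import pandas as pd

s = pd.DataFrame({'Diff1': [1, 2], 'Diff2': [3, 4]})

# append a new worksheet without rewriting the existing ones;
# openpyxl is used because append mode has to read the file already on disk
with pd.ExcelWriter('Masterfile.xlsx', mode='a', engine='openpyxl') as writer:
    s.to_excel(writer, sheet_name='another sheet', index=False)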

回答 7

def append_sheet_to_master(self, master_file_path, current_file_path, sheet_name):
    try:
        master_book = load_workbook(master_file_path)
        master_writer = pandas.ExcelWriter(master_file_path, engine='openpyxl')
        master_writer.book = master_book
        master_writer.sheets = dict((ws.title, ws) for ws in master_book.worksheets)
        current_frames = pandas.ExcelFile(current_file_path).parse(pandas.ExcelFile(current_file_path).sheet_names[0],
                                                               header=None,
                                                               index_col=None)
        current_frames.to_excel(master_writer, sheet_name, index=None, header=False)

        master_writer.save()
    except Exception as e:
        raise e

这运行得非常好,唯一的问题是主文件(也就是我们要添加新工作表的那个文件)的格式会丢失。

def append_sheet_to_master(self, master_file_path, current_file_path, sheet_name):
    try:
        master_book = load_workbook(master_file_path)
        master_writer = pandas.ExcelWriter(master_file_path, engine='openpyxl')
        master_writer.book = master_book
        master_writer.sheets = dict((ws.title, ws) for ws in master_book.worksheets)
        current_frames = pandas.ExcelFile(current_file_path).parse(pandas.ExcelFile(current_file_path).sheet_names[0],
                                                               header=None,
                                                               index_col=None)
        current_frames.to_excel(master_writer, sheet_name, index=None, header=False)

        master_writer.save()
    except Exception as e:
        raise e

This works perfectly fine only thing is that formatting of the master file(file to which we add new sheet) is lost.


回答 8

writer = pd.ExcelWriter('prueba1.xlsx', engine='openpyxl', keep_date_col=True)

“keep_date_col”希望对您有所帮助

writer = pd.ExcelWriter('prueba1.xlsx', engine='openpyxl', keep_date_col=True)

Hopefully “keep_date_col” helps you


回答 9

book = load_workbook(xlsFilename)
writer = pd.ExcelWriter(xlsFilename)
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
df.to_excel(writer, sheet_name=sheetName, index=False)
writer.save()

book = load_workbook(xlsFilename)
writer = pd.ExcelWriter(xlsFilename)
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
df.to_excel(writer, sheet_name=sheetName, index=False)
writer.save()

最干净、最Pythonic的方式来获取明天的日期?

问题:最干净、最Pythonic的方式来获取明天的日期?

获取明天日期的最干净、最Pythonic的方式是什么?肯定有比给当天加一、再处理月末等情况更好的办法。

What is the cleanest and most Pythonic way to get tomorrow’s date? There must be a better way than to add one to the day, handle days at the end of the month, etc.


回答 0

datetime.date.today() + datetime.timedelta(days=1) 应该可以

datetime.date.today() + datetime.timedelta(days=1) should do the trick


回答 1

timedelta 可以处理增加的天,秒,微秒,毫秒,分钟,小时或星期。

>>> import datetime
>>> today = datetime.date.today()
>>> today
datetime.date(2009, 10, 1)
>>> today + datetime.timedelta(days=1)
datetime.date(2009, 10, 2)
>>> datetime.date(2009,10,31) + datetime.timedelta(hours=24)
datetime.date(2009, 11, 1)

如评论中所问,闰日没有问题:

>>> datetime.date(2004, 2, 28) + datetime.timedelta(days=1)
datetime.date(2004, 2, 29)
>>> datetime.date(2004, 2, 28) + datetime.timedelta(days=2)
datetime.date(2004, 3, 1)
>>> datetime.date(2005, 2, 28) + datetime.timedelta(days=1)
datetime.date(2005, 3, 1)

timedelta can handle adding days, seconds, microseconds, milliseconds, minutes, hours, or weeks.

>>> import datetime
>>> today = datetime.date.today()
>>> today
datetime.date(2009, 10, 1)
>>> today + datetime.timedelta(days=1)
datetime.date(2009, 10, 2)
>>> datetime.date(2009,10,31) + datetime.timedelta(hours=24)
datetime.date(2009, 11, 1)

As asked in a comment, leap days pose no problem:

>>> datetime.date(2004, 2, 28) + datetime.timedelta(days=1)
datetime.date(2004, 2, 29)
>>> datetime.date(2004, 2, 28) + datetime.timedelta(days=2)
datetime.date(2004, 3, 1)
>>> datetime.date(2005, 2, 28) + datetime.timedelta(days=1)
datetime.date(2005, 3, 1)

回答 2

不过没有处理闰秒:

>>> from datetime import datetime, timedelta
>>> dt = datetime(2008,12,31,23,59,59)
>>> str(dt)
'2008-12-31 23:59:59'
>>> # leap second was added at the end of 2008, 
>>> # adding one second should create a datetime
>>> # of '2008-12-31 23:59:60'
>>> str(dt+timedelta(0,1))
'2009-01-01 00:00:00'
>>> str(dt+timedelta(0,2))
'2009-01-01 00:00:01'

该死的

编辑 - @Mark:文档说“是”,但代码表示“并非如此”:

>>> time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S")
(2008, 12, 31, 23, 59, 60, 2, 366, -1)
>>> time.mktime(time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S"))
1230789600.0
>>> time.gmtime(time.mktime(time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S")))
(2009, 1, 1, 6, 0, 0, 3, 1, 0)
>>> time.localtime(time.mktime(time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S")))
(2009, 1, 1, 0, 0, 0, 3, 1, 0)

我原以为gmtime或localtime会接受mktime返回的值,并把秒数为60的原始元组还给我。而这个测试表明,这些闰秒就这么悄悄消失了…

>>> a = time.mktime(time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S"))
>>> b = time.mktime(time.strptime("2009-01-01 00:00:00","%Y-%m-%d %H:%M:%S"))
>>> a,b
(1230789600.0, 1230789600.0)
>>> b-a
0.0

No handling of leap seconds tho:

>>> from datetime import datetime, timedelta
>>> dt = datetime(2008,12,31,23,59,59)
>>> str(dt)
'2008-12-31 23:59:59'
>>> # leap second was added at the end of 2008, 
>>> # adding one second should create a datetime
>>> # of '2008-12-31 23:59:60'
>>> str(dt+timedelta(0,1))
'2009-01-01 00:00:00'
>>> str(dt+timedelta(0,2))
'2009-01-01 00:00:01'

darn.

EDIT – @Mark: The docs say “yes”, but the code says “not so much”:

>>> time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S")
(2008, 12, 31, 23, 59, 60, 2, 366, -1)
>>> time.mktime(time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S"))
1230789600.0
>>> time.gmtime(time.mktime(time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S")))
(2009, 1, 1, 6, 0, 0, 3, 1, 0)
>>> time.localtime(time.mktime(time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S")))
(2009, 1, 1, 0, 0, 0, 3, 1, 0)

I would think that gmtime or localtime would take the value returned by mktime and give me back the original tuple, with 60 as the number of seconds. And this test shows that these leap seconds can just fade away…

>>> a = time.mktime(time.strptime("2008-12-31 23:59:60","%Y-%m-%d %H:%M:%S"))
>>> b = time.mktime(time.strptime("2009-01-01 00:00:00","%Y-%m-%d %H:%M:%S"))
>>> a,b
(1230789600.0, 1230789600.0)
>>> b-a
0.0

回答 3

即使是基本time模块也可以处理此问题:

import time
time.localtime(time.time() + 24*3600)

Even the basic time module can handle this:

import time
time.localtime(time.time() + 24*3600)
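If you want that as a formatted string rather than a struct_time, a minimal sketch along the same lines (just adding strftime):

import time

# tomorrow's date, formatted from the struct_time returned by localtime
tomorrow = time.strftime('%Y-%m-%d', time.localtime(time.time() + 24*3600))
print(tomorrow)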

遍历所有嵌套的字典值?

问题:遍历所有嵌套的字典值?

for k, v in d.iteritems():
    if type(v) is dict:
        for t, c in v.iteritems():
            print "{0} : {1}".format(t, c)

我试图遍历字典并打印出所有值不是嵌套字典的键值对。如果值是字典,我想进入它并打印出它的键值对…等等。有什么帮助吗?

编辑

这个怎么样?它仍然只打印一件事。

def printDict(d):
    for k, v in d.iteritems():
        if type(v) is dict:
            printDict(v)
        else:
            print "{0} : {1}".format(k, v)

完整的测试用例

字典:

{u'xml': {u'config': {u'portstatus': {u'status': u'good'}, u'target': u'1'},
      u'port': u'11'}}

结果:

xml : {u'config': {u'portstatus': {u'status': u'good'}, u'target': u'1'}, u'port': u'11'}
for k, v in d.iteritems():
    if type(v) is dict:
        for t, c in v.iteritems():
            print "{0} : {1}".format(t, c)

I’m trying to loop through a dictionary and print out all key value pairs where the value is not a nested dictionary. If the value is a dictionary I want to go into it and print out its key value pairs…etc. Any help?

EDIT

How about this? It still only prints one thing.

def printDict(d):
    for k, v in d.iteritems():
        if type(v) is dict:
            printDict(v)
        else:
            print "{0} : {1}".format(k, v)

Full Test Case

Dictionary:

{u'xml': {u'config': {u'portstatus': {u'status': u'good'}, u'target': u'1'},
      u'port': u'11'}}

Result:

xml : {u'config': {u'portstatus': {u'status': u'good'}, u'target': u'1'}, u'port': u'11'}

回答 0

如Niklas所说,您需要递归,即您想定义一个函数来打印您的字典,如果该值是一个字典,则想使用这个新字典来调用您的打印函数。

就像是 :

def myprint(d):
    for k, v in d.items():
        if isinstance(v, dict):
            myprint(v)
        else:
            print("{0} : {1}".format(k, v))

As said by Niklas, you need recursion, i.e. you want to define a function to print your dict, and if the value is a dict, you want to call your print function using this new dict.

Something like :

def myprint(d):
    for k, v in d.items():
        if isinstance(v, dict):
            myprint(v)
        else:
            print("{0} : {1}".format(k, v))

回答 1

如果您编写自己的递归实现或带有堆栈的迭代等效项,则可能会出现问题。请参阅以下示例:

    dic = {}
    dic["key1"] = {}
    dic["key1"]["key1.1"] = "value1"
    dic["key2"]  = {}
    dic["key2"]["key2.1"] = "value2"
    dic["key2"]["key2.2"] = dic["key1"]
    dic["key2"]["key2.3"] = dic

在一般情况下,嵌套字典会是一种类似n叉树的数据结构。但定义并不排除出现交叉边甚至回边的可能性(因此就不再是树了)。例如,这里key2.2保存了来自key1的字典,而key2.3指向整个字典(回边/循环)。当存在回边(循环)时,堆栈/递归会无限运行下去。

                          root<-------back edge
                        /      \           |
                     _key1   __key2__      |
                    /       /   \    \     |
               |->key1.1 key2.1 key2.2 key2.3
               |   /       |      |
               | value1  value2   |
               |                  | 
              cross edge----------|

如果您使用Scharron的此实现打印此词典

    def myprint(d):
      for k, v in d.items():
        if isinstance(v, dict):
          myprint(v)
        else:
          print "{0} : {1}".format(k, v)

您会看到此错误:

    RuntimeError: maximum recursion depth exceeded while calling a Python object

senderle的实现也是如此

同样,您可以从Fred Foo的此实现中获得无限循环:

    def myprint(d):
        stack = list(d.items())
        while stack:
            k, v = stack.pop()
            if isinstance(v, dict):
                stack.extend(v.items())
            else:
                print("%s: %s" % (k, v))

但是,Python实际上会检测嵌套字典中的循环:

    print dic
    {'key2': {'key2.1': 'value2', 'key2.3': {...}, 
       'key2.2': {'key1.1': 'value1'}}, 'key1': {'key1.1': 'value1'}}

“ {…}”是检测到循环的位置。

根据Moondra的要求,这是一种避免循环(DFS)的方法:

def myprint(d):
    stack = list(d.items())
    visited = set()  # 记录已展开过的字典的id(字典不可哈希,因此用id()代替)
    while stack:
        k, v = stack.pop()
        if isinstance(v, dict):
            if id(v) not in visited:
                visited.add(id(v))
                stack.extend(v.items())
        else:
            print("%s: %s" % (k, v))

There are potential problems if you write your own recursive implementation or the iterative equivalent with stack. See this example:

    dic = {}
    dic["key1"] = {}
    dic["key1"]["key1.1"] = "value1"
    dic["key2"]  = {}
    dic["key2"]["key2.1"] = "value2"
    dic["key2"]["key2.2"] = dic["key1"]
    dic["key2"]["key2.3"] = dic

In the normal sense, nested dictionary will be a n-nary tree like data structure. But the definition doesn’t exclude the possibility of a cross edge or even a back edge (thus no longer a tree). For instance, here key2.2 holds to the dictionary from key1, key2.3 points to the entire dictionary(back edge/cycle). When there is a back edge(cycle), the stack/recursion will run infinitely.

                          root<-------back edge
                        /      \           |
                     _key1   __key2__      |
                    /       /   \    \     |
               |->key1.1 key2.1 key2.2 key2.3
               |   /       |      |
               | value1  value2   |
               |                  | 
              cross edge----------|

If you print this dictionary with this implementation from Scharron

    def myprint(d):
      for k, v in d.items():
        if isinstance(v, dict):
          myprint(v)
        else:
          print "{0} : {1}".format(k, v)

You would see this error:

    RuntimeError: maximum recursion depth exceeded while calling a Python object

The same goes with the implementation from senderle.

Similarly, you get an infinite loop with this implementation from Fred Foo:

    def myprint(d):
        stack = list(d.items())
        while stack:
            k, v = stack.pop()
            if isinstance(v, dict):
                stack.extend(v.items())
            else:
                print("%s: %s" % (k, v))

However, Python actually detects cycles in nested dictionary:

    print dic
    {'key2': {'key2.1': 'value2', 'key2.3': {...}, 
       'key2.2': {'key1.1': 'value1'}}, 'key1': {'key1.1': 'value1'}}

“{…}” is where a cycle is detected.

As requested by Moondra this is a way to avoid cycles (DFS):

def myprint(d):
    stack = list(d.items())
    visited = set()  # ids of dicts already expanded (dicts aren't hashable, so track id())
    while stack:
        k, v = stack.pop()
        if isinstance(v, dict):
            if id(v) not in visited:
                visited.add(id(v))
                stack.extend(v.items())
        else:
            print("%s: %s" % (k, v))

回答 2

由于dict是可迭代的,您只需稍作修改,就可以把经典的嵌套容器迭代套路应用到这个问题上。这是Python 2版本(Python 3版本见下文):

import collections
def nested_dict_iter(nested):
    for key, value in nested.iteritems():
        if isinstance(value, collections.Mapping):
            for inner_key, inner_value in nested_dict_iter(value):
                yield inner_key, inner_value
        else:
            yield key, value

测试:

list(nested_dict_iter({'a':{'b':{'c':1, 'd':2}, 
                            'e':{'f':3, 'g':4}}, 
                       'h':{'i':5, 'j':6}}))
# output: [('g', 4), ('f', 3), ('c', 1), ('d', 2), ('i', 5), ('j', 6)]

在Python 2中,可能存在这样一种自定义Mapping:它符合Mapping的要求,却不提供iteritems,这种情况下上面的代码会失败。文档并没有说明Mapping必须实现iteritems;但另一方面,源代码确实为Mapping类型提供了iteritems方法。因此,对于自定义的Mapping,为保险起见请显式继承collections.Mapping。

在Python 3中,有许多改进。从Python 3.3开始,抽象基类位于collections.abc中;为了向后兼容,它们也仍保留在collections里,但把抽象基类统一放在一个命名空间中更好,所以这里从collections导入abc。Python 3.3还添加了yield from,它正是为这类场景设计的。这不仅仅是语法糖:它可能带来更快的代码,以及与协程更合理的交互。

from collections import abc
def nested_dict_iter(nested):
    for key, value in nested.items():
        if isinstance(value, abc.Mapping):
            yield from nested_dict_iter(value)
        else:
            yield key, value

Since a dict is iterable, you can apply the classic nested container iterable formula to this problem with only a couple of minor changes. Here’s a Python 2 version (see below for 3):

import collections
def nested_dict_iter(nested):
    for key, value in nested.iteritems():
        if isinstance(value, collections.Mapping):
            for inner_key, inner_value in nested_dict_iter(value):
                yield inner_key, inner_value
        else:
            yield key, value

Test:

list(nested_dict_iter({'a':{'b':{'c':1, 'd':2}, 
                            'e':{'f':3, 'g':4}}, 
                       'h':{'i':5, 'j':6}}))
# output: [('g', 4), ('f', 3), ('c', 1), ('d', 2), ('i', 5), ('j', 6)]

In Python 2, it might be possible to create a custom Mapping that qualifies as a Mapping but doesn’t contain iteritems, in which case this will fail. The docs don’t indicate that iteritems is required for a Mapping; on the other hand, the source gives Mapping types an iteritems method. So for custom Mappings, inherit from collections.Mapping explicitly just in case.

In Python 3, there are a number of improvements to be made. As of Python 3.3, abstract base classes live in collections.abc. They remain in collections too for backwards compatibility, but it’s nicer having our abstract base classes together in one namespace. So this imports abc from collections. Python 3.3 also adds yield from, which is designed for just these sorts of situations. This is not empty syntactic sugar; it may lead to faster code and more sensible interactions with coroutines.

from collections import abc
def nested_dict_iter(nested):
    for key, value in nested.items():
        if isinstance(value, abc.Mapping):
            yield from nested_dict_iter(value)
        else:
            yield key, value

回答 3

替代迭代解决方案:

def myprint(d):
    stack = d.items()
    while stack:
        k, v = stack.pop()
        if isinstance(v, dict):
            stack.extend(v.iteritems())
        else:
            print("%s: %s" % (k, v))

Alternative iterative solution:

def myprint(d):
    stack = d.items()
    while stack:
        k, v = stack.pop()
        if isinstance(v, dict):
            stack.extend(v.iteritems())
        else:
            print("%s: %s" % (k, v))

回答 4

我写了一个略有不同的版本,它会记录沿途经过的键

def print_dict(v, prefix=''):
    if isinstance(v, dict):
        for k, v2 in v.items():
            p2 = "{}['{}']".format(prefix, k)
            print_dict(v2, p2)
    elif isinstance(v, list):
        for i, v2 in enumerate(v):
            p2 = "{}[{}]".format(prefix, i)
            print_dict(v2, p2)
    else:
        print('{} = {}'.format(prefix, repr(v)))

在您的数据上,它将打印

data['xml']['config']['portstatus']['status'] = u'good'
data['xml']['config']['target'] = u'1'
data['xml']['port'] = u'11'

如果需要,也很容易把它改成用键的元组而不是字符串来跟踪前缀。

Slightly different version I wrote that keeps track of the keys along the way to get there

def print_dict(v, prefix=''):
    if isinstance(v, dict):
        for k, v2 in v.items():
            p2 = "{}['{}']".format(prefix, k)
            print_dict(v2, p2)
    elif isinstance(v, list):
        for i, v2 in enumerate(v):
            p2 = "{}[{}]".format(prefix, i)
            print_dict(v2, p2)
    else:
        print('{} = {}'.format(prefix, repr(v)))

On your data, it’ll print

data['xml']['config']['portstatus']['status'] = u'good'
data['xml']['config']['target'] = u'1'
data['xml']['port'] = u'11'

It’s also easy to modify it to track the prefix as a tuple of keys rather than a string if you need it that way.
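As a rough sketch of that tuple-of-keys variant (just swapping the string prefix for a tuple; the function name here is made up):

def print_dict_path(v, path=()):
    # same traversal, but the path is carried as a tuple of keys/indices
    if isinstance(v, dict):
        for k, v2 in v.items():
            print_dict_path(v2, path + (k,))
    elif isinstance(v, list):
        for i, v2 in enumerate(v):
            print_dict_path(v2, path + (i,))
    else:
        print('{} = {}'.format(path, repr(v)))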


回答 5

这是一种Pythonic的做法。这个函数允许您遍历所有层级的键值对。它不会把整个结果保存在内存中,而是在您循环时逐步遍历字典。

def recursive_items(dictionary):
    for key, value in dictionary.items():
        if type(value) is dict:
            yield (key, value)
            yield from recursive_items(value)
        else:
            yield (key, value)

a = {'a': {1: {1: 2, 3: 4}, 2: {5: 6}}}

for key, value in recursive_items(a):
    print(key, value)

输出:

a {1: {1: 2, 3: 4}, 2: {5: 6}}
1 {1: 2, 3: 4}
1 2
3 4
2 {5: 6}
5 6

Here is pythonic way to do it. This function will allow you to loop through key-value pair in all the levels. It does not save the whole thing to the memory but rather walks through the dict as you loop through it

def recursive_items(dictionary):
    for key, value in dictionary.items():
        if type(value) is dict:
            yield (key, value)
            yield from recursive_items(value)
        else:
            yield (key, value)

a = {'a': {1: {1: 2, 3: 4}, 2: {5: 6}}}

for key, value in recursive_items(a):
    print(key, value)

Prints

a {1: {1: 2, 3: 4}, 2: {5: 6}}
1 {1: 2, 3: 4}
1 2
3 4
2 {5: 6}
5 6

回答 6

迭代解决方案作为替代方案:

def traverse_nested_dict(d):
    iters = [d.iteritems()]

    while iters:
        it = iters.pop()
        try:
            k, v = it.next()
        except StopIteration:
            continue

        iters.append(it)

        if isinstance(v, dict):
            iters.append(v.iteritems())
        else:
            yield k, v


d = {"a": 1, "b": 2, "c": {"d": 3, "e": {"f": 4}}}
for k, v in traverse_nested_dict(d):
    print k, v

Iterative solution as an alternative:

def traverse_nested_dict(d):
    iters = [d.iteritems()]

    while iters:
        it = iters.pop()
        try:
            k, v = it.next()
        except StopIteration:
            continue

        iters.append(it)

        if isinstance(v, dict):
            iters.append(v.iteritems())
        else:
            yield k, v


d = {"a": 1, "b": 2, "c": {"d": 3, "e": {"f": 4}}}
for k, v in traverse_nested_dict(d):
    print k, v

回答 7

基于Scharron的方案、可同时处理列表的另一种解决方案

def myprint(d):
    my_list = d.iteritems() if isinstance(d, dict) else enumerate(d)

    for k, v in my_list:
        if isinstance(v, dict) or isinstance(v, list):
            myprint(v)
        else:
            print u"{0} : {1}".format(k, v)

An alternative solution that also works with lists, based on Scharron’s solution

def myprint(d):
    my_list = d.iteritems() if isinstance(d, dict) else enumerate(d)

    for k, v in my_list:
        if isinstance(v, dict) or isinstance(v, list):
            myprint(v)
        else:
            print u"{0} : {1}".format(k, v)

回答 8

考虑到值可能是包含字典的列表,我使用以下代码来打印嵌套字典的所有值。当我把JSON文件解析成字典、并且需要快速检查其中是否有值为None时,这对我很有用。

    d = {
            "user": 10,
            "time": "2017-03-15T14:02:49.301000",
            "metadata": [
                {"foo": "bar"},
                "some_string"
            ]
        }


    def print_nested(d):
        if isinstance(d, dict):
            for k, v in d.items():
                print_nested(v)
        elif hasattr(d, '__iter__') and not isinstance(d, str):
            for item in d:
                print_nested(item)
        elif isinstance(d, str):
            print(d)

        else:
            print(d)

    print_nested(d)

输出:

    10
    2017-03-15T14:02:49.301000
    bar
    some_string

I am using the following code to print all the values of a nested dictionary, taking into account where the value could be a list containing dictionaries. This was useful to me when parsing a JSON file into a dictionary and needing to quickly check whether any of its values are None.

    d = {
            "user": 10,
            "time": "2017-03-15T14:02:49.301000",
            "metadata": [
                {"foo": "bar"},
                "some_string"
            ]
        }


    def print_nested(d):
        if isinstance(d, dict):
            for k, v in d.items():
                print_nested(v)
        elif hasattr(d, '__iter__') and not isinstance(d, str):
            for item in d:
                print_nested(item)
        elif isinstance(d, str):
            print(d)

        else:
            print(d)

    print_nested(d)

Output:

    10
    2017-03-15T14:02:49.301000
    bar
    some_string

回答 9

这是Fred Foo的答案针对Python 2的一个修改版本。在原始回答中,只会输出最深层嵌套的键。如果把键作为列表输出,就可以保留所有层级的键,不过引用它们时需要引用一个列表的列表。

功能如下:

def NestIter(nested):
    for key, value in nested.iteritems():
        if isinstance(value, collections.Mapping):
            for inner_key, inner_value in NestIter(value):
                yield [key, inner_key], inner_value
        else:
            yield [key],value

引用键:

for keys, vals in mynested: 
    print(mynested[keys[0]][keys[1][0]][keys[1][1][0]])

以上针对的是三层嵌套的字典。

您需要事先知道层级数才能访问多个键,而且层级数应当是固定的(也许可以在遍历值时加一小段脚本来检查嵌套层数,但我还没有研究过这一点)。

Here’s a modified version of Fred Foo’s answer for Python 2. In the original response, only the deepest level of nesting is output. If you output the keys as lists, you can keep the keys for all levels, although to reference them you need to reference a list of lists.

Here’s the function:

def NestIter(nested):
    for key, value in nested.iteritems():
        if isinstance(value, collections.Mapping):
            for inner_key, inner_value in NestIter(value):
                yield [key, inner_key], inner_value
        else:
            yield [key],value

To reference the keys:

for keys, vals in mynested: 
    print(mynested[keys[0]][keys[1][0]][keys[1][1][0]])

for a three-level dictionary.

You need to know the number of levels beforehand to access multiple keys, and the number of levels should be constant (it may be possible to add a small bit of script to check the number of nesting levels when iterating through values, but I haven’t yet looked at this).
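As a rough sketch of such a check (a hypothetical helper, not part of the answer above), the nesting depth could be computed up front:

def nesting_depth(value):
    # 0 for a plain value, 1 for a flat dict, 2 for a dict of dicts, and so on
    # (assumes the dictionary contains no cycles)
    if not isinstance(value, dict):
        return 0
    if not value:
        return 1
    return 1 + max(nesting_depth(v) for v in value.values())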


回答 10

我发现这种方法更加灵活,这里您仅提供生成器函数,该函数可以生成键,值对,并且可以轻松扩展以遍历列表。

def traverse(value, key=None):
    if isinstance(value, dict):
        for k, v in value.items():
            yield from traverse(v, k)
    else:
        yield key, value

然后,您可以编写自己的myprint函数来打印这些键值对。

def myprint(d):
    for k, v in traverse(d):
        print(f"{k} : {v}")

一个测试:

myprint({
    'xml': {
        'config': {
            'portstatus': {
                'status': 'good',
            },
            'target': '1',
        },
        'port': '11',
    },
})

输出:

status : good
target : 1
port : 11

我在Python 3.6上进行了测试。

I find this approach a bit more flexible, here you just providing generator function that emits key, value pairs and can be easily extended to also iterate over lists.

def traverse(value, key=None):
    if isinstance(value, dict):
        for k, v in value.items():
            yield from traverse(v, k)
    else:
        yield key, value

Then you can write your own myprint function, then would print those key value pairs.

def myprint(d):
    for k, v in traverse(d):
        print(f"{k} : {v}")

A test:

myprint({
    'xml': {
        'config': {
            'portstatus': {
                'status': 'good',
            },
            'target': '1',
        },
        'port': '11',
    },
})

Output:

status : good
target : 1
port : 11

I tested this on Python 3.6.


回答 11

这些答案只适用于两层的子字典。如果层级更多,请尝试以下方法:

nested_dict = {'dictA': {'key_1': 'value_1', 'key_1A': 'value_1A','key_1Asub1': {'Asub1': 'Asub1_val', 'sub_subA1': {'sub_subA1_key':'sub_subA1_val'}}},
                'dictB': {'key_2': 'value_2'},
                1: {'key_3': 'value_3', 'key_3A': 'value_3A'}}

def print_dict(dictionary):
    dictionary_array = [dictionary]
    for sub_dictionary in dictionary_array:
        if type(sub_dictionary) is dict:
            for key, value in sub_dictionary.items():
                print("key=", key)
                print("value", value)
                if type(value) is dict:
                    dictionary_array.append(value)



print_dict(nested_dict)

These answers work for only 2 levels of sub-dictionaries. For more try this:

nested_dict = {'dictA': {'key_1': 'value_1', 'key_1A': 'value_1A','key_1Asub1': {'Asub1': 'Asub1_val', 'sub_subA1': {'sub_subA1_key':'sub_subA1_val'}}},
                'dictB': {'key_2': 'value_2'},
                1: {'key_3': 'value_3', 'key_3A': 'value_3A'}}

def print_dict(dictionary):
    dictionary_array = [dictionary]
    for sub_dictionary in dictionary_array:
        if type(sub_dictionary) is dict:
            for key, value in sub_dictionary.items():
                print("key=", key)
                print("value", value)
                if type(value) is dict:
                    dictionary_array.append(value)



print_dict(nested_dict)

Python UTC日期时间对象的ISO格式不包含Z(Zulu或零偏移)

问题:Python UTC日期时间对象的ISO格式不包含Z(Zulu或零偏移)

为什么Python 2.7不像JavaScript那样,在UTC日期时间对象的isoformat字符串末尾包含Z字符(Zulu或零偏移)?

>>> datetime.datetime.utcnow().isoformat()
'2013-10-29T09:14:03.895210'

而在javascript中

>>>  console.log(new Date().toISOString()); 
2013-10-29T09:38:41.341Z

Why python 2.7 doesn’t include Z character (Zulu or zero offset) at the end of UTC datetime object’s isoformat string unlike JavaScript?

>>> datetime.datetime.utcnow().isoformat()
'2013-10-29T09:14:03.895210'

Whereas in javascript

>>>  console.log(new Date().toISOString()); 
2013-10-29T09:38:41.341Z

回答 0

Python datetime对象默认不带时区信息,而没有它,Python实际上违反了ISO 8601规范(未提供时区信息时,应当假定为本地时间)。您可以使用pytz包获取一些默认时区,或者自己直接子类化tzinfo:

from datetime import datetime, tzinfo, timedelta
class simple_utc(tzinfo):
    def tzname(self,**kwargs):
        return "UTC"
    def utcoffset(self, dt):
        return timedelta(0)

然后,您可以将时区信息手动添加到utcnow()

>>> datetime.utcnow().replace(tzinfo=simple_utc()).isoformat()
'2014-05-16T22:51:53.015001+00:00'

请注意,这确实符合ISO 8601格式,该格式允许用Z或+00:00作为UTC的后缀。还要注意,后者实际上更符合标准中时区的一般表示方式(UTC只是一个特例)。

Python datetime objects don’t have time zone info by default, and without it, Python actually violates the ISO 8601 specification (if no time zone info is given, assumed to be local time). You can use the pytz package to get some default time zones, or directly subclass tzinfo yourself:

from datetime import datetime, tzinfo, timedelta
class simple_utc(tzinfo):
    def tzname(self,**kwargs):
        return "UTC"
    def utcoffset(self, dt):
        return timedelta(0)

Then you can manually add the time zone info to utcnow():

>>> datetime.utcnow().replace(tzinfo=simple_utc()).isoformat()
'2014-05-16T22:51:53.015001+00:00'

Note that this DOES conform to the ISO 8601 format, which allows for either Z or +00:00 as the suffix for UTC. Note that the latter actually conforms to the standard better, with how time zones are represented in general (UTC is a special case.)


回答 1

选项: isoformat()

Python datetime不支持军事时区后缀,例如UTC的’Z’后缀。以下简单的字符串替换可以解决问题:

In [1]: import datetime

In [2]: d = datetime.datetime(2014, 12, 10, 12, 0, 0)

In [3]: str(d).replace('+00:00', 'Z')
Out[3]: '2014-12-10 12:00:00Z'

str(d) 基本上等同于 d.isoformat(sep=' ')

请参阅:日期时间,Python标准库

选项: strftime()

或者您可以使用strftime以达到相同的效果:

In [4]: d.strftime('%Y-%m-%d %H:%M:%SZ')
Out[4]: '2014-12-10 12:00:00Z'

注意:仅当您知道指定的日期为UTC时,此选项才有效。

请参阅:datetime.strftime()


附加:人类可读的时区

更进一步,您可能对显示人类可读的时区信息感兴趣,pytz带有strftime %Z时区标志:

In [5]: import pytz

In [6]: d = datetime.datetime(2014, 12, 10, 12, 0, 0, tzinfo=pytz.utc)

In [7]: d
Out[7]: datetime.datetime(2014, 12, 10, 12, 0, tzinfo=<UTC>)

In [8]: d.strftime('%Y-%m-%d %H:%M:%S %Z')
Out[8]: '2014-12-10 12:00:00 UTC'

Option: isoformat()

Python’s datetime does not support the military timezone suffixes like ‘Z’ suffix for UTC. The following simple string replacement does the trick:

In [1]: import datetime

In [2]: d = datetime.datetime(2014, 12, 10, 12, 0, 0)

In [3]: str(d).replace('+00:00', 'Z')
Out[3]: '2014-12-10 12:00:00Z'

str(d) is essentially the same as d.isoformat(sep=' ')

See: Datetime, Python Standard Library

Option: strftime()

Or you could use strftime to achieve the same effect:

In [4]: d.strftime('%Y-%m-%d %H:%M:%SZ')
Out[4]: '2014-12-10 12:00:00Z'

Note: This option works only when you know the date specified is in UTC.

See: datetime.strftime()


Additional: Human Readable Timezone

Going further, you may be interested in displaying human readable timezone information, pytz with strftime %Z timezone flag:

In [5]: import pytz

In [6]: d = datetime.datetime(2014, 12, 10, 12, 0, 0, tzinfo=pytz.utc)

In [7]: d
Out[7]: datetime.datetime(2014, 12, 10, 12, 0, tzinfo=<UTC>)

In [8]: d.strftime('%Y-%m-%d %H:%M:%S %Z')
Out[8]: '2014-12-10 12:00:00 UTC'

回答 2

以下javascript和python脚本提供相同的输出。我认为这就是您要寻找的。

JavaScript

new Date().toISOString()

Python

import datetime

datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'

它们给出的输出是UTC(Zulu)时间,格式为ISO字符串,精确到毫秒(3位有效数字),并在末尾附加Z。

2019-01-19T23:20:25.459Z

The following javascript and python scripts give identical outputs. I think it’s what you are looking for.

JavaScript

new Date().toISOString()

Python

from datetime import datetime

datetime.utcnow().isoformat()[:-3]+'Z'

The output they give is the UTC (Zulu) time formatted as an ISO string with 3 significant millisecond digits and a Z appended.

2019-01-19T23:20:25.459Z

回答 3

在Python> = 3.2中,您可以简单地使用以下代码:

>>> from datetime import datetime, timezone
>>> datetime.now(timezone.utc).isoformat()
'2019-03-14T07:55:36.979511+00:00'

In Python >= 3.2 you can simply use this:

>>> from datetime import datetime, timezone
>>> datetime.now(timezone.utc).isoformat()
'2019-03-14T07:55:36.979511+00:00'
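If you specifically want the Z suffix the question asks about, one option is to combine this with the replace trick used in other answers here:

from datetime import datetime, timezone

# isoformat() on an aware UTC datetime ends in '+00:00'; swap it for 'Z'
print(datetime.now(timezone.utc).isoformat().replace('+00:00', 'Z'))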

回答 4

Python日期时间有点笨拙。使用arrow

> str(arrow.utcnow())
'2014-05-17T01:18:47.944126+00:00'

Arrow具有与datetime基本上相同的api,但是具有时区和一些其他优点,应该在主库中提供。

与Javascript兼容的格式可以通过以下方式实现:

arrow.utcnow().isoformat().replace("+00:00", "Z")
'2018-11-30T02:46:40.714281Z'

Javascript的Date.parse会悄悄丢弃时间戳中的微秒。

Python datetimes are a little clunky. Use arrow.

> str(arrow.utcnow())
'2014-05-17T01:18:47.944126+00:00'

Arrow has essentially the same api as datetime, but with timezones and some extra niceties that should be in the main library.

A format compatible with Javascript can be achieved by:

arrow.utcnow().isoformat().replace("+00:00", "Z")
'2018-11-30T02:46:40.714281Z'

Javascript Date.parse will quietly drop microseconds from the timestamp.
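If that matters, one workaround (the same formatting trick the arrow-based answer further down uses) is to emit only millisecond precision yourself; a small sketch:

import arrow

# format to millisecond precision so nothing is silently dropped by Date.parse
print(arrow.utcnow().format('YYYY-MM-DDTHH:mm:ss.SSS') + 'Z')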


回答 5

这个帖子里已经有很多不错的答案,但我希望输出格式与JavaScript的完全一致。下面是我在用的方法,效果很好。

In [1]: import datetime

In [1]: now = datetime.datetime.utcnow()

In [1]: now.strftime('%Y-%m-%dT%H:%M:%S') + now.strftime('.%f')[:4] + 'Z'
Out[3]: '2018-10-16T13:18:34.856Z'

There are a lot of good answers on the post, but I wanted the format to come out exactly as it does with JavaScript. This is what I’m using and it works well.

In [1]: import datetime

In [1]: now = datetime.datetime.utcnow()

In [1]: now.strftime('%Y-%m-%dT%H:%M:%S') + now.strftime('.%f')[:4] + 'Z'
Out[3]: '2018-10-16T13:18:34.856Z'

回答 6

pip install python-dateutil
>>> a = "2019-06-27T02:14:49.443814497Z"
>>> dateutil.parser.parse(a)
datetime.datetime(2019, 6, 27, 2, 14, 49, 443814, tzinfo=tzutc())
pip install python-dateutil
>>> a = "2019-06-27T02:14:49.443814497Z"
>>> dateutil.parser.parse(a)
datetime.datetime(2019, 6, 27, 2, 14, 49, 443814, tzinfo=tzutc())

回答 7

我带着以下几个目标来解决这个问题:

  • 生成ISO 8601格式的UTC“感知”日期时间字符串
  • 仅使用Python标准库函数来创建datetime对象和字符串
  • 使用Django的timezone实用函数和dateutil解析器验证datetime对象和字符串
  • 使用JavaScript函数验证该ISO 8601日期时间字符串是UTC感知的

注意,该方法不包含Z后缀,也不使用utcnow(),但它基于Python文档中的建议,并且能通过Django和JavaScript的检验。

首先,把UTC时区对象传给datetime.now(),而不是使用datetime.utcnow():

from datetime import datetime, timezone

datetime.now(timezone.utc)
>>> datetime.datetime(2020, 1, 8, 6, 6, 24, 260810, tzinfo=datetime.timezone.utc)

datetime.now(timezone.utc).isoformat()
>>> '2020-01-08T06:07:04.492045+00:00'

看起来不错,那么让我们看看Django和dateutil是怎么认为的:

from django.utils.timezone import is_aware

is_aware(datetime.now(timezone.utc))
>>> True

import dateutil.parser

is_aware(dateutil.parser.parse(datetime.now(timezone.utc).isoformat()))
>>> True

好的,Python的datetime对象和ISO 8601字符串都是UTC“感知”的。现在来看看JavaScript怎么看待这个datetime字符串。借用这个答案,我们得到:

let date= '2020-01-08T06:07:04.492045+00:00';
const dateParsed = new Date(Date.parse(date))

document.write(dateParsed);
document.write("\n");
// Tue Jan 07 2020 22:07:04 GMT-0800 (Pacific Standard Time)

document.write(dateParsed.toISOString());
document.write("\n");
// 2020-01-08T06:07:04.492Z

document.write(dateParsed.toUTCString());
document.write("\n");
// Wed, 08 Jan 2020 06:07:04 GMT

您可能还想阅读此博客文章

Pass a UTC timezone object to datetime.now() instead of using datetime.utcnow():

from datetime import datetime, timezone

datetime.now(timezone.utc)
>>> datetime.datetime(2020, 1, 8, 6, 6, 24, 260810, tzinfo=datetime.timezone.utc)

datetime.now(timezone.utc).isoformat()
>>> '2020-01-08T06:07:04.492045+00:00'

That looks good, so let’s see what Django and dateutil think:

from django.utils.timezone import is_aware

is_aware(datetime.now(timezone.utc))
>>> True

import dateutil.parser

is_aware(dateutil.parser.parse(datetime.now(timezone.utc).isoformat()))
>>> True

Okay, the Python datetime object and the ISO 8601 string are UTC “aware”. Now let’s look at what JavaScript thinks of the datetime string. Borrowing from this answer we get:

let date= '2020-01-08T06:07:04.492045+00:00';
const dateParsed = new Date(Date.parse(date))

document.write(dateParsed);
document.write("\n");
// Tue Jan 07 2020 22:07:04 GMT-0800 (Pacific Standard Time)

document.write(dateParsed.toISOString());
document.write("\n");
// 2020-01-08T06:07:04.492Z

document.write(dateParsed.toUTCString());
document.write("\n");
// Wed, 08 Jan 2020 06:07:04 GMT

Notes:

I approached this problem with a few goals:

  • generate a UTC “aware” datetime string in ISO 8601 format
  • use only Python Standard Library functions for datetime object and string creation
  • validate the datetime object and string with the Django timezone utility function and the dateutil parser
  • use JavaScript functions to validate that the ISO 8601 datetime string is UTC aware

Note that this approach does not include a Z suffix and does not use utcnow(). But it’s based on the recommendation in the Python documentation and it passes muster with both Django and JavaScript.

You may also want to read this blog post.


回答 8

通过结合以上所有答案,我得到了以下函数:

from datetime import datetime, tzinfo, timedelta
class simple_utc(tzinfo):
    def tzname(self,**kwargs):
        return "UTC"
    def utcoffset(self, dt):
        return timedelta(0)


def getdata(yy, mm, dd, h, m, s) :
    d = datetime(yy, mm, dd, h, m, s)
    d = d.replace(tzinfo=simple_utc()).isoformat()
    d = str(d).replace('+00:00', 'Z')
    return d


print getdata(2018, 02, 03, 15, 0, 14)

By combining all answers above I came up with the following function:

from datetime import datetime, tzinfo, timedelta
class simple_utc(tzinfo):
    def tzname(self,**kwargs):
        return "UTC"
    def utcoffset(self, dt):
        return timedelta(0)


def getdata(yy, mm, dd, h, m, s) :
    d = datetime(yy, mm, dd, h, m, s)
    d = d.replace(tzinfo=simple_utc()).isoformat()
    d = str(d).replace('+00:00', 'Z')
    return d


print getdata(2018, 02, 03, 15, 0, 14)

回答 9

我使用pendulum:

import pendulum


d = pendulum.now("UTC").to_iso8601_string()
print(d)

>>> 2019-10-30T00:11:21.818265Z

I use pendulum:

import pendulum


d = pendulum.now("UTC").to_iso8601_string()
print(d)

>>> 2019-10-30T00:11:21.818265Z

回答 10

>>> import arrow

>>> now = arrow.utcnow().format('YYYY-MM-DDTHH:mm:ss.SSS')
>>> now
'2018-11-28T21:34:59.235'
>>> zulu = "{}Z".format(now)
>>> zulu
'2018-11-28T21:34:59.235Z'

或者,一口气获得它:

>>> zulu = "{}Z".format(arrow.utcnow().format('YYYY-MM-DDTHH:mm:ss.SSS'))
>>> zulu
'2018-11-28T21:54:49.639Z'
>>> import arrow

>>> now = arrow.utcnow().format('YYYY-MM-DDTHH:mm:ss.SSS')
>>> now
'2018-11-28T21:34:59.235'
>>> zulu = "{}Z".format(now)
>>> zulu
'2018-11-28T21:34:59.235Z'

Or, to get it in one fell swoop:

>>> zulu = "{}Z".format(arrow.utcnow().format('YYYY-MM-DDTHH:mm:ss.SSS'))
>>> zulu
'2018-11-28T21:54:49.639Z'

如何处理列表推导中的异常?

问题:如何处理列表推导中的异常?

我在Python中有一个列表推导式,其中每次迭代都可能引发异常。

例如,如果我有:

eggs = (1,3,0,3,2)

[1/egg for egg in eggs]

我将在第3个元素处得到一个ZeroDivisionError异常。

如何处理此异常并继续执行列表理解?

我能想到的唯一方法是使用辅助函数:

def spam(egg):
    try:
        return 1/egg
    except ZeroDivisionError:
        # handle division by zero error
        # leave empty for now
        pass

但这对我来说有点麻烦。

有没有更好的方法在Python中执行此操作?

注意:这是我编造的一个简单示例(参见上面的“例如”),因为我的真实例子需要一些上下文。我感兴趣的不是如何避免除以零的错误,而是如何处理列表推导中的异常。

I have a list comprehension in Python in which each iteration can throw an exception.

For instance, if I have:

eggs = (1,3,0,3,2)

[1/egg for egg in eggs]

I’ll get a ZeroDivisionError exception in the 3rd element.

How can I handle this exception and continue execution of the list comprehension?

The only way I can think of is to use a helper function:

def spam(egg):
    try:
        return 1/egg
    except ZeroDivisionError:
        # handle division by zero error
        # leave empty for now
        pass

But this looks a bit cumbersome to me.

Is there a better way to do this in Python?

Note: This is a simple example (see “for instance” above) that I contrived because my real example requires some context. I’m not interested in avoiding divide by zero errors but in handling exceptions in a list comprehension.


回答 0

Python中没有任何内置表达式可以让您忽略异常(或在出现异常时返回替代值等),因此从字面意义上讲,“在列表推导中处理异常”是不可能的,因为列表推导只是一个由其他表达式组成的表达式,仅此而已(也就是说,其中没有语句,而只有语句才能捕获/忽略/处理异常)。

函数调用是表达式,而函数体可以包含任意语句,因此正如您注意到的,把容易出异常的子表达式的求值委托给一个函数,是一种可行的变通办法(另一种办法是在可行时先检查可能引发异常的值,其他答案中也提出了这一点)。

对“如何在列表推导中处理异常”这一问题的正确回答,都表达了上述事实的一部分:1)从字面上、也就是在推导式本身的词法范围内,你做不到;2)在实践中,你可以在可行时把工作委托给某个函数,或者先检查容易出错的值。因此,您一再声称这不算答案是没有根据的。

There is no built-in expression in Python that lets you ignore an exception (or return alternate values &c in case of exceptions), so it’s impossible, literally speaking, to “handle exceptions in a list comprehension” because a list comprehension is an expression containing other expression, nothing more (i.e., no statements, and only statements can catch/ignore/handle exceptions).

Function calls are expressions, and the function bodies can include all the statements you want, so delegating the evaluation of the exception-prone sub-expression to a function, as you’ve noticed, is one feasible workaround (others, when feasible, are checks on values that might provoke exceptions, as also suggested in other answers).

The correct responses to the question “how to handle exceptions in a list comprehension” are all expressing part of all of this truth: 1) literally, i.e. lexically IN the comprehension itself, you can’t; 2) practically, you delegate the job to a function or check for error prone values when that’s feasible. Your repeated claim that this is not an answer is thus unfounded.


回答 1

我意识到这个问题已经很老了,但是您也可以创建一个通用函数来简化这种事情:

def catch(func, handle=lambda e : e, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except Exception as e:
        return handle(e)

然后,在您的列表推导中:

eggs = (1,3,0,3,2)
[catch(lambda : 1/egg) for egg in eggs]
[1, 0, ('integer division or modulo by zero'), 0, 0]

当然,您可以把默认的handle函数设置成任何您想要的样子(比如,您可能更希望默认返回None)。

希望这对您或该问题的将来的读者有帮助!

注意:在Python 3中,我会把'handle'参数改为仅限关键字参数,并放在参数列表的末尾。这会让通过catch实际传递参数变得更自然。

I realize this question is quite old, but you can also create a general function to make this kind of thing easier:

def catch(func, handle=lambda e : e, *args, **kwargs):
    try:
        return func(*args, **kwargs)
    except Exception as e:
        return handle(e)

Then, in your comprehension:

eggs = (1,3,0,3,2)
[catch(lambda : 1/egg) for egg in eggs]
[1, 0, ('integer division or modulo by zero'), 0, 0]

You can of course make the default handle function whatever you want (say you’d rather return ‘None’ by default).

Hope this helps you or any future viewers of this question!

Note: in python 3, I would make the ‘handle’ argument keyword only, and put it at the end of the argument list. This would make actually passing arguments and such through catch much more natural.
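A minimal sketch of what that Python 3, keyword-only variant could look like (my reading of the note above, not the answer’s own code):

def catch(func, *args, handle=lambda e: e, **kwargs):
    # 'handle' is keyword-only here, so positional arguments flow straight to func
    try:
        return func(*args, **kwargs)
    except Exception as e:
        return handle(e)

eggs = (1, 3, 0, 3, 2)
print([catch(lambda: 1 / egg) for egg in eggs])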


回答 2

您可以使用

[1/egg for egg in eggs if egg != 0]

这只会跳过零元素。

You can use

[1/egg for egg in eggs if egg != 0]

this will simply skip elements that are zero.


回答 3

不,没有更好的方法。在很多情况下,您可以像Peter那样采用规避的办法。

另一个选择是干脆不使用列表推导:

eggs = (1,3,0,3,2)

result=[]
for egg in eggs:
    try:
        result.append(1/egg)
    except ZeroDivisionError:
        # handle division by zero error
        # leave empty for now
        pass

由您决定是否比较麻烦

No, there’s not a better way. In a lot of cases you can use avoidance like Peter does

Your other option is to not use comprehensions

eggs = (1,3,0,3,2)

result=[]
for egg in eggs:
    try:
        result.append(1/egg)
    except ZeroDivisionError:
        # handle division by zero error
        # leave empty for now
        pass

Up to you to decide whether that is more cumbersome or not


回答 4

我认为,正如提问者和Bryan Head所建议的那样,辅助函数是个好办法,一点也不麻烦。并不总能用一行神奇的代码完成所有工作,所以如果想避免for循环,辅助函数就是一个完美的解决方案。不过我会把它修改成下面这样:

# A modified version of the helper function by the Question starter 
def spam(egg):
    try:
        return 1/egg, None
    except ZeroDivisionError as err:
        # handle division by zero error        
        return None, err

输出将是 [(1/1, None), (1/3, None), (None, ZeroDivisionError), (1/3, None), (1/2, None)]。有了这个答案,您就可以完全掌控后续的处理方式。

I think a helper function, as suggested by the one who asks the initial question and Bryan Head as well, is good and not cumbersome at all. A single line of magic code which does all the work is just not always possible so a helper function is a perfect solution if one wants to avoid for loops. However I would modify it to this one:

# A modified version of the helper function by the Question starter 
def spam(egg):
    try:
        return 1/egg, None
    except ZeroDivisionError as err:
        # handle division by zero error        
        return None, err

The output will be [(1/1, None), (1/3, None), (None, ZeroDivisionError), (1/3, None), (1/2, None)]. With this answer you are in full control to continue in any way you want.
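For example, one small way to consume those (value, error) pairs afterwards, using the spam helper above (just a usage sketch):

eggs = (1, 3, 0, 3, 2)
results = [spam(egg) for egg in eggs]

# split the successful values from the errors that were caught
values = [value for value, err in results if err is None]
errors = [err for value, err in results if err is not None]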

Alternative:

def spam2(egg):
    try:
        return 1/egg 
    except ZeroDivisionError:
        # handle division by zero error        
        return ZeroDivisionError

Yes, the error is returned, not raised.


回答 5

我没有看到任何答案提到这一点。但对于已知会失败的情况,这个示例是一种避免引发异常的方法。

eggs = (1,3,0,3,2)
[1/egg if egg > 0 else None for egg in eggs]


Output: [1, 0, None, 0, 0]

I didn’t see any answer mention this. But this example would be one way of preventing an exception from being raised for known failing cases.

eggs = (1,3,0,3,2)
[1/egg if egg > 0 else None for egg in eggs]


Output: [1, 0, None, 0, 0]