Tag Archives: pandas

How to find the installed pandas version

Question: How to find the installed pandas version

I am having trouble with some of pandas' functionalities. How do I check what my installed version is?


Answer 0

Check pandas.__version__:

In [76]: import pandas as pd

In [77]: pd.__version__
Out[77]: '0.12.0-933-g281dc4e'

Pandas also provides a utility function, pd.show_versions(), which reports the versions of its dependencies as well:

In [53]: pd.show_versions(as_json=False)

INSTALLED VERSIONS
------------------
commit: None
python: 2.7.6.final.0
python-bits: 64
OS: Linux
OS-release: 3.13.0-45-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8

pandas: 0.15.2-113-g5531341
nose: 1.3.1
Cython: 0.21.1
numpy: 1.8.2
scipy: 0.14.0.dev-371b4ff
statsmodels: 0.6.0.dev-a738b4f
IPython: 2.0.0-dev
sphinx: 1.2.2
patsy: 0.3.0
dateutil: 1.5
pytz: 2012c
bottleneck: None
tables: 3.1.1
numexpr: 2.2.2
matplotlib: 1.4.2
openpyxl: None
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: None
lxml: 3.3.3
bs4: 4.3.2
html5lib: 0.999
httplib2: 0.8
apiclient: None
rpy2: 2.5.5
sqlalchemy: 0.9.8
pymysql: None
psycopg2: 2.4.5 (dt dec mx pq3 ext)

Answer 1

Run:

pip list

You should get a list of packages (including pandas) and their versions, e.g.:

beautifulsoup4 (4.5.1)
cycler (0.10.0)
jdcal (1.3)
matplotlib (1.5.3)
numpy (1.11.1)
openpyxl (2.2.0b1)
pandas (0.18.1)
pip (8.1.2)
pyparsing (2.1.9)
python-dateutil (2.2)
python-nmap (0.6.1)
pytz (2016.6.1)
requests (2.11.1)
setuptools (20.10.1)
six (1.10.0)
SQLAlchemy (1.0.15)
xlrd (1.0.0)

Answer 2

Simplest Solution

Code:

import pandas as pd
pd.__version__

Note: it is a double underscore before and after the word “version”.

Output:

'0.14.1'

Answer 3

Run

pip freeze

It works the same as above.

pip show pandas

Displays information about a specific package. For more information, check out pip help.


Answer 4

Windows

python -c "import pandas as pd; print(pd.__version__)"
conda list | findstr pandas  # Anaconda / Conda
pip freeze | findstr pandas
pip show pandas | findstr Version

Linux

python -c "import pandas as pd; print(pd.__version__)"
conda list | grep pandas  # Anaconda / Conda
pip freeze | grep pandas  # pip

Answer 5

In a Jupyter notebook cell: pip freeze | grep pandas
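
Since the point of checking the version is usually to debug feature availability, here is a minimal sketch of gating code on the installed version; it assumes the third-party packaging library is installed:

import pandas as pd
from packaging.version import Version  # third-party 'packaging' library

# Take a code path only when the installed pandas is recent enough;
# the 1.0.0 cutoff here is an arbitrary example.
if Version(pd.__version__) >= Version("1.0.0"):
    print("modern pandas:", pd.__version__)
else:
    print("older pandas:", pd.__version__)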


Remove rows with duplicate indices (Pandas DataFrame and TimeSeries)

Question: Remove rows with duplicate indices (Pandas DataFrame and TimeSeries)

I’m reading some automated weather data from the web. The observations occur every 5 minutes and are compiled into monthly files for each weather station. Once I’m done parsing a file, the DataFrame looks something like this:

                      Sta  Precip1hr  Precip5min  Temp  DewPnt  WindSpd  WindDir  AtmPress
Date                                                                                      
2001-01-01 00:00:00  KPDX          0           0     4       3        0        0     30.31
2001-01-01 00:05:00  KPDX          0           0     4       3        0        0     30.30
2001-01-01 00:10:00  KPDX          0           0     4       3        4       80     30.30
2001-01-01 00:15:00  KPDX          0           0     3       2        5       90     30.30
2001-01-01 00:20:00  KPDX          0           0     3       2       10      110     30.28

The problem I’m having is that sometimes a scientist goes back and corrects observations, not by editing the erroneous rows, but by appending a duplicate row to the end of a file. A simple example of such a case is illustrated below:

import pandas 
import datetime
startdate = datetime.datetime(2001, 1, 1, 0, 0)
enddate = datetime.datetime(2001, 1, 1, 5, 0)
index = pandas.DatetimeIndex(start=startdate, end=enddate, freq='H')
data1 = {'A' : range(6), 'B' : range(6)}
data2 = {'A' : [20, -30, 40], 'B' : [-50, 60, -70]}
df1 = pandas.DataFrame(data=data1, index=index)
df2 = pandas.DataFrame(data=data2, index=index[:3])
df3 = df2.append(df1)
df3
                       A   B
2001-01-01 00:00:00   20 -50
2001-01-01 01:00:00  -30  60
2001-01-01 02:00:00   40 -70
2001-01-01 03:00:00    3   3
2001-01-01 04:00:00    4   4
2001-01-01 05:00:00    5   5
2001-01-01 00:00:00    0   0
2001-01-01 01:00:00    1   1
2001-01-01 02:00:00    2   2

And so I need df3 to eventually become:

                       A   B
2001-01-01 00:00:00    0   0
2001-01-01 01:00:00    1   1
2001-01-01 02:00:00    2   2
2001-01-01 03:00:00    3   3
2001-01-01 04:00:00    4   4
2001-01-01 05:00:00    5   5

I thought that adding a column of row numbers (df3['rownum'] = range(df3.shape[0])) would help me select out the bottom-most row for any value of the DatetimeIndex, but I am stuck on figuring out the group_by or pivot (or ???) statements to make that work.
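
A side note for readers running the question's setup code on a modern pandas install: pandas.DatetimeIndex(start=..., end=..., freq=...) and DataFrame.append were both removed in later versions (append in pandas 2.0). A sketch of the equivalent setup, assuming a recent pandas:

import datetime
import pandas as pd

startdate = datetime.datetime(2001, 1, 1, 0, 0)
enddate = datetime.datetime(2001, 1, 1, 5, 0)
# pd.date_range replaces the removed DatetimeIndex(start=..., end=...) constructor
index = pd.date_range(start=startdate, end=enddate, freq='H')  # 'h' on pandas >= 2.2
data1 = {'A': range(6), 'B': range(6)}
data2 = {'A': [20, -30, 40], 'B': [-50, 60, -70]}
df1 = pd.DataFrame(data=data1, index=index)
df2 = pd.DataFrame(data=data2, index=index[:3])
# pd.concat replaces the removed DataFrame.append method
df3 = pd.concat([df2, df1])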


Answer 0

I would suggest using the duplicated method on the Pandas Index itself:

df3 = df3[~df3.index.duplicated(keep='first')]

While all the other methods work, the currently accepted answer is by far the least performant for the provided example. Furthermore, while the groupby method is only slightly less performant, I find the duplicated method to be more readable.

Using the sample data provided:

>>> %timeit df3.reset_index().drop_duplicates(subset='index', keep='first').set_index('index')
1000 loops, best of 3: 1.54 ms per loop

>>> %timeit df3.groupby(df3.index).first()
1000 loops, best of 3: 580 µs per loop

>>> %timeit df3[~df3.index.duplicated(keep='first')]
1000 loops, best of 3: 307 µs per loop

Note that you can keep the last element by changing the keep argument to 'last'.

It should also be noted that this method works with MultiIndex as well (using df1 as specified in Paul’s example):

>>> %timeit df1.groupby(level=df1.index.names).last()
1000 loops, best of 3: 771 µs per loop

>>> %timeit df1[~df1.index.duplicated(keep='last')]
1000 loops, best of 3: 365 µs per loop

Answer 1

My original answer, which is now outdated, is kept below for reference.

A simple solution is to use drop_duplicates:

df4 = df3.drop_duplicates(subset='rownum', keep='last')

For me, this operated quickly on large data sets.

This requires that 'rownum' be the column with duplicates. In the modified example, 'rownum' has no duplicates, so nothing gets eliminated. What we really want is to have the columns set to the index. I have not found a way to tell drop_duplicates to only consider the index.

Here is a solution that adds the index as a dataframe column, drops duplicates on that, then removes the new column:

df = df.reset_index().drop_duplicates(subset='index', keep='last').set_index('index').sort_index()

Note that the use of .sort_index() above at the end is as needed and is optional.


Answer 2

Oh my. This is actually so simple!

grouped = df3.groupby(level=0)
df4 = grouped.last()
df4
                      A   B  rownum

2001-01-01 00:00:00   0   0       6
2001-01-01 01:00:00   1   1       7
2001-01-01 02:00:00   2   2       8
2001-01-01 03:00:00   3   3       3
2001-01-01 04:00:00   4   4       4
2001-01-01 05:00:00   5   5       5

Follow-up edit 2013-10-29: In the case where I have a fairly complex MultiIndex, I think I prefer the groupby approach. Here’s a simple example for posterity:

import numpy as np
import pandas

# fake index
idx = pandas.MultiIndex.from_tuples([('a', letter) for letter in list('abcde')])

# random data + naming the index levels
df1 = pandas.DataFrame(np.random.normal(size=(5,2)), index=idx, columns=['colA', 'colB'])
df1.index.names = ['iA', 'iB']

# artificially append some duplicate data
df1 = df1.append(df1.select(lambda idx: idx[1] in ['c', 'e']))
df1
#           colA      colB
#iA iB                    
#a  a  -1.297535  0.691787
#   b  -1.688411  0.404430
#   c   0.275806 -0.078871
#   d  -0.509815 -0.220326
#   e  -0.066680  0.607233
#   c   0.275806 -0.078871  # <--- dup 1
#   e  -0.066680  0.607233  # <--- dup 2

and here’s the important part

# group the data, using df1.index.names tells pandas to look at the entire index
groups = df1.groupby(level=df1.index.names)  
groups.last() # or .first()
#           colA      colB
#iA iB                    
#a  a  -1.297535  0.691787
#   b  -1.688411  0.404430
#   c   0.275806 -0.078871
#   d  -0.509815 -0.220326
#   e  -0.066680  0.607233
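
A similar caveat applies to the MultiIndex example above on modern pandas: df.select (removed in pandas 1.0) and df.append (removed in pandas 2.0) no longer exist, so the "artificially append some duplicate data" step could be written, for example, as:

# Select the rows to duplicate by their full MultiIndex labels, then concat
df1 = pd.concat([df1, df1.loc[[('a', 'c'), ('a', 'e')]]])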

Answer 3

Unfortunately, I don’t think Pandas allows one to drop dups off the indices. I would suggest the following:

df3 = df3.reset_index() # makes date column part of your data
df3.columns = ['timestamp','A','B','rownum'] # set names
df3 = df3.drop_duplicates('timestamp', keep='last').set_index('timestamp')  # done! (take_last=True on very old pandas versions)

Answer 4

If anyone like me likes chainable data manipulation using the pandas dot notation (like piping), then the following may be useful:

df3 = df3.query('~index.duplicated()')

This enables chaining statements like this:

df3.assign(C=2).query('~index.duplicated()').mean()

Answer 5

Remove duplicates (Keeping First)

idx = np.unique( df.index.values, return_index = True )[1]
df = df.iloc[idx]

Remove duplicates (Keeping Last)

df = df[::-1]
df = df.iloc[ np.unique( df.index.values, return_index = True )[1] ]

Tests: 10k loops using OP’s data

numpy method - 3.03 seconds
df.loc[~df.index.duplicated(keep='first')] - 4.43 seconds
df.groupby(df.index).first() - 21 seconds
reset_index() method - 29 seconds

Get the row(s) which have the max value in groups using groupby

Question: Get the row(s) which have the max value in groups using groupby

How do I find all rows in a pandas dataframe which have the max value for count column, after grouping by ['Sp','Mt'] columns?

Example 1: the following dataFrame, which I group by ['Sp','Mt']:

   Sp   Mt Value   count
0  MM1  S1   a      **3**
1  MM1  S1   n      2
2  MM1  S3   cb     5
3  MM2  S3   mk      **8**
4  MM2  S4   bg     **10**
5  MM2  S4   dgd      1
6  MM4  S2  rd     2
7  MM4  S2   cb      2
8  MM4  S2   uyi      **7**

Expected output: get the result rows whose count is max between the groups, like:

0  MM1  S1   a      **3**
3  MM2  S3   mk      **8**
4  MM2  S4   bg     **10** 
8  MM4  S2   uyi      **7**

Example 2: this dataframe, which I group by ['Sp','Mt']:

   Sp   Mt   Value  count
4  MM2  S4   bg     10
5  MM2  S4   dgd    1
6  MM4  S2   rd     2
7  MM4  S2   cb     8
8  MM4  S2   uyi    8

For the above example, I want to get all the rows where count equals the max in each group, e.g.:

MM2  S4   bg     10
MM4  S2   cb     8
MM4  S2   uyi    8

Answer 0

In [1]: df
Out[1]:
    Sp  Mt Value  count
0  MM1  S1     a      3
1  MM1  S1     n      2
2  MM1  S3    cb      5
3  MM2  S3    mk      8
4  MM2  S4    bg     10
5  MM2  S4   dgd      1
6  MM4  S2    rd      2
7  MM4  S2    cb      2
8  MM4  S2   uyi      7

In [2]: df.groupby(['Mt'], sort=False)['count'].max()
Out[2]:
Mt
S1     3
S3     8
S4    10
S2     7
Name: count

To get the indices of the original DF you can do:

In [3]: idx = df.groupby(['Mt'])['count'].transform(max) == df['count']

In [4]: df[idx]
Out[4]:
    Sp  Mt Value  count
0  MM1  S1     a      3
3  MM2  S3    mk      8
4  MM2  S4    bg     10
8  MM4  S2   uyi      7

Note that if you have multiple max values per group, all will be returned.

Update

On a hail mary chance that this is what the OP is requesting:

In [5]: df['count_max'] = df.groupby(['Mt'])['count'].transform(max)

In [6]: df
Out[6]:
    Sp  Mt Value  count  count_max
0  MM1  S1     a      3          3
1  MM1  S1     n      2          3
2  MM1  S3    cb      5          8
3  MM2  S3    mk      8          8
4  MM2  S4    bg     10         10
5  MM2  S4   dgd      1         10
6  MM4  S2    rd      2          7
7  MM4  S2    cb      2          7
8  MM4  S2   uyi      7          7

Answer 1

You can sort the dataFrame by count and then remove duplicates. I think it’s easier:

df.sort_values('count', ascending=False).drop_duplicates(['Sp','Mt'])

Answer 2

An easy solution would be to apply the idxmax() function to get the indices of the rows with max values. This selects the row with the max value in each group. Note that, unlike the transform approach above, idxmax returns a single row label per group, so ties for the max are not all returned.

In [365]: import pandas as pd

In [366]: df = pd.DataFrame({
'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
'count' : [3,2,5,8,10,1,2,2,7]
})

In [367]: df                                                                                                       
Out[367]: 
   count  mt   sp  val
0      3  S1  MM1    a
1      2  S1  MM1    n
2      5  S3  MM1   cb
3      8  S3  MM2   mk
4     10  S4  MM2   bg
5      1  S4  MM2  dgb
6      2  S2  MM4   rd
7      2  S2  MM4   cb
8      7  S2  MM4  uyi


### Apply idxmax() and use .loc() on dataframe to filter the rows with max values:
In [368]: df.loc[df.groupby(["sp", "mt"])["count"].idxmax()]                                                       
Out[368]: 
   count  mt   sp  val
0      3  S1  MM1    a
2      5  S3  MM1   cb
3      8  S3  MM2   mk
4     10  S4  MM2   bg
8      7  S2  MM4  uyi

### Just to show what values are returned by .idxmax() above:
In [369]: df.groupby(["sp", "mt"])["count"].idxmax().values                                                        
Out[369]: array([0, 2, 3, 4, 8])

Answer 3

Having tried the solution suggested by Zelazny on a relatively large DataFrame (~400k rows) I found it to be very slow. Here is an alternative that I found to run orders of magnitude faster on my data set.

df = pd.DataFrame({
    'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
    'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    'count' : [3,2,5,8,10,1,2,2,7]
    })

df_grouped = df.groupby(['sp', 'mt']).agg({'count':'max'})

df_grouped = df_grouped.reset_index()

df_grouped = df_grouped.rename(columns={'count':'count_max'})

df = pd.merge(df, df_grouped, how='left', on=['sp', 'mt'])

df = df[df['count'] == df['count_max']]

Answer 4

You may not need groupby at all; using sort_values + drop_duplicates:

df.sort_values('count').drop_duplicates(['Sp','Mt'],keep='last')
Out[190]: 
    Sp  Mt Value  count
0  MM1  S1     a      3
2  MM1  S3    cb      5
8  MM4  S2   uyi      7
3  MM2  S3    mk      8
4  MM2  S4    bg     10

Almost the same logic works by using tail:

df.sort_values('count').groupby(['Sp', 'Mt']).tail(1)
Out[52]: 
    Sp  Mt Value  count
0  MM1  S1     a      3
2  MM1  S3    cb      5
8  MM4  S2   uyi      7
3  MM2  S3    mk      8
4  MM2  S4    bg     10

Answer 5

For me, the easiest solution is to keep the values where count is equal to the maximum. Therefore, the following one-line command is enough:

df[df['count'] == df.groupby(['Mt'])['count'].transform(max)]
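
The question actually groups by both ['Sp', 'Mt']; the same transform idiom extends directly, as a sketch:

df[df['count'] == df.groupby(['Sp', 'Mt'])['count'].transform(max)]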

Answer 6

Use groupby and idxmax methods:

  1. convert column date to datetime:

    df['date']=pd.to_datetime(df['date'])
    
  2. get the index of the max of column date, after groupby ad_id:

    idx=df.groupby(by='ad_id')['date'].idxmax()
    
  3. get the wanted data:

    df_max=df.loc[idx,]
    

Out[54]:

   ad_id  price       date
7     22      2 2018-06-11
6     23      2 2018-06-22
2     24      2 2018-06-30
3     28      5 2018-06-22

Answer 7

df = pd.DataFrame({
'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
'count' : [3,2,5,8,10,1,2,2,7]
})

df.groupby(['sp', 'mt']).apply(lambda grp: grp.nlargest(1, 'count'))

Answer 8

Realizing that applying nlargest to the groupby object works just as well:

An additional advantage: it can also fetch the top n values if required:

In [85]: import pandas as pd

In [86]: df = pd.DataFrame({
    ...: 'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
    ...: 'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
    ...: 'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
    ...: 'count' : [3,2,5,8,10,1,2,2,7]
    ...: })

## Apply nlargest(1) to find the max val df, and nlargest(n) gives top n values for df:
In [87]: df.groupby(["sp", "mt"]).apply(lambda x: x.nlargest(1, "count")).reset_index(drop=True)
Out[87]:
   count  mt   sp  val
0      3  S1  MM1    a
1      5  S3  MM1   cb
2      8  S3  MM2   mk
3     10  S4  MM2   bg
4      7  S2  MM4  uyi

Answer 9

Try using nlargest on the groupby object. The advantage of using nlargest is that it returns the index of the rows where “the nlargest item(s)” were fetched from. Note: we slice out the second (1) element of each index entry, since the index in this case consists of tuples (e.g. ('S1', 0)).

df = pd.DataFrame({
'sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4','MM4'],
'mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
'val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
'count' : [3,2,5,8,10,1,2,2,7]
})

d = df.groupby('mt')['count'].nlargest(1) # pass 1 since we want the max

df.iloc[[i[1] for i in d.index], :] # pass the index of d as list comprehension


Answer 10

I’ve been using this functional style for many group operations:

df = pd.DataFrame({
   'Sp' : ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
   'Mt' : ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
   'Val' : ['a', 'n', 'cb', 'mk', 'bg', 'dgb', 'rd', 'cb', 'uyi'],
   'Count' : [3,2,5,8,10,1,2,2,7]
})

df.groupby('Mt')\
  .apply(lambda group: group[group.Count == group.Count.max()])\
  .reset_index(drop=True)

    Sp  Mt  Val  Count
0  MM1  S1    a      3
1  MM4  S2  uyi      7
2  MM2  S3   mk      8
3  MM2  S4   bg     10

.reset_index(drop=True) drops the group index, leaving a plain RangeIndex in its place.


Pandas: drop a level from a multi-level column index?

Question: Pandas: drop a level from a multi-level column index?

If I’ve got a multi-level column index:

>>> cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
>>> pd.DataFrame([[1,2], [3,4]], columns=cols)
    a
   ---+--
    b | c
--+---+--
0 | 1 | 2
1 | 3 | 4

How can I drop the “a” level of that index, so I end up with:

    b | c
--+---+--
0 | 1 | 2
1 | 3 | 4

Answer 0

You can use MultiIndex.droplevel:

>>> cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
>>> df = pd.DataFrame([[1,2], [3,4]], columns=cols)
>>> df
   a   
   b  c
0  1  2
1  3  4

[2 rows x 2 columns]
>>> df.columns = df.columns.droplevel()
>>> df
   b  c
0  1  2
1  3  4

[2 rows x 2 columns]

Answer 1

Another way to drop the index is to use a list comprehension:

df.columns = [col[1] for col in df.columns]

   b  c
0  1  2
1  3  4

This strategy is also useful if you want to combine the names from both levels like in the example below where the bottom level contains two ‘y’s:

cols = pd.MultiIndex.from_tuples([("A", "x"), ("A", "y"), ("B", "y")])
df = pd.DataFrame([[1,2, 8 ], [3,4, 9]], columns=cols)

   A     B
   x  y  y
0  1  2  8
1  3  4  9

Dropping the top level would leave two columns with the index ‘y’. That can be avoided by joining the names with the list comprehension.

df.columns = ['_'.join(col) for col in df.columns]

    A_x A_y B_y
0   1   2   8
1   3   4   9

That’s a problem I had after doing a groupby and it took a while to find this other question that solved it. I adapted that solution to the specific case here.


Answer 2

Another way to do this is to reassign df based on a cross section of df, using the .xs method.

>>> df

    a
    b   c
0   1   2
1   3   4

>>> df = df.xs('a', axis=1, drop_level=True)

    # 'a' : key on which to get cross section
    # axis=1 : get cross section of column
    # drop_level=True : returns cross section without the multilevel index

>>> df

    b   c
0   1   2
1   3   4

Answer 3

As of Pandas 0.24.0, we can now use DataFrame.droplevel():

cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
df = pd.DataFrame([[1,2], [3,4]], columns=cols)

df.droplevel(0, axis=1) 

#   b  c
#0  1  2
#1  3  4

This is very useful if you want to keep your DataFrame method-chain rolling.


Answer 4

You could also achieve that by renaming the columns:

df.columns = ['a', 'b']

This involves a manual step but could be an option especially if you would eventually rename your data frame.


Answer 5

A small trick using sum with level=1 (works when the level 1 labels are all unique):

df.sum(level=1,axis=1)
Out[202]: 
   b  c
0  1  2
1  3  4

A more common solution is get_level_values:

df.columns=df.columns.get_level_values(1)
df
Out[206]: 
   b  c
0  1  2
1  3  4

Answer 6

I struggled with this problem because I did not know why my droplevel() function did not work. Working through several examples, I learned that ‘a’ in such a table is the columns’ name and ‘b’, ‘c’ are the index. Doing it like this will help:

df.columns.name = None
df.reset_index() #make index become label

Convert Pandas Column to DateTime

Question: Convert Pandas Column to DateTime

I have one field in a pandas DataFrame that was imported as string format. It should be a datetime variable. How do I convert it to a datetime column and then filter based on date?

Example:

  • DataFrame Name: raw_data
  • Column Name: Mycol
  • Value Format in Column: '05SEP2014:00:00:00.000'

Answer 0

Use the to_datetime function, specifying a format to match your data.

raw_data['Mycol'] =  pd.to_datetime(raw_data['Mycol'], format='%d%b%Y:%H:%M:%S.%f')
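
The question also asks how to filter based on date once the column is converted; a minimal sketch, where the cutoff date is just an illustrative value:

# Boolean mask against a Timestamp; any comparable date string also works
filtered = raw_data[raw_data['Mycol'] >= pd.Timestamp('2014-09-01')]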

Answer 1

You can use the DataFrame method .apply() to operate on the values in Mycol:

>>> df = pd.DataFrame(['05SEP2014:00:00:00.000'],columns=['Mycol'])
>>> df
                    Mycol
0  05SEP2014:00:00:00.000
>>> import datetime as dt
>>> df['Mycol'] = df['Mycol'].apply(lambda x: 
                                    dt.datetime.strptime(x,'%d%b%Y:%H:%M:%S.%f'))
>>> df
       Mycol
0 2014-09-05

Answer 2

If you have more than one column to be converted you can do the following:

df[["col1", "col2", "col3"]] = df[["col1", "col2", "col3"]].apply(pd.to_datetime)

Answer 3

raw_data['Mycol'] =  pd.to_datetime(raw_data['Mycol'], format='%d%b%Y:%H:%M:%S.%f')

works; however, it results in a Python warning of: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead

I would guess this is due to some chained indexing.


Answer 4

Use the pandas to_datetime function to parse the column as DateTime. Also, by using infer_datetime_format=True, it will automatically detect the format and convert the mentioned column to DateTime.

import pandas as pd
raw_data['Mycol'] =  pd.to_datetime(raw_data['Mycol'], infer_datetime_format=True)
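
A note for newer installs: infer_datetime_format is deprecated as of pandas 2.0, where format inference is the default behavior, so the equivalent call there is simply:

raw_data['Mycol'] = pd.to_datetime(raw_data['Mycol'])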

Why doesn’t my Pandas ‘apply’ function referencing multiple columns work? [closed]

Question: Why doesn’t my Pandas ‘apply’ function referencing multiple columns work? [closed]

I have some problems with the Pandas apply function, when using multiple columns with the following dataframe

df = DataFrame ({'a' : np.random.randn(6),
                 'b' : ['foo', 'bar'] * 3,
                 'c' : np.random.randn(6)})

and the following function

def my_test(a, b):
    return a % b

When I try to apply this function with :

df['Value'] = df.apply(lambda row: my_test(row[a], row[c]), axis=1)

I get the error message:

NameError: ("global name 'a' is not defined", u'occurred at index 0')

I do not understand this message, I defined the name properly.

I would highly appreciate any help on this issue

Update

Thanks for your help. I did indeed make some syntax mistakes in the code; the index should be put in quotes (''). However, I still get the same issue using a more complex function such as:

def my_test(a):
    cum_diff = 0
    for ix in df.index():
        cum_diff = cum_diff + (a - df['a'][ix])
    return cum_diff 

Answer 0

Seems you forgot the '' of your string.

In [43]: df['Value'] = df.apply(lambda row: my_test(row['a'], row['c']), axis=1)

In [44]: df
Out[44]:
                    a    b         c     Value
          0 -1.674308  foo  0.343801  0.044698
          1 -2.163236  bar -2.046438 -0.116798
          2 -0.199115  foo -0.458050 -0.199115
          3  0.918646  bar -0.007185 -0.001006
          4  1.336830  foo  0.534292  0.268245
          5  0.976844  bar -0.773630 -0.570417

BTW, in my opinion, the following way is more elegant:

In [53]: def my_test2(row):
....:     return row['a'] % row['c']
....:     

In [54]: df['Value'] = df.apply(my_test2, axis=1)

Answer 1

If you just want to compute (column a) % (column c), you don’t need apply; just do it directly:

In [7]: df['a'] % df['c']
Out[7]: 
0   -1.132022
1   -0.939493
2    0.201931
3    0.511374
4   -0.694647
5   -0.023486
Name: a

Answer 2

Let’s say we want to apply a function add5 to columns ‘a’ and ‘b’ of DataFrame df

def add5(x):
    return x+5

df[['a', 'b']].apply(add5)

Answer 3

All of the suggestions above work, but if you want your computations to be more efficient, you should take advantage of numpy vector operations (as pointed out here).

import pandas as pd
import numpy as np


df = pd.DataFrame ({'a' : np.random.randn(6),
             'b' : ['foo', 'bar'] * 3,
             'c' : np.random.randn(6)})

Example 1: looping with pandas.apply():

%%timeit
def my_test2(row):
    return row['a'] % row['c']

df['Value'] = df.apply(my_test2, axis=1)

The slowest run took 7.49 times longer than the fastest. This could mean that an intermediate result is being cached. 1000 loops, best of 3: 481 µs per loop

Example 2: vectorize using pandas.apply():

%%timeit
df['a'] % df['c']

The slowest run took 458.85 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 3: 70.9 µs per loop

Example 3: vectorize using numpy arrays:

%%timeit
df['a'].values % df['c'].values

The slowest run took 7.98 times longer than the fastest. This could mean that an intermediate result is being cached. 100000 loops, best of 3: 6.39 µs per loop

So vectorizing using numpy arrays improved the speed by almost two orders of magnitude.


Answer 4

This is the same as the previous solution, but I have defined the function inside df.apply itself:

df['Value'] = df.apply(lambda row: row['a']%row['c'], axis=1)

Answer 5

Here is a comparison of all three approaches discussed above.

Using values

%timeit df['value'] = df['a'].values % df['c'].values

139 µs ± 1.91 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Without values

%timeit df['value'] = df['a']%df['c'] 

216 µs ± 1.86 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Apply function

%timeit df['Value'] = df.apply(lambda row: row['a']%row['c'], axis=1)

474 µs ± 5.07 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)


Pandas read in table without headers

Question: Pandas read in table without headers

How can I read in a .csv file (with no headers) when I only want a subset of the columns (say the 4th and 7th out of a total of 20 columns), using pandas? I cannot seem to be able to do it with usecols.


Answer 0

In order to read in a csv that doesn’t have a header, and for only certain columns, you need to pass the params header=None and usecols=[3,6] for the 4th and 7th columns:

df = pd.read_csv(file_path, header=None, usecols=[3,6])

See the docs


Answer 1

Previous answers were good and correct, but in my opinion, an extra names parameter will make it perfect, and it should be the recommended way, especially when the csv has no headers.

Solution

Use usecols and names parameters

df = pd.read_csv(file_path, usecols=[3,6], names=['colA', 'colB'])

Additional reading

or use header=None to explicitly tell people that the csv has no headers (anyway, both lines are identical):

df = pd.read_csv(file_path, usecols=[3,6], names=['colA', 'colB'], header=None)

So that you can retrieve your data by

# with `names` parameter
df['colA']
df['colB'] 

instead of

# without `names` parameter
df[0]
df[1]

Explain

Based on read_csv, when names is passed explicitly, header behaves as if it were None instead of 0, so one can skip header=None when names is given.


Answer 2

Make sure you pass header=None and add usecols=[3,6] for the 4th and 7th columns, as shown below.
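
In code, that is the same call as in Answer 0:

df = pd.read_csv(file_path, header=None, usecols=[3, 6])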


How to sort a pandas dataframe from one column

Question: How to sort a pandas dataframe from one column

I have a data frame like this:

print(df)

        0          1     2
0   354.7      April   4.0
1    55.4     August   8.0
2   176.5   December  12.0
3    95.5   February   2.0
4    85.6    January   1.0
5     152       July   7.0
6   238.7       June   6.0
7   104.8      March   3.0
8   283.5        May   5.0
9   278.8   November  11.0
10  249.6    October  10.0
11  212.7  September   9.0

As you can see, months are not in calendar order. So I created a second column to get the month number corresponding to each month (1-12). From there, how can I sort this data frame according to calendar months’ order?


Answer 0

Use sort_values to sort the df by a specific column’s values:

In [18]:
df.sort_values('2')

Out[18]:
        0          1     2
4    85.6    January   1.0
3    95.5   February   2.0
7   104.8      March   3.0
0   354.7      April   4.0
8   283.5        May   5.0
6   238.7       June   6.0
5   152.0       July   7.0
1    55.4     August   8.0
11  212.7  September   9.0
10  249.6    October  10.0
9   278.8   November  11.0
2   176.5   December  12.0

If you want to sort by two columns, pass a list of column labels to sort_values with the column labels ordered according to sort priority. If you use df.sort_values(['2', '0']), the result would be sorted by column 2 then column 0. Granted, this does not really make sense for this example because each value in df['2'] is unique.


Answer 1

I tried the solutions above and did not achieve results, so I found a different solution that works for me. ascending=False orders the dataframe in descending order; by default it is True. I am using Python 3.6.6 and pandas 0.23.4.

final_df = df.sort_values(by=['2'], ascending=False)

You can see more details in pandas documentation here.


Answer 2

Just adding some more operations on the data. Suppose we have a dataframe df; we can do several operations to get the desired outputs:

ID         cost      tax    label
1       216590      1600    test      
2       523213      1800    test 
3          250      1500    experiment

(df['label'].value_counts().to_frame().reset_index()).sort_values('label', ascending=False)

This will give the sorted output of labels as a dataframe:

    index   label
0   test        2
1   experiment  1

Answer 3

Just as another solution:

Instead of creating the second column, you can categorize your string data (month name) and sort by that, like this:

df.rename(columns={1:'month'},inplace=True)
df['month'] = pd.Categorical(df['month'],categories=['December','November','October','September','August','July','June','May','April','March','February','January'],ordered=True)
df = df.sort_values('month',ascending=False)

It will give you the ordered data by month name as you specified while creating the Categorical object.


Selecting with complex criteria from pandas.DataFrame

Question: Selecting with complex criteria from pandas.DataFrame

For example I have simple DF:

import pandas as pd
from random import randint

df = pd.DataFrame({'A': [randint(1, 9) for x in xrange(10)],
                   'B': [randint(1, 9)*10 for x in xrange(10)],
                   'C': [randint(1, 9)*100 for x in xrange(10)]})

Can I select values from ‘A’ for which corresponding values for ‘B’ will be greater than 50, and for ‘C’ – not equal 900, using methods and idioms of Pandas?


Answer 0

Sure! Setup:

>>> import pandas as pd
>>> from random import randint
>>> df = pd.DataFrame({'A': [randint(1, 9) for x in range(10)],
                   'B': [randint(1, 9)*10 for x in range(10)],
                   'C': [randint(1, 9)*100 for x in range(10)]})
>>> df
   A   B    C
0  9  40  300
1  9  70  700
2  5  70  900
3  8  80  900
4  7  50  200
5  9  30  900
6  2  80  700
7  2  80  400
8  5  80  300
9  7  70  800

We can apply column operations and get boolean Series objects:

>>> df["B"] > 50
0    False
1     True
2     True
3     True
4    False
5    False
6     True
7     True
8     True
9     True
Name: B
>>> (df["B"] > 50) & (df["C"] == 900)
0    False
1    False
2     True
3     True
4    False
5    False
6    False
7    False
8    False
9    False

[Update, to switch to new-style .loc]:

And then we can use these to index into the object. For read access, you can chain indices:

>>> df["A"][(df["B"] > 50) & (df["C"] == 900)]
2    5
3    8
Name: A, dtype: int64

but doing this for write access can get you into trouble, because of the difference between a view and a copy. You can use .loc instead:

>>> df.loc[(df["B"] > 50) & (df["C"] == 900), "A"]
2    5
3    8
Name: A, dtype: int64
>>> df.loc[(df["B"] > 50) & (df["C"] == 900), "A"].values
array([5, 8], dtype=int64)
>>> df.loc[(df["B"] > 50) & (df["C"] == 900), "A"] *= 1000
>>> df
      A   B    C
0     9  40  300
1     9  70  700
2  5000  70  900
3  8000  80  900
4     7  50  200
5     9  30  900
6     2  80  700
7     2  80  400
8     5  80  300
9     7  70  800

Note that I accidentally typed == 900 and not != 900, or ~(df["C"] == 900), but I’m too lazy to fix it. Exercise for the reader. :^)
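
Picking up that exercise: here is a minimal sketch of the corrected condition, run against the (already mutated) df above; ~(df["C"] == 900) would select the same rows:

>>> df.loc[(df["B"] > 50) & (df["C"] != 900), "A"]
1    9
6    2
7    2
8    5
9    7
Name: A, dtype: int64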


Answer 1


Another solution is to use the query method:

import pandas as pd

from random import randint
df = pd.DataFrame({'A': [randint(1, 9) for x in range(10)],
                   'B': [randint(1, 9) * 10 for x in range(10)],
                   'C': [randint(1, 9) * 100 for x in range(10)]})
print(df)

   A   B    C
0  7  20  300
1  7  80  700
2  4  90  100
3  4  30  900
4  7  80  200
5  7  60  800
6  3  80  900
7  9  40  100
8  6  40  100
9  3  10  600

print(df.query('B > 50 and C != 900'))

   A   B    C
1  7  80  700
2  4  90  100
4  7  80  200
5  7  60  800

Now if you want to change the returned values in column A you can save their index:

my_query_index = df.query('B > 50 & C != 900').index

…and use .iloc to change them, i.e.:

df.iloc[my_query_index, 0] = 5000

print(df)

      A   B    C
0     7  20  300
1  5000  80  700
2  5000  90  100
3     4  30  900
4  5000  80  200
5  5000  60  800
6     3  80  900
7     9  40  100
8     6  40  100
9     3  10  600
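
A caveat on the .iloc step: it works here only because the example uses the default RangeIndex, where positions and labels coincide; with any other index, df.loc[my_query_index, 'A'] = 5000 is the safer spelling. Also, query can reference Python variables with the @ prefix, which keeps long conditions readable; a small sketch, with the variable names made up for illustration:

b_min = 50
c_excluded = 900
print(df.query('B > @b_min and C != @c_excluded'))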

Answer 2


And remember to use parenthesis!

Keep in mind that the & operator takes precedence over comparison operators such as > or <. That is why

4 < 5 & 6 > 4

evaluates to False. Therefore, if you’re using .loc, you need to put parentheses around each logical comparison, otherwise you get an error. That’s why you should write:

df.loc[(df['A'] > 10) & (df['B'] < 15)]

instead of

df.loc[df['A'] > 10 & df['B'] < 15]

which would result in

TypeError: cannot compare a dtyped [float64] array with a scalar of type [bool]
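
A quick way to see the precedence issue in plain Python, with no pandas involved:

>>> 4 < 5 & 6 > 4       # parsed as 4 < (5 & 6) > 4, i.e. 4 < 4 > 4
False
>>> (4 < 5) & (6 > 4)   # the parenthesized form compares first, then ANDs
True

With Series operands, the un-parenthesized expression doesn’t just give a wrong answer — it raises the TypeError shown above.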


Answer 3


You can use pandas’ built-in comparison methods. If you want to select the values of “A” where the conditions on “B” and “C” are met (assuming you want a pandas DataFrame back):

df[['A']][df.B.gt(50) & df.C.ne(900)]

df[['A']] will give you back column A in DataFrame format.

The pandas gt method returns a boolean Series marking where column B is greater than 50, and ne marks where column C is not equal to 900.
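
The same methods combine naturally with .loc if you prefer explicit label-based selection; a minimal sketch using the df from the question:

df.loc[df.B.gt(50) & df.C.ne(900), ['A']]

Both forms return the matching rows of column A as a DataFrame.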


How to select rows from a pandas DataFrame with one or more nulls without explicitly listing columns?

Question: How to select rows from a pandas DataFrame with one or more nulls without explicitly listing columns?


I have a dataframe with ~300K rows and ~40 columns. I want to find out whether any rows contain null values – and put these ‘null’ rows into a separate dataframe so that I can explore them easily.

I can create a mask explicitly:

mask = False
for col in df.columns: 
    mask = mask | df[col].isnull()
dfnulls = df[mask]

Or I can do something like:

df.ix[df.index[(df.T == np.nan).sum() > 1]]

Is there a more elegant way of doing it (locating rows with nulls in them)?


Answer 0


[Updated to adapt to modern pandas, which has isnull as a method of DataFrames.]

You can use isnull and any to build a boolean Series and use that to index into your frame:

>>> df = pd.DataFrame([range(3), [0, np.NaN, 0], [0, 0, np.NaN], range(3), range(3)])
>>> df.isnull()
       0      1      2
0  False  False  False
1  False   True  False
2  False  False   True
3  False  False  False
4  False  False  False
>>> df.isnull().any(axis=1)
0    False
1     True
2     True
3    False
4    False
dtype: bool
>>> df[df.isnull().any(axis=1)]
   0   1   2
1  0 NaN   0
2  0   0 NaN

[For older pandas:]

You could use the function isnull instead of the method:

In [56]: df = pd.DataFrame([range(3), [0, np.NaN, 0], [0, 0, np.NaN], range(3), range(3)])

In [57]: df
Out[57]: 
   0   1   2
0  0   1   2
1  0 NaN   0
2  0   0 NaN
3  0   1   2
4  0   1   2

In [58]: pd.isnull(df)
Out[58]: 
       0      1      2
0  False  False  False
1  False   True  False
2  False  False   True
3  False  False  False
4  False  False  False

In [59]: pd.isnull(df).any(axis=1)
Out[59]: 
0    False
1     True
2     True
3    False
4    False

leading to the rather compact:

In [60]: df[pd.isnull(df).any(axis=1)]
Out[60]: 
   0   1   2
1  0 NaN   0
2  0   0 NaN
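
In recent pandas versions, isna is an equivalent alias for isnull. And if you ever do want to restrict the check to particular columns, the same pattern applies to a column subset; a small sketch, with the column names hypothetical:

df[df[['col_a', 'col_b']].isna().any(axis=1)]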

Answer 1

def nans(df): return df[df.isnull().any(axis=1)]

then whenever you need it you can type:

nans(your_dataframe)

Answer 2


.any() and .all() are great for the extreme cases (at least one null, or all nulls), but not when you’re looking for a specific number of null values. Here’s a very simple way to do what I believe you’re asking. It’s pretty verbose, but functional.

import pandas as pd
import numpy as np

# Some test data frame
df = pd.DataFrame({'num_legs':          [2, 4,      np.nan, 0, np.nan],
                   'num_wings':         [2, 0,      np.nan, 0, 9],
                   'num_specimen_seen': [10, np.nan, 1,     8, np.nan]})

# Helper: counts the NaNs in each row
def row_nan_sums(df):
    sums = []
    for row in df.values:
        nan_count = 0
        for el in row:
            if el != el:  # np.nan is never equal to itself. This is "hacky", but complete.
                nan_count += 1
        sums.append(nan_count)
    return sums

# Returns a list of indices for rows with k+ NaNs
def query_k_plus_sums(df, k):
    indices = []
    for i, nan_count in enumerate(row_nan_sums(df)):
        if nan_count >= k:
            indices.append(i)
    return indices

# test
print(df)
print(query_k_plus_sums(df, 2))

Output

   num_legs  num_wings  num_specimen_seen
0       2.0        2.0               10.0
1       4.0        0.0                NaN
2       NaN        NaN                1.0
3       0.0        0.0                8.0
4       NaN        9.0                NaN
[2, 4]

Then, if you’re like me and want to clear those rows out, you just write this:

# drop the rows from the data frame
df.drop(query_k_plus_sums(df, 2),inplace=True)
# Shuffle the data and reset the index (without this, the old row labels stick around)
df = df.sample(frac=1).reset_index(drop=True)
# print data frame
print(df)

Output:

   num_legs  num_wings  num_specimen_seen
0       4.0        0.0                NaN
1       0.0        0.0                8.0
2       2.0        2.0               10.0
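
For what it’s worth, the loop above can be collapsed into vectorized pandas, since isnull().sum(axis=1) counts the nulls per row. A minimal sketch of the equivalent:

# Rows with at least k NaNs
k = 2
print(df[df.isnull().sum(axis=1) >= k])

# Or drop them directly: dropna's thresh is the minimum number of
# non-null values a row must have to be kept
df = df.dropna(thresh=df.shape[1] - k + 1)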