Tag Archives: pandas

Find the row where a column’s value is maximal in a pandas DataFrame

Question: Find the row where a column’s value is maximal in a pandas DataFrame

How can I find the row for which the value of a specific column is maximal?

df.max() will give me the maximal value for each column, I don’t know how to get the corresponding row.


Answer 0


Use the pandas idxmax function. It’s straightforward:

>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
          A         B         C
0  1.232853 -1.979459 -0.573626
1  0.140767  0.394940  1.068890
2  0.742023  1.343977 -0.579745
3  2.125299 -0.649328 -0.211692
4 -0.187253  1.908618 -1.862934
>>> df['A'].idxmax()
3
>>> df['B'].idxmax()
4
>>> df['C'].idxmax()
1
  • Alternatively you could also use numpy.argmax, such as numpy.argmax(df['A']) — it provides the same thing, and appears at least as fast as idxmax in cursory observations.

  • idxmax() returns index labels, not integers.

    • Example: if you have string values as your index labels, like rows ‘a’ through ‘e’, you might want to know that the max occurs in row 4 (not row ‘d’).
    • if you want the integer position of that label within the Index you have to get it manually (which can be tricky now that duplicate row labels are allowed); see the sketch after this list.
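
As a minimal sketch (the string-labeled frame here is made up for illustration, not part of the original answer), two common ways to recover the integer position:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'],
                  index=list('abcde'))  # string index labels

label = df['A'].idxmax()        # an index label such as 'd'
pos = df.index.get_loc(label)   # integer position of that label
# caveat: with duplicate labels, get_loc returns a slice or boolean mask
# rather than a single integer -- the "tricky" case mentioned above

pos_np = df['A'].to_numpy().argmax()  # label-free positional argmax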

HISTORICAL NOTES:

  • idxmax() used to be called argmax() prior to 0.11
  • argmax was deprecated prior to 1.0.0 and removed entirely in 1.0.0
  • as of Pandas 0.16, argmax still existed and performed the same function (though it appeared to run more slowly than idxmax).
    • argmax function returned the integer position within the index of the row location of the maximum element.
    • pandas moved to using row labels instead of integer indices. Positional integer indices used to be very common, more common than labels, especially in applications where duplicate row labels are common.

For example, consider this toy DataFrame with a duplicate row label:

In [19]: dfrm
Out[19]: 
          A         B         C
a  0.143693  0.653810  0.586007
b  0.623582  0.312903  0.919076
c  0.165438  0.889809  0.000967
d  0.308245  0.787776  0.571195
e  0.870068  0.935626  0.606911
f  0.037602  0.855193  0.728495
g  0.605366  0.338105  0.696460
h  0.000000  0.090814  0.963927
i  0.688343  0.188468  0.352213
i  0.879000  0.105039  0.900260

In [20]: dfrm['A'].idxmax()
Out[20]: 'i'

In [21]: dfrm.loc[dfrm['A'].idxmax()]  # .ix instead of .loc in older versions of pandas
Out[21]: 
          A         B         C
i  0.688343  0.188468  0.352213
i  0.879000  0.105039  0.900260

So here a naive use of idxmax is not sufficient, whereas the old form of argmax would correctly provide the positional location of the max row (in this case, position 9).
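
A duplicate-safe workaround (a sketch, not part of the original answer) is to drop down to NumPy for the positional argmax and index back positionally:

pos = dfrm['A'].to_numpy().argmax()  # positional max, ignoring labels (9 here)
dfrm.iloc[[pos]]                     # exactly one row: the second 'i' row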

This is exactly one of those nasty kinds of bug-prone behaviors in dynamically typed languages that makes this sort of thing so unfortunate, and worth beating a dead horse over. If you are writing systems code and your system suddenly gets used on some data sets that are not cleaned properly before being joined, it’s very easy to end up with duplicate row labels, especially string labels like a CUSIP or SEDOL identifier for financial assets. You can’t easily use the type system to help you out, and you may not be able to enforce uniqueness on the index without running into unexpectedly missing data.

So you’re left with hoping that your unit tests covered everything (they didn’t, or more likely no one wrote any tests) — otherwise (most likely) you’re just left waiting to see if you happen to smack into this error at runtime, in which case you probably have to go drop many hours worth of work from the database you were outputting results to, bang your head against the wall in IPython trying to manually reproduce the problem, finally figuring out that it’s because idxmax can only report the label of the max row, and then being disappointed that no standard function automatically gets the positions of the max row for you, writing a buggy implementation yourself, editing the code, and praying you don’t run into the problem again.


Answer 1


You might also try idxmax:

In [5]: df = pandas.DataFrame(np.random.randn(10,3),columns=['A','B','C'])

In [6]: df
Out[6]: 
          A         B         C
0  2.001289  0.482561  1.579985
1 -0.991646 -0.387835  1.320236
2  0.143826 -1.096889  1.486508
3 -0.193056 -0.499020  1.536540
4 -2.083647 -3.074591  0.175772
5 -0.186138 -1.949731  0.287432
6 -0.480790 -1.771560 -0.930234
7  0.227383 -0.278253  2.102004
8 -0.002592  1.434192 -1.624915
9  0.404911 -2.167599 -0.452900

In [7]: df.idxmax()
Out[7]: 
A    0
B    8
C    7

e.g.

In [8]: df.loc[df['A'].idxmax()]
Out[8]: 
A    2.001289
B    0.482561
C    1.579985

Answer 2


Both of the above answers return only one index if there are multiple rows that take the maximum value. If you want all the rows, there does not seem to be a function for it. But it is not hard to do. Below is an example for Series; the same can be done for DataFrame (a sketch follows the example):

In [1]: from pandas import Series, DataFrame

In [2]: s=Series([2,4,4,3],index=['a','b','c','d'])

In [3]: s.idxmax()
Out[3]: 'b'

In [4]: s[s==s.max()]
Out[4]: 
b    4
c    4
dtype: int64
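
For a DataFrame column, the same boolean-mask idea would look like this (a sketch with a made-up frame, not part of the original answer):

import pandas as pd

df = pd.DataFrame({'A': [2, 4, 4, 3]}, index=['a', 'b', 'c', 'd'])
df[df['A'] == df['A'].max()]  # rows 'b' and 'c', which both hold the maximum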

Answer 3


df.iloc[df['columnX'].argmax()]

argmax() would provide the index corresponding to the max value for the columnX. iloc can be used to get the row of the DataFrame df for this index.


Answer 4


The direct “.argmax()” solution does not work for me.

The previous example provided by @ely

>>> import pandas
>>> import numpy as np
>>> df = pandas.DataFrame(np.random.randn(5,3),columns=['A','B','C'])
>>> df
      A         B         C
0  1.232853 -1.979459 -0.573626
1  0.140767  0.394940  1.068890
2  0.742023  1.343977 -0.579745
3  2.125299 -0.649328 -0.211692
4 -0.187253  1.908618 -1.862934
>>> df['A'].argmax()
3
>>> df['B'].argmax()
4
>>> df['C'].argmax()
1

returns the following message:

FutureWarning: 'argmax' is deprecated, use 'idxmax' instead. The behavior of 'argmax' 
will be corrected to return the positional maximum in the future.
Use 'series.values.argmax' to get the position of the maximum now.

So my solution is:

df['A'].values.argmax()
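
If you then want the row itself rather than just its position, a small follow-up sketch (not part of the original answer):

df.iloc[df['A'].values.argmax()]  # the full row at the positional maximum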

Answer 5


mx.iloc[0].idxmax()

This one line of code finds the column holding the maximum value within a single row of the dataframe; here mx is the dataframe and iloc[0] selects the row at position 0.


Answer 6


The idxmax of the DataFrame returns the label index of the row with the maximum value, and the behavior of argmax depends on the version of pandas (right now it returns a warning). If you want to use the positional index, you can do the following:

max_row = df['A'].values.argmax()

or

import numpy as np
max_row = np.argmax(df['A'].values)

Note that np.argmax(df['A']) behaves the same as df['A'].argmax().


What is the difference between join and merge in pandas?

Question: What is the difference between join and merge in pandas?


Suppose I have two DataFrames like so:

left = pd.DataFrame({'key1': ['foo', 'bar'], 'lval': [1, 2]})

right = pd.DataFrame({'key2': ['foo', 'bar'], 'rval': [4, 5]})

I want to merge them, so I try something like this:

pd.merge(left, right, left_on='key1', right_on='key2')

And I’m happy

    key1    lval    key2    rval
0   foo     1       foo     4
1   bar     2       bar     5

But I’m trying to use the join method, which I’ve been led to believe is pretty similar.

left.join(right, on=['key1', 'key2'])

And I get this:

//anaconda/lib/python2.7/site-packages/pandas/tools/merge.pyc in _validate_specification(self)
    406             if self.right_index:
    407                 if not ((len(self.left_on) == self.right.index.nlevels)):
--> 408                     raise AssertionError()
    409                 self.right_on = [None] * n
    410         elif self.right_on is not None:

AssertionError: 

What am I missing?


Answer 0


I always use join on indices:

import pandas as pd
left = pd.DataFrame({'key': ['foo', 'bar'], 'val': [1, 2]}).set_index('key')
right = pd.DataFrame({'key': ['foo', 'bar'], 'val': [4, 5]}).set_index('key')
left.join(right, lsuffix='_l', rsuffix='_r')

     val_l  val_r
key            
foo      1      4
bar      2      5

The same functionality can be had by using merge on the columns as follows:

left = pd.DataFrame({'key': ['foo', 'bar'], 'val': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'bar'], 'val': [4, 5]})
left.merge(right, on=('key'), suffixes=('_l', '_r'))

   key  val_l  val_r
0  foo      1      4
1  bar      2      5

Answer 1


pandas.merge() is the underlying function used for all merge/join behavior.

DataFrames provide the pandas.DataFrame.merge() and pandas.DataFrame.join() methods as a convenient way to access the capabilities of pandas.merge(). For example, df1.merge(right=df2, ...) is equivalent to pandas.merge(left=df1, right=df2, ...).

These are the main differences between df.join() and df.merge():

  1. lookup on right table: df1.join(df2) always joins via the index of df2, but df1.merge(df2) can join to one or more columns of df2 (default) or to the index of df2 (with right_index=True).
  2. lookup on left table: by default, df1.join(df2) uses the index of df1 and df1.merge(df2) uses column(s) of df1. That can be overridden by specifying df1.join(df2, on=key_or_keys) or df1.merge(df2, left_index=True).
  3. left vs inner join: df1.join(df2) does a left join by default (keeps all rows of df1), but df1.merge(df2) does an inner join by default (returns only matching rows of df1 and df2).

So, the generic approach is to use pandas.merge(df1, df2) or df1.merge(df2). But for a number of common situations (keeping all rows of df1 and joining to an index in df2), you can save some typing by using df1.join(df2) instead.

Some notes on these issues from the documentation at http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging:

merge is a function in the pandas namespace, and it is also available as a DataFrame instance method, with the calling DataFrame being implicitly considered the left object in the join.

The related DataFrame.join method, uses merge internally for the index-on-index and index-on-column(s) joins, but joins on indexes by default rather than trying to join on common columns (the default behavior for merge). If you are joining on index, you may wish to use DataFrame.join to save yourself some typing.

These two function calls are completely equivalent:

left.join(right, on=key_or_keys)
pd.merge(left, right, left_on=key_or_keys, right_index=True, how='left', sort=False)
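
As a quick runnable check (a sketch with made-up frames, not part of the original answer), the equivalence can be verified directly:

import pandas as pd
from pandas.testing import assert_frame_equal

left = pd.DataFrame({'key1': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'rval': [4, 5]}, index=['foo', 'bar'])

joined = left.join(right, on='key1')
merged = pd.merge(left, right, left_on='key1', right_index=True,
                  how='left', sort=False)
assert_frame_equal(joined, merged)  # passes: the two calls agree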

Answer 2


I believe that join() is just a convenience method. Try df1.merge(df2) instead, which allows you to specify left_on and right_on:

In [30]: left.merge(right, left_on="key1", right_on="key2")
Out[30]: 
  key1  lval key2  rval
0  foo     1  foo     4
1  bar     2  bar     5

Answer 3


From this documentation

pandas provides a single function, merge, as the entry point for all standard database join operations between DataFrame objects:

merge(left, right, how='inner', on=None, left_on=None, right_on=None,
      left_index=False, right_index=False, sort=True,
      suffixes=('_x', '_y'), copy=True, indicator=False)

And :

DataFrame.join is a convenient method for combining the columns of two potentially differently-indexed DataFrames into a single result DataFrame. Here is a very basic example: The data alignment here is on the indexes (row labels). This same behavior can be achieved using merge plus additional arguments instructing it to use the indexes:

result = pd.merge(left, right, left_index=True, right_index=True,
                  how='outer')

Answer 4


One of the differences is that merge creates a new index, while join keeps the left-side index. This can have big consequences for your later transformations if you wrongly assume that your index isn’t changed by merge.

For example:

import pandas as pd

df1 = pd.DataFrame({'org_index': [101, 102, 103, 104],
                    'date': [201801, 201801, 201802, 201802],
                    'val': [1, 2, 3, 4]}, index=[101, 102, 103, 104])
df1

       date  org_index  val
101  201801        101    1
102  201801        102    2
103  201802        103    3
104  201802        104    4

df2 = pd.DataFrame({'date': [201801, 201802], 'dateval': ['A', 'B']}).set_index('date')
df2

       dateval
date          
201801       A
201802       B

df1.merge(df2, on='date')

     date  org_index  val dateval
0  201801        101    1       A
1  201801        102    2       A
2  201802        103    3       B
3  201802        104    4       B

df1.join(df2, on='date')
       date  org_index  val dateval
101  201801        101    1       A
102  201801        102    2       A
103  201802        103    3       B
104  201802        104    4       B
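
If you need merge's column join but want to keep df1's original index, one common workaround (a sketch against the frames above; the column name 'index' is what reset_index produces for an unnamed index) is:

out = (df1.reset_index()
          .merge(df2, on='date')
          .set_index('index'))
out.index.name = None  # drop the helper name left over from reset_index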

Answer 5

  • Join: joins on the index by default (if the two frames share a column name, it will throw an error in default mode because you have not defined lsuffix or rsuffix)
df_1.join(df_2)
  • Merge: joins on same-named columns by default (if there is no common column name, it will throw an error in default mode)
df_1.merge(df_2)
  • The on parameter has a different meaning in the two cases
df_1.merge(df_2, on='column_1')

df_1.join(df_2, on='column_1')  # raises an error: join's on matches df_1['column_1'] against the index of df_2
df_1.join(df_2.set_index('column_1'), on='column_1')

Answer 6


To put it analogously to SQL: “pandas merge is to outer/inner join as pandas join is to natural join”. Hence, when you use merge in pandas, you specify which kind of SQL-ish join you want, whereas when you use pandas join, you really want to have a matching index label to ensure it joins.


Pandas group by and sum

Question: Pandas group by and sum


I am using this data frame:

Fruit   Date      Name  Number
Apples  10/6/2016 Bob    7
Apples  10/6/2016 Bob    8
Apples  10/6/2016 Mike   9
Apples  10/7/2016 Steve 10
Apples  10/7/2016 Bob    1
Oranges 10/7/2016 Bob    2
Oranges 10/6/2016 Tom   15
Oranges 10/6/2016 Mike  57
Oranges 10/6/2016 Bob   65
Oranges 10/7/2016 Tony   1
Grapes  10/7/2016 Bob    1
Grapes  10/7/2016 Tom   87
Grapes  10/7/2016 Bob   22
Grapes  10/7/2016 Bob   12
Grapes  10/7/2016 Tony  15

I want to aggregate this by name and then by fruit to get a total number of fruit per name.

Bob,Apples,16 (for example)

I tried grouping by Name and Fruit, but how do I get the total number of fruit?


Answer 0


Use GroupBy.sum:

df.groupby(['Fruit','Name']).sum()

Out[31]: 
               Number
Fruit   Name         
Apples  Bob        16
        Mike        9
        Steve      10
Grapes  Bob        35
        Tom        87
        Tony       15
Oranges Bob        67
        Mike       57
        Tom        15
        Tony        1

Answer 1


You can also use the agg function:

df.groupby(['Name', 'Fruit'])['Number'].agg('sum')

Answer 2


If you want to keep the original columns Fruit and Name, use reset_index(). Otherwise Fruit and Name will become part of the index.

df.groupby(['Fruit','Name'])['Number'].sum().reset_index()

Fruit   Name       Number
Apples  Bob        16
Apples  Mike        9
Apples  Steve      10
Grapes  Bob        35
Grapes  Tom        87
Grapes  Tony       15
Oranges Bob        67
Oranges Mike       57
Oranges Tom        15
Oranges Tony        1

As seen in the other answers:

df.groupby(['Fruit','Name'])['Number'].sum()

               Number
Fruit   Name         
Apples  Bob        16
        Mike        9
        Steve      10
Grapes  Bob        35
        Tom        87
        Tony       15
Oranges Bob        67
        Mike       57
        Tom        15
        Tony        1

Answer 3


Both the other answers accomplish what you want.

You can use the pivot functionality to arrange the data in a nice table

df.groupby(['Fruit','Name'], as_index=False).sum().pivot(index='Fruit', columns='Name').fillna(0)



Name    Bob     Mike    Steve   Tom    Tony
Fruit                   
Apples  16.0    9.0     10.0    0.0     0.0
Grapes  35.0    0.0     0.0     87.0    15.0
Oranges 67.0    57.0    0.0     15.0    1.0
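
The same table can also be produced in one step with pivot_table; a sketch:

df.pivot_table(index='Fruit', columns='Name', values='Number',
               aggfunc='sum', fill_value=0)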

Answer 4


df.groupby(['Fruit','Name'])['Number'].sum()

You can select different columns to sum over in the same way.


Answer 5


You can set the groupby columns as the index, then use sum with the level argument (note that sum(level=...) is deprecated in recent pandas in favor of groupby(level=[0, 1]).sum()):

df.set_index(['Fruit','Name']).sum(level=[0,1])
Out[175]: 
               Number
Fruit   Name         
Apples  Bob        16
        Mike        9
        Steve      10
Oranges Bob        67
        Tom        15
        Mike       57
        Tony        1
Grapes  Bob        35
        Tom        87
        Tony       15

Answer 6


A variation on the .agg() function; it provides the ability to (1) keep the result as a DataFrame, (2) apply averages, counts, summations, etc., and (3) group by multiple columns while maintaining legibility.

df.groupby(['att1', 'att2']).agg({'att1': "count", 'att3': "sum",'att4': 'mean'})

Using your values:

df.groupby(['Name', 'Fruit']).agg({'Number': "sum"})
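
On pandas 0.25+, named aggregation expresses the same thing while also letting you name the output column; a sketch using the question's frame (the name Total is made up):

df.groupby(['Name', 'Fruit'], as_index=False).agg(Total=('Number', 'sum'))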

Keep only the date part when using pandas.to_datetime

Question: Keep only the date part when using pandas.to_datetime


I use pandas.to_datetime to parse the dates in my data. Pandas by default represents the dates with datetime64[ns] even though the dates are all daily only. I wonder whether there is an elegant/clever way to convert the dates to datetime.date or datetime64[D] so that, when I write the data to CSV, the dates are not appended with 00:00:00. I know I can convert the type manually element-by-element:

[dt.to_datetime().date() for dt in df.dates]

But this is really slow since I have many rows and it sort of defeats the purpose of using pandas.to_datetime. Is there a way to convert the dtype of the entire column at once? Or alternatively, does pandas.to_datetime support a precision specification so that I can get rid of the time part while working with daily data?


Answer 0


Since version 0.15.0 this can now be easily done using .dt to access just the date component:

df['just_date'] = df['dates'].dt.date

The above returns a column of datetime.date objects (object dtype). If you want to keep a datetime64 dtype instead, you can just normalize the time component to midnight, which sets all the values to 00:00:00:

df['normalised_date'] = df['dates'].dt.normalize()

This keeps the dtype as datetime64, but the display shows just the date value.
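
A tiny sketch (with made-up timestamps, not part of the original answer) of the dtype difference between the two approaches:

import pandas as pd

s = pd.to_datetime(pd.Series(['2015-01-08 22:44:09', '2015-01-09 06:30:00']))
s.dt.date.dtype         # object: plain python datetime.date values
s.dt.normalize().dtype  # datetime64[ns]: times forced to midnight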


Answer 1


Simple Solution:

df['date_only'] = df['date_time_column'].dt.date

Answer 2


While I upvoted EdChum’s answer, which is the most direct answer to the question the OP posed, it does not really solve the performance problem (it still relies on python datetime objects, and hence any operation on them will not be vectorized – that is, it will be slow).

A better performing alternative is to use df['dates'].dt.floor('d'). Strictly speaking, it does not “keep only the date part”, since it just sets the time to 00:00:00. But it does work as desired by the OP when, for instance:

  • printing to screen
  • saving to csv
  • using the column to groupby

… and it is much more efficient, since the operation is vectorized.

EDIT: in fact, the answer the OP would have preferred is probably “recent versions of pandas do not write the time to csv if it is 00:00:00 for all observations”.
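
A minimal sketch of the dt.floor('d') truncation described above (made-up input, not part of the original answer):

import pandas as pd

s = pd.to_datetime(pd.Series(['2015-01-08 22:44:09']))
s.dt.floor('d')  # 2015-01-08 00:00:00; dtype stays datetime64[ns]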


Answer 3


Pandas DatetimeIndex and Series have a method called normalize that does exactly what you want.

You can read more about it in this answer.

It can be used as ser.dt.normalize()


Answer 4


Pandas v0.13+: Use to_csv with date_format parameter

Avoid, where possible, converting your datetime64[ns] series to an object dtype series of datetime.date objects. The latter, often constructed using pd.Series.dt.date, is stored as an array of pointers and is inefficient relative to a pure NumPy-based series.

Since your concern is format when writing to CSV, just use the date_format parameter of to_csv. For example:

df.to_csv(filename, date_format='%Y-%m-%d')

See Python’s strftime directives for formatting conventions.


Answer 5


This is a simple way to extract the date:

import pandas as pd

d='2015-01-08 22:44:09' 
date=pd.to_datetime(d).date()
print(date)

Answer 6


Converting to datetime64[D]:

df.dates.values.astype('M8[D]')

Though re-assigning that to a DataFrame col will revert it back to [ns].

If you wanted actual datetime.date:

import datetime
import numpy as np
dt = pd.DatetimeIndex(df.dates)
dates = np.array([datetime.date(*date_tuple) for date_tuple in zip(dt.year, dt.month, dt.day)])

Answer 7


Just giving a more up-to-date answer in case someone sees this old post.

Adding “utc=False” when converting to datetime will remove the timezone component and keep only the date in a datetime64[ns] data type.

pd.to_datetime(df['Date'], utc=False)

You will be able to save it in excel without getting the error “ValueError: Excel does not support datetimes with timezones. Please ensure that datetimes are timezone unaware before writing to Excel.”



Answer 8


I wanted to be able to change the type for a set of columns in a data frame and then remove the time, keeping only the day. round(), floor(), and ceil() all work:

df[date_columns] = df[date_columns].apply(pd.to_datetime)
df[date_columns] = df[date_columns].apply(lambda t: t.dt.floor('d'))

Split (explode) a pandas DataFrame string entry into separate rows

Question: Split (explode) a pandas DataFrame string entry into separate rows


I have a pandas dataframe in which one column of text strings contains comma-separated values. I want to split each CSV field and create a new row per entry (assume that CSV are clean and need only be split on ‘,’). For example, a should become b:

In [7]: a
Out[7]: 
    var1  var2
0  a,b,c     1
1  d,e,f     2

In [8]: b
Out[8]: 
  var1  var2
0    a     1
1    b     1
2    c     1
3    d     2
4    e     2
5    f     2

So far, I have tried various simple functions, but the .apply method seems to only accept one row as return value when it is used on an axis, and I can’t get .transform to work. Any suggestions would be much appreciated!

Example data:

from pandas import DataFrame
import numpy as np
a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
               {'var1': 'd,e,f', 'var2': 2}])
b = DataFrame([{'var1': 'a', 'var2': 1},
               {'var1': 'b', 'var2': 1},
               {'var1': 'c', 'var2': 1},
               {'var1': 'd', 'var2': 2},
               {'var1': 'e', 'var2': 2},
               {'var1': 'f', 'var2': 2}])

I know this won’t work because we lose DataFrame meta-data by going through numpy, but it should give you a sense of what I tried to do:

def fun(row):
    letters = row['var1']
    letters = letters.split(',')
    out = np.array([row] * len(letters))
    out['var1'] = letters
a['idx'] = range(a.shape[0])
z = a.groupby('idx')
z.transform(fun)

Answer 0


How about something like this:

In [55]: pd.concat([pd.Series(row['var2'], row['var1'].split(','))
                    for _, row in a.iterrows()]).reset_index()
Out[55]: 
  index  0
0     a  1
1     b  1
2     c  1
3     d  2
4     e  2
5     f  2

Then you just have to rename the columns


Answer 1


UPDATE2: more generic vectorized function, which will work for multiple normal and multiple list columns

def explode(df, lst_cols, fill_value='', preserve_index=False):
    # make sure `lst_cols` is list-alike
    if (lst_cols is not None
        and len(lst_cols) > 0
        and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))):
        lst_cols = [lst_cols]
    # all columns except `lst_cols`
    idx_cols = df.columns.difference(lst_cols)
    # calculate lengths of lists
    lens = df[lst_cols[0]].str.len()
    # preserve original index values    
    idx = np.repeat(df.index.values, lens)
    # create "exploded" DF
    res = (pd.DataFrame({
                col:np.repeat(df[col].values, lens)
                for col in idx_cols},
                index=idx)
             .assign(**{col:np.concatenate(df.loc[lens>0, col].values)
                            for col in lst_cols}))
    # append those rows that have empty lists
    if (lens == 0).any():
        # at least one list in cells is empty
        res = (res.append(df.loc[lens==0, idx_cols], sort=False)
                  .fillna(fill_value))
    # revert the original index order
    res = res.sort_index()
    # reset index if requested
    if not preserve_index:        
        res = res.reset_index(drop=True)
    return res

Demo:

Multiple list columns – all list columns must have the same # of elements in each row:

In [134]: df
Out[134]:
   aaa  myid        num          text
0   10     1  [1, 2, 3]  [aa, bb, cc]
1   11     2         []            []
2   12     3     [1, 2]      [cc, dd]
3   13     4         []            []

In [135]: explode(df, ['num','text'], fill_value='')
Out[135]:
   aaa  myid num text
0   10     1   1   aa
1   10     1   2   bb
2   10     1   3   cc
3   11     2
4   12     3   1   cc
5   12     3   2   dd
6   13     4

preserving original index values:

In [136]: explode(df, ['num','text'], fill_value='', preserve_index=True)
Out[136]:
   aaa  myid num text
0   10     1   1   aa
0   10     1   2   bb
0   10     1   3   cc
1   11     2
2   12     3   1   cc
2   12     3   2   dd
3   13     4

Setup:

df = pd.DataFrame({
 'aaa': {0: 10, 1: 11, 2: 12, 3: 13},
 'myid': {0: 1, 1: 2, 2: 3, 3: 4},
 'num': {0: [1, 2, 3], 1: [], 2: [1, 2], 3: []},
 'text': {0: ['aa', 'bb', 'cc'], 1: [], 2: ['cc', 'dd'], 3: []}
})

CSV column:

In [46]: df
Out[46]:
        var1  var2 var3
0      a,b,c     1   XX
1  d,e,f,x,y     2   ZZ

In [47]: explode(df.assign(var1=df.var1.str.split(',')), 'var1')
Out[47]:
  var1  var2 var3
0    a     1   XX
1    b     1   XX
2    c     1   XX
3    d     2   ZZ
4    e     2   ZZ
5    f     2   ZZ
6    x     2   ZZ
7    y     2   ZZ

using this little trick we can convert CSV-like column to list column:

In [48]: df.assign(var1=df.var1.str.split(','))
Out[48]:
              var1  var2 var3
0        [a, b, c]     1   XX
1  [d, e, f, x, y]     2   ZZ

UPDATE: generic vectorized approach (will work also for multiple columns):

Original DF:

In [177]: df
Out[177]:
        var1  var2 var3
0      a,b,c     1   XX
1  d,e,f,x,y     2   ZZ

Solution:

first let’s convert CSV strings to lists:

In [178]: lst_col = 'var1' 

In [179]: x = df.assign(**{lst_col:df[lst_col].str.split(',')})

In [180]: x
Out[180]:
              var1  var2 var3
0        [a, b, c]     1   XX
1  [d, e, f, x, y]     2   ZZ

Now we can do this:

In [181]: pd.DataFrame({
     ...:     col:np.repeat(x[col].values, x[lst_col].str.len())
     ...:     for col in x.columns.difference([lst_col])
     ...: }).assign(**{lst_col:np.concatenate(x[lst_col].values)})[x.columns.tolist()]
     ...:
Out[181]:
  var1  var2 var3
0    a     1   XX
1    b     1   XX
2    c     1   XX
3    d     2   ZZ
4    e     2   ZZ
5    f     2   ZZ
6    x     2   ZZ
7    y     2   ZZ

OLD answer:

Inspired by @AFinkelstein’s solution, I wanted to make it a bit more generalized so it could be applied to a DF with more than two columns, and almost as fast as AFinkelstein’s solution:

In [2]: df = pd.DataFrame(
   ...:    [{'var1': 'a,b,c', 'var2': 1, 'var3': 'XX'},
   ...:     {'var1': 'd,e,f,x,y', 'var2': 2, 'var3': 'ZZ'}]
   ...: )

In [3]: df
Out[3]:
        var1  var2 var3
0      a,b,c     1   XX
1  d,e,f,x,y     2   ZZ

In [4]: (df.set_index(df.columns.drop('var1',1).tolist())
   ...:    .var1.str.split(',', expand=True)
   ...:    .stack()
   ...:    .reset_index()
   ...:    .rename(columns={0:'var1'})
   ...:    .loc[:, df.columns]
   ...: )
Out[4]:
  var1  var2 var3
0    a     1   XX
1    b     1   XX
2    c     1   XX
3    d     2   ZZ
4    e     2   ZZ
5    f     2   ZZ
6    x     2   ZZ
7    y     2   ZZ

Answer 2


After painful experimentation to find something faster than the accepted answer, I got this to work. It ran around 100x faster on the dataset I tried it on.

If someone knows a way to make this more elegant, by all means please modify my code. I couldn’t find a way that works without setting the other columns you want to keep as the index and then resetting the index and re-naming the columns, but I’d imagine there’s something else that works.

b = DataFrame(a.var1.str.split(',').tolist(), index=a.var2).stack()
b = b.reset_index()[[0, 'var2']] # var1 variable is currently labeled 0
b.columns = ['var1', 'var2'] # renaming var1

Answer 3


Here’s a function I wrote for this common task. It’s more efficient than the Series/stack methods. Column order and names are retained.

def tidy_split(df, column, sep='|', keep=False):
    """
    Split the values of a column and expand so the new DataFrame has one split
    value per row. Filters rows where the column is missing.

    Params
    ------
    df : pandas.DataFrame
        dataframe with the column to split and expand
    column : str
        the column to split and expand
    sep : str
        the string used to split the column's values
    keep : bool
        whether to retain the presplit value as its own row

    Returns
    -------
    pandas.DataFrame
        Returns a dataframe with the same columns as `df`.
    """
    indexes = list()
    new_values = list()
    df = df.dropna(subset=[column])
    for i, presplit in enumerate(df[column].astype(str)):
        values = presplit.split(sep)
        if keep and len(values) > 1:
            indexes.append(i)
            new_values.append(presplit)
        for value in values:
            indexes.append(i)
            new_values.append(value)
    new_df = df.iloc[indexes, :].copy()
    new_df[column] = new_values
    return new_df

With this function, the original question is as simple as:

tidy_split(a, 'var1', sep=',')

Answer 4


Pandas >= 0.25

Series and DataFrame now define an .explode() method that explodes lists into separate rows. See the docs section on Exploding a list-like column.

Since you have a list of comma separated strings, split the string on comma to get a list of elements, then call explode on that column.

df = pd.DataFrame({'var1': ['a,b,c', 'd,e,f'], 'var2': [1, 2]})
df
    var1  var2
0  a,b,c     1
1  d,e,f     2

df.assign(var1=df['var1'].str.split(',')).explode('var1')

  var1  var2
0    a     1
0    b     1
0    c     1
1    d     2
1    e     2
1    f     2

Note that explode only works on a single column (for now).


NaNs and empty lists get the treatment they deserve without you having to jump through hoops to get it right.

df = pd.DataFrame({'var1': ['d,e,f', '', np.nan], 'var2': [1, 2, 3]})
df
    var1  var2
0  d,e,f     1
1            2
2    NaN     3

df['var1'].str.split(',')

0    [d, e, f]
1           []
2          NaN

df.assign(var1=df['var1'].str.split(',')).explode('var1')

  var1  var2
0    d     1
0    e     1
0    f     1
1          2  # empty list entry becomes empty string after exploding 
2  NaN     3  # NaN left un-touched

This is a serious advantage over ravel + repeat -based solutions (which ignore empty lists completely, and choke on NaNs).


Answer 5


Similar question as: pandas: How do I split text in a column into multiple rows?

You could do:

>> a=pd.DataFrame({"var1":"a,b,c d,e,f".split(),"var2":[1,2]})
>> s = a.var1.str.split(",").apply(pd.Series, 1).stack()
>> s.index = s.index.droplevel(-1)
>> del a['var1']
>> a.join(s)
   var2 var1
0     1    a
0     1    b
0     1    c
1     2    d
1     2    e
1     2    f

Answer 6


TL;DR

import pandas as pd
import numpy as np

def explode_str(df, col, sep):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
    return df.iloc[i].assign(**{col: sep.join(s).split(sep)})

def explode_list(df, col):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.len())
    return df.iloc[i].assign(**{col: np.concatenate(s)})

Demonstration

explode_str(a, 'var1', ',')

  var1  var2
0    a     1
0    b     1
0    c     1
1    d     2
1    e     2
1    f     2

Let’s create a new dataframe d that has lists

d = a.assign(var1=lambda d: d.var1.str.split(','))

explode_list(d, 'var1')

  var1  var2
0    a     1
0    b     1
0    c     1
1    d     2
1    e     2
1    f     2

General Comments

I’ll use np.arange with repeat to produce dataframe index positions that I can use with iloc.

FAQ

Why don’t I use loc?

Because the index may not be unique and using loc will return every row that matches a queried index.

Why don’t you use the values attribute and slice that?

When calling values, if the entirety of the dataframe is in one cohesive “block”, Pandas will return a view of the array that is the “block”. Otherwise Pandas will have to cobble together a new array. When cobbling, that array must be of a uniform dtype. Often that means returning an array with dtype that is object. By using iloc instead of slicing the values attribute, I save myself from having to deal with that.

Why do you use assign?

When I use assign using the same column name that I’m exploding, I overwrite the existing column and maintain its position in the dataframe.

Why are the index values repeated?

By virtue of using iloc on repeated positions, the resulting index shows the same repeated pattern: one repeat for each element in the list or string.
This can be reset with reset_index(drop=True).


For Strings

I don’t want to have to split the strings prematurely. So instead I count the occurrences of the sep argument assuming that if I were to split, the length of the resulting list would be one more than the number of separators.

I then use that sep to join the strings and then split the result.

def explode_str(df, col, sep):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
    return df.iloc[i].assign(**{col: sep.join(s).split(sep)})

For Lists

Similar to the string case, except I don’t need to count occurrences of sep because it’s already split.

I use Numpy’s concatenate to jam the lists together.

import pandas as pd
import numpy as np

def explode_list(df, col):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.len())
    return df.iloc[i].assign(**{col: np.concatenate(s)})


Answer 7


There is a possibility to split and explode the dataframe without changing the structure of dataframe

Split and expand data of specific columns

Input:

    var1    var2
0   a,b,c   1
1   d,e,f   2



#Split the strings into lists, then explode the lists into rows
df['var1'] = df['var1'].str.split(',')
df = df.explode('var1')

Out:

    var1    var2
0   a   1
0   b   1
0   c   1
1   d   2
1   e   2
1   f   2

Edit-1

Split and Expand of rows for Multiple columns

Filename    RGB                                             RGB_type
0   A   [[0, 1650, 6, 39], [0, 1691, 1, 59], [50, 1402...   [r, g, b]
1   B   [[0, 1423, 16, 38], [0, 1445, 16, 46], [0, 141...   [r, g, b]

Re-indexing based on the reference column and aligning the column value information with stack

df = df.reindex(df.index.repeat(df['RGB_type'].apply(len)))
df = df.groupby('Filename').apply(lambda x:x.apply(lambda y: pd.Series(y.iloc[0])))
df.reset_index(drop=True).ffill()

Out:

                Filename    RGB_type    Top 1 colour    Top 1 frequency Top 2 colour    Top 2 frequency
    Filename                            
 A  0       A   r   0   1650    6   39
    1       A   g   0   1691    1   59
    2       A   b   50  1402    49  187
 B  0       B   r   0   1423    16  38
    1       B   g   0   1445    16  46
    2       B   b   0   1419    16  39
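
As a side note, pandas 1.3+ lets DataFrame.explode take a list of columns directly, which can stand in for the groupby trick above when each row’s lists are equal-length. A sketch under that assumption:

df2 = pd.DataFrame({'A': [[1, 2], [3]], 'B': [['a', 'b'], ['c']]})
df2.explode(['A', 'B'])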

回答 8

我想出了一种针对具有任意列数的数据框的解决方案(尽管仍然一次只分隔一个列的条目)。

import pandas

def splitDataFrameList(df,target_column,separator):
    ''' df = dataframe to split,
    target_column = the column containing the values to split
    separator = the symbol used to perform the split

    returns: a dataframe with each entry for the target column separated, with each element moved into a new row. 
    The values in the other columns are duplicated across the newly divided rows.
    '''
    def splitListToRows(row,row_accumulator,target_column,separator):
        split_row = row[target_column].split(separator)
        for s in split_row:
            new_row = row.to_dict()
            new_row[target_column] = s
            row_accumulator.append(new_row)
    new_rows = []
    df.apply(splitListToRows,axis=1,args = (new_rows,target_column,separator))
    new_df = pandas.DataFrame(new_rows)
    return new_df

I came up with a solution for dataframes with arbitrary numbers of columns (while still only separating one column’s entries at a time).

import pandas

def splitDataFrameList(df,target_column,separator):
    ''' df = dataframe to split,
    target_column = the column containing the values to split
    separator = the symbol used to perform the split

    returns: a dataframe with each entry for the target column separated, with each element moved into a new row. 
    The values in the other columns are duplicated across the newly divided rows.
    '''
    def splitListToRows(row,row_accumulator,target_column,separator):
        split_row = row[target_column].split(separator)
        for s in split_row:
            new_row = row.to_dict()
            new_row[target_column] = s
            row_accumulator.append(new_row)
    new_rows = []
    df.apply(splitListToRows,axis=1,args = (new_rows,target_column,separator))
    new_df = pandas.DataFrame(new_rows)
    return new_df

回答 9

这是一种相当直接的方法:先使用 pandas str 访问器中的 split 方法,然后使用 NumPy 将每一行展平为单个数组。

通过 np.repeat 将未拆分的列重复正确的次数,来得到对应的值。

var1 = df.var1.str.split(',', expand=True).values.ravel()
var2 = np.repeat(df.var2.values, len(var1) // len(df))  # integer division; assumes uniform splits

pd.DataFrame({'var1': var1,
              'var2': var2})

  var1  var2
0    a     1
1    b     1
2    c     1
3    d     2
4    e     2
5    f     2

Here is a fairly straightforward approach that uses the split method from the pandas str accessor and then uses NumPy to flatten each row into a single array.

The corresponding values are retrieved by repeating the non-split column the correct number of times with np.repeat.

var1 = df.var1.str.split(',', expand=True).values.ravel()
var2 = np.repeat(df.var2.values, len(var1) // len(df))  # integer division; assumes uniform splits

pd.DataFrame({'var1': var1,
              'var2': var2})

  var1  var2
0    a     1
1    b     1
2    c     1
3    d     2
4    e     2
5    f     2
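
One caveat: the np.repeat division trick assumes every row splits into the same number of parts. A sketch for ragged splits (same df and imports assumed):

counts = df.var1.str.count(',') + 1
var1 = df.var1.str.cat(sep=',').split(',')
var2 = np.repeat(df.var2.values, counts)
pd.DataFrame({'var1': var1, 'var2': var2})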

回答 10

我一直在用各种方式来爆炸我的列表,以解决内存不足的问题,因此我准备了一些基准测试来帮助我决定支持哪些答案。我测试了五种情况,它们的列表长度与列表数量的比例不同。分享以下结果:

时间:(越少越好,点击查看大图)

速度

峰值内存使用情况:(越少越好)

峰值内存使用

结论

  • @MaxU的答案(更新2),代号concatenate,在几乎每种情况下都能提供最佳速度,同时保持较低的峰值内存使用率,
  • 如果您需要处理具有相对较小列表的许多行并且可以提供更大的峰值内存,请参见@DMulligan的答案(代号堆栈),
  • 对于行数少但列表很大的数据帧,可接受的@Chang答案很好。

完整的细节(功能和基准代码)在GitHub gist中。请注意,基准测试问题已得到简化,并且不包括将字符串拆分为列表-大多数解决方案都以类似的方式执行。

I have been struggling with out-of-memory experience using various way to explode my lists so I prepared some benchmarks to help me decide which answers to upvote. I tested five scenarios with varying proportions of the list length to the number of lists. Sharing the results below:

Time: (less is better, click to view large version)

Speed

Peak memory usage: (less is better)

Peak memory usage

Conclusions:

  • @MaxU’s answer (update 2), codename concatenate, offers the best speed in almost every case, while keeping the peak memory usage low,
  • see @DMulligan’s answer (codename stack) if you need to process lots of rows with relatively small lists and can afford increased peak memory,
  • the accepted @Chang’s answer works well for data frames that have a few rows but very large lists.

Full details (functions and benchmarking code) are in this GitHub gist. Please note that the benchmark problem was simplified and did not include splitting of strings into the list – which most solutions performed in a similar fashion.


回答 11

基于出色的@DMulligan 解决方案,这是一个通用的矢量化(无循环)函数,可将数据帧的一列拆分为多行,再将其合并回原始数据帧。它还使用了来自另一个答案的很棒的通用函数 change_column_order。

def change_column_order(df, col_name, index):
    cols = df.columns.tolist()
    cols.remove(col_name)
    cols.insert(index, col_name)
    return df[cols]

def split_df(dataframe, col_name, sep):
    orig_col_index = dataframe.columns.tolist().index(col_name)
    orig_index_name = dataframe.index.name
    orig_columns = dataframe.columns
    dataframe = dataframe.reset_index()  # we need a natural 0-based index for proper merge
    index_col_name = (set(dataframe.columns) - set(orig_columns)).pop()
    df_split = pd.DataFrame(
        pd.DataFrame(dataframe[col_name].str.split(sep).tolist())
        .stack().reset_index(level=1, drop=1), columns=[col_name])
    df = dataframe.drop(col_name, axis=1)
    df = pd.merge(df, df_split, left_index=True, right_index=True, how='inner')
    df = df.set_index(index_col_name)
    df.index.name = orig_index_name
    # merge adds the column to the last place, so we need to move it back
    return change_column_order(df, col_name, orig_col_index)

例:

df = pd.DataFrame([['a:b', 1, 4], ['c:d', 2, 5], ['e:f:g:h', 3, 6]], 
                  columns=['Name', 'A', 'B'], index=[10, 12, 13])
df
        Name    A   B
    10   a:b     1   4
    12   c:d     2   5
    13   e:f:g:h 3   6

split_df(df, 'Name', ':')
    Name    A   B
10   a       1   4
10   b       1   4
12   c       2   5
12   d       2   5
13   e       3   6
13   f       3   6    
13   g       3   6    
13   h       3   6    

请注意,它保留了列的原始索引和顺序。它也适用于具有非顺序索引的数据帧。

Based on the excellent @DMulligan’s solution, here is a generic vectorized (no loops) function which splits a column of a dataframe into multiple rows, and merges it back to the original dataframe. It also uses a great generic change_column_order function from this answer.

def change_column_order(df, col_name, index):
    cols = df.columns.tolist()
    cols.remove(col_name)
    cols.insert(index, col_name)
    return df[cols]

def split_df(dataframe, col_name, sep):
    orig_col_index = dataframe.columns.tolist().index(col_name)
    orig_index_name = dataframe.index.name
    orig_columns = dataframe.columns
    dataframe = dataframe.reset_index()  # we need a natural 0-based index for proper merge
    index_col_name = (set(dataframe.columns) - set(orig_columns)).pop()
    df_split = pd.DataFrame(
        pd.DataFrame(dataframe[col_name].str.split(sep).tolist())
        .stack().reset_index(level=1, drop=1), columns=[col_name])
    df = dataframe.drop(col_name, axis=1)
    df = pd.merge(df, df_split, left_index=True, right_index=True, how='inner')
    df = df.set_index(index_col_name)
    df.index.name = orig_index_name
    # merge adds the column to the last place, so we need to move it back
    return change_column_order(df, col_name, orig_col_index)

Example:

df = pd.DataFrame([['a:b', 1, 4], ['c:d', 2, 5], ['e:f:g:h', 3, 6]], 
                  columns=['Name', 'A', 'B'], index=[10, 12, 13])
df
        Name    A   B
    10   a:b     1   4
    12   c:d     2   5
    13   e:f:g:h 3   6

split_df(df, 'Name', ':')
    Name    A   B
10   a       1   4
10   b       1   4
12   c       2   5
12   d       2   5
13   e       3   6
13   f       3   6    
13   g       3   6    
13   h       3   6    

Note that it preserves the original index and order of the columns. It also works with dataframes which have non-sequential index.


回答 12

字符串函数 split 可以接受一个可选的布尔参数 expand。

这是使用此参数的解决方案:

(a.var1
  .str.split(",",expand=True)
  .set_index(a.var2)
  .stack()
  .reset_index(level=1, drop=True)
  .reset_index()
  .rename(columns={0:"var1"}))

The string function split can take an optional boolean argument ‘expand’.

Here is a solution using this argument:

(a.var1
  .str.split(",",expand=True)
  .set_index(a.var2)
  .stack()
  .reset_index(level=1, drop=True)
  .reset_index()
  .rename(columns={0:"var1"}))

回答 13

只是从上面使用了jiln的出色答案,但需要扩展以拆分多列。以为我会分享。

def splitDataFrameList(df, target_columns, separator):
    ''' df = dataframe to split,
    target_columns = the columns containing the values to split
    separator = the symbol used to perform the split

    returns: a dataframe with each entry for the target columns separated, with each element moved into a new row. 
    The values in the other columns are duplicated across the newly divided rows.
    '''
    def splitListToRows(row, row_accumulator, target_columns, separator):
        split_rows = []
        for target_column in target_columns:
            split_rows.append(row[target_column].split(separator))
        # Separate for multiple columns
        for i in range(len(split_rows[0])):
            new_row = row.to_dict()
            for j in range(len(split_rows)):
                new_row[target_columns[j]] = split_rows[j][i]
            row_accumulator.append(new_row)
    new_rows = []
    df.apply(splitListToRows, axis=1, args=(new_rows, target_columns, separator))
    new_df = pd.DataFrame(new_rows)
    return new_df

Just used jiln’s excellent answer from above, but needed to expand to split multiple columns. Thought I would share.

def splitDataFrameList(df, target_columns, separator):
    ''' df = dataframe to split,
    target_columns = the columns containing the values to split
    separator = the symbol used to perform the split

    returns: a dataframe with each entry for the target columns separated, with each element moved into a new row. 
    The values in the other columns are duplicated across the newly divided rows.
    '''
    def splitListToRows(row, row_accumulator, target_columns, separator):
        split_rows = []
        for target_column in target_columns:
            split_rows.append(row[target_column].split(separator))
        # Separate for multiple columns
        for i in range(len(split_rows[0])):
            new_row = row.to_dict()
            for j in range(len(split_rows)):
                new_row[target_columns[j]] = split_rows[j][i]
            row_accumulator.append(new_row)
    new_rows = []
    df.apply(splitListToRows, axis=1, args=(new_rows, target_columns, separator))
    new_df = pd.DataFrame(new_rows)
    return new_df

回答 14

为 MaxU 的答案增加了 MultiIndex 支持

def explode(df, lst_cols, fill_value='', preserve_index=False):
    """
    usage:
        In [134]: df
        Out[134]:
           aaa  myid        num          text
        0   10     1  [1, 2, 3]  [aa, bb, cc]
        1   11     2         []            []
        2   12     3     [1, 2]      [cc, dd]
        3   13     4         []            []

        In [135]: explode(df, ['num','text'], fill_value='')
        Out[135]:
           aaa  myid num text
        0   10     1   1   aa
        1   10     1   2   bb
        2   10     1   3   cc
        3   11     2
        4   12     3   1   cc
        5   12     3   2   dd
        6   13     4
    """
    # make sure `lst_cols` is list-alike
    if (lst_cols is not None
        and len(lst_cols) > 0
        and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))):
        lst_cols = [lst_cols]
    # all columns except `lst_cols`
    idx_cols = df.columns.difference(lst_cols)
    # calculate lengths of lists
    lens = df[lst_cols[0]].str.len()
    # preserve original index values    
    idx = np.repeat(df.index.values, lens)
    res = (pd.DataFrame({
                col:np.repeat(df[col].values, lens)
                for col in idx_cols},
                index=idx)
             .assign(**{col:np.concatenate(df.loc[lens>0, col].values)
                            for col in lst_cols}))
    # append those rows that have empty lists
    if (lens == 0).any():
        # at least one list in cells is empty
        res = (res.append(df.loc[lens==0, idx_cols], sort=False)
                  .fillna(fill_value))
    # revert the original index order
    res = res.sort_index()
    # reset index if requested
    if not preserve_index:        
        res = res.reset_index(drop=True)

    # if original index is MultiIndex build the dataframe from the multiindex
    # create "exploded" DF
    if isinstance(df.index, pd.MultiIndex):
        res = res.reindex(
            index=pd.MultiIndex.from_tuples(
                res.index,
                names=['number', 'color']
            )
    )
    return res

upgraded MaxU’s answer with MultiIndex support

def explode(df, lst_cols, fill_value='', preserve_index=False):
    """
    usage:
        In [134]: df
        Out[134]:
           aaa  myid        num          text
        0   10     1  [1, 2, 3]  [aa, bb, cc]
        1   11     2         []            []
        2   12     3     [1, 2]      [cc, dd]
        3   13     4         []            []

        In [135]: explode(df, ['num','text'], fill_value='')
        Out[135]:
           aaa  myid num text
        0   10     1   1   aa
        1   10     1   2   bb
        2   10     1   3   cc
        3   11     2
        4   12     3   1   cc
        5   12     3   2   dd
        6   13     4
    """
    # make sure `lst_cols` is list-alike
    if (lst_cols is not None
        and len(lst_cols) > 0
        and not isinstance(lst_cols, (list, tuple, np.ndarray, pd.Series))):
        lst_cols = [lst_cols]
    # all columns except `lst_cols`
    idx_cols = df.columns.difference(lst_cols)
    # calculate lengths of lists
    lens = df[lst_cols[0]].str.len()
    # preserve original index values    
    idx = np.repeat(df.index.values, lens)
    res = (pd.DataFrame({
                col:np.repeat(df[col].values, lens)
                for col in idx_cols},
                index=idx)
             .assign(**{col:np.concatenate(df.loc[lens>0, col].values)
                            for col in lst_cols}))
    # append those rows that have empty lists
    if (lens == 0).any():
        # at least one list in cells is empty
        res = (res.append(df.loc[lens==0, idx_cols], sort=False)
                  .fillna(fill_value))
    # revert the original index order
    res = res.sort_index()
    # reset index if requested
    if not preserve_index:        
        res = res.reset_index(drop=True)

    # if original index is MultiIndex build the dataframe from the multiindex
    # create "exploded" DF
    if isinstance(df.index, pd.MultiIndex):
        res = res.reindex(
            index=pd.MultiIndex.from_tuples(
                res.index,
                names=['number', 'color']
            )
    )
    return res
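
A quick usage sketch, rebuilding the frame from the docstring (pd/np imports assumed; note that on pandas 2.0+ the res.append(...) call inside would need to become pd.concat([...]), since DataFrame.append was removed):

df = pd.DataFrame({'aaa': [10, 11, 12, 13],
                   'myid': [1, 2, 3, 4],
                   'num': [[1, 2, 3], [], [1, 2], []],
                   'text': [['aa', 'bb', 'cc'], [], ['cc', 'dd'], []]})
explode(df, ['num', 'text'], fill_value='')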

回答 15

单行解法:使用 split(___, expand=True) 以及 reset_index() 的 level 和 name 参数:

>>> b = a.var1.str.split(',', expand=True).set_index(a.var2).stack().reset_index(level=0, name='var1')
>>> b
   var2 var1
0     1    a
1     1    b
2     1    c
0     2    d
1     2    e
2     2    f

如果您需要b看起来像问题中的样子,还可以执行以下操作:

>>> b = b.reset_index(drop=True)[['var1', 'var2']]
>>> b
  var1  var2
0    a     1
1    b     1
2    c     1
3    d     2
4    e     2
5    f     2

One-liner using split(___, expand=True) and the level and name arguments to reset_index():

>>> b = a.var1.str.split(',', expand=True).set_index(a.var2).stack().reset_index(level=0, name='var1')
>>> b
   var2 var1
0     1    a
1     1    b
2     1    c
0     2    d
1     2    e
2     2    f

If you need b to look exactly like in the question, you can additionally do:

>>> b = b.reset_index(drop=True)[['var1', 'var2']]
>>> b
  var1  var2
0    a     1
1    b     1
2    c     1
3    d     2
4    e     2
5    f     2

回答 16

我针对此问题提出了以下解决方案:

def iter_var1(d):
    for _, row in d.iterrows():
        for v in row["var1"].split(","):
            yield (v, row["var2"])

new_a = DataFrame.from_records([i for i in iter_var1(a)],
        columns=["var1", "var2"])

I have come up with the following solution to this problem:

def iter_var1(d):
    for _, row in d.iterrows():
        for v in row["var1"].split(","):
            yield (v, row["var2"])

new_a = DataFrame.from_records([i for i in iter_var1(a)],
        columns=["var1", "var2"])

回答 17

使用python复制包的另一种解决方案

import copy
import pandas as pd
def pandas_explode(df, column_to_explode):
    new_observations = list()
    for row in df.to_dict(orient='records'):
        explode_values = row[column_to_explode]
        del row[column_to_explode]
        if type(explode_values) is list or type(explode_values) is tuple:
            for explode_value in explode_values:
                new_observation = copy.deepcopy(row)
                new_observation[column_to_explode] = explode_value
                new_observations.append(new_observation) 
        else:
            new_observation = copy.deepcopy(row)
            new_observation[column_to_explode] = explode_values
            new_observations.append(new_observation) 
    return_df = pd.DataFrame(new_observations)
    return return_df

df = pandas_explode(df, column_name)

Another solution that uses python copy package

import copy
import pandas as pd
def pandas_explode(df, column_to_explode):
    new_observations = list()
    for row in df.to_dict(orient='records'):
        explode_values = row[column_to_explode]
        del row[column_to_explode]
        if type(explode_values) is list or type(explode_values) is tuple:
            for explode_value in explode_values:
                new_observation = copy.deepcopy(row)
                new_observation[column_to_explode] = explode_value
                new_observations.append(new_observation) 
        else:
            new_observation = copy.deepcopy(row)
            new_observation[column_to_explode] = explode_values
            new_observations.append(new_observation) 
    return_df = pd.DataFrame(new_observations)
    return return_df

df = pandas_explode(df, column_name)

回答 18

这里有很多答案,但令我惊讶的是,没有人提到内置的熊猫 explode 函数。查看以下链接: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html#pandas.DataFrame.explode

由于某种原因,我无法访问该功能,因此我使用了以下代码:

import pandas_explode
pandas_explode.patch()
df_zlp_people_cnt3 = df_zlp_people_cnt2.explode('people')

[图片:示例数据]

以上是我的数据示例。如您所见,“ 人员”列中有一系列人员,而我正试图使其爆炸。我给的代码适用于列表类型数据。因此,请尝试将以逗号分隔的文本数据转换为列表格式。另外,由于我的代码使用内置函数,因此它比自定义/应用函数快得多。

注意:您可能需要使用pip安装pandas_explode。

There are a lot of answers here but I’m surprised no one has mentioned the built in pandas explode function. Check out the link below: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html#pandas.DataFrame.explode

For some reason I was unable to access that function, so I used the below code:

import pandas_explode
pandas_explode.patch()
df_zlp_people_cnt3 = df_zlp_people_cnt2.explode('people')

[image: sample of the data]

Above is a sample of my data. As you can see the people column had series of people, and I was trying to explode it. The code I have given works for list type data. So try to get your comma separated text data into list format. Also since my code uses built in functions, it is much faster than custom/apply functions.

Note: You may need to install pandas_explode with pip.
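
If the people column still holds comma-separated strings rather than lists, a minimal sketch using only built-in pandas (0.25+ assumed) would be:

df_zlp_people_cnt2['people'] = df_zlp_people_cnt2['people'].str.split(',')
df_zlp_people_cnt3 = df_zlp_people_cnt2.explode('people')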


回答 19

我有一个类似的问题,我的解决方案是先将数据框转换为字典列表,然后进行转换。这是函数:

import copy
import re
import pandas as pd

def separate_row(df, column_name):
    ls = []
    for row_dict in df.to_dict('records'):
        for word in re.split(',', row_dict[column_name]):
            row = copy.deepcopy(row_dict)
            row[column_name]=word
            ls.append(row)
    return pd.DataFrame(ls)

例:

>>> from pandas import DataFrame
>>> import numpy as np
>>> a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
               {'var1': 'd,e,f', 'var2': 2}])
>>> a
    var1  var2
0  a,b,c     1
1  d,e,f     2
>>> separate_row(a, "var1")
  var1  var2
0    a     1
1    b     1
2    c     1
3    d     2
4    e     2
5    f     2

您也可以稍微更改功能以支持分隔列表类型的行。

I had a similar problem, my solution was converting the dataframe to a list of dictionaries first, then do the transition. Here is the function:

import re
import pandas as pd

def separate_row(df, column_name):
    ls = []
    for row_dict in df.to_dict('records'):
        for word in re.split(',', row_dict[column_name]):
            row = row_dict.copy()
            row[column_name]=word
            ls.append(row)
    return pd.DataFrame(ls)

Example:

>>> from pandas import DataFrame
>>> import numpy as np
>>> a = DataFrame([{'var1': 'a,b,c', 'var2': 1},
               {'var1': 'd,e,f', 'var2': 2}])
>>> a
    var1  var2
0  a,b,c     1
1  d,e,f     2
>>> separate_row(a, "var1")
  var1  var2
0    a     1
1    b     1
2    c     1
3    d     2
4    e     2
5    f     2

You can also change the function a bit to support separating list type rows.
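
A sketch of that variation (hypothetical, not the author’s code): branch on the value type so lists pass through unsplit:

def separate_row_any(df, column_name):
    ls = []
    for row_dict in df.to_dict('records'):
        values = row_dict[column_name]
        items = values if isinstance(values, list) else re.split(',', values)
        for item in items:
            row = row_dict.copy()
            row[column_name] = item
            ls.append(row)
    return pd.DataFrame(ls)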


如何将一列分为两列?

问题:如何将一列分为两列?

我有一个带有一列的数据框,我想将其分为两列,其中一列标题为’ fips',另一列为'row'

我的数据框df如下所示:

          row
0    00000 UNITED STATES
1    01000 ALABAMA
2    01001 Autauga County, AL
3    01003 Baldwin County, AL
4    01005 Barbour County, AL

我不知道如何使用 df.row.str[:] 来达到拆分 row 单元格的目的。我可以使用 df['fips'] = hello 来添加一个新列并用 hello 填充它。有什么想法吗?

         fips       row
0    00000 UNITED STATES
1    01000 ALABAMA 
2    01001 Autauga County, AL
3    01003 Baldwin County, AL
4    01005 Barbour County, AL

I have a data frame with one (string) column and I’d like to split it into two (string) columns, with one column header as ‘fips' and the other 'row'

My dataframe df looks like this:

          row
0    00000 UNITED STATES
1    01000 ALABAMA
2    01001 Autauga County, AL
3    01003 Baldwin County, AL
4    01005 Barbour County, AL

I do not know how to use df.row.str[:] to achieve my goal of splitting the row cell. I can use df['fips'] = hello to add a new column and populate it with hello. Any ideas?

         fips       row
0    00000 UNITED STATES
1    01000 ALABAMA 
2    01001 Autauga County, AL
3    01003 Baldwin County, AL
4    01005 Barbour County, AL

回答 0

也许有更好的方法,但这是一种方法:

                            row
    0       00000 UNITED STATES
    1             01000 ALABAMA
    2  01001 Autauga County, AL
    3  01003 Baldwin County, AL
    4  01005 Barbour County, AL
df = pd.DataFrame(df.row.str.split(' ',1).tolist(),
                                 columns = ['fips','row'])
    fips                 row
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

There might be a better way, but this here’s one approach:

                            row
    0       00000 UNITED STATES
    1             01000 ALABAMA
    2  01001 Autauga County, AL
    3  01003 Baldwin County, AL
    4  01005 Barbour County, AL
df = pd.DataFrame(df.row.str.split(' ',1).tolist(),
                                 columns = ['fips','row'])
    fips                 row
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

回答 1

TL; DR版本:

对于以下简单情况:

  • 我有一个带有定界符的文本列,我想要两列

最简单的解决方案是:

df[['A', 'B']] = df['AB'].str.split(' ', n=1, expand=True)

expand=True 会为拆分出的每个部分各生成一列;如果字符串的分割数不一致,并且要用 None 替换缺失的值,则必须使用 expand=True。

请注意,无论哪种情况,都不需要 .tolist() 方法,也不需要 zip()。

详细地:

安迪·海登(Andy Hayden)的解决方案最能证明该str.extract()方法的强大功能。

但是对于在已知分隔符上的简单拆分(例如,用破折号拆分或通过空格拆分),该.str.split()方法就足够了1。它对字符串的一列(系列)进行操作,并返回列表的一列(系列):

>>> import pandas as pd
>>> df = pd.DataFrame({'AB': ['A1-B1', 'A2-B2']})
>>> df

      AB
0  A1-B1
1  A2-B2
>>> df['AB_split'] = df['AB'].str.split('-')
>>> df

      AB  AB_split
0  A1-B1  [A1, B1]
1  A2-B2  [A2, B2]

1:如果不确定 .str.split() 前两个参数的作用,我建议参考该方法纯 Python 版本的文档。

但是你如何去做:

  • 包含两个元素的列表的列

至:

  • 两列,每列包含列表的相应元素?

好吧,我们需要仔细查看.str列的属性。

这是一个神奇的对象,用于收集将列中的每个元素视为字符串的方法,然后在每个元素中尽可能有效地应用相应的方法:

>>> upper_lower_df = pd.DataFrame({"U": ["A", "B", "C"]})
>>> upper_lower_df

   U
0  A
1  B
2  C
>>> upper_lower_df["L"] = upper_lower_df["U"].str.lower()
>>> upper_lower_df

   U  L
0  A  a
1  B  b
2  C  c

但是它也有一个“索引”接口,用于通过其索引获取字符串的每个元素:

>>> df['AB'].str[0]

0    A
1    A
Name: AB, dtype: object

>>> df['AB'].str[1]

0    1
1    2
Name: AB, dtype: object

当然,只要元素可以被索引,.str 的这个索引接口并不真正在乎它所索引的每个元素是否真的是字符串,因此:

>>> df['AB'].str.split('-', 1).str[0]

0    A1
1    A2
Name: AB, dtype: object

>>> df['AB'].str.split('-', 1).str[1]

0    B1
1    B2
Name: AB, dtype: object

然后,只需利用Python元组对可迭代对象进行拆包即可

>>> df['A'], df['B'] = df['AB'].str.split('-', 1).str
>>> df

      AB  AB_split   A   B
0  A1-B1  [A1, B1]  A1  B1
1  A2-B2  [A2, B2]  A2  B2

当然,从拆分一列字符串中获取一个DataFrame非常有用,以至于该.str.split()方法可以通过expand=True参数为您做到这一点:

>>> df['AB'].str.split('-', 1, expand=True)

    0   1
0  A1  B1
1  A2  B2

因此,完成我们想要的工作的另一种方法是:

>>> df = df[['AB']]
>>> df

      AB
0  A1-B1
1  A2-B2

>>> df.join(df['AB'].str.split('-', 1, expand=True).rename(columns={0:'A', 1:'B'}))

      AB   A   B
0  A1-B1  A1  B1
1  A2-B2  A2  B2

expand=True版本虽然较长,但与元组拆包方法相比具有明显的优势。元组解压缩不能很好地处理不同长度的拆分:

>>> df = pd.DataFrame({'AB': ['A1-B1', 'A2-B2', 'A3-B3-C3']})
>>> df
         AB
0     A1-B1
1     A2-B2
2  A3-B3-C3
>>> df['A'], df['B'], df['C'] = df['AB'].str.split('-')
Traceback (most recent call last):
  [...]    
ValueError: Length of values does not match length of index
>>> 

但是 expand=True 会在没有足够“拆分”的列中放置 None,从而很好地处理这种情况:

>>> df.join(
...     df['AB'].str.split('-', expand=True).rename(
...         columns={0:'A', 1:'B', 2:'C'}
...     )
... )
         AB   A   B     C
0     A1-B1  A1  B1  None
1     A2-B2  A2  B2  None
2  A3-B3-C3  A3  B3    C3

TL;DR version:

For the simple case of:

  • I have a text column with a delimiter and I want two columns

The simplest solution is:

df[['A', 'B']] = df['AB'].str.split(' ', n=1, expand=True)

You must use expand=True if your strings have a non-uniform number of splits and you want None to replace the missing values.

Notice how, in either case, the .tolist() method is not necessary. Neither is zip().

In detail:

Andy Hayden’s solution is most excellent in demonstrating the power of the str.extract() method.

But for a simple split over a known separator (like, splitting by dashes, or splitting by whitespace), the .str.split() method is enough1. It operates on a column (Series) of strings, and returns a column (Series) of lists:

>>> import pandas as pd
>>> df = pd.DataFrame({'AB': ['A1-B1', 'A2-B2']})
>>> df

      AB
0  A1-B1
1  A2-B2
>>> df['AB_split'] = df['AB'].str.split('-')
>>> df

      AB  AB_split
0  A1-B1  [A1, B1]
1  A2-B2  [A2, B2]

1: If you’re unsure what the first two parameters of .str.split() do, I recommend the docs for the plain Python version of the method.

But how do you go from:

  • a column containing two-element lists

to:

  • two columns, each containing the respective element of the lists?

Well, we need to take a closer look at the .str attribute of a column.

It’s a magical object that collects methods which treat each element in a column as a string, and then applies the respective method to each element as efficiently as possible:

>>> upper_lower_df = pd.DataFrame({"U": ["A", "B", "C"]})
>>> upper_lower_df

   U
0  A
1  B
2  C
>>> upper_lower_df["L"] = upper_lower_df["U"].str.lower()
>>> upper_lower_df

   U  L
0  A  a
1  B  b
2  C  c

But it also has an “indexing” interface for getting each element of a string by its index:

>>> df['AB'].str[0]

0    A
1    A
Name: AB, dtype: object

>>> df['AB'].str[1]

0    1
1    2
Name: AB, dtype: object

Of course, this indexing interface of .str doesn’t really care if each element it’s indexing is actually a string, as long as it can be indexed, so:

>>> df['AB'].str.split('-', 1).str[0]

0    A1
1    A2
Name: AB, dtype: object

>>> df['AB'].str.split('-', 1).str[1]

0    B1
1    B2
Name: AB, dtype: object

Then, it’s a simple matter of taking advantage of the Python tuple unpacking of iterables to do

>>> df['A'], df['B'] = df['AB'].str.split('-', 1).str
>>> df

      AB  AB_split   A   B
0  A1-B1  [A1, B1]  A1  B1
1  A2-B2  [A2, B2]  A2  B2

Of course, getting a DataFrame out of splitting a column of strings is so useful that the .str.split() method can do it for you with the expand=True parameter:

>>> df['AB'].str.split('-', 1, expand=True)

    0   1
0  A1  B1
1  A2  B2

So, another way of accomplishing what we wanted is to do:

>>> df = df[['AB']]
>>> df

      AB
0  A1-B1
1  A2-B2

>>> df.join(df['AB'].str.split('-', 1, expand=True).rename(columns={0:'A', 1:'B'}))

      AB   A   B
0  A1-B1  A1  B1
1  A2-B2  A2  B2

The expand=True version, although longer, has a distinct advantage over the tuple unpacking method. Tuple unpacking doesn’t deal well with splits of different lengths:

>>> df = pd.DataFrame({'AB': ['A1-B1', 'A2-B2', 'A3-B3-C3']})
>>> df
         AB
0     A1-B1
1     A2-B2
2  A3-B3-C3
>>> df['A'], df['B'], df['C'] = df['AB'].str.split('-')
Traceback (most recent call last):
  [...]    
ValueError: Length of values does not match length of index
>>> 

But expand=True handles it nicely by placing None in the columns for which there aren’t enough “splits”:

>>> df.join(
...     df['AB'].str.split('-', expand=True).rename(
...         columns={0:'A', 1:'B', 2:'C'}
...     )
... )
         AB   A   B     C
0     A1-B1  A1  B1  None
1     A2-B2  A2  B2  None
2  A3-B3-C3  A3  B3    C3
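
One caveat for newer pandas: the n argument of .str.split() became keyword-only in pandas 2.0, and unpacking the .str object (df['A'], df['B'] = df['AB'].str.split('-', 1).str) no longer works on recent versions. The forward-compatible spelling is:

df[['A', 'B']] = df['AB'].str.split('-', n=1, expand=True)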

回答 2

您可以使用正则表达式模式整齐地提取不同部分:

In [11]: df.row.str.extract('(?P<fips>\d{5})((?P<state>[A-Z ]*$)|(?P<county>.*?), (?P<state_code>[A-Z]{2}$))')
Out[11]: 
    fips                    1           state           county state_code
0  00000        UNITED STATES   UNITED STATES              NaN        NaN
1  01000              ALABAMA         ALABAMA              NaN        NaN
2  01001   Autauga County, AL             NaN   Autauga County         AL
3  01003   Baldwin County, AL             NaN   Baldwin County         AL
4  01005   Barbour County, AL             NaN   Barbour County         AL

[5 rows x 5 columns]

要解释有点长的正则表达式:

(?P<fips>\d{5})
  • 匹配五个数字(\d)并命名"fips"

下一部分:

((?P<state>[A-Z ]*$)|(?P<county>.*?), (?P<state_code>[A-Z]{2}$))

|)做以下两件事之一:

(?P<state>[A-Z ]*$)
  • 匹配任意数量(*)的大写字母或空格([A-Z ]),并在字符串末尾($)之前将其命名为 "state",

要么

(?P<county>.*?), (?P<state_code>[A-Z]{2}$))
  • 然后匹配任何其他内容(.*),
  • 接着是一个逗号和一个空格,
  • 再匹配字符串末尾($)的两个大写字母 state_code。

在示例中:
请注意,前两行命中“州”(将NaN保留在county和state_code列中),而后三行命中县(即state_code)(将NaN保留在state列中)。

You can extract the different parts out quite neatly using a regex pattern:

In [11]: df.row.str.extract('(?P<fips>\d{5})((?P<state>[A-Z ]*$)|(?P<county>.*?), (?P<state_code>[A-Z]{2}$))')
Out[11]: 
    fips                    1           state           county state_code
0  00000        UNITED STATES   UNITED STATES              NaN        NaN
1  01000              ALABAMA         ALABAMA              NaN        NaN
2  01001   Autauga County, AL             NaN   Autauga County         AL
3  01003   Baldwin County, AL             NaN   Baldwin County         AL
4  01005   Barbour County, AL             NaN   Barbour County         AL

[5 rows x 5 columns]

To explain the somewhat long regex:

(?P<fips>\d{5})
  • Matches the five digits (\d) and names them "fips".

The next part:

((?P<state>[A-Z ]*$)|(?P<county>.*?), (?P<state_code>[A-Z]{2}$))

Does either (|) one of two things:

(?P<state>[A-Z ]*$)
  • Matches any number (*) of capital letters or spaces ([A-Z ]) and names this "state" before the end of the string ($),

or

(?P<county>.*?), (?P<state_code>[A-Z]{2}$))
  • matches anything else (.*) then
  • a comma and a space then
  • matches the two-letter state_code before the end of the string ($).

In the example:
Note that the first two rows hit the “state” (leaving NaN in the county and state_code columns), whilst the last three hit the county, state_code (leaving NaN in the state column).


回答 3

df[['fips', 'row']] = df['row'].str.split(' ', n=1, expand=True)

回答 4

如果您不想创建新的数据框,或者您的数据框具有比仅要拆分的列更多的列,则可以:

df["flips"], df["row_name"] = zip(*df["row"].str.split().tolist())
del df["row"]  

If you don’t want to create a new dataframe, or if your dataframe has more columns than just the ones you want to split, you could:

df["flips"], df["row_name"] = zip(*df["row"].str.split().tolist())
del df["row"]  

回答 5

您可以使用 str.split 按空格(默认分隔符)拆分,并配合参数 expand=True 生成 DataFrame,再将其赋值给新列:

df = pd.DataFrame({'row': ['00000 UNITED STATES', '01000 ALABAMA', 
                           '01001 Autauga County, AL', '01003 Baldwin County, AL', 
                           '01005 Barbour County, AL']})
print (df)
                        row
0       00000 UNITED STATES
1             01000 ALABAMA
2  01001 Autauga County, AL
3  01003 Baldwin County, AL
4  01005 Barbour County, AL



df[['a','b']] = df['row'].str.split(n=1, expand=True)
print (df)
                        row      a                   b
0       00000 UNITED STATES  00000       UNITED STATES
1             01000 ALABAMA  01000             ALABAMA
2  01001 Autauga County, AL  01001  Autauga County, AL
3  01003 Baldwin County, AL  01003  Baldwin County, AL
4  01005 Barbour County, AL  01005  Barbour County, AL

如果需要删除原始列,可以改用 DataFrame.pop:

df[['a','b']] = df.pop('row').str.split(n=1, expand=True)
print (df)
       a                   b
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

这等同于:

df[['a','b']] = df['row'].str.split(n=1, expand=True)
df = df.drop('row', axis=1)
print (df)

       a                   b
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

如果出现错误:

#remove n=1 for split by all whitespaces
df[['a','b']] = df['row'].str.split(expand=True)

ValueError:列的长度必须与键的长度相同

您可以检查一下:它返回的是 4 列的 DataFrame,而不只是 2 列:

print (df['row'].str.split(expand=True))
       0        1        2     3
0  00000   UNITED   STATES  None
1  01000  ALABAMA     None  None
2  01001  Autauga  County,    AL
3  01003  Baldwin  County,    AL
4  01005  Barbour  County,    AL

那么解决方案是用 join 拼接新的 DataFrame:

df = pd.DataFrame({'row': ['00000 UNITED STATES', '01000 ALABAMA', 
                           '01001 Autauga County, AL', '01003 Baldwin County, AL', 
                           '01005 Barbour County, AL'],
                    'a':range(5)})
print (df)
   a                       row
0  0       00000 UNITED STATES
1  1             01000 ALABAMA
2  2  01001 Autauga County, AL
3  3  01003 Baldwin County, AL
4  4  01005 Barbour County, AL

df = df.join(df['row'].str.split(expand=True))
print (df)

   a                       row      0        1        2     3
0  0       00000 UNITED STATES  00000   UNITED   STATES  None
1  1             01000 ALABAMA  01000  ALABAMA     None  None
2  2  01001 Autauga County, AL  01001  Autauga  County,    AL
3  3  01003 Baldwin County, AL  01003  Baldwin  County,    AL
4  4  01005 Barbour County, AL  01005  Barbour  County,    AL

同时删除原始列(如果还有其他列):

df = df.join(df.pop('row').str.split(expand=True))
print (df)
   a      0        1        2     3
0  0  00000   UNITED   STATES  None
1  1  01000  ALABAMA     None  None
2  2  01001  Autauga  County,    AL
3  3  01003  Baldwin  County,    AL
4  4  01005  Barbour  County,    AL   

You can use str.split by whitespace (default separator) and parameter expand=True for DataFrame with assign to new columns:

df = pd.DataFrame({'row': ['00000 UNITED STATES', '01000 ALABAMA', 
                           '01001 Autauga County, AL', '01003 Baldwin County, AL', 
                           '01005 Barbour County, AL']})
print (df)
                        row
0       00000 UNITED STATES
1             01000 ALABAMA
2  01001 Autauga County, AL
3  01003 Baldwin County, AL
4  01005 Barbour County, AL



df[['a','b']] = df['row'].str.split(n=1, expand=True)
print (df)
                        row      a                   b
0       00000 UNITED STATES  00000       UNITED STATES
1             01000 ALABAMA  01000             ALABAMA
2  01001 Autauga County, AL  01001  Autauga County, AL
3  01003 Baldwin County, AL  01003  Baldwin County, AL
4  01005 Barbour County, AL  01005  Barbour County, AL

Modification if need remove original column with DataFrame.pop

df[['a','b']] = df.pop('row').str.split(n=1, expand=True)
print (df)
       a                   b
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

What is same like:

df[['a','b']] = df['row'].str.split(n=1, expand=True)
df = df.drop('row', axis=1)
print (df)

       a                   b
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

If get error:

#remove n=1 for split by all whitespaces
df[['a','b']] = df['row'].str.split(expand=True)

ValueError: Columns must be same length as key

You can check and it return 4 column DataFrame, not only 2:

print (df['row'].str.split(expand=True))
       0        1        2     3
0  00000   UNITED   STATES  None
1  01000  ALABAMA     None  None
2  01001  Autauga  County,    AL
3  01003  Baldwin  County,    AL
4  01005  Barbour  County,    AL

Then the solution is to append the new DataFrame with join:

df = pd.DataFrame({'row': ['00000 UNITED STATES', '01000 ALABAMA', 
                           '01001 Autauga County, AL', '01003 Baldwin County, AL', 
                           '01005 Barbour County, AL'],
                    'a':range(5)})
print (df)
   a                       row
0  0       00000 UNITED STATES
1  1             01000 ALABAMA
2  2  01001 Autauga County, AL
3  3  01003 Baldwin County, AL
4  4  01005 Barbour County, AL

df = df.join(df['row'].str.split(expand=True))
print (df)

   a                       row      0        1        2     3
0  0       00000 UNITED STATES  00000   UNITED   STATES  None
1  1             01000 ALABAMA  01000  ALABAMA     None  None
2  2  01001 Autauga County, AL  01001  Autauga  County,    AL
3  3  01003 Baldwin County, AL  01003  Baldwin  County,    AL
4  4  01005 Barbour County, AL  01005  Barbour  County,    AL

With remove original column (if there are also another columns):

df = df.join(df.pop('row').str.split(expand=True))
print (df)
   a      0        1        2     3
0  0  00000   UNITED   STATES  None
1  1  01000  ALABAMA     None  None
2  2  01001  Autauga  County,    AL
3  3  01003  Baldwin  County,    AL
4  4  01005  Barbour  County,    AL   

回答 6

如果要基于定界符将字符串分为两列以上,则可以省略“ maximum splits”参数。
您可以使用:

df['column_name'].str.split('/', expand=True)

这将自动创建与您的任何初始字符串中包含的最大字段数一样多的列。

If you want to split a string into more than two columns based on a delimiter you can omit the ‘maximum splits’ parameter.
You can use:

df['column_name'].str.split('/', expand=True)

This will automatically create as many columns as the maximum number of fields included in any of your initial strings.
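
A minimal sketch of what that looks like (hypothetical data):

df = pd.DataFrame({'column_name': ['a/b', 'c/d/e']})
print(df['column_name'].str.split('/', expand=True))
#    0  1     2
# 0  a  b  None
# 1  c  d     e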


回答 7

感到惊讶的是我还没有看到这个。如果您只需要两个分割,我强烈建议您。。。

Series.str.partition

partition 在分隔符上执行一次拆分,通常表现出色。

df['row'].str.partition(' ')[[0, 2]]

       0                   2
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

如果您需要重命名这些列,

df['row'].str.partition(' ')[[0, 2]].rename({0: 'fips', 2: 'row'}, axis=1)

    fips                 row
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

如果需要将其合并回原始数据,请使用 join 或 concat:

df.join(df['row'].str.partition(' ')[[0, 2]])

pd.concat([df, df['row'].str.partition(' ')[[0, 2]]], axis=1)

                        row      0                   2
0       00000 UNITED STATES  00000       UNITED STATES
1             01000 ALABAMA  01000             ALABAMA
2  01001 Autauga County, AL  01001  Autauga County, AL
3  01003 Baldwin County, AL  01003  Baldwin County, AL
4  01005 Barbour County, AL  01005  Barbour County, AL

Surprised I haven’t seen this one yet. If you only need two splits, I highly recommend. . .

Series.str.partition

partition performs one split on the separator, and is generally quite performant.

df['row'].str.partition(' ')[[0, 2]]

       0                   2
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

If you need to rename the columns,

df['row'].str.partition(' ')[[0, 2]].rename({0: 'fips', 2: 'row'}, axis=1)

    fips                 row
0  00000       UNITED STATES
1  01000             ALABAMA
2  01001  Autauga County, AL
3  01003  Baldwin County, AL
4  01005  Barbour County, AL

If you need to join this back to the original, use join or concat:

df.join(df['row'].str.partition(' ')[[0, 2]])

pd.concat([df, df['row'].str.partition(' ')[[0, 2]]], axis=1)

                        row      0                   2
0       00000 UNITED STATES  00000       UNITED STATES
1             01000 ALABAMA  01000             ALABAMA
2  01001 Autauga County, AL  01001  Autauga County, AL
3  01003 Baldwin County, AL  01003  Baldwin County, AL
4  01005 Barbour County, AL  01005  Barbour County, AL

回答 8

我更喜欢取出相应的 pandas 系列(即我需要的列),使用 apply 函数将列内容拆分为多个系列,然后将生成的列 join 到现有的数据帧中。当然,源列应当被删除。

例如

 col1 = df["<col_name>"].apply(<function>)
 col2 = ...
 df = df.join(col1.to_frame(name="<name1>"))
 df = df.join(col2.to_frame(name="<name2>"))
 df = df.drop(["<col_name>"], axis=1)

拆分两个单词的字符串函数应该是这样的:

lambda x: x.split(" ")[0] # for the first element
lambda x: x.split(" ")[-1] # for the last element

I prefer exporting the corresponding pandas series (i.e. the columns I need), using the apply function to split the column content into multiple series and then join the generated columns to the existing DataFrame. Of course, the source column should be removed.

e.g.

 col1 = df["<col_name>"].apply(<function>)
 col2 = ...
 df = df.join(col1.to_frame(name="<name1>"))
 df = df.join(col2.to_frame(name="<name2>"))
 df = df.drop(["<col_name>"], axis=1)

To split two words strings function should be something like that:

lambda x: x.split(" ")[0] # for the first element
lambda x: x.split(" ")[-1] # for the last element

回答 9

我看到没有人使用过切片法,所以在这里我放了2美分。

df["<col_name>"].str.slice(stop=5)
df["<col_name>"].str.slice(start=6)

此方法将创建两个新列。

I saw that no one had used the slice method, so here are my 2 cents.

df["<col_name>"].str.slice(stop=5)
df["<col_name>"].str.slice(start=6)

This method will create two new columns.
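
To actually attach those as new columns (a sketch, reusing the 'fips'/'row' names from the question):

df['fips'] = df['row'].str.slice(stop=5)
df['row'] = df['row'].str.slice(start=6)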


回答 10

使用df.assign创建一个新的DF。参见http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

split = df_selected['name'].str.split(',', 1, expand=True)
df_split = df_selected.assign(first_name=split[0], last_name=split[1])
df_split.drop('name', axis=1, inplace=True)

Use df.assign to create a new df. See http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

split = df_selected['name'].str.split(',', 1, expand=True)
df_split = df_selected.assign(first_name=split[0], last_name=split[1])
df_split.drop('name', axis=1, inplace=True)

检测并排除熊猫数据框中的异常值

问题:检测并排除熊猫数据框中的异常值

我有一个只有几列的熊猫数据框。

现在我知道某些行是基于某个列值的离群值。

例如

“ Vol”列的所有值都在周围,12xx而一个值是4000(离群值)。

现在,我想排除具有Vol此类列的行。

因此,从本质上讲,我需要在数据帧上放置一个过滤器,以便我们选择某一列的值在均值例如3个标准差以内的所有行。

有什么优雅的方法可以做到这一点?

I have a pandas data frame with few columns.

Now I know that certain rows are outliers based on a certain column value.

For instance

column ‘Vol’ has all values around 12xx and one value is 4000 (outlier).

Now I would like to exclude those rows that have Vol column like this.

So, essentially I need to put a filter on the data frame such that we select all rows where the values of a certain column are within, say, 3 standard deviations from mean.

What is an elegant way to achieve this?


回答 0

如果您的数据框中有多个列,并且想要删除至少一列中具有异常值的所有行,则以下表达式可以一次性完成。

df = pd.DataFrame(np.random.randn(100, 3))

from scipy import stats
df[(np.abs(stats.zscore(df)) < 3).all(axis=1)]

描述:

  • 对于每列,首先要计算列中每个值相对于列均值和标准差的Z分数。
  • 然后取 Z 分数的绝对值,因为方向无关紧要,只需判断它是否低于阈值。
  • all(axis = 1)确保对于每一行,所有列均满足约束。
  • 最后,此条件的结果用于索引数据帧。

If you have multiple columns in your dataframe and would like to remove all rows that have outliers in at least one column, the following expression would do that in one shot.

df = pd.DataFrame(np.random.randn(100, 3))

from scipy import stats
df[(np.abs(stats.zscore(df)) < 3).all(axis=1)]

description:

  • For each column, first it computes the Z-score of each value in the column, relative to the column mean and standard deviation.
  • Then it takes the absolute value of the Z-score, because the direction does not matter; all that matters is whether it is below the threshold.
  • all(axis=1) ensures that for each row, all column satisfy the constraint.
  • Finally, result of this condition is used to index the dataframe.
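
If the frame mixes numeric and non-numeric columns, the same idea can be restricted to the numeric subset first (a sketch; answer 6 below packages this into a reusable function):

num = df.select_dtypes(include=[np.number])
df[(np.abs(stats.zscore(num)) < 3).all(axis=1)]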

回答 1

像在 numpy.array 中那样使用布尔索引:

df = pd.DataFrame({'Data':np.random.normal(size=200)})
# example dataset of normally distributed data. 

df[np.abs(df.Data-df.Data.mean()) <= (3*df.Data.std())]
# keep only the ones that are within +3 to -3 standard deviations in the column 'Data'.

df[~(np.abs(df.Data-df.Data.mean()) > (3*df.Data.std()))]
# or if you prefer the other way around

对于系列,它类似于:

S = pd.Series(np.random.normal(size=200))
S[~((S-S.mean()).abs() > 3*S.std())]

Use boolean indexing as you would do in numpy.array

df = pd.DataFrame({'Data':np.random.normal(size=200)})
# example dataset of normally distributed data. 

df[np.abs(df.Data-df.Data.mean()) <= (3*df.Data.std())]
# keep only the ones that are within +3 to -3 standard deviations in the column 'Data'.

df[~(np.abs(df.Data-df.Data.mean()) > (3*df.Data.std()))]
# or if you prefer the other way around

For a series it is similar:

S = pd.Series(np.random.normal(size=200))
S[~((S-S.mean()).abs() > 3*S.std())]

回答 2

对于每个dataframe列,您可以使用以下方法获得分位数:

q = df["col"].quantile(0.99)

然后过滤:

df[df["col"] < q]

如果需要删除上下限离群值,则将条件与AND语句结合使用:

q_low = df["col"].quantile(0.01)
q_hi  = df["col"].quantile(0.99)

df_filtered = df[(df["col"] < q_hi) & (df["col"] > q_low)]

For each of your dataframe column, you could get quantile with:

q = df["col"].quantile(0.99)

and then filter with:

df[df["col"] < q]

If one need to remove lower and upper outliers, combine condition with an AND statement:

q_low = df["col"].quantile(0.01)
q_hi  = df["col"].quantile(0.99)

df_filtered = df[(df["col"] < q_hi) & (df["col"] > q_low)]
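
Equivalently, Series.between expresses the two-sided filter in one call (a sketch; between is inclusive by default, so inclusive="neither" — available on pandas 1.3+ — matches the strict inequalities above):

df_filtered = df[df["col"].between(q_low, q_hi, inclusive="neither")]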

回答 3

此答案与@tanemaki提供的答案相似,但使用的是lambda表达式而不是scipy stats

df = pd.DataFrame(np.random.randn(100, 3), columns=list('ABC'))

df[df.apply(lambda x: np.abs(x - x.mean()) / x.std() < 3).all(axis=1)]

要过滤只有一个列(例如“ B”)在三个标准差以内的DataFrame:

df[((df.B - df.B.mean()) / df.B.std()).abs() < 3]

有关如何在滚动基础上应用此z评分的信息,请参见此处:将滚动z评分应用于熊猫数据框

This answer is similar to that provided by @tanemaki, but uses a lambda expression instead of scipy stats.

df = pd.DataFrame(np.random.randn(100, 3), columns=list('ABC'))

df[df.apply(lambda x: np.abs(x - x.mean()) / x.std() < 3).all(axis=1)]

To filter the DataFrame where only ONE column (e.g. ‘B’) is within three standard deviations:

df[((df.B - df.B.mean()) / df.B.std()).abs() < 3]

See here for how to apply this z-score on a rolling basis: Rolling Z-score applied to pandas dataframe


回答 4

#------------------------------------------------------------------------------
# accept a dataframe, remove outliers, return cleaned data in a new dataframe
# see http://www.itl.nist.gov/div898/handbook/prc/section1/prc16.htm
#------------------------------------------------------------------------------
def remove_outlier(df_in, col_name):
    q1 = df_in[col_name].quantile(0.25)
    q3 = df_in[col_name].quantile(0.75)
    iqr = q3-q1 #Interquartile range
    fence_low  = q1-1.5*iqr
    fence_high = q3+1.5*iqr
    df_out = df_in.loc[(df_in[col_name] > fence_low) & (df_in[col_name] < fence_high)]
    return df_out

回答 5

对于数据框中的每个系列,您可以使用 between 和 quantile 来删除异常值。

x = pd.Series(np.random.normal(size=200)) # with outliers
x = x[x.between(x.quantile(.25), x.quantile(.75))] # without outliers

For each series in the dataframe, you could use between and quantile to remove outliers.

x = pd.Series(np.random.normal(size=200)) # with outliers
x = x[x.between(x.quantile(.25), x.quantile(.75))] # without outliers

回答 6

由于我还没有看到涉及数字非数字属性的答案,因此这里是一个补充性答案。

您可能只想将离群值放在数字属性上(分类变量几乎不可能是离群值)。

功能定义

我还扩展了@tanemaki的建议,以在还存在非数字属性时处理数据:

from scipy import stats

def drop_numerical_outliers(df, z_thresh=3):
    # `constrains` will contain `True` or `False` per value, depending on whether it is below the threshold.
    constrains = df.select_dtypes(include=[np.number]) \
        .apply(lambda x: np.abs(stats.zscore(x)) < z_thresh) \
        .all(axis=1)
    # Drop (inplace) values set to be rejected
    df.drop(df.index[~constrains], inplace=True)

用法

drop_numerical_outliers(df)

想象一个数据集df,其中包含有关房屋的一些值:胡同,土地轮廓,售价,…例如:数据文档

首先,您要可视化散点图上的数据(z分数Thresh = 3):

# Plot data before dropping those greater than z-score 3. 
# The scatterAreaVsPrice function's definition has been removed for readability's sake.
scatterAreaVsPrice(df)

之前:Gr Liv Area 与 SalePrice

# Drop the outliers on every attributes
drop_numerical_outliers(train_df)

# Plot the result. All outliers were dropped. Note that the red points are not
# the same outliers from the first plot, but the new computed outliers based on the new data-frame.
scatterAreaVsPrice(train_df)

之后:Gr Liv Area 与 SalePrice

Since I haven’t seen an answer that deal with numerical and non-numerical attributes, here is a complement answer.

You might want to drop the outliers only on numerical attributes (categorical variables can hardly be outliers).

Function definition

I have extended @tanemaki’s suggestion to handle data when non-numeric attributes are also present:

from scipy import stats

def drop_numerical_outliers(df, z_thresh=3):
    # `constrains` will contain `True` or `False` per value, depending on whether it is below the threshold.
    constrains = df.select_dtypes(include=[np.number]) \
        .apply(lambda x: np.abs(stats.zscore(x)) < z_thresh) \
        .all(axis=1)
    # Drop (inplace) values set to be rejected
    df.drop(df.index[~constrains], inplace=True)

Usage

drop_numerical_outliers(df)

Example

Imagine a dataset df with some values about houses: alley, land contour, sale price, … E.g: Data Documentation

First, you want to visualise the data on a scatter graph (with z-score Thresh=3):

# Plot data before dropping those greater than z-score 3. 
# The scatterAreaVsPrice function's definition has been removed for readability's sake.
scatterAreaVsPrice(df)

Before - Gr Liv Area Versus SalePrice

# Drop the outliers on every attributes
drop_numerical_outliers(train_df)

# Plot the result. All outliers were dropped. Note that the red points are not
# the same outliers from the first plot, but the new computed outliers based on the new data-frame.
scatterAreaVsPrice(train_df)

After - Gr Liv Area Versus SalePrice


回答 7

scipy.stats 提供了 trim1() 和 trimboth() 两个方法,可以按照排序结果和给定的删除比例,用一行代码裁掉异常值。

scipy.stats has methods trim1() and trimboth() to cut the outliers out in a single line, according to the ranking and a given percentage of removed values.
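
A minimal sketch (note these return trimmed, value-sorted NumPy arrays rather than a filtered DataFrame):

from scipy import stats
trimmed_vol = stats.trimboth(df['Vol'], 0.1)  # drop the lowest and highest 10% of values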


回答 8

另一种选择是转换数据,以减轻异常值的影响。您可以通过对数据进行 Winsorize(缩尾)处理来做到这一点。

import pandas as pd
from scipy.stats import mstats
%matplotlib inline

test_data = pd.Series(range(30))
test_data.plot()

原始数据

# Truncate values to the 5th and 95th percentiles
transformed_test_data = pd.Series(mstats.winsorize(test_data, limits=[0.05, 0.05])) 
transformed_test_data.plot()

Winsorized数据

Another option is to transform your data so that the effect of outliers is mitigated. You can do this by winsorizing your data.

import pandas as pd
from scipy.stats import mstats
%matplotlib inline

test_data = pd.Series(range(30))
test_data.plot()

Original data

# Truncate values to the 5th and 95th percentiles
transformed_test_data = pd.Series(mstats.winsorize(test_data, limits=[0.05, 0.05])) 
transformed_test_data.plot()

Winsorized data


回答 9

如果您喜欢方法链接,则可以为所有数字列获取布尔条件,如下所示:

df.sub(df.mean()).div(df.std()).abs().lt(3)

每列的每个值将根据它与平均值的距离是否小于三个标准差,被转换为 True/False。

If you like method chaining, you can get your boolean condition for all numeric columns like this:

df.sub(df.mean()).div(df.std()).abs().lt(3)

Each value of each column will be converted to True/False based on whether it is less than three standard deviations away from the mean or not.
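
To turn that per-cell mask into a row filter, chain .all(axis=1) onto it (a sketch):

df[df.sub(df.mean()).div(df.std()).abs().lt(3).all(axis=1)]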


回答 10

您可以使用布尔掩码:

import pandas as pd

def remove_outliers(df, q=0.05):
    upper = df.quantile(1-q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

t = pd.DataFrame({'train': [1,1,2,3,4,5,6,7,8,9,9],
                  'y': [1,0,0,1,1,0,0,1,1,1,0]})

mask = remove_outliers(t['train'], 0.1)

print(t[mask])

输出:

   train  y
2      2  0
3      3  1
4      4  1
5      5  0
6      6  0
7      7  1
8      8  1

You can use boolean mask:

import pandas as pd

def remove_outliers(df, q=0.05):
    upper = df.quantile(1-q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

t = pd.DataFrame({'train': [1,1,2,3,4,5,6,7,8,9,9],
                  'y': [1,0,0,1,1,0,0,1,1,1,0]})

mask = remove_outliers(t['train'], 0.1)

print(t[mask])

output:

   train  y
2      2  0
3      3  1
4      4  1
5      5  0
6      6  0
7      7  1
8      8  1

回答 11

由于我正处于数据科学之旅的早期阶段,因此我使用以下代码处理异常值。

#Outlier Treatment

def outlier_detect(df):
    for i in df.describe().columns:
        Q1=df.describe().at['25%',i]
        Q3=df.describe().at['75%',i]
        IQR=Q3 - Q1
        LTV=Q1 - 1.5 * IQR
        UTV=Q3 + 1.5 * IQR
        x=np.array(df[i])
        p=[]
        for j in x:
            if j < LTV or j>UTV:
                p.append(df[i].median())
            else:
                p.append(j)
        df[i]=p
    return df

Since I am in a very early stage of my data science journey, I am treating outliers with the code below.

#Outlier Treatment

def outlier_detect(df):
    for i in df.describe().columns:
        Q1=df.describe().at['25%',i]
        Q3=df.describe().at['75%',i]
        IQR=Q3 - Q1
        LTV=Q1 - 1.5 * IQR
        UTV=Q3 + 1.5 * IQR
        x=np.array(df[i])
        p=[]
        for j in x:
            if j < LTV or j>UTV:
                p.append(df[i].median())
            else:
                p.append(j)
        df[i]=p
    return df
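
The row-by-row loops can be replaced by a vectorized version built on Series.where that keeps the same median-replacement behaviour (a sketch, not the author’s code):

def outlier_detect_vec(df):
    for col in df.select_dtypes(include='number').columns:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
        # keep in-range values, replace outliers with the column median
        df[col] = df[col].where(df[col].between(low, high), df[col].median())
    return df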

回答 12

获得第98和第2个百分位数作为离群值的限制

upper_limit = np.percentile(X_train.logerror.values, 98)
lower_limit = np.percentile(X_train.logerror.values, 2)

# Filter the outliers from the dataframe (clip the target to the limits)
data.loc[X_train['target'] > upper_limit, 'target'] = upper_limit
data.loc[X_train['target'] < lower_limit, 'target'] = lower_limit

Get the 98th and 2nd percentile as the limits of our outliers

import numpy as np

upper_limit = np.percentile(X_train.logerror.values, 98)
lower_limit = np.percentile(X_train.logerror.values, 2)

# Cap the outliers in the dataframe at the percentile limits
data['target'].loc[X_train['target'] > upper_limit] = upper_limit
data['target'].loc[X_train['target'] < lower_limit] = lower_limit
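
The same capping can also be expressed with pandas' built-in clip; a sketch of my own, reusing the hypothetical 'target' column and limits from above:

data['target'] = data['target'].clip(lower=lower_limit, upper=upper_limit)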

Answer 13

A full example with data and 2 groups follows:

Imports:

from io import StringIO
import pandas as pd
#pandas config
pd.set_option('display.max_rows', 20)

Data example with 2 groups: G1:Group 1. G2: Group 2:

TESTDATA = StringIO("""G1;G2;Value
1;A;1.6
1;A;5.1
1;A;7.1
1;A;8.1

1;B;21.1
1;B;22.1
1;B;24.1
1;B;30.6

2;A;40.6
2;A;51.1
2;A;52.1
2;A;60.6

2;B;80.1
2;B;70.6
2;B;90.6
2;B;85.1
""")

Read text data to pandas dataframe:

df = pd.read_csv(TESTDATA, sep=";")

Define the outliers using standard deviations

stds = 1.0
outliers = df[['G1', 'G2', 'Value']].groupby(['G1','G2']).transform(
           lambda group: (group - group.mean()).abs().div(group.std())) > stds

Define filtered data values and the outliers:

dfv = df[outliers.Value == False]
dfo = df[outliers.Value == True]

Print the result:

print('\n' * 5, 'All values with decimal 1 are non-outliers. On the other hand, all values with 6 in the decimal are.')
print('\nDef DATA:\n%s\n\nFiltered values with %s stds:\n%s\n\nOutliers:\n%s' % (df, stds, dfv, dfo))

Answer 14

My function for dropping outliers

import numpy as np

def drop_outliers(df, field_name):
    # 1.5*IQR fences: rows outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are dropped in place
    distance = 1.5 * (np.percentile(df[field_name], 75) - np.percentile(df[field_name], 25))
    df.drop(df[df[field_name] > distance + np.percentile(df[field_name], 75)].index, inplace=True)
    df.drop(df[df[field_name] < np.percentile(df[field_name], 25) - distance].index, inplace=True)
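
A quick check of the function on a small made-up frame (my own example, not the original answer's):

import pandas as pd

df = pd.DataFrame({'value': [1, 2, 3, 4, 100]})
drop_outliers(df, 'value')
print(df)  # the row with 100 falls outside Q3 + 1.5*IQR and is dropped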

Answer 15

I prefer to clip rather than drop. The following will clip in place at the 2nd and 98th percentiles.

minPercentile = 0.02
maxPercentile = 0.98

# Clip every column to its own 2nd and 98th percentiles
for col in df.columns:
    df[col] = df[col].clip(df[col].quantile(minPercentile),
                           df[col].quantile(maxPercentile))

Answer 16

I believe deleting and dropping outliers is statistically wrong: it makes the data different from the original data, and it can leave the data unevenly shaped. The best way is therefore to reduce or avoid the effect of outliers by log-transforming the data. This worked for me:

np.log(data.iloc[:, :])
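
One caveat worth adding (my note, not the original answer's): np.log is only defined for strictly positive values; zeros produce -inf and negative values produce NaN. np.log1p is a common alternative when the data contains zeros:

import numpy as np
import pandas as pd

data = pd.DataFrame({'x': [0, 1, 10, 1000]})
transformed = np.log1p(data)  # log(1 + x), finite at x == 0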

Changing a specific column name in a pandas DataFrame

Question: Changing a specific column name in a pandas DataFrame

I was looking for an elegant way to change a specified column name in a DataFrame.

play data …

import pandas as pd
d = {
         'one': [1, 2, 3, 4, 5],
         'two': [9, 8, 7, 6, 5],
         'three': ['a', 'b', 'c', 'd', 'e']
    }
df = pd.DataFrame(d)

The most elegant solution I have found so far …

names = df.columns.tolist()
names[names.index('two')] = 'new_name'
df.columns = names

I was hoping for a simple one-liner … this attempt failed …

df.columns[df.columns.tolist().index('one')] = 'another_name'

Any hints gratefully received.


Answer 0

A one-liner does exist:

In [27]: df=df.rename(columns = {'two':'new_name'})

In [28]: df
Out[28]: 
  one three  new_name
0    1     a         9
1    2     b         8
2    3     c         7
3    4     d         6
4    5     e         5

Following is the docstring for the rename method.

Definition: df.rename(self, index=None, columns=None, copy=True, inplace=False)
Docstring:
Alter index and / or columns using input function or
functions. Function / dict values must be unique (1-to-1). Labels not
contained in a dict / Series will be left as-is.

Parameters
----------
index : dict-like or function, optional
    Transformation to apply to index values
columns : dict-like or function, optional
    Transformation to apply to column values
copy : boolean, default True
    Also copy underlying data
inplace : boolean, default False
    Whether to return a new DataFrame. If True then value of copy is
    ignored.

See also
--------
Series.rename

Returns
-------
renamed : DataFrame (new object)

Answer 1

Since the inplace argument is available, you don't need to copy the original data frame and assign it back to itself; just do the following:

df.rename(columns={'two':'new_name'}, inplace=True)

Answer 2

What about?

df.columns.values[2] = "new_name"
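
Be aware that mutating the Index's underlying values array like this bypasses pandas' normal index machinery and is not guaranteed to work in newer versions. A safer positional sketch (my suggestion, not part of the original answer), mirroring the approach from the question itself:

cols = df.columns.tolist()
cols[2] = "new_name"   # rename the third column by position
df.columns = cols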

Answer 3

Pandas 0.21 now has an axis parameter

The rename method has gained an axis parameter to match most of the rest of the pandas API.

So, in addition to this:

df.rename(columns = {'two':'new_name'})

You can do:

df.rename({'two':'new_name'}, axis=1)

or

df.rename({'two':'new_name'}, axis='columns')

Answer 4

If you know which column # it is (first / second / nth) then this solution posted on a similar question works regardless of whether it is named or unnamed, and in one line: https://stackoverflow.com/a/26336314/4355695

df.rename(columns = {list(df)[1]:'new_name'}, inplace=True)
# 1 is for second column (0,1,2..)

Answer 5

For renaming columns, here is a simple approach that works for both default (0, 1, 2, etc.) and existing column names, but it is not very useful for larger data sets (with many columns).

For a larger data set, we can slice out the columns we need and apply the code below:

df.columns = ['new_name','new_name1','old_name']

Answer 6

The following short code can help:

df3 = df3.rename(columns={c: c.replace(' ', '') for c in df3.columns})

It removes spaces from the column names.


Answer 7

pandas version 0.23.4

df.rename(index=str,columns={'old_name':'new_name'},inplace=True)

For the record:

Omitting index=str reportedly gives the error: replace has an unexpected argument 'columns'.


Answer 8

Another option would be to simply copy & drop the column:

df = pd.DataFrame(d)
df['new_name'] = df['two']
df = df.drop('two', axis=1)
df.head()

After that you get the result (note that the renamed column is moved to the end):

    one three   new_name
0   1   a       9
1   2   b       8
2   3   c       7
3   4   d       6
4   5   e       5

What are the differences between Pandas and NumPy+SciPy in Python? [closed]

Question: What are the differences between Pandas and NumPy+SciPy in Python? [closed]

They both seem exceedingly similar and I’m curious as to which package would be more beneficial for financial data analysis.


Answer 0

pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool, similar to MATLAB. pandas on the other hand provides rich time series functionality, data alignment, NA-friendly statistics, groupby, merge and join methods, and lots of other conveniences. It has become very popular in recent years in financial applications. I will have a chapter dedicated to financial data analysis using pandas in my upcoming book.


Answer 1

Numpy is required by pandas (and by virtually all numerical tools for Python). Scipy is not strictly required for pandas but is listed as an “optional dependency”. I wouldn’t say that pandas is an alternative to Numpy and/or Scipy. Rather, it’s an extra tool that provides a more streamlined way of working with numerical and tabular data in Python. You can use pandas data structures but freely draw on Numpy and Scipy functions to manipulate them.


Answer 2

Pandas offers a great way to manipulate tables: it makes binning easy (binning a dataframe in pandas in Python) and lets you calculate statistics. Another great thing in pandas was the Panel class, which let you join a series of layers with different properties and combine them using the groupby function. (Note that Panel was later deprecated and removed in pandas 1.0.)


Select DataFrame rows between two dates

Question: Select DataFrame rows between two dates

I am creating a DataFrame from a csv as follows:

stock = pd.read_csv('data_in/' + filename + '.csv', skipinitialspace=True)

The DataFrame has a date column. Is there a way to create a new DataFrame (or just overwrite the existing one) which only contains rows with date values that fall within a specified date range or between two specified date values?


Answer 0

There are two possible solutions:

  • Use a boolean mask, then use df.loc[mask]
  • Set the date column as a DatetimeIndex, then use df[start_date : end_date]

Using a boolean mask:

Ensure df['date'] is a Series with dtype datetime64[ns]:

df['date'] = pd.to_datetime(df['date'])  

Make a boolean mask. start_date and end_date can be datetime.datetimes, np.datetime64s, pd.Timestamps, or even datetime strings:

#greater than the start date and smaller than the end date
mask = (df['date'] > start_date) & (df['date'] <= end_date)

Select the sub-DataFrame:

df.loc[mask]

or re-assign to df

df = df.loc[mask]

For example,

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.random((200,3)))
df['date'] = pd.date_range('2000-1-1', periods=200, freq='D')
mask = (df['date'] > '2000-6-1') & (df['date'] <= '2000-6-10')
print(df.loc[mask])

yields

            0         1         2       date
153  0.208875  0.727656  0.037787 2000-06-02
154  0.750800  0.776498  0.237716 2000-06-03
155  0.812008  0.127338  0.397240 2000-06-04
156  0.639937  0.207359  0.533527 2000-06-05
157  0.416998  0.845658  0.872826 2000-06-06
158  0.440069  0.338690  0.847545 2000-06-07
159  0.202354  0.624833  0.740254 2000-06-08
160  0.465746  0.080888  0.155452 2000-06-09
161  0.858232  0.190321  0.432574 2000-06-10

Using a DatetimeIndex:

If you are going to do a lot of selections by date, it may be quicker to set the date column as the index first. Then you can select rows by date using df.loc[start_date:end_date].

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.random((200,3)))
df['date'] = pd.date_range('2000-1-1', periods=200, freq='D')
df = df.set_index(['date'])
print(df.loc['2000-6-1':'2000-6-10'])

yields

                   0         1         2
date                                    
2000-06-01  0.040457  0.326594  0.492136    # <- includes start_date
2000-06-02  0.279323  0.877446  0.464523
2000-06-03  0.328068  0.837669  0.608559
2000-06-04  0.107959  0.678297  0.517435
2000-06-05  0.131555  0.418380  0.025725
2000-06-06  0.999961  0.619517  0.206108
2000-06-07  0.129270  0.024533  0.154769
2000-06-08  0.441010  0.741781  0.470402
2000-06-09  0.682101  0.375660  0.009916
2000-06-10  0.754488  0.352293  0.339337

Note that while Python list indexing, e.g. seq[start:end], includes start but not end, Pandas df.loc[start_date : end_date] includes both endpoints in the result if they are in the index. Neither start_date nor end_date has to be in the index, however.


Also note that pd.read_csv has a parse_dates parameter which you could use to parse the date column as datetime64s. Thus, if you use parse_dates, you would not need to use df['date'] = pd.to_datetime(df['date']).
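
For example (a sketch; filename is the hypothetical CSV stem from the question):

stock = pd.read_csv('data_in/' + filename + '.csv',
                    skipinitialspace=True,
                    parse_dates=['date'])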


Answer 1

I feel the best option is to use direct checks rather than the loc function:

df = df[(df['date'] > '2000-6-1') & (df['date'] <= '2000-6-10')]

It works for me.

The major issue with the loc function with a slice is that the limits should be present in the actual values; if not, this will result in a KeyError.


Answer 2

You can also use between:

df[df.some_date.between(start_date, end_date)]
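
A self-contained sketch of the idea (the column name and dates are made up; between is inclusive on both ends by default):

import pandas as pd

df = pd.DataFrame({'some_date': pd.date_range('2020-01-01', periods=10, freq='D')})
subset = df[df.some_date.between('2020-01-03', '2020-01-06')]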

Answer 3

You can use the isin method on the date column, like so: df[df["date"].isin(pd.date_range(start_date, end_date))]

Note: This only works with dates (as the question asks) and not timestamps.

Example:

import numpy as np   
import pandas as pd

# Make a DataFrame with dates and random numbers
df = pd.DataFrame(np.random.random((30, 3)))
df['date'] = pd.date_range('2017-1-1', periods=30, freq='D')

# Select the rows between two dates
in_range_df = df[df["date"].isin(pd.date_range("2017-01-15", "2017-01-20"))]

print(in_range_df)  # print result

which gives

           0         1         2       date
14  0.960974  0.144271  0.839593 2017-01-15
15  0.814376  0.723757  0.047840 2017-01-16
16  0.911854  0.123130  0.120995 2017-01-17
17  0.505804  0.416935  0.928514 2017-01-18
18  0.204869  0.708258  0.170792 2017-01-19
19  0.014389  0.214510  0.045201 2017-01-20

Answer 4

Keeping the solution simple and Pythonic, I would suggest you try this.

If you are going to do this frequently, the best solution is to first set the date column as the index, which converts the column into a DatetimeIndex, and then use the following condition to slice any range of dates.

import pandas as pd

data_frame = data_frame.set_index('date')

df = data_frame[(data_frame.index > '2017-08-10') & (data_frame.index <= '2017-08-15')]

Answer 5

Testing with pandas version 0.22.0, you can now answer this question more easily and with more readable code by simply using between.

# create a single column DataFrame with dates going from Jan 1st 2018 to Jan 1st 2019
df = pd.DataFrame({'dates':pd.date_range('2018-01-01','2019-01-01')})

Let’s say you want to grab the dates between Nov 27th 2018 and Jan 15th 2019:

# use the between statement to get a boolean mask
df['dates'].between('2018-11-27','2019-01-15', inclusive=False)

0    False
1    False
2    False
3    False
4    False

# you can pass this boolean mask straight to loc
df.loc[df['dates'].between('2018-11-27','2019-01-15', inclusive=False)]

    dates
331 2018-11-28
332 2018-11-29
333 2018-11-30
334 2018-12-01
335 2018-12-02

Notice the inclusive argument; it is very helpful when you want to be explicit about your range. When set to True, Nov 27th 2018 is returned as well. (Note that in pandas 1.3 and later, inclusive takes strings such as 'both' or 'neither' instead of booleans.)

df.loc[df['dates'].between('2018-11-27','2019-01-15', inclusive=True)]

    dates
330 2018-11-27
331 2018-11-28
332 2018-11-29
333 2018-11-30
334 2018-12-01

This method is also faster than the previously mentioned isin method:

%%timeit -n 5
df.loc[df['dates'].between('2018-11-27','2019-01-15', inclusive=True)]
868 µs ± 164 µs per loop (mean ± std. dev. of 7 runs, 5 loops each)


%%timeit -n 5

df.loc[df['dates'].isin(pd.date_range('2018-01-01','2019-01-01'))]
1.53 ms ± 305 µs per loop (mean ± std. dev. of 7 runs, 5 loops each)

However, it is not faster than the currently accepted answer, provided by unutbu, when the mask is already created; but if the mask is dynamic and needs to be reassigned over and over, my method may be more efficient:

# already create the mask THEN time the function
import datetime as dt

start_date = dt.datetime(2018, 11, 27)
end_date = dt.datetime(2019, 1, 15)
mask = (df['dates'] > start_date) & (df['dates'] <= end_date)

%%timeit -n 5
df.loc[mask]
191 µs ± 28.5 µs per loop (mean ± std. dev. of 7 runs, 5 loops each)

Answer 6

I prefer not to alter the df.

An option is to retrieve the index of the start and end dates:

import numpy as np   
import pandas as pd

#Dummy DataFrame
df = pd.DataFrame(np.random.random((30, 3)))
df['date'] = pd.date_range('2017-1-1', periods=30, freq='D')

#Get the index of the start and end dates respectively
start = df[df['date']=='2017-01-07'].index[0]
end = df[df['date']=='2017-01-14'].index[0]

#Show the sliced df (from 2017-01-07 to 2017-01-14)
df.loc[start:end]

which results in:

     0   1   2       date
6  0.5 0.8 0.8 2017-01-07
7  0.0 0.7 0.3 2017-01-08
8  0.8 0.9 0.0 2017-01-09
9  0.0 0.2 1.0 2017-01-10
10 0.6 0.1 0.9 2017-01-11
11 0.5 0.3 0.9 2017-01-12
12 0.5 0.4 0.3 2017-01-13
13 0.4 0.9 0.9 2017-01-14

Answer 7

Another option to achieve this is the pandas.DataFrame.query() method. Let me show you an example on the following data frame, called df.

>>> df = pd.DataFrame(np.random.random((5, 1)), columns=['col_1'])
>>> df['date'] = pd.date_range('2020-1-1', periods=5, freq='D')
>>> print(df)
      col_1       date
0  0.015198 2020-01-01
1  0.638600 2020-01-02
2  0.348485 2020-01-03
3  0.247583 2020-01-04
4  0.581835 2020-01-05

As an argument, use the condition for filtering like this:

>>> start_date, end_date = '2020-01-02', '2020-01-04'
>>> print(df.query('date >= @start_date and date <= @end_date'))
      col_1       date
1  0.244104 2020-01-02
2  0.374775 2020-01-03
3  0.510053 2020-01-04

If you do not want to include boundaries, just change the condition like following:

>>> print(df.query('date > @start_date and date < @end_date'))
      col_1       date
2  0.374775 2020-01-03