Tag archive: pandas

How do I get the element-wise logical NOT of a pandas Series?

Question: How do I get the element-wise logical NOT of a pandas Series?

I have a pandas Series object containing boolean values. How can I get a series containing the logical NOT of each value?

For example, consider a series containing:

True
True
True
False

The series I’d like to get would contain:

False
False
False
True

This seems like it should be reasonably simple, but apparently I’ve misplaced my mojo =(


Answer 0

To invert a boolean Series, use ~s:

In [7]: s = pd.Series([True, True, False, True])

In [8]: ~s
Out[8]: 
0    False
1    False
2     True
3    False
dtype: bool

Using Python 2.7, NumPy 1.8.0, Pandas 0.13.1:

In [119]: s = pd.Series([True, True, False, True]*10000)

In [10]:  %timeit np.invert(s)
10000 loops, best of 3: 91.8 µs per loop

In [11]: %timeit ~s
10000 loops, best of 3: 73.5 µs per loop

In [12]: %timeit (-s)
10000 loops, best of 3: 73.5 µs per loop

As of Pandas 0.13.0, Series are no longer subclasses of numpy.ndarray; they are now subclasses of pd.NDFrame. This might have something to do with why np.invert(s) is no longer as fast as ~s or -s.

Caveat: timeit results may vary depending on many factors including hardware, compiler, OS, Python, NumPy and Pandas versions.


Answer 1

@unutbu’s answer is spot on, just wanted to add a warning that your mask needs to be dtype bool, not ‘object’. That is, your mask can’t ever have had any NaNs: even if your mask is NaN-free now, it will remain ‘object’ dtype.

The inverse of an ‘object’ series won’t throw an error, instead you’ll get a garbage mask of ints that won’t work as you expect.

In[1]: df = pd.DataFrame({'A':[True, False, np.nan], 'B':[True, False, True]})
In[2]: df.dropna(inplace=True)
In[3]: df['A']
Out[3]:
0     True
1    False
Name: A, dtype: object
In[4]: ~df['A']
Out[4]:
0   -2
1   -1
Name: A, dtype: object

After speaking with colleagues about this one, I have an explanation: it looks like pandas is falling back to the bitwise operator on the underlying Python objects:

In [1]: ~True
Out[1]: -2

As @geher says, you can convert it to bool with astype before you inverse with ~

~df['A'].astype(bool)
0    False
1     True
Name: A, dtype: bool
(~df['A']).astype(bool)
0    True
1    True
Name: A, dtype: bool
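
A small defensive pattern that follows from this warning (a sketch of my own, not from the original answer): normalize the dtype before inverting, so object-dtype masks never reach ~ silently.

import pandas as pd

def invert_mask(s: pd.Series) -> pd.Series:
    # On object dtype, ~ applies Python's bitwise NOT to each element,
    # so coerce the mask to bool first.
    if s.dtype != bool:
        s = s.astype(bool)
    return ~s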

Answer 2

I’ll just give it a shot:

In [9]: s = Series([True, True, True, False])

In [10]: s
Out[10]: 
0     True
1     True
2     True
3    False

In [11]: -s
Out[11]: 
0    False
1    False
2    False
3     True

Answer 3

You can also use numpy.invert:

In [1]: import numpy as np

In [2]: import pandas as pd

In [3]: s = pd.Series([True, True, False, True])

In [4]: np.invert(s)
Out[4]: 
0    False
1    False
2     True
3    False

EDIT: The difference in performance appears on Ubuntu 12.04, Python 2.7, NumPy 1.7.0 – doesn’t seem to exist using NumPy 1.6.2 though:

In [5]: %timeit (-s)
10000 loops, best of 3: 26.8 us per loop

In [6]: %timeit np.invert(s)
100000 loops, best of 3: 7.85 us per loop

In [7]: %timeit ~s
10000 loops, best of 3: 27.3 us per loop

Answer 4

NumPy is slower because it casts the input to boolean values (so None and 0 become False and everything else becomes True).

import pandas as pd
import numpy as np
s = pd.Series([True, None, False, True])
np.logical_not(s)

gives you

0    False
1     True
2     True
3    False
dtype: object

whereas ~s would crash. In most cases tilde would be a safer choice than NumPy.

Pandas 0.25, NumPy 1.17
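
A side note beyond the original answer (a sketch assuming pandas 1.0 or newer): the nullable boolean dtype handles missing values under ~ directly, with NA propagation instead of a crash.

import pandas as pd

s = pd.Series([True, None, False, True], dtype="boolean")  # nullable BooleanDtype
print(~s)
# 0    False
# 1     <NA>
# 2     True
# 3    False
# dtype: boolean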


Pandas: get rows which are not in another dataframe

Question: Pandas: get rows which are not in another dataframe

I have two pandas data frames which have some rows in common.

Suppose dataframe2 is a subset of dataframe1.

How can I get the rows of dataframe1 which are not in dataframe2?

df1 = pandas.DataFrame(data = {'col1' : [1, 2, 3, 4, 5], 'col2' : [10, 11, 12, 13, 14]}) 
df2 = pandas.DataFrame(data = {'col1' : [1, 2, 3], 'col2' : [10, 11, 12]})

Answer 0

One method would be to store the result of an inner merge from both dfs, then we can simply select the rows where the columns’ values are not in this common set:

In [119]:

common = df1.merge(df2,on=['col1','col2'])
print(common)
df1[(~df1.col1.isin(common.col1))&(~df1.col2.isin(common.col2))]
   col1  col2
0     1    10
1     2    11
2     3    12
Out[119]:
   col1  col2
3     4    13
4     5    14

EDIT

Another method as you’ve found is to use isin which will produce NaN rows which you can drop:

In [138]:

df1[~df1.isin(df2)].dropna()
Out[138]:
   col1  col2
3     4    13
4     5    14

However, if the matching rows of df2 are not at the same positions as in df1, this won’t work:

df2 = pd.DataFrame(data = {'col1' : [2, 3,4], 'col2' : [11, 12,13]})

will produce the entire df:

In [140]:

df1[~df1.isin(df2)].dropna()
Out[140]:
   col1  col2
0     1    10
1     2    11
2     3    12
3     4    13
4     5    14

Answer 1

The currently selected solution produces incorrect results. To correctly solve this problem, we can perform a left-join from df1 to df2, making sure to first get just the unique rows for df2.

First, we need to modify the original DataFrame to add the row with data [3, 10].

df1 = pd.DataFrame(data = {'col1' : [1, 2, 3, 4, 5, 3], 
                           'col2' : [10, 11, 12, 13, 14, 10]}) 
df2 = pd.DataFrame(data = {'col1' : [1, 2, 3],
                           'col2' : [10, 11, 12]})

df1

   col1  col2
0     1    10
1     2    11
2     3    12
3     4    13
4     5    14
5     3    10

df2

   col1  col2
0     1    10
1     2    11
2     3    12

Perform a left-join, eliminating duplicates in df2 so that each row of df1 joins with exactly 1 row of df2. Use the parameter indicator to return an extra column indicating which table the row was from.

df_all = df1.merge(df2.drop_duplicates(), on=['col1','col2'], 
                   how='left', indicator=True)
df_all

   col1  col2     _merge
0     1    10       both
1     2    11       both
2     3    12       both
3     4    13  left_only
4     5    14  left_only
5     3    10  left_only

Create a boolean condition:

df_all['_merge'] == 'left_only'

0    False
1    False
2    False
3     True
4     True
5     True
Name: _merge, dtype: bool
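
To finish, a step the answer implies but does not show (a short sketch): use that condition to select the rows and drop the helper column.

df1_only = df_all[df_all['_merge'] == 'left_only'].drop(columns='_merge')
df1_only

   col1  col2
3     4    13
4     5    14
5     3    10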

Why other solutions are wrong

A few solutions make the same mistake – they only check that each value is independently in each column, not together in the same row. Adding the last row, which is unique but has values from both columns of df2, exposes the mistake:

common = df1.merge(df2,on=['col1','col2'])
(~df1.col1.isin(common.col1))&(~df1.col2.isin(common.col2))
0    False
1    False
2    False
3     True
4     True
5    False
dtype: bool

This solution gets the same wrong result:

df1.isin(df2.to_dict('l')).all(1)

Answer 2

Assuming that the indexes are consistent in the dataframes (not taking into account the actual col values):

df1[~df1.index.isin(df2.index)]

Answer 3

As already hinted at, isin requires columns and indices to be the same for a match. If match should only be on row contents, one way to get the mask for filtering the rows present is to convert the rows to a (Multi)Index:

In [77]: df1 = pandas.DataFrame(data = {'col1' : [1, 2, 3, 4, 5, 3], 'col2' : [10, 11, 12, 13, 14, 10]})
In [78]: df2 = pandas.DataFrame(data = {'col1' : [1, 3, 4], 'col2' : [10, 12, 13]})
In [79]: df1.loc[~df1.set_index(list(df1.columns)).index.isin(df2.set_index(list(df2.columns)).index)]
Out[79]:
   col1  col2
1     2    11
4     5    14
5     3    10

If the index should be taken into account, set_index has the keyword argument append to append columns to the existing index. If columns do not line up, list(df.columns) can be replaced with column specifications to align the data.

pandas.MultiIndex.from_tuples(df<N>.to_records(index = False).tolist())

could alternatively be used to create the indices, though I doubt this is more efficient.


Answer 4

Suppose you have two dataframes, df_1 and df_2, having multiple fields (column_names), and you want to find only those entries in df_1 that are not in df_2 on the basis of some fields (e.g. fields_x, fields_y). Follow these steps.

Step 1. Add a column key1 to df_1 and a column key2 to df_2.

Step 2. Merge the dataframes as shown below. field_x and field_y are our desired columns.

Step 3. Select only those rows from df_1 where key1 is not equal to key2.

Step 4. Drop key1 and key2.

This method solves the problem and runs fast even with big data sets. I have tried it on dataframes with more than 1,000,000 rows.

df_1['key1'] = 1
df_2['key2'] = 1
df_1 = pd.merge(df_1, df_2, on=['field_x', 'field_y'], how = 'left')
df_1 = df_1[~(df_1.key2 == df_1.key1)]
df_1 = df_1.drop(['key1','key2'], axis=1)

Answer 5

A bit late, but it might be worth checking the “indicator” parameter of pd.merge.

See this other question for an example: Compare PandaS DataFrames and return rows that are missing from the first one


Answer 6

You can do it using the isin(dict) method:

In [74]: df1[~df1.isin(df2.to_dict('l')).all(1)]
Out[74]:
   col1  col2
3     4    13
4     5    14

Explanation:

In [75]: df2.to_dict('l')
Out[75]: {'col1': [1, 2, 3], 'col2': [10, 11, 12]}

In [76]: df1.isin(df2.to_dict('l'))
Out[76]:
    col1   col2
0   True   True
1   True   True
2   True   True
3  False  False
4  False  False

In [77]: df1.isin(df2.to_dict('l')).all(1)
Out[77]:
0     True
1     True
2     True
3    False
4    False
dtype: bool

Answer 7

You can also concat df1, df2:

x = pd.concat([df1, df2])

and then remove all duplicates:

y = x.drop_duplicates(keep=False, inplace=False)
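
Note that keep=False drops every row that appears in both frames but also keeps rows unique to df2, so this matches the question only because df2 is a subset of df1. A quick sketch of that caveat, with one extra row that exists only in df2:

import pandas as pd

df1 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [10, 11, 12]})
df2 = pd.DataFrame({'col1': [1, 2, 9], 'col2': [10, 11, 99]})  # (9, 99) only in df2

x = pd.concat([df1, df2])
print(x.drop_duplicates(keep=False))
#    col1  col2
# 2     3    12    <- unique to df1
# 2     9    99    <- unique to df2 also survives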

Answer 8

How about this:

import numpy as np
import pandas

df1 = pandas.DataFrame(data = {'col1' : [1, 2, 3, 4, 5], 
                               'col2' : [10, 11, 12, 13, 14]}) 
df2 = pandas.DataFrame(data = {'col1' : [1, 2, 3], 
                               'col2' : [10, 11, 12]})
records_df2 = set([tuple(row) for row in df2.values])
in_df2_mask = np.array([tuple(row) in records_df2 for row in df1.values])
result = df1[~in_df2_mask]

Answer 9

Here is another way of solving this:

df1[~df1.index.isin(df1.merge(df2, how='inner', on=['col1', 'col2']).index)]

Or:

df1.loc[df1.index.difference(df1.merge(df2, how='inner', on=['col1', 'col2']).index)]

Answer 10

My way of doing this involves adding a new column that is unique to one dataframe and using this to choose whether to keep an entry:

df2['Empt'] = 1
df1 = pd.merge(df1, df2, on=['field_x', 'field_y'], how='outer')
df1['Empt'].fillna(0, inplace=True)

This makes it so every entry in df1 has a code – 0 if it is unique to df1, 1 if it is in both dataframes. You then use this to restrict to what you want:

answer = df1[df1['Empt'] == 0]

Answer 11

Extract the dissimilar rows using the merge function:

df = df1.merge(df2.drop_duplicates(), on=['col1','col2'], 
               how='left', indicator=True)

Then save the dissimilar rows as CSV:

df[df['_merge'] == 'left_only'].to_csv('output.csv')

Convert a DataFrame column type from string to datetime, format dd/mm/yyyy

Question: Convert a DataFrame column type from string to datetime, format dd/mm/yyyy

How can I convert a DataFrame column of strings (in dd/mm/yyyy format) to datetimes?


Answer 0

The easiest way is to use to_datetime:

df['col'] = pd.to_datetime(df['col'])

It also offers a dayfirst argument for European times (but beware this isn’t strict).

Here it is in action:

In [11]: pd.to_datetime(pd.Series(['05/23/2005']))
Out[11]:
0   2005-05-23 00:00:00
dtype: datetime64[ns]

You can pass a specific format:

In [12]: pd.to_datetime(pd.Series(['05/23/2005']), format="%m/%d/%Y")
Out[12]:
0   2005-05-23
dtype: datetime64[ns]
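
For the dd/mm/yyyy strings from the question specifically, a quick sketch using the dayfirst argument mentioned above:

In [13]: pd.to_datetime(pd.Series(['23/05/2005']), dayfirst=True)
Out[13]:
0   2005-05-23
dtype: datetime64[ns]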

Answer 1

If your date column is a string of the format ‘2017-01-01’ you can use pandas astype to convert it to datetime.

df['date'] = df['date'].astype('datetime64[ns]')

or use datetime64[D] if you want Day precision and not nanoseconds

print(type(df_launath['date'].iloc[0]))

yields

<class 'pandas._libs.tslib.Timestamp'> the same as when you use pandas.to_datetime

You can try it with other formats than ‘%Y-%m-%d’, but at least this works.


Answer 2

You can use the following if you want to specify tricky formats:

df['date_col'] =  pd.to_datetime(df['date_col'], format='%d/%m/%Y')

More details on the format codes are in Python’s strftime documentation.
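
A related sketch (my own addition, not from the answer): if some strings may not match the format, errors='coerce' replaces them with NaT instead of raising an error.

df['date_col'] = pd.to_datetime(df['date_col'], format='%d/%m/%Y', errors='coerce')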


Answer 3

If you have a mixture of formats in your date, don’t forget to set infer_datetime_format=True to make life easier

df['date'] = pd.to_datetime(df['date'], infer_datetime_format=True)

Source: pd.to_datetime

or if you want a customized approach:

from datetime import datetime

def autoconvert_datetime(value):
    formats = ['%m/%d/%Y', '%m-%d-%y']  # formats to try
    result_format = '%d-%m-%Y'  # output format
    for dt_format in formats:
        try:
            dt_obj = datetime.strptime(value, dt_format)
            return dt_obj.strftime(result_format)
        except ValueError:  # raised when the format doesn't match
            pass
    return value  # let it be if it doesn't match

df['date'] = df['date'].apply(autoconvert_datetime)

Creating a Pandas DataFrame from a Numpy array: how do I specify the index column and column headers?

Question: Creating a Pandas DataFrame from a Numpy array: how do I specify the index column and column headers?

I have a Numpy array consisting of a list of lists, representing a two-dimensional array with row labels and column names as shown below:

data = array([['','Col1','Col2'],['Row1',1,2],['Row2',3,4]])

I’d like the resulting DataFrame to have Row1 and Row2 as index values, and Col1, Col2 as header values

I can specify the index as follows:

df = pd.DataFrame(data,index=data[:,0]),

however I am unsure how to best assign column headers.


Answer 0

You need to specify data, index and columns to the DataFrame constructor, as in:

>>> pd.DataFrame(data=data[1:,1:],    # values
...              index=data[1:,0],    # 1st column as index
...              columns=data[0,1:])  # 1st row as the column names

Edit: as noted in @joris’s comment, you may need to change the above to np.int_(data[1:,1:]) to get the correct data type.
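
Putting it together as a runnable sketch (assuming the array from the question):

import numpy as np
import pandas as pd

data = np.array([['','Col1','Col2'],['Row1',1,2],['Row2',3,4]])

df = pd.DataFrame(data=data[1:,1:].astype(int),  # cast the string values, per the note above
                  index=data[1:,0],
                  columns=data[0,1:])
print(df)
#       Col1  Col2
# Row1     1     2
# Row2     3     4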


Answer 1

Here is an easy-to-understand solution:

import numpy as np
import pandas as pd

# Creating a 2 dimensional numpy array
>>> data = np.array([[5.8, 2.8], [6.0, 2.2]])
>>> print(data)
>>> data
array([[5.8, 2.8],
       [6. , 2.2]])

# Creating pandas dataframe from numpy array
>>> dataset = pd.DataFrame({'Column1': data[:, 0], 'Column2': data[:, 1]})
>>> print(dataset)
   Column1  Column2
0      5.8      2.8
1      6.0      2.2

Answer 2

I agree with Joris; it seems like you should be doing this differently, like with numpy record arrays. Modifying “option 2” from this great answer, you could do it like this:

import pandas
import numpy

dtype = [('Col1','int32'), ('Col2','float32'), ('Col3','float32')]
values = numpy.zeros(20, dtype=dtype)
index = ['Row'+str(i) for i in range(1, len(values)+1)]

df = pandas.DataFrame(values, index=index)

Answer 3

This can be done simply by using from_records of pandas DataFrame

import numpy as np
import pandas as pd
# Creating a numpy array
x = np.arange(1,10,1).reshape(-1,1)
dataframe = pd.DataFrame.from_records(x)

Answer 4

    >>> import pandas as pd
    >>> import numpy as np
    >>> data.shape
    (480, 193)
    >>> type(data)
    numpy.ndarray
    >>> df = pd.DataFrame(data=data[0:,0:],
    ...                   index=[i for i in range(data.shape[0])],
    ...                   columns=['f'+str(i) for i in range(data.shape[1])])
    >>> df.head()


Answer 5

Adding to @behzad.nouri’s answer – we can create a helper routine to handle this common scenario:

import pandas as pd
from numpy import array

def csvDf(dat,**kwargs): 
  data = array(dat)
  if data is None or len(data)==0 or len(data[0])==0:
    return None
  else:
    return pd.DataFrame(data[1:,1:],index=data[1:,0],columns=data[0,1:],**kwargs)

Let’s try it out:

data = [['','a','b','c'],['row1','row1cola','row1colb','row1colc'],
     ['row2','row2cola','row2colb','row2colc'],['row3','row3cola','row3colb','row3colc']]
csvDf(data)

In [61]: csvDf(data)
Out[61]:
             a         b         c
row1  row1cola  row1colb  row1colc
row2  row2cola  row2colb  row2colc
row3  row3cola  row3colb  row3colc

Combine two Series into a DataFrame in pandas

Question: Combine two Series into a DataFrame in pandas

I have two Series s1 and s2 with the same (non-consecutive) indices. How do I combine s1 and s2 to being two columns in a DataFrame and keep one of the indices as a third column?


Answer 0

I think concat is a nice way to do this. If they are present it uses the name attributes of the Series as the columns (otherwise it simply numbers them):

In [1]: s1 = pd.Series([1, 2], index=['A', 'B'], name='s1')

In [2]: s2 = pd.Series([3, 4], index=['A', 'B'], name='s2')

In [3]: pd.concat([s1, s2], axis=1)
Out[3]:
   s1  s2
A   1   3
B   2   4

In [4]: pd.concat([s1, s2], axis=1).reset_index()
Out[4]:
  index  s1  s2
0     A   1   3
1     B   2   4

Note: This extends to more than 2 Series.
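
One more detail (my own note, not from the answer): the keys argument of concat sets the column names explicitly, which helps when the Series are unnamed or you want different names:

In [5]: pd.concat([s1, s2], axis=1, keys=['first', 'second'])
Out[5]:
   first  second
A      1       3
B      2       4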


Answer 1

Why don’t you just use .to_frame if both have the same indexes?

>= v0.23

a.to_frame().join(b)

< v0.23

a.to_frame().join(b.to_frame())

Answer 2

Pandas will automatically align the passed-in series and create the joint index; they happen to be the same here. reset_index moves the index to a column.

In [1]: from pandas import Series, DataFrame; from numpy.random import randn

In [2]: s1 = Series(randn(5),index=[1,2,4,5,6])

In [4]: s2 = Series(randn(5),index=[1,2,4,5,6])

In [8]: DataFrame(dict(s1 = s1, s2 = s2)).reset_index()
Out[8]: 
   index        s1        s2
0      1 -0.176143  0.128635
1      2 -1.286470  0.908497
2      4 -0.995881  0.528050
3      5  0.402241  0.458870
4      6  0.380457  0.072251

Answer 3

Example code:

a = pd.Series([1,2,3,4], index=[7,2,8,9])
b = pd.Series([5,6,7,8], index=[7,2,8,9])
data = pd.DataFrame({'a': a,'b':b, 'idx_col':a.index})

Pandas allows you to create a DataFrame from a dict with Series as the values and the column names as the keys. When it finds a Series as a value, it uses the Series index as part of the DataFrame index. This data alignment is one of the main perks of Pandas. Consequently, unless you have other needs, the freshly created DataFrame has duplicated values. In the above example, data['idx_col'] has the same data as data.index.


Answer 4

If I may answer this.

The fundamentals behind converting a series to a data frame are to understand that:

1. At a conceptual level, every column in a data frame is a series.

2. And, every column name is a key name that maps to a series.

If you keep the above two concepts in mind, you can think of many ways to convert a series to a data frame. One easy solution is like this:

Create two series:

import pandas as pd

series_1 = pd.Series(list(range(10)))

series_2 = pd.Series(list(range(20,30)))

Create an empty data frame with just the desired column names:

df = pd.DataFrame(columns = ['Column_name#1', 'Column_name#2'])

Put the series values inside the data frame using the mapping concept:

df['Column_name#1'] = series_1

df['Column_name#2'] = series_2

Check the results:

df.head(5)

Answer 5

Not sure I fully understand your question, but is this what you want to do?

pd.DataFrame(data=dict(s1=s1, s2=s2), index=s1.index)

(index=s1.index is not even necessary here)


Answer 6

A simplification of the solution based on join():

df = a.to_frame().join(b)

Answer 7

I used pandas to convert my numpy array or series to a dataframe, then added an additional column by key as ‘prediction’. If you need the dataframe converted back to a list, use values.tolist():

output=pd.DataFrame(X_test)
output['prediction']=y_pred

result = output.values.tolist()  # renamed from 'list' to avoid shadowing the built-in

How do I select all columns except one in pandas?

Question: How do I select all columns except one in pandas?

I have a dataframe that looks like this:

import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.rand(4,4), columns = list('abcd'))
df
      a         b         c         d
0  0.418762  0.042369  0.869203  0.972314
1  0.991058  0.510228  0.594784  0.534366
2  0.407472  0.259811  0.396664  0.894202
3  0.726168  0.139531  0.324932  0.906575

How can I get all columns except column b?


Answer 0

When the columns are not a MultiIndex, df.columns is just an array of column names so you can do:

df.loc[:, df.columns != 'b']

          a         c         d
0  0.561196  0.013768  0.772827
1  0.882641  0.615396  0.075381
2  0.368824  0.651378  0.397203
3  0.788730  0.568099  0.869127

Answer 1

Don’t use ix. It’s deprecated. The most readable and idiomatic way of doing this is df.drop():

>>> df

          a         b         c         d
0  0.175127  0.191051  0.382122  0.869242
1  0.414376  0.300502  0.554819  0.497524
2  0.142878  0.406830  0.314240  0.093132
3  0.337368  0.851783  0.933441  0.949598

>>> df.drop('b', axis=1)

          a         c         d
0  0.175127  0.382122  0.869242
1  0.414376  0.554819  0.497524
2  0.142878  0.314240  0.093132
3  0.337368  0.933441  0.949598

Note that by default, .drop() does not operate inplace; despite the ominous name, df is unharmed by this process. If you want to permanently remove b from df, do df.drop('b', inplace=True).

df.drop() also accepts a list of labels, e.g. df.drop(['a', 'b'], axis=1) will drop column a and b.
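
As a small addition (assuming pandas 0.21 or newer): drop also accepts a columns keyword, so you don’t have to remember the axis number:

df.drop(columns=['a', 'b'])  # same as df.drop(['a', 'b'], axis=1)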


Answer 2

df[df.columns.difference(['b'])]

Out: 
          a         c         d
0  0.427809  0.459807  0.333869
1  0.678031  0.668346  0.645951
2  0.996573  0.673730  0.314911
3  0.786942  0.719665  0.330833

Answer 3

You can use df.columns.isin()

df.loc[:, ~df.columns.isin(['b'])]

When you want to drop multiple columns, it is as simple as:

df.loc[:, ~df.columns.isin(['col1', 'col2'])]

Answer 4

Here is another way:

df[[i for i in list(df.columns) if i != '<your column>']]

You just pass all the columns to be shown except the one you do not want.


Answer 5

Another slight modification to @Salvador Dali’s answer enables a list of columns to exclude:

df[[i for i in list(df.columns) if i not in list_of_columns_to_exclude]]

or

df.loc[:,[i for i in list(df.columns) if i not in list_of_columns_to_exclude]]

Answer 6

I think the best way to do this is the one mentioned by @Salvador Dali. Not that the others are wrong.

When you have a data set where you just want to select one column into one variable and the rest of the columns into another for comparison or computational purposes, dropping the column from the data set might not help. Of course there are use cases for that as well.

x_cols = [x for x in data.columns if x != 'name of column to be excluded']

Then you can put that collection of columns in variable x_cols into another variable like x_cols1 for other computations.

ex: x_cols1 = data[x_cols]

Answer 7

Here is a one-line lambda (wrapped in list() so it also works on Python 3, where map returns an iterator):

df[list(map(lambda x: x not in ['b'], list(df.columns)))]

before:

import pandas
import numpy as np
df = pd.DataFrame(np.random.rand(4,4), columns = list('abcd'))
df

       a           b           c           d
0   0.774951    0.079351    0.118437    0.735799
1   0.615547    0.203062    0.437672    0.912781
2   0.804140    0.708514    0.156943    0.104416
3   0.226051    0.641862    0.739839    0.434230

after:

df[list(map(lambda x: x not in ['b'], list(df.columns)))]

        a          c          d
0   0.774951    0.118437    0.735799
1   0.615547    0.437672    0.912781
2   0.804140    0.156943    0.104416
3   0.226051    0.739839    0.434230

Create a Pandas DataFrame from a string

Question: Create a Pandas DataFrame from a string

In order to test some functionality I would like to create a DataFrame from a string. Let’s say my test data looks like:

TESTDATA="""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""

What is the simplest way to read that data into a Pandas DataFrame?


Answer 0

A simple way to do this is to use StringIO.StringIO (Python 2) or io.StringIO (Python 3) and pass that to the pandas.read_csv function. E.g.:

import sys
if sys.version_info[0] < 3: 
    from StringIO import StringIO
else:
    from io import StringIO

import pandas as pd

TESTDATA = StringIO("""col1;col2;col3
    1;4.4;99
    2;4.5;200
    3;4.7;65
    4;3.2;140
    """)

df = pd.read_csv(TESTDATA, sep=";")

Answer 1

Split Method

data = TESTDATA  # the string from the question
df = pd.DataFrame([x.split(';') for x in data.strip().split('\n')])
print(df)
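
As written, this keeps the header line as data row 0 and leaves every value a string. A sketch of one way to handle both, assuming TESTDATA from the question:

rows = [x.split(';') for x in TESTDATA.strip().split('\n')]
df = pd.DataFrame(rows[1:], columns=rows[0])  # first row becomes the header
df = df.apply(pd.to_numeric)                  # convert string values to numbers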

Answer 2

A quick and easy solution for interactive work is to copy-and-paste the text by loading the data from the clipboard.

Select the content of the string with your mouse:

In the Python shell use read_clipboard()

>>> pd.read_clipboard()
  col1;col2;col3
0       1;4.4;99
1      2;4.5;200
2       3;4.7;65
3      4;3.2;140

Use the appropriate separator:

>>> pd.read_clipboard(sep=';')
   col1  col2  col3
0     1   4.4    99
1     2   4.5   200
2     3   4.7    65
3     4   3.2   140

>>> df = pd.read_clipboard(sep=';') # save to dataframe

Answer 3

This answer applies when a string is manually entered, not when it’s read from somewhere.

A traditional variable-width CSV is unreadable when stored as a string variable. Especially for use inside a .py file, consider fixed-width pipe-separated data instead. Various IDEs and editors may have a plugin to format pipe-separated text into a neat table.

Using read_csv

Store the following in a utility module, e.g. util/pandas.py. An example is included in the function’s docstring.

import io
import re

import pandas as pd


def read_psv(str_input: str, **kwargs) -> pd.DataFrame:
    """Read a Pandas object from a pipe-separated table contained within a string.

    Input example:
        | int_score | ext_score | eligible |
        |           | 701       | True     |
        | 221.3     | 0         | False    |
        |           | 576       | True     |
        | 300       | 600       | True     |

    The leading and trailing pipes are optional, but if one is present,
    so must be the other.

    `kwargs` are passed to `read_csv`. They must not include `sep`.

    In PyCharm, the "Pipe Table Formatter" plugin has a "Format" feature that can 
    be used to neatly format a table.

    Ref: https://stackoverflow.com/a/46471952/
    """

    substitutions = [
        ('^ *', ''),  # Remove leading spaces
        (' *$', ''),  # Remove trailing spaces
        (r' *\| *', '|'),  # Remove spaces between columns
    ]
    if all(line.lstrip().startswith('|') and line.rstrip().endswith('|') for line in str_input.strip().split('\n')):
        substitutions.extend([
            (r'^\|', ''),  # Remove redundant leading delimiter
            (r'\|$', ''),  # Remove redundant trailing delimiter
        ])
    for pattern, replacement in substitutions:
        str_input = re.sub(pattern, replacement, str_input, flags=re.MULTILINE)
    return pd.read_csv(io.StringIO(str_input), sep='|', **kwargs)

Non-working alternatives

The code below doesn’t work properly because it adds an empty column on both the left and right sides.

df = pd.read_csv(io.StringIO(df_str), sep=r'\s*\|\s*', engine='python')

As for read_fwf, it doesn’t actually use so many of the optional kwargs that read_csv accepts and uses. As such, it shouldn’t be used at all for pipe-separated data.


Answer 4

The simplest way is to save it to a temp file and then read it back:

import pandas as pd

CSV_FILE_NAME = 'temp_file.csv'  # Consider creating temp file, look URL below
with open(CSV_FILE_NAME, 'w') as outfile:
    outfile.write(TESTDATA)
df = pd.read_csv(CSV_FILE_NAME, sep=';')

Right way of creating temp file: How can I create a tmp file in Python?


How to group dataframe rows into a list in pandas groupby?

Question: How to group dataframe rows into a list in pandas groupby?

I have a pandas data frame df like:

a b
A 1
A 2
B 5
B 5
B 4
C 6

I want to group by the first column and get second column as lists in rows:

A [1,2]
B [5,5,4]
C [6]

Is it possible to do something like this using pandas groupby?


Answer 0

You can do this using groupby to group on the column of interest and then apply list to every group:

In [1]: df = pd.DataFrame( {'a':['A','A','B','B','B','C'], 'b':[1,2,5,5,4,6]})
        df

Out[1]: 
   a  b
0  A  1
1  A  2
2  B  5
3  B  5
4  B  4
5  C  6

In [2]: df.groupby('a')['b'].apply(list)
Out[2]: 
a
A       [1, 2]
B    [5, 5, 4]
C          [6]
Name: b, dtype: object

In [3]: df1 = df.groupby('a')['b'].apply(list).reset_index(name='new')
        df1
Out[3]: 
   a        new
0  A     [1, 2]
1  B  [5, 5, 4]
2  C        [6]

Answer 1

If performance is important, go down to the numpy level:

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.random.randint(0, 60, 600), 'b': [1, 2, 5, 5, 4, 6]*100})

def f(df):
         keys, values = df.sort_values('a').values.T
         ukeys, index = np.unique(keys, True)
         arrays = np.split(values, index[1:])
         df2 = pd.DataFrame({'a':ukeys, 'b':[list(a) for a in arrays]})
         return df2

Tests:

In [301]: %timeit f(df)
1000 loops, best of 3: 1.64 ms per loop

In [302]: %timeit df.groupby('a')['b'].apply(list)
100 loops, best of 3: 5.26 ms per loop

Answer 2

A handy way to achieve this would be:

df.groupby('a').agg({'b':lambda x: list(x)})

Look into writing Custom Aggregations: https://www.kaggle.com/akshaysehgal/how-to-group-by-aggregate-using-py


Answer 3

As you were saying, the groupby method of a pd.DataFrame object can do the job.

Example

 L = ['A','A','B','B','B','C']
 N = [1,2,5,5,4,6]

 import pandas as pd
 df = pd.DataFrame(list(zip(L,N)), columns = list('LN'))  # list() needed on Python 3


 groups = df.groupby(df.L)

 groups.groups
      {'A': [0, 1], 'B': [2, 3, 4], 'C': [5]}

which gives an index-wise description of the groups.

To get elements of single groups, you can do, for instance

 groups.get_group('A')

     L  N
  0  A  1
  1  A  2

  groups.get_group('B')

     L  N
  2  B  5
  3  B  5
  4  B  4

Answer 4

To solve this for several columns of a dataframe:

In [5]: df = pd.DataFrame( {'a':['A','A','B','B','B','C'], 'b':[1,2,5,5,4,6],'c'
   ...: :[3,3,3,4,4,4]})

In [6]: df
Out[6]: 
   a  b  c
0  A  1  3
1  A  2  3
2  B  5  3
3  B  5  4
4  B  4  4
5  C  6  4

In [7]: df.groupby('a').agg(lambda x: list(x))
Out[7]: 
           b          c
a                      
A     [1, 2]     [3, 3]
B  [5, 5, 4]  [3, 4, 4]
C        [6]        [4]

This answer was inspired by Anamika Modi‘s answer. Thank you!


Answer 5

Use any of the following groupby and agg recipes.

# Setup
df = pd.DataFrame({
  'a': ['A', 'A', 'B', 'B', 'B', 'C'],
  'b': [1, 2, 5, 5, 4, 6],
  'c': ['x', 'y', 'z', 'x', 'y', 'z']
})
df

   a  b  c
0  A  1  x
1  A  2  y
2  B  5  z
3  B  5  x
4  B  4  y
5  C  6  z

To aggregate multiple columns as lists, use any of the following:

df.groupby('a').agg(list)
df.groupby('a').agg(pd.Series.tolist)

           b          c
a                      
A     [1, 2]     [x, y]
B  [5, 5, 4]  [z, x, y]
C        [6]        [z]

To group-listify a single column only, convert the groupby to a SeriesGroupBy object, then call SeriesGroupBy.agg:

df.groupby('a').agg({'b': list})  # 4.42 ms 
df.groupby('a')['b'].agg(list)    # 2.76 ms - faster

a
A       [1, 2]
B    [5, 5, 4]
C          [6]
Name: b, dtype: object

Answer 6

If you are looking for unique lists while grouping multiple columns, this could help:

df.groupby('a').agg(lambda x: list(set(x))).reset_index()

Answer 7

Let us use df.groupby with a dict comprehension and the Series constructor:

pd.Series({x : y.b.tolist() for x , y in df.groupby('a')})
Out[664]: 
A       [1, 2]
B    [5, 5, 4]
C          [6]
dtype: object

Answer 8

It is time to use agg instead of apply.

When

df = pd.DataFrame( {'a':['A','A','B','B','B','C'], 'b':[1,2,5,5,4,6], 'c': [1,2,5,5,4,6]})

If you want multiple columns stacked into lists, the result is a pd.DataFrame:

df.groupby('a')[['b', 'c']].agg(list)
# or 
df.groupby('a').agg(list)

If you want a single column as a list, the result is a pd.Series:

df.groupby('a')['b'].agg(list)
#or
df.groupby('a')['b'].apply(list)

Note: producing a pd.DataFrame is about 10x slower than producing a pd.Series when you only aggregate a single column, so use the DataFrame form in the multi-column case.


Answer 9

Here I have grouped elements with “|” as a separator:

    import pandas as pd

    df = pd.read_csv('input.csv')

    df
    Out[1]:
      Area  Keywords
    0  A  1
    1  A  2
    2  B  5
    3  B  5
    4  B  4
    5  C  6

    df.dropna(inplace =  True)
    df['Area']=df['Area'].apply(lambda x:x.lower().strip())
    print(df.columns)
    df_op = df.groupby('Area').agg({"Keywords":lambda x : "|".join(x)})

    df_op.to_csv('output.csv')
    Out[2]:
    df_op
    Area  Keywords

    A       [1| 2]
    B    [5| 5| 4]
    C          [6]

Answer 10

The easiest way I have seen to achieve most of the same thing, at least for one column, is similar to Anamika’s answer, just with the tuple syntax for the aggregate function:

df.groupby('a').agg(b=('b','unique'), c=('c','unique'))

How to check if a column exists in pandas

Question: How to check if a column exists in pandas

Is there a way to check if a column exists in a Pandas DataFrame?

Suppose that I have the following DataFrame:

>>> import pandas as pd
>>> from random import randint
>>> df = pd.DataFrame({'A': [randint(1, 9) for x in range(10)],
                       'B': [randint(1, 9)*10 for x in range(10)],
                       'C': [randint(1, 9)*100 for x in range(10)]})
>>> df
   A   B    C
0  3  40  100
1  6  30  200
2  7  70  800
3  3  50  200
4  7  50  400
5  4  10  400
6  3  70  500
7  8  30  200
8  3  40  800
9  6  60  200

and I want to calculate df['sum'] = df['A'] + df['C']

But first I want to check if df['A'] exists, and if not, I want to calculate df['sum'] = df['B'] + df['C'] instead.


Answer 0

This will work:

if 'A' in df:

But for clarity, I’d probably write it as:

if 'A' in df.columns:
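
Applied to the calculation from the question, a short sketch:

if 'A' in df.columns:
    df['sum'] = df['A'] + df['C']
else:
    df['sum'] = df['B'] + df['C']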

Answer 1

To check if one or more columns all exist, you can use set.issubset, as in:

if set(['A','C']).issubset(df.columns):
   df['sum'] = df['A'] + df['C']                

As @brianpck points out in a comment, set([]) can alternatively be constructed with curly braces,

if {'A', 'C'}.issubset(df.columns):

See this question for a discussion of the curly-braces syntax.

Or, you can use a list comprehension, as in:

if all([item in df.columns for item in ['A','C']]):

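For instance, a small sketch of the issubset guard with an explicit else branch (the missing-column report is just one possible way to handle the failure case):

    required = {'A', 'C'}
    if required.issubset(df.columns):
        df['sum'] = df['A'] + df['C']
    else:
        print('missing columns:', required - set(df.columns))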
回答 2

再建议一种不使用 if 语句的方法:您可以对 DataFrame 使用 get() 方法。针对问题中的求和:

df['sum'] = df.get('A', df['B']) + df['C']

DataFrame 的 get 方法的行为与 Python 字典的 get 类似。

Just to suggest another way without using if statements, you can use the get() method for DataFrames. For performing the sum based on the question:

df['sum'] = df.get('A', df['B']) + df['C']

The DataFrame get method has similar behavior as python dictionaries.

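A minimal sketch of that fallback behavior, assuming a frame that lacks column 'A':

    import pandas as pd

    df = pd.DataFrame({'B': [1, 2], 'C': [10, 20]})  # no 'A' column
    df['sum'] = df.get('A', df['B']) + df['C']       # 'A' is missing, so 'B' is used
    print(df['sum'].tolist())                        # [11, 22]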

Python Pandas:获取列匹配特定值的行的索引

问题:Python Pandas:获取列匹配特定值的行的索引

给定一个带有“BoolCol”列的 DataFrame,我们想查找“BoolCol”值为 True 的那些行的 DataFrame 索引。

我目前有迭代的方式来做,很完美:

    for i in range(100, 3000):
        if df.iloc[i]['BoolCol'] == True:
            print(i, df.iloc[i]['BoolCol'])

但这不是正确的熊猫方法。经过研究,我目前正在使用以下代码:

df[df['BoolCol'] == True].index.tolist()

这给了我一份索引列表,但是当我通过以下方法检查它们时,它们不匹配:

df.iloc[i]['BoolCol']

结果实际上是 False!

哪一种是正确的Pandas方法?

Given a DataFrame with a column “BoolCol”, we want to find the indexes of the DataFrame in which the values for “BoolCol” == True

I currently have the iterating way to do it, which works perfectly:

    for i in range(100, 3000):
        if df.iloc[i]['BoolCol'] == True:
            print(i, df.iloc[i]['BoolCol'])

But this is not the correct pandas way to do it. After some research, I am currently using this code:

df[df['BoolCol'] == True].index.tolist()

This one gives me a list of indexes, but they don’t match when I check them by doing:

df.iloc[i]['BoolCol']

The result is actually False!!

Which would be the correct Pandas way to do this?


回答 0

df.iloc[i] 返回 df 的第 i 行。这里的 i 不是索引标签,而是从 0 开始的位置索引。

相反,index 属性返回的是实际的索引标签,而不是数字行索引:

df.index[df['BoolCol'] == True].tolist()

或等效地,

df.index[df['BoolCol']].tolist()

用一个非默认索引(其值不等于行的数字位置)的 DataFrame 做实验,可以很清楚地看到两者的差异:

df = pd.DataFrame({'BoolCol': [True, False, False, True, True]},
       index=[10,20,30,40,50])

In [53]: df
Out[53]: 
   BoolCol
10    True
20   False
30   False
40    True
50    True

[5 rows x 1 columns]

In [54]: df.index[df['BoolCol']].tolist()
Out[54]: [10, 40, 50]

如果您想使用这个索引,

In [56]: idx = df.index[df['BoolCol']]

In [57]: idx
Out[57]: Int64Index([10, 40, 50], dtype='int64')

那么您可以使用 loc(而不是 iloc)来选择行:

In [58]: df.loc[idx]
Out[58]: 
   BoolCol
10    True
40    True
50    True

[3 rows x 1 columns]

注意,loc 也可以接受布尔数组:

In [55]: df.loc[df['BoolCol']]
Out[55]: 
   BoolCol
10    True
40    True
50    True

[3 rows x 1 columns]

如果您有一个布尔数组 mask,并且需要序数(位置)索引值,则可以使用 np.flatnonzero 进行计算:

In [110]: np.flatnonzero(df['BoolCol'])
Out[112]: array([0, 3, 4])

使用 df.iloc 按序数索引选择行:

In [113]: df.iloc[np.flatnonzero(df['BoolCol'])]
Out[113]: 
   BoolCol
10    True
40    True
50    True

df.iloc[i] returns the ith row of df. i does not refer to the index label, i is a 0-based index.

In contrast, the attribute index returns actual index labels, not numeric row-indices:

df.index[df['BoolCol'] == True].tolist()

or equivalently,

df.index[df['BoolCol']].tolist()

You can see the difference quite clearly by playing with a DataFrame with a non-default index that does not equal to the row’s numerical position:

df = pd.DataFrame({'BoolCol': [True, False, False, True, True]},
       index=[10,20,30,40,50])

In [53]: df
Out[53]: 
   BoolCol
10    True
20   False
30   False
40    True
50    True

[5 rows x 1 columns]

In [54]: df.index[df['BoolCol']].tolist()
Out[54]: [10, 40, 50]

If you want to use the index,

In [56]: idx = df.index[df['BoolCol']]

In [57]: idx
Out[57]: Int64Index([10, 40, 50], dtype='int64')

then you can select the rows using loc instead of iloc:

In [58]: df.loc[idx]
Out[58]: 
   BoolCol
10    True
40    True
50    True

[3 rows x 1 columns]

Note that loc can also accept boolean arrays:

In [55]: df.loc[df['BoolCol']]
Out[55]: 
   BoolCol
10    True
40    True
50    True

[3 rows x 1 columns]

If you have a boolean array, mask, and need ordinal index values, you can compute them using np.flatnonzero:

In [110]: np.flatnonzero(df['BoolCol'])
Out[112]: array([0, 3, 4])

Use df.iloc to select rows by ordinal index:

In [113]: df.iloc[np.flatnonzero(df['BoolCol'])]
Out[113]: 
   BoolCol
10    True
40    True
50    True

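The mismatch the asker ran into can be reproduced with the non-default index above: the labels that come back are not valid positions (a sketch; the commented line is deliberately not run):

    labels = df.index[df['BoolCol']].tolist()  # [10, 40, 50]  (index labels, not positions)
    # df.iloc[labels]  # raises IndexError: 10, 40 and 50 are out of range as positions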
回答 1

可以使用numpy where()函数来完成:

import pandas as pd
import numpy as np

In [716]: df = pd.DataFrame({"gene_name": ['SLC45A1', 'NECAP2', 'CLIC4', 'ADC', 'AGBL4'] , "BoolCol": [False, True, False, True, True] },
       index=list("abcde"))

In [717]: df
Out[717]: 
  BoolCol gene_name
a   False   SLC45A1
b    True    NECAP2
c   False     CLIC4
d    True       ADC
e    True     AGBL4

In [718]: np.where(df["BoolCol"] == True)
Out[718]: (array([1, 3, 4]),)

In [719]: select_indices = list(np.where(df["BoolCol"] == True)[0])

In [720]: df.iloc[select_indices]
Out[720]: 
  BoolCol gene_name
b    True    NECAP2
d    True       ADC
e    True     AGBL4

虽然匹配时并不总是需要索引,但如果您需要的话:

In [796]: df.iloc[select_indices].index
Out[796]: Index([u'b', u'd', u'e'], dtype='object')

In [797]: df.iloc[select_indices].index.tolist()
Out[797]: ['b', 'd', 'e']

Can be done using numpy where() function:

import pandas as pd
import numpy as np

In [716]: df = pd.DataFrame({"gene_name": ['SLC45A1', 'NECAP2', 'CLIC4', 'ADC', 'AGBL4'] , "BoolCol": [False, True, False, True, True] },
       index=list("abcde"))

In [717]: df
Out[717]: 
  BoolCol gene_name
a   False   SLC45A1
b    True    NECAP2
c   False     CLIC4
d    True       ADC
e    True     AGBL4

In [718]: np.where(df["BoolCol"] == True)
Out[718]: (array([1, 3, 4]),)

In [719]: select_indices = list(np.where(df["BoolCol"] == True)[0])

In [720]: df.iloc[select_indices]
Out[720]: 
  BoolCol gene_name
b    True    NECAP2
d    True       ADC
e    True     AGBL4

Though you don’t always need the index for a match, in case you do:

In [796]: df.iloc[select_indices].index
Out[796]: Index([u'b', u'd', u'e'], dtype='object')

In [797]: df.iloc[select_indices].index.tolist()
Out[797]: ['b', 'd', 'e']

回答 2

一种简单的方法是在过滤之前重置DataFrame的索引:

df_reset = df.reset_index()
df_reset[df_reset['BoolCol']].index.tolist()

有点hacky,但是很快!

A simple way is to reset the index of the DataFrame prior to filtering:

df_reset = df.reset_index()
df_reset[df_reset['BoolCol']].index.tolist()

Bit hacky, but it’s quick!

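Note that after reset_index the values returned are 0-based positions rather than the original labels; a sketch with the non-default index from the earlier answers:

    df = pd.DataFrame({'BoolCol': [True, False, False, True, True]},
                      index=[10, 20, 30, 40, 50])
    df_reset = df.reset_index()
    print(df_reset[df_reset['BoolCol']].index.tolist())  # [0, 3, 4], not [10, 40, 50]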

回答 3

首先,当目标列是 bool 类型时,您可以试试 query(PS:关于如何使用它,请查看链接):

df.query('BoolCol')
Out[123]: 
    BoolCol
10     True
40     True
50     True

在用布尔列过滤原始 df 之后,我们就可以取出索引。

df=df.query('BoolCol')
df.index
Out[125]: Int64Index([10, 40, 50], dtype='int64')

熊猫也有 nonzero,我们只需选出值为 True 的行的位置,然后用它来切片 DataFrame 或 index:

df.index[df.BoolCol.nonzero()[0]]
Out[128]: Int64Index([10, 40, 50], dtype='int64')

First, you may check out query when the target column is of type bool (PS: for how to use it, please check the link):

df.query('BoolCol')
Out[123]: 
    BoolCol
10     True
40     True
50     True

After we filter the original df by the Boolean column, we can pick the index.

df=df.query('BoolCol')
df.index
Out[125]: Int64Index([10, 40, 50], dtype='int64')

pandas also has nonzero; we just select the positions of the True rows and use them to slice the DataFrame or index:

df.index[df.BoolCol.nonzero()[0]]
Out[128]: Int64Index([10, 40, 50], dtype='int64')

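A caveat: Series.nonzero was deprecated around pandas 0.24 and later removed, so on recent versions the same positions can be obtained with NumPy instead (a sketch):

    import numpy as np

    df.index[np.flatnonzero(df['BoolCol'])]
    # Int64Index([10, 40, 50], dtype='int64')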
回答 4

如果只想使用一次数据框对象,请使用:

df['BoolCol'].loc[lambda x: x==True].index

If you want to use your dataframe object only once, use:

df['BoolCol'].loc[lambda x: x==True].index

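Since the column is already boolean, the x == True comparison is redundant; an equivalent sketch:

    df['BoolCol'].loc[lambda x: x].index
    # Int64Index([10, 40, 50], dtype='int64')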
回答 5

我把这个问题扩展一下:如何获取所有匹配值的行、列和值?

这是解决方案:

import pandas as pd
import numpy as np


    def search_coordinate(df_data: pd.DataFrame, search_set: set) -> list:
        nda_values = df_data.values
        # np.isin 在整个二维数组上建立布尔掩码,np.where 给出所有匹配的行列坐标
        tuple_index = np.where(np.isin(nda_values, list(search_set)))
        return [(row, col, nda_values[row][col]) for row, col in zip(tuple_index[0], tuple_index[1])]


    if __name__ == '__main__':
        test_datas = [['cat', 'dog', ''],
                      ['goldfish', '', 'kitten'],
                      ['Puppy', 'hamster', 'mouse']]
        df_data = pd.DataFrame(test_datas)
        print(df_data)
        result_list = search_coordinate(df_data, {'dog', 'Puppy'})
        print(f"\n\n{'row':<4} {'col':<4} {'name':>10}")
        for row, col, name in result_list:
            print(f"{row:<4} {col:<4} {name:>10}")

输出:

          0        1       2
0       cat      dog        
1  goldfish           kitten
2     Puppy  hamster   mouse


row  col        name
0    1           dog
2    0         Puppy

I extended this question: how to get the row, column and value of all matching values?

Here is a solution:

import pandas as pd
import numpy as np


    def search_coordinate(df_data: pd.DataFrame, search_set: set) -> list:
        nda_values = df_data.values
        # np.isin builds a boolean mask over the whole 2-D array;
        # np.where yields the row/col coordinates of every match
        tuple_index = np.where(np.isin(nda_values, list(search_set)))
        return [(row, col, nda_values[row][col]) for row, col in zip(tuple_index[0], tuple_index[1])]


    if __name__ == '__main__':
        test_datas = [['cat', 'dog', ''],
                      ['goldfish', '', 'kitten'],
                      ['Puppy', 'hamster', 'mouse']]
        df_data = pd.DataFrame(test_datas)
        print(df_data)
        result_list = search_coordinate(df_data, {'dog', 'Puppy'})
        print(f"\n\n{'row':<4} {'col':<4} {'name':>10}")
        for row, col, name in result_list:
            print(f"{row:<4} {col:<4} {name:>10}")

Output:

          0        1       2
0       cat      dog        
1  goldfish           kitten
2     Puppy  hamster   mouse


row  col        name
0    1           dog
2    0         Puppy
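
The core of the helper is the np.isin / np.where pair; a minimal sketch of those two calls in isolation:

    import numpy as np

    values = np.array([['cat', 'dog'],
                       ['dog', 'fish']])
    mask = np.isin(values, ['dog'])  # boolean mask over the whole 2-D array
    print(np.where(mask))            # (array([0, 1]), array([1, 0]))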