Drop columns whose name contains a specific string from a pandas DataFrame


I have a pandas dataframe with the following column names:

Result1, Test1, Result2, Test2, Result3, Test3, etc…

I want to drop all the columns whose name contains the word “Test”. The number of such columns is not static; it depends on a previous function.

How can I do that?


Answer 0

import pandas as pd
import numpy as np

array = np.random.random((2, 4))
df = pd.DataFrame(array, columns=('Test1', 'toto', 'test2', 'riri'))
print(df)

      Test1      toto     test2      riri
0  0.923249  0.572528  0.845464  0.144891
1  0.020438  0.332540  0.144455  0.741412

# keep only the columns whose name does not start with "test" (case-insensitive)
cols = [c for c in df.columns if c.lower()[:4] != 'test']
df = df[cols]
print(df)

       toto      riri
0  0.572528  0.144891
1  0.332540  0.741412

Answer 1


Here is one way to do this:

df = df[df.columns.drop(list(df.filter(regex='Test')))]
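
A quick breakdown of what the one-liner does, split into steps (the intermediate names here are just for illustration):

matched = df.filter(regex='Test')       # sub-frame holding only the columns whose name matches 'Test'
keep = df.columns.drop(list(matched))   # iterating a DataFrame yields its column names, so this removes them from the Index
df = df[keep]                           # reselect the remaining columns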

Answer 2


Cheaper, Faster, and Idiomatic: str.contains

In recent versions of pandas, you can use string methods on the index and columns. Here, str.startswith seems like a good fit.

To remove all columns starting with a given substring:

df.columns.str.startswith('Test')
# array([ True, False, False, False])

df.loc[:,~df.columns.str.startswith('Test')]

  toto test2 riri
0    x     x    x
1    x     x    x

For case-insensitive matching, you can use regex-based matching with str.contains with an SOL anchor:

df.columns.str.contains('^test', case=False)
# array([ True, False,  True, False])

df.loc[:,~df.columns.str.contains('^test', case=False)] 

  toto riri
0    x    x
1    x    x

If mixed types are a possibility, specify na=False as well.
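
For example, with a non-string column label in the mix, the contains call would otherwise return NaN for that label and break the boolean mask (a minimal sketch of that case):

df.loc[:, ~df.columns.str.contains('^test', case=False, na=False)]  # non-string labels count as "no match"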


Answer 3


You can select the columns you DO want using ‘filter’:

import pandas as pd
import numpy as np

data2 = [{'test2': 1, 'result1': 2}, {'test': 5, 'result34': 10, 'c': 20}]

df = pd.DataFrame(data2)

df

    c   result1     result34    test    test2
0   NaN     2.0     NaN     NaN     1.0
1   20.0    NaN     10.0    5.0     NaN

Now filter

df.filter(like='result',axis=1)

Get..

   result1  result34
0   2.0     NaN
1   NaN     10.0

Answer 4


This can be done neatly in one line with:

df = df.drop(df.filter(regex='Test').columns, axis=1)
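
On pandas 0.21 and later the same idea can also be written with the columns= keyword instead of axis=1 (a minor variation, not part of the original answer):

df = df.drop(columns=df.filter(regex='Test').columns)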

Answer 5


Use the DataFrame.select method:

In [37]: import re; from pandas import DataFrame; from numpy.random import randn

In [38]: df = DataFrame({'Test1': randn(10), 'Test2': randn(10), 'awesome': randn(10)})

In [39]: df.select(lambda x: not re.search(r'Test\d+', x), axis=1)
Out[39]:
   awesome
0    1.215
1    1.247
2    0.142
3    0.169
4    0.137
5   -0.971
6    0.736
7    0.214
8    0.111
9   -0.214
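
Note that DataFrame.select was deprecated in pandas 0.21 and removed in 1.0, so on current pandas a rough equivalent would be something like the following (a sketch under that assumption, not part of the original answer):

import re
df.loc[:, [c for c in df.columns if not re.search(r'Test\d+', c)]]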

Answer 6


This method does everything in place. Many of the other answers create copies and are not as efficient:

df.drop(df.columns[df.columns.str.contains('Test')], axis=1, inplace=True)
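
For reference, the expression inside drop just builds the list of labels to remove; split into two steps it reads like this (the intermediate name is only for illustration):

to_drop = df.columns[df.columns.str.contains('Test')]  # Index of column labels that contain 'Test'
df.drop(to_drop, axis=1, inplace=True)                 # remove them from df without reassigning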


Answer 7


Don’t drop. Catch the opposite of what you want.

df = df.filter(regex='^((?!badword).)*$')  # keeps every column whose name does not contain 'badword' ('Test' in this question)
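
The pattern works by asserting, before every character, that the unwanted substring does not start there, so a name matches only if it never contains it. A minimal check of that behaviour (the sample names are made up for illustration):

import re
pattern = re.compile('^((?!Test).)*$')
[c for c in ['Result1', 'Test1', 'toto'] if pattern.match(c)]  # -> ['Result1', 'toto']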

Answer 8


The shortest way to do this is:

resdf = df.filter(like='Test',axis=1)
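
Note that this keeps the ‘Test’ columns rather than dropping them; to actually remove them you could invert the selection, roughly like so (a sketch, not from the original answer):

df = df.drop(columns=df.filter(like='Test', axis=1).columns)  # drop the columns that filter selected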

Answer 9


A solution for dropping a list of column names, where the list entries can themselves be regexes. I prefer this approach because I’m frequently editing the drop list. It builds a negative filter regex from the drop list.

import re

drop_column_names = ['A', 'B.+', 'C.*']
# keep only the columns whose full name does not match any pattern in the drop list
drop_columns_regex = '^(?!(?:' + '|'.join(drop_column_names) + ')$)'
print('Dropping columns:', ', '.join([c for c in df.columns if not re.search(drop_columns_regex, c)]))
df = df.filter(regex=drop_columns_regex, axis=1)
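
As a quick check of what the generated regex does, on some made-up column names (purely for illustration):

import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4]], columns=['A', 'B1', 'C2', 'D'])
# 'A', 'B1' and 'C2' each fully match a pattern in the drop list, so only 'D' survives
df.filter(regex='^(?!(?:A|B.+|C.*)$)', axis=1)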