Question: Scaling pandas dataframe columns with sklearn
I have a pandas dataframe with mixed type columns, and I’d like to apply sklearn’s min_max_scaler to some of the columns. Ideally, I’d like to do these transformations in place, but haven’t figured out a way to do that yet. I’ve written the following code that works:
import pandas as pd
import numpy as np
from sklearn import preprocessing
scaler = preprocessing.MinMaxScaler()
dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']})
min_max_scaler = preprocessing.MinMaxScaler()
def scaleColumns(df, cols_to_scale):
    for col in cols_to_scale:
        df[col] = pd.DataFrame(min_max_scaler.fit_transform(pd.DataFrame(dfTest[col])), columns=[col])
    return df
dfTest
A B C
0 14.00 103.02 big
1 90.20 107.26 small
2 90.95 110.35 big
3 96.27 114.23 small
4 91.21 114.68 small
scaled_df = scaleColumns(dfTest,['A','B'])
scaled_df
A B C
0 0.000000 0.000000 big
1 0.926219 0.363636 small
2 0.935335 0.628645 big
3 1.000000 0.961407 small
4 0.938495 1.000000 small
I’m curious if this is the preferred/most efficient way to do this transformation. Is there a way I could use df.apply that would be better?
I’m also surprised I can’t get the following code to work:
bad_output = min_max_scaler.fit_transform(dfTest['A'])
If I pass an entire dataframe to the scaler it works:
dfTest2 = dfTest.drop('C', axis = 1)
good_output = min_max_scaler.fit_transform(dfTest2)
good_output
I’m confused why passing a series to the scaler fails. In my full working code above, I had hoped to just pass a series to the scaler and then set the dataframe column equal to the scaled series. I’ve seen this question asked in a few other places, but haven’t found a good answer. Any help understanding what’s going on here would be greatly appreciated!
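The short explanation is that scikit-learn transformers expect a 2-D array of shape (n_samples, n_features), so a 1-D Series is rejected. A minimal sketch of reshaping a single column before scaling, reusing min_max_scaler and dfTest from above:
# A Series is 1-D; reshape it into a single-column 2-D array for the scaler.
col_2d = dfTest['A'].values.reshape(-1, 1)      # shape (5, 1)
scaled = min_max_scaler.fit_transform(col_2d)   # returns a (5, 1) array
dfTest['A'] = scaled.ravel()                    # flatten back into a column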
Answer 0
I am not sure if previous versions of pandas prevented this, but now the following snippet works perfectly for me and produces exactly what you want without having to use apply:
>>> import pandas as pd
>>> from sklearn.preprocessing import MinMaxScaler
>>> scaler = MinMaxScaler()
>>> dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],
...                        'B':[103.02,107.26,110.35,114.23,114.68],
...                        'C':['big','small','big','small','small']})
>>> dfTest[['A', 'B']] = scaler.fit_transform(dfTest[['A', 'B']])
>>> dfTest
A B C
0 0.000000 0.000000 big
1 0.926219 0.363636 small
2 0.935335 0.628645 big
3 1.000000 0.961407 small
4 0.938495 1.000000 small
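A small follow-up, not part of the original answer: because the scaler is fitted once on both columns, the same fitted object can be reused on new data or to undo the scaling. A minimal sketch, assuming the scaler and dfTest from the snippet above (new_rows is a hypothetical example frame):
# Apply the already-fitted scaling to new, unseen rows.
new_rows = pd.DataFrame({'A': [50.0, 95.0], 'B': [105.0, 112.0]})
print(scaler.transform(new_rows[['A', 'B']]))

# Recover the original values from the scaled columns.
print(scaler.inverse_transform(dfTest[['A', 'B']]))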
Answer 1
Like this?
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

dfTest = pd.DataFrame({
    'A':[14.00,90.20,90.95,96.27,91.21],
    'B':[103.02,107.26,110.35,114.23,114.68],
    'C':['big','small','big','small','small']
})
dfTest[['A','B']] = dfTest[['A','B']].apply(
    lambda x: MinMaxScaler().fit_transform(x))
dfTest
A B C
0 0.000000 0.000000 big
1 0.926219 0.363636 small
2 0.935335 0.628645 big
3 1.000000 0.961407 small
4 0.938495 1.000000 small
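Note that newer scikit-learn releases raise an error (rather than just a warning) when a 1-D Series is passed to fit_transform inside apply, as the next answer explains. One way to keep the per-column apply style is to reshape each column first; a sketch assuming the same dfTest:
from sklearn.preprocessing import MinMaxScaler

# Reshape each column to 2-D for the scaler, then flatten the result back to 1-D.
dfTest[['A', 'B']] = dfTest[['A', 'B']].apply(
    lambda x: MinMaxScaler().fit_transform(x.values.reshape(-1, 1)).ravel())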
Answer 2
As mentioned in pir's comment, the .apply(lambda el: scale.fit_transform(el)) method will produce the following warning:
DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17
and will raise ValueError in 0.19. Reshape your data either using
X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1)
if it contains a single sample.
Converting your columns to numpy arrays should do the job (I prefer StandardScaler):
from sklearn.preprocessing import StandardScaler
scale = StandardScaler()
dfTest[['A','B']] = scale.fit_transform(dfTest[['A','B']].as_matrix())
— Edit Nov 2018 (Tested for pandas 0.23.4)–
As Rob Murray mentions in the comments, in the current (v0.23.4) version of pandas, .as_matrix() returns a FutureWarning. Therefore, it should be replaced by .values:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit_transform(dfTest[['A','B']].values)
— Edit May 2019 (Tested for pandas 0.24.2)–
As joelostblom mentions in the comments, “Since 0.24.0, it is recommended to use .to_numpy() instead of .values.”
Updated example:
import pandas as pd
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
dfTest = pd.DataFrame({
'A':[14.00,90.20,90.95,96.27,91.21],
'B':[103.02,107.26,110.35,114.23,114.68],
'C':['big','small','big','small','small']
})
dfTest[['A', 'B']] = scaler.fit_transform(dfTest[['A','B']].to_numpy())
dfTest
A B C
0 -1.995290 -1.571117 big
1 0.436356 -0.603995 small
2 0.460289 0.100818 big
3 0.630058 0.985826 small
4 0.468586 1.088469 small
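As a side note, not part of the original answer: recent scikit-learn versions also accept a DataFrame directly, so the explicit .to_numpy() conversion is mainly needed on older combinations of pandas and scikit-learn. A minimal sketch with the same scaler and dfTest:
# scikit-learn converts the DataFrame internally; no manual conversion required.
dfTest[['A', 'B']] = scaler.fit_transform(dfTest[['A', 'B']])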
Answer 3
df = pd.DataFrame(scale.fit_transform(df.values), columns=df.columns, index=df.index)
This should work without deprecation warnings.
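One caveat, not part of the original answer: rebuilding the whole frame this way requires every column to be numeric, so for the mixed-type dfTest from the question you would apply it to just the numeric subset. A minimal sketch, assuming a StandardScaler as in the previous answer:
from sklearn.preprocessing import StandardScaler

# Scale only the numeric columns and write them back, keeping 'C' untouched.
scale = StandardScaler()
num_cols = ['A', 'B']
dfTest[num_cols] = pd.DataFrame(
    scale.fit_transform(dfTest[num_cols].values),
    columns=num_cols, index=dfTest.index)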
Answer 4
You can do it using pandas only:
In [235]:
dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']})
df = dfTest[['A', 'B']]
df_norm = (df - df.min()) / (df.max() - df.min())
print(df_norm)
print(pd.concat((df_norm, dfTest.C), axis=1))
A B
0 0.000000 0.000000
1 0.926219 0.363636
2 0.935335 0.628645
3 1.000000 0.961407
4 0.938495 1.000000
A B C
0 0.000000 0.000000 big
1 0.926219 0.363636 small
2 0.935335 0.628645 big
3 1.000000 0.961407 small
4 0.938495 1.000000 small
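If you would rather write the normalized values back into the original mixed-type frame instead of concatenating, a minimal sketch reusing df and dfTest from above:
# Overwrite only the numeric columns; 'C' is left untouched.
dfTest[['A', 'B']] = (df - df.min()) / (df.max() - df.min())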
Answer 5
I know it’s a very old comment, but still: instead of using single brackets (dfTest['A']), use double brackets (dfTest[['A']]), i.e. min_max_scaler.fit_transform(dfTest[['A']]). I believe this will give the desired result.
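The difference comes down to dimensionality: single brackets return a 1-D Series, while double brackets return a 2-D single-column DataFrame, which is the shape the scaler expects. A quick check with the dfTest from the question:
print(dfTest['A'].shape)    # (5,)   -> 1-D Series, rejected by the scaler
print(dfTest[['A']].shape)  # (5, 1) -> 2-D DataFrame, accepted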