Can anyone explain StandardScaler to me?

Question: Can anyone explain StandardScaler to me?


I am unable to understand the page of the StandardScaler in the documentation of sklearn.

Can anyone explain this to me in simple terms?


Answer 0


The idea behind StandardScaler is that it will transform your data so that its distribution has a mean of 0 and a standard deviation of 1.
In the case of multivariate data, this is done feature-wise (in other words, independently for each column of the data).
Given the distribution of the data, each value in the dataset will have the mean value subtracted and then be divided by the standard deviation of the whole dataset (or feature in the multivariate case).
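A minimal sketch of that idea (the numbers below are made up for illustration): the output of fit_transform matches subtracting each column's mean and dividing by each column's (population) standard deviation.

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# StandardScaler result
scaled = StandardScaler().fit_transform(X)

# The same thing done by hand, column by column
manual = (X - X.mean(axis=0)) / X.std(axis=0)

print(np.allclose(scaled, manual))  # True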


Answer 1


Intro: I assume that you have a matrix X where each row/line is a sample/observation and each column is a variable/feature (this is the expected input for any sklearn ML function by the way — X.shape should be [number_of_samples, number_of_features]).


Core of method: The main idea is to normalize/standardize (i.e. μ = 0 and σ = 1) your features/variables/columns of X, individually, before applying any machine learning model.

StandardScaler() will normalize the features (i.e. each column of X) INDIVIDUALLY, so that each column/feature/variable will have μ = 0 and σ = 1.


P.S.: I find the most upvoted answer on this page wrong. I am quoting: "each value in the dataset will have the mean value subtracted and then be divided by the standard deviation of the whole dataset". That is not correct: the mean and standard deviation are computed per column/feature, not over the whole dataset.


See also: How and why to Standardize your data: A python tutorial


Example:

from sklearn.preprocessing import StandardScaler
import numpy as np

# 4 samples/observations and 2 variables/features
data = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
scaler = StandardScaler()
scaled_data = scaler.fit_transform(data)

print(data)
[[0 0]
 [1 0]
 [0 1]
 [1 1]]

print(scaled_data)
[[-1. -1.]
 [ 1. -1.]
 [-1.  1.]
 [ 1.  1.]]

Verify that the mean of each feature (column) is 0:

scaled_data.mean(axis = 0)
array([0., 0.])

Verify that the std of each feature (column) is 1:

scaled_data.std(axis = 0)
array([1., 1.])

The maths: for each column, z = (x - μ) / σ, where μ is the mean and σ is the (population) standard deviation of that column.


UPDATE 08/2020: Concerning the input parameters with_mean and with_std set to False/True, I have provided an answer here: StandardScaler difference between “with_std=False or True” and “with_mean=False or True”
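For reference, a small sketch of what those two flags do (on the same toy data as above); see the linked answer for details:

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# Default: subtract the column mean, then divide by the column std
print(StandardScaler().fit_transform(X))

# with_mean=False: skip the centring, only divide by the column std
print(StandardScaler(with_mean=False).fit_transform(X))

# with_std=False: skip the scaling, only subtract the column mean
print(StandardScaler(with_std=False).fit_transform(X))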



Answer 3


StandardScaler performs the task of standardization. Usually a dataset contains variables that differ in scale. For example, an Employee dataset will contain an AGE column with values on the scale 20-70 and a SALARY column with values on the scale 10000-80000.
Because these two columns differ in scale, they are standardized to a common scale when building a machine learning model.
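As a toy illustration of that (the numbers are invented, not from any real dataset):

import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical employee data with two very different scales
employees = pd.DataFrame({
    'AGE':    [25, 32, 47, 58, 63],
    'SALARY': [12000, 28000, 45000, 61000, 79000],
})

scaled = StandardScaler().fit_transform(employees)

# Both columns now live on the same scale (mean 0, std 1)
print(pd.DataFrame(scaled, columns=employees.columns))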


Answer 4


This is useful when you want to compare data that correspond to different units. In that case, you want to remove the units. To do that consistently across all the data, you transform the data in such a way that the variance is unitary and the mean of the series is 0.
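A quick sketch of the "removing the units" point (toy numbers): the same measurements expressed in metres and in centimetres standardize to exactly the same values.

import numpy as np
from sklearn.preprocessing import StandardScaler

# The same hypothetical lengths, in metres and in centimetres
metres = np.array([[1.5], [1.8], [1.6], [1.9]])
centimetres = metres * 100

scaler = StandardScaler()
print(np.allclose(scaler.fit_transform(metres),
                  scaler.fit_transform(centimetres)))  # True: the unit is gone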


Answer 5


The answers above are great, but I needed a simple example to alleviate some concerns that I have had in the past. I wanted to make sure it was indeed treating each column separately. I am now reassured, and I can no longer find the example that had caused me concern. All columns ARE scaled separately, as described above.

CODE

import pandas as pd
import scipy.stats as ss
from sklearn.preprocessing import StandardScaler


# 4 samples/observations and 5 variables/features on very different scales
data = [[1, 1, 1, 1, 1], [2, 5, 10, 50, 100], [3, 10, 20, 150, 200], [4, 15, 40, 200, 300]]

df = pd.DataFrame(data, columns=['N0', 'N1', 'N2', 'N3', 'N4']).astype('float64')

sc_X = StandardScaler()
df = sc_X.fit_transform(df)  # returns a NumPy array, scaled column by column

# describe each scaled column: each should have mean ~0 and sample variance 4/3 (see the NOTE below)
num_cols = len(df[0, :])
for i in range(num_cols):
    col = df[:, i]
    col_stats = ss.describe(col)
    print(col_stats)

OUTPUT

DescribeResult(nobs=4, minmax=(-1.3416407864998738, 1.3416407864998738), mean=0.0, variance=1.3333333333333333, skewness=0.0, kurtosis=-1.3599999999999999)
DescribeResult(nobs=4, minmax=(-1.2828087129930659, 1.3778315806221817), mean=-5.551115123125783e-17, variance=1.3333333333333337, skewness=0.11003776770595125, kurtosis=-1.394993095506219)
DescribeResult(nobs=4, minmax=(-1.155344148338584, 1.53471088361394), mean=0.0, variance=1.3333333333333333, skewness=0.48089217736510326, kurtosis=-1.1471008824318165)
DescribeResult(nobs=4, minmax=(-1.2604572012883055, 1.2668071116222517), mean=-5.551115123125783e-17, variance=1.3333333333333333, skewness=0.0056842140599118185, kurtosis=-1.6438177182479734)
DescribeResult(nobs=4, minmax=(-1.338945389819976, 1.3434309690153527), mean=5.551115123125783e-17, variance=1.3333333333333333, skewness=0.005374558840039456, kurtosis=-1.3619131970819205)

NOTE:

The scipy.stats module is correctly reporting the “sample” variance, which uses (n – 1) in the denominator. The “population” variance would use n in the denominator for the calculation of variance. To understand better, please see the code below that uses scaled data from the first column of the data set above:

Code

import scipy.stats as ss

sc_Data = [[-1.34164079], [-0.4472136], [0.4472136], [1.34164079]]
col_stats = ss.describe([-1.34164079, -0.4472136, 0.4472136, 1.34164079])
print(col_stats)
print()

mean_by_hand = 0
for row in sc_Data:
    for element in row:
        mean_by_hand += element
mean_by_hand /= 4

variance_by_hand = 0
for row in sc_Data:
    for element in row:
        variance_by_hand += (mean_by_hand - element)**2
sample_variance_by_hand = variance_by_hand / 3
sample_std_dev_by_hand = sample_variance_by_hand ** 0.5

pop_variance_by_hand = variance_by_hand / 4
pop_std_dev_by_hand = pop_variance_by_hand ** 0.5

print("Sample of Population Calcs:")
print(mean_by_hand, sample_variance_by_hand, sample_std_dev_by_hand, '\n')
print("Population Calcs:")
print(mean_by_hand, pop_variance_by_hand, pop_std_dev_by_hand)

Output

DescribeResult(nobs=4, minmax=(-1.34164079, 1.34164079), mean=0.0, variance=1.3333333422778562, skewness=0.0, kurtosis=-1.36000000429325)

Sample of Population Calcs:
0.0 1.3333333422778562 1.1547005422523435

Population Calcs:
0.0 1.000000006708392 1.000000003354196

Answer 6


Following is a simple working example to explain how the standardization calculation works. The theory part is already well explained in other answers.

>>> import numpy as np
>>> data = [[6, 2], [4, 2], [6, 4], [8, 2]]
>>> a = np.array(data)

>>> np.std(a, axis=0)
array([1.41421356, 0.8660254 ])

>>> np.mean(a, axis=0)
array([6. , 2.5])

>>> from sklearn.preprocessing import StandardScaler
>>> scaler = StandardScaler()
>>> scaler.fit(data)
>>> print(scaler.mean_)

# Xchanged = (X - μ) / σ, where σ is the standard deviation and μ is the mean
>>> z = scaler.transform(data)
>>> z

Calculation

As you can see in the output, the mean is [6., 2.5] and the std deviation is [1.41421356, 0.8660254].

The data in position (0, 1) is 2. Standardization: (2 - 2.5) / 0.8660254 = -0.57735027

The data in position (1, 0) is 4. Standardization: (4 - 6) / 1.41421356 = -1.414

Result After Standardization

Check Mean and Std Deviation After Standardization

Note: -2.77555756e-17 is very close to 0.
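To reproduce the result after standardization and the mean/std check shown above:

import numpy as np
from sklearn.preprocessing import StandardScaler

data = [[6, 2], [4, 2], [6, 4], [8, 2]]
z = StandardScaler().fit_transform(data)

print(z)
# [[ 0.         -0.57735027]
#  [-1.41421356 -0.57735027]
#  [ 0.          1.73205081]
#  [ 1.41421356 -0.57735027]]

print(z.mean(axis=0))  # tiny values such as -2.77555756e-17 are floating-point noise around 0
print(z.std(axis=0))   # [1. 1.]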

References

  1. Compare the effect of different scalers on data with outliers

  2. What’s the difference between Normalization and Standardization?

  3. Mean of data scaled with sklearn StandardScaler is not zero


Answer 7


After applying StandardScaler(), each column in X will have a mean of 0 and a standard deviation of 1.

Formulas are listed by others on this page.

Rationale: some algorithms require data to look like this (see sklearn docs).
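As an illustration of that rationale (a minimal sketch, not taken from the sklearn docs): scale-sensitive estimators such as SVMs or k-NN are commonly combined with StandardScaler in a pipeline, so the scaling is learned from the training data only.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

model = make_pipeline(StandardScaler(), SVC())
# model.fit(X_train, y_train) and model.predict(X_test) then work as usual,
# with X_train/X_test standing in for your own data splits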