Tag Archives: statistics

Multiple linear regression in Python

Question: Multiple linear regression in Python

I can’t seem to find any python libraries that do multiple regression. The only things I find only do simple regression. I need to regress my dependent variable (y) against several independent variables (x1, x2, x3, etc.).

For example, with this data:

print 'y        x1      x2       x3       x4      x5     x6       x7'
for t in texts:
    print "{:>7.1f}{:>10.2f}{:>9.2f}{:>9.2f}{:>10.2f}{:>7.2f}{:>7.2f}{:>9.2f}" /
   .format(t.y,t.x1,t.x2,t.x3,t.x4,t.x5,t.x6,t.x7)

(output for above:)

      y        x1       x2       x3        x4     x5     x6       x7
   -6.0     -4.95    -5.87    -0.76     14.73   4.02   0.20     0.45
   -5.0     -4.55    -4.52    -0.71     13.74   4.47   0.16     0.50
  -10.0    -10.96   -11.64    -0.98     15.49   4.18   0.19     0.53
   -5.0     -1.08    -3.36     0.75     24.72   4.96   0.16     0.60
   -8.0     -6.52    -7.45    -0.86     16.59   4.29   0.10     0.48
   -3.0     -0.81    -2.36    -0.50     22.44   4.81   0.15     0.53
   -6.0     -7.01    -7.33    -0.33     13.93   4.32   0.21     0.50
   -8.0     -4.46    -7.65    -0.94     11.40   4.43   0.16     0.49
   -8.0    -11.54   -10.03    -1.03     18.18   4.28   0.21     0.55

How would I regress these in python, to get the linear regression formula:

Y = a1x1 + a2x2 + a3x3 + a4x4 + a5x5 + a6x6 + a7x7 + c


Answer 0

sklearn.linear_model.LinearRegression will do it:

from sklearn import linear_model
clf = linear_model.LinearRegression()
clf.fit([[getattr(t, 'x%d' % i) for i in range(1, 8)] for t in texts],
        [t.y for t in texts])

Then clf.coef_ will have the regression coefficients.

sklearn.linear_model also has similar interfaces to do various kinds of regularizations on the regression.
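For completeness, a small hedged sketch of how the fitted model above maps back onto the formula in the question (coef_, intercept_ and predict are standard LinearRegression attributes; texts is the questioner's data structure):

a = clf.coef_        # a1 ... a7, in the order the columns were supplied
c = clf.intercept_   # the constant term c

# predicted y for the first observation, i.e. a1*x1 + ... + a7*x7 + c
first_row = [getattr(texts[0], 'x%d' % i) for i in range(1, 8)]
print(clf.predict([first_row]))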


Answer 1

Here is a little workaround that I created. I checked it against R and it works correctly.

import numpy as np
import statsmodels.api as sm

y = [1,2,3,4,3,4,5,4,5,5,4,5,4,5,4,5,6,5,4,5,4,3,4]

x = [
     [4,2,3,4,5,4,5,6,7,4,8,9,8,8,6,6,5,5,5,5,5,5,5],
     [4,1,2,3,4,5,6,7,5,8,7,8,7,8,7,8,7,7,7,7,7,6,5],
     [4,1,2,5,6,7,8,9,7,8,7,8,7,7,7,7,7,7,6,6,4,4,4]
     ]

def reg_m(y, x):
    ones = np.ones(len(x[0]))
    X = sm.add_constant(np.column_stack((x[0], ones)))
    for ele in x[1:]:
        X = sm.add_constant(np.column_stack((ele, X)))
    results = sm.OLS(y, X).fit()
    return results

Result:

print reg_m(y, x).summary()

Output:

                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.535
Model:                            OLS   Adj. R-squared:                  0.461
Method:                 Least Squares   F-statistic:                     7.281
Date:                Tue, 19 Feb 2013   Prob (F-statistic):            0.00191
Time:                        21:51:28   Log-Likelihood:                -26.025
No. Observations:                  23   AIC:                             60.05
Df Residuals:                      19   BIC:                             64.59
Df Model:                           3                                         
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1             0.2424      0.139      1.739      0.098        -0.049     0.534
x2             0.2360      0.149      1.587      0.129        -0.075     0.547
x3            -0.0618      0.145     -0.427      0.674        -0.365     0.241
const          1.5704      0.633      2.481      0.023         0.245     2.895

==============================================================================
Omnibus:                        6.904   Durbin-Watson:                   1.905
Prob(Omnibus):                  0.032   Jarque-Bera (JB):                4.708
Skew:                          -0.849   Prob(JB):                       0.0950
Kurtosis:                       4.426   Cond. No.                         38.6

pandas provides a convenient way to run OLS as given in this answer:

Run an OLS regression with Pandas Data Frame
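One caveat, as a hedged note: because reg_m prepends each new predictor on the left, the x1/x2/x3 labels in the summary refer to the input lists in reverse order. An equivalent sketch that keeps the original column order, using only documented statsmodels calls, might look like:

import numpy as np
import statsmodels.api as sm

X = sm.add_constant(np.column_stack(x))   # columns in the same order as the lists in x
results = sm.OLS(y, X).fit()
print(results.summary())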


Answer 2

Just to clarify, the example you gave is multiple linear regression, not multivariate linear regression. The difference:

The very simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression, also known as multivariable linear regression. Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term multivariate linear regression refers to cases where y is a vector, i.e., the same as general linear regression. The difference between multivariate linear regression and multivariable linear regression should be emphasized as it causes much confusion and misunderstanding in the literature.

In short:

  • multiple linear regression: the response y is a scalar.
  • multivariate linear regression: the response y is a vector.

(Another source.)


Answer 3

You can use numpy.linalg.lstsq:

import numpy as np

y = np.array([-6, -5, -10, -5, -8, -3, -6, -8, -8])
X = np.array(
    [
        [-4.95, -4.55, -10.96, -1.08, -6.52, -0.81, -7.01, -4.46, -11.54],
        [-5.87, -4.52, -11.64, -3.36, -7.45, -2.36, -7.33, -7.65, -10.03],
        [-0.76, -0.71, -0.98, 0.75, -0.86, -0.50, -0.33, -0.94, -1.03],
        [14.73, 13.74, 15.49, 24.72, 16.59, 22.44, 13.93, 11.40, 18.18],
        [4.02, 4.47, 4.18, 4.96, 4.29, 4.81, 4.32, 4.43, 4.28],
        [0.20, 0.16, 0.19, 0.16, 0.10, 0.15, 0.21, 0.16, 0.21],
        [0.45, 0.50, 0.53, 0.60, 0.48, 0.53, 0.50, 0.49, 0.55],
    ]
)
X = X.T  # transpose so input vectors are along the rows
X = np.c_[X, np.ones(X.shape[0])]  # add bias term
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_hat)

Result:

[ -0.49104607   0.83271938   0.0860167    0.1326091    6.85681762  22.98163883 -41.08437805 -19.08085066]

You can see the estimated output with:

print(np.dot(X,beta_hat))

Result:

[ -5.97751163,  -5.06465759, -10.16873217,  -4.96959788,  -7.96356915,  -3.06176313,  -6.01818435,  -7.90878145,  -7.86720264]
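Relating this back to the formula in the question: because the column of ones was appended last, the first seven entries of beta_hat are a1 through a7 and the final entry is the intercept c. A brief sketch:

*coeffs, intercept = beta_hat
for i, a in enumerate(coeffs, start=1):
    print("a%d = %.4f" % (i, a))
print("c  = %.4f" % intercept)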

Answer 4

Use scipy.optimize.curve_fit. And it is not limited to linear fits.

from scipy.optimize import curve_fit
import scipy

def fn(x, a, b, c):
    return a + b*x[0] + c*x[1]

# y(x0,x1) data:
#    x0=0 1 2
# ___________
# x1=0 |0 1 2
# x1=1 |1 2 3
# x1=2 |2 3 4

x = scipy.array([[0,1,2,0,1,2,0,1,2,],[0,0,0,1,1,1,2,2,2]])
y = scipy.array([0,1,2,1,2,3,2,3,4])
popt, pcov = curve_fit(fn, x, y)
print popt
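As a hedged usage note: popt holds the fitted parameters in the order the model function declares them, and pcov is their covariance matrix (this is the documented curve_fit return convention):

import numpy as np

a, b, c = popt                   # for this toy data roughly a = 0, b = 1, c = 1
perr = np.sqrt(np.diag(pcov))    # one-standard-deviation estimates of the parameter errors
print(fn(x, *popt))              # fitted values at the sample points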

Answer 5

Once you convert your data to a pandas dataframe (df),

import statsmodels.formula.api as smf
lm = smf.ols(formula='y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7', data=df).fit()
print(lm.params)

The intercept term is included by default.

See this notebook for more examples.
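A hedged sketch of how such a df could be assembled from the questioner's texts objects; the column names y, x1 ... x7 are assumed so that they match the formula string:

import pandas as pd

df = pd.DataFrame([{'y': t.y, 'x1': t.x1, 'x2': t.x2, 'x3': t.x3,
                    'x4': t.x4, 'x5': t.x5, 'x6': t.x6, 'x7': t.x7}
                   for t in texts])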


Answer 6

I think this may be the easiest way to get this done:

from random import random
from pandas import DataFrame
from statsmodels.api import OLS
lr = lambda : [random() for i in range(100)]
x = DataFrame({'x1': lr(), 'x2':lr(), 'x3':lr()})
x['b'] = 1
y = x.x1 + x.x2 * 2 + x.x3 * 3 + 4

print x.head()

         x1        x2        x3  b
0  0.433681  0.946723  0.103422  1
1  0.400423  0.527179  0.131674  1
2  0.992441  0.900678  0.360140  1
3  0.413757  0.099319  0.825181  1
4  0.796491  0.862593  0.193554  1

print y.head()

0    6.637392
1    5.849802
2    7.874218
3    7.087938
4    7.102337
dtype: float64

model = OLS(y, x)
result = model.fit()
print result.summary()

                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       1.000
Model:                            OLS   Adj. R-squared:                  1.000
Method:                 Least Squares   F-statistic:                 5.859e+30
Date:                Wed, 09 Dec 2015   Prob (F-statistic):               0.00
Time:                        15:17:32   Log-Likelihood:                 3224.9
No. Observations:                 100   AIC:                            -6442.
Df Residuals:                      96   BIC:                            -6431.
Df Model:                           3                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1             1.0000   8.98e-16   1.11e+15      0.000         1.000     1.000
x2             2.0000   8.28e-16   2.41e+15      0.000         2.000     2.000
x3             3.0000   8.34e-16    3.6e+15      0.000         3.000     3.000
b              4.0000   8.51e-16    4.7e+15      0.000         4.000     4.000
==============================================================================
Omnibus:                        7.675   Durbin-Watson:                   1.614
Prob(Omnibus):                  0.022   Jarque-Bera (JB):                3.118
Skew:                           0.045   Prob(JB):                        0.210
Kurtosis:                       2.140   Cond. No.                         6.89
==============================================================================

Answer 7

Multiple Linear Regression can be handled using the sklearn library as referenced above. I’m using the Anaconda install of Python 3.6.

Create your model as follows:

from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X, y)

# display coefficients
print(regressor.coef_)

Answer 8

You can use numpy.linalg.lstsq.


Answer 9

You can use the function below and pass it a DataFrame:

def linear(x, y=None, show=True):
    """
    @param x: pd.DataFrame
    @param y: pd.DataFrame or pd.Series or None
              if None, then use last column of x as y
    @param show: whether to print the regression summary
    """
    import pandas as pd
    import statsmodels.api as sm

    xy = sm.add_constant(x if y is None else pd.concat([x, y], axis=1))
    res = sm.OLS(xy.iloc[:, -1], xy.iloc[:, :-1], missing='drop').fit()

    if show:
        print(res.summary())
    return res

Answer 10

Scikit-learn is a machine learning library for Python which can do this job for you. Just import the sklearn.linear_model module into your script.

Here is a code template for multiple linear regression using sklearn in Python:

import numpy as np
import matplotlib.pyplot as plt #to plot visualizations
import pandas as pd

# Importing the dataset
df = pd.read_csv(<Your-dataset-path>)
# Assigning feature and target variables
X = df.iloc[:,:-1]
y = df.iloc[:,-1]

# Use label encoders, if you have any categorical variable
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
X['<column-name>'] = labelencoder.fit_transform(X['<column-name>'])

from sklearn.preprocessing import OneHotEncoder
onehotencoder = OneHotEncoder(categorical_features = ['<index-value>'])
X = onehotencoder.fit_transform(X).toarray()

# Avoiding the dummy variable trap
X = X[:,1:] # Usually done by the algorithm itself

# Splitting the data into test and train sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state = 0, test_size = 0.2)

# Fitting the model
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)

# Predicting the test set results
y_pred = regressor.predict(X_test)

That’s it. You can use this code as a template for implementing multiple linear regression on any dataset. For a better understanding with an example, visit: Linear Regression with an example


Answer 11

Here is an alternative and basic method:

from patsy import dmatrices
import statsmodels.api as sm

y,x = dmatrices("y_data ~ x_1 + x_2 ", data = my_data)
### y_data is the name of the dependent variable in your data ### 
model_fit = sm.OLS(y,x)
results = model_fit.fit()
print(results.summary())

Instead of sm.OLS, you can also use sm.Logit, sm.Probit, and so on.


Find the most frequent number in a numpy vector

Question: Find the most frequent number in a numpy vector

Suppose I have the following list in python:

a = [1,2,3,1,2,1,1,1,3,2,2,1]

How to find the most frequent number in this list in a neat way?


Answer 0

If your list contains all non-negative ints, you should take a look at numpy.bincount:

http://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html

and then probably use np.argmax:

a = np.array([1,2,3,1,2,1,1,1,3,2,2,1])
counts = np.bincount(a)
print(np.argmax(counts))

For a more complicated list (that perhaps contains negative numbers or non-integer values), you can use np.histogram in a similar way. Alternatively, if you just want to work in python without using numpy, collections.Counter is a good way of handling this sort of data.

from collections import Counter
a = [1,2,3,1,2,1,1,1,3,2,2,1]
b = Counter(a)
print(b.most_common(1))

Answer 1

You may use

values, counts = np.unique(a, return_counts=True)
ind = np.argmax(counts)
print(values[ind])  # prints the most frequent element

If some element is as frequent as another one, this code will return only the first element.


Answer 2

If you’re willing to use SciPy:

>>> from scipy.stats import mode
>>> mode([1,2,3,1,2,1,1,1,3,2,2,1])
(array([ 1.]), array([ 6.]))
>>> most_frequent = mode([1,2,3,1,2,1,1,1,3,2,2,1])[0][0]
>>> most_frequent
1.0

Answer 3

Performances (using iPython) for some solutions found here:

>>> # small array
>>> a = [12,3,65,33,12,3,123,888000]
>>> 
>>> import collections
>>> collections.Counter(a).most_common()[0][0]
3
>>> %timeit collections.Counter(a).most_common()[0][0]
100000 loops, best of 3: 11.3 µs per loop
>>> 
>>> import numpy
>>> numpy.bincount(a).argmax()
3
>>> %timeit numpy.bincount(a).argmax()
100 loops, best of 3: 2.84 ms per loop
>>> 
>>> import scipy.stats
>>> scipy.stats.mode(a)[0][0]
3.0
>>> %timeit scipy.stats.mode(a)[0][0]
10000 loops, best of 3: 172 µs per loop
>>> 
>>> from collections import defaultdict
>>> def jjc(l):
...     d = defaultdict(int)
...     for i in a:
...         d[i] += 1
...     return sorted(d.iteritems(), key=lambda x: x[1], reverse=True)[0]
... 
>>> jjc(a)[0]
3
>>> %timeit jjc(a)[0]
100000 loops, best of 3: 5.58 µs per loop
>>> 
>>> max(map(lambda val: (a.count(val), val), set(a)))[1]
12
>>> %timeit max(map(lambda val: (a.count(val), val), set(a)))[1]
100000 loops, best of 3: 4.11 µs per loop
>>> 

Best is ‘max’ with ‘set’ for small arrays like the problem.

According to @David Sanders, if you increase the array size to something like 100,000 elements, the “max w/set” algorithm ends up being the worst by far whereas the “numpy bincount” method is the best.


Answer 4

Also, if you want to get the most frequent value (positive or negative) without loading any modules, you can use the following code:

lVals = [1,2,3,1,2,1,1,1,3,2,2,1]
print max(map(lambda val: (lVals.count(val), val), set(lVals)))

Answer 5

While most of the answers above are useful, in case you: 1) need it to support non-positive-integer values (e.g. floats or negative integers ;-)), and 2) aren’t on Python 2.7 (which collections.Counter requires), and 3) prefer not to add the dependency of scipy (or even numpy) to your code, then a purely python 2.6 solution that is O(nlogn) (i.e., efficient) is just this:

from collections import defaultdict

a = [1,2,3,1,2,1,1,1,3,2,2,1]

d = defaultdict(int)
for i in a:
  d[i] += 1
most_frequent = sorted(d.iteritems(), key=lambda x: x[1], reverse=True)[0]

Answer 6

I like the solution by JoshAdel.

But there is just one catch.

The np.bincount() solution only works on numbers.

If you have strings, the collections.Counter solution will work for you.


Answer 7

Expanding on this method, here it is applied to finding the mode of the data, where you may need the index within the actual array to see how far the value is from the center of the distribution.

(_, idx, counts) = np.unique(a, return_index=True, return_counts=True)
index = idx[np.argmax(counts)]
mode = a[index]

Remember to discard the mode when more than one value shares the maximum count (np.argmax only reports the first such value).


Answer 8

In Python 3 the following should work:

max(set(a), key=lambda x: a.count(x))

Answer 9

Starting in Python 3.4, the standard library includes the statistics.mode function to return the single most common data point.

from statistics import mode

mode([1, 2, 3, 1, 2, 1, 1, 1, 3, 2, 2, 1])
# 1

If there are multiple modes with the same frequency, statistics.mode returns the first one encountered.


Starting in Python 3.8, the statistics.multimode function returns a list of the most frequently occurring values in the order they were first encountered:

from statistics import multimode

multimode([1, 2, 3, 1, 2])
# [1, 2]

Answer 10

Here is a general solution that may be applied along an axis, regardless of values, using purely numpy. I’ve also found that this is much faster than scipy.stats.mode if there are a lot of unique values.

import numpy

def mode(ndarray, axis=0):
    # Check inputs
    ndarray = numpy.asarray(ndarray)
    ndim = ndarray.ndim
    if ndarray.size == 1:
        return (ndarray[0], 1)
    elif ndarray.size == 0:
        raise Exception('Cannot compute mode on empty array')
    try:
        axis = range(ndarray.ndim)[axis]
    except:
        raise Exception('Axis "{}" incompatible with the {}-dimension array'.format(axis, ndim))

    # If array is 1-D and numpy version is > 1.9 numpy.unique will suffice
    if all([ndim == 1,
            int(numpy.__version__.split('.')[0]) >= 1,
            int(numpy.__version__.split('.')[1]) >= 9]):
        modals, counts = numpy.unique(ndarray, return_counts=True)
        index = numpy.argmax(counts)
        return modals[index], counts[index]

    # Sort array
    sort = numpy.sort(ndarray, axis=axis)
    # Create array to transpose along the axis and get padding shape
    transpose = numpy.roll(numpy.arange(ndim)[::-1], axis)
    shape = list(sort.shape)
    shape[axis] = 1
    # Create a boolean array along strides of unique values
    strides = numpy.concatenate([numpy.zeros(shape=shape, dtype='bool'),
                                 numpy.diff(sort, axis=axis) == 0,
                                 numpy.zeros(shape=shape, dtype='bool')],
                                axis=axis).transpose(transpose).ravel()
    # Count the stride lengths
    counts = numpy.cumsum(strides)
    counts[~strides] = numpy.concatenate([[0], numpy.diff(counts[~strides])])
    counts[strides] = 0
    # Get shape of padded counts and slice to return to the original shape
    shape = numpy.array(sort.shape)
    shape[axis] += 1
    shape = shape[transpose]
    slices = [slice(None)] * ndim
    slices[axis] = slice(1, None)
    # Reshape and compute final counts
    counts = counts.reshape(shape).transpose(transpose)[slices] + 1

    # Find maximum counts and return modals/counts
    slices = [slice(None, i) for i in sort.shape]
    del slices[axis]
    index = numpy.ogrid[slices]
    index.insert(axis, numpy.argmax(counts, axis=axis))
    return sort[index], counts[index]

Answer 11

I was recently working on a project and using collections.Counter (which tortured me).

In my opinion, Counter in collections has very, very bad performance. It is just a class wrapping dict().

What's worse, if you use cProfile to profile its methods, you will see a lot of '__missing__' and '__instancecheck__' calls wasting the whole time.

Be careful using its most_common(): every call invokes a sort, which makes it extremely slow, and if you use most_common(x) it will invoke a heap sort, which is also slow.

By the way, numpy's bincount also has a problem: if you use np.bincount([1,2,4000000]), you will get an array with 4,000,001 elements.


Statistics: combinations in Python

Question: Statistics: combinations in Python

I need to compute combinatorials (nCr) in Python but cannot find the function to do that in math, numpy or stat libraries. Something like a function of the type:

comb = calculate_combinations(n, r)

I need the number of possible combinations, not the actual combinations, so itertools.combinations does not interest me.

Finally, I want to avoid using factorials, as the numbers I’ll be calculating the combinations for can get too big and the factorials are going to be monstrous.

This seems like a REALLY easy question to answer; however, I am being drowned in questions about generating all the actual combinations, which is not what I want.


Answer 0

See scipy.special.comb (scipy.misc.comb in older versions of scipy). When exact is False, it uses the gammaln function to obtain good precision without taking much time. In the exact case it returns an arbitrary-precision integer, which might take a long time to compute.
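A small usage sketch (exact is a documented keyword of scipy.special.comb; 120 is the standard value of C(10, 3)):

from scipy.special import comb

print(comb(10, 3))               # 120.0, floating point via the gamma function
print(comb(10, 3, exact=True))   # 120, an arbitrary-precision integer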


Answer 1

Why not write it yourself? It's more or less a one-liner:

from operator import mul    # or mul=lambda x,y:x*y
from fractions import Fraction

def nCk(n,k): 
  return int( reduce(mul, (Fraction(n-i, i+1) for i in range(k)), 1) )

Test – printing Pascal’s triangle:

>>> for n in range(17):
...     print ' '.join('%5d'%nCk(n,k) for k in range(n+1)).center(100)
...     
                                                   1                                                
                                                1     1                                             
                                             1     2     1                                          
                                          1     3     3     1                                       
                                       1     4     6     4     1                                    
                                    1     5    10    10     5     1                                 
                                 1     6    15    20    15     6     1                              
                              1     7    21    35    35    21     7     1                           
                           1     8    28    56    70    56    28     8     1                        
                        1     9    36    84   126   126    84    36     9     1                     
                     1    10    45   120   210   252   210   120    45    10     1                  
                  1    11    55   165   330   462   462   330   165    55    11     1               
               1    12    66   220   495   792   924   792   495   220    66    12     1            
            1    13    78   286   715  1287  1716  1716  1287   715   286    78    13     1         
         1    14    91   364  1001  2002  3003  3432  3003  2002  1001   364    91    14     1      
      1    15   105   455  1365  3003  5005  6435  6435  5005  3003  1365   455   105    15     1   
    1    16   120   560  1820  4368  8008 11440 12870 11440  8008  4368  1820   560   120    16     1
>>> 

PS. edited to replace int(round(reduce(mul, (float(n-i)/(i+1) for i in range(k)), 1))) with int(reduce(mul, (Fraction(n-i, i+1) for i in range(k)), 1)) so it won’t err for big N/K


Answer 2

A quick search on google code gives (it uses formula from @Mark Byers’s answer):

def choose(n, k):
    """
    A fast way to calculate binomial coefficients by Andrew Dalke (contrib).
    """
    if 0 <= k <= n:
        ntok = 1
        ktok = 1
        for t in xrange(1, min(k, n - k) + 1):
            ntok *= n
            ktok *= t
            n -= 1
        return ntok // ktok
    else:
        return 0

choose() is 10 times faster (tested on all 0 <= (n,k) < 1e3 pairs) than scipy.misc.comb() if you need an exact answer.

def comb(N,k): # from scipy.comb(), but MODIFIED!
    if (k > N) or (N < 0) or (k < 0):
        return 0L
    N,k = map(long,(N,k))
    top = N
    val = 1L
    while (top > (N-k)):
        val *= top
        top -= 1
    n = 1L
    while (n < k+1L):
        val /= n
        n += 1
    return val

Answer 3

If you want exact results and speed, try gmpy. gmpy.comb should do exactly what you ask for, and it's pretty fast (of course, as gmpy's original author, I am biased ;-).


Answer 4

If you want an exact result, use sympy.binomial. It seems to be the fastest method, hands down.

x = 1000000
y = 234050

%timeit scipy.misc.comb(x, y, exact=True)
1 loops, best of 3: 1min 27s per loop

%timeit gmpy.comb(x, y)
1 loops, best of 3: 1.97 s per loop

%timeit int(sympy.binomial(x, y))
100000 loops, best of 3: 5.06 µs per loop

Answer 5

A literal translation of the mathematical definition is quite adequate in a lot of cases (remembering that Python will automatically use big number arithmetic):

from math import factorial

def calculate_combinations(n, r):
    return factorial(n) // factorial(r) // factorial(n-r)

For some inputs I tested (e.g. n=1000, r=500) this was more than 10 times faster than the one-liner reduce suggested in another (currently highest-voted) answer. On the other hand, it is outperformed by the snippet provided by @J.F. Sebastian.


Answer 6

Starting with Python 3.8, the standard library includes the math.comb function to compute the binomial coefficient:

math.comb(n, k)

which is the number of ways to choose k items from n items without repetition,
n! / (k! (n - k)!):

import math
math.comb(10, 5) # 252

Answer 7

Here’s another alternative. This one was originally written in C++, so it can be backported to C++ for a finite-precision integer (e.g. __int64). The advantage is (1) it involves only integer operations, and (2) it avoids bloating the integer value by doing successive pairs of multiplication and division. I’ve tested the result with Nas Banov’s Pascal triangle, it gets the correct answer:

def choose(n,r):
  """Computes n! / (r! (n-r)!) exactly. Returns a python long int."""
  assert n >= 0
  assert 0 <= r <= n

  c = 1L
  denom = 1
  for (num,denom) in zip(xrange(n,n-r,-1), xrange(1,r+1,1)):
    c = (c * num) // denom
  return c

Rationale: To minimize the # of multiplications and divisions, we rewrite the expression as

    n!      n(n-1)...(n-r+1)
--------- = ----------------
 r!(n-r)!          r!

To avoid multiplication overflow as much as possible, we will evaluate in the following STRICT order, from left to right:

n / 1 * (n-1) / 2 * (n-2) / 3 * ... * (n-r+1) / r

We can show that integer arithmatic operated in this order is exact (i.e. no roundoff error).


Answer 8

Using dynamic programming, the time complexity is Θ(n*m) and space complexity Θ(m):

def binomial(n, k):
    """ (int, int) -> int

             | c(n-1, k-1) + c(n-1, k), if 0 < k < n
    c(n,k) = | 1                      , if n = k
             | 1                      , if k = 0

    Precondition: n > k

    >>> binomial(9, 2)
    36
    """
    c = [0] * (n + 1)
    c[0] = 1
    for i in range(1, n + 1):
        c[i] = 1
        j = i - 1
        while j > 0:
            c[j] += c[j - 1]
            j -= 1

    return c[k]

Answer 9

If your program has an upper bound to n (say n <= N) and needs to repeatedly compute nCr (preferably for >>N times), using lru_cache can give you a huge performance boost:

from functools import lru_cache

@lru_cache(maxsize=None)
def nCr(n, r):
    return 1 if r == 0 or r == n else nCr(n - 1, r - 1) + nCr(n - 1, r)

Constructing the cache (which is done implicitly) takes up to O(N^2) time. Any subsequent calls to nCr will return in O(1).
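A quick usage sketch (184756 is the well-known value of C(20, 10); note that the recursion depth grows with n, so very large n may require raising sys.setrecursionlimit):

print(nCr(20, 10))   # 184756, computed recursively and memoized
print(nCr(20, 10))   # the second call is a pure cache hit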


Answer 10

You can write two simple functions that actually turn out to be about 5-8 times faster than using scipy.special.comb. In fact, you don't need to import any extra packages, and the functions are quite easy to read. The trick is to use memoization to store previously computed values, together with the definition of nCr:

# create a memoization dictionary
memo = {}
def factorial(n):
    """
    Calculate the factorial of an input using memoization
    :param n: int
    :rtype value: int
    """
    if n in [1,0]:
        return 1
    if n in memo:
        return memo[n]
    value = n*factorial(n-1)
    memo[n] = value
    return value

def ncr(n, k):
    """
    Choose k elements from a set of n elements - n must be larger than or equal to k
    :param n: int
    :param k: int
    :rtype: int
    """
    return factorial(n)/(factorial(k)*factorial(n-k))

If we compare times

from scipy.special import comb
%timeit comb(100,48)
>>> 100000 loops, best of 3: 6.78 µs per loop

%timeit ncr(100,48)
>>> 1000000 loops, best of 3: 1.39 µs per loop

Answer 11

It’s pretty easy with sympy.

import sympy

comb = sympy.binomial(n, r)

Answer 12

Using only the standard library distributed with Python:

import itertools

def nCk(n, k):
    return len(list(itertools.combinations(range(n), k)))

Answer 13

The direct formula produces big integers when n is bigger than 20.

So, yet another response:

from math import factorial

reduce(long.__mul__, range(n-r+1, n+1), 1L) // factorial(r)

short, accurate and efficient because this avoids python big integers by sticking with longs.

It is more accurate and faster when comparing to scipy.special.comb:

 >>> from scipy.special import comb
 >>> nCr = lambda n,r: reduce(long.__mul__, range(n-r+1, n+1), 1L) // factorial(r)
 >>> comb(128,20)
 1.1965669823265365e+23
 >>> nCr(128,20)
 119656698232656998274400L  # accurate, no loss
 >>> from timeit import timeit
 >>> timeit(lambda: comb(n,r))
 8.231969118118286
 >>> timeit(lambda: nCr(128, 20))
 3.885951042175293
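The snippet above is Python 2 only (long and the 1L literal no longer exist). A hedged Python 3 equivalent of the same expression, using functools.reduce and plain int:

from functools import reduce
from math import factorial
from operator import mul

def nCr(n, r):
    # multiply the top r factors of n!, then divide by r!
    return reduce(mul, range(n - r + 1, n + 1), 1) // factorial(r)

print(nCr(128, 20))  # 119656698232656998274400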

Answer 14

This is @killerT2333's code using the built-in memoization decorator.

from functools import lru_cache

@lru_cache()
def factorial(n):
    """
    Calculate the factorial of an input using memoization
    :param n: int
    :rtype value: int
    """
    return 1 if n in (1, 0) else n * factorial(n-1)

@lru_cache()
def ncr(n, k):
    """
    Choose k elements from a set of n elements,
    n must be greater than or equal to k.
    :param n: int
    :param k: int
    :rtype: int
    """
    return factorial(n) / (factorial(k) * factorial(n - k))

print(ncr(6, 3))

Answer 15

Here is an efficient algorithm for you:

p = 1
for i = 1 ... r:
    p = p * (n - i + 1) / i

print(p)

For example nCr(30,7) = fact(30) / ( fact(7) * fact(23)) = ( 30 * 29 * 28 * 27 * 26 * 25 * 24 ) / (1 * 2 * 3 * 4 * 5 * 6 * 7)

So just running the loop from 1 to r gives the result.
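A minimal runnable sketch of that loop in Python (the integer division is exact at every step, because each partial product is itself a binomial coefficient):

def ncr(n, r):
    r = min(r, n - r)              # use symmetry to keep the loop short
    p = 1
    for i in range(1, r + 1):
        p = p * (n - i + 1) // i   # exact at every step
    return p

print(ncr(30, 7))  # 2035800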


Answer 16

That’s probably as fast as you can do it in pure python for reasonably large inputs:

def choose(n, k):
    if k == n: return 1
    if k > n: return 0
    d, q = max(k, n-k), min(k, n-k)
    num =  1
    for n in xrange(d+1, n+1): num *= n
    denom = 1
    for d in xrange(1, q+1): denom *= d
    return num / denom

Answer 17

This function is highly optimized.

def nCk(n,k):
    m=0
    if k==0:
        m=1
    if k==1:
        m=n
    if k>=2:
        num,dem,op1,op2=1,1,k,n
        while(op1>=1):
            num*=op2
            dem*=op1
            op1-=1
            op2-=1
        m=num//dem
    return m

Frequency table for a single variable

Question: Frequency table for a single variable

One last newbie pandas question for the day: How do I generate a frequency table for a single Series?

For example:

my_series = pandas.Series([1,2,2,3,3,3])
pandas.magical_frequency_function( my_series )

>> {
     1 : 1,
     2 : 2, 
     3 : 3
   }

Lots of googling has led me to Series.describe() and pandas.crosstabs, but neither of these does quite what I need: one variable, counts by categories. Oh, and it’d be nice if it worked for different data types: strings, ints, etc.


Answer 0

Maybe .value_counts()?

>>> import pandas
>>> my_series = pandas.Series([1,2,2,3,3,3, "fred", 1.8, 1.8])
>>> my_series
0       1
1       2
2       2
3       3
4       3
5       3
6    fred
7     1.8
8     1.8
>>> counts = my_series.value_counts()
>>> counts
3       3
2       2
1.8     2
fred    1
1       1
>>> len(counts)
5
>>> sum(counts)
9
>>> counts["fred"]
1
>>> dict(counts)
{1.8: 2, 2: 2, 3: 3, 1: 1, 'fred': 1}
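Two small follow-ups that map directly onto the magical_frequency_function in the question (both normalize=True and .to_dict() are documented pandas APIs; the dict shown is illustrative):

counts = my_series.value_counts()
print(counts.to_dict())                        # e.g. {3: 3, 2: 2, 1.8: 2, 'fred': 1, 1: 1}
print(my_series.value_counts(normalize=True))  # relative frequencies (counts / total)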

Answer 1

You can use list comprehension on a dataframe to count frequencies of the columns as such

[my_series[c].value_counts() for c in list(my_series.select_dtypes(include=['O']).columns)]

Breakdown:

my_series.select_dtypes(include=['O']) 

Selects just the categorical data

list(my_series.select_dtypes(include=['O']).columns) 

Turns the columns from above into a list

[my_series[c].value_counts() for c in list(my_series.select_dtypes(include=['O']).columns)] 

Iterates through the list above and applies value_counts() to each of the columns
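
A self-contained sketch of this idea (the DataFrame df and its columns below are made up purely for illustration):

import pandas as pd

df = pd.DataFrame({
    "color": ["red", "blue", "red", "green", "blue", "red"],
    "size":  ["S", "M", "M", "L", "S", "M"],
    "price": [3.5, 4.0, 3.5, 5.0, 4.0, 3.5],  # numeric column, dtype is not 'O'
})

# one value_counts() Series per object (string) column
freq_tables = [df[c].value_counts() for c in df.select_dtypes(include=['O']).columns]
for table in freq_tables:
    print(table, end="\n\n")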


回答 2

@DSM提供的答案很简单明了,但是我想我将自己的输入添加到该问题中。如果查看pandas.value_counts的代码,将会发现发生了很多事情。

如果您需要计算多个序列的频率,则可能需要一段时间。更快的实现是使用 numpy.unique 并设置 return_counts=True。

这是一个例子:

import pandas as pd
import numpy as np

my_series = pd.Series([1,2,2,3,3,3])

print(my_series.value_counts())
3    3
2    2
1    1
dtype: int64

注意这里返回的项目是pandas.Series

相比之下,numpy.unique返回具有两个项目的元组,即唯一值和计数。

vals, counts = np.unique(my_series, return_counts=True)
print(vals, counts)
[1 2 3] [1 2 3]

然后,您可以将它们组合成字典:

results = dict(zip(vals, counts))
print(results)
{1: 1, 2: 2, 3: 3}

然后变成 pandas.Series

print(pd.Series(results))
1    1
2    2
3    3
dtype: int64

The answer provided by @DSM is simple and straightforward, but I thought I’d add my own input to this question. If you look at the code for pandas.value_counts, you’ll see that there is a lot going on.

If you need to calculate the frequency of many series, this could take a while. A faster implementation would be to use numpy.unique with return_counts = True

Here is an example:

import pandas as pd
import numpy as np

my_series = pd.Series([1,2,2,3,3,3])

print(my_series.value_counts())
3    3
2    2
1    1
dtype: int64

Notice here that the item returned is a pandas.Series

In comparison, numpy.unique returns a tuple with two items, the unique values and the counts.

vals, counts = np.unique(my_series, return_counts=True)
print(vals, counts)
[1 2 3] [1 2 3]

You can then combine these into a dictionary:

results = dict(zip(vals, counts))
print(results)
{1: 1, 2: 2, 3: 3}

And then into a pandas.Series

print(pd.Series(results))
1    1
2    2
3    3
dtype: int64

回答 3

对于取值过多的变量的频率分布,您可以将这些值折叠归并到若干类(区间)中。

在这里,employrate 变量的取值过多,直接用 value_counts(normalize=True) 得到的频率分布没有意义。

                country  employrate alcconsumption
0           Afghanistan   55.700001            .03
1               Albania   11.000000           7.29
2               Algeria   11.000000            .69
3               Andorra         nan          10.17
4                Angola   75.699997           5.57
..                  ...         ...            ...
208             Vietnam   71.000000           3.91
209  West Bank and Gaza   32.000000               
210         Yemen, Rep.   39.000000             .2
211              Zambia   61.000000           3.56
212            Zimbabwe   66.800003           4.96

[213 rows x 3 columns]

未分类时用 value_counts(normalize=True) 得到的频率分布,结果长度为139(作为频率分布似乎没有意义):

print(gm["employrate"].value_counts(sort=False,normalize=True))

50.500000   0.005618
61.500000   0.016854
46.000000   0.011236
64.500000   0.005618
63.500000   0.005618

58.599998   0.005618
63.799999   0.011236
63.200001   0.005618
65.599998   0.005618
68.300003   0.005618
Name: employrate, Length: 139, dtype: float64

进行分类时,我们将所有值都放在一定范围内。

0-10为1
11-20为2  
21-30为3,依此类推。
gm["employrate"]=gm["employrate"].str.strip().dropna()  
gm["employrate"]=pd.to_numeric(gm["employrate"])
gm['employrate'] = np.where(
   (gm['employrate'] <=10) & (gm['employrate'] > 0) , 1, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=20) & (gm['employrate'] > 10) , 1, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=30) & (gm['employrate'] > 20) , 2, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=40) & (gm['employrate'] > 30) , 3, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=50) & (gm['employrate'] > 40) , 4, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=60) & (gm['employrate'] > 50) , 5, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=70) & (gm['employrate'] > 60) , 6, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=80) & (gm['employrate'] > 70) , 7, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=90) & (gm['employrate'] > 80) , 8, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=100) & (gm['employrate'] > 90) , 9, gm['employrate']
   )
print(gm["employrate"].value_counts(sort=False,normalize=True))

分类后,我们得到了清晰的频率分布。在这里可以很容易地看到,37.64% 的国家的就业率介于 51-60% 之间,11.79% 的国家的就业率介于 71-80% 之间。

5.000000   0.376404
7.000000   0.117978
4.000000   0.179775
6.000000   0.264045
8.000000   0.033708
3.000000   0.028090
Name: employrate, dtype: float64

For the frequency distribution of a variable with an excessive number of distinct values, you can collapse the values into classes.

Here the employrate variable has too many distinct values, and its frequency distribution from a direct value_counts(normalize=True) is not meaningful.

                country  employrate alcconsumption
0           Afghanistan   55.700001            .03
1               Albania   11.000000           7.29
2               Algeria   11.000000            .69
3               Andorra         nan          10.17
4                Angola   75.699997           5.57
..                  ...         ...            ...
208             Vietnam   71.000000           3.91
209  West Bank and Gaza   32.000000               
210         Yemen, Rep.   39.000000             .2
211              Zambia   61.000000           3.56
212            Zimbabwe   66.800003           4.96

[213 rows x 3 columns]

frequency distribution with value_counts(normalize=True) and no classification; the length of the result here is 139 (which seems meaningless as a frequency distribution):

print(gm["employrate"].value_counts(sort=False,normalize=True))

50.500000   0.005618
61.500000   0.016854
46.000000   0.011236
64.500000   0.005618
63.500000   0.005618

58.599998   0.005618
63.799999   0.011236
63.200001   0.005618
65.599998   0.005618
68.300003   0.005618
Name: employrate, Length: 139, dtype: float64

To classify, we put all values within a certain range into one class, i.e.

0-10 as 1,
11-20 as 2  
21-30 as 3, and so forth.
gm["employrate"]=gm["employrate"].str.strip().dropna()  
gm["employrate"]=pd.to_numeric(gm["employrate"])
gm['employrate'] = np.where(
   (gm['employrate'] <=10) & (gm['employrate'] > 0) , 1, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=20) & (gm['employrate'] > 10) , 1, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=30) & (gm['employrate'] > 20) , 2, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=40) & (gm['employrate'] > 30) , 3, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=50) & (gm['employrate'] > 40) , 4, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=60) & (gm['employrate'] > 50) , 5, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=70) & (gm['employrate'] > 60) , 6, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=80) & (gm['employrate'] > 70) , 7, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=90) & (gm['employrate'] > 80) , 8, gm['employrate']
   )
gm['employrate'] = np.where(
   (gm['employrate'] <=100) & (gm['employrate'] > 90) , 9, gm['employrate']
   )
print(gm["employrate"].value_counts(sort=False,normalize=True))

After classification we have a clear frequency distribution. Here we can easily see that 37.64% of countries have an employment rate between 51-60% and 11.79% of countries have an employment rate between 71-80%.

5.000000   0.376404
7.000000   0.117978
4.000000   0.179775
6.000000   0.264045
8.000000   0.033708
3.000000   0.028090
Name: employrate, dtype: float64
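
As an aside (not part of the original answer), the same kind of binning can be written more compactly with pandas.cut; the sample values below are taken from the table above, and the bin edges mirror the 0-10, 10-20, … ranges:

import numpy as np
import pandas as pd

employrate = pd.Series([55.7, 11.0, 11.0, np.nan, 75.7, 71.0, 32.0, 39.0, 61.0, 66.8])

bins = range(0, 101, 10)  # 0-10, 10-20, ..., 90-100
binned = pd.cut(employrate, bins=bins)
print(binned.value_counts(sort=False, normalize=True))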

根据样本数据计算置信区间

问题:根据样本数据计算置信区间

我有一些样本数据,假设正态分布,我希望为它们计算一个置信区间。

我已经找到并安装了numpy和scipy软件包,并获得了numpy以返回均值和标准差(numpy.mean(data),其中data为列表)。任何关于获得样本置信区间的建议将不胜感激。

I have sample data which I would like to compute a confidence interval for, assuming a normal distribution.

I have found and installed the numpy and scipy packages and have gotten numpy to return a mean and standard deviation (numpy.mean(data) with data being a list). Any advice on getting a sample confidence interval would be much appreciated.


回答 0

import numpy as np
import scipy.stats


def mean_confidence_interval(data, confidence=0.95):
    a = 1.0 * np.array(data)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n-1)
    return m, m-h, m+h

你可以这样计算

import numpy as np
import scipy.stats


def mean_confidence_interval(data, confidence=0.95):
    a = 1.0 * np.array(data)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n-1)
    return m, m-h, m+h

you can calculate like this way.
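
For example (illustrative data, calling the function defined above):

data = [2.3, 1.9, 2.8, 2.5, 2.1, 2.6, 2.4]
mean, lower, upper = mean_confidence_interval(data, confidence=0.95)
print(mean, lower, upper)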


回答 1

这是 shasan 代码的简化版本,用于计算数组 a 的均值的95%置信区间:

import numpy as np, scipy.stats as st

st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a))

但是使用StatsModels tconfint_mean可以说是更好的选择:

import statsmodels.stats.api as sms

sms.DescrStatsW(a).tconfint_mean()

两者的基本假设是,样本(数组 a)是从一个标准差未知的正态分布中独立抽取的(请参阅 MathWorld 或 Wikipedia)。

对于大样本量 n,样本均值服从正态分布,可以使用 st.norm.interval()(如 Jaime 的评论中所建议的)计算其置信区间。但是上述解决方案对于较小的 n 也是正确的,而此时 st.norm.interval() 给出的置信区间太窄(即"虚假的置信度")。有关更多详细信息,请参阅我对类似问题的回答(以及此处 Russ 的评论之一)。

这是一个示例,其中正确的选项给出(基本上)相同的置信区间:

In [9]: a = range(10,14)

In [10]: mean_confidence_interval(a)
Out[10]: (11.5, 9.4457397432391215, 13.554260256760879)

In [11]: st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a))
Out[11]: (9.4457397432391215, 13.554260256760879)

In [12]: sms.DescrStatsW(a).tconfint_mean()
Out[12]: (9.4457397432391197, 13.55426025676088)

最后,使用st.norm.interval()以下错误结果:

In [13]: st.norm.interval(0.95, loc=np.mean(a), scale=st.sem(a))
Out[13]: (10.23484868811834, 12.76515131188166)

Here a shortened version of shasan’s code, calculating the 95% confidence interval of the mean of array a:

import numpy as np, scipy.stats as st

st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a))

But using StatsModels’ tconfint_mean is arguably even nicer:

import statsmodels.stats.api as sms

sms.DescrStatsW(a).tconfint_mean()

The underlying assumptions for both are that the sample (array a) was drawn independently from a normal distribution with unknown standard deviation (see MathWorld or Wikipedia).

For large sample size n, the sample mean is normally distributed, and one can calculate its confidence interval using st.norm.interval() (as suggested in Jaime’s comment). But the above solutions are correct also for small n, where st.norm.interval() gives confidence intervals that are too narrow (i.e., “fake confidence”). See my answer to a similar question for more details (and one of Russ’s comments here).

Here an example where the correct options give (essentially) identical confidence intervals:

In [9]: a = range(10,14)

In [10]: mean_confidence_interval(a)
Out[10]: (11.5, 9.4457397432391215, 13.554260256760879)

In [11]: st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a))
Out[11]: (9.4457397432391215, 13.554260256760879)

In [12]: sms.DescrStatsW(a).tconfint_mean()
Out[12]: (9.4457397432391197, 13.55426025676088)

And finally, the incorrect result using st.norm.interval():

In [13]: st.norm.interval(0.95, loc=np.mean(a), scale=st.sem(a))
Out[13]: (10.23484868811834, 12.76515131188166)

回答 2

首先从查找表中查找所需置信水平对应的 z 值。置信区间为 mean +/- z*sigma,其中 sigma 是样本均值的估计标准差,由 sigma = s / sqrt(n) 给出,这里 s 是从样本数据计算出的标准差,n 是样本量。

Start with looking up the z-value for your desired confidence interval from a look-up table. The confidence interval is then mean +/- z*sigma, where sigma is the estimated standard deviation of your sample mean, given by sigma = s / sqrt(n), where s is the standard deviation computed from your sample data and n is your sample size.
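
A small sketch of this recipe (not part of the original answer), using scipy.stats.norm.ppf to obtain the z-value instead of a look-up table; the sample data is made up:

import numpy as np
from scipy.stats import norm

data = [2.3, 1.9, 2.8, 2.5, 2.1, 2.6, 2.4]  # illustrative sample
confidence = 0.95

m = np.mean(data)
s = np.std(data, ddof=1)             # sample standard deviation
sigma = s / np.sqrt(len(data))       # estimated std. dev. of the sample mean
z = norm.ppf((1 + confidence) / 2)   # two-sided z-value, ~1.96 for 95%

print(m - z * sigma, m + z * sigma)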


回答 3

从 Python 3.8 开始,标准库将 NormalDist 对象作为 statistics 模块的一部分提供:

from statistics import NormalDist

def confidence_interval(data, confidence=0.95):
  dist = NormalDist.from_samples(data)
  z = NormalDist().inv_cdf((1 + confidence) / 2.)
  h = dist.stdev * z / ((len(data) - 1) ** .5)
  return dist.mean - h, dist.mean + h

这个:

  • 从数据样本创建一个 NormalDist 对象(NormalDist.from_samples(data)),使我们可以通过 NormalDist.mean 和 NormalDist.stdev 访问样本的均值和标准差。

  • 使用累积分布函数的反函数(inv_cdf),针对给定的置信度,基于标准正态分布(用 NormalDist() 表示)计算 Z 分数。

  • 根据样本的标准偏差和平均值产生置信区间。


这里假设样本量足够大(比如超过约100个点),以便使用标准正态分布而不是学生 t 分布来计算 z 值。

Starting Python 3.8, the standard library provides the NormalDist object as part of the statistics module:

from statistics import NormalDist

def confidence_interval(data, confidence=0.95):
  dist = NormalDist.from_samples(data)
  z = NormalDist().inv_cdf((1 + confidence) / 2.)
  h = dist.stdev * z / ((len(data) - 1) ** .5)
  return dist.mean - h, dist.mean + h

This:

  • Creates a NormalDist object from the data sample (NormalDist.from_samples(data)), which gives us access to the sample’s mean and standard deviation via NormalDist.mean and NormalDist.stdev.

  • Computes the Z-score based on the standard normal distribution (represented by NormalDist()) for the given confidence using the inverse of the cumulative distribution function (inv_cdf).

  • Produces the confidence interval based on the sample’s standard deviation and mean.


This assumes the sample size is big enough (let’s say more than ~100 points) in order to use the standard normal distribution rather than the student’s t distribution to compute the z value.
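
Usage might look like this (illustrative data, calling the confidence_interval function defined above; requires Python 3.8+):

data = [2.3, 1.9, 2.8, 2.5, 2.1, 2.6, 2.4]
print(confidence_interval(data, confidence=0.95))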


如何计算累积正态分布?

问题:如何计算累积正态分布?

我正在寻找Numpy或Scipy(或任何严格的Python库)中的函数,该函数将为我提供Python中的累积正态分布函数。

I am looking for a function in Numpy or Scipy (or any rigorous Python library) that will give me the cumulative normal distribution function in Python.


回答 0

这是一个例子:

>>> from scipy.stats import norm
>>> norm.cdf(1.96)
0.9750021048517795
>>> norm.cdf(-1.96)
0.024997895148220435

换句话说,标准正态分布大约95%的概率位于以零均值为中心、两侧各两个标准差的区间之内。

如果需要逆CDF:

>>> norm.ppf(norm.cdf(1.96))
array(1.9599999999999991)

Here’s an example:

>>> from scipy.stats import norm
>>> norm.cdf(1.96)
0.9750021048517795
>>> norm.cdf(-1.96)
0.024997895148220435

In other words, approximately 95% of the standard normal interval lies within two standard deviations, centered on a standard mean of zero.

If you need the inverse CDF:

>>> norm.ppf(norm.cdf(1.96))
array(1.9599999999999991)

回答 1

回答这个问题可能为时已晚,但由于 Google 搜索仍然会把人们引到这里,我决定把我的解决方案写在这里。

也就是说,自Python 2.7起,该math库集成了error函数math.erf(x)

erf()函数可用于计算传统的统计函数,例如累积标准正态分布:

from math import *
def phi(x):
    #'Cumulative distribution function for the standard normal distribution'
    return (1.0 + erf(x / sqrt(2.0))) / 2.0

参考:

https://docs.python.org/2/library/math.html

https://docs.python.org/3/library/math.html

误差函数和标准正态分布函数有何关系?

It may be too late to answer the question but since Google still leads people here, I decide to write my solution here.

That is, since Python 2.7, the math library has integrated the error function math.erf(x)

The erf() function can be used to compute traditional statistical functions such as the cumulative standard normal distribution:

from math import *
def phi(x):
    #'Cumulative distribution function for the standard normal distribution'
    return (1.0 + erf(x / sqrt(2.0))) / 2.0

Ref:

https://docs.python.org/2/library/math.html

https://docs.python.org/3/library/math.html

How are the Error Function and Standard Normal distribution function related?
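
A quick sanity check of phi() against the usual table values (this check is an addition, not part of the original answer):

print(phi(0))      # 0.5
print(phi(1.96))   # ~0.9750021048517795
print(phi(-1.96))  # ~0.0249978951482205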


回答 2

从这里改编http://mail.python.org/pipermail/python-list/2000-June/039873.html

from math import *
def erfcc(x):
    """Complementary error function."""
    z = abs(x)
    t = 1. / (1. + 0.5*z)
    r = t * exp(-z*z-1.26551223+t*(1.00002368+t*(.37409196+
        t*(.09678418+t*(-.18628806+t*(.27886807+
        t*(-1.13520398+t*(1.48851587+t*(-.82215223+
        t*.17087277)))))))))
    if (x >= 0.):
        return r
    else:
        return 2. - r

def ncdf(x):
    return 1. - 0.5*erfcc(x/(2**0.5))

Adapted from here http://mail.python.org/pipermail/python-list/2000-June/039873.html

from math import *
def erfcc(x):
    """Complementary error function."""
    z = abs(x)
    t = 1. / (1. + 0.5*z)
    r = t * exp(-z*z-1.26551223+t*(1.00002368+t*(.37409196+
        t*(.09678418+t*(-.18628806+t*(.27886807+
        t*(-1.13520398+t*(1.48851587+t*(-.82215223+
        t*.17087277)))))))))
    if (x >= 0.):
        return r
    else:
        return 2. - r

def ncdf(x):
    return 1. - 0.5*erfcc(x/(2**0.5))

回答 3

以Unknown的示例为基础,在许多库中实现的功能normdist()的Python等效项为:

def normcdf(x, mu, sigma):
    t = x-mu;
    y = 0.5*erfcc(-t/(sigma*sqrt(2.0)));
    if y>1.0:
        y = 1.0;
    return y

def normpdf(x, mu, sigma):
    u = (x-mu)/abs(sigma)
    y = (1/(sqrt(2*pi)*abs(sigma)))*exp(-u*u/2)
    return y

def normdist(x, mu, sigma, f):
    if f:
        y = normcdf(x,mu,sigma)
    else:
        y = normpdf(x,mu,sigma)
    return y

To build upon Unknown’s example, the Python equivalent of the function normdist() implemented in a lot of libraries would be:

def normcdf(x, mu, sigma):
    t = x-mu;
    y = 0.5*erfcc(-t/(sigma*sqrt(2.0)));
    if y>1.0:
        y = 1.0;
    return y

def normpdf(x, mu, sigma):
    u = (x-mu)/abs(sigma)
    y = (1/(sqrt(2*pi)*abs(sigma)))*exp(-u*u/2)
    return y

def normdist(x, mu, sigma, f):
    if f:
        y = normcdf(x,mu,sigma)
    else:
        y = normpdf(x,mu,sigma)
    return y

回答 4

从 Python 3.8 开始,标准库将 NormalDist 对象作为 statistics 模块的一部分提供。

对于给定的均值(mu)和标准差(sigma),它可用于获取累积分布函数(cdf,即随机样本 X 小于或等于 x 的概率):

from statistics import NormalDist

NormalDist(mu=0, sigma=1).cdf(1.96)
# 0.9750021048517796

对于标准正态分布mu = 0sigma = 1)可以简化:

NormalDist().cdf(1.96)
# 0.9750021048517796

NormalDist().cdf(-1.96)
# 0.024997895148220428

Starting Python 3.8, the standard library provides the NormalDist object as part of the statistics module.

It can be used to get the cumulative distribution function (cdf – probability that a random sample X will be less than or equal to x) for a given mean (mu) and standard deviation (sigma):

from statistics import NormalDist

NormalDist(mu=0, sigma=1).cdf(1.96)
# 0.9750021048517796

Which can be simplified for the standard normal distribution (mu = 0 and sigma = 1):

NormalDist().cdf(1.96)
# 0.9750021048517796

NormalDist().cdf(-1.96)
# 0.024997895148220428

回答 5

Alex 的答案给出了标准正态分布(均值 = 0,标准差 = 1)的解。如果您的正态分布具有均值 mean 和标准差 std(即 sqrt(var)),并且想要计算:

from scipy.stats import norm

# cdf(x < val)
print norm.cdf(val, m, s)

# cdf(x > val)
print 1 - norm.cdf(val, m, s)

# cdf(v1 < x < v2)
print norm.cdf(v2, m, s) - norm.cdf(v1, m, s)

有关 cdf 的更多信息请参见这里,scipy 中正态分布的实现及许多公式请参见这里。

Alex’s answer shows you a solution for the standard normal distribution (mean = 0, standard deviation = 1). If you have a normal distribution with mean and std (which is sqrt(var)) and you want to calculate:

from scipy.stats import norm

# cdf(x < val)
print norm.cdf(val, m, s)

# cdf(x > val)
print 1 - norm.cdf(val, m, s)

# cdf(v1 < x < v2)
print norm.cdf(v2, m, s) - norm.cdf(v1, m, s)

Read more about cdf here and scipy implementation of normal distribution with many formulas here.
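
For example, with a hypothetical normal distribution of mean 100 and standard deviation 12 (numbers chosen only for illustration):

from scipy.stats import norm

m, s = 100, 12

print(norm.cdf(98, m, s))                        # P(X < 98)
print(1 - norm.cdf(98, m, s))                    # P(X > 98)
print(norm.cdf(110, m, s) - norm.cdf(98, m, s))  # P(98 < X < 110)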


回答 6

从上方拍摄:

from scipy.stats import norm
>>> norm.cdf(1.96)
0.9750021048517795
>>> norm.cdf(-1.96)
0.024997895148220435

对于两尾测试:

import numpy as np
z = 1.96
p_value = 2 * norm.cdf(-np.abs(z))
0.04999579029644087

Taken from above:

from scipy.stats import norm
>>> norm.cdf(1.96)
0.9750021048517795
>>> norm.cdf(-1.96)
0.024997895148220435

For a two-tailed test:

import numpy as np
z = 1.96
p_value = 2 * norm.cdf(-np.abs(z))
0.04999579029644087

回答 7

像这样简单:

import math
def my_cdf(x):
    return 0.5*(1+math.erf(x/math.sqrt(2)))

我在此页面中找到了公式https://www.danielsoper.com/statcalc/formulas.aspx?id=55

Simple like this:

import math
def my_cdf(x):
    return 0.5*(1+math.erf(x/math.sqrt(2)))

I found the formula in this page https://www.danielsoper.com/statcalc/formulas.aspx?id=55


回答 8

当Google针对搜索netlogo pdf提供此答案时,这是上述python代码的netlogo版本

    ;; 正态分布累积密度函数
    to-report normcdf [x mu sigma]
        let t x - mu
        let y 0.5 * erfcc [ - t / ( sigma * sqrt 2.0)]
        if ( y > 1.0 ) [ set y 1.0 ]
        report y
    end

    ;; 正态分布概率密度函数
    to-report normpdf [x mu sigma]
        let u = (x - mu) / abs sigma
        let y = 1 / ( sqrt [2 * pi] * abs sigma ) * exp ( - u * u / 2.0)
        report y
    end

    ;; 互补误差函数
    to-report erfcc [x]
        let z abs x
        let t 1.0 / (1.0 + 0.5 * z)
        let r t *  exp ( - z * z -1.26551223 + t * (1.00002368 + t * (0.37409196 +
            t * (0.09678418 + t * (-0.18628806 + t * (.27886807 +
            t * (-1.13520398 +t * (1.48851587 +t * (-0.82215223 +
            t * .17087277 )))))))))
        ifelse (x >= 0) [ report r ] [report 2.0 - r]
    end

As Google gives this answer for the search netlogo pdf, here’s the netlogo version of the above python code


    ;; Normal distribution cumulative density function
    to-report normcdf [x mu sigma]
        let t x - mu
        let y 0.5 * erfcc [ - t / ( sigma * sqrt 2.0)]
        if ( y > 1.0 ) [ set y 1.0 ]
        report y
    end

    ;; Normal distribution probability density function
    to-report normpdf [x mu sigma]
        let u = (x - mu) / abs sigma
        let y = 1 / ( sqrt [2 * pi] * abs sigma ) * exp ( - u * u / 2.0)
        report y
    end

    ;; Complementary error function
    to-report erfcc [x]
        let z abs x
        let t 1.0 / (1.0 + 0.5 * z)
        let r t *  exp ( - z * z -1.26551223 + t * (1.00002368 + t * (0.37409196 +
            t * (0.09678418 + t * (-0.18628806 + t * (.27886807 +
            t * (-1.13520398 +t * (1.48851587 +t * (-0.82215223 +
            t * .17087277 )))))))))
        ifelse (x >= 0) [ report r ] [report 2.0 - r]
    end


在scikit学习LinearRegression中找到p值(重要性)

问题:在scikit学习LinearRegression中找到p值(重要性)

如何找到每个系数的p值(重要性)?

lm = sklearn.linear_model.LinearRegression()
lm.fit(x,y)

How can I find the p-value (significance) of each coefficient?

lm = sklearn.linear_model.LinearRegression()
lm.fit(x,y)

回答 0

这有点矫枉过正,但让我们尝试一下。首先让我们使用 statsmodels 找出 p 值应该是什么。

import pandas as pd
import numpy as np
from sklearn import datasets, linear_model
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from scipy import stats

diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target

X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
est2 = est.fit()
print(est2.summary())

我们得到

                         OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.518
Model:                            OLS   Adj. R-squared:                  0.507
Method:                 Least Squares   F-statistic:                     46.27
Date:                Wed, 08 Mar 2017   Prob (F-statistic):           3.83e-62
Time:                        10:08:24   Log-Likelihood:                -2386.0
No. Observations:                 442   AIC:                             4794.
Df Residuals:                     431   BIC:                             4839.
Df Model:                          10                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const        152.1335      2.576     59.061      0.000     147.071     157.196
x1           -10.0122     59.749     -0.168      0.867    -127.448     107.424
x2          -239.8191     61.222     -3.917      0.000    -360.151    -119.488
x3           519.8398     66.534      7.813      0.000     389.069     650.610
x4           324.3904     65.422      4.958      0.000     195.805     452.976
x5          -792.1842    416.684     -1.901      0.058   -1611.169      26.801
x6           476.7458    339.035      1.406      0.160    -189.621    1143.113
x7           101.0446    212.533      0.475      0.635    -316.685     518.774
x8           177.0642    161.476      1.097      0.273    -140.313     494.442
x9           751.2793    171.902      4.370      0.000     413.409    1089.150
x10           67.6254     65.984      1.025      0.306     -62.065     197.316
==============================================================================
Omnibus:                        1.506   Durbin-Watson:                   2.029
Prob(Omnibus):                  0.471   Jarque-Bera (JB):                1.404
Skew:                           0.017   Prob(JB):                        0.496
Kurtosis:                       2.726   Cond. No.                         227.
==============================================================================

好的,让我们重现一下。这有点过头了,因为我们几乎要使用矩阵代数重现线性回归分析。但是到底。

lm = LinearRegression()
lm.fit(X,y)
params = np.append(lm.intercept_,lm.coef_)
predictions = lm.predict(X)

newX = pd.DataFrame({"Constant":np.ones(len(X))}).join(pd.DataFrame(X))
MSE = (sum((y-predictions)**2))/(len(newX)-len(newX.columns))

# Note if you don't want to use a DataFrame replace the two lines above with
# newX = np.append(np.ones((len(X),1)), X, axis=1)
# MSE = (sum((y-predictions)**2))/(len(newX)-len(newX[0]))

var_b = MSE*(np.linalg.inv(np.dot(newX.T,newX)).diagonal())
sd_b = np.sqrt(var_b)
ts_b = params/ sd_b

p_values =[2*(1-stats.t.cdf(np.abs(i),(len(newX)-len(newX[0])))) for i in ts_b]

sd_b = np.round(sd_b,3)
ts_b = np.round(ts_b,3)
p_values = np.round(p_values,3)
params = np.round(params,4)

myDF3 = pd.DataFrame()
myDF3["Coefficients"],myDF3["Standard Errors"],myDF3["t values"],myDF3["Probabilities"] = [params,sd_b,ts_b,p_values]
print(myDF3)

这给了我们。

    Coefficients  Standard Errors  t values  Probabilities
0       152.1335            2.576    59.061         0.000
1       -10.0122           59.749    -0.168         0.867
2      -239.8191           61.222    -3.917         0.000
3       519.8398           66.534     7.813         0.000
4       324.3904           65.422     4.958         0.000
5      -792.1842          416.684    -1.901         0.058
6       476.7458          339.035     1.406         0.160
7       101.0446          212.533     0.475         0.635
8       177.0642          161.476     1.097         0.273
9       751.2793          171.902     4.370         0.000
10       67.6254           65.984     1.025         0.306

因此,我们可以从statsmodel复制值。

This is kind of overkill but let’s give it a go. First lets use statsmodel to find out what the p-values should be

import pandas as pd
import numpy as np
from sklearn import datasets, linear_model
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from scipy import stats

diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target

X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
est2 = est.fit()
print(est2.summary())

and we get

                         OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.518
Model:                            OLS   Adj. R-squared:                  0.507
Method:                 Least Squares   F-statistic:                     46.27
Date:                Wed, 08 Mar 2017   Prob (F-statistic):           3.83e-62
Time:                        10:08:24   Log-Likelihood:                -2386.0
No. Observations:                 442   AIC:                             4794.
Df Residuals:                     431   BIC:                             4839.
Df Model:                          10                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const        152.1335      2.576     59.061      0.000     147.071     157.196
x1           -10.0122     59.749     -0.168      0.867    -127.448     107.424
x2          -239.8191     61.222     -3.917      0.000    -360.151    -119.488
x3           519.8398     66.534      7.813      0.000     389.069     650.610
x4           324.3904     65.422      4.958      0.000     195.805     452.976
x5          -792.1842    416.684     -1.901      0.058   -1611.169      26.801
x6           476.7458    339.035      1.406      0.160    -189.621    1143.113
x7           101.0446    212.533      0.475      0.635    -316.685     518.774
x8           177.0642    161.476      1.097      0.273    -140.313     494.442
x9           751.2793    171.902      4.370      0.000     413.409    1089.150
x10           67.6254     65.984      1.025      0.306     -62.065     197.316
==============================================================================
Omnibus:                        1.506   Durbin-Watson:                   2.029
Prob(Omnibus):                  0.471   Jarque-Bera (JB):                1.404
Skew:                           0.017   Prob(JB):                        0.496
Kurtosis:                       2.726   Cond. No.                         227.
==============================================================================

Ok, let’s reproduce this. It is kind of overkill as we are almost reproducing a linear regression analysis using Matrix Algebra. But what the heck.

lm = LinearRegression()
lm.fit(X,y)
params = np.append(lm.intercept_,lm.coef_)
predictions = lm.predict(X)

newX = pd.DataFrame({"Constant":np.ones(len(X))}).join(pd.DataFrame(X))
MSE = (sum((y-predictions)**2))/(len(newX)-len(newX.columns))

# Note if you don't want to use a DataFrame replace the two lines above with
# newX = np.append(np.ones((len(X),1)), X, axis=1)
# MSE = (sum((y-predictions)**2))/(len(newX)-len(newX[0]))

var_b = MSE*(np.linalg.inv(np.dot(newX.T,newX)).diagonal())
sd_b = np.sqrt(var_b)
ts_b = params/ sd_b

p_values =[2*(1-stats.t.cdf(np.abs(i),(len(newX)-len(newX[0])))) for i in ts_b]

sd_b = np.round(sd_b,3)
ts_b = np.round(ts_b,3)
p_values = np.round(p_values,3)
params = np.round(params,4)

myDF3 = pd.DataFrame()
myDF3["Coefficients"],myDF3["Standard Errors"],myDF3["t values"],myDF3["Probabilities"] = [params,sd_b,ts_b,p_values]
print(myDF3)

And this gives us.

    Coefficients  Standard Errors  t values  Probabilities
0       152.1335            2.576    59.061         0.000
1       -10.0122           59.749    -0.168         0.867
2      -239.8191           61.222    -3.917         0.000
3       519.8398           66.534     7.813         0.000
4       324.3904           65.422     4.958         0.000
5      -792.1842          416.684    -1.901         0.058
6       476.7458          339.035     1.406         0.160
7       101.0446          212.533     0.475         0.635
8       177.0642          161.476     1.097         0.273
9       751.2793          171.902     4.370         0.000
10       67.6254           65.984     1.025         0.306

So we can reproduce the values from statsmodel.


回答 1

scikit-learn的LinearRegression不会计算此信息,但是您可以轻松地扩展该类来做到这一点:

from sklearn import linear_model
from scipy import stats
import numpy as np


class LinearRegression(linear_model.LinearRegression):
    """
    LinearRegression class after sklearn's, but calculate t-statistics
    and p-values for model coefficients (betas).
    Additional attributes available after .fit()
    are `t` and `p` which are of the shape (y.shape[1], X.shape[1])
    which is (n_features, n_coefs)
    This class sets the intercept to 0 by default, since usually we include it
    in X.
    """

    def __init__(self, *args, **kwargs):
        if not "fit_intercept" in kwargs:
            kwargs['fit_intercept'] = False
        super(LinearRegression, self)\
                .__init__(*args, **kwargs)

    def fit(self, X, y, n_jobs=1):
        self = super(LinearRegression, self).fit(X, y, n_jobs)

        sse = np.sum((self.predict(X) - y) ** 2, axis=0) / float(X.shape[0] - X.shape[1])
        se = np.array([
            np.sqrt(np.diagonal(sse[i] * np.linalg.inv(np.dot(X.T, X))))
                                                    for i in range(sse.shape[0])
                    ])

        self.t = self.coef_ / se
        self.p = 2 * (1 - stats.t.cdf(np.abs(self.t), y.shape[0] - X.shape[1]))
        return self

代码摘自这里。

您应该看一下statsmodels,以便在Python中进行这种统计分析。

scikit-learn’s LinearRegression doesn’t calculate this information but you can easily extend the class to do it:

from sklearn import linear_model
from scipy import stats
import numpy as np


class LinearRegression(linear_model.LinearRegression):
    """
    LinearRegression class after sklearn's, but calculate t-statistics
    and p-values for model coefficients (betas).
    Additional attributes available after .fit()
    are `t` and `p` which are of the shape (y.shape[1], X.shape[1])
    which is (n_features, n_coefs)
    This class sets the intercept to 0 by default, since usually we include it
    in X.
    """

    def __init__(self, *args, **kwargs):
        if not "fit_intercept" in kwargs:
            kwargs['fit_intercept'] = False
        super(LinearRegression, self)\
                .__init__(*args, **kwargs)

    def fit(self, X, y, n_jobs=1):
        self = super(LinearRegression, self).fit(X, y, n_jobs)

        sse = np.sum((self.predict(X) - y) ** 2, axis=0) / float(X.shape[0] - X.shape[1])
        se = np.array([
            np.sqrt(np.diagonal(sse[i] * np.linalg.inv(np.dot(X.T, X))))
                                                    for i in range(sse.shape[0])
                    ])

        self.t = self.coef_ / se
        self.p = 2 * (1 - stats.t.cdf(np.abs(self.t), y.shape[0] - X.shape[1]))
        return self

Stolen from here.

You should take a look at statsmodels for this kind of statistical analysis in Python.


回答 2

编辑:可能不是正确的方法,请参阅评论

您可以使用sklearn.feature_selection.f_regression。

单击此处获取scikit学习页面

EDIT: Probably not the right way to do it, see comments

You could use sklearn.feature_selection.f_regression.

click here for the scikit-learn page


回答 3

elyase的答案https://stackoverflow.com/a/27928411/4240413中的代码实际上无效。请注意,sse是一个标量,然后尝试对其进行迭代。以下代码是修改后的版本。并不是很干净,但是我认为它或多或少地起作用。

import numpy as np
import scipy as sc
import scipy.linalg  # needed for sc.linalg.sqrtm below
from scipy.stats import t
from sklearn import linear_model

class LinearRegression(linear_model.LinearRegression):

    def __init__(self,*args,**kwargs):
        # *args is the list of arguments that might go into the LinearRegression object
        # that we don't know about and don't want to have to deal with. Similarly, **kwargs
        # is a dictionary of key words and values that might also need to go into the orginal
        # LinearRegression object. We put *args and **kwargs so that we don't have to look
        # these up and write them down explicitly here. Nice and easy.

        if not "fit_intercept" in kwargs:
            kwargs['fit_intercept'] = False

        super(LinearRegression,self).__init__(*args,**kwargs)

    # Adding in t-statistics for the coefficients.
    def fit(self,x,y):
        # This takes in numpy arrays (not matrices). Also assumes you are leaving out the column
        # of constants.

        # Not totally sure what 'super' does here and why you redefine self...
        self = super(LinearRegression, self).fit(x,y)
        n, k = x.shape
        yHat = np.matrix(self.predict(x)).T

        # Change X and Y into numpy matricies. x also has a column of ones added to it.
        x = np.hstack((np.ones((n,1)),np.matrix(x)))
        y = np.matrix(y).T

        # Degrees of freedom.
        df = float(n-k-1)

        # Sample variance.     
        sse = np.sum(np.square(yHat - y),axis=0)
        self.sampleVariance = sse/df

        # Sample variance for x.
        self.sampleVarianceX = x.T*x

        # Covariance Matrix = [(s^2)(X'X)^-1]^0.5. (sqrtm = matrix square root.  ugly)
        self.covarianceMatrix = sc.linalg.sqrtm(self.sampleVariance[0,0]*self.sampleVarianceX.I)

        # Standard erros for the difference coefficients: the diagonal elements of the covariance matrix.
        self.se = self.covarianceMatrix.diagonal()[1:]

        # T statistic for each beta.
        self.betasTStat = np.zeros(len(self.se))
        for i in xrange(len(self.se)):
            self.betasTStat[i] = self.coef_[0,i]/self.se[i]

        # P-value for each beta. This is a two sided t-test, since the betas can be 
        # positive or negative.
        self.betasPValue = 1 - t.cdf(abs(self.betasTStat),df)

The code in elyase’s answer https://stackoverflow.com/a/27928411/4240413 does not actually work. Notice that sse is a scalar, and then it tries to iterate through it. The following code is a modified version. Not amazingly clean, but I think it works more or less.

import numpy as np
import scipy as sc
import scipy.linalg  # needed for sc.linalg.sqrtm below
from scipy.stats import t
from sklearn import linear_model

class LinearRegression(linear_model.LinearRegression):

    def __init__(self,*args,**kwargs):
        # *args is the list of arguments that might go into the LinearRegression object
        # that we don't know about and don't want to have to deal with. Similarly, **kwargs
        # is a dictionary of key words and values that might also need to go into the orginal
        # LinearRegression object. We put *args and **kwargs so that we don't have to look
        # these up and write them down explicitly here. Nice and easy.

        if not "fit_intercept" in kwargs:
            kwargs['fit_intercept'] = False

        super(LinearRegression,self).__init__(*args,**kwargs)

    # Adding in t-statistics for the coefficients.
    def fit(self,x,y):
        # This takes in numpy arrays (not matrices). Also assumes you are leaving out the column
        # of constants.

        # Not totally sure what 'super' does here and why you redefine self...
        self = super(LinearRegression, self).fit(x,y)
        n, k = x.shape
        yHat = np.matrix(self.predict(x)).T

        # Change X and Y into numpy matricies. x also has a column of ones added to it.
        x = np.hstack((np.ones((n,1)),np.matrix(x)))
        y = np.matrix(y).T

        # Degrees of freedom.
        df = float(n-k-1)

        # Sample variance.     
        sse = np.sum(np.square(yHat - y),axis=0)
        self.sampleVariance = sse/df

        # Sample variance for x.
        self.sampleVarianceX = x.T*x

        # Covariance Matrix = [(s^2)(X'X)^-1]^0.5. (sqrtm = matrix square root.  ugly)
        self.covarianceMatrix = sc.linalg.sqrtm(self.sampleVariance[0,0]*self.sampleVarianceX.I)

        # Standard erros for the difference coefficients: the diagonal elements of the covariance matrix.
        self.se = self.covarianceMatrix.diagonal()[1:]

        # T statistic for each beta.
        self.betasTStat = np.zeros(len(self.se))
        for i in xrange(len(self.se)):
            self.betasTStat[i] = self.coef_[0,i]/self.se[i]

        # P-value for each beta. This is a two sided t-test, since the betas can be 
        # positive or negative.
        self.betasPValue = 1 - t.cdf(abs(self.betasTStat),df)

回答 4

拉取p值的一种简单方法是使用statsmodels回归:

import statsmodels.api as sm
mod = sm.OLS(Y,X)
fii = mod.fit()
p_values = fii.summary2().tables[1]['P>|t|']

您将获得一系列可以操纵的p值(例如,通过评估每个p值来选择要保留的顺序):

(原答案此处附有结果 p 值序列的截图)

An easy way to pull of the p-values is to use statsmodels regression:

import statsmodels.api as sm
mod = sm.OLS(Y,X)
fii = mod.fit()
p_values = fii.summary2().tables[1]['P>|t|']

You get a series of p-values that you can manipulate (for example choose the order you want to keep by evaluating each p-value):

(the original answer includes a screenshot of the resulting series of p-values)


回答 5

p 值包含在 F 统计量的结果中。如果要获取它,只需使用以下几行代码:

import statsmodels.api as sm
from scipy import stats
from sklearn import datasets

diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target

X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
print(est.fit().f_pvalue)

The p-value is part of the F-statistic results. If you want to get the value, simply use these few lines of code:

import statsmodels.api as sm
from scipy import stats
from sklearn import datasets

diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target

X2 = sm.add_constant(X)
est = sm.OLS(y, X2)
print(est.fit().f_pvalue)
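
Note that f_pvalue is the p-value of the overall F-test. If you instead want one p-value per coefficient from the same fitted model (this note is an addition, not part of the original answer), the statsmodels results object exposes them as pvalues:

results = est.fit()
print(results.pvalues)  # one p-value per coefficient, including the constant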

回答 6

在多变量回归的情况下,@ JARH的答案可能有误。(我没有足够的声誉来发表评论。)

在以下行中:

p_values =[2*(1-stats.t.cdf(np.abs(i),(len(newX)-1))) for i in ts_b]

t 值是用自由度为 len(newX)-1 的 t 分布来评估的,而不是自由度为 len(newX)-len(newX.columns)-1 的 t 分布。

所以这应该是:

p_values =[2*(1-stats.t.cdf(np.abs(i),(len(newX)-len(newX.columns)-1))) for i in ts_b]

(有关更多详细信息,请参见t值以进行OLS回归

There could be a mistake in @JARH‘s answer in the case of a multivariable regression. (I do not have enough reputation to comment.)

In the following line:

p_values =[2*(1-stats.t.cdf(np.abs(i),(len(newX)-1))) for i in ts_b],

the t-values are evaluated against a t-distribution with len(newX)-1 degrees of freedom instead of len(newX)-len(newX.columns)-1 degrees of freedom.

So this should be:

p_values =[2*(1-stats.t.cdf(np.abs(i),(len(newX)-len(newX.columns)-1))) for i in ts_b]

(See t-values for OLS regression for more details)


回答 7

您可以使用 scipy 来计算 p 值。此代码来自 scipy 文档。

>>> from scipy import stats
>>> import numpy as np
>>> x = np.random.random(10)
>>> y = np.random.random(10)
>>> slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)

You can use scipy for p-value. This code is from scipy documentation.

>>> from scipy import stats
>>> import numpy as np
>>> x = np.random.random(10)
>>> y = np.random.random(10)
>>> slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)

回答 8

对于单行代码,您可以使用pingouin.linear_regression函数(免责声明:我是Pingouin的创建者),该函数可使用NumPy数组或Pandas DataFrame与单变量/多元回归配合使用,例如:

import pingouin as pg
# Using a Pandas DataFrame `df`:
lm = pg.linear_regression(df[['x', 'z']], df['y'])
# Using a NumPy array:
lm = pg.linear_regression(X, y)

输出是一个数据帧,其中包含每个预测变量的beta系数,标准误差,T值,p值和置信区间,以及拟合的R ^ 2和调整后的R ^ 2。

For a one-liner you can use the pingouin.linear_regression function (disclaimer: I am the creator of Pingouin), which works with uni/multi-variate regression using NumPy arrays or Pandas DataFrame, e.g:

import pingouin as pg
# Using a Pandas DataFrame `df`:
lm = pg.linear_regression(df[['x', 'z']], df['y'])
# Using a NumPy array:
lm = pg.linear_regression(X, y)

The output is a dataframe with the beta coefficients, standard errors, T-values, p-values and confidence intervals for each predictor, as well as the R^2 and adjusted R^2 of the fit.


使用Scipy(Python)使经验分布适合理论分布吗?

问题:使用Scipy(Python)使经验分布适合理论分布吗?

简介:我列出了30,000多个整数值,范围从0到47(含0和47),例如[0,0,0,0,..,1,1,1,1,...,2,2,2,2,...,47,47,47,...]从某个连续分布中采样。列表中的值不一定按顺序排列,但顺序对于此问题并不重要。

问题:根据我的分布,我想为任何给定值计算p值(看到更大值的概率)。例如,您可以看到0的p值将接近1,数字较大的p值将趋于0。

我不知道我是否正确,但是为了确定概率,我认为我需要使我的数据适合最适合描述我的数据的理论分布。我认为需要某种拟合优度检验来确定最佳模型。

有没有办法在Python(ScipyNumpy)中实现这种分析?你能举个例子吗?

谢谢!

INTRODUCTION: I have a list of more than 30,000 integer values ranging from 0 to 47, inclusive, e.g.[0,0,0,0,..,1,1,1,1,...,2,2,2,2,...,47,47,47,...] sampled from some continuous distribution. The values in the list are not necessarily in order, but order doesn’t matter for this problem.

PROBLEM: Based on my distribution I would like to calculate p-value (the probability of seeing greater values) for any given value. For example, as you can see p-value for 0 would be approaching 1 and p-value for higher numbers would be tending to 0.

I don’t know if I am right, but to determine probabilities I think I need to fit my data to a theoretical distribution that is the most suitable to describe my data. I assume that some kind of goodness of fit test is needed to determine the best model.

Is there a way to implement such an analysis in Python (Scipy or Numpy)? Could you present any examples?

Thank you!


回答 0

平方误差和(SSE)的分布拟合

这是Saullo答案的更新和修改,它使用当前scipy.stats分布的完整列表,并返回分布的直方图和数据的直方图之间SSE最小的分布。

拟合示例

使用 statsmodels 中的 El Niño 数据集进行拟合,并计算误差。返回误差最小的分布。

所有分布

所有拟合分布

最佳拟合分布

最佳拟合分布

示例代码

%matplotlib inline

import warnings
import numpy as np
import pandas as pd
import scipy.stats as st
import statsmodels as sm
import matplotlib
import matplotlib.pyplot as plt

matplotlib.rcParams['figure.figsize'] = (16.0, 12.0)
matplotlib.style.use('ggplot')

# Create models from data
def best_fit_distribution(data, bins=200, ax=None):
    """Model data by finding best fit distribution to data"""
    # Get histogram of original data
    y, x = np.histogram(data, bins=bins, density=True)
    x = (x + np.roll(x, -1))[:-1] / 2.0

    # Distributions to check
    DISTRIBUTIONS = [        
        st.alpha,st.anglit,st.arcsine,st.beta,st.betaprime,st.bradford,st.burr,st.cauchy,st.chi,st.chi2,st.cosine,
        st.dgamma,st.dweibull,st.erlang,st.expon,st.exponnorm,st.exponweib,st.exponpow,st.f,st.fatiguelife,st.fisk,
        st.foldcauchy,st.foldnorm,st.frechet_r,st.frechet_l,st.genlogistic,st.genpareto,st.gennorm,st.genexpon,
        st.genextreme,st.gausshyper,st.gamma,st.gengamma,st.genhalflogistic,st.gilbrat,st.gompertz,st.gumbel_r,
        st.gumbel_l,st.halfcauchy,st.halflogistic,st.halfnorm,st.halfgennorm,st.hypsecant,st.invgamma,st.invgauss,
        st.invweibull,st.johnsonsb,st.johnsonsu,st.ksone,st.kstwobign,st.laplace,st.levy,st.levy_l,st.levy_stable,
        st.logistic,st.loggamma,st.loglaplace,st.lognorm,st.lomax,st.maxwell,st.mielke,st.nakagami,st.ncx2,st.ncf,
        st.nct,st.norm,st.pareto,st.pearson3,st.powerlaw,st.powerlognorm,st.powernorm,st.rdist,st.reciprocal,
        st.rayleigh,st.rice,st.recipinvgauss,st.semicircular,st.t,st.triang,st.truncexpon,st.truncnorm,st.tukeylambda,
        st.uniform,st.vonmises,st.vonmises_line,st.wald,st.weibull_min,st.weibull_max,st.wrapcauchy
    ]

    # Best holders
    best_distribution = st.norm
    best_params = (0.0, 1.0)
    best_sse = np.inf

    # Estimate distribution parameters from data
    for distribution in DISTRIBUTIONS:

        # Try to fit the distribution
        try:
            # Ignore warnings from data that can't be fit
            with warnings.catch_warnings():
                warnings.filterwarnings('ignore')

                # fit dist to data
                params = distribution.fit(data)

                # Separate parts of parameters
                arg = params[:-2]
                loc = params[-2]
                scale = params[-1]

                # Calculate fitted PDF and error with fit in distribution
                pdf = distribution.pdf(x, loc=loc, scale=scale, *arg)
                sse = np.sum(np.power(y - pdf, 2.0))

                # if axis pass in add to plot
                try:
                    if ax:
                        pd.Series(pdf, x).plot(ax=ax)
                except Exception:
                    pass

                # identify if this distribution is better
                if best_sse > sse > 0:
                    best_distribution = distribution
                    best_params = params
                    best_sse = sse

        except Exception:
            pass

    return (best_distribution.name, best_params)

def make_pdf(dist, params, size=10000):
    """Generate distributions's Probability Distribution Function """

    # Separate parts of parameters
    arg = params[:-2]
    loc = params[-2]
    scale = params[-1]

    # Get sane start and end points of distribution
    start = dist.ppf(0.01, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.01, loc=loc, scale=scale)
    end = dist.ppf(0.99, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.99, loc=loc, scale=scale)

    # Build PDF and turn into pandas Series
    x = np.linspace(start, end, size)
    y = dist.pdf(x, loc=loc, scale=scale, *arg)
    pdf = pd.Series(y, x)

    return pdf

# Load data from statsmodels datasets
data = pd.Series(sm.datasets.elnino.load_pandas().data.set_index('YEAR').values.ravel())

# Plot for comparison
plt.figure(figsize=(12,8))
ax = data.plot(kind='hist', bins=50, normed=True, alpha=0.5, color=plt.rcParams['axes.color_cycle'][1])
# Save plot limits
dataYLim = ax.get_ylim()

# Find best fit distribution
best_fit_name, best_fit_params = best_fit_distribution(data, 200, ax)
best_dist = getattr(st, best_fit_name)

# Update plots
ax.set_ylim(dataYLim)
ax.set_title(u'El Niño sea temp.\n All Fitted Distributions')
ax.set_xlabel(u'Temp (°C)')
ax.set_ylabel('Frequency')

# Make PDF with best params 
pdf = make_pdf(best_dist, best_fit_params)

# Display
plt.figure(figsize=(12,8))
ax = pdf.plot(lw=2, label='PDF', legend=True)
data.plot(kind='hist', bins=50, normed=True, alpha=0.5, label='Data', legend=True, ax=ax)

param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
param_str = ', '.join(['{}={:0.2f}'.format(k,v) for k,v in zip(param_names, best_fit_params)])
dist_str = '{}({})'.format(best_fit_name, param_str)

ax.set_title(u'El Niño sea temp. with best fit distribution \n' + dist_str)
ax.set_xlabel(u'Temp. (°C)')
ax.set_ylabel('Frequency')

Distribution Fitting with Sum of Square Error (SSE)

This is an update and modification to Saullo’s answer, that uses the full list of the current scipy.stats distributions and returns the distribution with the least SSE between the distribution’s histogram and the data’s histogram.

Example Fitting

Using the El Niño dataset from statsmodels, the distributions are fit and error is determined. The distribution with the least error is returned.

All Distributions

All Fitted Distributions

Best Fit Distribution

Best Fit Distribution

Example Code

%matplotlib inline

import warnings
import numpy as np
import pandas as pd
import scipy.stats as st
import statsmodels as sm
import matplotlib
import matplotlib.pyplot as plt

matplotlib.rcParams['figure.figsize'] = (16.0, 12.0)
matplotlib.style.use('ggplot')

# Create models from data
def best_fit_distribution(data, bins=200, ax=None):
    """Model data by finding best fit distribution to data"""
    # Get histogram of original data
    y, x = np.histogram(data, bins=bins, density=True)
    x = (x + np.roll(x, -1))[:-1] / 2.0

    # Distributions to check
    DISTRIBUTIONS = [        
        st.alpha,st.anglit,st.arcsine,st.beta,st.betaprime,st.bradford,st.burr,st.cauchy,st.chi,st.chi2,st.cosine,
        st.dgamma,st.dweibull,st.erlang,st.expon,st.exponnorm,st.exponweib,st.exponpow,st.f,st.fatiguelife,st.fisk,
        st.foldcauchy,st.foldnorm,st.frechet_r,st.frechet_l,st.genlogistic,st.genpareto,st.gennorm,st.genexpon,
        st.genextreme,st.gausshyper,st.gamma,st.gengamma,st.genhalflogistic,st.gilbrat,st.gompertz,st.gumbel_r,
        st.gumbel_l,st.halfcauchy,st.halflogistic,st.halfnorm,st.halfgennorm,st.hypsecant,st.invgamma,st.invgauss,
        st.invweibull,st.johnsonsb,st.johnsonsu,st.ksone,st.kstwobign,st.laplace,st.levy,st.levy_l,st.levy_stable,
        st.logistic,st.loggamma,st.loglaplace,st.lognorm,st.lomax,st.maxwell,st.mielke,st.nakagami,st.ncx2,st.ncf,
        st.nct,st.norm,st.pareto,st.pearson3,st.powerlaw,st.powerlognorm,st.powernorm,st.rdist,st.reciprocal,
        st.rayleigh,st.rice,st.recipinvgauss,st.semicircular,st.t,st.triang,st.truncexpon,st.truncnorm,st.tukeylambda,
        st.uniform,st.vonmises,st.vonmises_line,st.wald,st.weibull_min,st.weibull_max,st.wrapcauchy
    ]

    # Best holders
    best_distribution = st.norm
    best_params = (0.0, 1.0)
    best_sse = np.inf

    # Estimate distribution parameters from data
    for distribution in DISTRIBUTIONS:

        # Try to fit the distribution
        try:
            # Ignore warnings from data that can't be fit
            with warnings.catch_warnings():
                warnings.filterwarnings('ignore')

                # fit dist to data
                params = distribution.fit(data)

                # Separate parts of parameters
                arg = params[:-2]
                loc = params[-2]
                scale = params[-1]

                # Calculate fitted PDF and error with fit in distribution
                pdf = distribution.pdf(x, loc=loc, scale=scale, *arg)
                sse = np.sum(np.power(y - pdf, 2.0))

                # if axis pass in add to plot
                try:
                    if ax:
                        pd.Series(pdf, x).plot(ax=ax)
                except Exception:
                    pass

                # identify if this distribution is better
                if best_sse > sse > 0:
                    best_distribution = distribution
                    best_params = params
                    best_sse = sse

        except Exception:
            pass

    return (best_distribution.name, best_params)

def make_pdf(dist, params, size=10000):
    """Generate distributions's Probability Distribution Function """

    # Separate parts of parameters
    arg = params[:-2]
    loc = params[-2]
    scale = params[-1]

    # Get sane start and end points of distribution
    start = dist.ppf(0.01, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.01, loc=loc, scale=scale)
    end = dist.ppf(0.99, *arg, loc=loc, scale=scale) if arg else dist.ppf(0.99, loc=loc, scale=scale)

    # Build PDF and turn into pandas Series
    x = np.linspace(start, end, size)
    y = dist.pdf(x, loc=loc, scale=scale, *arg)
    pdf = pd.Series(y, x)

    return pdf

# Load data from statsmodels datasets
data = pd.Series(sm.datasets.elnino.load_pandas().data.set_index('YEAR').values.ravel())

# Plot for comparison
plt.figure(figsize=(12,8))
ax = data.plot(kind='hist', bins=50, normed=True, alpha=0.5, color=plt.rcParams['axes.color_cycle'][1])
# Save plot limits
dataYLim = ax.get_ylim()

# Find best fit distribution
best_fit_name, best_fit_params = best_fit_distribution(data, 200, ax)
best_dist = getattr(st, best_fit_name)

# Update plots
ax.set_ylim(dataYLim)
ax.set_title(u'El Niño sea temp.\n All Fitted Distributions')
ax.set_xlabel(u'Temp (°C)')
ax.set_ylabel('Frequency')

# Make PDF with best params 
pdf = make_pdf(best_dist, best_fit_params)

# Display
plt.figure(figsize=(12,8))
ax = pdf.plot(lw=2, label='PDF', legend=True)
data.plot(kind='hist', bins=50, normed=True, alpha=0.5, label='Data', legend=True, ax=ax)

param_names = (best_dist.shapes + ', loc, scale').split(', ') if best_dist.shapes else ['loc', 'scale']
param_str = ', '.join(['{}={:0.2f}'.format(k,v) for k,v in zip(param_names, best_fit_params)])
dist_str = '{}({})'.format(best_fit_name, param_str)

ax.set_title(u'El Niño sea temp. with best fit distribution \n' + dist_str)
ax.set_xlabel(u'Temp. (°C)')
ax.set_ylabel('Frequency')

回答 1

SciPy 0.12.0 中实现了 82 个分布函数。您可以使用它们的 fit() 方法测试其中一些分布与您的数据的拟合程度。查看下面的代码以获取更多详细信息:

(图:各候选分布在数据直方图上的拟合结果)

import matplotlib.pyplot as plt
import scipy
import scipy.stats
size = 30000
x = scipy.arange(size)
y = scipy.int_(scipy.round_(scipy.stats.vonmises.rvs(5,size=size)*47))
h = plt.hist(y, bins=range(48))

dist_names = ['gamma', 'beta', 'rayleigh', 'norm', 'pareto']

for dist_name in dist_names:
    dist = getattr(scipy.stats, dist_name)
    param = dist.fit(y)
    pdf_fitted = dist.pdf(x, *param[:-2], loc=param[-2], scale=param[-1]) * size
    plt.plot(pdf_fitted, label=dist_name)
    plt.xlim(0,47)
plt.legend(loc='upper right')
plt.show()

参考文献:

-拟合分布,拟合优度,p值。是否可以使用Scipy(Python)做到这一点?

-使用 Scipy 进行分布拟合

以下是Scipy 0.12.0(VI)中可用的所有分布函数的名称列表:

dist_names = [ 'alpha', 'anglit', 'arcsine', 'beta', 'betaprime', 'bradford', 'burr', 'cauchy', 'chi', 'chi2', 'cosine', 'dgamma', 'dweibull', 'erlang', 'expon', 'exponweib', 'exponpow', 'f', 'fatiguelife', 'fisk', 'foldcauchy', 'foldnorm', 'frechet_r', 'frechet_l', 'genlogistic', 'genpareto', 'genexpon', 'genextreme', 'gausshyper', 'gamma', 'gengamma', 'genhalflogistic', 'gilbrat', 'gompertz', 'gumbel_r', 'gumbel_l', 'halfcauchy', 'halflogistic', 'halfnorm', 'hypsecant', 'invgamma', 'invgauss', 'invweibull', 'johnsonsb', 'johnsonsu', 'ksone', 'kstwobign', 'laplace', 'logistic', 'loggamma', 'loglaplace', 'lognorm', 'lomax', 'maxwell', 'mielke', 'nakagami', 'ncx2', 'ncf', 'nct', 'norm', 'pareto', 'pearson3', 'powerlaw', 'powerlognorm', 'powernorm', 'rdist', 'reciprocal', 'rayleigh', 'rice', 'recipinvgauss', 'semicircular', 't', 'triang', 'truncexpon', 'truncnorm', 'tukeylambda', 'uniform', 'vonmises', 'wald', 'weibull_min', 'weibull_max', 'wrapcauchy'] 

There are 82 implemented distribution functions in SciPy 0.12.0. You can test how some of them fit to your data using their fit() method. Check the code below for more details:

(plot of the fitted distributions over the data histogram)

import matplotlib.pyplot as plt
import scipy
import scipy.stats
size = 30000
x = scipy.arange(size)
y = scipy.int_(scipy.round_(scipy.stats.vonmises.rvs(5,size=size)*47))
h = plt.hist(y, bins=range(48))

dist_names = ['gamma', 'beta', 'rayleigh', 'norm', 'pareto']

for dist_name in dist_names:
    dist = getattr(scipy.stats, dist_name)
    param = dist.fit(y)
    pdf_fitted = dist.pdf(x, *param[:-2], loc=param[-2], scale=param[-1]) * size
    plt.plot(pdf_fitted, label=dist_name)
    plt.xlim(0,47)
plt.legend(loc='upper right')
plt.show()

References:

– Fitting distributions, goodness of fit, p-value. Is it possible to do this with Scipy (Python)?

– Distribution fitting with Scipy

And here a list with the names of all distribution functions available in Scipy 0.12.0 (VI):

dist_names = [ 'alpha', 'anglit', 'arcsine', 'beta', 'betaprime', 'bradford', 'burr', 'cauchy', 'chi', 'chi2', 'cosine', 'dgamma', 'dweibull', 'erlang', 'expon', 'exponweib', 'exponpow', 'f', 'fatiguelife', 'fisk', 'foldcauchy', 'foldnorm', 'frechet_r', 'frechet_l', 'genlogistic', 'genpareto', 'genexpon', 'genextreme', 'gausshyper', 'gamma', 'gengamma', 'genhalflogistic', 'gilbrat', 'gompertz', 'gumbel_r', 'gumbel_l', 'halfcauchy', 'halflogistic', 'halfnorm', 'hypsecant', 'invgamma', 'invgauss', 'invweibull', 'johnsonsb', 'johnsonsu', 'ksone', 'kstwobign', 'laplace', 'logistic', 'loggamma', 'loglaplace', 'lognorm', 'lomax', 'maxwell', 'mielke', 'nakagami', 'ncx2', 'ncf', 'nct', 'norm', 'pareto', 'pearson3', 'powerlaw', 'powerlognorm', 'powernorm', 'rdist', 'reciprocal', 'rayleigh', 'rice', 'recipinvgauss', 'semicircular', 't', 'triang', 'truncexpon', 'truncnorm', 'tukeylambda', 'uniform', 'vonmises', 'wald', 'weibull_min', 'weibull_max', 'wrapcauchy'] 

回答 2

@Saullo Castro 提到的 fit() 方法提供最大似然估计(MLE)。哪种分布最适合您的数据,可以通过几种不同的方式来确定,例如:

1. 给您最高对数似然的那一个;

2. 给您最小 AIC、BIC 或 BICc 值的那一个(参见 Wiki:http://en.wikipedia.org/wiki/Akaike_information_criterion,基本上可以看作针对参数数量调整后的对数似然,因为参数更多的分布预期拟合得更好);

3. 最大化贝叶斯后验概率的那一个(参见 Wiki:http://en.wikipedia.org/wiki/Posterior_probability)。

当然,如果您已经有一个可以描述您的数据的分布(基于特定领域的理论)并且要坚持下去,那么您将跳过确定最佳拟合分布的步骤。

scipy 没有提供计算对数似然的函数(尽管提供了 MLE 方法),但自己手写一个很容易:参见“`scipy.stat.distributions` 的内置概率密度函数是否比用户提供的函数慢?”。

The fit() method mentioned by @Saullo Castro provides maximum likelihood estimates (MLE). The best distribution for your data can be determined in several different ways, such as:

1, the one that gives you the highest log likelihood.

2, the one that gives you the smallest AIC, BIC or BICc values (see wiki: http://en.wikipedia.org/wiki/Akaike_information_criterion; basically this can be viewed as log likelihood adjusted for the number of parameters, as distributions with more parameters are expected to fit better)

3, the one that maximize the Bayesian posterior probability. (see wiki: http://en.wikipedia.org/wiki/Posterior_probability)

Of course, if you already have a distribution that should describe you data (based on the theories in your particular field) and want to stick to that, you will skip the step of identifying the best fit distribution.

scipy does not come with a function to calculate log likelihood (although MLE methods are provided), but hard-coding one is easy: see Is the built-in probability density function of `scipy.stat.distributions` slower than a user provided one?
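To make the comparison above concrete, here is a minimal sketch (not part of the original answer) that fits a few scipy distributions by MLE and ranks them by log likelihood and AIC; the synthetic data and the candidate list are placeholders chosen purely for illustration:

import numpy as np
import scipy.stats as st

# Synthetic example data (placeholder); replace with your own sample.
data = np.random.gamma(2.0, 2.0, size=1000)

for name in ['norm', 'gamma', 'lognorm']:
    dist = getattr(st, name)
    params = dist.fit(data)                      # MLE fit
    loglik = np.sum(dist.logpdf(data, *params))  # log likelihood of the sample
    aic = 2 * len(params) - 2 * loglik           # AIC = 2k - 2*ln(L)
    print('{:8s}  loglik={:10.2f}  AIC={:10.2f}'.format(name, loglik, aic))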


回答 3

AFAICU,您的分布是离散的(而且只能是离散的)。因此,仅需统计不同值出现的频率并将其归一化,就足以满足您的目的。下面用一个例子来说明:

In []: values= [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4]
In []: counts= asarray(bincount(values), dtype= float)
In []: cdf= counts.cumsum()/ counts.sum()

因此,看到高于 1 的值的概率就是(根据互补累积分布函数(ccdf)):

In []: 1- cdf[1]
Out[]: 0.40000000000000002

请注意,ccdf 与生存函数(sf)密切相关,但这里 ccdf 也是针对离散分布定义的,而 sf 仅针对连续分布定义。

AFAICU, your distribution is discrete (and nothing but discrete). Therefore just counting the frequencies of different values and normalizing them should be enough for your purposes. So, an example to demonstrate this:

In []: values= [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 4]
In []: counts= asarray(bincount(values), dtype= float)
In []: cdf= counts.cumsum()/ counts.sum()

Thus, the probability of seeing values higher than 1 is simply (according to the complementary cumulative distribution function (ccdf)):

In []: 1- cdf[1]
Out[]: 0.40000000000000002

Please note that ccdf is closely related to the survival function (sf), but it’s also defined with discrete distributions, whereas sf is defined only for continuous distributions.


回答 4

在我看来,这听起来像是概率密度估计问题。

from scipy.stats import gaussian_kde
occurences = [0,0,0,0,..,1,1,1,1,...,2,2,2,2,...,47]
values = range(0,48)
kde = gaussian_kde(map(float, occurences))
p = kde(values)
p = p/sum(p)
print "P(x>=1) = %f" % sum(p[1:])

另请参阅http://jpktd.blogspot.com/2009/03/using-gaussian-kernel-density.html

It sounds like probability density estimation problem to me.

from scipy.stats import gaussian_kde
occurences = [0,0,0,0,..,1,1,1,1,...,2,2,2,2,...,47]
values = range(0,48)
kde = gaussian_kde(map(float, occurences))
p = kde(values)
p = p/sum(p)
print "P(x>=1) = %f" % sum(p[1:])

Also see http://jpktd.blogspot.com/2009/03/using-gaussian-kernel-density.html.
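The occurences list in the snippet above is elided; a self-contained sketch of the same idea, using synthetic integer data as a stand-in (an assumption for illustration only), could look like this:

import numpy as np
from scipy.stats import gaussian_kde

# Placeholder data standing in for the elided `occurences` list.
occurences = np.random.randint(0, 48, size=30000).astype(float)
values = np.arange(48)

kde = gaussian_kde(occurences)   # kernel density estimate over the samples
p = kde(values)
p = p / p.sum()                  # normalize to a discrete probability vector
print("P(x>=1) = %f" % p[1:].sum())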


回答 5

尝试 distfit 库。

pip install distfit

import numpy as np

# Create 1000 random integers, value between [0-50]
X = np.random.randint(0, 50,1000)

# Retrieve P-value for y
y = [0,10,45,55,100]

# From the distfit library import the class distfit
from distfit import distfit

# Initialize.
# Set any properties here, such as alpha.
# The smoothing can be of use when working with integers. Otherwise your histogram
# may be jumping up-and-down, and getting the correct fit may be harder.
dist = distfit(alpha=0.05, smooth=10)

# Search for best theoretical fit on your empirical data
dist.fit_transform(X)

> [distfit] >fit..
> [distfit] >transform..
> [distfit] >[norm      ] [RSS: 0.0037894] [loc=23.535 scale=14.450] 
> [distfit] >[expon     ] [RSS: 0.0055534] [loc=0.000 scale=23.535] 
> [distfit] >[pareto    ] [RSS: 0.0056828] [loc=-384473077.778 scale=384473077.778] 
> [distfit] >[dweibull  ] [RSS: 0.0038202] [loc=24.535 scale=13.936] 
> [distfit] >[t         ] [RSS: 0.0037896] [loc=23.535 scale=14.450] 
> [distfit] >[genextreme] [RSS: 0.0036185] [loc=18.890 scale=14.506] 
> [distfit] >[gamma     ] [RSS: 0.0037600] [loc=-175.505 scale=1.044] 
> [distfit] >[lognorm   ] [RSS: 0.0642364] [loc=-0.000 scale=1.802] 
> [distfit] >[beta      ] [RSS: 0.0021885] [loc=-3.981 scale=52.981] 
> [distfit] >[uniform   ] [RSS: 0.0012349] [loc=0.000 scale=49.000] 

# Best fitted model
best_distr = dist.model
print(best_distr)

# Uniform shows best fit, with 95% CII (confidence intervals), and all other parameters
> {'distr': <scipy.stats._continuous_distns.uniform_gen at 0x16de3a53160>,
>  'params': (0.0, 49.0),
>  'name': 'uniform',
>  'RSS': 0.0012349021241149533,
>  'loc': 0.0,
>  'scale': 49.0,
>  'arg': (),
>  'CII_min_alpha': 2.45,
>  'CII_max_alpha': 46.55}

# Ranking distributions
dist.summary

# Plot the summary of fitted distributions
dist.plot_summary()


# Make prediction on new datapoints based on the fit
dist.predict(y)

# Retrieve your pvalues with 
dist.y_pred
# array(['down', 'none', 'none', 'up', 'up'], dtype='<U4')
dist.y_proba
array([0.02040816, 0.02040816, 0.02040816, 0.        , 0.        ])

# Or in one dataframe
dist.df

# The plot function will now also include the predictions of y
dist.plot()

最佳拟合

请注意,在这种情况下,由于是均匀分布,所有点都会是显著的。如果需要,可以使用 dist.y_pred 进行过滤。

Try the distfit library.

pip install distfit

import numpy as np

# Create 1000 random integers, value between [0-50]
X = np.random.randint(0, 50,1000)

# Retrieve P-value for y
y = [0,10,45,55,100]

# From the distfit library import the class distfit
from distfit import distfit

# Initialize.
# Set any properties here, such as alpha.
# The smoothing can be of use when working with integers. Otherwise your histogram
# may be jumping up-and-down, and getting the correct fit may be harder.
dist = distfit(alpha=0.05, smooth=10)

# Search for best theoretical fit on your empirical data
dist.fit_transform(X)

> [distfit] >fit..
> [distfit] >transform..
> [distfit] >[norm      ] [RSS: 0.0037894] [loc=23.535 scale=14.450] 
> [distfit] >[expon     ] [RSS: 0.0055534] [loc=0.000 scale=23.535] 
> [distfit] >[pareto    ] [RSS: 0.0056828] [loc=-384473077.778 scale=384473077.778] 
> [distfit] >[dweibull  ] [RSS: 0.0038202] [loc=24.535 scale=13.936] 
> [distfit] >[t         ] [RSS: 0.0037896] [loc=23.535 scale=14.450] 
> [distfit] >[genextreme] [RSS: 0.0036185] [loc=18.890 scale=14.506] 
> [distfit] >[gamma     ] [RSS: 0.0037600] [loc=-175.505 scale=1.044] 
> [distfit] >[lognorm   ] [RSS: 0.0642364] [loc=-0.000 scale=1.802] 
> [distfit] >[beta      ] [RSS: 0.0021885] [loc=-3.981 scale=52.981] 
> [distfit] >[uniform   ] [RSS: 0.0012349] [loc=0.000 scale=49.000] 

# Best fitted model
best_distr = dist.model
print(best_distr)

# Uniform shows best fit, with 95% CII (confidence intervals), and all other parameters
> {'distr': <scipy.stats._continuous_distns.uniform_gen at 0x16de3a53160>,
>  'params': (0.0, 49.0),
>  'name': 'uniform',
>  'RSS': 0.0012349021241149533,
>  'loc': 0.0,
>  'scale': 49.0,
>  'arg': (),
>  'CII_min_alpha': 2.45,
>  'CII_max_alpha': 46.55}

# Ranking distributions
dist.summary

# Plot the summary of fitted distributions
dist.plot_summary()


# Make prediction on new datapoints based on the fit
dist.predict(y)

# Retrieve your pvalues with 
dist.y_pred
# array(['down', 'none', 'none', 'up', 'up'], dtype='<U4')
dist.y_proba
array([0.02040816, 0.02040816, 0.02040816, 0.        , 0.        ])

# Or in one dataframe
dist.df

# The plot function will now also include the predictions of y
dist.plot()

Best fit

Note that in this case, all points will be significant because of the uniform distribution. You can filter with the dist.y_pred if required.


回答 6

使用OpenTURNS时,我将使用BIC标准来选择适合此类数据的最佳分布。这是因为此标准不会给具有更多参数的分布带来太多优势。实际上,如果分布具有更多参数,则拟合的分布更容易接近数据。此外,在这种情况下,Kolmogorov-Smirnov可能没有意义,因为测量值的微小误差将对p值产生巨大影响。

为了说明这一过程,我加载了 El-Nino 数据,其中包含 1950 年至 2010 年间的 732 个月度温度测量值:

import openturns as ot  # needed for the ot.* calls below
import statsmodels.api as sm
dta = sm.datasets.elnino.load_pandas().data
dta['YEAR'] = dta.YEAR.astype(int).astype(str)
dta = dta.set_index('YEAR').T.unstack()
data = dta.values

使用GetContinuousUniVariateFactories静态方法很容易获得30个内置的单变量分布工厂。完成后,BestModelBIC静态方法将返回最佳模型和相应的BIC分数。

sample = ot.Sample(data, 1)
tested_factories = ot.DistributionFactory.GetContinuousUniVariateFactories()
best_model, best_bic = ot.FittingTest.BestModelBIC(sample,
                                                   tested_factories)
print("Best=",best_model)

打印:

Best= Beta(alpha = 1.64258, beta = 2.4348, a = 18.936, b = 29.254)

为了以图形方式将拟合结果与直方图进行比较,我使用最佳分布的 drawPDF 方法。

import openturns.viewer as otv
graph = ot.HistogramFactory().build(sample).drawPDF()
bestPDF = best_model.drawPDF()
bestPDF.setColors(["blue"])
graph.add(bestPDF)
graph.setTitle("Best BIC fit")
name = best_model.getImplementation().getClassName()
graph.setLegends(["Histogram",name])
graph.setXTitle("Temperature (°C)")
otv.View(graph)

这将生成:

Beta 分布拟合 El-Nino 温度

BestModelBIC文档中提供了有关此主题的更多详细信息。可以将Scipy分布包括在SciPyDistribution中,甚至可以将ChaosPy分布与ChaosPyDistribution一起包含在内,但是我想当前脚本可以满足大多数实际目的。

With OpenTURNS, I would use the BIC criteria to select the best distribution that fits such data. This is because this criteria does not give too much advantage to the distributions which have more parameters. Indeed, if a distribution has more parameters, it is easier for the fitted distribution to be closer to the data. Moreover, the Kolmogorov-Smirnov may not make sense in this case, because a small error in the measured values will have a huge impact on the p-value.

To illustrate the process, I load the El-Nino data, which contains 732 monthly temperature measurements from 1950 to 2010:

import openturns as ot  # needed for the ot.* calls below
import statsmodels.api as sm
dta = sm.datasets.elnino.load_pandas().data
dta['YEAR'] = dta.YEAR.astype(int).astype(str)
dta = dta.set_index('YEAR').T.unstack()
data = dta.values

It is easy to get the 30 built-in univariate distribution factories with the GetContinuousUniVariateFactories static method. Once done, the BestModelBIC static method returns the best model and the corresponding BIC score.

sample = ot.Sample([[p] for p in data]) # data reshaping
tested_factories = ot.DistributionFactory.GetContinuousUniVariateFactories()
best_model, best_bic = ot.FittingTest.BestModelBIC(sample,
                                                   tested_factories)
print("Best=",best_model)

which prints:

Best= Beta(alpha = 1.64258, beta = 2.4348, a = 18.936, b = 29.254)

In order to graphically compare the fit to the histogram, I use the drawPDF methods of the best distribution.

import openturns.viewer as otv
graph = ot.HistogramFactory().build(sample).drawPDF()
bestPDF = best_model.drawPDF()
bestPDF.setColors(["blue"])
graph.add(bestPDF)
graph.setTitle("Best BIC fit")
name = best_model.getImplementation().getClassName()
graph.setLegends(["Histogram",name])
graph.setXTitle("Temperature (°C)")
otv.View(graph)

This produces:

Beta fit to the El-Nino temperatures

More details on this topic are presented in the BestModelBIC doc. It would be possible to include the Scipy distribution in the SciPyDistribution or even with ChaosPy distributions with ChaosPyDistribution, but I guess that the current script fulfills most practical purposes.


回答 7

如果我没有理解您的需求,请见谅。不过,把数据存进一个字典怎么样?字典的键是 0 到 47 之间的数字,值则是相应键在原始列表中出现的次数。
这样,您的概率 p(x) 就是所有大于 x 的键对应的值之和除以 30000。

Forgive me if I don’t understand your need but what about storing your data in a dictionary where keys would be the numbers between 0 and 47 and values the number of occurrences of their related keys in your original list?
Thus your likelihood p(x) will be the sum of all the values for keys greater than x divided by 30000.
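A minimal sketch of that dictionary-counting idea (the sample values below are placeholders, not the original 30000 observations):

from collections import Counter

samples = [0, 0, 1, 2, 2, 2, 3, 5, 7, 10]   # stand-in data
counts = Counter(samples)                    # key -> number of occurrences

def tail_probability(x, counts, total):
    # P(value > x): sum the counts of all keys greater than x, divide by total
    return sum(c for k, c in counts.items() if k > x) / float(total)

print(tail_probability(1, counts, len(samples)))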


如何使用python / numpy计算百分位数?

问题:如何使用python / numpy计算百分位数?

是否有一种方便的方法来计算序列或一维numpy数组的百分位数?

我正在寻找类似于Excel的百分位数功能的东西。

我查看了NumPy的统计资料参考,但找不到。我只能找到中位数(第50个百分位数),但没有更具体的内容。

Is there a convenient way to calculate percentiles for a sequence or single-dimensional numpy array?

I am looking for something similar to Excel’s percentile function.

I looked in NumPy’s statistics reference, and couldn’t find this. All I could find is the median (50th percentile), but not something more specific.


回答 0

您可能对SciPy Stats软件包感兴趣。它具有您需要的百分位数功能和许多其他统计功能。

numpy 中也有 percentile()。

import numpy as np
a = np.array([1,2,3,4,5])
p = np.percentile(a, 50) # return 50th percentile, e.g median.
print p
3.0

这张工单让我觉得他们不会很快把 percentile() 集成到 numpy 中。

You might be interested in the SciPy Stats package. It has the percentile function you’re after and many other statistical goodies.

percentile() is available in numpy too.

import numpy as np
a = np.array([1,2,3,4,5])
p = np.percentile(a, 50) # return 50th percentile, e.g median.
print p
3.0

This ticket leads me to believe they won’t be integrating percentile() into numpy anytime soon.


回答 1

顺便说一句,还有一个纯 Python 实现的百分位数函数,以防有人不想依赖 scipy。该函数复制如下:

## {{{ http://code.activestate.com/recipes/511478/ (r1)
import math
import functools

def percentile(N, percent, key=lambda x:x):
    """
    Find the percentile of a list of values.

    @parameter N - is a list of values. Note N MUST BE already sorted.
    @parameter percent - a float value from 0.0 to 1.0.
    @parameter key - optional key function to compute value from each element of N.

    @return - the percentile of the values
    """
    if not N:
        return None
    k = (len(N)-1) * percent
    f = math.floor(k)
    c = math.ceil(k)
    if f == c:
        return key(N[int(k)])
    d0 = key(N[int(f)]) * (c-k)
    d1 = key(N[int(c)]) * (k-f)
    return d0+d1

# median is 50th percentile.
median = functools.partial(percentile, percent=0.5)
## end of http://code.activestate.com/recipes/511478/ }}}

By the way, there is a pure-Python implementation of percentile function, in case one doesn’t want to depend on scipy. The function is copied below:

## {{{ http://code.activestate.com/recipes/511478/ (r1)
import math
import functools

def percentile(N, percent, key=lambda x:x):
    """
    Find the percentile of a list of values.

    @parameter N - is a list of values. Note N MUST BE already sorted.
    @parameter percent - a float value from 0.0 to 1.0.
    @parameter key - optional key function to compute value from each element of N.

    @return - the percentile of the values
    """
    if not N:
        return None
    k = (len(N)-1) * percent
    f = math.floor(k)
    c = math.ceil(k)
    if f == c:
        return key(N[int(k)])
    d0 = key(N[int(f)]) * (c-k)
    d1 = key(N[int(c)]) * (k-f)
    return d0+d1

# median is 50th percentile.
median = functools.partial(percentile, percent=0.5)
## end of http://code.activestate.com/recipes/511478/ }}}

回答 2

import numpy as np
a = [154, 400, 1124, 82, 94, 108]
print np.percentile(a,95) # gives the 95th percentile

回答 3

这是不使用numpy的方法,仅使用python计算百分位数。

import math

def percentile(data, percentile):
    size = len(data)
    return sorted(data)[int(math.ceil((size * percentile) / 100)) - 1]

p5 = percentile(mylist, 5)
p25 = percentile(mylist, 25)
p50 = percentile(mylist, 50)
p75 = percentile(mylist, 75)
p95 = percentile(mylist, 95)

Here’s how to do it without numpy, using only python to calculate the percentile.

import math

def percentile(data, percentile):
    size = len(data)
    return sorted(data)[int(math.ceil((size * percentile) / 100)) - 1]

p5 = percentile(mylist, 5)
p25 = percentile(mylist, 25)
p50 = percentile(mylist, 50)
p75 = percentile(mylist, 75)
p95 = percentile(mylist, 95)

回答 4

我通常看到的百分位数定义是:结果应当是所提供列表中的一个值,且列表中有 P 比例的值低于它……这意味着结果必须取自集合本身,而不是集合元素之间的插值。为此,您可以使用一个更简单的函数。

def percentile(N, P):
    """
    Find the percentile of a list of values

    @parameter N - A list of values.  N must be sorted.
    @parameter P - A float value from 0.0 to 1.0

    @return - The percentile of the values.
    """
    n = int(round(P * len(N) + 0.5))
    return N[n-1]

# A = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
# B = (15, 20, 35, 40, 50)
#
# print percentile(A, P=0.3)
# 4
# print percentile(A, P=0.8)
# 9
# print percentile(B, P=0.3)
# 20
# print percentile(B, P=0.8)
# 50

如果您希望从所提供的列表中获取这样一个值:列表中有 P 比例的值小于或等于它,请使用以下简单修改:

def percentile(N, P):
    n = int(round(P * len(N) + 0.5))
    if n > 1:
        return N[n-2]
    else:
        return N[0]

或使用@ijustlovemath建议的简化形式:

def percentile(N, P):
    n = max(int(round(P * len(N) + 0.5)), 2)
    return N[n-2]

The definition of percentile I usually see expects as a result the value from the supplied list below which P percent of values are found… which means the result must be from the set, not an interpolation between set elements. To get that, you can use a simpler function.

def percentile(N, P):
    """
    Find the percentile of a list of values

    @parameter N - A list of values.  N must be sorted.
    @parameter P - A float value from 0.0 to 1.0

    @return - The percentile of the values.
    """
    n = int(round(P * len(N) + 0.5))
    return N[n-1]

# A = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
# B = (15, 20, 35, 40, 50)
#
# print percentile(A, P=0.3)
# 4
# print percentile(A, P=0.8)
# 9
# print percentile(B, P=0.3)
# 20
# print percentile(B, P=0.8)
# 50

If you would rather get the value from the supplied list at or below which P percent of values are found, then use this simple modification:

def percentile(N, P):
    n = int(round(P * len(N) + 0.5))
    if n > 1:
        return N[n-2]
    else:
        return N[0]

Or with the simplification suggested by @ijustlovemath:

def percentile(N, P):
    n = max(int(round(P * len(N) + 0.5)), 2)
    return N[n-2]

回答 5

从 Python 3.8 开始,标准库在 statistics 模块中提供了 quantiles 函数:

from statistics import quantiles

quantiles([1, 2, 3, 4, 5], n=100)
# [0.06, 0.12, 0.18, 0.24, 0.3, 0.36, 0.42, 0.48, 0.54, 0.6, 0.66, 0.72, 0.78, 0.84, 0.9, 0.96, 1.02, 1.08, 1.14, 1.2, 1.26, 1.32, 1.38, 1.44, 1.5, 1.56, 1.62, 1.68, 1.74, 1.8, 1.86, 1.92, 1.98, 2.04, 2.1, 2.16, 2.22, 2.28, 2.34, 2.4, 2.46, 2.52, 2.58, 2.64, 2.7, 2.76, 2.82, 2.88, 2.94, 3.0, 3.06, 3.12, 3.18, 3.24, 3.3, 3.36, 3.42, 3.48, 3.54, 3.6, 3.66, 3.72, 3.78, 3.84, 3.9, 3.96, 4.02, 4.08, 4.14, 4.2, 4.26, 4.32, 4.38, 4.44, 4.5, 4.56, 4.62, 4.68, 4.74, 4.8, 4.86, 4.92, 4.98, 5.04, 5.1, 5.16, 5.22, 5.28, 5.34, 5.4, 5.46, 5.52, 5.58, 5.64, 5.7, 5.76, 5.82, 5.88, 5.94]
quantiles([1, 2, 3, 4, 5], n=100)[49] # 50th percentile (e.g median)
# 3.0

对于给定的分布 dist,quantiles 返回 n - 1 个切分点的列表,这些切分点将 dist 划分为 n 个等概率的连续区间(即 n 分位数区间):

statistics.quantiles(dist, *, n=4, method='exclusive')

其中 n 在我们的情况下(百分位数)是 100。

Starting Python 3.8, the standard library comes with the quantiles function as part of the statistics module:

from statistics import quantiles

quantiles([1, 2, 3, 4, 5], n=100)
# [0.06, 0.12, 0.18, 0.24, 0.3, 0.36, 0.42, 0.48, 0.54, 0.6, 0.66, 0.72, 0.78, 0.84, 0.9, 0.96, 1.02, 1.08, 1.14, 1.2, 1.26, 1.32, 1.38, 1.44, 1.5, 1.56, 1.62, 1.68, 1.74, 1.8, 1.86, 1.92, 1.98, 2.04, 2.1, 2.16, 2.22, 2.28, 2.34, 2.4, 2.46, 2.52, 2.58, 2.64, 2.7, 2.76, 2.82, 2.88, 2.94, 3.0, 3.06, 3.12, 3.18, 3.24, 3.3, 3.36, 3.42, 3.48, 3.54, 3.6, 3.66, 3.72, 3.78, 3.84, 3.9, 3.96, 4.02, 4.08, 4.14, 4.2, 4.26, 4.32, 4.38, 4.44, 4.5, 4.56, 4.62, 4.68, 4.74, 4.8, 4.86, 4.92, 4.98, 5.04, 5.1, 5.16, 5.22, 5.28, 5.34, 5.4, 5.46, 5.52, 5.58, 5.64, 5.7, 5.76, 5.82, 5.88, 5.94]
quantiles([1, 2, 3, 4, 5], n=100)[49] # 50th percentile (e.g median)
# 3.0

quantiles returns for a given distribution dist a list of n - 1 cut points separating the n quantile intervals (division of dist into n continuous intervals with equal probability):

statistics.quantiles(dist, *, n=4, method='exclusive')

where n, in our case (percentiles) is 100.


回答 6

检查scipy.stats模块:

 scipy.stats.scoreatpercentile

check for scipy.stats module:

 scipy.stats.scoreatpercentile
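For example (a small usage sketch, not part of the original answer):

import numpy as np
from scipy import stats

a = np.array([1, 2, 3, 4, 5])
print(stats.scoreatpercentile(a, 50))   # 3.0, the median
print(stats.scoreatpercentile(a, 95))   # 95th percentile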

回答 7

要计算系列的百分位数,请运行:

from scipy.stats import rankdata
import numpy as np

def calc_percentile(a, method='min'):
    if isinstance(a, list):
        a = np.asarray(a)
    return rankdata(a, method=method) / float(len(a))

例如:

a = range(20)
print {val: round(percentile, 3) for val, percentile in zip(a, calc_percentile(a))}
>>> {0: 0.05, 1: 0.1, 2: 0.15, 3: 0.2, 4: 0.25, 5: 0.3, 6: 0.35, 7: 0.4, 8: 0.45, 9: 0.5, 10: 0.55, 11: 0.6, 12: 0.65, 13: 0.7, 14: 0.75, 15: 0.8, 16: 0.85, 17: 0.9, 18: 0.95, 19: 1.0}

To calculate the percentile of a series, run:

from scipy.stats import rankdata
import numpy as np

def calc_percentile(a, method='min'):
    if isinstance(a, list):
        a = np.asarray(a)
    return rankdata(a, method=method) / float(len(a))

For example:

a = range(20)
print {val: round(percentile, 3) for val, percentile in zip(a, calc_percentile(a))}
>>> {0: 0.05, 1: 0.1, 2: 0.15, 3: 0.2, 4: 0.25, 5: 0.3, 6: 0.35, 7: 0.4, 8: 0.45, 9: 0.5, 10: 0.55, 11: 0.6, 12: 0.65, 13: 0.7, 14: 0.75, 15: 0.8, 16: 0.85, 17: 0.9, 18: 0.95, 19: 1.0}

回答 8

如果您需要答案成为输入numpy数组的成员,请执行以下操作:

只是要补充一点,默认情况下numpy中的percentile函数将输出计算为输入向量中两个相邻条目的线性加权平均值。在某些情况下,人们可能希望返回的百分位数是向量的实际元素,在这种情况下,从v1.9.0开始,您可以使用“插值”选项,并使用“较低”,“较高”或“最近”。

import numpy as np
x=np.random.uniform(10,size=(1000))-5.0

np.percentile(x,70) # 70th percentile

2.075966046220879

np.percentile(x,70,interpolation="nearest")

2.0729677997904314

后者是向量中的一个实际元素,而前者则是与该百分位数相邻的两个向量元素的线性插值。

In case you need the answer to be a member of the input numpy array:

Just to add that the percentile function in numpy by default calculates the output as a linear weighted average of the two neighboring entries in the input vector. In some cases people may want the returned percentile to be an actual element of the vector, in this case, from v1.9.0 onwards you can use the “interpolation” option, with either “lower”, “higher” or “nearest”.

import numpy as np
x=np.random.uniform(10,size=(1000))-5.0

np.percentile(x,70) # 70th percentile

2.075966046220879

np.percentile(x,70,interpolation="nearest")

2.0729677997904314

The latter is an actual entry in the vector, while the former is a linear interpolation of two vector entries that border the percentile


回答 9

对于 Series:使用 describe 函数

假设您的 df 包含 sales 和 id 两列。您想计算 sales 的百分位数,用法如下:

df['sales'].describe(percentiles = [0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1])

0.0: minimum
1: maximum
0.1: 10th percentile, and so on

For a series: use the describe function

Suppose your df has the columns sales and id. If you want to calculate percentiles for sales, it works like this:

df['sales'].describe(percentiles = [0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1])

0.0: minimum
1: maximum
0.1: 10th percentile, and so on
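Since the answer assumes an existing df, here is a small self-contained sketch (the column values are made up for illustration):

import numpy as np
import pandas as pd

# Hypothetical dataframe with id and sales columns.
df = pd.DataFrame({'id': range(10),
                   'sales': np.random.randint(100, 1000, size=10)})
print(df['sales'].describe(percentiles=[0.1, 0.25, 0.5, 0.75, 0.9]))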

回答 10

计算一维 numpy 序列或矩阵的百分位数的便捷方法是使用 numpy.percentile(https://docs.scipy.org/doc/numpy/reference/generated/numpy.percentile.html)。例如:

import numpy as np

a = np.array([0,1,2,3,4,5,6,7,8,9,10])
p50 = np.percentile(a, 50) # return 50th percentile, e.g median.
p90 = np.percentile(a, 90) # return 90th percentile.
print('median = ',p50,' and p90 = ',p90) # median =  5.0  and p90 =  9.0

但是,如果您的数据中有任何 NaN 值,上述函数就不适用了。这种情况下,推荐使用 numpy.nanpercentile(https://docs.scipy.org/doc/numpy/reference/generated/numpy.nanpercentile.html)函数:

import numpy as np

a_NaN = np.array([0.,1.,2.,3.,4.,5.,6.,7.,8.,9.,10.])
a_NaN[0] = np.nan
print('a_NaN',a_NaN)
p50 = np.nanpercentile(a_NaN, 50) # return 50th percentile, e.g median.
p90 = np.nanpercentile(a_NaN, 90) # return 90th percentile.
print('median = ',p50,' and p90 = ',p90) # median =  5.5  and p90 =  9.1

在上面显示的两个选项中,您仍然可以选择插值模式。请按照以下示例进行操作,以便于理解。

import numpy as np

b = np.array([1,2,3,4,5,6,7,8,9,10])
print('percentiles using default interpolation')
p10 = np.percentile(b, 10) # return 10th percentile.
p50 = np.percentile(b, 50) # return 50th percentile, e.g median.
p90 = np.percentile(b, 90) # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  1.9 , median =  5.5  and p90 =  9.1

print('percentiles using interpolation = ', "linear")
p10 = np.percentile(b, 10,interpolation='linear') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='linear') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='linear') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  1.9 , median =  5.5  and p90 =  9.1

print('percentiles using interpolation = ', "lower")
p10 = np.percentile(b, 10,interpolation='lower') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='lower') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='lower') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  1 , median =  5  and p90 =  9

print('percentiles using interpolation = ', "higher")
p10 = np.percentile(b, 10,interpolation='higher') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='higher') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='higher') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  2 , median =  6  and p90 =  10

print('percentiles using interpolation = ', "midpoint")
p10 = np.percentile(b, 10,interpolation='midpoint') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='midpoint') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='midpoint') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  1.5 , median =  5.5  and p90 =  9.5

print('percentiles using interpolation = ', "nearest")
p10 = np.percentile(b, 10,interpolation='nearest') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='nearest') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='nearest') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  2 , median =  5  and p90 =  9

如果您的输入数组只包含整数值,您可能希望百分位数结果也是整数。如果是这样,请选择 'lower'、'higher' 或 'nearest' 等插值模式。

A convenient way to calculate percentiles for a one-dimensional numpy sequence or matrix is by using numpy.percentile <https://docs.scipy.org/doc/numpy/reference/generated/numpy.percentile.html>. Example:

import numpy as np

a = np.array([0,1,2,3,4,5,6,7,8,9,10])
p50 = np.percentile(a, 50) # return 50th percentile, e.g median.
p90 = np.percentile(a, 90) # return 90th percentile.
print('median = ',p50,' and p90 = ',p90) # median =  5.0  and p90 =  9.0

However, if there is any NaN value in your data, the above function will not be useful. The recommended function to use in that case is the numpy.nanpercentile <https://docs.scipy.org/doc/numpy/reference/generated/numpy.nanpercentile.html> function:

import numpy as np

a_NaN = np.array([0.,1.,2.,3.,4.,5.,6.,7.,8.,9.,10.])
a_NaN[0] = np.nan
print('a_NaN',a_NaN)
p50 = np.nanpercentile(a_NaN, 50) # return 50th percentile, e.g median.
p90 = np.nanpercentile(a_NaN, 90) # return 90th percentile.
print('median = ',p50,' and p90 = ',p90) # median =  5.5  and p90 =  9.1

In the two options presented above, you can still choose the interpolation mode. Follow the examples below for easier understanding.

import numpy as np

b = np.array([1,2,3,4,5,6,7,8,9,10])
print('percentiles using default interpolation')
p10 = np.percentile(b, 10) # return 10th percentile.
p50 = np.percentile(b, 50) # return 50th percentile, e.g median.
p90 = np.percentile(b, 90) # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  1.9 , median =  5.5  and p90 =  9.1

print('percentiles using interpolation = ', "linear")
p10 = np.percentile(b, 10,interpolation='linear') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='linear') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='linear') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  1.9 , median =  5.5  and p90 =  9.1

print('percentiles using interpolation = ', "lower")
p10 = np.percentile(b, 10,interpolation='lower') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='lower') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='lower') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  1 , median =  5  and p90 =  9

print('percentiles using interpolation = ', "higher")
p10 = np.percentile(b, 10,interpolation='higher') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='higher') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='higher') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  2 , median =  6  and p90 =  10

print('percentiles using interpolation = ', "midpoint")
p10 = np.percentile(b, 10,interpolation='midpoint') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='midpoint') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='midpoint') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  1.5 , median =  5.5  and p90 =  9.5

print('percentiles using interpolation = ', "nearest")
p10 = np.percentile(b, 10,interpolation='nearest') # return 10th percentile.
p50 = np.percentile(b, 50,interpolation='nearest') # return 50th percentile, e.g median.
p90 = np.percentile(b, 90,interpolation='nearest') # return 90th percentile.
print('p10 = ',p10,', median = ',p50,' and p90 = ',p90)
#p10 =  2 , median =  5  and p90 =  9

If your input array only consists of integer values, you might be interested in the percentile answer as an integer. If so, choose an interpolation mode such as ‘lower’, ‘higher’, or ‘nearest’.


如何在NumPy中标准化数组?

问题:如何在NumPy中标准化数组?

我想计算一个 NumPy 数组的范数。更具体地说,我在寻找与以下函数等效的版本

def normalize(v):
    norm = np.linalg.norm(v)
    if norm == 0: 
       return v
    return v / norm

sklearn 或 numpy 中有类似的东西吗?

该函数在 v 是零向量的情况下也能正常工作。

I would like to have the norm of one NumPy array. More specifically, I am looking for an equivalent version of this function

def normalize(v):
    norm = np.linalg.norm(v)
    if norm == 0: 
       return v
    return v / norm

Is there something like that in sklearn or numpy?

This function works in a situation where v is the 0 vector.


回答 0

如果您使用的是scikit-learn,则可以使用sklearn.preprocessing.normalize

import numpy as np
from sklearn.preprocessing import normalize

x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = normalize(x[:,np.newaxis], axis=0).ravel()
print np.all(norm1 == norm2)
# True

If you’re using scikit-learn you can use sklearn.preprocessing.normalize:

import numpy as np
from sklearn.preprocessing import normalize

x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = normalize(x[:,np.newaxis], axis=0).ravel()
print np.all(norm1 == norm2)
# True

回答 1

我同意,如果这样的函数是“自带电池”(标准库内置功能)的一部分就好了。但据我所知并没有。下面是一个适用于任意轴、并且性能最优的版本。

import numpy as np

def normalized(a, axis=-1, order=2):
    l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
    l2[l2==0] = 1
    return a / np.expand_dims(l2, axis)

A = np.random.randn(3,3,3)
print(normalized(A,0))
print(normalized(A,1))
print(normalized(A,2))

print(normalized(np.arange(3)[:,None]))
print(normalized(np.arange(3)))

I would agree that it were nice if such a function was part of the included batteries. But it isn’t, as far as I know. Here is a version for arbitrary axes, and giving optimal performance.

import numpy as np

def normalized(a, axis=-1, order=2):
    l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
    l2[l2==0] = 1
    return a / np.expand_dims(l2, axis)

A = np.random.randn(3,3,3)
print(normalized(A,0))
print(normalized(A,1))
print(normalized(A,2))

print(normalized(np.arange(3)[:,None]))
print(normalized(np.arange(3)))

回答 2

您可以指定ord以获得L1范数。为了避免零除,我使用了eps,但这可能不是很好。

def normalize(v):
    norm=np.linalg.norm(v, ord=1)
    if norm==0:
        norm=np.finfo(v.dtype).eps
    return v/norm

You can specify ord to get the L1 norm. To avoid zero division I use eps, but that’s maybe not great.

def normalize(v):
    norm=np.linalg.norm(v, ord=1)
    if norm==0:
        norm=np.finfo(v.dtype).eps
    return v/norm

回答 3

这可能也适合您

import numpy as np
normalized_v = v / np.sqrt(np.sum(v**2))

但在v长度为0 时失败。

This might also work for you

import numpy as np
normalized_v = v / np.sqrt(np.sum(v**2))

but fails when v has length 0.


回答 4

如果您具有多维数据,并且希望将每个轴归一化为其最大值或总和:

def normalize(_d, to_sum=True, copy=True):
    # d is a (n x dimension) np array
    d = _d if not copy else np.copy(_d)
    d -= np.min(d, axis=0)
    d /= (np.sum(d, axis=0) if to_sum else np.ptp(d, axis=0))
    return d

使用了 numpy 的 peak-to-peak(np.ptp)函数。

a = np.random.random((5, 3))

b = normalize(a, copy=False)
b.sum(axis=0) # array([1., 1., 1.]), the rows sum to 1

c = normalize(a, to_sum=False, copy=False)
c.max(axis=0) # array([1., 1., 1.]), the max of each row is 1

If you have multidimensional data and want each axis normalized to its max or its sum:

def normalize(_d, to_sum=True, copy=True):
    # d is a (n x dimension) np array
    d = _d if not copy else np.copy(_d)
    d -= np.min(d, axis=0)
    d /= (np.sum(d, axis=0) if to_sum else np.ptp(d, axis=0))
    return d

Uses numpys peak to peak function.

a = np.random.random((5, 3))

b = normalize(a, copy=False)
b.sum(axis=0) # array([1., 1., 1.]), the rows sum to 1

c = normalize(a, to_sum=False, copy=False)
c.max(axis=0) # array([1., 1., 1.]), the max of each row is 1

回答 5

Christoph Gohlke 编写的流行的 transformations 模块中也有一个 unit_vector() 函数,可用于将向量归一化:

import transformations as trafo
import numpy as np

data = np.array([[1.0, 1.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [1.0, 2.0, 3.0]])

print(trafo.unit_vector(data, axis=1))

There is also the function unit_vector() to normalize vectors in the popular transformations module by Christoph Gohlke:

import transformations as trafo
import numpy as np

data = np.array([[1.0, 1.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [1.0, 2.0, 3.0]])

print(trafo.unit_vector(data, axis=1))

回答 6

您提到了 scikit-learn,所以我想分享另一个解决方案。

scikit-learn 的 MinMaxScaler

在 scikit-learn 中,有一个名为 MinMaxScaler 的 API,可以根据需要自定义取值范围。

它还为我们处理了NaN问题。

NaN 被视为缺失值:在 fit 时被忽略,在 transform 时被保留。……参见参考文献 [1]

代码样例

代码很简单,只需键入

# Let's say X_train is your input dataframe
from sklearn.preprocessing import MinMaxScaler
# call MinMaxScaler object
min_max_scaler = MinMaxScaler()
# feed in a numpy array
X_train_norm = min_max_scaler.fit_transform(X_train.values)
# wrap it up if you need a dataframe
df = pd.DataFrame(X_train_norm)
参考

You mentioned sci-kit learn, so I want to share another solution.

sci-kit learn MinMaxScaler

In sci-kit learn, there is a API called MinMaxScaler which can customize the the value range as you like.

It also deal with NaN issues for us.

NaNs are treated as missing values: disregarded in fit, and maintained in transform. … see reference [1]

Code sample

The code is simple, just type

# Let's say X_train is your input dataframe
from sklearn.preprocessing import MinMaxScaler
# call MinMaxScaler object
min_max_scaler = MinMaxScaler()
# feed in a numpy array
X_train_norm = min_max_scaler.fit_transform(X_train.values)
# wrap it up if you need a dataframe
df = pd.DataFrame(X_train_norm)
Reference

回答 7

不使用 sklearn,只用 numpy。只需定义一个函数即可:

假设行是变量列是样本axis= 1):

import numpy as np

# Example array
X = np.array([[1,2,3],[4,5,6]])

def stdmtx(X):
    means = X.mean(axis =1)
    stds = X.std(axis= 1, ddof=1)
    X= X - means[:, np.newaxis]
    X= X / stds[:, np.newaxis]
    return np.nan_to_num(X)

输出:

X
array([[1, 2, 3],
       [4, 5, 6]])

stdmtx(X)
array([[-1.,  0.,  1.],
       [-1.,  0.,  1.]])

Without sklearn and using just numpy. Just define a function:

Assuming that the rows are the variables and the columns the samples (axis= 1):

import numpy as np

# Example array
X = np.array([[1,2,3],[4,5,6]])

def stdmtx(X):
    means = X.mean(axis =1)
    stds = X.std(axis= 1, ddof=1)
    X= X - means[:, np.newaxis]
    X= X / stds[:, np.newaxis]
    return np.nan_to_num(X)

output:

X
array([[1, 2, 3],
       [4, 5, 6]])

stdmtx(X)
array([[-1.,  0.,  1.],
       [-1.,  0.,  1.]])


回答 8

如果要标准化存储在3D张量中的n维特征向量,则也可以使用PyTorch:

import numpy as np
from torch import FloatTensor
from torch.nn.functional import normalize

vecs = np.random.rand(3, 16, 16, 16)
norm_vecs = normalize(FloatTensor(vecs), dim=0, eps=1e-16).numpy()

If you want to normalize n dimensional feature vectors stored in a 3D tensor, you could also use PyTorch:

import numpy as np
from torch import FloatTensor
from torch.nn.functional import normalize

vecs = np.random.rand(3, 16, 16, 16)
norm_vecs = normalize(FloatTensor(vecs), dim=0, eps=1e-16).numpy()

回答 9

如果您处理的是 3D 向量,可以使用 vg 这个工具库简洁地完成此操作。它是 numpy 之上的一个轻量层,支持单个值和堆叠的向量。

import numpy as np
import vg

x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = vg.normalize(x)
print np.all(norm1 == norm2)
# True

这个库是我在上一家创业公司时创建的,其动机正是这类用途:在 NumPy 中写起来过于冗长的简单想法。

If you’re working with 3D vectors, you can do this concisely using the toolbelt vg. It’s a light layer on top of numpy and it supports single values and stacked vectors.

import numpy as np
import vg

x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = vg.normalize(x)
print np.all(norm1 == norm2)
# True

I created the library at my last startup, where it was motivated by uses like this: simple ideas which are way too verbose in NumPy.


回答 10

如果您不需要极高的精度,则可以将函数简化为:

v_norm = v / (np.linalg.norm(v) + 1e-16)

If you don’t need utmost precision, your function can be reduced to:

v_norm = v / (np.linalg.norm(v) + 1e-16)

回答 11

如果您处理的是多维数组,可以使用下面这个快速解决方案。

假设我们有2D数组,我们想通过最后一个轴对其进行归一化,而有些行的范数为零。

import numpy as np
arr = np.array([
    [1, 2, 3], 
    [0, 0, 0],
    [5, 6, 7]
], dtype=np.float)

lengths = np.linalg.norm(arr, axis=-1)
print(lengths)  # [ 3.74165739  0.         10.48808848]
arr[lengths > 0] = arr[lengths > 0] / lengths[lengths > 0][:, np.newaxis]
print(arr)
# [[0.26726124 0.53452248 0.80178373]
# [0.         0.         0.        ]
# [0.47673129 0.57207755 0.66742381]]

If you work with multidimensional array following fast solution is possible.

Say we have 2D array, which we want to normalize by last axis, while some rows have zero norm.

import numpy as np
arr = np.array([
    [1, 2, 3], 
    [0, 0, 0],
    [5, 6, 7]
], dtype=np.float)

lengths = np.linalg.norm(arr, axis=-1)
print(lengths)  # [ 3.74165739  0.         10.48808848]
arr[lengths > 0] = arr[lengths > 0] / lengths[lengths > 0][:, np.newaxis]
print(arr)
# [[0.26726124 0.53452248 0.80178373]
# [0.         0.         0.        ]
# [0.47673129 0.57207755 0.66742381]]