Tag Archives: pandas

Add a string prefix to each value in a string column using pandas

Question: Add a string prefix to each value in a string column using pandas

I would like to append a string to the start of each value in a said column of a pandas dataframe (elegantly). I already figured out how to kind-of do this and I am currently using:

df.ix[(df['col'] != False), 'col'] = 'str'+df[(df['col'] != False), 'col']

This seems one hell of an inelegant thing to do – do you know any other way (which maybe also adds the character to rows where that column is 0 or NaN)?

In case this is yet unclear, I would like to turn:

    col 
1     a
2     0

into:

       col 
1     stra
2     str0

Answer 0

df['col'] = 'str' + df['col'].astype(str)

Example:

>>> df = pd.DataFrame({'col':['a',0]})
>>> df
  col
0   a
1   0
>>> df['col'] = 'str' + df['col'].astype(str)
>>> df
    col
0  stra
1  str0

Answer 1

As an alternative, you can also use an apply combined with format (or better with f-strings) which I find slightly more readable if one e.g. also wants to add a suffix or manipulate the element itself:

df = pd.DataFrame({'col':['a', 0]})

df['col'] = df['col'].apply(lambda x: "{}{}".format('str', x))

which also yields the desired output:

    col
0  stra
1  str0

If you are using Python 3.6+, you can also use f-strings:

df['col'] = df['col'].apply(lambda x: f"str{x}")

yielding the same output.

The f-string version is almost as fast as @RomanPekar’s solution (python 3.6.4):

df = pd.DataFrame({'col':['a', 0]*200000})

%timeit df['col'].apply(lambda x: f"str{x}")
117 ms ± 451 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit 'str' + df['col'].astype(str)
112 ms ± 1.04 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Using format, however, is indeed far slower:

%timeit df['col'].apply(lambda x: "{}{}".format('str', x))
185 ms ± 1.07 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Answer 2

You can use pandas.Series.map:

df['col'].map('str{}'.format)

It will prepend the word “str” to all of your values.
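
As a quick check, a minimal sketch using the sample data from the question:

import pandas as pd

df = pd.DataFrame({'col': ['a', 0]})
df['col'] = df['col'].map('str{}'.format)  # format handles non-strings like 0 too
print(df)
#     col
# 0  stra
# 1  str0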


Answer 3

If you load your table file with dtype=str,
or convert the column type to string with df['a'] = df['a'].astype(str),
then you can use the following approach:

df['a']= 'col' + df['a'].str[:]

This approach allows you to prepend, append, and take substrings of the strings in df.
It works on Pandas v0.23.4 and v0.24.1. I don't know about earlier versions.
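
For illustration, a minimal sketch of all three operations on made-up data:

import pandas as pd

df = pd.DataFrame({'a': ['x1', 'y2']})
df['a'] = df['a'].astype(str)

df['a'] = 'col' + df['a'].str[:]    # prepend -> 'colx1', 'coly2'
df['a'] = df['a'].str[:] + '_end'   # append  -> 'colx1_end', 'coly2_end'
print(df['a'].str[0:3])             # substring -> 'col', 'col'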


Answer 4

Another solution with .loc:

df = pd.DataFrame({'col': ['a', 0]})
df.loc[df.index, 'col'] = 'string' + df['col'].astype(str)

This is not as quick as solutions above (>1ms per loop slower) but may be useful in case you need conditional change, like:

mask = (df['col'] == 0)
df.loc[mask, 'col'] = 'string' + df['col'].astype(str)

Ignoring NaNs with str.contains

Question: Ignoring NaNs with str.contains

I want to find rows that contain a string, like so:

DF[DF.col.str.contains("foo")]

However, this fails because some elements are NaN:

ValueError: cannot index with vector containing NA / NaN values

So I resort to the obfuscated

DF[DF.col.notnull()][DF.col.dropna().str.contains("foo")]

Is there a better way?


Answer 0

There’s a flag for that:

In [11]: df = pd.DataFrame([["foo1"], ["foo2"], ["bar"], [np.nan]], columns=['a'])

In [12]: df.a.str.contains("foo")
Out[12]:
0     True
1     True
2    False
3      NaN
Name: a, dtype: object

In [13]: df.a.str.contains("foo", na=False)
Out[13]:
0     True
1     True
2    False
3    False
Name: a, dtype: bool

See the str.contains docs:

na : default NaN, fill value for missing values.


So you can do the following:

In [21]: df.loc[df.a.str.contains("foo", na=False)]
Out[21]:
      a
0  foo1
1  foo2

Answer 1

In addition to the above answers, I would say that for columns whose name is not a single word, you may use:

df[df['Product ID'].str.contains("foo") == True]

Hope this helps.
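
The == True comparison also takes care of NaNs, because comparing NaN with True yields False; a minimal sketch:

import pandas as pd
import numpy as np

df = pd.DataFrame({'Product ID': ['foo1', np.nan]})
mask = df['Product ID'].str.contains('foo')  # [True, NaN]
print(mask == True)      # [True, False] -- the NaN compares as False
print(df[mask == True])  # only the 'foo1' row survives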


Answer 2

I’m not 100% on why (actually came here to search for the answer), but this also works, and doesn’t require replacing all nan values.

import pandas as pd
import numpy as np

df = pd.DataFrame([["foo1"], ["foo2"], ["bar"], [np.nan]], columns=['a'])

newdf = df.loc[df['a'].str.contains('foo') == True]

Works with or without .loc.

I have no idea why this works, as I understand it when you’re indexing with brackets pandas evaluates whatever’s inside the bracket as either True or False. I can’t tell why making the phrase inside the brackets ‘extra boolean’ has any effect at all.


Answer 3

You can also pass a pattern:

DF[DF.col.str.contains(pat = '(foo)', regex = True) ]

Answer 4

import folium
import pandas

data= pandas.read_csv("maps.txt")

lat = list(data["latitude"])
lon = list(data["longitude"])

map= folium.Map(location=[31.5204, 74.3587], zoom_start=6, tiles="Mapbox Bright")

fg = folium.FeatureGroup(name="My Map")

for lt, ln in zip(lat, lon):
    c1 = fg.add_child(folium.Marker(location=[lt, ln], popup="Hi i am a Country",icon=folium.Icon(color='green')))

child = fg.add_child(folium.Marker(location=[31.5204, 74.5387], popup="Welcome to Lahore", icon= folium.Icon(color='green')))

map.add_child(fg)

map.save("Lahore.html")


Traceback (most recent call last):
  File "C:\Users\Ryan\AppData\Local\Programs\Python\Python36-32\check2.py", line 14, in <module>
    c1 = fg.add_child(folium.Marker(location=[lt, ln], popup="Hi i am a Country",icon=folium.Icon(color='green')))
  File "C:\Users\Ryan\AppData\Local\Programs\Python\Python36-32\lib\site-packages\folium\map.py", line 647, in __init__
    self.location = _validate_coordinates(location)
  File "C:\Users\Ryan\AppData\Local\Programs\Python\Python36-32\lib\site-packages\folium\utilities.py", line 48, in _validate_coordinates
    'got:\n{!r}'.format(coordinates))
ValueError: Location values cannot contain NaNs, got:
[nan, nan]

A column-vector y was passed when a 1d array was expected

Question: A column-vector y was passed when a 1d array was expected

I need to fit RandomForestRegressor from sklearn.ensemble.

forest = ensemble.RandomForestRegressor(**RF_tuned_parameters)
model = forest.fit(train_fold, train_y)
yhat = model.predict(test_fold)

This code always worked until I made some preprocessing of data (train_y). The error message says:

DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().

model = forest.fit(train_fold, train_y)

Previously train_y was a Series; now it's a numpy array (a column vector). If I apply train_y.ravel(), it becomes a row vector and no error message appears, though the prediction step takes a very long time (actually, it never finishes…).

In the docs of RandomForestRegressor I found that train_y should be defined as y : array-like, shape = [n_samples] or [n_samples, n_outputs]. Any idea how to solve this issue?


Answer 0

Change this line:

model = forest.fit(train_fold, train_y)

to:

model = forest.fit(train_fold, train_y.values.ravel())

Edit:

.values will give the values in an array (shape: (n, 1)).

.ravel will convert that array's shape to (n,).
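
A quick shape check (a minimal sketch with made-up data):

import pandas as pd

train_y = pd.DataFrame({'y': [1, 2, 3]})
print(train_y.values.shape)          # (3, 1) -- a column vector
print(train_y.values.ravel().shape)  # (3,)   -- the flat shape sklearn expects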


Answer 1

I also encountered this situation when I was trying to train a KNN classifier, but it seems that the warning was gone after I changed:

knn.fit(X_train, y_train)

to

knn.fit(X_train, np.ravel(y_train, order='C'))

Ahead of this line I used import numpy as np.


Answer 2

I had the same problem. The problem was that the labels were in a column format while it expected them in a row. Use np.ravel():

knn.score(training_set, np.ravel(training_labels))

Hope this solves it.


Answer 3

Use the code below:

model = forest.fit(train_fold, train_y.ravel())

If you are still getting an error identical to the one below:

Unknown label type: %r" % y

Use this code:

y = train_y.ravel()
train_y = np.array(y).astype(int)
model = forest.fit(train_fold, train_y)

Answer 4

Another way of doing this is to use reshape:

model = forest.fit(train_fold, train_y.values.reshape(-1,))

Answer 5

With neuraxle, you can easily solve this:

p = Pipeline([
   # expected outputs shape: (n, 1)
   OutputTransformerWrapper(NumpyRavel()), 
   # expected outputs shape: (n, )
   RandomForestRegressor(**RF_tuned_parameters)
])

p, outputs = p.fit_transform(data_inputs, expected_outputs)

Neuraxle is a sklearn-like framework for hyperparameter tuning and AutoML in deep learning projects!


Answer 6

# Flatten the column vector by taking the first element of each row
format_train_y = []
for n in train_y:
    format_train_y.append(n[0])

Answer 7

Y = y.values[:, 0]

Here Y corresponds to formated_train_y and y to train_y from the answers above.


What is the right way to reverse a pandas.DataFrame?

Question: What is the right way to reverse a pandas.DataFrame?

Here is my code:

import pandas as pd

data = pd.DataFrame({'Odd':[1,3,5,6,7,9], 'Even':[0,2,4,6,8,10]})

for i in reversed(data):
    print(data['Odd'], data['Even'])

When I run this code, I get the following error:

Traceback (most recent call last):
  File "C:\Python33\lib\site-packages\pandas\core\generic.py", line 665, in _get_item_cache
    return cache[item]
KeyError: 5

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\*****\Documents\******\********\****.py", line 5, in <module>
    for i in reversed(data):
  File "C:\Python33\lib\site-packages\pandas\core\frame.py", line 2003, in __getitem__
    return self._get_item_cache(key)
  File "C:\Python33\lib\site-packages\pandas\core\generic.py", line 667, in _get_item_cache
    values = self._data.get(item)
  File "C:\Python33\lib\site-packages\pandas\core\internals.py", line 1656, in get
    _, block = self._find_block(item)
  File "C:\Python33\lib\site-packages\pandas\core\internals.py", line 1936, in _find_block
    self._check_have(item)
  File "C:\Python33\lib\site-packages\pandas\core\internals.py", line 1943, in _check_have
    raise KeyError('no item named %s' % com.pprint_thing(item))
KeyError: 'no item named 5'

Why am I getting this error?
How can I fix that?
What is the right way to reverse pandas.DataFrame?


Answer 0

data.reindex(index=data.index[::-1])

or simply:

data.iloc[::-1]

will reverse your data frame. If you want a for loop which goes from bottom to top, you may do:

for idx in reversed(data.index):
    print(idx, data.loc[idx, 'Even'], data.loc[idx, 'Odd'])

or

for idx in reversed(data.index):
    print(idx, data.Even[idx], data.Odd[idx])

You are getting an error because reversed first calls data.__len__(), which returns 6. Then it tries to call data[j - 1] for j in range(6, 0, -1), and the first call would be data[5]; but in a pandas DataFrame, data[5] means column 5, and there is no column 5, so an exception is thrown (see the docs).
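
To see the failure mode directly, a minimal sketch using the question's data:

import pandas as pd

data = pd.DataFrame({'Odd': [1, 3, 5, 6, 7, 9], 'Even': [0, 2, 4, 6, 8, 10]})
print(len(data))     # 6, via data.__len__()
print(data['Odd'])   # column access by name works
# data[5]  # would raise KeyError: there is no column named 5,
#          # which is exactly what reversed(data) runs into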


Answer 1

You can reverse the rows in an even simpler way:

df[::-1]
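
A quick check (a minimal sketch; note that the original index labels are kept, which the next answer addresses):

import pandas as pd

data = pd.DataFrame({'Odd': [1, 3, 5], 'Even': [0, 2, 4]})
print(data[::-1])
#    Odd  Even
# 2    5     4
# 1    3     2
# 0    1     0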

Answer 2

None of the existing answers resets the index after reversing the dataframe.

For this, do the following:

 data[::-1].reset_index()

Here’s a utility function that also removes the old index column, as per @Tim’s comment:

def reset_my_index(df):
  res = df[::-1].reset_index(drop=True)
  return(res)

Simply pass your dataframe into the function


Answer 3

This works:

    for i,r in data[::-1].iterrows():
        print(r['Odd'], r['Even'])

Find the maximum value of a column and return the corresponding row values using Pandas

Question: Find the maximum value of a column and return the corresponding row values using Pandas

Structure of data:

Using Python Pandas I am trying to find the Country & Place with the maximum value.

This returns the maximum value:

data.groupby(['Country','Place'])['Value'].max()

But how do I get the corresponding Country and Place name?


Answer 0

Assuming df has a unique index, this gives the row with the maximum value:

In [34]: df.loc[df['Value'].idxmax()]
Out[34]: 
Country        US
Place      Kansas
Value         894
Name: 7

Note that idxmax returns index labels. So if the DataFrame has duplicates in the index, the label may not uniquely identify the row, so df.loc may return more than one row.

Therefore, if df does not have a unique index, you must make the index unique before proceeding as above. Depending on the DataFrame, sometimes you can use stack or set_index to make the index unique. Or, you can simply reset the index (so the rows become renumbered, starting at 0):

df = df.reset_index()
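
To see the duplicate-index caveat concretely, a minimal sketch with made-up data:

import pandas as pd

df = pd.DataFrame({'Value': [894, 100]}, index=[7, 7])  # duplicate labels
print(df['Value'].idxmax())           # 7 -- a label, not a position
print(df.loc[df['Value'].idxmax()])   # returns BOTH rows labelled 7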

Answer 1

df[df['Value']==df['Value'].max()]

This will return the entire row with max value


Answer 2

The country and place are the index of the series; if you don't need the index, you can set as_index=False:

df.groupby(['country','place'], as_index=False)['value'].max()

Edit:

It seems that you want the place with the max value for every country; the following code will do what you want:

df.groupby("country").apply(lambda df:df.irow(df.value.argmax()))

Answer 3

I think the easiest way to return a row with the maximum value is by getting its index. argmax() can be used to return the index of the row with the largest value.

index = df.Value.argmax()

Now the index could be used to get the features for that particular row:

df.iloc[df.Value.argmax(), 0:2]

Answer 4

Use the index attribute of DataFrame. Note that I don’t type all the rows in the example.

In [14]: df = data.groupby(['Country','Place'])['Value'].max()

In [15]: df.index
Out[15]: 
MultiIndex
[Spain  Manchester, UK     London    , US     Mchigan   ,        NewYork   ]

In [16]: df.index[0]
Out[16]: ('Spain', 'Manchester')

In [17]: df.index[1]
Out[17]: ('UK', 'London')

You can also get the value by that index:

In [21]: for index in df.index:
    print index, df[index]
   ....:      
('Spain', 'Manchester') 512
('UK', 'London') 778
('US', 'Mchigan') 854
('US', 'NewYork') 562

Edit

Sorry for misunderstanding what you want; try the following:

In [52]: s=data.max()

In [53]: print '%s, %s, %s' % (s['Country'], s['Place'], s['Value'])
US, NewYork, 854

Answer 5

In order to print the Country and Place with maximum value, use the following line of code.

print(df[['Country', 'Place']][df.Value == df.Value.max()])

Answer 6

My solution for finding maximum values in columns:

df.ix[df.idxmax()]

and also the minimum:

df.ix[df.idxmin()]

Answer 7

I'd recommend using nlargest for better performance and shorter code:

import pandas

df[col_name].value_counts().nlargest(n=1)

Answer 8

You can use:

print(df[df['Value']==df['Value'].max()])

Answer 9

import pandas
df is the data frame you create.

Use the command:

df1=df[['Country','Place']][df.Value == df['Value'].max()]

This will display the country and place whose value is maximum.


Answer 10

I encountered a similar error while trying to import data using pandas. The first column of my dataset had spaces before the start of the words. I removed the spaces and it worked like a charm!


How to convert SQL query results to a PANDAS data structure?

Question: How to convert SQL query results to a PANDAS data structure?

Any help on this problem will be greatly appreciated.

So basically I want to run a query to my SQL database and store the returned data as Pandas data structure.

I have attached code for query.

I am reading the documentation on Pandas, but I have problem to identify the return type of my query.

I tried to print the query result, but it doesn’t give any useful information.

Thanks!!!!

from sqlalchemy import create_engine

engine2 = create_engine('mysql://THE DATABASE I AM ACCESSING')
connection2 = engine2.connect()
dataid = 1022
resoverall = connection2.execute("
  SELECT 
      sum(BLABLA) AS BLA,
      sum(BLABLABLA2) AS BLABLABLA2,
      sum(SOME_INT) AS SOME_INT,
      sum(SOME_INT2) AS SOME_INT2,
      100*sum(SOME_INT2)/sum(SOME_INT) AS ctr,
      sum(SOME_INT2)/sum(SOME_INT) AS cpc
   FROM daily_report_cooked
   WHERE campaign_id = '%s'", %dataid)

So I sort of want to understand what’s the format/datatype of my variable “resoverall” and how to put it with PANDAS data structure.


Answer 0

Here’s the shortest code that will do the job:

from pandas import DataFrame
df = DataFrame(resoverall.fetchall())
df.columns = resoverall.keys()

You can go fancier and parse the types as in Paul’s answer.


Answer 1

Edit: Mar. 2015

As noted below, pandas now uses SQLAlchemy to both read from (read_sql) and insert into (to_sql) a database. The following should work

import pandas as pd

df = pd.read_sql(sql, cnxn)

Previous answer: Via mikebmassey from a similar question

import pyodbc
import pandas.io.sql as psql

cnxn = pyodbc.connect(connection_info) 
cursor = cnxn.cursor()
sql = "SELECT * FROM TABLE"

df = psql.frame_query(sql, cnxn)
cnxn.close()

Answer 2

If you are using SQLAlchemy’s ORM rather than the expression language, you might find yourself wanting to convert an object of type sqlalchemy.orm.query.Query to a Pandas data frame.

The cleanest approach is to get the generated SQL from the query’s statement attribute, and then execute it with pandas’s read_sql() method. E.g., starting with a Query object called query:

df = pd.read_sql(query.statement, query.session.bind)

Answer 3

Edit 2014-09-30:

pandas now has a read_sql function. You definitely want to use that instead.

Original answer:

I can't help you with SQLAlchemy — I always use pyodbc, MySQLdb, or psycopg2 as needed. But when doing so, a function as simple as the one below tends to suit my needs:

import datetime
import decimal

import pyodbc
import numpy as np
import pandas

cnn, cur = myConnectToDBfunction()
cmd = "SELECT * FROM myTable"
cur.execute(cmd)
dataframe = __processCursor(cur, dataframe=True)

def __processCursor(cur, dataframe=False, index=None):
    '''
    Processes a database cursor with data on it into either
    a structured numpy array or a pandas dataframe.

    input:
    cur - a pyodbc cursor that has just received data
    dataframe - bool. if false, a numpy record array is returned
                if true, return a pandas dataframe
    index - list of column(s) to use as index in a pandas dataframe
    '''
    datatypes = []
    colinfo = cur.description
    for col in colinfo:
        if col[1] == unicode:
            datatypes.append((col[0], 'U%d' % col[3]))
        elif col[1] == str:
            datatypes.append((col[0], 'S%d' % col[3]))
        elif col[1] in [float, decimal.Decimal]:
            datatypes.append((col[0], 'f4'))
        elif col[1] == datetime.datetime:
            datatypes.append((col[0], 'O4'))
        elif col[1] == int:
            datatypes.append((col[0], 'i4'))

    data = []
    for row in cur:
        data.append(tuple(row))

    array = np.array(data, dtype=datatypes)
    if dataframe:
        output = pandas.DataFrame.from_records(array)

        if index is not None:
            output = output.set_index(index)

    else:
        output = array

    return output

Answer 4

MySQL Connector

For those who work with the mysql connector, you can use this code as a start. (Thanks to @Daniel Velkov)


import pandas as pd
import mysql.connector

# Setup MySQL connection
db = mysql.connector.connect(
    host="<IP>",              # your host, usually localhost
    user="<USER>",            # your username
    password="<PASS>",        # your password
    database="<DATABASE>"     # name of the data base
)   

# You must create a Cursor object. It will let you execute all the queries you need
cur = db.cursor()

# Use all the SQL you like
cur.execute("SELECT * FROM <TABLE>")

# Put it all to a data frame
sql_data = pd.DataFrame(cur.fetchall())
sql_data.columns = cur.column_names

# Close the session
db.close()

# Show the data
print(sql_data.head())

Answer 5

Here’s the code I use. Hope this helps.

import pandas as pd
from sqlalchemy import create_engine

def getData():
  # Parameters
  ServerName = "my_server"
  Database = "my_db"
  UserPwd = "user:pwd"
  Driver = "driver=SQL Server Native Client 11.0"

  # Create the connection
  engine = create_engine('mssql+pyodbc://' + UserPwd + '@' + ServerName + '/' + Database + "?" + Driver)

  sql = "select * from mytable"
  df = pd.read_sql(sql, engine)
  return df

df2 = getData()
print(df2)

Answer 6

This is a short and crisp answer to your problem:

from __future__ import print_function
import MySQLdb
import numpy as np
import pandas as pd
import xlrd

# Connecting to MySQL Database
connection = MySQLdb.connect(
             host="hostname",
             port=0000,
             user="userID",
             passwd="password",
             db="table_documents",
             charset='utf8'
           )
print(connection)
#getting data from database into a dataframe
sql_for_df = 'select * from tabledata'
df_from_database = pd.read_sql(sql_for_df , connection)

Answer 7

1. Using MySQL-connector-python

# pip install mysql-connector-python

import mysql.connector
import pandas as pd

mydb = mysql.connector.connect(
    host = 'host',
    user = 'username',
    passwd = 'pass',
    database = 'db_name'
)
query = 'select * from table_name'
df = pd.read_sql(query, con = mydb)
print(df)

2. Using SQLAlchemy

# pip install pymysql
# pip install sqlalchemy

import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine('mysql+pymysql://username:password@localhost:3306/db_name')

query = '''
select * from table_name
'''
df = pd.read_sql_query(query, engine)
print(df)

Answer 8

Like Nathan, I often want to dump the results of a sqlalchemy or sqlsoup Query into a Pandas data frame. My own solution for this is:

query = session.query(tbl.Field1, tbl.Field2)
DataFrame(query.all(), columns=[column['name'] for column in query.column_descriptions])

Answer 9

resoverall is a sqlalchemy ResultProxy object. You can read more about it in the sqlalchemy docs; the latter explain the basic usage of working with Engines and Connections. Important here is that resoverall is dict-like.

Pandas likes dict-like objects for creating its data structures; see the online docs.

Good luck with sqlalchemy and pandas.
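
For instance, a minimal sketch where the row dicts stand in for what the ResultProxy would yield (made-up values):

import pandas as pd

rows = [{'BLA': 10.0, 'ctr': 1.5}, {'BLA': 20.0, 'ctr': 2.5}]
df = pd.DataFrame(rows)  # columns are collected from the dict keys
print(df)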


Answer 10

Simply use pandas and pyodbc together. You’ll have to modify your connection string (connstr) according to your database specifications.

import pyodbc
import pandas as pd

# MSSQL Connection String Example
connstr = "Server=myServerAddress;Database=myDB;User Id=myUsername;Password=myPass;"

# Query Database and Create DataFrame Using Results
df = pd.read_sql("select * from myTable", pyodbc.connect(connstr))

I’ve used pyodbc with several enterprise databases (e.g. SQL Server, MySQL, MariaDB, IBM).


Answer 11

This question is old, but I wanted to add my two cents. I read the question as “I want to run a query to my [my]SQL database and store the returned data as a Pandas data structure [DataFrame].”

From the code it looks like you mean a mysql database, and I assume you mean a pandas DataFrame.

import MySQLdb as mdb
import pandas.io.sql as sql
from pandas import *

conn = mdb.connect('<server>','<user>','<pass>','<db>');
df = sql.read_frame('<query>', conn)

For example,

conn = mdb.connect('localhost','myname','mypass','testdb');
df = sql.read_frame('select * from testTable', conn)

This will import all rows of testTable into a DataFrame.


Answer 12

Here is mine, just in case you are using “pymysql”:

import pymysql
from pandas import DataFrame

host   = 'localhost'
port   = 3306
user   = 'yourUserName'
passwd = 'yourPassword'
db     = 'yourDatabase'

cnx    = pymysql.connect(host=host, port=port, user=user, passwd=passwd, db=db)
cur    = cnx.cursor()

query  = """ SELECT * FROM yourTable LIMIT 10"""
cur.execute(query)

field_names = [i[0] for i in cur.description]
get_data = [xx for xx in cur]

cur.close()
cnx.close()

df = DataFrame(get_data)
df.columns = field_names

Answer 13

pandas.io.sql.write_frame is DEPRECATED: https://pandas.pydata.org/pandas-docs/version/0.15.2/generated/pandas.io.sql.write_frame.html

You should change to using pandas.DataFrame.to_sql: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html

There is another solution: PYODBC to Pandas – DataFrame not working – Shape of passed values is (x,y), indices imply (w,z)

As of Pandas 0.12 (I believe) you can do:

import pandas
import pyodbc

sql = 'select * from table'
cnn = pyodbc.connect(...)

data = pandas.read_sql(sql, cnn)

Prior to 0.12, you could do:

import pandas
from pandas.io.sql import read_frame
import pyodbc

sql = 'select * from table'
cnn = pyodbc.connect(...)

data = read_frame(sql, cnn)

Answer 14

It's been a long time since the last post, but maybe it helps someone…

A shorter way than Paul H's:

my_dic = session.query(query.all())
my_df = pandas.DataFrame.from_dict(my_dic)

Answer 15

The way I do this:

import pandas as pd

db = db_class()  # database class
db.execute(query)
mydata = [x for x in db.fetchall()]
df = pd.DataFrame(data=mydata)

Answer 16

If the result type is ResultSet, you should convert it to a dictionary first. Then the DataFrame columns will be collected automatically.

This works in my case:

df = pd.DataFrame([dict(r) for r in resoverall])

Print a very long string in full in a pandas dataframe

Question: Print a very long string in full in a pandas dataframe

I am struggling with a seemingly very simple thing. I have a pandas data frame containing a very long string.

df = pd.DataFrame({'one' : ['one', 'two', 
      'This is very long string very long string very long string veryvery long string']})

Now when I try to print it, I do not see the full string; I see only part of it.

I tried the following options:

  • using print(df.iloc[2])
  • using to_html
  • using to_string
  • One of the stackoverflow answers suggested increasing the column width via the pandas display option, but that did not work either.
  • I also did not understand how set_printoptions would help me.

Any ideas appreciated. It looks very simple, but I am not able to get it!


Answer 0

You can use options.display.max_colwidth to specify you want to see more in the default representation:

In [2]: df
Out[2]:
                                                 one
0                                                one
1                                                two
2  This is very long string very long string very...

In [3]: pd.options.display.max_colwidth
Out[3]: 50

In [4]: pd.options.display.max_colwidth = 100

In [5]: df
Out[5]:
                                                                               one
0                                                                              one
1                                                                              two
2  This is very long string very long string very long string veryvery long string

And indeed, if you just want to inspect the one value, by accessing it (as a scalar, not as a row as df.iloc[2] does) you also see the full string:

In [7]: df.iloc[2,0]    # or df.loc[2,'one']
Out[7]: 'This is very long string very long string very long string veryvery long string'

Answer 1

Use pd.set_option('display.max_colwidth', -1) for automatic linebreaks and multi-line cells.

This is a great resource on how to use jupyter's display with pandas to the fullest.


Answer 2

Another, pretty simple approach is to call the list function:

list(df['one'][2])
# output:
['This is very long string very long string very long string veryvery long string']

Needless to say, converting whole columns to lists this way is not a good idea, but for a single value, why not.


Answer 3

Another easier way to print the whole string is to call values on the dataframe.

df = pd.DataFrame({'one' : ['one', 'two', 
      'This is very long string very long string very long string veryvery long string']})

print(df.values)

The Output will be

[['one']
 ['two']
 ['This is very long string very long string very long string veryvery long string']]

Answer 4

Is this what you meant to do?

In [7]: x =  pd.DataFrame({'one' : ['one', 'two', 'This is very long string very long string very long string veryvery long string']})

In [8]: x
Out[8]: 
                                                 one
0                                                one
1                                                two
2  This is very long string very long string very...

In [9]: x['one'][2]
Out[9]: 'This is very long string very long string very long string veryvery long string'

Answer 5

The way I often deal with the situation you describe is to use the .to_csv() method and write to stdout:

import sys

df.to_csv(sys.stdout)

Update: it should now be possible to just use None instead of sys.stdout with similar effect!
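
For example, a minimal sketch (when no path is given, path_or_buf defaults to None and to_csv returns the CSV text as a string):

import pandas as pd

df = pd.DataFrame({'one': ['a very long string ' * 10]})
print(df.to_csv())  # the returned CSV string contains the full value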

This should dump the whole dataframe, including the entirety of any strings. You can use the to_csv parameters to configure column separators, whether the index is printed, etc. It will be less pretty than rendering it properly though.

I posted this originally in answer to the somewhat-related question at Output data from all columns in a dataframe in pandas


Answer 6

Just add the following line to your code before printing.

 pd.options.display.max_colwidth = 90  # set a value as your need

You can simply do the following steps for setting other additional options,

  • You can change the options for pandas max_columns feature as follows to display more columns

    import pandas as pd
    pd.options.display.max_columns = 10
    

    (this allows 10 columns to display, you can change this as you need)

  • Like that you can change the number of rows as you need to display as follows to display more rows

    pd.options.display.max_rows = 999
    

    (this allows to print 999 rows at a time)

This should work fine.

Please refer to the docs to change more pandas options/settings.


Answer 7

I have created a small utility function, this works well for me

def display_text_max_col_width(df, width):
    with pd.option_context('display.max_colwidth', width):
        print(df)

display_text_max_col_width(train_df["Description"], 800)

I can change length of the width as per my requirement, without setting any option permanently.


Answer 8

If you’re using jupyter notebook, you can also print pandas dataframe as HTML table, which will print full strings.

from IPython.display import display, HTML
display(HTML(df.to_html()))

Output

    one
0   one
1   two
2   This is very long string very long string very long string veryvery long string

How do I unnest (explode) a column in a pandas DataFrame?

Question: How do I unnest (explode) a column in a pandas DataFrame?

I have the following DataFrame where one of the columns is an object (list type cell):

df=pd.DataFrame({'A':[1,2],'B':[[1,2],[1,2]]})
df
Out[458]: 
   A       B
0  1  [1, 2]
1  2  [1, 2]

My expected output is:

   A  B
0  1  1
1  1  2
3  2  1
4  2  2

What should I do to achieve this?


Related question

pandas: When cell contents are lists, create a row for each element in the list

A good question and answer, but it only handles one column with a list. (In my answer, the self-defined function will work for multiple columns; also, the accepted answer uses the most time-consuming apply, which is not recommended. For more info, see When should I ever want to use pandas apply() in my code?)


Answer 0

As someone who uses both R and python, I have seen this type of question several times.

In R, there is a built-in function unnest in the package named tidyr, but Python (pandas) has no built-in function for this type of problem.

I know that object-dtype columns make the data hard to convert with pandas functions. When I receive data like this, the first thing that comes to mind is to 'flatten' or unnest the columns.

I am using pandas and python functions for this type of question. If you are worried about the speed of the solutions above, check user3483203's answer, since it uses numpy and most of the time numpy is faster. I recommend Cython and numba if speed matters.


Method 0 [pandas >= 0.25]
Starting from pandas 0.25, if you only need to explode one column, you can use the pandas.DataFrame.explode function:

df.explode('B')

       A  B
    0  1  1
    1  1  2
    0  2  1
    1  2  2

Given a dataframe with an empty list or a NaN in the column: an empty list will not cause an issue, but a NaN needs to be filled with a list first.

df = pd.DataFrame({'A': [1, 2, 3, 4],'B': [[1, 2], [1, 2], [], np.nan]})
df.B = df.B.fillna({i: [] for i in df.index})  # replace NaN with []
df.explode('B')

   A    B
0  1    1
0  1    2
1  2    1
1  2    2
2  3  NaN
3  4  NaN

Method 1
apply + pd.Series (easy to understand, but not recommended in terms of performance)

df.set_index('A').B.apply(pd.Series).stack().reset_index(level=0).rename(columns={0:'B'})
Out[463]: 
   A  B
0  1  1
1  1  2
0  2  1
1  2  2

Method 2
Using repeat with the DataFrame constructor, re-create your dataframe (good performance, but not good with multiple columns):

df=pd.DataFrame({'A':df.A.repeat(df.B.str.len()),'B':np.concatenate(df.B.values)})
df
Out[465]: 
   A  B
0  1  1
0  1  2
1  2  1
1  2  2

Method 2.1
For example, suppose besides A we have A.1 ….. A.n. If we still use the method above (Method 2), it is hard to re-create the columns one by one.

Solution: join or merge with the index after 'unnesting' the single column.

s=pd.DataFrame({'B':np.concatenate(df.B.values)},index=df.index.repeat(df.B.str.len()))
s.join(df.drop('B',1),how='left')
Out[477]: 
   B  A
0  1  1
0  2  1
1  1  2
1  2  2

If you need the column order exactly the same as before, add reindex at the end.

s.join(df.drop('B',1),how='left').reindex(columns=df.columns)

Method 3
recreate the list

pd.DataFrame([[x] + [z] for x, y in df.values for z in y],columns=df.columns)
Out[488]: 
   A  B
0  1  1
1  1  2
2  2  1
3  2  2

If more than two columns, use

s=pd.DataFrame([[x] + [z] for x, y in zip(df.index,df.B) for z in y])
s.merge(df,left_on=0,right_index=True)
Out[491]: 
   0  1  A       B
0  0  1  1  [1, 2]
1  0  2  1  [1, 2]
2  1  1  2  [1, 2]
3  1  2  2  [1, 2]

Method 4
using reindex or loc

df.reindex(df.index.repeat(df.B.str.len())).assign(B=np.concatenate(df.B.values))
Out[554]: 
   A  B
0  1  1
0  1  2
1  2  1
1  2  2

#df.loc[df.index.repeat(df.B.str.len())].assign(B=np.concatenate(df.B.values))

Method 5
when the list only contains unique values:

df=pd.DataFrame({'A':[1,2],'B':[[1,2],[3,4]]})
from collections import ChainMap
d = dict(ChainMap(*map(dict.fromkeys, df['B'], df['A'])))
pd.DataFrame(list(d.items()),columns=df.columns[::-1])
Out[574]: 
   B  A
0  1  1
1  2  1
2  3  2
3  4  2

Method 6
using numpy for high performance:

newvalues=np.dstack((np.repeat(df.A.values,list(map(len,df.B.values))),np.concatenate(df.B.values)))
pd.DataFrame(data=newvalues[0],columns=df.columns)
   A  B
0  1  1
1  1  2
2  2  1
3  2  2

Method 7
using the base functions itertools.cycle and itertools.chain: a pure-Python solution, just for fun

from itertools import cycle,chain
l=df.values.tolist()
l1=[list(zip([x[0]], cycle(x[1])) if len([x[0]]) > len(x[1]) else list(zip(cycle([x[0]]), x[1]))) for x in l]
pd.DataFrame(list(chain.from_iterable(l1)),columns=df.columns)
   A  B
0  1  1
1  1  2
2  2  1
3  2  2

Generalizing to multiple columns

df=pd.DataFrame({'A':[1,2],'B':[[1,2],[3,4]],'C':[[1,2],[3,4]]})
df
Out[592]: 
   A       B       C
0  1  [1, 2]  [1, 2]
1  2  [3, 4]  [3, 4]

Self-defined function:

def unnesting(df, explode):
    # repeat each index label once per element of its list, then rebuild
    # the exploded columns and join the remaining ones back on
    idx = df.index.repeat(df[explode[0]].str.len())
    df1 = pd.concat([
        pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1)
    df1.index = idx

    return df1.join(df.drop(explode, 1), how='left')

        
unnesting(df,['B','C'])
Out[609]: 
   B  C  A
0  1  1  1
0  2  2  1
1  3  3  2
1  4  4  2

Column-wise Unnesting

All the methods above deal with vertical unnesting (exploding). If you need to expand the lists horizontally, check with the pd.DataFrame constructor:

df.join(pd.DataFrame(df.B.tolist(),index=df.index).add_prefix('B_'))
Out[33]: 
   A       B       C  B_0  B_1
0  1  [1, 2]  [1, 2]    1    2
1  2  [3, 4]  [3, 4]    3    4

Updated function

def unnesting(df, explode, axis):
    if axis == 1:
        # vertical: one output row per list element
        idx = df.index.repeat(df[explode[0]].str.len())
        df1 = pd.concat([
            pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1)
        df1.index = idx

        return df1.join(df.drop(explode, 1), how='left')
    else:
        # horizontal: one output column per list position
        df1 = pd.concat([
            pd.DataFrame(df[x].tolist(), index=df.index).add_prefix(x) for x in explode], axis=1)
        return df1.join(df.drop(explode, 1), how='left')

Test Output

unnesting(df, ['B','C'], axis=0)
Out[36]: 
   B0  B1  C0  C1  A
0   1   2   1   2  1
1   3   4   3   4  2
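
For completeness, the axis=1 path is the same code as the earlier unnesting, so the vertical case reproduces the result shown above:

unnesting(df, ['B','C'], axis=1)

   B  C  A
0  1  1  1
0  2  2  1
1  3  3  2
1  4  4  2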

Answer 1

Option 1

If all of the sublists in the other column are the same length, numpy can be an efficient option here:

vals = np.array(df.B.values.tolist())    
a = np.repeat(df.A, vals.shape[1])

pd.DataFrame(np.column_stack((a, vals.ravel())), columns=df.columns)

   A  B
0  1  1
1  1  2
2  2  1
3  2  2

Option 2

If the sublists have different length, you need an additional step:

vals = df.B.values.tolist()
rs = [len(r) for r in vals]    
a = np.repeat(df.A, rs)

pd.DataFrame(np.column_stack((a, np.concatenate(vals))), columns=df.columns)

   A  B
0  1  1
1  1  2
2  2  1
3  2  2

Option 3

I took a shot at generalizing this to flatten N columns and tile M columns; I'll work later on making it more efficient:

df = pd.DataFrame({'A': [1,2,3], 'B': [[1,2], [1,2,3], [1]],
                   'C': [[1,2,3], [1,2], [1,2]], 'D': ['A', 'B', 'C']})

   A          B          C  D
0  1     [1, 2]  [1, 2, 3]  A
1  2  [1, 2, 3]     [1, 2]  B
2  3        [1]     [1, 2]  C

def unnest(df, tile, explode):
    vals = df[explode].sum(1)  # row-wise concatenation of the lists across columns
    rs = [len(r) for r in vals]
    a = np.repeat(df[tile].values, rs, axis=0)
    b = np.concatenate(vals.values)
    d = np.column_stack((a, b))
    return pd.DataFrame(d, columns = tile +  ['_'.join(explode)])

unnest(df, ['A', 'D'], ['B', 'C'])

    A  D B_C
0   1  A   1
1   1  A   2
2   1  A   1
3   1  A   2
4   1  A   3
5   2  B   1
6   2  B   2
7   2  B   3
8   2  B   1
9   2  B   2
10  3  C   1
11  3  C   1
12  3  C   2

Functions

def wen1(df):
    return df.set_index('A').B.apply(pd.Series).stack().reset_index(level=0).rename(columns={0: 'B'})

def wen2(df):
    return pd.DataFrame({'A':df.A.repeat(df.B.str.len()),'B':np.concatenate(df.B.values)})

def wen3(df):
    s = pd.DataFrame({'B': np.concatenate(df.B.values)}, index=df.index.repeat(df.B.str.len()))
    return s.join(df.drop('B', 1), how='left')

def wen4(df):
    return pd.DataFrame([[x] + [z] for x, y in df.values for z in y],columns=df.columns)

def chris1(df):
    vals = np.array(df.B.values.tolist())
    a = np.repeat(df.A, vals.shape[1])
    return pd.DataFrame(np.column_stack((a, vals.ravel())), columns=df.columns)

def chris2(df):
    vals = df.B.values.tolist()
    rs = [len(r) for r in vals]
    a = np.repeat(df.A.values, rs)
    return pd.DataFrame(np.column_stack((a, np.concatenate(vals))), columns=df.columns)

Timings

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from timeit import timeit

res = pd.DataFrame(
       index=['wen1', 'wen2', 'wen3', 'wen4', 'chris1', 'chris2'],
       columns=[10, 50, 100, 500, 1000, 5000, 10000],
       dtype=float
)

for f in res.index:
    for c in res.columns:
        df = pd.DataFrame({'A': [1, 2], 'B': [[1, 2], [1, 2]]})
        df = pd.concat([df]*c)
        stmt = '{}(df)'.format(f)
        setp = 'from __main__ import df, {}'.format(f)
        res.at[f, c] = timeit(stmt, setp, number=50)

ax = res.div(res.min()).T.plot(loglog=True)
ax.set_xlabel("N")
ax.set_ylabel("time (relative)")

Performance

[Figure: relative runtime of each function vs. N, log-log scale]


Answer 2

Exploding a list-like column has been simplified significantly in pandas 0.25 with the addition of the explode() method:

df = pd.DataFrame({'A': [1, 2], 'B': [[1, 2], [1, 2]]})
df.explode('B')

Out:

   A  B
0  1  1
0  1  2
1  2  1
1  2  2
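
A side note of mine, not from the answer: the repeated index (0, 0, 1, 1) can be reset afterwards, and newer pandas (1.1+, if I recall correctly) lets explode do it directly:

df.explode('B').reset_index(drop=True)
# or, assuming pandas >= 1.1:
df.explode('B', ignore_index=True)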

Answer 3

One alternative is to apply the meshgrid recipe over the rows of the columns to unnest:

import numpy as np
import pandas as pd


def unnest(frame, explode):
    def mesh(values):
        # cartesian product of the row's values
        return np.array(np.meshgrid(*values)).T.reshape(-1, len(values))

    data = np.vstack([mesh(row) for row in frame[explode].values])
    return pd.DataFrame(data=data, columns=explode)


df = pd.DataFrame({'A': [1, 2], 'B': [[1, 2], [1, 2]]})
print(unnest(df, ['A', 'B']))  # base
print()

df = pd.DataFrame({'A': [1, 2], 'B': [[1, 2], [3, 4]], 'C': [[1, 2], [3, 4]]})
print(unnest(df, ['A', 'B', 'C']))  # multiple columns
print()

df = pd.DataFrame({'A': [1, 2, 3], 'B': [[1, 2], [1, 2, 3], [1]],
                   'C': [[1, 2, 3], [1, 2], [1, 2]], 'D': ['A', 'B', 'C']})

print(unnest(df, ['A', 'B']))  # uneven length lists
print()
print(unnest(df, ['D', 'B']))  # different types
print()

Output

   A  B
0  1  1
1  1  2
2  2  1
3  2  2

   A  B  C
0  1  1  1
1  1  2  1
2  1  1  2
3  1  2  2
4  2  3  3
5  2  4  3
6  2  3  4
7  2  4  4

   A  B
0  1  1
1  1  2
2  2  1
3  2  2
4  2  3
5  3  1

   D  B
0  A  1
1  A  2
2  B  1
3  B  2
4  B  3
5  C  1

Answer 4

My 5 cents:

df[['B', 'B2']] = pd.DataFrame(df['B'].values.tolist())

df[['A', 'B']].append(df[['A', 'B2']].rename(columns={'B2': 'B'}),
                      ignore_index=True)

and another 5

df[['B1', 'B2']] = pd.DataFrame([*df['B']]) # if values.tolist() is too boring

(pd.wide_to_long(df.drop('B', 1), 'B', 'A', '')
 .reset_index(level=1, drop=True)
 .reset_index())

both resulting in the same output:

   A  B
0  1  1
1  2  1
2  1  2
3  2  2

Answer 5

Because sublist lengths usually differ and join/merge is far more computationally expensive, I retested the methods with sublists of different lengths and with more ordinary columns.

The MultiIndex approach is also easier to write and has nearly the same performance as the numpy way.

Surprisingly, in my implementation the comprehension approach has the best performance.

def stack(df):
    return df.set_index(['A', 'C']).B.apply(pd.Series).stack()


def comprehension(df):
    return pd.DataFrame([x + [z] for x, y in zip(df[['A', 'C']].values.tolist(), df.B) for z in y])


def multiindex(df):
    return pd.DataFrame(np.concatenate(df.B.values), index=df.set_index(['A', 'C']).index.repeat(df.B.str.len()))


def array(df):
    return pd.DataFrame(
        np.column_stack((
            np.repeat(df[['A', 'C']].values, df.B.str.len(), axis=0),
            np.concatenate(df.B.values)
        ))
    )


import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from timeit import timeit

res = pd.DataFrame(
    index=[
        'stack',
        'comprehension',
        'multiindex',
        'array',
    ],
    columns=[1000, 2000, 5000, 10000, 20000, 50000],
    dtype=float
)

for f in res.index:
    for c in res.columns:
        df = pd.DataFrame({'A': list('abc'), 'C': list('def'), 'B': [['g', 'h', 'i'], ['j', 'k'], ['l']]})
        df = pd.concat([df] * c)
        stmt = '{}(df)'.format(f)
        setp = 'from __main__ import df, {}'.format(f)
        res.at[f, c] = timeit(stmt, setp, number=20)

ax = res.div(res.min()).T.plot(loglog=True)
ax.set_xlabel("N")
ax.set_ylabel("time (relative)")

Performance

[Figure: relative time of each method]


Answer 6

I generalized the problem a bit to be applicable to more columns.

Summary of what my solution does:

In[74]: df
Out[74]: 
    A   B             C             columnD
0  A1  B1  [C1.1, C1.2]                D1
1  A2  B2  [C2.1, C2.2]  [D2.1, D2.2, D2.3]
2  A3  B3            C3        [D3.1, D3.2]

In[75]: dfListExplode(df,['C','columnD'])
Out[75]: 
    A   B     C columnD
0  A1  B1  C1.1    D1
1  A1  B1  C1.2    D1
2  A2  B2  C2.1    D2.1
3  A2  B2  C2.1    D2.2
4  A2  B2  C2.1    D2.3
5  A2  B2  C2.2    D2.1
6  A2  B2  C2.2    D2.2
7  A2  B2  C2.2    D2.3
8  A3  B3    C3    D3.1
9  A3  B3    C3    D3.2

Complete example:

The actual explosion is performed in 3 lines. The rest is cosmetics (multi-column explosion, handling of strings instead of lists in the explosion column, …).

import pandas as pd
import numpy as np

df=pd.DataFrame( {'A': ['A1','A2','A3'],
                  'B': ['B1','B2','B3'],
                  'C': [ ['C1.1','C1.2'],['C2.1','C2.2'],'C3'],
                  'columnD': [ 'D1',['D2.1','D2.2', 'D2.3'],['D3.1','D3.2']],
                  })
print('df',df, sep='\n')

def dfListExplode(df, explodeKeys):
    if not isinstance(explodeKeys, list):
        explodeKeys=[explodeKeys]
    # recursive handling of explodeKeys
    if len(explodeKeys)==0:
        return df
    elif len(explodeKeys)==1:
        explodeKey=explodeKeys[0]
    else:
        return dfListExplode( dfListExplode(df, explodeKeys[:1]), explodeKeys[1:])
    # perform explosion/unnesting for key: explodeKey
    dfPrep=df[explodeKey].apply(lambda x: x if isinstance(x,list) else [x]) #casts all elements to a list
    dfIndExpl=pd.DataFrame([[x] + [z] for x, y in zip(dfPrep.index,dfPrep.values) for z in y ], columns=['explodedIndex',explodeKey])
    dfMerged=dfIndExpl.merge(df.drop(explodeKey, axis=1), left_on='explodedIndex', right_index=True)
    dfReind=dfMerged.reindex(columns=list(df))
    return dfReind

dfExpl=dfListExplode(df,['C','columnD'])
print('dfExpl',dfExpl, sep='\n')

Credits to WeNYoBen’s answer


Answer 7

Problem Setup

Assume there are multiple columns with objects of different lengths within them:

df = pd.DataFrame({
    'A': [1, 2],
    'B': [[1, 2], [3, 4]],
    'C': [[1, 2], [3, 4, 5]]
})

df

   A       B          C
0  1  [1, 2]     [1, 2]
1  2  [3, 4]  [3, 4, 5]

When the lengths are the same, it is easy for us to assume that the varying elements coincide and should be “zipped” together.

   A       B          C
0  1  [1, 2]     [1, 2]  # Typical to assume these should be zipped [(1, 1), (2, 2)]
1  2  [3, 4]  [3, 4, 5]

However, the assumption is challenged when we see objects of different lengths. Should we "zip"? If so, how do we handle the excess in one of the objects? Or maybe we want the product of all of the objects; this gets big fast, but might be what is wanted.

   A       B          C
0  1  [1, 2]     [1, 2]
1  2  [3, 4]  [3, 4, 5]  # is this [(3, 3), (4, 4), (None, 5)]?

OR

   A       B          C
0  1  [1, 2]     [1, 2]
1  2  [3, 4]  [3, 4, 5]  # is this [(3, 3), (3, 4), (3, 5), (4, 3), (4, 4), (4, 5)]

The Function

This function gracefully handles either zip or product based on a parameter, zipping to the length of the longest object via zip_longest. (The trailing [[*df]] restores the original column order, since rest is built as an unordered set.)

from itertools import zip_longest, product

def xplode(df, explode, zipped=True):
    method = zip_longest if zipped else product

    rest = {*df} - {*explode}

    zipped = zip(zip(*map(df.get, rest)), zip(*map(df.get, explode)))
    tups = [tup + exploded
     for tup, pre in zipped
     for exploded in method(*pre)]

    return pd.DataFrame(tups, columns=[*rest, *explode])[[*df]]

Zipped

xplode(df, ['B', 'C'])

   A    B  C
0  1  1.0  1
1  1  2.0  2
2  2  3.0  3
3  2  4.0  4
4  2  NaN  5

Product

xplode(df, ['B', 'C'], zipped=False)

   A  B  C
0  1  1  1
1  1  1  2
2  1  2  1
3  1  2  2
4  2  3  3
5  2  3  4
6  2  3  5
7  2  4  3
8  2  4  4
9  2  4  5

New Setup

Varying up the example a bit

df = pd.DataFrame({
    'A': [1, 2],
    'B': [[1, 2], [3, 4]],
    'C': 'C',
    'D': [[1, 2], [3, 4, 5]],
    'E': [('X', 'Y', 'Z'), ('W',)]
})

df

   A       B  C          D          E
0  1  [1, 2]  C     [1, 2]  (X, Y, Z)
1  2  [3, 4]  C  [3, 4, 5]       (W,)

Zipped

xplode(df, ['B', 'D', 'E'])

   A    B  C    D     E
0  1  1.0  C  1.0     X
1  1  2.0  C  2.0     Y
2  1  NaN  C  NaN     Z
3  2  3.0  C  3.0     W
4  2  4.0  C  4.0  None
5  2  NaN  C  5.0  None

Product

xplode(df, ['B', 'D', 'E'], zipped=False)

    A  B  C  D  E
0   1  1  C  1  X
1   1  1  C  1  Y
2   1  1  C  1  Z
3   1  1  C  2  X
4   1  1  C  2  Y
5   1  1  C  2  Z
6   1  2  C  1  X
7   1  2  C  1  Y
8   1  2  C  1  Z
9   1  2  C  2  X
10  1  2  C  2  Y
11  1  2  C  2  Z
12  2  3  C  3  W
13  2  3  C  4  W
14  2  3  C  5  W
15  2  4  C  3  W
16  2  4  C  4  W
17  2  4  C  5  W

Answer 8

Something rather not recommended (though it at least works in this case):

df=pd.concat([df]*2).sort_index()
it=iter(df['B'].tolist()[0]+df['B'].tolist()[0])
df['B']=df['B'].apply(lambda x:next(it))

concat + sort_index + iter + apply + next.

Now:

print(df)

Is:

   A  B
0  1  1
0  1  2
1  2  1
1  2  2

If you care about the index:

df=df.reset_index(drop=True)

Now:

print(df)

Is:

   A  B
0  1  1
1  1  2
2  2  1
3  2  2

Answer 9

df=pd.DataFrame({'A':[1,2],'B':[[1,2],[1,2]]})

pd.concat([df['A'], pd.DataFrame(df['B'].values.tolist())], axis = 1)\
  .melt(id_vars = 'A', value_name = 'B')\
  .dropna()\
  .drop('variable', axis = 1)

    A   B
0   1   1
1   2   1
2   1   2
3   2   2

Any opinions on this method I thought of? Or is doing both concat and melt considered too "expensive"?


Answer 10

df=pd.DataFrame({'A':[1,2],'B':[[1,2],[1,2]]})

out = pd.concat([df.loc[:,'A'],(df.B.apply(pd.Series))], axis=1, sort=False)

out = out.set_index('A').stack().droplevel(level=1).reset_index().rename(columns={0:"B"})

   A  B
0  1  1
1  1  2
2  2  1
3  2  2

You can implement this as a one-liner if you don't wish to create intermediate objects, as shown below.
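
The one-liner alluded to above is simply the two statements chained (a sketch, not spelled out in the original answer):

out = (pd.concat([df.loc[:, 'A'], df.B.apply(pd.Series)], axis=1, sort=False)
         .set_index('A')
         .stack()
         .droplevel(level=1)
         .reset_index()
         .rename(columns={0: "B"}))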

Answer 11

# Here's the answer to the related question in:
# https://stackoverflow.com/q/56708671/11426125

import numpy as np
import pandas as pd

# initial dataframe
df12 = pd.DataFrame({'Date': ['2007-12-03', '2008-09-07'],
                     'names': [['Peter', 'Alex'], ['Donald', 'Stan']]})

# convert dataframe to array for indexing list values (names)
a = np.array(df12.values)

# create a new dataframe with dimensions for the unnested data
b = np.ndarray(shape=(4, 2))
df2 = pd.DataFrame(b, columns=["Date", "names"], dtype=str)

# implement loops to assign date/name values as required
# (the 2*x+y indexing assumes two rows with two names each)
i = range(len(a[0]))
j = range(len(a[0]))
for x in i:
    for y in j:
        df2.iat[2*x+y, 0] = a[x][0]
        df2.iat[2*x+y, 1] = a[x][1][y]

# set Date column as index
df2.Date = pd.to_datetime(df2.Date)
df2.index = df2.Date
df2.drop('Date', axis=1, inplace=True)

Answer 12

In my case I had more than one column to explode, with variable-length arrays that needed to be unnested.

I ended up applying the new pandas 0.25 explode function twice, then removing the generated duplicates, and it does the job!

df = df.explode('A')
df = df.explode('B')
df = df.drop_duplicates()
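
A worked sketch of this approach (my example, not the answer's): chaining explode over two list columns yields the per-row cross product of the lists, so drop_duplicates only changes the result when that product actually contains repeated rows.

df = pd.DataFrame({'A': [[1, 2]], 'B': [['x', 'y']]})
out = df.explode('A').explode('B').drop_duplicates()

#    A  B
# 0  1  x
# 0  1  y
# 0  2  x
# 0  2  y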

Answer 13

I have another good way to solve this when you have more than one column to explode.

df=pd.DataFrame({'A':[1,2],'B':[[1,2],[1,2]], 'C':[[1,2,3],[1,2,3]]})

print(df)
   A       B          C
0  1  [1, 2]  [1, 2, 3]
1  2  [1, 2]  [1, 2, 3]

I want to explode the columns B and C. First I explode B, then C. Then I drop B and C from the original df. After that I do an index join on the three dataframes.

explode_b = df.explode('B')['B']
explode_c = df.explode('C')['C']
df = df.drop(['B', 'C'], axis=1)
df = df.join([explode_b, explode_c])

Pandas DataFrame to a list of lists

Question: Pandas DataFrame to a list of lists

It’s easy to turn a list of lists into a pandas dataframe:

import pandas as pd
df = pd.DataFrame([[1,2,3],[3,4,5]])

But how do I turn df back into a list of lists?

lol = df.what_to_do_now?
print lol
# [[1,2,3],[3,4,5]]

Answer 0

You could access the underlying array and call its tolist method:

>>> df = pd.DataFrame([[1,2,3],[3,4,5]])
>>> lol = df.values.tolist()
>>> lol
[[1L, 2L, 3L], [3L, 4L, 5L]]

Answer 1

If the data has column and index labels that you want to preserve, there are a few options.

Example data:

>>> df = pd.DataFrame([[1,2,3],[3,4,5]], \
       columns=('first', 'second', 'third'), \
       index=('alpha', 'beta')) 
>>> df
       first  second  third
alpha      1       2      3
beta       3       4      5

The tolist() method described in other answers is useful but yields only the core data – which may not be enough, depending on your needs.

>>> df.values.tolist()
[[1, 2, 3], [3, 4, 5]]

One approach is to convert the DataFrame to json using df.to_json() and then parse it again. This is cumbersome but does have some advantages, because the to_json() method has some useful options.

>>> df.to_json()
{
  "first":{"alpha":1,"beta":3},
  "second":{"alpha":2,"beta":4},"third":{"alpha":3,"beta":5}
}

>>> df.to_json(orient='split')
{
 "columns":["first","second","third"],
 "index":["alpha","beta"],
 "data":[[1,2,3],[3,4,5]]
}

Cumbersome but may be useful.
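
For completeness, a minimal sketch (mine, not from the original answer) of the "parse it again" step, using the standard json module:

import json

payload = json.loads(df.to_json(orient='split'))
payload['data']     # [[1, 2, 3], [3, 4, 5]]
payload['columns']  # ['first', 'second', 'third']
payload['index']    # ['alpha', 'beta']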

The good news is that it’s pretty straightforward to build lists for the columns and rows:

>>> columns = [df.index.name] + [i for i in df.columns]
>>> rows = [[i for i in row] for row in df.itertuples()]

This yields:

>>> print(f"columns: {columns}\nrows: {rows}") 
columns: [None, 'first', 'second', 'third']
rows: [['alpha', 1, 2, 3], ['beta', 3, 4, 5]]

If the None as the name of the index is bothersome, rename it:

df = df.rename_axis('stage')

Then:

>>> columns = [df.index.name] + [i for i in df.columns]
>>> print(f"columns: {columns}\nrows: {rows}") 

columns: ['stage', 'first', 'second', 'third']
rows: [['alpha', 1, 2, 3], ['beta', 3, 4, 5]]
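
From those two lists the frame can be rebuilt (a sketch, not part of the original answer):

rebuilt = pd.DataFrame(rows, columns=columns).set_index('stage')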

Answer 2

I don’t know if it will fit your needs, but you can also do:

>>> lol = df.values
>>> lol
array([[1, 2, 3],
       [3, 4, 5]])

This is just a numpy array from the ndarray module, which lets you do all the usual numpy array things.


Answer 3

I wanted to preserve the index, so I adapted the original answer to this solution:

list_df = df.reset_index().values.tolist()

Now you can paste it somewhere else (e.g. into a Stack Overflow question) and later recreate it:

df = pd.DataFrame(list_df, columns=['name1', ...])
df.set_index(['name1'], inplace=True)

Answer 4

Maybe something changed but this gave back a list of ndarrays which did what I needed.

list(df.values)

Answer 5

Note: I have seen many cases on Stack Overflow where converting a Pandas Series or DataFrame to a NumPy array or plain Python lists is entirely unnecessary. If you're new to the library, consider double-checking whether the functionality you need is already offered by those Pandas objects.

To quote a comment by @jpp:

In practice, there’s often no need to convert the NumPy array into a list of lists.


If a Pandas DataFrame/Series won’t work, you can use the built-in DataFrame.to_numpy and Series.to_numpy methods.
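
If plain lists are genuinely required, a minimal sketch using those methods (assuming pandas >= 0.24, where to_numpy was introduced):

df = pd.DataFrame([[1, 2, 3], [3, 4, 5]])
lol = df.to_numpy().tolist()  # [[1, 2, 3], [3, 4, 5]]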


Answer 6

This is very simple:

import numpy as np

list_of_lists = np.array(df)

Strictly speaking, this yields a 2-D NumPy array rather than a list of lists; call .tolist() on the result if you need actual nested Python lists.

Answer 7

We can use the DataFrame.iterrows() function to iterate over each of the rows of the given Dataframe and construct a list out of the data of each row:

# Empty list 
row_list =[] 

# Iterate over each row 
for index, rows in df.iterrows(): 
    # Create list for the current row 
    my_list =[rows.Date, rows.Event, rows.Cost] 

    # append the list to the final list 
    row_list.append(my_list) 

# Print 
print(row_list) 

We can successfully extract each row of the given data frame into a list
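
A generic variant of the same idea (my sketch; it takes every column in order instead of naming them):

row_list = [rows.tolist() for index, rows in df.iterrows()]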


Pandas: convert certain columns to rows

Question: Pandas: convert certain columns to rows

So my dataset has some information by location for n dates. The problem is each date is actually a different column header. For example the CSV looks like

location    name    Jan-2010    Feb-2010    March-2010
A           "test"  12          20          30
B           "foo"   18          20          25

What I would like is for it to look like

location    name    Date        Value
A           "test"  Jan-2010    12       
A           "test"  Feb-2010    20
A           "test"  March-2010  30
B           "foo"   Jan-2010    18       
B           "foo"   Feb-2010    20
B           "foo"   March-2010  25

The problem is that I don't know how many dates are in the columns (though I know they will always start after name).


Answer 0

UPDATE
From v0.20, melt is a first-class function; you can now use

df.melt(id_vars=["location", "name"], 
        var_name="Date", 
        value_name="Value")

  location    name        Date  Value
0        A  "test"    Jan-2010     12
1        B   "foo"    Jan-2010     18
2        A  "test"    Feb-2010     20
3        B   "foo"    Feb-2010     20
4        A  "test"  March-2010     30
5        B   "foo"  March-2010     25

OLD(ER) VERSIONS: <0.20

You can use pd.melt to get most of the way there, and then sort:

>>> df
  location  name  Jan-2010  Feb-2010  March-2010
0        A  test        12        20          30
1        B   foo        18        20          25
>>> df2 = pd.melt(df, id_vars=["location", "name"], 
                  var_name="Date", value_name="Value")
>>> df2
  location  name        Date  Value
0        A  test    Jan-2010     12
1        B   foo    Jan-2010     18
2        A  test    Feb-2010     20
3        B   foo    Feb-2010     20
4        A  test  March-2010     30
5        B   foo  March-2010     25
>>> df2 = df2.sort(["location", "name"])
>>> df2
  location  name        Date  Value
0        A  test    Jan-2010     12
2        A  test    Feb-2010     20
4        A  test  March-2010     30
1        B   foo    Jan-2010     18
3        B   foo    Feb-2010     20
5        B   foo  March-2010     25

(Might want to throw in a .reset_index(drop=True), just to keep the output clean.)

Note: pd.DataFrame.sort has been deprecated in favour of pd.DataFrame.sort_values.
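
On current pandas the sort step would therefore read (a minimal sketch):

df2 = df2.sort_values(["location", "name"]).reset_index(drop=True)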


Answer 1

Use set_index with stack to get a MultiIndex Series, then get back a DataFrame with reset_index plus rename:

df1 = (df.set_index(["location", "name"])
         .stack()
         .reset_index(name='Value')
         .rename(columns={'level_2':'Date'}))
print (df1)
  location  name        Date  Value
0        A  test    Jan-2010     12
1        A  test    Feb-2010     20
2        A  test  March-2010     30
3        B   foo    Jan-2010     18
4        B   foo    Feb-2010     20
5        B   foo  March-2010     25

Answer 2

I guess I found a simpler solution

temp1 = pd.melt(df1, id_vars=["location"], var_name='Date', value_name='Value')
temp2 = pd.melt(df1, id_vars=["name"], var_name='Date', value_name='Value')

Then attach temp2's name column to temp1:

temp1['new_column'] = temp2['name']

You now have what you asked for.


Answer 3

pd.wide_to_long

You can add a prefix to your year columns and then feed directly to pd.wide_to_long. I won’t pretend this is efficient, but it may in certain situations be more convenient than pd.melt, e.g. when your columns already have an appropriate prefix.

df.columns = np.hstack((df.columns[:2], df.columns[2:].map(lambda x: f'Value{x}')))

res = pd.wide_to_long(df, stubnames=['Value'], i='name', j='Date').reset_index()\
        .sort_values(['location', 'name'])

print(res)

   name        Date location  Value
0  test    Jan-2010        A     12
2  test    Feb-2010        A     20
4  test  March-2010        A     30
1   foo    Jan-2010        B     18
3   foo    Feb-2010        B     20
5   foo  March-2010        B     25
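
One caveat worth noting as an assumption of mine: on recent pandas versions, wide_to_long's suffix regex defaults to digits ('\d+'), so non-numeric suffixes such as Jan-2010 may need an explicit suffix argument, e.g.:

res = pd.wide_to_long(df, stubnames=['Value'], i='name', j='Date',
                      suffix='.+').reset_index()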