Tag archive: dataframe

How to read the first few rows of a pandas DataFrame

Question: How to read the first few rows of a pandas DataFrame

Is there a built-in way to use read_csv to read only the first n lines of a file without knowing the length of the file ahead of time? I have a large file that takes a long time to read, and occasionally I only want to use the first, say, 20 lines to get a sample of it (and prefer not to load the full thing and take the head of it).

If I knew the total number of lines I could do something like footer_lines = total_lines - n and pass this to the skipfooter keyword arg. My current solution is to manually grab the first n lines with python and StringIO it to pandas:

import pandas as pd
from io import StringIO

n = 20
with open('big_file.csv', 'r') as f:
    head = ''.join(f.readline() for _ in range(n))  # readlines(n) takes a byte-size hint, not a line count

df = pd.read_csv(StringIO(head))

It’s not that bad, but is there a more concise, ‘pandasic’ (?) way to do it with keywords or something?


Answer 0

I think you can use the nrows parameter. From the docs:

nrows : int, default None

    Number of rows of file to read. Useful for reading pieces of large files

which seems to work. Using one of the standard large test files (988504479 bytes, 5344499 lines):

In [1]: import pandas as pd

In [2]: time z = pd.read_csv("P00000001-ALL.csv", nrows=20)
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s

In [3]: len(z)
Out[3]: 20

In [4]: time z = pd.read_csv("P00000001-ALL.csv")
CPU times: user 27.63 s, sys: 1.92 s, total: 29.55 s
Wall time: 30.23 s
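
Since the docstring quoted above mentions "reading pieces of large files", nrows also combines naturally with skiprows to pull a slice out of the middle of a file. A minimal sketch (same test file as above; the 1000-row offset is made up for illustration):

# Skip data rows 1-1000 (line 0 is the header, kept for column names),
# then read the next 20 rows.
piece = pd.read_csv("P00000001-ALL.csv", skiprows=range(1, 1001), nrows=20)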

Return multiple columns from pandas apply()

Question: Return multiple columns from pandas apply()

I have a pandas DataFrame, df_test. It contains a column ‘size’ which represents size in bytes. I’ve calculated KB, MB, and GB using the following code:

import locale
import pandas as pd

locale.setlocale(locale.LC_ALL, '')  # enable grouping (thousands separators) in locale.format

df_test = pd.DataFrame([
    {'dir': '/Users/uname1', 'size': 994933},
    {'dir': '/Users/uname2', 'size': 109338711},
])

df_test['size_kb'] = df_test['size'].astype(int).apply(lambda x: locale.format("%.1f", x / 1024.0, grouping=True) + ' KB')
df_test['size_mb'] = df_test['size'].astype(int).apply(lambda x: locale.format("%.1f", x / 1024.0 ** 2, grouping=True) + ' MB')
df_test['size_gb'] = df_test['size'].astype(int).apply(lambda x: locale.format("%.1f", x / 1024.0 ** 3, grouping=True) + ' GB')

df_test


             dir       size       size_kb   size_mb size_gb
0  /Users/uname1     994933      971.6 KB    0.9 MB  0.0 GB
1  /Users/uname2  109338711  106,776.1 KB  104.3 MB  0.1 GB

[2 rows x 5 columns]

I’ve run this over 120,000 rows and it takes about 2.97 seconds per column * 3 = ~9 seconds, according to %timeit.

Is there any way I can make this faster? For example, instead of returning one column at a time from apply and running it 3 times, can I return all three columns in one pass to insert back into the original dataframe?

The other questions I’ve found all want to take multiple values and return a single value. I want to take a single value and return multiple columns.


Answer 0

This is an old question, but for completeness, you can return a Series from the applied function that contains the new data, preventing the need to iterate three times. Passing axis=1 to the apply function applies the function sizes to each row of the dataframe, returning a series to add to a new dataframe. This series, s, contains the new values, as well as the original data.

def sizes(s):
    s['size_kb'] = locale.format("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
    s['size_mb'] = locale.format("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    s['size_gb'] = locale.format("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return s

df_test = df_test.apply(sizes, axis=1)

Answer 1

Using apply and zip is about 3 times faster than the Series approach.

def sizes(s):    
    return locale.format("%.1f", s / 1024.0, grouping=True) + ' KB', \
        locale.format("%.1f", s / 1024.0 ** 2, grouping=True) + ' MB', \
        locale.format("%.1f", s / 1024.0 ** 3, grouping=True) + ' GB'
df_test['size_kb'],  df_test['size_mb'], df_test['size_gb'] = zip(*df_test['size'].apply(sizes))

Test results are:

Separate df.apply(): 

    100 loops, best of 3: 1.43 ms per loop

Return Series: 

    100 loops, best of 3: 2.61 ms per loop

Return tuple:

    1000 loops, best of 3: 819 µs per loop

Answer 2

Some of the current replies work fine, but I want to offer another, maybe more “pandifyed” option. This works for me with the current pandas 0.23 (not sure if it will work in previous versions):

import locale
import pandas as pd

df_test = pd.DataFrame([
  {'dir': '/Users/uname1', 'size': 994933},
  {'dir': '/Users/uname2', 'size': 109338711},
])

def sizes(s):
  a = locale.format("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
  b = locale.format("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
  c = locale.format("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
  return a, b, c

df_test[['size_kb', 'size_mb', 'size_gb']] = df_test.apply(sizes, axis=1, result_type="expand")

Notice that the trick is the result_type parameter of apply, which will expand the result into a DataFrame that can be directly assigned to new/old columns.


Answer 3

Just another readable way. This code adds three new columns and their values, returning a Series without needing extra parameters in the apply call.

def sizes(s):
    val_kb = locale.format("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
    val_mb = locale.format("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    val_gb = locale.format("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return pd.Series([val_kb, val_mb, val_gb], index=['size_kb', 'size_mb', 'size_gb'])

df[['size_kb', 'size_mb', 'size_gb']] = df.apply(sizes, axis=1)

A general example from: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html

df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)

#foo  bar
#0    1    2
#1    1    2
#2    1    2

Answer 4

Really cool answers! Thanks Jesse and jaumebonet! Just some observations regarding:

  • zip(* ...
  • ... result_type="expand")

Although expand is somewhat more elegant (pandifyed), zip is at least 2x faster. On the simple example below, I got 4x faster.

import pandas as pd

dat = [ [i, 10*i] for i in range(1000)]

df = pd.DataFrame(dat, columns = ["a","b"])

def add_and_sub(row):
    add = row["a"] + row["b"]
    sub = row["a"] - row["b"]
    return add, sub

df[["add", "sub"]] = df.apply(add_and_sub, axis=1, result_type="expand")
# versus
df["add"], df["sub"] = zip(*df.apply(add_and_sub, axis=1))

Answer 5

The performance of the top answers varies significantly, and Jesse & famaral42 have already discussed this, but it is worth sharing a fair comparison between the top answers, and elaborating on a subtle but important detail of Jesse’s answer: the argument passed in to the function also affects performance.

(Python 3.7.4, Pandas 1.0.3)

import pandas as pd
import locale
import timeit


def create_new_df_test():
    df_test = pd.DataFrame([
      {'dir': '/Users/uname1', 'size': 994933},
      {'dir': '/Users/uname2', 'size': 109338711},
    ])
    return df_test


def sizes_pass_series_return_series(series):
    series['size_kb'] = locale.format_string("%.1f", series['size'] / 1024.0, grouping=True) + ' KB'
    series['size_mb'] = locale.format_string("%.1f", series['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    series['size_gb'] = locale.format_string("%.1f", series['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return series


def sizes_pass_series_return_tuple(series):
    a = locale.format_string("%.1f", series['size'] / 1024.0, grouping=True) + ' KB'
    b = locale.format_string("%.1f", series['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    c = locale.format_string("%.1f", series['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return a, b, c


def sizes_pass_value_return_tuple(value):
    a = locale.format_string("%.1f", value / 1024.0, grouping=True) + ' KB'
    b = locale.format_string("%.1f", value / 1024.0 ** 2, grouping=True) + ' MB'
    c = locale.format_string("%.1f", value / 1024.0 ** 3, grouping=True) + ' GB'
    return a, b, c

Here are the results:

# 1 - Accepted (Nels11 Answer) - (pass series, return series):
9.82 ms ± 377 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# 2 - Pandafied (jaumebonet Answer) - (pass series, return tuple):
2.34 ms ± 48.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# 3 - Tuples (pass series, return tuple then zip):
1.36 ms ± 62.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

# 4 - Tuples (Jesse Answer) - (pass value, return tuple then zip):
752 µs ± 18.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Notice how returning tuples is the fastest method, but what is passed in as an argument also affects the performance. The difference in the code is subtle but the performance improvement is significant.

Test #4 (passing in a single value) is twice as fast as test #3 (passing in a series), even though the operation performed is ostensibly identical. The likely reason is that Series.apply hands each scalar straight to the function, while DataFrame.apply with axis=1 has to construct a Series object for every row, and that construction dominates the runtime.

But there’s more…

# 1a - Accepted (Nels11 Answer) - (pass series, return series, new columns exist):
3.23 ms ± 141 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# 2a - Pandafied (jaumebonet Answer) - (pass series, return tuple, new columns exist):
2.31 ms ± 39.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# 3a - Tuples (pass series, return tuple then zip, new columns exist):
1.36 ms ± 58.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

# 4a - Tuples (Jesse Answer) - (pass value, return tuple then zip, new columns exist):
694 µs ± 3.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In some cases (#1a and #4a), applying the function to a DataFrame in which the output columns already exist is faster than creating them from the function.

Here is the code for running the tests:

# Paste and run the following in ipython console. It will not work if you run it from a .py file.
print('\nAccepted Answer (pass series, return series, new columns dont exist):')
df_test = create_new_df_test()
%timeit result = df_test.apply(sizes_pass_series_return_series, axis=1)
print('Accepted Answer (pass series, return series, new columns exist):')
df_test = create_new_df_test()
df_test = pd.concat([df_test, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
%timeit result = df_test.apply(sizes_pass_series_return_series, axis=1)

print('\nPandafied (pass series, return tuple, new columns dont exist):')
df_test = create_new_df_test()
%timeit df_test[['size_kb', 'size_mb', 'size_gb']] = df_test.apply(sizes_pass_series_return_tuple, axis=1, result_type="expand")
print('Pandafied (pass series, return tuple, new columns exist):')
df_test = create_new_df_test()
df_test = pd.concat([df_test, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
%timeit df_test[['size_kb', 'size_mb', 'size_gb']] = df_test.apply(sizes_pass_series_return_tuple, axis=1, result_type="expand")

print('\nTuples (pass series, return tuple then zip, new columns dont exist):')
df_test = create_new_df_test()
%timeit df_test['size_kb'],  df_test['size_mb'], df_test['size_gb'] = zip(*df_test.apply(sizes_pass_series_return_tuple, axis=1))
print('Tuples (pass series, return tuple then zip, new columns exist):')
df_test = create_new_df_test()
df_test = pd.concat([df_test, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
%timeit df_test['size_kb'],  df_test['size_mb'], df_test['size_gb'] = zip(*df_test.apply(sizes_pass_series_return_tuple, axis=1))

print('\nTuples (pass value, return tuple then zip, new columns dont exist):')
df_test = create_new_df_test()
%timeit df_test['size_kb'],  df_test['size_mb'], df_test['size_gb'] = zip(*df_test['size'].apply(sizes_pass_value_return_tuple))
print('Tuples (pass value, return tuple then zip, new columns exist):')
df_test = create_new_df_test()
df_test = pd.concat([df_test, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
%timeit df_test['size_kb'],  df_test['size_mb'], df_test['size_gb'] = zip(*df_test['size'].apply(sizes_pass_value_return_tuple))

Answer 6

I believe pandas 1.1 breaks the behavior suggested in the top answer here.

import pandas as pd
def test_func(row):
    row['c'] = str(row['a']) + str(row['b'])
    row['d'] = row['a'] + 1
    return row

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['i', 'j', 'k']})
df.apply(test_func, axis=1)

The above code, run on pandas 1.1.0, returns:

   a  b   c  d
0  1  i  1i  2
1  1  i  1i  2
2  1  i  1i  2

While in pandas 1.0.5 it returned:

   a   b    c  d
0  1   i   1i  2
1  2   j   2j  3
2  3   k   3k  4

That, I think, is what you’d expect.

I’m not sure how the release notes explain this behavior; however, as explained here, avoiding mutation of the original rows by copying them restores the old behavior, i.e.:

def test_func(row):
    row = row.copy()   #  <---- Avoid mutating the original reference
    row['c'] = str(row['a']) + str(row['b'])
    row['d'] = row['a'] + 1
    return row

Answer 7

Generally, to return multiple values, this is what I do:

import numpy as np

def gimmeMultiple(group):
    x1 = 1
    x2 = 2
    return np.array([[1, 2]])

def gimmeMultipleDf(group):
    x1 = 1
    x2 = 2
    return pd.DataFrame(np.array([[1, 2]]), columns=['x1', 'x2'])

df['size'].astype(int).apply(gimmeMultiple)
df['size'].astype(int).apply(gimmeMultipleDf)

Returning a dataframe definitely has its perks, but sometimes it’s not required. You can look at what apply() returns and play a bit with the functions ;)


Answer 8

It gives a new dataframe with two columns from the original one.

import pandas as pd
df = ...
df_with_two_columns = df.apply(lambda row: pd.Series([row['column_1'], row['column_2']], index=['column_1', 'column_2']), axis=1)
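
A note on the design: when the goal is only to carry existing columns over, plain column selection is simpler and avoids the per-row overhead of apply; the apply form above is mainly useful when the row function computes something new. A minimal sketch of the direct alternative (same hypothetical column names as above):

df_with_two_columns = df[['column_1', 'column_2']].copy()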

Find the max of two or more columns with pandas

Question: Find the max of two or more columns with pandas

I have a dataframe with columns A, B. I need to create a column C such that for every record/row:

C = max(A, B).

How should I go about doing this?


Answer 0

You can get the maximum like this:

>>> import pandas as pd
>>> df = pd.DataFrame({"A": [1,2,3], "B": [-2, 8, 1]})
>>> df
   A  B
0  1 -2
1  2  8
2  3  1
>>> df[["A", "B"]]
   A  B
0  1 -2
1  2  8
2  3  1
>>> df[["A", "B"]].max(axis=1)
0    1
1    8
2    3

and so:

>>> df["C"] = df[["A", "B"]].max(axis=1)
>>> df
   A  B  C
0  1 -2  1
1  2  8  8
2  3  1  3

If you know that “A” and “B” are the only columns, you could even get away with

>>> df["C"] = df.max(axis=1)

And you could use .apply(max, axis=1) too, I guess.
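
For completeness, here is a sketch of that apply variant; it gives the same result, but calls the function once per row, so it is slower than the vectorized max above:

>>> df["C"] = df[["A", "B"]].apply(max, axis=1)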


Answer 1

@DSM’s answer is perfectly fine in almost any normal scenario. But if you’re the type of programmer who wants to go a little deeper than the surface level, you might be interested to know that it is a little faster to call numpy functions on the underlying .to_numpy() (or .values for <0.24) array instead of directly calling the (cythonized) functions defined on the DataFrame/Series objects.

For example, you can use ndarray.max() along the first axis.

# Data borrowed from @DSM's post.
df = pd.DataFrame({"A": [1,2,3], "B": [-2, 8, 1]})
df
   A  B
0  1 -2
1  2  8
2  3  1

df['C'] = df[['A', 'B']].values.max(1)
# Or, assuming "A" and "B" are the only columns, 
# df['C'] = df.values.max(1) 
df

   A  B  C
0  1 -2  1
1  2  8  8
2  3  1  3 

If your data has NaNs, you will need numpy.nanmax:

df['C'] = np.nanmax(df.values, axis=1)
df

   A  B  C
0  1 -2  1
1  2  8  8
2  3  1  3 

You can also use numpy.maximum.reduce. numpy.maximum is a ufunc (Universal Function), and every ufunc has a reduce:

df['C'] = np.maximum.reduce(df[['A', 'B']].values, axis=1)
# df['C'] = np.maximum.reduce(df[['A', 'B']], axis=1)
# df['C'] = np.maximum.reduce(df, axis=1)
df

   A  B  C
0  1 -2  1
1  2  8  8
2  3  1  3

np.maximum.reduce and np.max appear to be more or less the same (for most normal sized DataFrames), and happen to be a shade faster than DataFrame.max. I imagine this difference roughly remains constant, and is due to internal overhead (index alignment, handling NaNs, etc.).

The graph was generated using perfplot. Benchmarking code, for reference:

import numpy as np
import pandas as pd
import perfplot

np.random.seed(0)
df_ = pd.DataFrame(np.random.randn(5, 1000))

perfplot.show(
    setup=lambda n: pd.concat([df_] * n, ignore_index=True),
    kernels=[
        lambda df: df.assign(new=df.max(axis=1)),
        lambda df: df.assign(new=df.values.max(1)),
        lambda df: df.assign(new=np.nanmax(df.values, axis=1)),
        lambda df: df.assign(new=np.maximum.reduce(df.values, axis=1)),
    ],
    labels=['df.max', 'np.max', 'np.nanmax', 'np.maximum.reduce'],
    n_range=[2**k for k in range(0, 15)],
    xlabel='N (* len(df))',
    logx=True,
    logy=True)

Binning a column with python pandas

Question: Binning a column with python pandas

I have a Data Frame column with numeric values:

df['percentage'].head()
46.5
44.2
100.0
42.12

I want to see the column as bin counts:

bins = [0, 1, 5, 10, 25, 50, 100]

How can I get the result as bins with their value counts?

[0, 1] bin amount
[1, 5] etc 
[5, 10] etc 
......

Answer 0

You can use pandas.cut:

bins = [0, 1, 5, 10, 25, 50, 100]
df['binned'] = pd.cut(df['percentage'], bins)
print (df)
   percentage     binned
0       46.50   (25, 50]
1       44.20   (25, 50]
2      100.00  (50, 100]
3       42.12   (25, 50]

bins = [0, 1, 5, 10, 25, 50, 100]
labels = [1,2,3,4,5,6]
df['binned'] = pd.cut(df['percentage'], bins=bins, labels=labels)
print (df)
   percentage binned
0       46.50      5
1       44.20      5
2      100.00      6
3       42.12      5

Or numpy.searchsorted:

bins = [0, 1, 5, 10, 25, 50, 100]
df['binned'] = np.searchsorted(bins, df['percentage'].values)
print (df)
   percentage  binned
0       46.50       5
1       44.20       5
2      100.00       6
3       42.12       5

…and then value_counts or groupby and aggregate size:

s = pd.cut(df['percentage'], bins=bins).value_counts()
print (s)
(25, 50]     3
(50, 100]    1
(10, 25]     0
(5, 10]      0
(1, 5]       0
(0, 1]       0
Name: percentage, dtype: int64

s = df.groupby(pd.cut(df['percentage'], bins=bins)).size()
print (s)
percentage
(0, 1]       0
(1, 5]       0
(5, 10]      0
(10, 25]     0
(25, 50]     3
(50, 100]    1
dtype: int64

By default, cut returns a Categorical.

Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data; see the documentation on operations with categorical data.
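
A small related sketch, using the same bins as above: passing sort=False to value_counts keeps the bins in interval order rather than ordering them by count:

s = pd.cut(df['percentage'], bins=bins).value_counts(sort=False)
print (s)
(0, 1]       0
(1, 5]       0
(5, 10]      0
(10, 25]     0
(25, 50]     3
(50, 100]    1
Name: percentage, dtype: int64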


Answer 1

Using the numba module for a speed-up.

On big datasets (>500k rows), pd.cut can be quite slow for binning data.

I wrote my own function in numba with just-in-time compilation, which is roughly 16x faster:

import numpy as np
from numba import njit

@njit
def cut(arr):
    bins = np.empty(arr.shape[0])
    for idx, x in enumerate(arr):
        if (x >= 0) & (x < 1):
            bins[idx] = 1
        elif (x >= 1) & (x < 5):
            bins[idx] = 2
        elif (x >= 5) & (x < 10):
            bins[idx] = 3
        elif (x >= 10) & (x < 25):
            bins[idx] = 4
        elif (x >= 25) & (x < 50):
            bins[idx] = 5
        elif (x >= 50) & (x < 100):
            bins[idx] = 6
        else:
            bins[idx] = 7

    return bins
cut(df['percentage'].to_numpy())

# array([5., 5., 7., 5.])

Optional: you can also map it to bins as strings:

a = cut(df['percentage'].to_numpy())

conversion_dict = {1: 'bin1',
                   2: 'bin2',
                   3: 'bin3',
                   4: 'bin4',
                   5: 'bin5',
                   6: 'bin6',
                   7: 'bin7'}

bins = list(map(conversion_dict.get, a))

# ['bin5', 'bin5', 'bin7', 'bin5']

Speed comparison:

# create dataframe of 8 million rows for testing
dfbig = pd.concat([df]*2000000, ignore_index=True)

dfbig.shape

# (8000000, 1)
%%timeit
cut(dfbig['percentage'].to_numpy())

# 38 ms ± 616 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit
bins = [0, 1, 5, 10, 25, 50, 100]
labels = [1,2,3,4,5,6]
pd.cut(dfbig['percentage'], bins=bins, labels=labels)

# 215 ms ± 9.76 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

How to change a DataFrame column from String type to Double type in PySpark

Question: How to change a DataFrame column from String type to Double type in PySpark

I have a dataframe with a column of type String. I want to change the column type to Double in PySpark.

Following is the way I did it:

from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import DoubleType

toDoublefunc = UserDefinedFunction(lambda x: x, DoubleType())
changedTypedf = joindf.withColumn("label", toDoublefunc(joindf['show']))

Just wanted to know, is this the right way to do it as while running through Logistic Regression, I am getting some error, so I wonder, is this the reason for the trouble.


Answer 0

There is no need for a UDF here. Column already provides a cast method that accepts a DataType instance:

from pyspark.sql.types import DoubleType

changedTypedf = joindf.withColumn("label", joindf["show"].cast(DoubleType()))

or short string:

changedTypedf = joindf.withColumn("label", joindf["show"].cast("double"))

The canonical string names (other variations can be supported as well) correspond to the simpleString value. So for atomic types:

from pyspark.sql import types 

for t in ['BinaryType', 'BooleanType', 'ByteType', 'DateType', 
          'DecimalType', 'DoubleType', 'FloatType', 'IntegerType', 
           'LongType', 'ShortType', 'StringType', 'TimestampType']:
    print(f"{t}: {getattr(types, t)().simpleString()}")
BinaryType: binary
BooleanType: boolean
ByteType: tinyint
DateType: date
DecimalType: decimal(10,0)
DoubleType: double
FloatType: float
IntegerType: int
LongType: bigint
ShortType: smallint
StringType: string
TimestampType: timestamp

and for example complex types

types.ArrayType(types.IntegerType()).simpleString()   
'array<int>'
types.MapType(types.StringType(), types.IntegerType()).simpleString()
'map<string,int>'

Answer 1

Preserve the name of the column and avoid adding an extra column by using the same name as the input column:

changedTypedf = joindf.withColumn("show", joindf["show"].cast(DoubleType()))

Answer 2

The given answers are enough to deal with the problem, but I want to share another way, which may have been introduced in a newer version of Spark (I am not sure about it), so the given answers didn’t cover it.

We can reference a column in a Spark statement with col("column_name"):

from pyspark.sql.functions import col
changedTypedf = joindf.withColumn("show", col("show").cast("double"))

Answer 3

PySpark version:

  df = <source data>
  df.printSchema()

  from pyspark.sql.types import *

  # Change column type
  df_new = df.withColumn("myColumn", df["myColumn"].cast(IntegerType()))
  df_new.printSchema()
  df_new.select("myColumn").show()

Answer 4

The solution was simple:

from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import DoubleType

toDoublefunc = UserDefinedFunction(lambda x: float(x), DoubleType())
changedTypedf = joindf.withColumn("label", toDoublefunc(joindf['show']))

How to print a Pandas DataFrame without the index

Question: How to print a Pandas DataFrame without the index

I want to print the whole dataframe, but I don’t want to print the index.

Besides, one column is of datetime type; I just want to print the time, not the date.

The dataframe looks like:

   User ID           Enter Time   Activity Number
0      123  2014-07-08 00:09:00              1411
1      123  2014-07-08 00:18:00               893
2      123  2014-07-08 00:49:00              1041

I want it printed as:

User ID   Enter Time   Activity Number
123         00:09:00              1411
123         00:18:00               893
123         00:49:00              1041

Answer 0

print(df.to_string(index=False))

Answer 1

print(df.to_csv(sep='\t', index=False))

Or possibly:

print(df.to_csv(columns=['A', 'B', 'C'], sep='\t', index=False))

Answer 2

The line below will hide the index column of a DataFrame when you print it:

df.style.hide_index()
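
Note that hide_index() has since been deprecated. In recent pandas versions (1.4 and later), a rough equivalent is:

df.style.hide(axis="index")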

Answer 3

If you want to pretty-print the data frames, you can use the tabulate package.

import pandas as pd
import numpy as np
from tabulate import tabulate

def pprint_df(dframe):
    print(tabulate(dframe, headers='keys', tablefmt='psql', showindex=False))

df = pd.DataFrame({'col1': np.random.randint(0, 100, 10), 
    'col2': np.random.randint(50, 100, 10), 
    'col3': np.random.randint(10, 10000, 10)})

pprint_df(df)

Specifically, showindex=False, as the name says, allows you to not show the index. The output would look as follows:

+--------+--------+--------+
|   col1 |   col2 |   col3 |
|--------+--------+--------|
|     15 |     76 |   5175 |
|     30 |     97 |   3331 |
|     34 |     56 |   3513 |
|     50 |     65 |    203 |
|     84 |     75 |   7559 |
|     41 |     82 |    939 |
|     78 |     59 |   4971 |
|     98 |     99 |    167 |
|     81 |     99 |   6527 |
|     17 |     94 |   4267 |
+--------+--------+--------+

Answer 4

To retain the “pretty-print” formatting, use:

from IPython.display import HTML
HTML(df.to_html(index=False))


Answer 5

If you just want a string/JSON to print, it can be solved with:

print(df.to_string(index=False))

But if you want to serialize the data too, or even send it to MongoDB, it would be better to do something like:

document = df.to_dict(orient='list')

There are six ways to orient the data by now; check the pandas docs for the one that best fits your case.
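
For reference, a quick sketch of those orient values (newer pandas versions also add a 'tight' orient):

df.to_dict(orient='dict')     # column -> {index: value}, the default
df.to_dict(orient='list')     # column -> [values]
df.to_dict(orient='series')   # column -> Series
df.to_dict(orient='split')    # {'index': [...], 'columns': [...], 'data': [...]}
df.to_dict(orient='records')  # [{column: value}, ...], one dict per row
df.to_dict(orient='index')    # index -> {column: value}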


Answer 6

To answer the “How to print dataframe without an index” question, you can set the index to be an array of empty strings (one for each row in the dataframe), like this:

blankIndex=[''] * len(df)
df.index=blankIndex

If we use the data from your post:

row1 = (123, '2014-07-08 00:09:00', 1411)
row2 = (123, '2014-07-08 00:49:00', 1041)
row3 = (123, '2014-07-08 00:09:00', 1411)
data = [row1, row2, row3]
#set up dataframe
df = pd.DataFrame(data, columns=('User ID', 'Enter Time', 'Activity Number'))
print(df)

which would normally print out as:

   User ID           Enter Time  Activity Number
0      123  2014-07-08 00:09:00             1411
1      123  2014-07-08 00:49:00             1041
2      123  2014-07-08 00:09:00             1411

By creating an array with as many empty strings as there are rows in the data frame:

blankIndex=[''] * len(df)
df.index=blankIndex
print(df)

It will remove the index from the output:

  User ID           Enter Time  Activity Number
      123  2014-07-08 00:09:00             1411
      123  2014-07-08 00:49:00             1041
      123  2014-07-08 00:09:00             1411

And in Jupyter Notebooks it would render as per this screenshot: [Jupyter Notebooks dataframe with no index column]


Answer 7

Similar to many of the answers above that use df.to_string(index=False), I often find it necessary to extract a single column of values, in which case you can specify an individual column with .to_string using the following:

import numpy as np
import pandas as pd

data = pd.DataFrame({'col1': np.random.randint(0, 100, 10), 
    'col2': np.random.randint(50, 100, 10), 
    'col3': np.random.randint(10, 10000, 10)})

print(data.to_string(columns=['col1'], index=False))

print(data.to_string(columns=['col1', 'col2'], index=False))

This provides easy-to-copy (and index-free) output for pasting elsewhere (e.g. Excel). Sample output:

col1  col2    
49    62    
97    97    
87    94    
85    61    
18    55

Convert a Pandas DataFrame to a dictionary

Question: Convert a Pandas DataFrame to a dictionary

I have a DataFrame with four columns. I want to convert this DataFrame to a python dictionary. I want the elements of the first column to be the keys and the elements of the other columns in the same row to be the values.

DataFrame:

    ID   A   B   C
0   p    1   3   2
1   q    4   3   2
2   r    4   0   9  

Output should be like this:

Dictionary:

{'p': [1,3,2], 'q': [4,3,2], 'r': [4,0,9]}

Answer 0

The to_dict() method sets the column names as dictionary keys so you’ll need to reshape your DataFrame slightly. Setting the ‘ID’ column as the index and then transposing the DataFrame is one way to achieve this.

to_dict() also accepts an ‘orient’ argument which you’ll need in order to output a list of values for each column. Otherwise, a dictionary of the form {index: value} will be returned for each column.

These steps can be done with the following line:

>>> df.set_index('ID').T.to_dict('list')
{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}

In case a different dictionary format is needed, here are examples of the possible orient arguments. Consider the following simple DataFrame:

>>> df = pd.DataFrame({'a': ['red', 'yellow', 'blue'], 'b': [0.5, 0.25, 0.125]})
>>> df
        a      b
0     red  0.500
1  yellow  0.250
2    blue  0.125

Then the options are as follows.

dict – the default: column names are keys, values are dictionaries of index:data pairs

>>> df.to_dict('dict')
{'a': {0: 'red', 1: 'yellow', 2: 'blue'}, 
 'b': {0: 0.5, 1: 0.25, 2: 0.125}}

list – keys are column names, values are lists of column data

>>> df.to_dict('list')
{'a': ['red', 'yellow', 'blue'], 
 'b': [0.5, 0.25, 0.125]}

series – like ‘list’, but values are Series

>>> df.to_dict('series')
{'a': 0       red
      1    yellow
      2      blue
      Name: a, dtype: object, 

 'b': 0    0.500
      1    0.250
      2    0.125
      Name: b, dtype: float64}

split – splits columns/data/index as keys with values being column names, data values by row and index labels respectively

>>> df.to_dict('split')
{'columns': ['a', 'b'],
 'data': [['red', 0.5], ['yellow', 0.25], ['blue', 0.125]],
 'index': [0, 1, 2]}

records – each row becomes a dictionary where key is column name and value is the data in the cell

>>> df.to_dict('records')
[{'a': 'red', 'b': 0.5}, 
 {'a': 'yellow', 'b': 0.25}, 
 {'a': 'blue', 'b': 0.125}]

index – like ‘records’, but a dictionary of dictionaries with keys as index labels (rather than a list)

>>> df.to_dict('index')
{0: {'a': 'red', 'b': 0.5},
 1: {'a': 'yellow', 'b': 0.25},
 2: {'a': 'blue', 'b': 0.125}}

Answer 1

Try using zip:

df = pd.read_csv("file")
d= dict([(i,[a,b,c ]) for i, a,b,c in zip(df.ID, df.A,df.B,df.C)])
print d

Output:

{'p': [1, 3, 2], 'q': [4, 3, 2], 'r': [4, 0, 9]}

Answer 2

Follow these steps:

Suppose your dataframe is as follows:

>>> df
   A  B  C ID
0  1  3  2  p
1  4  3  2  q
2  4  0  9  r

1. Use set_index to set the ID column as the dataframe index.

    df.set_index("ID", drop=True, inplace=True)

2. Use the orient="index" parameter to have the index as dictionary keys.

    dictionary = df.to_dict(orient="index")

The results will be as follows:

    >>> dictionary
    {'q': {'A': 4, 'B': 3, 'C': 2}, 'p': {'A': 1, 'B': 3, 'C': 2}, 'r': {'A': 4, 'B': 0, 'C': 9}}

3. If you need to have each sample as a list, run the following code, choosing the column order first:

column_order= ["A", "B", "C"] #  Determine your preferred order of columns
d = {} #  Initialize the new dictionary as an empty dictionary
for k in dictionary:
    d[k] = [dictionary[k][column_name] for column_name in column_order]

Answer 3

If you don’t mind the dictionary values being tuples, you can use itertuples:

>>> {x[0]: x[1:] for x in df.itertuples(index=False)}
{'p': (1, 3, 2), 'q': (4, 3, 2), 'r': (4, 0, 9)}

Answer 4

Should a dictionary like:

{'red': '0.500', 'yellow': '0.250', 'blue': '0.125'}

be required out of a dataframe like:

        a      b
0     red  0.500
1  yellow  0.250
2    blue  0.125

the simplest way would be to do:

dict(df.values.tolist())

Working snippet below:

import pandas as pd
df = pd.DataFrame({'a': ['red', 'yellow', 'blue'], 'b': [0.5, 0.25, 0.125]})
dict(df.values.tolist())


Answer 5

For my use (node names with xy positions) I found @user4179775’s answer to be the most helpful/intuitive:

import pandas as pd

df = pd.read_csv('glycolysis_nodes_xy.tsv', sep='\t')

df.head()
    nodes    x    y
0  c00033  146  958
1  c00031  601  195
...

xy_dict_list=dict([(i,[a,b]) for i, a,b in zip(df.nodes, df.x,df.y)])

xy_dict_list
{'c00022': [483, 868],
 'c00024': [146, 868],
 ... }

xy_dict_tuples=dict([(i,(a,b)) for i, a,b in zip(df.nodes, df.x,df.y)])

xy_dict_tuples
{'c00022': (483, 868),
 'c00024': (146, 868),
 ... }

Addendum

I later returned to this issue, for other, but related, work. Here is an approach that more closely mirrors the [excellent] accepted answer.

node_df = pd.read_csv('node_prop-glycolysis_tca-from_pg.tsv', sep='\t')

node_df.head()
   node  kegg_id kegg_cid            name  wt  vis
0  22    22       c00022   pyruvate        1   1
1  24    24       c00024   acetyl-CoA      1   1
...

Convert Pandas dataframe to a [list], {dict}, {dict of {dict}}, …

Per accepted answer:

node_df.set_index('kegg_cid').T.to_dict('list')

{'c00022': [22, 22, 'pyruvate', 1, 1],
 'c00024': [24, 24, 'acetyl-CoA', 1, 1],
 ... }

node_df.set_index('kegg_cid').T.to_dict('dict')

{'c00022': {'kegg_id': 22, 'name': 'pyruvate', 'node': 22, 'vis': 1, 'wt': 1},
 'c00024': {'kegg_id': 24, 'name': 'acetyl-CoA', 'node': 24, 'vis': 1, 'wt': 1},
 ... }

In my case, I wanted to do the same thing but with selected columns from the Pandas dataframe, so I needed to slice the columns. There are two approaches.

  1. Directly:

(see: Convert pandas to dictionary defining the columns used for the key values)

node_df.set_index('kegg_cid')[['name', 'wt', 'vis']].T.to_dict('dict')

{'c00022': {'name': 'pyruvate', 'vis': 1, 'wt': 1},
 'c00024': {'name': 'acetyl-CoA', 'vis': 1, 'wt': 1},
 ... }
  2. “Indirectly:” first, slice the desired columns/data from the Pandas dataframe (again, two approaches),
node_df_sliced = node_df[['kegg_cid', 'name', 'wt', 'vis']]

or

node_df_sliced2 = node_df.loc[:, ['kegg_cid', 'name', 'wt', 'vis']]

that can then be used to create a dictionary of dictionaries

node_df_sliced.set_index('kegg_cid').T.to_dict('dict')

{'c00022': {'name': 'pyruvate', 'vis': 1, 'wt': 1},
 'c00024': {'name': 'acetyl-CoA', 'vis': 1, 'wt': 1},
 ... }

回答 6

DataFrame.to_dict() 将DataFrame转换为字典。

>>> df = pd.DataFrame(
    {'col1': [1, 2], 'col2': [0.5, 0.75]}, index=['a', 'b'])
>>> df
   col1  col2
a     1  0.50
b     2  0.75
>>> df.to_dict()
{'col1': {'a': 1, 'b': 2}, 'col2': {'a': 0.5, 'b': 0.75}}

有关详细信息,请参见此文档

DataFrame.to_dict() converts DataFrame to dictionary.

Example

>>> df = pd.DataFrame(
    {'col1': [1, 2], 'col2': [0.5, 0.75]}, index=['a', 'b'])
>>> df
   col1  col2
a     1  0.50
b     2  0.75
>>> df.to_dict()
{'col1': {'a': 1, 'b': 2}, 'col2': {'a': 0.5, 'b': 0.75}}

See this Documentation for details
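
For reference, the orient argument controls the shape of the result; a short sketch of the most common variants (orient names as listed in the pandas docs):

df.to_dict()           # default 'dict': {column -> {index -> value}}
df.to_dict('list')     # {column -> [values]}
df.to_dict('series')   # {column -> Series of values}
df.to_dict('records')  # [{column -> value}, ...], one dict per row
df.to_dict('index')    # {index label -> {column -> value}}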


熊猫DataFrame Groupby两列并获取计数

问题:熊猫DataFrame Groupby两列并获取计数

我有以下格式的熊猫数据框:

df = pd.DataFrame([[1.1, 1.1, 1.1, 2.6, 2.5, 3.4,2.6,2.6,3.4,3.4,2.6,1.1,1.1,3.3], list('AAABBBBABCBDDD'), [1.1, 1.7, 2.5, 2.6, 3.3, 3.8,4.0,4.2,4.3,4.5,4.6,4.7,4.7,4.8], ['x/y/z','x/y','x/y/z/n','x/u','x','x/u/v','x/y/z','x','x/u/v/b','-','x/y','x/y/z','x','x/u/v/w'],['1','3','3','2','4','2','5','3','6','3','5','1','1','1']]).T
df.columns = ['col1','col2','col3','col4','col5']

df:

   col1 col2 col3     col4 col5
0   1.1    A  1.1    x/y/z    1
1   1.1    A  1.7      x/y    3
2   1.1    A  2.5  x/y/z/n    3
3   2.6    B  2.6      x/u    2
4   2.5    B  3.3        x    4
5   3.4    B  3.8    x/u/v    2
6   2.6    B    4    x/y/z    5
7   2.6    A  4.2        x    3
8   3.4    B  4.3  x/u/v/b    6
9   3.4    C  4.5        -    3
10  2.6    B  4.6      x/y    5
11  1.1    D  4.7    x/y/z    1
12  1.1    D  4.7        x    1
13  3.3    D  4.8  x/u/v/w    1

现在,我想按两列将其分组,如下所示:

df.groupby(['col5','col2']).reset_index()

输出:

             index col1 col2 col3     col4 col5
col5 col2                                      
1    A    0      0  1.1    A  1.1    x/y/z    1
     D    0     11  1.1    D  4.7    x/y/z    1
          1     12  1.1    D  4.7        x    1
          2     13  3.3    D  4.8  x/u/v/w    1
2    B    0      3  2.6    B  2.6      x/u    2
          1      5  3.4    B  3.8    x/u/v    2
3    A    0      1  1.1    A  1.7      x/y    3
          1      2  1.1    A  2.5  x/y/z/n    3
          2      7  2.6    A  4.2        x    3
     C    0      9  3.4    C  4.5        -    3
4    B    0      4  2.5    B  3.3        x    4
5    B    0      6  2.6    B    4    x/y/z    5
          1     10  2.6    B  4.6      x/y    5
6    B    0      8  3.4    B  4.3  x/u/v/b    6

我想按如下方式获取每一行的计数。预期输出:

col5 col2 count
1    A      1
     D      3
2    B      2
etc...

如何获得我的预期输出?另外,我想为每个 “col2” 值找到最大的计数。

I have a pandas dataframe in the following format:

df = pd.DataFrame([[1.1, 1.1, 1.1, 2.6, 2.5, 3.4,2.6,2.6,3.4,3.4,2.6,1.1,1.1,3.3], list('AAABBBBABCBDDD'), [1.1, 1.7, 2.5, 2.6, 3.3, 3.8,4.0,4.2,4.3,4.5,4.6,4.7,4.7,4.8], ['x/y/z','x/y','x/y/z/n','x/u','x','x/u/v','x/y/z','x','x/u/v/b','-','x/y','x/y/z','x','x/u/v/w'],['1','3','3','2','4','2','5','3','6','3','5','1','1','1']]).T
df.columns = ['col1','col2','col3','col4','col5']

df:

   col1 col2 col3     col4 col5
0   1.1    A  1.1    x/y/z    1
1   1.1    A  1.7      x/y    3
2   1.1    A  2.5  x/y/z/n    3
3   2.6    B  2.6      x/u    2
4   2.5    B  3.3        x    4
5   3.4    B  3.8    x/u/v    2
6   2.6    B    4    x/y/z    5
7   2.6    A  4.2        x    3
8   3.4    B  4.3  x/u/v/b    6
9   3.4    C  4.5        -    3
10  2.6    B  4.6      x/y    5
11  1.1    D  4.7    x/y/z    1
12  1.1    D  4.7        x    1
13  3.3    D  4.8  x/u/v/w    1

Now I want to group this by two columns like following:

df.groupby(['col5','col2']).reset_index()

Output:

             index col1 col2 col3     col4 col5
col5 col2                                      
1    A    0      0  1.1    A  1.1    x/y/z    1
     D    0     11  1.1    D  4.7    x/y/z    1
          1     12  1.1    D  4.7        x    1
          2     13  3.3    D  4.8  x/u/v/w    1
2    B    0      3  2.6    B  2.6      x/u    2
          1      5  3.4    B  3.8    x/u/v    2
3    A    0      1  1.1    A  1.7      x/y    3
          1      2  1.1    A  2.5  x/y/z/n    3
          2      7  2.6    A  4.2        x    3
     C    0      9  3.4    C  4.5        -    3
4    B    0      4  2.5    B  3.3        x    4
5    B    0      6  2.6    B    4    x/y/z    5
          1     10  2.6    B  4.6      x/y    5
6    B    0      8  3.4    B  4.3  x/u/v/b    6

I want to get the count by each row like following. Expected Output:

col5 col2 count
1    A      1
     D      3
2    B      2
etc...

How to get my expected output? And I want to find largest count for each ‘col2’ value?


回答 0

继 @Andy 的回答之后,您可以执行以下操作来解决第二个问题:

In [56]: df.groupby(['col5','col2']).size().reset_index().groupby('col2')[[0]].max()
Out[56]: 
      0
col2   
A     3
B     2
C     1
D     3

Followed by @Andy’s answer, you can do following to solve your second question:

In [56]: df.groupby(['col5','col2']).size().reset_index().groupby('col2')[[0]].max()
Out[56]: 
      0
col2   
A     3
B     2
C     1
D     3
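
If, beyond the maximum count itself, you also want to know which col5 value attains it for each col2, one option (a sketch, not from the original answer) is idxmax on the grouped counts:

sizes = df.groupby(['col5', 'col2']).size().reset_index(name='count')

# keep the row holding the largest count within each col2 group
largest = sizes.loc[sizes.groupby('col2')['count'].idxmax()]
print(largest)
#   col5 col2  count
# 3    3    A      3
# 2    2    B      2
# 4    3    C      1
# 1    1    D      3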

回答 1

您要找的是 size:

In [11]: df.groupby(['col5', 'col2']).size()
Out[11]:
col5  col2
1     A       1
      D       3
2     B       2
3     A       3
      C       1
4     B       1
5     B       2
6     B       1
dtype: int64

要获得与waitingkuo相同的答案(“第二个问题”),但要简洁一些,可以按级别分组:

In [12]: df.groupby(['col5', 'col2']).size().groupby(level=1).max()
Out[12]:
col2
A       3
B       2
C       1
D       3
dtype: int64

You are looking for size:

In [11]: df.groupby(['col5', 'col2']).size()
Out[11]:
col5  col2
1     A       1
      D       3
2     B       2
3     A       3
      C       1
4     B       1
5     B       2
6     B       1
dtype: int64

To get the same answer as waitingkuo (the “second question”), but slightly cleaner, is to groupby the level:

In [12]: df.groupby(['col5', 'col2']).size().groupby(level=1).max()
Out[12]:
col2
A       3
B       2
C       1
D       3
dtype: int64
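
If a rectangular view of the same counts is easier to scan, the size() result can also be pivoted with unstack (a sketch; fill_value=0 fills in the missing combinations):

In [13]: df.groupby(['col5', 'col2']).size().unstack(fill_value=0)
Out[13]:
col2  A  B  C  D
col5
1     1  0  0  3
2     0  2  0  0
3     3  0  1  0
4     0  1  0  0
5     0  2  0  0
6     0  1  0  0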

回答 2

将数据插入 pandas 数据框并提供列名。

import pandas as pd
df = pd.DataFrame([['A','C','A','B','C','A','B','B','A','A'], ['ONE','TWO','ONE','ONE','ONE','TWO','ONE','TWO','ONE','THREE']]).T
df.columns = ['Alphabet','Words']
print(df)   #printing dataframe.

这是我们的打印数据:
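
  Alphabet  Words
0        A    ONE
1        C    TWO
2        A    ONE
3        B    ONE
4        C    ONE
5        A    TWO
6        B    ONE
7        B    TWO
8        A    ONE
9        A  THREE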

为了在 pandas 中对数据框分组并计数,
您需要再提供一列用于对分组进行计数,我们把数据框中的这一列称为 “COUNTER”。

像这样:

df['COUNTER'] =1       #initially, set that counter to 1.
group_data = df.groupby(['Alphabet','Words'])['COUNTER'].sum() #sum function
print(group_data)

输出:
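
Alphabet  Words
A         ONE      3
          THREE    1
          TWO      1
B         ONE      2
          TWO      1
C         ONE      1
          TWO      1
Name: COUNTER, dtype: int64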

Inserting data into a pandas dataframe and providing column name.

import pandas as pd
df = pd.DataFrame([['A','C','A','B','C','A','B','B','A','A'], ['ONE','TWO','ONE','ONE','ONE','TWO','ONE','TWO','ONE','THREE']]).T
df.columns = ['Alphabet','Words']
print(df)   #printing dataframe.

This is our printed data:
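
  Alphabet  Words
0        A    ONE
1        C    TWO
2        A    ONE
3        B    ONE
4        C    ONE
5        A    TWO
6        B    ONE
7        B    TWO
8        A    ONE
9        A  THREE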

For making groups of a dataframe in pandas and counting them,
you need to provide one more column which counts the grouping; let's call that column "COUNTER" in the dataframe.

Like this:

df['COUNTER'] =1       #initially, set that counter to 1.
group_data = df.groupby(['Alphabet','Words'])['COUNTER'].sum() #sum function
print(group_data)

OUTPUT:
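
Alphabet  Words
A         ONE      3
          THREE    1
          TWO      1
B         ONE      2
          TWO      1
C         ONE      1
          TWO      1
Name: COUNTER, dtype: int64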


回答 3

仅使用单个groupby的惯用解决方案

(df.groupby(['col5', 'col2']).size() 
   .sort_values(ascending=False) 
   .reset_index(name='count') 
   .drop_duplicates(subset='col2'))

  col5 col2  count
0    3    A      3
1    1    D      3
2    5    B      2
6    3    C      1

说明

groupby size 方法的结果是一个以 col5 和 col2 为索引的 Series。从这里,您可以再用一次 groupby 来查找 col2 中每个值的最大值,但这并非必要。您可以简单地对所有值进行降序排序,然后使用 drop_duplicates 方法只保留 col2 第一次出现的行。

Idiomatic solution that uses only a single groupby

(df.groupby(['col5', 'col2']).size() 
   .sort_values(ascending=False) 
   .reset_index(name='count') 
   .drop_duplicates(subset='col2'))

  col5 col2  count
0    3    A      3
1    1    D      3
2    5    B      2
6    3    C      1

Explanation

The result of the groupby size method is a Series with col5 and col2 in the index. From here, you could use another groupby to find the maximum value for each value in col2, but it is not necessary to do so. You can simply sort all the values in descending order and then keep only the rows where col2 occurs for the first time, using the drop_duplicates method.


回答 4

如果您想在数据帧中添加一个包含分组计数的新列(例如 'count_column'):

df['count_column'] = df.groupby(['col5','col2']).col5.transform('count')

(我选择了 “col5”,因为它不包含 NaN)

Should you want to add a new column (say ‘count_column’) containing the groups’ counts into the dataframe:

df['count_column'] = df.groupby(['col5','col2']).col5.transform('count')

(I picked 'col5' as it contains no NaN)


回答 5

您可以在 groupby 之后直接使用内置的 count 函数:

df.groupby(['col5','col2']).count()

You can just use the built-in count function following the groupby:

df.groupby(['col5','col2']).count()
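
Note that count() and size() are not interchangeable: size() counts rows per group, while count() counts non-null values per remaining column, so the two diverge as soon as the data contains NaN. A minimal sketch of the difference, using a small hypothetical frame:

import pandas as pd
import numpy as np

d = pd.DataFrame({'g': ['a', 'a', 'b'], 'v': [1.0, np.nan, 2.0]})

print(d.groupby('g').size())        # rows per group:       a -> 2, b -> 1
print(d.groupby('g')['v'].count())  # non-null v per group: a -> 1, b -> 1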

如何使用列的格式字符串显示浮点数的pandas DataFrame?

问题:如何使用列的格式字符串显示浮点数的pandas DataFrame?

我想使用 print() 和 IPython 的 display() 以给定格式显示熊猫数据框。例如:

df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])
print df

         cost
foo   123.4567
bar   234.5678
baz   345.6789
quux  456.7890

我想以某种方式强制将其打印成

         cost
foo   $123.46
bar   $234.57
baz   $345.68
quux  $456.79

无需修改数据本身或创建副本,只需更改其显示方式即可。

我怎样才能做到这一点?

I would like to display a pandas dataframe with a given format using print() and the IPython display(). For example:

df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])
print df

         cost
foo   123.4567
bar   234.5678
baz   345.6789
quux  456.7890

I would like to somehow coerce this into printing

         cost
foo   $123.46
bar   $234.57
baz   $345.68
quux  $456.79

without having to modify the data itself or create a copy, just change the way it is displayed.

How can I do this?


回答 0

import pandas as pd
pd.options.display.float_format = '${:,.2f}'.format
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])
print(df)

输出:

        cost
foo  $123.46
bar  $234.57
baz  $345.68
quux $456.79

但这仅在您希望每个浮点数都用美元符号格式化时才有效。

否则,如果您只想为某些浮点数设置美元格式,那么我认为您必须预先修改数据框(将这些浮点数转换为字符串):

import pandas as pd
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])
df['foo'] = df['cost']
df['cost'] = df['cost'].map('${:,.2f}'.format)
print(df)

输出:

         cost       foo
foo   $123.46  123.4567
bar   $234.57  234.5678
baz   $345.68  345.6789
quux  $456.79  456.7890
import pandas as pd
pd.options.display.float_format = '${:,.2f}'.format
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])
print(df)

yields

        cost
foo  $123.46
bar  $234.57
baz  $345.68
quux $456.79

but this only works if you want every float to be formatted with a dollar sign.

Otherwise, if you want dollar formatting for some floats only, then I think you’ll have to pre-modify the dataframe (converting those floats to strings):

import pandas as pd
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])
df['foo'] = df['cost']
df['cost'] = df['cost'].map('${:,.2f}'.format)
print(df)

yields

         cost       foo
foo   $123.46  123.4567
bar   $234.57  234.5678
baz   $345.68  345.6789
quux  $456.79  456.7890
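
As a side note, pd.options.display.float_format is a global display option, so it affects every float in every dataframe; if that is unwanted, it can be reset or scoped (a small sketch):

pd.reset_option('display.float_format')   # restore the default float renderer

# or apply the format only inside a block:
with pd.option_context('display.float_format', '${:,.2f}'.format):
    print(df)                             # formatted here ...
print(df)                                 # ... back to the default here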

回答 1

如果您不想修改数据框,则可以对该列使用自定义格式程序。

import pandas as pd
pd.options.display.float_format = '${:,.2f}'.format
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])


print df.to_string(formatters={'cost':'${:,.2f}'.format})

输出:

        cost
foo  $123.46
bar  $234.57
baz  $345.68
quux $456.79

If you don’t want to modify the dataframe, you could use a custom formatter for that column.

import pandas as pd
pd.options.display.float_format = '${:,.2f}'.format
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])


print df.to_string(formatters={'cost':'${:,.2f}'.format})

yields

        cost
foo  $123.46
bar  $234.57
baz  $345.68
quux $456.79

回答 2

从Pandas 0.17开始,现在有一个样式系统,该系统实质上使用Python格式字符串提供DataFrame的格式化视图:

import pandas as pd
import numpy as np

constants = pd.DataFrame([('pi',np.pi),('e',np.e)],
                   columns=['name','value'])
C = constants.style.format({'name': '~~ {} ~~', 'value':'--> {:15.10f} <--'})
C

这将显示格式化后的视图。

这是一个视图对象;DataFrame本身不会更改格式,但是DataFrame中的更新会反映在视图中:

constants.name = ['pie','eek']
C

但是,它似乎有一些局限性:

  • 在原位添加新的行和/或列似乎会导致样式视图不一致(不添加行/列标签):

    constants.loc[2] = dict(name='bogus', value=123.456)
    constants['comment'] = ['fee','fie','fo']
    constants
    

看起来不错,但是:

C

  • 格式化仅适用于值,不适用于索引条目:

    constants = pd.DataFrame([('pi',np.pi),('e',np.e)],
                   columns=['name','value'])
    constants.set_index('name',inplace=True)
    C = constants.style.format({'name': '~~ {} ~~', 'value':'--> {:15.10f} <--'})
    C
    

As of Pandas 0.17 there is now a styling system which essentially provides formatted views of a DataFrame using Python format strings:

import pandas as pd
import numpy as np

constants = pd.DataFrame([('pi',np.pi),('e',np.e)],
                   columns=['name','value'])
C = constants.style.format({'name': '~~ {} ~~', 'value':'--> {:15.10f} <--'})
C

which displays the formatted view.

This is a view object; the DataFrame itself does not change formatting, but updates in the DataFrame are reflected in the view:

constants.name = ['pie','eek']
C

However it appears to have some limitations:

  • Adding new rows and/or columns in-place seems to cause inconsistency in the styled view (doesn’t add row/column labels):

    constants.loc[2] = dict(name='bogus', value=123.456)
    constants['comment'] = ['fee','fie','fo']
    constants
    

which looks ok but:

C

  • Formatting works only for values, not index entries:

    constants = pd.DataFrame([('pi',np.pi),('e',np.e)],
                   columns=['name','value'])
    constants.set_index('name',inplace=True)
    C = constants.style.format({'name': '~~ {} ~~', 'value':'--> {:15.10f} <--'})
    C
    


回答 3

与上面 unutbu 的回答类似,您也可以如下使用 applymap:

import pandas as pd
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])

df = df.applymap("${0:.2f}".format)

Similar to unutbu above, you could also use applymap as follows:

import pandas as pd
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])

df = df.applymap("${0:.2f}".format)
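
One caveat worth noting: applymap replaces the floats with formatted strings, so the column dtype becomes object and the values no longer behave as numbers; a quick check:

print(df.dtypes)   # cost    object  -- the values are now strings, not floats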

回答 4

我喜欢将pandas.apply()与python format()结合使用。

import pandas as pd
s = pd.Series([1.357, 1.489, 2.333333])

make_float = lambda x: "${:,.2f}".format(x)
s.apply(make_float)

而且,它可以轻松地用于多列…

df = pd.concat([s, s * 2], axis=1)

make_floats = lambda row: "${:,.2f}, ${:,.3f}".format(row[0], row[1])
df.apply(make_floats, axis=1)

I like using pandas.apply() with python format().

import pandas as pd
s = pd.Series([1.357, 1.489, 2.333333])

make_float = lambda x: "${:,.2f}".format(x)
s.apply(make_float)

Also, it can be easily used with multiple columns…

df = pd.concat([s, s * 2], axis=1)

make_floats = lambda row: "${:,.2f}, ${:,.3f}".format(row[0], row[1])
df.apply(make_floats, axis=1)

回答 5

您还可以将语言环境设置为您所在的区域,并将float_format设置为使用货币格式。这将自动为美国的货币设置$符号。

import locale

locale.setlocale(locale.LC_ALL, "en_US.UTF-8")

pd.set_option("float_format", locale.currency)

df = pd.DataFrame(
    [123.4567, 234.5678, 345.6789, 456.7890],
    index=["foo", "bar", "baz", "quux"],
    columns=["cost"],
)
print(df)

        cost
foo  $123.46
bar  $234.57
baz  $345.68
quux $456.79

You can also set locale to your region and set float_format to use a currency format. This will automatically set $ sign for currency in USA.

import locale

locale.setlocale(locale.LC_ALL, "en_US.UTF-8")

pd.set_option("float_format", locale.currency)

df = pd.DataFrame(
    [123.4567, 234.5678, 345.6789, 456.7890],
    index=["foo", "bar", "baz", "quux"],
    columns=["cost"],
)
print(df)

        cost
foo  $123.46
bar  $234.57
baz  $345.68
quux $456.79

回答 6

摘要:


    df = pd.DataFrame({'money': [100.456, 200.789], 'share': ['100,000', '200,000']})
    print(df)
    print(df.to_string(formatters={'money': '${:,.2f}'.format}))
    for col_name in ('share',):
        df[col_name] = df[col_name].map(lambda p: int(p.replace(',', '')))
    print(df)
    """
        money    share
    0  100.456  100,000
    1  200.789  200,000

        money    share
    0 $100.46  100,000
    1 $200.79  200,000

         money   share
    0  100.456  100000
    1  200.789  200000
    """

summary:


    df = pd.DataFrame({'money': [100.456, 200.789], 'share': ['100,000', '200,000']})
    print(df)
    print(df.to_string(formatters={'money': '${:,.2f}'.format}))
    for col_name in ('share',):
        df[col_name] = df[col_name].map(lambda p: int(p.replace(',', '')))
    print(df)
    """
        money    share
    0  100.456  100,000
    1  200.789  200,000

        money    share
    0 $100.46  100,000
    1 $200.79  200,000

         money   share
    0  100.456  100000
    1  200.789  200000
    """

Pandas DataFrame到字典列表

问题:Pandas DataFrame到字典列表

我有以下DataFrame:

customer    item1      item2    item3
1           apple      milk     tomato
2           water      orange   potato
3           juice      mango    chips

我想将其转换为每行一个字典的列表:

rows = [{'customer': 1, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
    {'customer': 2, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
    {'customer': 3, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]

I have the following DataFrame:

customer    item1      item2    item3
1           apple      milk     tomato
2           water      orange   potato
3           juice      mango    chips

which I want to translate it to list of dictionaries per row

rows = [{'customer': 1, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
    {'customer': 2, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
    {'customer': 3, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]

回答 0

编辑

正如 John Galt 在他的回答中提到的那样,您可能应该改用 df.to_dict('records')。它比手动转置要快。

In [20]: timeit df.T.to_dict().values()
1000 loops, best of 3: 395 µs per loop

In [21]: timeit df.to_dict('records')
10000 loops, best of 3: 53 µs per loop

原始答案

使用df.T.to_dict().values(),如下所示:

In [1]: df
Out[1]:
   customer  item1   item2   item3
0         1  apple    milk  tomato
1         2  water  orange  potato
2         3  juice   mango   chips

In [2]: df.T.to_dict().values()
Out[2]:
[{'customer': 1.0, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
 {'customer': 2.0, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
 {'customer': 3.0, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]

Edit

As John Galt mentions in his answer, you should probably instead use df.to_dict('records'). It's faster than transposing manually.

In [20]: timeit df.T.to_dict().values()
1000 loops, best of 3: 395 µs per loop

In [21]: timeit df.to_dict('records')
10000 loops, best of 3: 53 µs per loop

Original answer

Use df.T.to_dict().values(), like below:

In [1]: df
Out[1]:
   customer  item1   item2   item3
0         1  apple    milk  tomato
1         2  water  orange  potato
2         3  juice   mango   chips

In [2]: df.T.to_dict().values()
Out[2]:
[{'customer': 1.0, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
 {'customer': 2.0, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
 {'customer': 3.0, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]

回答 1

使用 df.to_dict('records'),无需在外部转置即可得到所需输出。

In [2]: df.to_dict('records')
Out[2]:
[{'customer': 1L, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
 {'customer': 2L, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
 {'customer': 3L, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]

Use df.to_dict('records') — gives the output without having to transpose externally.

In [2]: df.to_dict('records')
Out[2]:
[{'customer': 1L, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
 {'customer': 2L, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
 {'customer': 3L, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]

回答 2

作为对 John Galt 答案的扩展:

对于以下DataFrame,

   customer  item1   item2   item3
0         1  apple    milk  tomato
1         2  water  orange  potato
2         3  juice   mango   chips

如果要获取包含索引值的词典列表,可以执行以下操作:

df.to_dict('index')

这会输出字典的字典,其中父字典的键是索引值。在这种情况下:

{0: {'customer': 1, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
 1: {'customer': 2, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
 2: {'customer': 3, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}}

As an extension to John Galt’s answer –

For the following DataFrame,

   customer  item1   item2   item3
0         1  apple    milk  tomato
1         2  water  orange  potato
2         3  juice   mango   chips

If you want to get a list of dictionaries including the index values, you can do something like,

df.to_dict('index')

Which outputs a dictionary of dictionaries where keys of the parent dictionary are index values. In this particular case,

{0: {'customer': 1, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
 1: {'customer': 2, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
 2: {'customer': 3, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}}
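
If what is ultimately wanted is still a list of dictionaries but with the index value carried along, the result above can be flattened; two sketches (the key name row_id is an arbitrary choice):

# Option 1: let pandas do it -- the index becomes a regular 'index' key
rows = df.reset_index().to_dict('records')

# Option 2: flatten the 'index' orient by hand, choosing the key name yourself
rows = [dict(v, row_id=k) for k, v in df.to_dict('index').items()]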

回答 3

如果您只想选择一列,可以使用以下方法。

df[["item1"]].to_dict("records")

下面的代码将不起作用,并会产生 TypeError: unsupported type。我相信这是因为它试图将 Series 转换为字典,而不是将 DataFrame 转换为字典。

df["item1"].to_dict("records")

我当时只需要选择一列并将其转换为以列名为键的字典列表,在这里卡了一阵子,所以想分享一下。

If you are interested in only selecting one column this will work.

df[["item1"]].to_dict("records")

The below will NOT work and produces a TypeError: unsupported type. I believe this is because it is trying to convert a Series to a dict and not a DataFrame to a dict.

df["item1"].to_dict("records")

I had a requirement to only select one column and convert it to a list of dicts with the column name as the key and was stuck on this for a bit so figured I’d share.