Tag archive: rows

numpy – add rows to an array

Question: numpy – add rows to an array

How does one add rows to a numpy array?

I have an array A:

A = array([[0, 1, 2], [0, 2, 0]])

I wish to add rows to this array from another array X if the first element of each row in X meets a specific condition.

Numpy arrays do not have a method ‘append’ like that of lists, or so it seems.

If A and X were lists I would merely do:

for i in X:
    if i[0] < 3:
        A.append(i)

Is there a numpythonic way to do the equivalent?

Thanks, S ;-)


Answer 0

What is X? If it is a 2D-array, how can you then compare its row to a number: i < 3?

EDIT after OP’s comment:

A = array([[0, 1, 2], [0, 2, 0]])
X = array([[0, 1, 2], [1, 2, 0], [2, 1, 2], [3, 2, 0]])

add to A all rows from X where the first element < 3:

import numpy as np
A = np.vstack((A, X[X[:,0] < 3]))

# returns: 
array([[0, 1, 2],
       [0, 2, 0],
       [0, 1, 2],
       [1, 2, 0],
       [2, 1, 2]])

Answer 1

Well, you can do this:

  newrow = [1,2,3]
  A = numpy.vstack([A, newrow])

Answer 2

Since this question was asked seven years ago: with the latest versions I am using (numpy 1.13 and Python 3), I do the same thing when adding a row to a matrix. Remember to put double brackets around the second argument, otherwise it will raise a dimension error.

Here I am appending to matrix A

1 2 3
4 5 6

with a row

7 8 9

np.r_ offers the same usage:

import numpy as np

A = [[1, 2, 3], [4, 5, 6]]
np.append(A, [[7, 8, 9]], axis=0)

    >> array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
#or 
np.r_[A,[[7,8,9]]]

In case someone is interested: if you would like to add a column,

array = np.c_[A, np.zeros(len(A))]   # a zeros column with one entry per row of A

Following what we did before on matrix A, adding a column to it:

np.c_[A, [2,8]]

>> array([[1, 2, 3, 2],
          [4, 5, 6, 8]])

Answer 3

You can also do this:

newrow = [1,2,3]
A = numpy.concatenate((A, [newrow]))  # wrap newrow so both operands are 2-D

Answer 4

If no calculations are necessary after every row, it’s much quicker to add rows in python, then convert to numpy. Here are timing tests using python 3.6 vs. numpy 1.14, adding 100 rows, one at a time:

import numpy as np 
from time import perf_counter, sleep

def time_it():
    # Compare performance of two methods for adding rows to numpy array
    py_array = [[0, 1, 2], [0, 2, 0]]
    py_row = [4, 5, 6]
    numpy_array = np.array(py_array)
    numpy_row = np.array([4,5,6])
    n_loops = 100

    start_clock = perf_counter()
    for count in range(0, n_loops):
        numpy_array = np.vstack([numpy_array, numpy_row]) # 5.8 micros
    duration = perf_counter() - start_clock
    print('numpy 1.14 takes {:.3f} micros per row'.format(duration * 1e6 / n_loops))

    start_clock = perf_counter()
    for count in range(0, n_loops):
        py_array.append(py_row) # .15 micros
    numpy_array = np.array(py_array) # 43.9 micros       
    duration = perf_counter() - start_clock
    print('python 3.6 takes {:.3f} micros per row'.format(duration * 1e6 / n_loops))
    sleep(15)

#time_it() prints:

numpy 1.14 takes 5.971 micros per row
python 3.6 takes 0.694 micros per row

So, the simple solution to the original question, from seven years ago, is to use vstack() to add a new row after converting the row to a numpy array. But a more realistic solution should consider vstack’s poor performance under those circumstances. If you don’t need to run data analysis on the array after every addition, it is better to buffer the new rows to a python list of rows (a list of lists, really), and add them as a group to the numpy array using vstack() before doing any data analysis.
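
For example, here is a minimal sketch of that buffering pattern (the row values are illustrative):

import numpy as np

buffer = []                             # plain Python list used as a row buffer
for i in range(1000):
    buffer.append([i, i + 1, i + 2])    # cheap list append per incoming row

A = np.array(buffer)                    # single conversion once all rows are in
# or, to extend an existing 2-D array A in one shot:
# A = np.vstack([A, np.array(buffer)])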


Answer 5

import numpy as np
array_ = np.array([[1,2,3]])
add_row = np.array([[4,5,6]])

array_ = np.concatenate((array_, add_row), axis=0)

Answer 6

If you can do the construction in a single operation, then something like the vstack-with-fancy-indexing answer is a fine approach. But if your condition is more complicated or your rows come in on the fly, you may want to grow the array. In fact the numpythonic way to do something like this – dynamically grow an array – is to dynamically grow a list:

A = np.array([[1,2,3],[4,5,6]])
Alist = [r for r in A]
for i in range(100):
    newrow = np.arange(3)+i
    if i%5:
        Alist.append(newrow)
A = np.array(Alist)
del Alist

Lists are highly optimized for this kind of access pattern; you don’t have convenient numpy multidimensional indexing while in list form, but for as long as you’re appending it’s hard to do better than a list of row arrays.


Answer 7

I use np.vstack. For example:

import numpy as np

input_array=np.array([1,2,3])
new_row= np.array([4,5,6])

new_array=np.vstack([input_array, new_row])

Answer 8

You can use numpy.append() to append a row to a numpy array and reshape it into a matrix later on.

import numpy as np
a = np.array([1,2])
a = np.append(a, [3,4])
print(a)
# [1,2,3,4]
# in your example
A = [1,2]
for row in X:
    A = np.append(A, row)
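
For instance, here is a small sketch of that append-then-reshape idea applied to the original question (starting from an empty buffer so the element count stays divisible by the row width):

import numpy as np

X = np.array([[0, 1, 2], [1, 2, 0], [2, 1, 2], [3, 2, 0]])
A = np.empty(0, dtype=X.dtype)   # flat 1-D buffer
for row in X:
    if row[0] < 3:
        A = np.append(A, row)    # append flattens the row into the buffer

A = A.reshape(-1, 3)             # reshape the flat buffer back into 3-column rows
# array([[0, 1, 2],
#        [1, 2, 0],
#        [2, 1, 2]])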

How to iterate over rows in a DataFrame in Pandas?

Question: How to iterate over rows in a DataFrame in Pandas?

I have a DataFrame from pandas:

import pandas as pd
inp = [{'c1':10, 'c2':100}, {'c1':11,'c2':110}, {'c1':12,'c2':120}]
df = pd.DataFrame(inp)
print(df)

Output:

   c1   c2
0  10  100
1  11  110
2  12  120

Now I want to iterate over the rows of this frame. For every row I want to be able to access its elements (values in cells) by the name of the columns. For example:

for row in df.rows:
   print(row['c1'], row['c2'])

Is it possible to do that in pandas?

I found this similar question. But it does not give me the answer I need. For example, it is suggested there to use:

for date, row in df.T.iteritems():

or

for row in df.iterrows():

But I do not understand what the row object is and how I can work with it.


Answer 0

DataFrame.iterrows is a generator which yields both the index and the row

import pandas as pd
import numpy as np

df = pd.DataFrame([{'c1':10, 'c2':100}, {'c1':11,'c2':110}, {'c1':12,'c2':120}])

for index, row in df.iterrows():
    print(row['c1'], row['c2'])

Output: 
   10 100
   11 110
   12 120

Answer 1

How to iterate over rows in a DataFrame in Pandas?

Answer: DON’T*!

Iteration in pandas is an anti-pattern, and is something you should only do when you have exhausted every other option. You should not use any function with “iter” in its name for more than a few thousand rows or you will have to get used to a lot of waiting.

Do you want to print a DataFrame? Use DataFrame.to_string().

Do you want to compute something? In that case, search for methods in this order (list modified from here):

  1. Vectorization
  2. Cython routines
  3. List Comprehensions (vanilla for loop)
  4. DataFrame.apply(): i)  Reductions that can be performed in cython, ii) Iteration in python space
  5. DataFrame.itertuples() and iteritems()
  6. DataFrame.iterrows()

iterrows and itertuples (both receiving many votes in answers to this question) should be used in very rare circumstances, such as generating row objects/namedtuples for sequential processing, which is really the only thing these functions are useful for.

Appeal to Authority
The docs page on iteration has a huge red warning box that says:

Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed […].

* It’s actually a little more complicated than “don’t”. df.iterrows() is the correct answer to this question, but “vectorize your ops” is the better one. I will concede that there are circumstances where iteration cannot be avoided (for example, some operations where the result depends on the value computed for the previous row). However, it takes some familiarity with the library to know when. If you’re not sure whether you need an iterative solution, you probably don’t. PS: To know more about my rationale for writing this answer, skip to the very bottom.
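
For instance (an illustrative sketch with made-up column names), a running level clamped at zero is hard to vectorize directly, because each row's result feeds into the next row's calculation:

import pandas as pd

df = pd.DataFrame({'delta': [1, -2, 3, 1]})
level = 0
levels = []
for d in df['delta']:            # iteration is hard to avoid here:
    level = max(level + d, 0)    # each result depends on the previous one
    levels.append(level)
df['level'] = levels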


Faster than Looping: Vectorization, Cython

A good number of basic operations and computations are “vectorised” by pandas (either through NumPy, or through Cythonized functions). This includes arithmetic, comparisons, (most) reductions, reshaping (such as pivoting), joins, and groupby operations. Look through the documentation on Essential Basic Functionality to find a suitable vectorised method for your problem.

If none exists, feel free to write your own using custom cython extensions.


Next Best Thing: List Comprehensions*

List comprehensions should be your next port of call if 1) there is no vectorized solution available, 2) performance is important, but not important enough to go through the hassle of cythonizing your code, and 3) you're trying to perform an elementwise transformation on your data. There is a good amount of evidence to suggest that list comprehensions are sufficiently fast (and even sometimes faster) for many common pandas tasks.

The formula is simple,

# iterating over one column - `f` is some function that processes your data
result = [f(x) for x in df['col']]
# iterating over two columns, use `zip`
result = [f(x, y) for x, y in zip(df['col1'], df['col2'])]
# iterating over multiple columns - same data type
result = [f(row[0], ..., row[n]) for row in df[['col1', ...,'coln']].to_numpy()]
# iterating over multiple columns - differing data type
result = [f(row[0], ..., row[n]) for row in zip(df['col1'], ..., df['coln'])]

If you can encapsulate your business logic into a function, you can use a list comprehension that calls it. You can make arbitrarily complex things work through the simplicity and speed of raw python.

Caveats
List comprehensions assume that your data is easy to work with – what that means is your data types are consistent and you don’t have NaNs, but this cannot always be guaranteed.

  1. The first one is more obvious, but when dealing with NaNs, prefer in-built pandas methods if they exist (because they have much better corner-case handling logic), or ensure your business logic includes appropriate NaN handling logic.
  2. When dealing with mixed data types you should iterate over zip(df['A'], df['B'], ...) instead of df[['A', 'B']].to_numpy() as the latter implicitly upcasts data to the most common type. As an example, if A is numeric and B is string, to_numpy() will cast the entire array to string, which may not be what you want. Fortunately, zipping your columns together is the most straightforward workaround to this. (A small sketch of the upcasting behaviour follows this list.)
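
As a small sketch of that upcasting behaviour, using the milder int-plus-float case:

import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [0.5, 1.5]})

print(df[['A', 'B']].to_numpy())    # float64 array: the int column was upcast
print(list(zip(df['A'], df['B'])))  # zip keeps 1 and 2 as integers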

* YMMV for the reasons outlined in the Caveats section above.


An Obvious Example

Let's demonstrate the difference with a simple example of adding two pandas columns A + B. This is a vectorizable operation, so it will be easy to contrast the performance of the methods discussed above.

Benchmarking code, for your reference.
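
The original benchmarking code is not reproduced here; the following is a minimal stand-in sketch contrasting the main approaches on the A + B task (array size and timings are illustrative and will vary by machine):

import numpy as np
import pandas as pd
from timeit import timeit

df = pd.DataFrame({'A': np.random.rand(10_000), 'B': np.random.rand(10_000)})

vectorized = lambda: df['A'] + df['B']                                     # pandas vectorization
list_comp = lambda: [a + b for a, b in zip(df['A'], df['B'])]              # list comprehension
iterrows_loop = lambda: [row['A'] + row['B'] for _, row in df.iterrows()]  # iteration

for name, fn in [('vectorized', vectorized),
                 ('list comprehension', list_comp),
                 ('iterrows', iterrows_loop)]:
    print(name, timeit(fn, number=10))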

I should mention, however, that it isn't always this cut and dried. Sometimes the answer to "what is the best method for an operation" is "it depends on your data". My advice is to test out different approaches on your data before settling on one.


Further Reading

* Pandas string methods are “vectorized” in the sense that they are specified on the series but operate on each element. The underlying mechanisms are still iterative, because string operations are inherently hard to vectorize.


Why I Wrote this Answer

A common trend I notice from new users is to ask questions of the form "how can I iterate over my df to do X?", showing code that calls iterrows() while doing something inside a for loop. Here is why. A new user to the library who has not been introduced to the concept of vectorization will likely envision the code that solves their problem as iterating over their data to do something. Not knowing how to iterate over a DataFrame, the first thing they do is Google it and end up here, at this question. They then see the accepted answer telling them how to, and they close their eyes and run this code without ever first questioning whether iteration is the right thing to do.

The aim of this answer is to help new users understand that iteration is not necessarily the solution to every problem, and that better, faster and more idiomatic solutions could exist, and that it is worth investing time in exploring them. I’m not trying to start a war of iteration vs vectorization, but I want new users to be informed when developing solutions to their problems with this library.


Answer 2

First consider if you really need to iterate over rows in a DataFrame. See this answer for alternatives.

If you still need to iterate over rows, you can use methods below. Note some important caveats which are not mentioned in any of the other answers.

itertuples() is supposed to be faster than iterrows()

But be aware, according to the docs (pandas 0.24.2 at the moment):

  • iterrows: dtype might not match from row to row

    Because iterrows returns a Series for each row, it does not preserve dtypes across the rows (dtypes are preserved across columns for DataFrames). To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the values and which is generally much faster than iterrows()

  • iterrows: Do not modify rows

    You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect.

    Use DataFrame.apply() instead:

    new_df = df.apply(lambda x: x * 2)
    
  • itertuples:

    The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start with an underscore. With a large number of columns (>255), regular tuples are returned.

See pandas docs on iteration for more details.
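
To make the first caveat above concrete, here is a quick sketch of the dtype difference between the two methods:

import pandas as pd

df = pd.DataFrame({'i': [1, 2], 'f': [0.5, 1.5]})

row = next(df.iterrows())[1]      # a Series: the int value is upcast to float64
print(row['i'], type(row['i']))   # 1.0

tup = next(df.itertuples())       # a namedtuple: the int value keeps an integer type
print(tup.i, type(tup.i))         # 1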


Answer 3

You should use df.iterrows(). Though iterating row-by-row is not especially efficient since Series objects have to be created.


Answer 4

While iterrows() is a good option, sometimes itertuples() can be much faster:

import pandas as pd
from numpy.random import randn, randint  # imports assumed for this snippet

df = pd.DataFrame({'a': randn(1000), 'b': randn(1000), 'N': randint(100, 1000, (1000)), 'x': 'x'})

%timeit [row.a * 2 for idx, row in df.iterrows()]
# => 10 loops, best of 3: 50.3 ms per loop

%timeit [row[1] * 2 for row in df.itertuples()]
# => 1000 loops, best of 3: 541 µs per loop

Answer 5

You can also use df.apply() to iterate over rows and access multiple columns for a function.

docs: DataFrame.apply()

def valuation_formula(x, y):
    return x * y * 0.5

df['price'] = df.apply(lambda row: valuation_formula(row['x'], row['y']), axis=1)

Answer 6

You can use the df.iloc function as follows:

for i in range(0, len(df)):
    print(df.iloc[i]['c1'], df.iloc[i]['c2'])

Answer 7

I was looking for how to iterate over rows AND columns and ended up here, so:

for i, row in df.iterrows():
    for j, column in row.iteritems():
        print(column)

Answer 8

You can write your own iterator that yields namedtuples

from collections import namedtuple

def myiter(d, cols=None):
    if cols is None:
        v = d.values.tolist()
        cols = d.columns.values.tolist()
    else:
        j = [d.columns.get_loc(c) for c in cols]
        v = d.values[:, j].tolist()

    n = namedtuple('MyTuple', cols)

    for line in iter(v):
        yield n(*line)

This is directly comparable to pd.DataFrame.itertuples. I’m aiming at performing the same task with more efficiency.


For the given dataframe with my function:

list(myiter(df))

[MyTuple(c1=10, c2=100), MyTuple(c1=11, c2=110), MyTuple(c1=12, c2=120)]

Or with pd.DataFrame.itertuples:

list(df.itertuples(index=False))

[Pandas(c1=10, c2=100), Pandas(c1=11, c2=110), Pandas(c1=12, c2=120)]

A comprehensive test
We test making all columns available and subsetting the columns.

import numpy as np
import pandas as pd
from timeit import timeit

def iterfullA(d):
    return list(myiter(d))

def iterfullB(d):
    return list(d.itertuples(index=False))

def itersubA(d):
    return list(myiter(d, ['col3', 'col4', 'col5', 'col6', 'col7']))

def itersubB(d):
    return list(d[['col3', 'col4', 'col5', 'col6', 'col7']].itertuples(index=False))

res = pd.DataFrame(
    index=[10, 30, 100, 300, 1000, 3000, 10000, 30000],
    columns='iterfullA iterfullB itersubA itersubB'.split(),
    dtype=float
)

for i in res.index:
    d = pd.DataFrame(np.random.randint(10, size=(i, 10))).add_prefix('col')
    for j in res.columns:
        stmt = '{}(d)'.format(j)
        setp = 'from __main__ import d, {}'.format(j)
        res.at[i, j] = timeit(stmt, setp, number=100)

res.groupby(res.columns.str[4:-1], axis=1).plot(loglog=True);


Answer 9

How to iterate efficiently?

If you really have to iterate a pandas dataframe, you will probably want to avoid using iterrows(). There are different methods and the usual iterrows() is far from being the best. itertuples() can be 100 times faster.

In short:

  • As a general rule, use df.itertuples(name=None). In particular, when you have a fixed number of columns and fewer than 255 of them. See point (3)
  • Otherwise, use df.itertuples() except if your columns have special characters such as spaces or '-'. See point (2)
  • It is possible to use itertuples() even if your dataframe has strange column names, by using the last example. See point (4)
  • Only use iterrows() if you cannot use the previous solutions. See point (1)

Different methods to iterate over rows in a pandas dataframe:

Generate a random dataframe with a million rows and 4 columns:

    import time
    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randint(0, 100, size=(1000000, 4)), columns=list('ABCD'))
    print(df)

1) The usual iterrows() is convenient but damn slow:

start_time = time.perf_counter()  # time.clock() was removed in Python 3.8
result = 0
for _, row in df.iterrows():
    result += max(row['B'], row['C'])

total_elapsed_time = round(time.perf_counter() - start_time, 2)
print("1. Iterrows done in {} seconds, result = {}".format(total_elapsed_time, result))

2) The default itertuples() is already much faster, but it doesn't work with column names such as My Col-Name is very Strange (you should avoid this method if your columns are repeated or if a column name cannot simply be converted to a Python variable name):

start_time = time.perf_counter()
result = 0
for row in df.itertuples(index=False):
    result += max(row.B, row.C)

total_elapsed_time = round(time.perf_counter() - start_time, 2)
print("2. Named Itertuples done in {} seconds, result = {}".format(total_elapsed_time, result))

3) itertuples() with name=None is even faster, but not really convenient, as you have to define a variable per column.

start_time = time.perf_counter()
result = 0
for (_, col1, col2, col3, col4) in df.itertuples(name=None):
    result += max(col2, col3)

total_elapsed_time = round(time.perf_counter() - start_time, 2)
print("3. Itertuples done in {} seconds, result = {}".format(total_elapsed_time, result))

4) Finally, the named itertuples() is slower than the previous point but you do not have to define a variable per column and it works with column names such as My Col-Name is very Strange.

start_time = time.perf_counter()
result = 0
for row in df.itertuples(index=False):
    result += max(row[df.columns.get_loc('B')], row[df.columns.get_loc('C')])

total_elapsed_time = round(time.perf_counter() - start_time, 2)
print("4. Polyvalent Itertuples working even with special characters in the column name done in {} seconds, result = {}".format(total_elapsed_time, result))

Output:

         A   B   C   D
0       41  63  42  23
1       54   9  24  65
2       15  34  10   9
3       39  94  82  97
4        4  88  79  54
...     ..  ..  ..  ..
999995  48  27   4  25
999996  16  51  34  28
999997   1  39  61  14
999998  66  51  27  70
999999  51  53  47  99

[1000000 rows x 4 columns]

1. Iterrows done in 104.96 seconds, result = 66151519
2. Named Itertuples done in 1.26 seconds, result = 66151519
3. Itertuples done in 0.94 seconds, result = 66151519
4. Polyvalent Itertuples working even with special characters in the column name done in 2.94 seconds, result = 66151519

This article is a very interesting comparison between iterrows and itertuples


Answer 10

To loop all rows in a dataframe you can use:

for x in range(len(date_example.index)):
    print(date_example['Date'].iloc[x])

Answer 11

for ind in df.index:
    print(df['c1'][ind], df['c2'][ind])

Answer 12

Sometimes a useful pattern is:

# Borrowing @KutalmisB df example
df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]}, index=['a', 'b'])
# The to_dict call results in a list of dicts
# where each row_dict is a dictionary with k:v pairs of columns:value for that row
for row_dict in df.to_dict(orient='records'):
    print(row_dict)

Which results in:

{'col1':1.0, 'col2':0.1}
{'col1':2.0, 'col2':0.2}

Answer 13

To loop all rows in a dataframe and use values of each row conveniently, namedtuples can be converted to ndarrays. For example:

df = pd.DataFrame({'col1': [1, 2], 'col2': [0.1, 0.2]}, index=['a', 'b'])

Iterating over the rows:

import numpy as np

for row in df.itertuples(index=False, name='Pandas'):
    print(np.asarray(row))

results in:

[ 1.   0.1]
[ 2.   0.2]

Please note that if index=True, the index is added as the first element of the tuple, which may be undesirable for some applications.
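
A quick demonstration of that note:

import pandas as pd

df = pd.DataFrame({'col1': [1, 2]}, index=['a', 'b'])
print(next(df.itertuples(index=True)))   # Pandas(Index='a', col1=1)
print(next(df.itertuples(index=False)))  # Pandas(col1=1)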


Answer 14

There is a way to iterate through rows while getting a DataFrame in return, rather than a Series. I don't see anyone mentioning that you can pass the index as a list for the row to be returned as a DataFrame:

for i in range(len(df)):
    row = df.iloc[[i]]

Note the usage of double brackets. This returns a DataFrame with a single row.
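
A quick check of the difference between single and double brackets:

import pandas as pd

df = pd.DataFrame({'c1': [10, 11], 'c2': [100, 110]})
print(type(df.iloc[0]))    # <class 'pandas.core.series.Series'>
print(type(df.iloc[[0]]))  # <class 'pandas.core.frame.DataFrame'>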


Answer 15

For both viewing and modifying values, I would use iterrows(). In a for loop and by using tuple unpacking (see the example: i, row), I use the row for only viewing the value and use i with the loc method when I want to modify values. As stated in previous answers, here you should not modify something you are iterating over.

for i, row in df.iterrows():
    if df.loc[i, 'A'] == 'Old_Value':
        df.loc[i, 'A'] = 'New_value'   # modify via the DataFrame (i with loc), not the row copy

Here the row in the loop is a copy of that row, not a view of it. Therefore, you should NOT write something like row['A'] = 'New_Value'; it will not modify the DataFrame. However, you can use i and loc and specify the DataFrame to do the work.


Answer 16

I know I’m late to the answering party, but I just wanted to add to @cs95’s answer above, which I believe should be the accepted answer. In his answer, he shows that pandas vectorization far outperforms other pandas methods for computing stuff with dataframes.

I wanted to add that if you first convert the dataframe to a NumPy array and then use vectorization, it is even faster than pandas dataframe vectorization (and that includes the time to turn it back into a pandas Series).

If you add the following functions to @cs95’s benchmark code, this becomes pretty evident:

def np_vectorization(df):
    np_arr = df.to_numpy()
    return pd.Series(np_arr[:,0] + np_arr[:,1], index=df.index)

def just_np_vectorization(df):
    np_arr = df.to_numpy()
    return np_arr[:,0] + np_arr[:,1]
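
For completeness, here is a usage sketch for those functions; the two-column frame below is an assumed stand-in for the benchmark's A + B setup, and its size is an arbitrary choice:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': np.random.rand(1_000_000),
                   'B': np.random.rand(1_000_000)})

out_series = np_vectorization(df)       # numpy add, wrapped back into a Series
out_array = just_np_vectorization(df)   # numpy add, left as a raw ndarray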


Answer 17

You can also use numpy indexing for even greater speed-ups. It's not really iterating, but it works much better than iteration for certain applications.

subset = df['c1'][0:5]   # assuming df is the DataFrame: first five values of column 'c1'
values = df['c1'][:]     # all values of the column

You may also want to cast it to an array. These indexes/selections are supposed to act like Numpy arrays already but I ran into issues and needed to cast

np.asarray(values)
# author's original example: resize every image in an hdf5 dataset (assumes cv2 and an `imgs` dataset)
imgs[:] = cv2.resize(imgs[:], (224,224))

Answer 18

There are many ways to iterate over the rows of a pandas dataframe. One very simple and intuitive way is:

df=pd.DataFrame({'A':[1,2,3], 'B':[4,5,6],'C':[7,8,9]})
print(df)
for i in range(df.shape[0]):
    # For printing the second column
    print(df.iloc[i,1])
    # For printing more than one column
    print(df.iloc[i,[0,2]])

Answer 19

This example uses iloc to isolate each value in the data frame.

import pandas as pd

a = [1, 2, 3, 4]
b = [5, 6, 7, 8]

mjr = pd.DataFrame({'a': a, 'b': b})

size = mjr.shape

for i in range(size[0]):
    for j in range(size[1]):
        print(mjr.iloc[i, j])

Answer 20

Some libraries (e.g. a Java interop library that I use) require values to be passed in a row at a time, for example, if streaming data. To replicate the streaming nature, I 'stream' my dataframe values one by one; I wrote the class below, which comes in handy from time to time.

from typing import Dict, List  # needed for the type hints below

class DataFrameReader:
  def __init__(self, df):
    self._df = df
    self._row = None
    self._columns = df.columns.tolist()
    self.reset()
    self.row_index = 0

  def __getattr__(self, key):
    return self.__getitem__(key)

  def read(self) -> bool:
    self._row = next(self._iterator, None)
    self.row_index += 1
    return self._row is not None

  def columns(self):
    return self._columns

  def reset(self) -> None:
    self._iterator = self._df.itertuples()

  def get_index(self):
    return self._row[0]

  def index(self):
    return self._row[0]

  def to_dict(self, columns: List[str] = None):
    return self.row(columns=columns)

  def tolist(self, cols) -> List[object]:
    return [self.__getitem__(c) for c in cols]

  def row(self, columns: List[str] = None) -> Dict[str, object]:
    cols = set(self._columns if columns is None else columns)
    return {c : self.__getitem__(c) for c in self._columns if c in cols}

  def __getitem__(self, key) -> object:
    # the df index of the row is at index 0
    try:
        if type(key) is list:
            ix = [self._columns.index(k) + 1 for k in key]
        else:
            ix = self._columns.index(key) + 1
        return self._row[ix]
    except BaseException as e:
        return None

  def __next__(self) -> 'DataFrameReader':
    if self.read():
        return self
    else:
        raise StopIteration

  def __iter__(self) -> 'DataFrameReader':
    return self

Which can be used:

for row in DataFrameReader(df):
  print(row.my_column_name)
  print(row.to_dict())
  print(row['my_column_name'])
  print(row.tolist())

And it preserves the value/name mapping for the rows being iterated. Obviously, it is a lot slower than using apply and Cython as indicated above, but it is necessary in some circumstances.


Answer 21

In short

  • Use vectorization if possible
  • If the operation can't be vectorized – use list comprehensions
  • If you need a single object representing the entire row – use itertuples
  • If the above is too slow – try swifter.apply (see the sketch after this list)
  • If it's still too slow – try a Cython routine
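
A hedged sketch of the swifter option, assuming the third-party swifter package is installed (pip install swifter):

import pandas as pd
import swifter  # importing it registers the .swifter accessor on pandas objects

df = pd.DataFrame({'x': range(1_000_000)})
# swifter decides between vectorization, parallel apply, and plain apply under the hood
df['y'] = df['x'].swifter.apply(lambda v: v * 2)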

Details in this video

Benchmark