Why were pandas merges in Python faster than data.table merges in R in 2012?


I recently came across the pandas library for python, which according to this benchmark performs very fast in-memory merges. It’s even faster than the data.table package in R (my language of choice for analysis).

Why is pandas so much faster than data.table? Is it because of an inherent speed advantage python has over R, or is there some tradeoff I’m not aware of? Is there a way to perform inner and outer joins in data.table without resorting to merge(X, Y, all=FALSE) and merge(X, Y, all=TRUE)?

Here’s the R code and the Python code used to benchmark the various packages.
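For concreteness, here is a minimal sketch (with made-up tables and columns) of the data.table join idioms being asked about: an inner join can be written with the bracket syntax on keyed tables, while a full outer join still goes through merge().

```r
library(data.table)

# Hypothetical keyed tables, just for illustration
X <- data.table(id = 1:3, x = c("a", "b", "c"), key = "id")
Y <- data.table(id = 2:4, y = c(10, 20, 30), key = "id")

X[Y]                     # join on the key: all rows of Y, NA-filled where X has no match
X[Y, nomatch = 0]        # inner join (same rows as merge(X, Y, all = FALSE))
merge(X, Y, all = TRUE)  # full outer join still uses merge()
```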


Answer 0


It looks like Wes may have discovered a known issue in data.table when the number of unique strings (levels) is large: 10,000.

Does Rprof() reveal most of the time spent in the call sortedmatch(levels(i[[lc]]), levels(x[[rc]]))? This isn’t really the join itself (the algorithm), but a preliminary step.
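For anyone who wants to check this themselves, here is a minimal sketch of doing exactly that with base R's Rprof(), on made-up data sized loosely like the benchmark (many unique string levels); the table and column names are illustrative only.

```r
library(data.table)
set.seed(42)

# Made-up factor-keyed tables: ~1 million rows in Y, 600,000 unique strings
pool <- sprintf("id%06d", 1:600000)
X <- data.table(id = factor(pool), x = rnorm(600000))
Y <- data.table(id = factor(sample(pool, 1e6, replace = TRUE)), y = rnorm(1e6))
setkey(X, id)
setkey(Y, id)

Rprof("join.prof")                        # start R's sampling profiler
ans <- X[Y]                               # the join under investigation
Rprof(NULL)                               # stop profiling
head(summaryRprof("join.prof")$by.total)  # in old data.table versions, sortedmatch() dominated here
```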

Recent efforts have gone into allowing character columns in keys, which should resolve that issue by integrating more closely with R’s own global string hash table. Some benchmark results are already reported by test.data.table(), but that code isn’t hooked up yet to replace the levels-to-levels match.

Are pandas merges faster than data.table for regular integer columns? That should be a way to isolate the algorithm itself vs factor issues.

Also, data.table has time series merge in mind. Two aspects to that: i) multi-column ordered keys such as (id, datetime); ii) fast prevailing join (roll=TRUE), a.k.a. last observation carried forward.
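A small sketch of both aspects, with made-up tables: a two-column key of (id, datetime) and a roll = TRUE join that carries the last observation forward.

```r
library(data.table)

prices <- data.table(
  id       = c("A", "A", "B"),
  datetime = as.POSIXct(c("2012-01-01 09:00:00", "2012-01-01 10:00:00", "2012-01-01 09:30:00")),
  price    = c(100, 101, 50)
)
setkey(prices, id, datetime)   # multi-column ordered key

trades <- data.table(
  id       = c("A", "B"),
  datetime = as.POSIXct(c("2012-01-01 09:45:00", "2012-01-01 09:45:00"))
)
setkey(trades, id, datetime)

prices[trades, roll = TRUE]    # prevailing join: last price at or before each trade time
```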

I’ll need some time to confirm as it’s the first I’ve seen of the comparison to data.table as presented.


UPDATE from data.table v1.8.0 released July 2012

  • Internal function sortedmatch() removed and replaced with chmatch() when matching i levels to x levels for columns of type ‘factor’. This preliminary step was causing a (known) significant slowdown when the number of levels of a factor column was large (e.g. >10,000). Exacerbated in tests of joining four such columns, as demonstrated by Wes McKinney (author of Python package Pandas). Matching 1 million strings, of which 600,000 are unique, is now reduced from 16s to 0.5s, for example.

Also in that release were:

  • character columns are now allowed in keys and are preferred to factor. data.table() and setkey() no longer coerce character to factor. Factors are still supported. Implements FR#1493, FR#1224 and (partially) FR#951.

  • New functions chmatch() and %chin%, faster versions of match() and %in% for character vectors. R’s internal string cache is utilised (no hash table is built). They are about 4 times faster than match() on the example in ?chmatch.
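A small usage sketch of those two functions, on made-up vectors:

```r
library(data.table)

x      <- c("apple", "banana", "cherry", "banana")
lookup <- c("banana", "cherry", "date")

chmatch(x, lookup)   # NA 1 2 1  -- like match(), but via R's internal string cache
x %chin% lookup      # FALSE TRUE TRUE TRUE  -- like %in% for character vectors
```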

As of Sep 2013 data.table is v1.8.10 on CRAN and we’re working on v1.9.0. NEWS is updated live.


But as I wrote originally, above:

data.table has time series merge in mind. Two aspects to that: i) multi-column ordered keys such as (id, datetime); ii) fast prevailing join (roll=TRUE), a.k.a. last observation carried forward.

So the Pandas equi join of two character columns is probably still faster than data.table, since it sounds like pandas hashes the combined two columns. data.table doesn’t hash the key because it has prevailing ordered joins in mind. A “key” in data.table is literally just the sort order (similar to a clustered index in SQL; i.e., that’s how the data is ordered in RAM). On the list is to add secondary keys, for example.
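To make the "key is just the sort order" point concrete, a tiny sketch with made-up data:

```r
library(data.table)

DT <- data.table(id = c(3L, 1L, 2L),
                 datetime = as.POSIXct("2012-01-01") + c(30, 10, 20),
                 v = c("c", "a", "b"))

setkey(DT, id, datetime)  # physically reorders the rows in RAM by (id, datetime)
key(DT)                   # "id" "datetime" -- no hash table, just the recorded sort order
DT                        # rows are now stored sorted, so joins can use binary search
```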

In summary, the glaring speed difference highlighted by this particular two-character-column test with over 10,000 unique strings shouldn’t be as bad now, since the known problem has been fixed.


Answer 1


The reason pandas is faster is because I came up with a better algorithm, which is implemented very carefully using a fast hash table implementation (klib) and in C/Cython to avoid the Python interpreter overhead for the non-vectorizable parts. The algorithm is described in some detail in my presentation: A look inside pandas design and development.

The comparison with data.table is actually a bit interesting because the whole point of R’s data.table is that it contains pre-computed indexes for various columns to accelerate operations like data selection and merges. In this case (database joins) pandas’ DataFrame contains no pre-computed information that is being used for the merge, so it’s a “cold” merge, so to speak. If I had stored the factorized versions of the join keys, the join would be significantly faster, as factorizing is the biggest bottleneck for this algorithm.

I should also add that the internal design of pandas’ DataFrame is much more amenable to these kinds of operations than R’s data.frame (which is just a list of arrays internally).
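The "list of arrays" remark about R's data.frame is easy to verify from R itself:

```r
df <- data.frame(a = 1:3, b = c("x", "y", "z"))

typeof(df)       # "list" -- a data.frame is a plain list of column vectors underneath
is.list(df)      # TRUE
attributes(df)   # only names, row.names and the "data.frame" class sit on top of that list
```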


Answer 2


This topic is two years old but seems like a probable place for people to land when they search for comparisons of Pandas and data.table.

Since both of these have evolved over time, I want to post a relatively newer comparison (from 2014) here for the interested users: https://github.com/Rdatatable/data.table/wiki/Benchmarks-:-Grouping

It would be interesting to know if Wes and/or Matt (who, by the way, are creators of Pandas and data.table respectively and have both commented above) have any news to add here as well.

— UPDATE —

A comment posted below by jangorecki contains a link that I think is very useful: https://github.com/szilard/benchm-databases

This graph depicts the average times of aggregation and join operations for different technologies (lower = faster; comparison last updated in Sept 2016). It was really educational for me.

Going back to the question, R DT key and R DT refer to the keyed/unkeyed flavors of R’s data.table and happen to be faster in this benchmark than Python’s Pandas (Py pandas).


Answer 3


There are great answers here, notably from the authors of both tools the question asks about. Matt’s answer explains that the case reported in the question was caused by a bug, not by the merge algorithm. The bug was fixed the next day, more than 7 years ago already.

In my answer I will provide some up-to-date timings of merge operations for data.table and pandas. Note that plyr and base R merge are not included.

The timings I am presenting come from the db-benchmark project, a continuously run, reproducible benchmark. It upgrades the tools to their recent versions and re-runs the benchmark scripts. It also covers many other software solutions; if you are interested in Spark, Dask and a few others, be sure to check the link.


As of now… (still to be implemented: one more data size and 5 more questions)

We test 2 different data sizes of the LHS table.
For each of those data sizes we run 5 different merge questions.

q1: LHS inner join RHS-small on integer
q2: LHS inner join RHS-medium on integer
q3: LHS outer join RHS-medium on integer
q4: LHS inner join RHS-medium on factor (categorical)
q5: LHS inner join RHS-big on integer

The RHS table comes in 3 different sizes:

  • small translates to size of LHS/1e6
  • medium translates to size of LHS/1e3
  • big translates to size of LHS

In all cases there are around 90% of matching rows between LHS and RHS, and no duplicates in RHS joining column (no cartesian product).
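As a rough illustration only (this is not the actual db-benchmark data generator; sizes and column names are made up), data with those properties could be built like this:

```r
library(data.table)
set.seed(1)

n_lhs  <- 1e6L                     # LHS rows (far smaller than the real 0.5 GB / 5 GB runs)
n_rhs  <- n_lhs / 1000L            # "medium" RHS: LHS/1e3 rows
n_pool <- as.integer(n_rhs / 0.9)  # key pool sized so roughly 90% of LHS keys appear in RHS

RHS <- data.table(id = sample(n_pool, n_rhs),                  # unique ids: no duplicates in the join column
                  v2 = runif(n_rhs))
LHS <- data.table(id = sample(n_pool, n_lhs, replace = TRUE),  # ~90% of these ids exist in RHS
                  v1 = runif(n_lhs))

mean(LHS$id %in% RHS$id)                  # close to 0.9
system.time(merge(LHS, RHS, by = "id"))   # the kind of inner join the benchmark times
```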


As of now (run on 2nd Nov 2019)

pandas 0.25.3 released on 1st Nov 2019
data.table 1.12.7 (92abb70) released on 2nd Nov 2019

The timings below are in seconds, for the two different LHS data sizes. The pd2dt column stores the ratio of how many times slower pandas is than data.table.

  • 0.5 GB LHS data
+-----------+--------------+----------+--------+
| question  |  data.table  |  pandas  |  pd2dt |
+-----------+--------------+----------+--------+
| q1        |        0.51  |    3.60  |      7 |
| q2        |        0.50  |    7.37  |     14 |
| q3        |        0.90  |    4.82  |      5 |
| q4        |        0.47  |    5.86  |     12 |
| q5        |        2.55  |   54.10  |     21 |
+-----------+--------------+----------+--------+
  • 5 GB LHS data
+-----------+--------------+----------+--------+
| question  |  data.table  |  pandas  |  pd2dt |
+-----------+--------------+----------+--------+
| q1        |        6.32  |    89.0  |     14 |
| q2        |        5.72  |   108.0  |     18 |
| q3        |       11.00  |    56.9  |      5 |
| q4        |        5.57  |    90.1  |     16 |
| q5        |       30.70  |   731.0  |     23 |
+-----------+--------------+----------+--------+