Here's a simple timing test of aggregation functions in R, using 1.3 million rows and 80,000 groups of real data on a 1.8GHz Intel Core i5. Thanks to Arun Srinivasan for helpful comments.
The fastest function to run through the data.frame benchmark is data.table, which runs twice as fast as dplyr, which in turn runs ten times faster than base R.
For a benchmark that includes plyr, see this earlier Gist for a computationally more intensive test on half a million rows, where dplyr still runs 1.5 times faster than aggregate in base R.
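The comparison above can be sketched as follows. This is a minimal, self-contained version using synthetic data, since the real data set is not included here; the group count, row count, and the column name `Functie` (which appears later in the comments) are the only details carried over, and `system.time` is used in place of whatever benchmark harness the original test ran.

```r
library(data.table)
library(dplyr)

# Synthetic stand-in for the real data: 1.3 million rows, 80,000 groups
set.seed(1)
n <- 1.3e6
data <- data.frame(Functie = sample(sprintf("grp%05d", 1:80000), n, replace = TRUE))

# Base R: count rows per group with table()
t_base <- system.time(
  r_base <- as.data.frame(table(data$Functie))
)[["elapsed"]]

# dplyr: group_by + summarise
t_dplyr <- system.time(
  r_dplyr <- data %>% group_by(Functie) %>% summarise(N = n())
)[["elapsed"]]

# data.table: conversion and aggregation in one expression
t_dt <- system.time(
  r_dt <- as.data.table(data)[, .N, by = Functie]
)[["elapsed"]]

c(base = t_base, dplyr = t_dplyr, data.table = t_dt)
```

Absolute timings will differ by machine, but the ordering (data.table, then dplyr, then base R) should be reproducible.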
Both tests confirm what W. Andrew Barr blogged on dplyr: the two most important improvements in dplyr are
- a MASSIVE increase in speed, making dplyr useful on big data sets
- the ability to chain operations together in a natural order
Tony Fischetti has clear examples of the latter, and Erick Gregory shows that easy access to SQL databases should also be added to the list.
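A short sketch of the chaining style referred to above: with the `%>%` pipe, each verb's output feeds the next, so the steps read in the order they are performed. The data frame and column names below are invented for illustration and are not from the benchmark data.

```r
library(dplyr)

# Toy data: flight delays per carrier (invented for this example)
flights <- data.frame(
  carrier = c("AA", "AA", "UA", "UA", "UA"),
  delay   = c(5, 15, 0, 30, 10)
)

# Chain: group, summarise, then sort, in natural reading order
result <- flights %>%
  group_by(carrier) %>%
  summarise(mean_delay = mean(delay)) %>%
  arrange(desc(mean_delay))

result
```

Without the pipe, the same computation would nest inside out as `arrange(summarise(group_by(...)))`, which is much harder to read.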
Your function is `as.data.table(data)[, .N, by = Functie]`. This includes both the creation of the data.table and the aggregation, and your benchmark results indicate 0.112s, which is 2.3x faster than `dplyr`. I don't understand how you can say `dplyr` is fastest, and that `data.table` is fastest only if you first convert to a `data.table`. It seems quite straightforward to me that `dplyr` is slower here. What am I missing?