Is R Fast Enough? - Part 4 - ‘Loops’


Eliot McIntire


May 12, 2015

In part 4 of this series on benchmarking R, we’ll explore loops and a common alternative, vectorization. This is probably the biggest issue behind R’s reputation as a slow language. Other procedural languages rely on explicit loops; programmers moving from those languages to R start with the same constructs and find that R is slow. We will discuss a range of ways to make loops faster and how vectorizing can help.

There are many other resources about this topic; we will try to be concise and show the worst case, the best case, and many little steps in between.


Loops have been the Achilles heel of R in the past. In version 3.1 and onward, much of this problem appears to be gone. As the benchmarks below show, pre-allocating a vector and filling it up inside a loop can now be very fast and efficient in native R. To demonstrate these points, below are 6 ways to achieve the same result in R, beginning with a naive loop approach and working up to the fully vectorized approach. I am using a very fast vectorized function, runif, to emphasize the differences between using loops and optimized vectorized functions.

The basic code below generates random numbers. The sequence goes from a fully unvectorized, looped structure, with no pre-allocation of the output vector, through to pure vectorized code. The intermediate steps are:

  • Loop
  • Loop with pre-allocated length of output
  • sapply (like loops)
  • sapply with pipe operator
  • vectorized
  • vectorized with no intermediate objects
library(magrittr) # for pipe %>%
N = 1e5

mb = microbenchmark::microbenchmark(times = 100L,

  # no pre-allocation of the vector length; generate uniform random
  # numbers once, then copy them one at a time inside the loop
  loopWithNoPreallocate = {
    a <- numeric()
    unifs = runif(N)
    for (i in 1:N) {
      a[i] = unifs[i]
    }
    a
  },

  # pre-allocate the vector length; generate uniform random numbers
  # once, then copy them one at a time inside the loop
  loopWithPreallocate = {
    unifs <- runif(N)
    b <- numeric(N)
    for (i in 1:N) {
      b[i] = unifs[i]
    }
    b
  },

  # sapply - generally faster than loops
  sapplyVector1 = {
    b <- runif(N)
    sapply(b, function(x) x)
  },

  # sapply with pipe operator: no intermediate objects are created
  sapplyWithPipe = {
    b <- (runif(N)) %>%
      sapply(., function(x) x)
  },

  # vectorized, with an intermediate object before returning
  vectorizedWithCopy = {
    unifs <- runif(N)
    unifs
  },

  # vectorized, with no intermediate object before returning
  vectorizedWithNoCopy = runif(N)
)

Unit: milliseconds
                   expr     min   median      max
1 loopWithNoPreallocate 21.9615 29.84165  89.6070
2   loopWithPreallocate  7.6051  8.09575  16.7365
3         sapplyVector1 55.2273 61.30605 111.2397
4        sapplyWithPipe 52.7653 58.97110 115.2396
5    vectorizedWithCopy  2.0667  2.21285   6.6799
6  vectorizedWithNoCopy  2.0717  2.22310   6.0343
# Test that all results return the same vector
all.equalV(loopWithNoPreallocate, loopWithPreallocate, sapplyVector1, sapplyWithPipe, vectorizedWithCopy, vectorizedWithNoCopy)
[1] TRUE
sumLoops <- round(summary(mb)[[5]], 0) # rounded median times (ms)

The fully vectorized function is 15x faster than the fully naive loop (comparing rounded medians). Note also that making as few intermediate objects as possible is faster. Comparing vectorizedWithCopy and vectorizedWithNoCopy (where the only difference is making one copy of the object) shows virtually no change. This, I believe, is due to improvements after version 3.1 of R that reduce copying for vectors and matrices. Using pipes instead of intermediate objects also made little difference to the speed here. These are simple tests; for larger or more complex objects, it is likely that using pipes will be faster.
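You can watch R's copy-on-modify behaviour directly with base R's tracemem (a minimal sketch; best run in a fresh R session, since IDEs such as RStudio can hold extra references that force copies):

```r
x <- runif(1e5)
tracemem(x)   # print a message whenever x is duplicated
y <- x        # no copy yet: y shares x's memory
y[1] <- 0     # the first modification triggers the actual duplication
untracemem(x) # stop tracing
```

Only the modification of `y` prints a tracemem message; the plain assignment does not, which is why the extra copy in vectorizedWithCopy costs almost nothing.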


Write vectorized code in R where possible. If that is not possible, pre-allocate the output vector before writing the loop.
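The pre-allocation idiom in its smallest form (an illustrative sketch, with `i * 2` standing in for any per-element computation):

```r
n <- 1e4
out <- numeric(n)        # allocate the full output vector once
for (i in seq_len(n)) {
  out[i] <- i * 2        # fill in place: no reallocation per iteration
}

out2 <- seq_len(n) * 2   # the vectorized equivalent, preferred where possible
```

Both produce the same vector; the vectorized form is both shorter and faster.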

Next time

We move on to higher level operations. Specifically, some GIS operations.

Functions used

all.equalV = function(...) {
  vals <- list(...)
  all(sapply(vals[-1], function(x) all.equal(vals[[1]], x)))
}

System used:

Tests were done on an HP Z400, Xeon 3.33 GHz processor, running Windows 7 Enterprise, using:

R version 4.3.0 (2023-04-21 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18363)

Matrix products: default

[1] LC_COLLATE=English_Canada.utf8  LC_CTYPE=English_Canada.utf8   
[3] LC_MONETARY=English_Canada.utf8 LC_NUMERIC=C                   
[5] LC_TIME=English_Canada.utf8    

time zone: America/Vancouver
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] magrittr_2.0.3

loaded via a namespace (and not attached):
 [1] htmlwidgets_1.6.2     microbenchmark_1.4.10 compiler_4.3.0       
 [4] fastmap_1.1.1         cli_3.6.1             tools_4.3.0          
 [7] htmltools_0.5.5       rstudioapi_0.14       yaml_2.3.7           
[10] rmarkdown_2.21        knitr_1.42            jsonlite_1.8.4       
[13] xfun_0.39             digest_0.6.31         rlang_1.1.1          
[16] evaluate_0.21