# 35 Profiling

## 35.1 Has somebody already solved the problem?

1. Q: What are faster alternatives to lm? Which are specifically designed to work with larger datasets?

A: Within the CRAN task view on HighPerformanceComputing we find, for example, the speedglm package and its speedlm() function. On small datasets we might not gain any performance improvement:

stopifnot(all.equal(
coef(speedglm::speedlm(Sepal.Length ~ Sepal.Width + Species, data = iris)),
coef(lm(Sepal.Length ~ Sepal.Width + Species, data = iris))))

microbenchmark::microbenchmark(
speedglm::speedlm(Sepal.Length ~ Sepal.Width + Species, data = iris),
lm(Sepal.Length ~ Sepal.Width + Species, data = iris)
)
#> Unit: microseconds
#>                                                                  expr
#>  speedglm::speedlm(Sepal.Length ~ Sepal.Width + Species, data = iris)
#>                 lm(Sepal.Length ~ Sepal.Width + Species, data = iris)
#>       min        lq      mean   median        uq      max neval
#>  1149.577 1177.6725 1210.4036 1188.881 1207.4370 2142.911   100
#>   863.331  899.7955  973.9823  920.297  935.6775 6005.491   100

However, on bigger datasets it can make a difference:

eps <- rnorm(100000)
x1 <- rnorm(100000, 5, 3)
x2 <- rep(c("a", "b"), 50000)
y <- 7 * x1 + (x2 == "a") + eps
td <- data.frame(y = y, x1 = x1, x2 = x2, eps = eps)

stopifnot(all.equal(
coef(speedglm::speedlm(y ~ x1 + x2, data = td)),
coef(lm(y ~ x1 + x2, data = td))))

microbenchmark::microbenchmark(
speedglm::speedlm(y ~ x1 + x2, data = td),
lm(y ~ x1 + x2, data = td)
)
#> Unit: milliseconds
#>                                       expr      min       lq     mean
#>  speedglm::speedlm(y ~ x1 + x2, data = td) 32.30824 34.62577 45.30296
#>                 lm(y ~ x1 + x2, data = td) 36.57468 39.23520 43.19490
#>    median       uq      max neval
#>  36.29986 37.53671 171.4557   100
#>  40.63443 41.72501 171.3716   100

For further speed improvements you might consider switching your linear algebra libraries, as stated in ?speedglm::speedlm:

> The functions of class ‘speedlm’ may speed up the fitting of LMs to large data sets. High performances can be obtained especially if R is linked against an optimized BLAS, such as ATLAS.
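You can check which BLAS/LAPACK your R is linked against from within R; a quick sketch (the BLAS and LAPACK fields of sessionInfo() are available in R 3.4.0 and later):

si <- sessionInfo()
si$BLAS    # path of the BLAS shared library R is linked against
si$LAPACK  # path of the LAPACK shared library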

Note that the task view mentions many other options, including some that make it possible to handle data that does not fit into memory.

When it comes to pure speed, a quick Google search for “r fastest lm” turns up a Stack Overflow thread where someone has already solved this problem for us…
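The thread itself is not reproduced here, but the usual trick such answers suggest is to skip the formula interface and call the bare-bones fitting routines on a precomputed model matrix. A minimal sketch, reusing the td data from above (lm.fit() and .lm.fit() are base R; they return plain lists without the inference machinery of lm()):

X <- model.matrix(~ x1 + x2, data = td)
stopifnot(all.equal(
  coef(lm(y ~ x1 + x2, data = td)),
  coef(lm.fit(X, td$y))
))

microbenchmark::microbenchmark(
  lm(y ~ x1 + x2, data = td),
  lm.fit(X, td$y),
  .lm.fit(X, td$y)
)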

2. Q: What package implements a version of match() that’s faster for repeated lookups? How much faster is it?

A: Again, Google gives a good recommendation for the search term “r faster match”:

set.seed(1)
table <- 1L:100000L
x <- sample(table, 10000, replace = TRUE)

stopifnot(all.equal(match(x, table), fastmatch::fmatch(x, table)))

microbenchmark::microbenchmark(
match(x, table),
fastmatch::fmatch(x, table)
)
#> Unit: microseconds
#>                         expr       min       lq       mean     median
#>              match(x, table) 22638.659 23204.89 23815.1912 23655.4015
#>  fastmatch::fmatch(x, table)   477.048   524.38   781.5993   600.6345
#>         uq       max neval
#>  24126.819 27921.516   100
#>    961.145  1955.461   100

Comparing the means and medians of the benchmark above, fastmatch::fmatch() is roughly 30-40 times as fast as match() on my laptop. It achieves this by building a hash table for table on the first lookup and caching it, so repeated lookups against the same table are especially cheap.

3. Q: List four functions (not just those in base R) that convert a string into a date time object. What are their strengths and weaknesses?

A: At least these functions will do the trick: as.POSIXct(), as.POSIXlt(), strptime(), and lubridate::ymd_hms(); anytime::anytime() also works, and the time series packages xts and zoo ship further helpers. (Note that strftime() goes the other way: it formats a date-time object as a string.) Regarding strengths and weaknesses: the base functions need no extra dependencies, but strptime() requires an explicit format string and, like as.POSIXlt(), returns the list-based POSIXlt class; as.POSIXct() returns the compact POSIXct class but only recognises a few standard formats unless you supply one; lubridate::ymd_hms() and anytime::anytime() guess the format for you, at the cost of an additional dependency.
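A quick illustration, parsing the same timestamp with each of them (a sketch; the example string and time zone are assumptions):

x <- "2018-07-01 12:34:56"

as.POSIXct(x, tz = "UTC")                      # compact POSIXct (seconds since epoch)
as.POSIXlt(x, tz = "UTC")                      # list-based POSIXlt
strptime(x, "%Y-%m-%d %H:%M:%S", tz = "UTC")   # POSIXlt; needs an explicit format
lubridate::ymd_hms(x)                          # guesses the format from the component order
anytime::anytime(x)                            # parses many common formats automatically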

4. Q: How many different ways can you compute a 1d density estimate in R?

A: According to Deng and Wickham (2011) density estimation is implemented in over 20 R packages.
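To make this concrete, here are two of the many implementations, a sketch comparing base R with the KernSmooth package (both return x/y coordinates of the estimated density):

x <- rnorm(1000)

d_base <- density(x)              # base R: kernel density estimate with automatic bandwidth
d_ks   <- KernSmooth::bkde(x)     # binned kernel density estimate

str(d_base[c("x", "y")])
str(d_ks)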

5. Q: Which packages provide the ability to compute a rolling mean?

A: Again, a Google search for “r rolling mean” provides enough information and points us to solutions in the following packages:
• zoo
zoo::rollmean(1:10, 2, na.pad = TRUE, align = "left")
#>  [1] 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5  NA
zoo::rollapply(1:10, 2, mean, fill = NA, align = "left")
#>  [1] 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5  NA
• TTR
TTR::SMA(1:10, 2)
#>  [1]  NA 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5
• RcppRoll
RcppRoll::roll_mean(1:10, n = 2, fill = NA, align = "left")
#>  [1] 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5  NA
• caTools
caTools::runmean(1:10, k = 2, endrule = "NA", align = "left")
#>  [1] 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5  NA

Note that the textbook provides an extended example of how to write a rolling mean function yourself.
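Base R can do this as well: stats::filter() computes moving averages without any additional package. A small sketch, equivalent to the window-2 examples above (here aligned to the right, so the first value is NA):

stats::filter(1:10, rep(1 / 2, 2), sides = 1)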

6. Q: What are the alternatives to optim()?

A: Depending on the use case, many different options might be considered. For a general overview we suggest the corresponding CRAN task view on Optimization.
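As a small illustration of the alternatives available in base R alone, the same one-dimensional minimisation can be done with optim(), optimize(), and nlm() (a sketch; the function f is made up for the example):

f <- function(x) (x - 2)^2 + 1

optim(par = 0, fn = f, method = "Brent", lower = -10, upper = 10)$par
optimize(f, interval = c(-10, 10))$minimum
nlm(f, p = 0)$estimate
# all three return a value very close to 2, the true minimum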

## 35.2 Do as little as possible

1. Q: How do the results change if you compare mean() and mean.default() on 10,000 observations, rather than on 100?

A: With only 100 observations the dispatch overhead is a large share of the total cost, so calling mean.default() directly is noticeably faster than the generic mean():

x <- runif(1e2)
microbenchmark::microbenchmark(
mean(x),
mean.default(x)
)
#> Unit: microseconds
#>             expr   min    lq    mean median    uq    max neval
#>          mean(x) 2.573 2.641 3.20935 2.7075 2.846 33.017   100
#>  mean.default(x) 1.270 1.302 1.41028 1.3395 1.401  3.531   100

With 10,000 observations, mean.default() retains only a small advantage over mean():

x <- runif(1e4)
microbenchmark::microbenchmark(
mean(x),
mean.default(x),
unit = "ns"
)
#> Unit: nanoseconds
#>             expr   min      lq     mean  median      uq   max neval
#>          mean(x) 24933 25035.0 26070.50 25122.0 25446.5 43483   100
#>  mean.default(x) 23444 23517.5 24084.03 23570.5 23698.0 42849   100

With even more observations, as below, mean.default() no longer has any noticeable advantage: the overhead of S3 method dispatch is roughly constant, so it becomes negligible relative to the O(n) work of computing the mean itself:

x <- runif(1e6)
microbenchmark::microbenchmark(
mean(x),
mean.default(x),
unit = "ns"
)
#> Unit: nanoseconds
#>             expr     min      lq    mean  median      uq     max neval
#>          mean(x) 2273825 2291232 2307531 2301105 2313745 2687216   100
#>  mean.default(x) 2266983 2282388 2294843 2292658 2301916 2411350   100
2. Q: The following code provides an alternative implementation of rowSums(). Why is it faster for this input?

rowSums2 <- function(df) {
  out <- df[[1L]]
  if (ncol(df) == 1) return(out)

  for (i in 2:ncol(df)) {
    out <- out + df[[i]]
  }
  out
}

df <- as.data.frame(
replicate(1e3, sample(100, 1e4, replace = TRUE))
)
system.time(rowSums(df))
#>    user  system elapsed
#>   0.064   0.008   0.073
system.time(rowSums2(df))
#>    user  system elapsed
#>   0.032   0.000   0.032

A: rowSums() first coerces the data frame to a matrix via as.matrix(), which copies all 1000 columns into one big object before the summation starts; it also performs several input checks. rowSums2() skips that coercion entirely and just adds up the 1000 columns with vectorised +, so for this wide data frame it is roughly twice as fast.
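A rough way to see this on the data frame from above (a sketch; exact timings will vary by machine):

system.time(m <- as.matrix(df))   # the coercion alone takes a large share of rowSums()'s time
system.time(rowSums(m))           # on a matrix, the summation itself is fast
stopifnot(all.equal(unname(rowSums(df)), rowSums2(df)))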

3. Q: What’s the difference between rowSums() and .rowSums()?

A: .rowSums() is defined as

.rowSums
#> function (x, m, n, na.rm = FALSE)
#> .Internal(rowSums(x, m, n, na.rm))
#> <bytecode: 0xc888ca8>
#> <environment: namespace:base>

This means that .rowSums() calls the internal rowSums code via .Internal().

.Internal() performs a call to internal code that is built into the R interpreter.

This internal rowSums is a completely different function from the “normal” rowSums() function.

Of course (since they share the name) the two functions are closely related: if we look into the source code of rowSums(), we see that it is a wrapper around the internal rowSums, which merely adds input checks, coercions, and the special case of complex numbers:

rowSums
#> function (x, na.rm = FALSE, dims = 1L)
#> {
#>     if (is.data.frame(x))
#>         x <- as.matrix(x)
#>     if (!is.array(x) || length(dn <- dim(x)) < 2L)
#>         stop("'x' must be an array of at least two dimensions")
#>     if (dims < 1L || dims > length(dn) - 1L)
#>         stop("invalid 'dims'")
#>     p <- prod(dn[-(id <- seq_len(dims))])
#>     dn <- dn[id]
#>     z <- if (is.complex(x))
#>         .Internal(rowSums(Re(x), prod(dn), p, na.rm)) + (0+1i) *
#>             .Internal(rowSums(Im(x), prod(dn), p, na.rm))
#>     else .Internal(rowSums(x, prod(dn), p, na.rm))
#>     if (length(dn) > 1L) {
#>         dim(z) <- dn
#>         dimnames(z) <- dimnames(x)[id]
#>     }
#>     else names(z) <- dimnames(x)[[1L]]
#>     z
#> }
#> <bytecode: 0xab63fd8>
#> <environment: namespace:base>
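The practical difference in interface is that .rowSums() expects a numeric matrix plus its dimensions and skips all checks. A minimal sketch (the small matrix m is made up for the example):

m <- matrix(1:6, nrow = 3)
rowSums(m)          # figures out the dimensions and checks the input for us
.rowSums(m, 3, 2)   # we must supply the number of rows and columns ourselves
stopifnot(identical(rowSums(m), .rowSums(m, 3, 2)))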
4. Q: Make a faster version of chisq.test() that only computes the chi-square test statistic when the input is two numeric vectors with no missing values. You can try simplifying chisq.test() or by coding from the mathematical definition.

A: Since chisq.test() has a relatively long source code, we try a new implementation from scratch:

chisq.test2 <- function(x, y) {

  # Input checks
  if (!is.numeric(x)) {
    stop("x must be numeric")}
  if (!is.numeric(y)) {
    stop("y must be numeric")}
  if (length(x) != length(y)) {
    stop("x and y must have the same length")}
  if (length(x) <= 1) {
    stop("length of x must be greater than one")}
  if (any(c(x, y) < 0)) {
    stop("all entries of x and y must be greater than or equal to zero")}
  if (sum(complete.cases(x, y)) != length(x)) {
    stop("there must be no missing values in x and y")}
  if (any(is.null(c(x, y)))) {
    stop("entries of x and y must not be NULL")}

  # Help variables
  m <- rbind(x, y)
  margin1 <- rowSums(m)
  margin2 <- colSums(m)
  n <- sum(m)
  me <- tcrossprod(margin1, margin2) / n

  # Output
  x_stat <- sum((m - me)^2 / me)
  dof <- (length(margin1) - 1) * (length(margin2) - 1)
  p <- pchisq(x_stat, df = dof, lower.tail = FALSE)

  return(list(x_stat = x_stat, df = dof, "p-value" = p))
}
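For reference, the statistic computed above is the usual Pearson chi-squared statistic for the 2 × k table m built from x and y, with expected counts derived from the row and column margins:

$$X^2 = \sum_{i,j} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}, \qquad E_{ij} = \frac{(\text{row sum}_i)(\text{column sum}_j)}{n},$$

and the p-value comes from comparing $X^2$ with a $\chi^2$ distribution with $(r - 1)(k - 1)$ degrees of freedom.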

We check that our new implementation returns the same results:

a <- 21:25
b <- c(21, 23, 25, 27, 29)
m_test <- cbind(a, b)

chisq.test(m_test)
#>
#>  Pearson's Chi-squared test
#>
#> data:  m_test
#> X-squared = 0.16194, df = 4, p-value = 0.9969
chisq.test2(a, b)
#> $x_stat
#> [1] 0.1619369
#>
#> $df
#> [1] 4
#>
#> $p-value
#> [1] 0.9968937

Finally we benchmark this implementation against a compiled version of itself and the original stats::chisq.test():

chisq.test2c <- compiler::cmpfun(chisq.test2)

microbenchmark::microbenchmark(
  chisq.test(m_test),
  chisq.test2(a, b),
  chisq.test2c(a, b)
)
#> Unit: microseconds
#>                expr    min      lq     mean  median      uq     max neval
#>  chisq.test(m_test) 60.623 64.6205 73.19399 67.8070 76.5150 275.807   100
#>   chisq.test2(a, b) 18.842 19.9140 22.59833 21.8895 23.5850  40.645   100
#>  chisq.test2c(a, b) 18.690 20.1830 22.99964 21.5555 23.7505  56.184   100

5. Q: Can you make a faster version of table() for the case of an input of two integer vectors with no missing values? Can you use it to speed up your chi-square test?

A:

6. Q: Imagine you want to compute the bootstrap distribution of a sample correlation using cor_df() and the data in the example below. Given that you want to run this many times, how can you make this code faster? (Hint: the function has three components that you can speed up.)

n <- 1e6
df <- data.frame(a = rnorm(n), b = rnorm(n))

cor_df <- function(df, n) {
  i <- sample(seq(n), n, replace = FALSE)
  cor(df[i, , drop = FALSE])[2, 1]
  # note also that in the last line the textbook says q[] instead of df[]. Since
  # this is probably just a typo, we changed this to df[].
}

Is there a way to vectorise this procedure?

A: The three components (mentioned in the question's hint) are:

1. sampling of indices,
2. subsetting the data frame/conversion to a matrix (or vector input), and
3. the cor() function itself.

Since a run of lineprof, as shown in the textbook, suggests that as.matrix() within the cor() function is the biggest bottleneck, we start with that:

n <- 1e6
df <- data.frame(a = rnorm(n), b = rnorm(n))

Recall the original function:

cor_df <- function() {
  i <- sample(seq(n), n, replace = FALSE)
  cor(df[i, , drop = FALSE])[2, 1]
}

First we want to optimise the second line (without paying attention to the cor() function itself). Therefore we exclude the first line from our optimisation approaches and define i within the global environment:

i <- sample(seq(n), n)

Then we define our approaches, check that their behaviour is correct and do the first benchmark:

# old version
cor_v1 <- function() {
  cor(df[i, , drop = FALSE])[2, 1]
}

# cbind instead of internal as.matrix
cor_v2 <- function() {
  m <- cbind(df$a[i], df$b[i])
  cor(m)[2, 1]
}

# cbind + vector subsetting of the output matrix
cor_v3 <- function() {
  m <- cbind(df$a[i], df$b[i])
  cor(m)[2]
}

# use vector input within the cor function, so that no conversion is needed
cor_v4 <- function() {
  cor(df$a[i], df$b[i])
}

# check if all return the same result (if you don't get the same result on your
# machine, you might want to check whether it is due to precision and instead
# run cor_v1(), cor_v2() etc.)
cor_list <- list(cor_v1, cor_v2, cor_v3, cor_v4)
ulapply <- function(X, FUN, ...)
  unlist(lapply(X, FUN, ...))

ulapply(cor_list, function(x) identical(x(), cor_v1()))
#> [1]  TRUE  TRUE  TRUE FALSE

# benchmark
set.seed(1)
microbenchmark::microbenchmark(
  cor_v1(),
  cor_v2(),
  cor_v3(),
  cor_v4()
)
#> Unit: milliseconds
#>      expr      min       lq     mean   median       uq       max neval
#>  cor_v1() 68.00983 73.17224 77.73181 77.42869 81.96665  91.39209   100
#>  cor_v2() 34.92743 36.76447 42.89929 38.61998 41.07361 164.89603   100
#>  cor_v3() 34.92082 36.93425 41.58357 38.53243 40.58491 167.27550   100
#>  cor_v4() 32.56499 33.55686 36.79612 34.98842 37.20768 162.17370   100

According to the medians and the lower and upper quartiles of our benchmark, all three new versions seem to provide more or less the same speed benefit (note that the maximum and mean can vary a lot for these approaches). Since the second version is most similar to the code we started with, we use it in a second version of cor_df() (if this seems arbitrary, note that the final solution will come back to the vector input version anyway) and benchmark the overall speedup:

cor_df2 <- function() {
  i <- sample(seq(n), n)
  m <- cbind(df$a[i], df$b[i])
  cor(m)[2, 1]
}

microbenchmark::microbenchmark(
  cor_df(),
  cor_df2()
)
#> Unit: milliseconds
#>       expr       min        lq      mean    median        uq      max neval
#>   cor_df()  99.63714 106.10510 124.06861 111.88786 124.48601 261.0917   100
#>  cor_df2()  62.40583  65.82074  77.78915  68.29032  71.21948 226.4105   100

Now we can focus on speeding up the random generation of indices. (Note that a run of lineprof suggests optimising cbind(). However, after rewriting cor() to a version that only works with vector input, this step will be unnecessary anyway.) We could try different approaches for the sequence generation within sample() (like seq(n), seq.int(n), seq_len(n), a:n) and a direct call of sample.int(). In the following we will see that sample.int() is always faster (since we don't include the generation of the sequence in our benchmark). When we look into sample.int() we see that it calls one of two different internal sample versions depending on the input. Since in our use case the second version will always be called, we also include this version in our benchmark:

seq_n <- seq(n)

microbenchmark::microbenchmark(
  sample(seq_n, n),
  sample.int(n, n),
  .Internal(sample(n, n, replace = FALSE, prob = NULL))
)
#> Unit: milliseconds
#>                                                    expr      min       lq
#>                                        sample(seq_n, n) 25.76130 27.00635
#>                                        sample.int(n, n) 20.10353 21.34656
#>   .Internal(sample(n, n, replace = FALSE, prob = NULL)) 19.77272 21.11431
#>      mean   median       uq      max neval
#>  27.88299 27.48037 28.34219 33.92948   100
#>  22.03978 21.77165 22.35635 26.33112   100
#>  22.05701 21.68927 22.22222 32.03124   100

The sample.int() versions clearly give the biggest improvement. Since the internal version doesn't provide any further clear improvement, but restricts the general scope of our function, we choose to implement sample.int() in a third version of cor_df() and benchmark what we have achieved so far:

cor_df3 <- function() {
  i <- sample.int(n, n)
  m <- cbind(df$a[i], df$b[i])
  cor(m)[2, 1]
}

microbenchmark::microbenchmark(
  cor_df(),
  cor_df2(),
  cor_df3()
)
#> Unit: milliseconds
#>       expr      min        lq      mean    median        uq      max neval
#>   cor_df() 93.90135 105.74657 122.81830 116.30188 126.10450 310.0699   100
#>  cor_df2() 61.46896  65.74519  82.22654  70.75071  76.82705 243.0292   100
#>  cor_df3() 56.21630  61.72607  77.44884  65.97643  71.85150 280.4668   100

As a last step, we try to speed up the calculation of the Pearson correlation coefficient.
Since quite a lot of functionality is built into the stats::cor() function, this seems like a reasonable approach. We try this by working with another cor() function from the WGCNA package and with our own implementation, which should give a small improvement, because we use sum(x) / length(x) instead of mean(x) for the internal calculations:

# WGCNA version (matrix and vector). Note that I don't use a setup which uses
# the full potential of this function. For further information see ?WGCNA::cor
cor_df4m <- function() {
  i <- sample.int(n, n)
  m <- cbind(df$a[i], df$b[i])
  WGCNA::cor(m)[2]
}

cor_df4v <- function() {
  i <- sample.int(n, n)
  WGCNA::cor(df$a[i], df$b[i], quick = 1)[1]
}

# New implementation of the underlying cor function.
# A definition can be found for example here:
# http://www.socscistatistics.com/tests/pearson/
cor2 <- function(x, y) {
  xm <- sum(x) / length(x)
  ym <- sum(y) / length(y)
  x_xm <- x - xm
  y_ym <- y - ym
  numerator <- sum((x_xm) * (y_ym))
  denominator <- sqrt(sum(x_xm^2)) * sqrt(sum(y_ym^2))
  return(numerator / denominator)
}

cor2 <- compiler::cmpfun(cor2)

cor_df5 <- function() {
  i <- sample.int(n, n)
  cor2(df$a[i], df$b[i])
}

In our final benchmark, we also include compiled versions of all our attempts:

cor_df_c <- compiler::cmpfun(cor_df)
cor_df2_c <- compiler::cmpfun(cor_df2)
cor_df3_c <- compiler::cmpfun(cor_df3)
cor_df4m_c <- compiler::cmpfun(cor_df4m)
cor_df4v_c <- compiler::cmpfun(cor_df4v)
cor_df5_c <- compiler::cmpfun(cor_df5)

microbenchmark::microbenchmark(
  cor_df(),
  cor_df2(),
  cor_df3(),
  cor_df4m(),
  cor_df4v(),
  cor_df5(),
  cor_df_c(),
  cor_df2_c(),
  cor_df3_c(),
  cor_df4m_c(),
  cor_df4v_c(),
  cor_df5_c()
)
#>
#> Unit: milliseconds
#>          expr      min        lq      mean    median        uq       max
#>      cor_df() 93.69474 101.58731 136.62427 105.31740 116.14679  366.4755
#>     cor_df2() 59.74903  64.61351  98.09192  66.39092  69.73457  357.9196
#>     cor_df3() 54.68245  57.86840  76.77177  60.14916  63.26598  343.4537
#>    cor_df4m() 78.47800  85.12587 118.09512  88.88758  93.59687  466.6316
#>    cor_df4v() 75.28063  82.16756 125.07651  84.95143  94.77764  429.5538
#>     cor_df5() 50.31390  57.05252  81.47288  60.14645  64.18283  398.7600
#>    cor_df_c() 94.04417 101.76583 132.33930 105.41177 114.30206  504.8655
#>   cor_df2_c() 59.35191  64.44230  81.22067  67.28446  70.48400  393.6352
#>   cor_df3_c() 54.57816  58.82453  86.79131  61.68728  65.55480  431.7059
#>  cor_df4m_c() 77.49631  84.50715 128.58519  87.16920  91.04370 2410.4450
#>  cor_df4v_c() 73.14337  81.98549 123.93428  85.62008  94.41818  463.7264
#>   cor_df5_c() 51.35410  56.96752 100.64343  60.76175  66.32210  442.8767
#>  neval
#>    100
#>    100
#>    100
#>    100
#>    100
#>    100
#>    100
#>    100
#>    100
#>    100
#>    100
#>    100
#>
#> Unit: milliseconds
#>          expr      min       lq      mean    median        uq        max
#>      cor_df() 7.885838 9.146994 12.739322  9.844561 10.447188   83.08044
#>     cor_df2() 1.951941 3.178821  6.101036  3.339742  3.485085   36.25404
#>     cor_df3() 1.288831 1.943510  4.353773  2.018105  2.145119   66.74022
#>    cor_df4m() 1.534061 2.156848  6.344015  2.243540  2.370921  234.17307
#>    cor_df4v() 1.639997 2.271949  5.755720  2.349477  2.453214  196.49897
#>     cor_df5() 1.281133 1.879911  6.263500  1.932696  2.064292  237.83135
#>    cor_df_c() 7.600654 9.337239 13.613901 10.009330 10.965688   41.17146
#>   cor_df2_c() 2.009124 2.878242  5.645224  3.298138  3.469871   34.29037
#>   cor_df3_c() 1.312291 1.926281  4.426418  1.986948  2.094532   87.97220
#>  cor_df4m_c() 1.548357 2.179025  3.306796  2.242991  2.355709   23.87195
#>  cor_df4v_c() 1.669689 2.255087 27.148456  2.341962  2.490419 2263.33230
#>   cor_df5_c() 1.284065 1.888892  3.884798  1.953774  2.078955   25.81583
#>  neval cld
#>    100   a
#>    100   a
#>    100   a
#>    100   a
#>    100   a
#>    100   a
#>    100   a
#>    100   a
#>    100   a
#>    100   a
#>    100   a
#>    100   a

Our final solution benefits most from the switch from data frames to vectors. Working with sample.int() gives only a small improvement. Reimplementing and compiling a new correlation function adds only minimal speedup.

To trust our final result we include a last check for similar return values:

set.seed(1)
cor_df()
#> [1] -0.001214781
#> [1] -0.001441277
set.seed(1)
cor_df5_c()
#> [1] -0.001214781
#> [1] -0.001441277

Vectorisation of this problem seems rather difficult, since attempts to use matrix calculus always depend on building and handling big matrices in the first place. We can, for example, rewrite our correlation function to work with matrices and build a new (vectorised) version of cor_df() on top of that:

cor2m <- function(x, y) {
  n_row <- nrow(x)
  xm <- colSums(x) / n_row
  ym <- colSums(y) / n_row
  x_xm <- t(t(x) - xm)
  y_ym <- t(t(y) - ym)
  numerator <- colSums((x_xm) * (y_ym))
  denominator <- sqrt(colSums(x_xm^2)) * sqrt(colSums(y_ym^2))
  return(numerator / denominator)
}

cor_df_v <- function(i) {
  indices <- replicate(i, sample.int(n, n), simplify = "array")
  x <- matrix(df$a[indices], ncol = i)
  y <- matrix(df$b[indices], ncol = i)
  cor2m(x, y)
}
cor_df_v <- compiler::cmpfun(cor_df_v)

However, this still doesn't provide any improvement over the use of lapply():

ulapply2 <- function(X, FUN, ...)
  unlist(lapply(X, FUN, ...), use.names = FALSE)

microbenchmark::microbenchmark(
  cor_df5_c(),
  ulapply2(1:100, function(x) cor_df5_c()),
  cor_df_v(100)
)
#> Unit: milliseconds
#>                                      expr        min         lq      mean
#>                               cor_df5_c()   1.098219   1.148988   2.06762
#>  ulapply2(1:100, function(x) cor_df5_c()) 228.250897 233.052303 256.56849
#>                             cor_df_v(100) 263.158918 272.988449 312.43802
#>      median         uq      max neval cld
#>    1.864515   2.030201  13.2325   100 a
#>  238.665277 247.184900 410.0768   100  b
#>  278.648158 291.164035 581.2834   100   c

Further improvements can be achieved using parallelisation (for example via parallel::parLapply()).

## 35.3 Vectorise

1. Q: The density functions, e.g., dnorm(), have a common interface. Which arguments are vectorised over? What does rnorm(10, mean = 10:1) do?

A: We can see the interface of these functions via ?dnorm:

dnorm(x, mean = 0, sd = 1, log = FALSE)
pnorm(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)
qnorm(p, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)
rnorm(n, mean = 0, sd = 1)

They are vectorised over their numeric arguments: the first argument (x, q, p, n) as well as mean and sd. Note that it's dangerous to supply a vector to n in rnorm(): if n has length greater than 1, only its length is used (as the number of draws) and its values are ignored.

rnorm(10, mean = 10:1) generates ten random numbers from ten different normal distributions: the distributions differ in their means, the first having mean 10, the second mean 9, the third mean 8, and so on.

2. Q: Compare the speed of apply(x, 1, sum) with rowSums(x) for varying sizes of x.

A: We compare square matrices of different sizes:

library(microbenchmark)
dimensions <- c(1e0, 1e1, 1e2, 1e3, 0.5e4, 1e4)
matrices <- lapply(dimensions,
                   function(x) tcrossprod(rnorm(x), rnorm(x)))

bench_rs <- lapply(matrices,
                   function(x) fivenum(microbenchmark(rowSums(x),
                                                      unit = "ns")$time))

bench_rs <- data.frame(time = unlist(bench_rs),
call = "rowSums", stringsAsFactors = FALSE)

bench_apply <- lapply(matrices,
function(x) fivenum(microbenchmark(apply(x, 1, sum),
unit = "ns")$time)) bench_apply <- data.frame(time = unlist(bench_apply), call = "apply", stringsAsFactors = FALSE) df <- rbind(bench_rs, bench_apply) df$dimension <- rep(dimensions, each = 5)
df$aggr <- rep(c("min", "lq", "median", "uq", "max"), times = length(dimensions)) df$aggr_size <- rep(c(1, 2, 3, 2, 1), times = length(dimensions))
df$group <- paste(as.character(df$call), as.character(df$aggr), sep = " ") library(ggplot2) ggplot(df, aes(x = dimension, y = time, colour = call, group = group)) + geom_point() + geom_line(aes(linetype = factor(aggr_size, levels = c("3", "2", "1"))), show.legend = FALSE)  The graph is a good indicator to notice, that apply() is not “vectorised for performance”. 3. Q: How can you use crossprod() to compute a weighted sum? How much faster is it than the naive sum(x * w)? A: We can just give the vectors to crossprod() which converts them to row- and column-vectors and then multiplies these. The result is the dot product which is also a weighted sum. a <- rnorm(10) b <- rnorm(10) sum(a * b) - crossprod(a, b)[1] #> [1] 0 A benchmark of both alternatives for different vector lengths indicates, that the crossprod() variant is about 2.5 times faster than sum(): dimensions <- c(1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 0.5e7, 1e7) xvector <- lapply(dimensions, rnorm) weights <- lapply(dimensions, rnorm) bench_sum <- Map(function(x, y) fivenum(microbenchmark::microbenchmark(sum(x * y))$time),
xvector, weights)
bench_sum <- data.frame(time = unlist(bench_sum),
call = "sum",
stringsAsFactors = FALSE)
bench_cp <- Map(function(x, y)
  fivenum(microbenchmark::microbenchmark(crossprod(x, y)[1])$time),
  xvector, weights)
bench_cp <- data.frame(time = unlist(bench_cp),
                       call = "crossproduct",
                       stringsAsFactors = FALSE)

df <- rbind(bench_sum, bench_cp)
df$dimension <- rep(dimensions, each = 5)
df$aggr <- rep(c("min", "lq", "median", "uq", "max"), times = length(dimensions))
df$aggr_size <- rep(c(1, 2, 3, 2, 1), times = length(dimensions))
df$group <- paste(as.character(df$call), as.character(df$aggr), sep = " ")

library(ggplot2)
ggplot(df, aes(x = dimension, y = time, colour = call, group = group)) +
  geom_point() +
  geom_line(aes(linetype = factor(aggr_size, levels = c("3", "2", "1"))),
            show.legend = FALSE) +
  scale_y_continuous(expand = c(0, 0))