16 Evaluation
Prerequisites
On our journey through R’s metaprogramming, we continue to use the functions from the rlang package.
16.1 Evaluation basics
Q1: Carefully read the documentation for source(). What environment does it use by default? What if you supply local = TRUE? How do you provide a custom environment?
A: By default, source() uses the global environment (local = FALSE). A specific evaluation environment can be chosen by passing it explicitly to the local argument. To use the current environment (i.e. the calling environment of source()), set local = TRUE.
# Create a temporary, sourcable R script that prints x
tmp_file <- tempfile()
writeLines("print(x)", tmp_file)
# Set `x` globally
x <- "global environment"
env2 <- env(x = "specified environment")
locate_evaluation <- function(file, local) {
x <- "local environment"
source(file, local = local)
}
# Where will source() evaluate the code?
locate_evaluation(tmp_file, local = FALSE) # default
#> [1] "global environment"
locate_evaluation(tmp_file, local = env2)
#> [1] "specified environment"
locate_evaluation(tmp_file, local = TRUE)
#> [1] "local environment"
Q2: Predict the results of the following lines of code:
eval(expr(eval(expr(eval(expr(2 + 2)))))) # (1)
eval(eval(expr(eval(expr(eval(expr(2 + 2))))))) # (2)
expr(eval(expr(eval(expr(eval(expr(2 + 2))))))) # (3)
A: Let’s look at a quote from the first edition of Advanced R:
“expr() and eval() are opposites. […] each eval() peels off one layer of expr()’s.”
In general, eval(expr(x)) evaluates to x. Therefore, (1) evaluates to \(2 + 2 = 4\). Adding another eval() has no impact here, so (2) also evaluates to 4. However, when we wrap (1) in expr(), as in (3), the whole expression is quoted rather than evaluated.
eval(expr(eval(expr(eval(expr(2 + 2)))))) # (1)
#> [1] 4
eval(eval(expr(eval(expr(eval(expr(2 + 2))))))) # (2)
#> [1] 4
expr(eval(expr(eval(expr(eval(expr(2 + 2))))))) # (3)
#> eval(expr(eval(expr(eval(expr(2 + 2))))))
Q3: Fill in the function bodies below to re-implement get() using sym() and eval(), and assign() using sym(), expr(), and eval(). Don’t worry about the multiple ways of choosing an environment that get() and assign() support; assume that the user supplies it explicitly.
# name is a string
get2 <- function(name, env) {}
assign2 <- function(name, value, env) {}
A: We reimplement these two functions using tidy evaluation. We turn the string name into a symbol, then evaluate it:
get2 <- function(name, env = caller_env()) {
name_sym <- sym(name)
eval(name_sym, env)
}
x <- 1
get2("x")
#> [1] 1
To build the correct expression for the value assignment, we unquote using !!.
assign2 <- function(name, value, env = caller_env()) {
name_sym <- sym(name)
assign_expr <- expr(!!name_sym <- !!value)
eval(assign_expr, env)
}
assign2("x", 4)
x
#> [1] 4
Q4: Modify source2() so it returns the result of every expression, not just the last one. Can you eliminate the for loop?
A: The code for source2() was given in Advanced R as:
source2 <- function(path, env = caller_env()) {
file <- paste(readLines(path, warn = FALSE), collapse = "\n")
exprs <- parse_exprs(file)
res <- NULL
for (i in seq_along(exprs)) {
res <- eval(exprs[[i]], env)
}
invisible(res)
}
In order to highlight the modifications in our new source2() function, we’ve kept the differing code from the former source2() as a comment.
source2 <- function(path, env = caller_env()) {
file <- paste(readLines(path, warn = FALSE), collapse = "\n")
exprs <- parse_exprs(file)
# res <- NULL
# for (i in seq_along(exprs)) {
# res[[i]] <- eval(exprs[[i]], env)
# }
res <- purrr::map(exprs, eval, env)
invisible(res)
}
Let’s create a file and test source2(). Keep in mind that <- returns invisibly.
tmp_file <- tempfile()
writeLines(
"x <- 1
x
y <- 2
y # some comment",
tmp_file
)
(source2(tmp_file))
#> [[1]]
#> [1] 1
#>
#> [[2]]
#> [1] 1
#>
#> [[3]]
#> [1] 2
#>
#> [[4]]
#> [1] 2
Q5: We can make base::local() slightly easier to understand by spreading it over multiple lines:
local3 <- function(expr, envir = new.env()) {
call <- substitute(eval(quote(expr), envir))
eval(call, envir = parent.frame())
}
Explain how local() works in words. (Hint: you might want to print(call) to help understand what substitute() is doing, and read the documentation to remind yourself what environment new.env() will inherit from.)
A: Let’s follow the advice and add print(call) inside of local3():
local3 <- function(expr, envir = new.env()) {
call <- substitute(eval(quote(expr), envir))
print(call)
eval(call, envir = parent.frame())
}
The first line generates a call to eval(). Because substitute() operates in the current evaluation environment, one might wonder which bindings it picks up. Here, however, both expr and envir are promises, and therefore “the expression slots of the promises replace the symbols” (see ?substitute).
local3({
x <- 10
x * 2
})
#> eval(quote({
#> x <- 10
#> x * 2
#> }), new.env())
#> [1] 20
Next, call will be evaluated in the caller environment (i.e. the parent frame). Given that call contains another call to eval(), why does this matter? The answer is subtle: this outer environment determines where the bindings for eval(), quote() and new.env() are found, and it is also where new.env() is executed, so the newly created environment’s parent is the caller environment of local3().
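We can check this behaviour with a small example of our own (using the local3() that prints call, as defined above): free variables inside expr are found through the new environment’s parent, i.e. the caller environment, while bindings created inside it do not leak back out.
y <- 5
local3(y * 2)  # `y` is found in the caller environment (here: global)
#> eval(quote(y * 2), new.env())
#> [1] 10
res <- local3(z <- 1)  # `z` is bound only inside the new environment
#> eval(quote(z <- 1), new.env())
exists("z")
#> [1] FALSE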
16.2 Quosures
Q1: Predict what each of the following quosures will return if evaluated.
q1 <- new_quosure(expr(x), env(x = 1))
q1
#> <quosure>
#> expr: ^x
#> env: 0x7fce038effc8
q2 <- new_quosure(expr(x + !!q1), env(x = 10))
q2
#> <quosure>
#> expr: ^x + (^x)
#> env: 0x7fce00ad4c78
q3 <- new_quosure(expr(x + !!q2), env(x = 100))
q3
#> <quosure>
#> expr: ^x + (^x + (^x))
#> env: 0x7fcdfef3ab98
A: Each quosure is evaluated in its own environment, so x is bound to a different value each time: the quosures evaluate to 1, 11 and 111, respectively, as the check below shows.
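We can confirm this with eval_tidy(), which evaluates each nested quosure in its own environment:
eval_tidy(q1)
#> [1] 1
eval_tidy(q2)
#> [1] 11
eval_tidy(q3)
#> [1] 111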
Q2: Write an enenv() function that captures the environment associated with an argument. (Hint: this should only require two function calls.)
A: A quosure captures both the expression and the environment associated with an argument. We can capture the quosure with enquo() and then extract its environment with get_env().
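A minimal sketch (function and variable names are our own; rlang attached as in the prerequisites):
enenv <- function(arg) {
  # Capture `arg` as a quosure and return only its environment;
  # `arg` itself is never evaluated
  get_env(enquo(arg))
}
x <- 1
enenv(x)
#> <environment: R_GlobalEnv>
# Called from within another function, enenv() returns that function's
# execution environment (the printed address will vary)
wrapper <- function() {
  y <- 2
  enenv(y)
}
wrapper()
#> <environment: 0x...>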
16.3 Data masks
Q1: Why did I use a for loop in transform2() instead of map()? Consider transform2(df, x = x * 2, x = x * 2).
A: transform2() was defined in Advanced R as:
transform2 <- function(.data, ...) {
dots <- enquos(...)
for (i in seq_along(dots)) {
name <- names(dots)[[i]]
dot <- dots[[i]]
.data[[name]] <- eval_tidy(dot, .data)
}
.data
}
A for loop applies the processing steps to .data iteratively: in each iteration .data is updated, so a later expression (even one reusing the same variable name) can refer to columns that were created or modified just before. With map(), every expression would be evaluated against the original .data instead, as the sketch below illustrates.
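The following sketch shows the difference. transform2_map() is a hypothetical map()-based variant (our own name), in which every expression is evaluated against the original data:
df <- data.frame(x = 1)
# for-loop version: the second `x * 2` sees the already updated column
transform2(df, x = x * 2, x = x * 2)
#>   x
#> 1 4
# map()-based variant: both dots see the original `x`, so the repeated
# transformation has no additional effect
transform2_map <- function(.data, ...) {
  dots <- enquos(...)
  new_cols <- purrr::map(dots, eval_tidy, data = .data)
  for (i in seq_along(new_cols)) {
    .data[[names(dots)[[i]]]] <- new_cols[[i]]
  }
  .data
}
transform2_map(df, x = x * 2, x = x * 2)
#>   x
#> 1 2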
Q2: Here’s an alternative implementation of subset2():
subset3 <- function(data, rows) {
rows <- enquo(rows)
eval_tidy(expr(data[!!rows, , drop = FALSE]), data = data)
}
df <- data.frame(x = 1:3)
subset3(df, x == 1)
Compare and contrast subset3() to subset2(). What are its advantages and disadvantages?
A: Let’s take a closer look at subset2() first:
subset2 <- function(data, rows) {
rows <- enquo(rows)
rows_val <- eval_tidy(rows, data)
stopifnot(is.logical(rows_val))
data[rows_val, , drop = FALSE]
}
subset2() provides an additional logical check, which is missing from subset3(). Here, rows is evaluated in the context of data, which results in a logical vector; afterwards only [ is needed for the subsetting.
# subset2() evaluation
(rows_val <- eval_tidy(quo(x == 1), df))
#> [1] TRUE FALSE FALSE
df[rows_val, , drop = FALSE]
#> x
#> 1 1
With subset3() both of these steps occur within a single expression (which is probably closer to what one would write by hand): the subsetting itself is also evaluated in the context of the data mask. This makes the function shorter, but arguably less readable. It may also introduce unwanted errors if the data contains a column named data, because objects in the data mask take precedence over the function’s arguments.
df <- data.frame(x = 1:3, data = 1)
subset2(df, x == 1)
#> x data
#> 1 1 1
subset3(df, x == 1)
#> Error in data[~x == 1, , drop = FALSE]: incorrect number of dimensions
Q3: The following function implements the basics of dplyr::arrange(). Annotate each line with a comment explaining what it does. Can you explain why !!.na.last is strictly correct, but omitting the !! is unlikely to cause problems?
arrange2 <- function(.df, ..., .na.last = TRUE) {
args <- enquos(...)
order_call <- expr(order(!!!args, na.last = !!.na.last))
ord <- eval_tidy(order_call, .df)
stopifnot(length(ord) == nrow(.df))
.df[ord, , drop = FALSE]
}
A: arrange2() basically reorders a data frame by one or more of its variables. As arrange2() allows the ordering variables to be provided as expressions (via ...), these need to be quoted first. Afterwards they are used to build up an order() call, which is then evaluated in the context of the data frame. Finally, the data frame is reordered via integer subsetting. Let’s take a closer look at the source code:
arrange2 <- function(.df, ..., .na.last = TRUE) {
# Capture and quote arguments, which determine the order
args <- enquos(...)
# `!!!`: unquote-splice arguments into order()
# `!!.na.last`: pass option for treatment of NAs to order()
# return expression-object
order_call <- expr(order(!!!args, na.last = !!.na.last))
# Evaluate order_call within .df
ord <- eval_tidy(order_call, .df)
# Ensure that no rows are dropped
stopifnot(length(ord) == nrow(.df))
# Reorder rows via integer subsetting
.df[ord, , drop = FALSE]
}
By using !!.na.last, the .na.last argument is unquoted when the order() call is built. This way, the na.last argument is already correctly specified within the constructed call (typically TRUE, FALSE or NA).
Without the unquoting, the expression would read na.last = .na.last, and the value of .na.last would only be looked up when the order() call is evaluated. Because this lookup happens via the function’s execution environment (which contains .na.last), omitting the !! is unlikely to cause problems. Unquoting is strictly correct, though, because a data frame containing a column called .na.last would otherwise shadow the argument through the data mask.
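A quick usage check of arrange2() as defined above (example data and expected output are our own):
df <- data.frame(a = c(2, 1, NA))
arrange2(df, a)
#>    a
#> 2  1
#> 1  2
#> 3 NA
arrange2(df, a, .na.last = FALSE)
#>    a
#> 3 NA
#> 2  1
#> 1  2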
16.4 Using tidy evaluation
Q1: I’ve included an alternative implementation of threshold_var() below. What makes it different to the approach I used above? What makes it harder?
threshold_var2 <- function(df, var, val) {
var <- ensym(var)
subset2(df, `$`(.data, !!var) >= !!val)
}
A: Let’s compare this approach to the original implementation:
threshold_var <- function(df, var, val) {
var <- as_string(ensym(var))
subset2(df, .data[[var]] >= !!val)
}
We can see that threshold_var2() no longer coerces the symbol to a string, so $ instead of [[ can be used for the subsetting. Initially we suspected that $ would introduce partial matching, but the .data pronoun deliberately avoids this problem. The prefix call to $() is less common than subsetting with [[, but ultimately both behave the same.
df <- data.frame(x = 1:10)
threshold_var(df, x, 8)
#> x
#> 8 8
#> 9 9
#> 10 10
threshold_var2(df, x, 8)
#> x
#> 8 8
#> 9 9
#> 10 10
16.5 Base evaluation
Q1: Why does this function fail?
lm3a <- function(formula, data) {
formula <- enexpr(formula)
lm_call <- expr(lm(!!formula, data = data))
eval(lm_call, caller_env())
}
lm3a(mpg ~ disp, mtcars)$call
#> Error in model.frame.default(formula = mpg ~ disp, data = data, drop.unused.levels = TRUE): 'data' must be a data.frame, environment, or list
A: In this function, lm_call is evaluated in the caller environment, which happens to be the global environment. There, the name data is bound to utils::data. To fix the error, we can either set the evaluation environment to the function’s execution environment or unquote the data argument when building the call to lm().
# Change evaluation environment
lm3b <- function(formula, data) {
formula <- enexpr(formula)
lm_call <- expr(lm(!!formula, data = data))
eval(lm_call, current_env())
}
lm3b(mpg ~ disp, mtcars)$call
#> lm(formula = mpg ~ disp, data = data)
lm3b(mpg ~ disp, data)$call # reproduces the original error
#> Error in model.frame.default(formula = mpg ~ disp, data = data, drop.unused.levels = TRUE): 'data' must be a data.frame, environment, or list
When we want to unquote an argument within a function, we first need to capture the user input (here with enexpr()).
# Unquoting data-argument
lm3c <- function(formula, data) {
formula <- enexpr(formula)
data_quo <- enexpr(data)
lm_call <- expr(lm(!!formula, data = !!data_quo))
eval(lm_call, caller_env())
}
lm3c(mpg ~ disp, mtcars)$call
#> lm(formula = mpg ~ disp, data = mtcars)
Q2: When model building, typically the response and data are relatively constant while you rapidly experiment with different predictors. Write a small wrapper that allows you to reduce duplication in the code below.
lm(mpg ~ disp, data = mtcars)
lm(mpg ~ I(1 / disp), data = mtcars)
lm(mpg ~ disp * cyl, data = mtcars)
A: In our wrapper lm_wrap(), we provide mpg and mtcars as the default response and data. This seems to give us a good mix of usability and flexibility.
lm_wrap <- function(pred, resp = mpg, data = mtcars,
env = caller_env()) {
pred <- enexpr(pred)
resp <- enexpr(resp)
data <- enexpr(data)
formula <- expr(!!resp ~ !!pred)
lm_call <- expr(lm(!!formula, data = !!data))
eval(lm_call, envir = env)
}
# Test if the output looks ok
lm_wrap(I(1 / disp) + disp * cyl)
#>
#> Call:
#> lm(formula = mpg ~ I(1/disp) + disp * cyl, data = mtcars)
#>
#> Coefficients:
#> (Intercept) I(1/disp) disp cyl disp:cyl
#> -1.22e+00 1.85e+03 7.68e-02 1.18e+00 -9.14e-03
# Test if the result is identical to calling lm() directly
identical(
lm_wrap(I(1 / disp) + disp * cyl),
lm(mpg ~ I(1 / disp) + disp * cyl, data = mtcars)
)
#> [1] TRUE
Q3: Another way to write resample_lm() would be to include the resample expression (data[sample(nrow(data), replace = TRUE), , drop = FALSE]) in the data argument. Implement that approach. What are the advantages? What are the disadvantages?
A: Several versions of resample_lm() (resample_lm0(), resample_lm1() and resample_lm2()) were given in Advanced R, but none of them implemented the resampling within a function argument.
The approach below takes advantage of R’s lazy evaluation of function arguments by moving the resampling step into the argument definition. The user passes the data to the function, but only a resampled version of it (resample_data) will be used.
resample_lm <- function(
formula, data,
resample_data = data[sample(nrow(data), replace = TRUE), ,
drop = FALSE],
env = current_env()) {
formula <- enexpr(formula)
lm_call <- expr(lm(!!formula, data = resample_data))
expr_print(lm_call)
eval(lm_call, env)
}
df <- data.frame(x = 1:10, y = 5 + 3 * (1:10) + round(rnorm(10), 2))
(lm_1 <- resample_lm(y ~ x, data = df))
#> lm(y ~ x, data = resample_data)
#>
#> Call:
#> lm(formula = y ~ x, data = resample_data)
#>
#> Coefficients:
#> (Intercept) x
#> 4.85 3.02
lm_1$call
#> lm(formula = y ~ x, data = resample_data)
With this approach the evaluation needs to take place within the function’s environment, because the resampled dataset (defined as a default argument) will only be available in the function environment.
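To make this point concrete, we can force the evaluation to happen in the global environment instead (here standing in for the caller environment); the call can then no longer find resample_data. We wrap the call in try() and omit the exact error output:
# Expected to fail with "object 'resample_data' not found", because
# `resample_data` only exists in resample_lm()'s execution environment
try(resample_lm(y ~ x, data = df, env = globalenv()))
#> lm(y ~ x, data = resample_data)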
Overall, putting an essential part of the pre-processing outside of the function’s body is not common practice in R. Compared to the unquoting implementation (resample_lm1() in Advanced R), this approach captures the model call in a more compact and readable way. Note, however, that because the captured call merely refers to resample_data, which exists (and has already been evaluated) only inside the function’s execution environment, calling update() on the model will not trigger a fresh resample.