# 1 Names and values

## Prerequisites

We use the development version of lobstr to answer questions regarding the internal representation of R objects.

# devtools::install_github("r-lib/lobstr")
library(lobstr)

## 1.1 Binding basics

1. Q: Explain the relationship between a, b, c and d in the following code:

a <- 1:10
b <- a
c <- b
d <- 1:10

A: a, b, and c point to the same object (i.e. the same address in memory). This object contains the value 1:10. d points to a different object that happens to hold the same value.

obj_addrs(list(a, b, c, d))
#> [1] "0x1b4a4d0" "0x1b4a4d0" "0x1b4a4d0" "0x2f2d950"
2. Q: The following code accesses the mean function in multiple ways. Do they all point to the same underlying function object? Verify with lobstr::obj_addr().

mean
base::mean
get("mean")
evalq(mean)
match.fun("mean")

A: Yes, they point to the same object. We confirm this by inspecting the address of the underlying function object.

mean_functions <- list(mean,
                       base::mean,
                       get("mean"),
                       evalq(mean),
                       match.fun("mean"))

unique(obj_addrs(mean_functions))
#> [1] "0x1ae8298"
3. Q: By default, base R data import functions, like read.csv(), will automatically convert non-syntactic names to syntactic names. Why might this be problematic? What option allows you to suppress this behaviour?

A: When name conversion happens automatically and implicitly, predicting a script's output becomes more difficult. For example, when R is used non-interactively and some data is read, transformed, and written back, the output may no longer contain the same names as the original data source. This can introduce problems in downstream analyses. To suppress the automatic conversion, set check.names = FALSE.
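As a small sketch (the file contents and column names here are made up), compare the default behaviour with check.names = FALSE:

```r
# Hypothetical CSV with a non-syntactic column name ("mean age")
tmp <- tempfile(fileext = ".csv")
writeLines(c("mean age,height", "25,170"), tmp)

names(read.csv(tmp))                       # default: "mean age" becomes "mean.age"
names(read.csv(tmp, check.names = FALSE))  # names are kept as-is
```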

4. Q: What rules does make.names() use to convert non-syntactic names into syntactic names?

A: A valid name starts with a letter or a dot (which must not be followed by a number) and consists of letters, digits, dots, and underscores only (underscores are allowed since R version 1.9.0).

Three main mechanisms ensure syntactically valid names (see ?make.names):
• An X is prepended when the name does not start with a letter, or starts with a dot followed by a number
make.names("")
#> [1] "X"
make.names(".1")
#> [1] "X.1"
• (additionally) invalid characters are replaced by a dot
make.names("@")          # prepending + . replacement
#> [1] "X."
make.names("  ")         # prepending + .. replacement
#> [1] "X.."
make.names("non-valid")  # . replacement
#> [1] "non.valid"
• reserved R keywords (see ?Reserved) have a dot appended
make.names("if")
#> [1] "if."

Interestingly, some of these transformations are influenced by the current locale (from ?make.names):

The definition of a letter depends on the current locale, but only ASCII digits are considered to be digits.

5. Q: I slightly simplified the rules that govern syntactic names. Why is .123e1 not a syntactic name? Read ?make.names for the full details.

A: .123e1 is not a syntactic name because it starts with a dot that is followed by a number.
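In fact, R's parser reads .123e1 as the numeric literal 1.23 in scientific notation rather than as a name, and make.names() sanitises it by prepending an X:

```r
.123e1               # parsed as a number, not a name
#> [1] 1.23

make.names(".123e1")
#> [1] "X.123e1"
```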

## 1.2 Copy-on-modify

1. Q: Why is tracemem(1:10) not useful?

A: 1:10 creates an object with an address in memory, but the object is never bound to a name. It therefore cannot be accessed or modified from R, so no copies of it will ever be made and there is nothing useful for tracemem() to track.

obj_addr(1:10)  # the object exists, but no copy will be made
#> [1] "0x42faed8"
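By contrast, tracing a vector that is bound to a name is informative, because subsequent modifications can trigger copies that tracemem() reports (a sketch; the reported addresses will differ on your machine):

```r
x <- 1:10
tracemem(x)       # returns the current address of x and starts tracing
x[[1]] <- 100L    # modifying x triggers a copy, which tracemem() reports
untracemem(x)     # stop tracing
```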
2. Q: Explain why tracemem() shows two copies when you run this code. Hint: carefully look at the difference between this code and the code shown earlier in the section.

x <- c(1L, 2L, 3L)
tracemem(x)

x[[3]] <- 4

A: Initially the vector x has integer type. The replacement call assigns a double (4) to the third element of x, which first triggers copy-on-modify. In addition, R's coercion rules convert the whole vector from integer to double, which leads to a second copy.

# two copies
x <- 1:3
tracemem(x)
#> <0x66a4a70>

x[[3]] <- 4
#> tracemem[0x55eec7b3af38 -> 0x55eec774cc18]:
#> tracemem[0x55eec774cc18 -> 0x55eeca6ed5a8]: 

By assigning an integer instead of a double, one copy (the one related to coercion) can be avoided:

# the same as
x <- 1:3
tracemem(x)
#> <0x55eec6940ae0>

x[[3]] <- 4L
#> tracemem[0x55eec7021e10 -> 0x55eecb99e788]:
x <- as.double(x)
#> tracemem[0x55eecb99e788 -> 0x55eec93d9c18]:
3. Q: Sketch out the relationship between the following objects:

a <- 1:10
b <- list(a, a)
c <- list(b, a, 1:10)

A: a points to an address containing the value 1:10. b is a list holding two references to that same address. c is a list holding b (and thereby two references to a's object), another reference to a's object, and a reference to a different address that contains the same value (1:10).

ref(c)
#> █ [1:0x55eec93cbdd8] <list>    # c
#> ├─█ [2:0x55eecb8246e8] <list>  # - b
#> │ ├─[3:0x55eec7df4e98] <int>   # -- a
#> │ └─[3:0x55eec7df4e98]         # -- a
#> ├─[3:0x55eec7df4e98]           # - a
#> └─[4:0x55eec7aa6968] <int>     # - 1:10
4. Q: What happens when you run this code:

x <- list(1:10)
x[[2]] <- x

Draw a picture.

A: The initial reference tree of x shows that the name x binds to a list object, which contains a reference to the integer vector 1:10.

x <- list(1:10)
ref(x)
#> █ [1:0x55853b74ff40] <list>
#> └─[2:0x534b3abffad8] <int> 

When x is assigned to an element of itself, copy-on-modify takes place and the list is copied to a new address in memory.

tracemem(x)
x[[2]] <- x
#> tracemem[0x55853b74ff40 -> 0x5d553bacdcd8]:

The list object previously bound to x is now referenced in the newly created list object. It is no longer bound to a name. The integer vector is referenced twice.

ref(x)
#> █ [1:0x5d553bacdcd8] <list>
#> ├─[2:0x534b3abffad8] <int>
#> └─█ [3:0x55853b74ff40] <list>
#>   └─[2:0x534b3abffad8] 

## 1.3 Object size

1. Q: In the following example, why are object.size(y) and obj_size(y) so radically different? Consult the documentation of object.size().

y <- rep(list(runif(1e4)), 100)

object.size(y)
#> 8005648 bytes
obj_size(y)
#> 80,896 B

A: object.size() doesn’t account for shared elements within lists, so it counts the shared vector 100 times. Therefore, the results differ by a factor of roughly 100.
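We can make the sharing visible by comparing the size of a single element with the size of the whole list (a sketch using lobstr; exact byte counts will vary):

```r
library(lobstr)

y <- rep(list(runif(1e4)), 100)

obj_size(y[[1]])  # size of the one shared double vector
obj_size(y)       # only slightly larger: 100 references plus list overhead
```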

2. Q: Take the following list. Why is its size somewhat misleading?

x <- list(mean, sd, var)
obj_size(x)
#> 16,928 B

A: It is somewhat misleading, because all three functions are built into R as part of the base and stats packages and are hence always loaded.

From the following calculations we can see that this applies to about 2696 objects, which are usually loaded by default and take up about 51 MB of memory.

base_env_names <- c("package:stats", "package:graphics", "package:grDevices",
                    "package:utils", "package:datasets", "package:methods",
                    "Autoloads", "package:base")

base_env_list <- sapply(base_env_names,
                        function(x) mget(ls(x, all = TRUE), as.environment(x)))

sum(lengths(base_env_list))
#> [1] 2696

sapply(base_env_list, lobstr::obj_size)
#>     package:stats  package:graphics package:grDevices     package:utils
#>          11446944           3197536           1831504           7428624
#>  package:datasets   package:methods         Autoloads      package:base
#>            604144          13430224               288          15606352

round(sum(sapply(base_env_list, lobstr::obj_size)) / 1024^2, 2)
#> [1] 51.07
3. Q: Predict the output of the following code:

x <- 1:1e6
obj_size(x)

y <- list(x, x)
obj_size(y)
obj_size(x, y)

y[[1]][[1]] <- 10
obj_size(y)
obj_size(x, y)

y[[2]][[1]] <- 10
obj_size(y)
obj_size(x, y)

A: TODO: lobstr and pryr return very different results (600 bytes vs 4 MB in the first example). Before we rewrite this answer, it needs to be clarified why these differences occur and how to handle them best. See also the related issue: https://github.com/hadley/adv-r/issues/1324.

Since lobstr::obj_size() currently returns very different values, we will use unclass(pryr::object_size()) for now.

To predict the size of x, we first find out via unclass(pryr::object_size(integer(0))) that an empty integer vector takes 40 B. Each additional element needs 4 B, and R allocates memory in chunks of two elements (8 B) at a time. This can be verified, for example, via sapply(1:100, function(x) pryr::object_size(integer(x))). Overall our prediction is 40 B + 1,000,000 * 4 B = 4,000,040 B:

x <- 1:1e6
unclass(pryr::object_size(x))
#> [1] 4000040

To predict the size of y <- list(x, x), consider that both list elements point to the same memory address. They share the same reference, so no additional memory is needed for the second element. An empty list takes 40 B and each element adds 8 B. Overall our prediction is: x (4,000,040 B) + list of length 2 (40 B + 16 B):

y <- list(x, x)
unclass(pryr::object_size(y))
#> [1] 4000096

Since x and y are bindings to objects that share the same underlying vector, no additional memory is needed and our prediction is the size of the larger object (y; 4,000,096 B):

unclass(pryr::object_size(x, y))
#> [1] 4000096

The next one is a bit trickier. Since the first element of y becomes different from x, a completely new object is created in memory. Because 10 is of type double, the assignment also triggers a silent coercion, so the new object takes more memory: a double vector needs 40 B + length * 8 B (here 8,000,040 B overall). So we get: first element of y (8,000,040 B) + second element of y (x; 4,000,040 B) + list of length 2 (40 B + 16 B) = 12,000,136 B as our prediction:

y[[1]][[1]] <- 10
unclass(pryr::object_size(y))
#> [1] 12000136

Again, all elements of x are shared within y (x is still the second element of y), so the overall memory usage corresponds to y’s:

unclass(pryr::object_size(x, y))
#> [1] 12000136

Next, the second element of y is assigned the same value as the first one. However, R does not know that the values are identical, so another new object is created, taking the same amount of memory:

y[[2]][[1]] <- 10
unclass(pryr::object_size(y))
#> [1] 16000136

Now x and y don’t share any values anymore (from R’s perspective) and their memory adds up:

unclass(pryr::object_size(x, y))
#> [1] 20000176

## 1.4 Modify-in-place

1. Q: Wrap the two methods for subtracting medians into two functions, then use the bench package to carefully compare their speeds. How does performance change as the number of columns increases?

A: First, let’s define a function to create some random data and a function to subtract the median from each column.

create_random_df <- function(nrow, ncol) {
  random_matrix <- matrix(runif(nrow * ncol), nrow = nrow)
  as.data.frame(random_matrix)
}

subtract_medians <- function(x, medians) {
  for (i in seq_along(medians)) {
    x[[i]] <- x[[i]] - medians[[i]]
  }
  x
}

We can then profile the performance by benchmarking subtract_medians() on data frame and list input for a specified number of columns. Both variants should take and return a data frame, so the list version has to do a bit of extra conversion work.

compare_speed <- function(ncol) {
  df_input <- create_random_df(nrow = 1e4, ncol = ncol)
  medians  <- vapply(df_input, median, numeric(1))

  bench::mark(
    "Data Frame" = subtract_medians(df_input, medians),
    "List"       = as.data.frame(subtract_medians(as.list(df_input), medians))
  )
}

The bench package allows us to run our benchmark across a grid of parameters easily. We will use it to slowly increase the number of columns of random data.

results <- bench::press(
  ncol = c(1, 5, 10, 50, 100, 200, 400, 600, 800, 1000, 1500),
  compare_speed(ncol)
)

library(ggplot2)
ggplot(results, aes(ncol, median, col = expression)) +
  geom_point(size = 2) +
  geom_smooth() +
  labs(x = "Number of Columns of Input Data", y = "Computation Time",
       color = "Input Data Structure",
       title = "Benchmark: Median Subtraction")

The execution time for median subtraction on data frame columns grows quadratically with the number of columns: each iteration of the loop copies the whole data frame, and each copy itself grows with the number of columns. For subtraction on list elements the execution time increases only linearly.

For list input with fewer than ~ 800 columns, the cost of the additional data structure conversions is relatively high. For very wide data frames, however, the overhead of the repeated copies slows down the computation considerably. So which approach is faster also depends on the size of the data.

2. Q: What happens if you attempt to use tracemem() on an environment?

A: tracemem() cannot be used to mark and trace environments.

x <- new.env()
tracemem(x)
#> Error in tracemem(x): 'tracemem' is not useful for promise and environment objects

The error occurs because “it is not useful to trace NULL, environments, promises, weak references, or external pointer objects, as these are not duplicated” (see ?tracemem). Environments are always modified in place.
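A quick demonstration of modify-in-place: two bindings to one environment see each other’s modifications, because no copy is ever made.

```r
e1 <- new.env()
e2 <- e1        # a second binding to the same environment, not a copy

e1$x <- 1
e2$x            # the modification is visible through both bindings
#> [1] 1
```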