## 7.3 Parallel computing

Since R 2.14.0, the **parallel** package has been included with base R. It must still be loaded before use, however, and you must determine the number of available cores manually, as illustrated below.

library("parallel")
no_of_cores = detectCores()

The computer used to compile the published version of this book chapter has 2 CPUs/Cores.

### 7.3.1 Parallel versions of apply functions

The most commonly used parallel applications are parallelized replacements of lapply, sapply and apply. The parallel implementations and their arguments are shown below.

parLapply(cl = NULL, X, fun, ...)
parApply(cl = NULL, X, MARGIN, FUN, ...)
parSapply(cl = NULL, X, FUN, ..., simplify = TRUE, USE.NAMES = TRUE)

Note that each function has an argument cl, a cluster object created by makeCluster. This function specifies, amongst other things, the number of worker processes to use.
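As a minimal sketch of the workflow, the cluster is created, used, and released; the squaring function here is just a placeholder workload.

```r
library("parallel")

# Create a cluster of two worker processes
cl = makeCluster(2)

# Run a parallel lapply over the workers
res = parLapply(cl, 1:4, function(x) x^2)

# Always release the workers when finished
stopCluster(cl)

unlist(res)  # 1 4 9 16
```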

### 7.3.2 Example: parallel bootstrapping

In 1965, Gordon Moore, co-founder of Intel, observed that the number of transistors in a dense integrated circuit doubles approximately every two years. This observation is known as Moore’s law. A scatter plot (figure 7.3) of transistor counts in processors over the last thirty years shows that this law seems to hold.

We can estimate the trend using simple linear regression. A standard algorithm for obtaining uncertainty estimates on regression coefficients is bootstrapping. This is a simple algorithm: at each iteration we sample with replacement from the original data set and estimate the parameters of the new data set. The distribution of the parameters gives us our uncertainty estimate. We begin by loading the data set and creating a function for performing a single bootstrap.

data("transistors", package="efficient")
bs = function(i) {
  s = sample(1:NROW(transistors), replace = TRUE)
  trans_samp = transistors[s, ]
  coef(lm(log2(Count) ~ Year, data = trans_samp))
}

We can then perform $N = 10^4$ bootstraps using sapply.

N = 10000
sapply(1:N, bs)
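The bootstrap distribution is what carries the uncertainty estimate: sapply returns a matrix with one column per bootstrap and one row per coefficient, which can be summarised row-wise. The sketch below is self-contained, using simulated data as a stand-in for the transistors data set.

```r
set.seed(1)
# Simulated stand-in: exponential growth in Count plus noise
d = data.frame(Year = 1975:2014)
d$Count = 2^(0.5 * (d$Year - 1975) + rnorm(40, sd = 0.5))

# One bootstrap replicate: resample rows, refit the model
bs_sim = function(i) {
  s = sample(seq_len(nrow(d)), replace = TRUE)
  coef(lm(log2(Count) ~ Year, data = d[s, ]))
}

res = sapply(1:100, bs_sim)              # 2 x 100 matrix of coefficients
apply(res, 1, sd)                        # standard error of each coefficient
apply(res, 1, quantile, c(0.025, 0.975)) # 95% percentile intervals
```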

Rewriting this code to make use of the **parallel** package is straightforward. We begin by making a cluster and exporting the data set.

library("parallel")
cl = makeCluster(6)
clusterExport(cl, "transistors")

We then use parSapply and stop the cluster.

parSapply(cl, 1:N, bs)
stopCluster(cl)

On this computer, we get a four-fold speed-up.
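Speed-ups like this can be measured with system.time(). A self-contained sketch, using Sys.sleep() as a stand-in for an expensive task; actual timings will vary by machine.

```r
library("parallel")

# Stand-in for an expensive task: each call sleeps for 10 ms
slow = function(i) { Sys.sleep(0.01); i }

# Serial timing
serial_t = system.time(lapply(1:20, slow))["elapsed"]

# Parallel timing over two workers (cluster start-up not included)
cl = makeCluster(2)
parallel_t = system.time(parLapply(cl, 1:20, slow))["elapsed"]
stopCluster(cl)

c(serial = serial_t, parallel = parallel_t)  # parallel roughly halves the time
```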


### 7.3.3 Process forking

Another way of running code in parallel is to use the mclapply and mcmapply functions. These functions use forking, that is, creating a new copy of a process running on the CPU. However, Windows does not support this low-level functionality in the way that Linux does.
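Because the forked workers inherit the parent's environment, no cluster object or clusterExport call is needed. A minimal sketch; on Windows, mc.cores must be 1, so the call below falls back to serial execution there.

```r
library("parallel")

# Forking is unavailable on Windows, so use a single core there
cores = if (.Platform$OS.type == "windows") 1 else 2

# No makeCluster/clusterExport needed: workers share the parent's data
res = mclapply(1:4, function(x) x^2, mc.cores = cores)
unlist(res)  # 1 4 9 16
```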