## 6.1 Parallelization

mlr3 uses the future framework for parallelization. Make sure you have installed the required packages future and future.apply:
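If the packages are not yet available, they can be installed from CRAN as a one-time setup step (a standard `install.packages()` call, shown here for convenience):

```r
# One-time setup: install the parallelization packages from CRAN
install.packages(c("future", "future.apply"))
```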

mlr3 is capable of parallelizing a variety of different scenarios. One of the most common use cases is to parallelize the resampling iterations. See the section on Resampling for a detailed introduction to resampling.

In the following, we will use the spam task and a simple classification tree ("classif.rpart") to showcase parallelization. We use the future package to parallelize the resampling by selecting a backend via the function future::plan(). Here we use the "multiprocess" backend, which uses forked processes on UNIX-based systems and a socket cluster on Windows.

```r
library("mlr3")

# Select the parallelization backend for all subsequent computations
future::plan("multiprocess")

task = mlr_tasks$get("spam")
learner = mlr_learners$get("classif.rpart")
resampling = mlr_resamplings$get("subsampling")

# Time the parallelized resampling
time = Sys.time()
resample(task, learner, resampling)
Sys.time() - time
```
By default, all CPUs of your machine are used unless you specify the argument workers in future::plan().
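As a minimal sketch of limiting the worker count (the "multisession" backend, the `workers` argument, and `nbrOfWorkers()` are part of the future package; the choice of 2 workers here is arbitrary):

```r
library("future")

# Use two background R sessions instead of all available CPUs
plan("multisession", workers = 2)

# nbrOfWorkers() reports how many workers the current plan provides
nbrOfWorkers()
```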
In mlr3, choosing a level of parallelization is no longer required. All kinds of events are rolled out on the same level, so there is no need to decide whether you want to parallelize the tuning OR the resampling. Just lean back and let the machine do the work :-)
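To illustrate, a sketch of a parallelized benchmark (assuming the `benchmark()` and `benchmark_grid()` helpers from a recent mlr3 release; the featureless learner and 3-fold CV are arbitrary choices for the example). The single `future::plan()` call covers every resampling iteration of every experiment, with no per-level configuration:

```r
library("mlr3")

# One plan() call parallelizes everything downstream
future::plan("multiprocess")

design = benchmark_grid(
  tasks = mlr_tasks$get("spam"),
  learners = list(
    mlr_learners$get("classif.rpart"),
    mlr_learners$get("classif.featureless")
  ),
  resamplings = mlr_resamplings$get("cv")
)

# All 2 learners x 10 folds = 20 iterations are distributed across workers
bmr = benchmark(design)
```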