## 4.1 Hyperparameter Tuning

Hyperparameter tuning is supported via the extension package mlr3tuning. At the heart of mlr3tuning are the R6 classes mlr3tuning::PerformanceEvaluator and the Tuner* classes. They store the settings, perform the tuning, and save the results.

### 4.1.1 The Performance Evaluator class

The mlr3tuning::PerformanceEvaluator class requires the following inputs from the user:

- Task
- Learner
- Resampling
- Measure
- Parameter Set

It is similar to resample() and benchmark(), with the additional requirement of a "Parameter Set" (paradox::ParamSet) specifying the hyperparameters of the given learner that should be optimized.

An exemplary definition could look as follows:

```r
library(mlr3tuning)

task = mlr_tasks$get("iris")
learner = mlr_learners$get("classif.rpart")
resampling = mlr_resamplings$get("holdout")
measures = mlr_measures$mget("classif.ce")
param_set = paradox::ParamSet$new(params = list(
  paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1),
  paradox::ParamInt$new("minsplit", lower = 1, upper = 10)))

pe = PerformanceEvaluator$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measures = measures,
  param_set = param_set
)
```

#### Evaluation of Single Parameter Settings

Using the method .$eval(), the mlr3tuning::PerformanceEvaluator is able to evaluate a specific set of hyperparameters on the given inputs. The parameters have to be handed over wrapped in a data.table:

```r
pe$eval(data.table::data.table(cp = 0.05, minsplit = 5))
```

The results are stored in a BenchmarkResult class within the pe object. Note that this is the "bare-bones" concept of using hyperparameters during resampling. Usually you want to optimize the parameters in an automated fashion.
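Repeated .$eval() calls can be used to try several settings by hand; since the results are written into the evaluator's BenchmarkResult, they accumulate in the .$bmr field. A minimal sketch, reusing the pe object defined above:

```r
# Evaluate two candidate settings manually; each call resamples the
# learner with the given hyperparameter values.
pe$eval(data.table::data.table(cp = 0.05, minsplit = 5))
pe$eval(data.table::data.table(cp = 0.01, minsplit = 3))

# Both evaluations are stored in the BenchmarkResult within pe.
pe$bmr
```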

### 4.1.2 Tuning Hyperparameter Spaces

Most often you do not want to check the performance of fixed hyperparameter settings one by one, but rather optimize the outcome over different hyperparameter choices in an automated way.

To achieve this, we need a definition of the search space that should be optimized. Let's use again the space we defined in the introduction:

```r
param_set = paradox::ParamSet$new(params = list(
  paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1),
  paradox::ParamInt$new("minsplit", lower = 1, upper = 10)))
```

To start the tuning, we still need to select how the optimization should take place. In other words, we need to choose an optimization algorithm. Several such algorithms are currently implemented in mlr3tuning. In this example we will use a simple "Grid Search". Since we have only numeric parameters and specified the lower and upper bounds for the search space, mlr3tuning::TunerGridSearch will create a grid of equally-sized steps. By default, mlr3tuning::TunerGridSearch creates ten equal-sized steps. The number of steps can be changed with the resolution argument. In this example we use 15 steps and create a new mlr3tuning::TunerGridSearch object using the mlr3tuning::PerformanceEvaluator pe and the resolution.

```r
tuner_gs = TunerGridSearch$new(pe, resolution = 15)
```

Oh! The error message tells us that we need to specify an additional argument called terminator.

### 4.1.3 Defining the Terminator

What is a "Terminator"? A mlr3tuning::Terminator defines when the tuning should be stopped. Multiple termination criteria are available, for example mlr3tuning::TerminatorEvaluations (stop after a given number of evaluations) and mlr3tuning::TerminatorRuntime (stop after a given runtime).

Often a single termination criterion is not enough. For example, you will not know beforehand whether all of your evaluations will finish within a given amount of time. This highly depends on the Learner and the paradox::ParamSet given. However, you might not want to exceed a certain tuning time for each learner. In this case, it makes sense to combine both criteria using mlr3tuning::TerminatorMultiplexer. Tuning will stop as soon as one Terminator signals that it is finished.

In the following example we create two terminators and then combine them into one:

```r
tr = TerminatorRuntime$new(5)
te = TerminatorEvaluations$new(max_evaluations = 50)

tm = TerminatorMultiplexer$new(list(te, tr))
tm
```

### 4.1.4 Executing the Tuning

Now that we have all required inputs (paradox::ParamSet, mlr3tuning::Terminator and the optimization algorithm), we can perform the hyperparameter tuning. The first step is to create the respective "Tuner" class, here mlr3tuning::TunerGridSearch.

```r
tuner_gs = TunerGridSearch$new(pe = pe, terminator = tm,
  resolution = 15)
```

After it has been initialized, we can call its member function .$tune() to run the tuning.

```r
tuner_gs$tune()
```

.$tune() simply performs a benchmark on the parameter values generated by the tuner and writes the results into a BenchmarkResult object which is stored in field .$bmr of the mlr3tuning::PerformanceEvaluator object that we passed to it.

### 4.1.5 Inspecting Results

During the .$tune() call, not only was the BenchmarkResult output written to the .$bmr slot of the mlr3tuning::PerformanceEvaluator, but the mlr3tuning::Terminator was also updated.

We can take a look by directly printing the mlr3tuning::Terminator object:

```r
print(tm)
```

We can easily see that all evaluations were executed before the time limit kicked in.

Now let's take a closer look at the actual tuning result. It can be queried using .$tune_result() from the respective mlr3tuning::Tuner class that generated it. Internally, the function scrapes the data from the BenchmarkResult that was generated during tuning and stored in .$pe$bmr.

```r
tuner_gs$tune_result()
```

It returns the scored performance and the values of the optimized hyperparameters. Note that each measure “knows” if it was minimized or maximized during tuning:

```r
measures$classif.ce$minimize
```

A summary of the BenchmarkResult created by the tuning can be queried using the .$aggregate() function of the Tuner class.

```r
tuner_gs$aggregate()
```

Now the optimized hyperparameters can be used to create a new Learner and train it on the full dataset.

```r
task = mlr_tasks$get("iris")
learner = mlr_learners$get("classif.rpart",
  param_vals = list(
    cp = tuner_gs$tune_result()$values$cp,
    minsplit = tuner_gs$tune_result()$values$minsplit)
)
```

```r
learner$train(task)
```

### 4.1.6 Automating the Tuning

The steps shown above can be executed in a more convenient way using the mlr3tuning::AutoTuner class. This class gathers all the steps from above into a single call and uses the optimized hyperparameters from the tuning to create a new learner.

Requirements:

- Task
- Learner
- Resampling
- Measure
- Parameter Set
- Terminator
- Tuning method
- Tuning settings (optional)

```r
task = mlr_tasks$get("iris")
learner = mlr_learners$get("classif.rpart")
resampling = mlr_resamplings$get("holdout")
measures = mlr_measures$mget("classif.ce")
param_set = paradox::ParamSet$new(
  params = list(paradox::ParamDbl$new("cp", lower = 0.001, upper = 0.1)))
terminator = TerminatorEvaluations$new(5)

at = mlr3tuning::AutoTuner$new(learner, resampling, measures = measures,
  param_set, terminator, tuner = TunerGridSearch,
  tuner_settings = list(resolution = 10L))
at$train(task)
at$learner
```

Note that you can also pass the AutoTuner to resample() or benchmark(). By doing so, the AutoTuner will do its resampling for tuning on the training set of the respective split of the outer resampling. This is called nested resampling.

To compare the tuned learner with the learner using its defaults, we can use benchmark():

```r
bmr = benchmark(expand_grid("iris", list(at, "classif.rpart"), "cv3"))
bmr$aggregate(measures)
```
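Since the AutoTuner behaves like a regular Learner, nested resampling can also be written with resample(). A hedged sketch, assuming the task and at objects from above and the resample(task, learner, resampling) interface used elsewhere in this chapter:

```r
# Outer 3-fold cross-validation; on each outer training set the
# AutoTuner runs its own inner holdout resampling for tuning.
rr = resample(task, at, mlr_resamplings$get("cv3"))
rr
```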

### 4.1.7 Summary

- Use PerformanceEvaluator$eval() for manual evaluation of hyperparameter settings during resampling
- Define a Tuner of your choice using a mlr3tuning::PerformanceEvaluator and a mlr3tuning::Terminator
- Inspect the tuning result using Tuner*$tune_result()
- Get a summary view of all runs based on the BenchmarkResult object created during tuning using Tuner*$aggregate()
- The AutoTuner class is a convenience wrapper that gathers all steps into one function
