3.1 Hyperparameter Tuning

Hyperparameters are second-order parameters of machine learning models that, while often not explicitly optimized during the model estimation process, can have an important impact on the outcome and predictive performance of a model. Typically, hyperparameters are fixed before training a model. However, because the output of a model can be sensitive to the chosen hyperparameter values, it is often recommended to make an informed decision about which settings may yield better predictive performance. While hyperparameters can be chosen a priori, it is usually advantageous to evaluate different settings before fitting the model to the training data. This process is often called model ‘tuning’.
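
For example, a hyperparameter can be fixed when constructing a learner. A minimal sketch with mlr3 (the value cp = 0.05 is an arbitrary illustration, not a recommendation):

library("mlr3")

# Fix a hyperparameter a priori and train: cp = 0.05 is an arbitrary choice
learner = lrn("classif.rpart", cp = 0.05)
learner$train(tsk("pima"))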

Hyperparameter tuning is supported via the mlr3tuning extension package.

At the heart of mlr3tuning are the R6 classes:

  • TuningInstanceSingleCrit and TuningInstanceMultiCrit: these two classes describe the tuning problem and store the results.
  • Tuner: the base class for implementations of tuning algorithms.

3.1.1 The TuningInstance Classes

The following subsections examine the optimization of a simple classification tree on the Pima Indian Diabetes data set.

task = tsk("pima")
print(task)
## <TaskClassif:pima> (768 x 9)
## * Target: diabetes
## * Properties: twoclass
## * Features (8):
##   - dbl (8): age, glucose, insulin, mass, pedigree, pregnant, pressure,
##     triceps

We use the classification tree from rpart and choose a subset of the hyperparameters we want to tune. This is often referred to as the “tuning space”.

learner = lrn("classif.rpart")
learner$param_set
## <ParamSet>
##                 id    class lower upper      levels        default value
##  1:       minsplit ParamInt     1   Inf                         20      
##  2:      minbucket ParamInt     1   Inf             <NoDefault[3]>      
##  3:             cp ParamDbl     0     1                       0.01      
##  4:     maxcompete ParamInt     0   Inf                          4      
##  5:   maxsurrogate ParamInt     0   Inf                          5      
##  6:       maxdepth ParamInt     1    30                         30      
##  7:   usesurrogate ParamInt     0     2                          2      
##  8: surrogatestyle ParamInt     0     1                          0      
##  9:           xval ParamInt     0   Inf                         10     0
## 10:     keep_model ParamLgl    NA    NA  TRUE,FALSE          FALSE

Here, we opt to tune two parameters:

  • The complexity parameter cp
  • The termination criterion minsplit

The tuning space needs to be bounded; therefore, we set lower and upper bounds:

library("paradox")
tune_ps = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1),
  ParamInt$new("minsplit", lower = 1, upper = 10)
))
tune_ps
## <ParamSet>
##          id    class lower upper levels        default value
## 1:       cp ParamDbl 0.001   0.1        <NoDefault[3]>      
## 2: minsplit ParamInt 1.000  10.0        <NoDefault[3]>

Next, we need to specify how to evaluate the performance. For this, we need to choose a resampling strategy and a performance measure.

hout = rsmp("holdout")
measure = msr("classif.ce")

Finally, one has to select the budget available for solving this tuning instance. This is done by choosing one of the available Terminators:

  • Terminate after a given time (TerminatorClockTime)
  • Terminate after a given number of iterations (TerminatorEvals)
  • Terminate after a specific performance has been reached (TerminatorPerfReached)
  • Terminate when tuning does not improve (TerminatorStagnation)
  • A combination of the above in an ALL or ANY fashion (TerminatorCombo)
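
Other budgets can be specified analogously; a minimal sketch, assuming the terminator keys and parameters provided by the underlying bbotk package (output omitted):

# Stop after 60 seconds of tuning ...
trm("run_time", secs = 60)
# ... or stop once a classification error of 0.25 or better has been observed
trm("perf_reached", level = 0.25)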

For this short introduction, we specify a budget of 20 evaluations and then put everything together into a TuningInstanceSingleCrit:

library("mlr3tuning")

evals20 = trm("evals", n_evals = 20)

instance = TuningInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = hout,
  measure = measure,
  search_space = tune_ps,
  terminator = evals20
)
instance
## <TuningInstanceSingleCrit>
## * State:  Not optimized
## * Objective: <ObjectiveTuning:classif.rpart_on_pima>
## * Search Space:
## <ParamSet>
##          id    class lower upper levels        default value
## 1:       cp ParamDbl 0.001   0.1        <NoDefault[3]>      
## 2: minsplit ParamInt 1.000  10.0        <NoDefault[3]>      
## * Terminator: <TerminatorEvals>
## * Terminated: FALSE
## * Archive:
## <ArchiveTuning>
## Null data.table (0 rows and 0 cols)

To start the tuning, we still need to select how the optimization should take place. In other words, we need to choose the optimization algorithm via the Tuner class.

3.1.2 The Tuner Class

The following algorithms are currently implemented in mlr3tuning:

  • Grid Search (TunerGridSearch)
  • Random Search (TunerRandomSearch) (Bergstra and Bengio 2012)
  • Generalized Simulated Annealing (TunerGenSA)
  • Non-Linear Optimization (TunerNLoptr)

In this example, we will use a simple grid search with a grid resolution of 5.

tuner = tnr("grid_search", resolution = 5)

Since we have only numeric parameters, TunerGridSearch will create an equidistant grid between the respective upper and lower bounds. As we have two hyperparameters with a resolution of 5, the two-dimensional grid consists of \(5^2 = 25\) configurations. Each configuration serves as a hyperparameter setting for the previously defined Learner and triggers a holdout validation (as defined by the resampling above) on the task. All configurations will be examined by the tuner (in a random order) until either all configurations are evaluated or the Terminator signals that the budget is exhausted.
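
The grid can also be generated upfront to verify its size, as sketched below using paradox's generate_design_grid() (output omitted):

library("paradox")

# Materialize the grid the tuner will iterate over: an equidistant design
# with 5 values per parameter, i.e. 5^2 = 25 configurations in total
design = generate_design_grid(tune_ps, resolution = 5)
nrow(design$data)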

3.1.3 Triggering the Tuning

To start the tuning, we simply pass the TuningInstanceSingleCrit to the $optimize() method of the initialized Tuner. The tuner proceeds as follows:

  1. The Tuner proposes at least one hyperparameter configuration (the Tuner may propose multiple points to improve parallelization, which can be controlled via the setting batch_size).
  2. For each configuration, the given Learner is fitted on the Task using the provided Resampling. All evaluations are stored in the archive of the TuningInstanceSingleCrit.
  3. The Terminator is queried whether the budget is exhausted. If the budget is not exhausted, restart with 1) until it is.
  4. Determine the configuration with the best observed performance.
  5. Store the best configuration as the result in the instance object. The best hyperparameter settings ($result_learner_param_vals) and the corresponding measured performance ($result_y) can be accessed from the instance.
tuner$optimize(instance)
## INFO  [15:14:39.039] Starting to optimize 2 parameter(s) with '<OptimizerGridSearch>' and '<TerminatorEvals>' 
## INFO  [15:14:39.088] Evaluating 1 configuration(s) 
## INFO  [15:14:39.246] Result of batch 1: 
## INFO  [15:14:39.250]   cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.250]  0.1       10     0.2773 4339c14f-ff85-41f7-b1b3-68575b03a7b8 
## INFO  [15:14:39.252] Evaluating 1 configuration(s) 
## INFO  [15:14:39.327] Result of batch 2: 
## INFO  [15:14:39.330]       cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.330]  0.02575        8     0.2461 83da19b8-b80a-4c20-9728-352800cb4356 
## INFO  [15:14:39.333] Evaluating 1 configuration(s) 
## INFO  [15:14:39.389] Result of batch 3: 
## INFO  [15:14:39.392]      cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.392]  0.0505        1     0.2461 0f49a7ca-56e9-404d-bcac-dbf3c6b2d5b7 
## INFO  [15:14:39.395] Evaluating 1 configuration(s) 
## INFO  [15:14:39.452] Result of batch 4: 
## INFO  [15:14:39.455]      cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.455]  0.0505       10     0.2461 9e10e687-096b-4d81-881b-6831d1ce6d94 
## INFO  [15:14:39.458] Evaluating 1 configuration(s) 
## INFO  [15:14:39.517] Result of batch 5: 
## INFO  [15:14:39.520]     cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.520]  0.001       10     0.2617 10564730-0b55-4fa2-a4e5-5feba44c7c69 
## INFO  [15:14:39.527] Evaluating 1 configuration(s) 
## INFO  [15:14:39.581] Result of batch 6: 
## INFO  [15:14:39.584]       cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.584]  0.02575       10     0.2461 e6cbb281-e1d8-46a1-b6ae-00ba73096866 
## INFO  [15:14:39.586] Evaluating 1 configuration(s) 
## INFO  [15:14:39.642] Result of batch 7: 
## INFO  [15:14:39.645]     cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.645]  0.001        1     0.3203 24dbab9a-8362-487f-90bc-da6019f2d629 
## INFO  [15:14:39.648] Evaluating 1 configuration(s) 
## INFO  [15:14:39.705] Result of batch 8: 
## INFO  [15:14:39.708]       cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.708]  0.07525        8     0.2773 c21f0c46-5775-4fd1-b1e4-1b13cf4a45df 
## INFO  [15:14:39.711] Evaluating 1 configuration(s) 
## INFO  [15:14:39.770] Result of batch 9: 
## INFO  [15:14:39.773]      cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.773]  0.0505        3     0.2461 2ae8617a-a43a-4ecf-b543-3968c76aea24 
## INFO  [15:14:39.776] Evaluating 1 configuration(s) 
## INFO  [15:14:39.834] Result of batch 10: 
## INFO  [15:14:39.837]      cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.837]  0.0505        8     0.2461 1850de0c-8bc8-4beb-81bf-fe1d22e4f78e 
## INFO  [15:14:39.840] Evaluating 1 configuration(s) 
## INFO  [15:14:39.896] Result of batch 11: 
## INFO  [15:14:39.899]       cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.899]  0.07525        3     0.2773 030a31e7-2960-4efe-bd09-477af4b6f0aa 
## INFO  [15:14:39.901] Evaluating 1 configuration(s) 
## INFO  [15:14:39.962] Result of batch 12: 
## INFO  [15:14:39.965]     cp minsplit classif.ce                                uhash 
## INFO  [15:14:39.965]  0.001        3     0.2969 2e91140c-d71f-4b94-9c15-dc3a830448b8 
## INFO  [15:14:39.967] Evaluating 1 configuration(s) 
## INFO  [15:14:40.023] Result of batch 13: 
## INFO  [15:14:40.025]   cp minsplit classif.ce                                uhash 
## INFO  [15:14:40.025]  0.1        3     0.2773 3e8f9015-4393-4019-93e8-1e8f1833c960 
## INFO  [15:14:40.028] Evaluating 1 configuration(s) 
## INFO  [15:14:40.082] Result of batch 14: 
## INFO  [15:14:40.084]   cp minsplit classif.ce                                uhash 
## INFO  [15:14:40.084]  0.1        1     0.2773 81bd3871-3137-4397-8e00-90d31bda83ce 
## INFO  [15:14:40.087] Evaluating 1 configuration(s) 
## INFO  [15:14:40.142] Result of batch 15: 
## INFO  [15:14:40.145]       cp minsplit classif.ce                                uhash 
## INFO  [15:14:40.145]  0.07525        1     0.2773 23bb44ec-b0a6-41bf-b8df-438ff23171dc 
## INFO  [15:14:40.147] Evaluating 1 configuration(s) 
## INFO  [15:14:40.200] Result of batch 16: 
## INFO  [15:14:40.203]       cp minsplit classif.ce                                uhash 
## INFO  [15:14:40.203]  0.02575        1     0.2461 3dce8684-69b6-4dc4-8ddf-706b74ff4704 
## INFO  [15:14:40.206] Evaluating 1 configuration(s) 
## INFO  [15:14:40.262] Result of batch 17: 
## INFO  [15:14:40.265]       cp minsplit classif.ce                                uhash 
## INFO  [15:14:40.265]  0.02575        5     0.2461 fd093922-c74e-4119-8bdf-af675f3a649e 
## INFO  [15:14:40.268] Evaluating 1 configuration(s) 
## INFO  [15:14:40.332] Result of batch 18: 
## INFO  [15:14:40.335]       cp minsplit classif.ce                                uhash 
## INFO  [15:14:40.335]  0.07525       10     0.2773 0414d313-458c-42fa-81f0-e0e21136d863 
## INFO  [15:14:40.337] Evaluating 1 configuration(s) 
## INFO  [15:14:40.395] Result of batch 19: 
## INFO  [15:14:40.398]     cp minsplit classif.ce                                uhash 
## INFO  [15:14:40.398]  0.001        8     0.2695 7a9ea601-273a-4578-9112-b3fe5a822100 
## INFO  [15:14:40.401] Evaluating 1 configuration(s) 
## INFO  [15:14:40.462] Result of batch 20: 
## INFO  [15:14:40.465]       cp minsplit classif.ce                                uhash 
## INFO  [15:14:40.465]  0.02575        3     0.2461 15aaedf4-e119-4452-9581-64287c241e82 
## INFO  [15:14:40.474] Finished optimizing after 20 evaluation(s) 
## INFO  [15:14:40.475] Result: 
## INFO  [15:14:40.478]       cp minsplit learner_param_vals  x_domain classif.ce 
## INFO  [15:14:40.478]  0.02575        8          <list[3]> <list[2]>     0.2461
##         cp minsplit learner_param_vals  x_domain classif.ce
## 1: 0.02575        8          <list[3]> <list[2]>     0.2461
instance$result_learner_param_vals
## $xval
## [1] 0
## 
## $cp
## [1] 0.02575
## 
## $minsplit
## [1] 8
instance$result_y
## classif.ce 
##     0.2461

One can investigate all resamplings that were undertaken, as they are stored in the archive of the TuningInstanceSingleCrit and can be accessed through the $data() method:

instance$archive$data()
##          cp minsplit classif.ce                                uhash  x_domain
##  1: 0.10000       10     0.2773 4339c14f-ff85-41f7-b1b3-68575b03a7b8 <list[2]>
##  2: 0.02575        8     0.2461 83da19b8-b80a-4c20-9728-352800cb4356 <list[2]>
##  3: 0.05050        1     0.2461 0f49a7ca-56e9-404d-bcac-dbf3c6b2d5b7 <list[2]>
##  4: 0.05050       10     0.2461 9e10e687-096b-4d81-881b-6831d1ce6d94 <list[2]>
##  5: 0.00100       10     0.2617 10564730-0b55-4fa2-a4e5-5feba44c7c69 <list[2]>
##  6: 0.02575       10     0.2461 e6cbb281-e1d8-46a1-b6ae-00ba73096866 <list[2]>
##  7: 0.00100        1     0.3203 24dbab9a-8362-487f-90bc-da6019f2d629 <list[2]>
##  8: 0.07525        8     0.2773 c21f0c46-5775-4fd1-b1e4-1b13cf4a45df <list[2]>
##  9: 0.05050        3     0.2461 2ae8617a-a43a-4ecf-b543-3968c76aea24 <list[2]>
## 10: 0.05050        8     0.2461 1850de0c-8bc8-4beb-81bf-fe1d22e4f78e <list[2]>
## 11: 0.07525        3     0.2773 030a31e7-2960-4efe-bd09-477af4b6f0aa <list[2]>
## 12: 0.00100        3     0.2969 2e91140c-d71f-4b94-9c15-dc3a830448b8 <list[2]>
## 13: 0.10000        3     0.2773 3e8f9015-4393-4019-93e8-1e8f1833c960 <list[2]>
## 14: 0.10000        1     0.2773 81bd3871-3137-4397-8e00-90d31bda83ce <list[2]>
## 15: 0.07525        1     0.2773 23bb44ec-b0a6-41bf-b8df-438ff23171dc <list[2]>
## 16: 0.02575        1     0.2461 3dce8684-69b6-4dc4-8ddf-706b74ff4704 <list[2]>
## 17: 0.02575        5     0.2461 fd093922-c74e-4119-8bdf-af675f3a649e <list[2]>
## 18: 0.07525       10     0.2773 0414d313-458c-42fa-81f0-e0e21136d863 <list[2]>
## 19: 0.00100        8     0.2695 7a9ea601-273a-4578-9112-b3fe5a822100 <list[2]>
## 20: 0.02575        3     0.2461 15aaedf4-e119-4452-9581-64287c241e82 <list[2]>
##               timestamp batch_nr
##  1: 2020-09-09 15:14:39        1
##  2: 2020-09-09 15:14:39        2
##  3: 2020-09-09 15:14:39        3
##  4: 2020-09-09 15:14:39        4
##  5: 2020-09-09 15:14:39        5
##  6: 2020-09-09 15:14:39        6
##  7: 2020-09-09 15:14:39        7
##  8: 2020-09-09 15:14:39        8
##  9: 2020-09-09 15:14:39        9
## 10: 2020-09-09 15:14:39       10
## 11: 2020-09-09 15:14:39       11
## 12: 2020-09-09 15:14:39       12
## 13: 2020-09-09 15:14:40       13
## 14: 2020-09-09 15:14:40       14
## 15: 2020-09-09 15:14:40       15
## 16: 2020-09-09 15:14:40       16
## 17: 2020-09-09 15:14:40       17
## 18: 2020-09-09 15:14:40       18
## 19: 2020-09-09 15:14:40       19
## 20: 2020-09-09 15:14:40       20

In sum, the grid search evaluated 20 of the 25 configurations of the grid, in a random order, before the Terminator stopped the tuning.
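
Since $data() returns a plain data.table, standard data.table operations can be used for further inspection, e.g. sorting the evaluated configurations by performance (a sketch; output omitted):

# Order all evaluated configurations by ascending classification error
instance$archive$data()[order(classif.ce)]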

The associated resampling iterations can be accessed in the BenchmarkResult:

instance$archive$benchmark_result$data
##                                    uhash              task
##  1: 4339c14f-ff85-41f7-b1b3-68575b03a7b8 <TaskClassif[44]>
##  2: 83da19b8-b80a-4c20-9728-352800cb4356 <TaskClassif[44]>
##  3: 0f49a7ca-56e9-404d-bcac-dbf3c6b2d5b7 <TaskClassif[44]>
##  4: 9e10e687-096b-4d81-881b-6831d1ce6d94 <TaskClassif[44]>
##  5: 10564730-0b55-4fa2-a4e5-5feba44c7c69 <TaskClassif[44]>
##  6: e6cbb281-e1d8-46a1-b6ae-00ba73096866 <TaskClassif[44]>
##  7: 24dbab9a-8362-487f-90bc-da6019f2d629 <TaskClassif[44]>
##  8: c21f0c46-5775-4fd1-b1e4-1b13cf4a45df <TaskClassif[44]>
##  9: 2ae8617a-a43a-4ecf-b543-3968c76aea24 <TaskClassif[44]>
## 10: 1850de0c-8bc8-4beb-81bf-fe1d22e4f78e <TaskClassif[44]>
## 11: 030a31e7-2960-4efe-bd09-477af4b6f0aa <TaskClassif[44]>
## 12: 2e91140c-d71f-4b94-9c15-dc3a830448b8 <TaskClassif[44]>
## 13: 3e8f9015-4393-4019-93e8-1e8f1833c960 <TaskClassif[44]>
## 14: 81bd3871-3137-4397-8e00-90d31bda83ce <TaskClassif[44]>
## 15: 23bb44ec-b0a6-41bf-b8df-438ff23171dc <TaskClassif[44]>
## 16: 3dce8684-69b6-4dc4-8ddf-706b74ff4704 <TaskClassif[44]>
## 17: fd093922-c74e-4119-8bdf-af675f3a649e <TaskClassif[44]>
## 18: 0414d313-458c-42fa-81f0-e0e21136d863 <TaskClassif[44]>
## 19: 7a9ea601-273a-4578-9112-b3fe5a822100 <TaskClassif[44]>
## 20: 15aaedf4-e119-4452-9581-64287c241e82 <TaskClassif[44]>
##                       learner              resampling iteration prediction
##  1: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
##  2: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
##  3: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
##  4: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
##  5: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
##  6: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
##  7: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
##  8: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
##  9: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 10: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 11: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 12: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 13: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 14: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 15: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 16: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 17: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 18: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 19: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>
## 20: <LearnerClassifRpart[32]> <ResamplingHoldout[19]>         1  <list[1]>

The uhash column links the resampling iterations to the evaluated configurations stored in instance$archive$data(). This allows us, for example, to score the included ResampleResults on a different measure.
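
As a sketch (output omitted), one can rescore the stored iterations with classification accuracy via BenchmarkResult's $score() method and relate them back to the archive through uhash:

# Re-score the archived resampling iterations with classification accuracy;
# the returned table contains one row per resampling iteration
scores = instance$archive$benchmark_result$score(msr("classif.acc"))
scores[, list(uhash, classif.acc)]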

Now we can take the previously created Learner, set the returned hyperparameters, and train it on the full dataset.

learner$param_set$values = instance$result_learner_param_vals
learner$train(task)

The trained model can now be used to make predictions on external data. Note that predicting on observations present in the task should be avoided: the model has already seen these observations during tuning, so the resulting performance estimates would be statistically biased and over-optimistic. To obtain statistically unbiased performance estimates for the current task, nested resampling is required.
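
As a sketch, new observations can be passed to the trained learner via $predict_newdata(); the feature values below are invented purely for illustration:

# A hypothetical external observation (all feature values are made up)
newdata = data.frame(
  age = 50, glucose = 148, insulin = 0, mass = 33.6,
  pedigree = 0.627, pregnant = 6, pressure = 72, triceps = 35
)
learner$predict_newdata(newdata)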

3.1.4 Automating the Tuning

The AutoTuner wraps a learner and augments it with automatic tuning for a given set of hyperparameters. Because the AutoTuner itself inherits from the Learner base class, it can be used like any other learner. Analogously to the previous subsection, a new classification tree learner is created. This classification tree learner automatically tunes the hyperparameters cp and minsplit using an inner resampling (holdout). We create a terminator which allows 10 evaluations, and use simple random search as the tuning algorithm:

library("paradox")
library("mlr3tuning")

learner = lrn("classif.rpart")
tune_ps = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1),
  ParamInt$new("minsplit", lower = 1, upper = 10)
))
terminator = trm("evals", n_evals = 10)
tuner = tnr("random_search")

at = AutoTuner$new(
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = tune_ps,
  terminator = terminator,
  tuner = tuner
)
at
## <AutoTuner:classif.rpart.tuned>
## * Model: -
## * Parameters: xval=0
## * Packages: rpart
## * Predict Type: response
## * Feature types: logical, integer, numeric, factor, ordered
## * Properties: importance, missings, multiclass, selected_features,
##   twoclass, weights
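
As a standalone sketch (output omitted), the AutoTuner can also be trained directly; tuning then happens internally before the final model is fitted:

# Tune on the task, then fit a final model with the best configuration found
at$train(tsk("pima"))
# Inspect the tuning outcome (assuming the $tuning_result field of AutoTuner)
at$tuning_result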

We can now use the learner like any other learner, calling the $train() and $predict() methods. This time, however, we pass it to benchmark() to compare the tuned learner to a classification tree without tuning. This way, the AutoTuner performs its tuning resampling on the training set of each split of the outer resampling, and the learner then predicts on the corresponding test set of the outer resampling. This yields unbiased performance measures, as the observations in the test set have not been used during tuning or fitting of the respective learner. This is called nested resampling.

To compare the tuned learner with the learner that uses default values, we can use benchmark():

grid = benchmark_grid(
  task = tsk("pima"),
  learner = list(at, lrn("classif.rpart")),
  resampling = rsmp("cv", folds = 3)
)

# avoid console output from mlr3tuning
logger = lgr::get_logger("bbotk")
logger$set_threshold("warn")

bmr = benchmark(grid)
bmr$aggregate(msrs(c("classif.ce", "time_train")))
##    nr      resample_result task_id          learner_id resampling_id iters
## 1:  1 <ResampleResult[18]>    pima classif.rpart.tuned            cv     3
## 2:  2 <ResampleResult[18]>    pima       classif.rpart            cv     3
##    classif.ce time_train
## 1:     0.2513   0.685333
## 2:     0.2396   0.007333

Note that we do not expect any substantial performance difference compared to the untuned approach, for multiple reasons:

  • the task is too easy
  • the task is rather small, and thus prone to overfitting
  • the tuning budget (10 evaluations) is small
  • rpart does not benefit that much from tuning

References

Bergstra, James, and Yoshua Bengio. 2012. “Random Search for Hyper-Parameter Optimization.” J. Mach. Learn. Res. 13: 281–305.