3.1 Hyperparameter Tuning

Hyperparameter tuning is supported via the extension package mlr3tuning. At the heart of mlr3tuning are the following R6 classes:

  • TuningInstance: This class describes the tuning problem and stores results.
  • Tuner: This class is the base class for implementations of tuning algorithms.

3.1.1 The TuningInstance Class

The following sub-section examines the optimization of a simple classification tree on the Pima Indian Diabetes data set.

task = tsk("pima")
print(task)
## <TaskClassif:pima> (768 x 9)
## * Target: diabetes
## * Properties: twoclass
## * Features (8):
##   - dbl (8): age, glucose, insulin, mass, pedigree, pregnant, pressure,
##     triceps

We use the classification tree from rpart and choose a subset of the hyperparameters we want to tune. This is often referred to as the “tuning space”.

learner = lrn("classif.rpart")
learner$param_set
## ParamSet: 
##              id    class lower upper levels default value
## 1:     minsplit ParamInt     1   Inf             20      
## 2:           cp ParamDbl     0     1           0.01      
## 3:   maxcompete ParamInt     0   Inf              4      
## 4: maxsurrogate ParamInt     0   Inf              5      
## 5:     maxdepth ParamInt     1    30             30      
## 6:         xval ParamInt     0   Inf             10     0

Here, we opt to tune two parameters:

  • The complexity cp
  • The termination criterion minsplit

As the tuning space has to be bounded, one has to set lower and upper bounds:

library(paradox)
tune_ps = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1),
  ParamInt$new("minsplit", lower = 1, upper = 10)
))
tune_ps
## ParamSet: 
##          id    class lower upper levels     default value
## 1:       cp ParamDbl 0.001   0.1        <NoDefault>      
## 2: minsplit ParamInt 1.000  10.0        <NoDefault>

Next, we need to define how to evaluate the performance. For this, we need to choose a resampling strategy and a performance measure.

hout = rsmp("holdout")
measure = msr("classif.ce")

Finally, one has to determine the budget available to solve this tuning instance. This is done by selecting one of the available Terminators, for example a limit on the number of evaluations (TerminatorEvals) or on the elapsed time (TerminatorClockTime).

For this short introduction, we grant a budget of 20 evaluations and then put everything together into a TuningInstance:

library(mlr3tuning)

evals20 = term("evals", n_evals = 20)

instance = TuningInstance$new(
  task = task,
  learner = learner,
  resampling = hout,
  measures = measure,
  param_set = tune_ps,
  terminator = evals20
)
print(instance)
## <TuningInstance>
## * Task: <TaskClassif:pima>
## * Learner: <LearnerClassifRpart:classif.rpart>
## * Measures: classif.ce
## * Resampling: <ResamplingHoldout>
## * Terminator: <TerminatorEvals>
## * bm_args: list()
## ParamSet: 
##          id    class lower upper levels     default value
## 1:       cp ParamDbl 0.001   0.1        <NoDefault>      
## 2: minsplit ParamInt 1.000  10.0        <NoDefault>      
## Archive:
## Empty data.table (0 rows and 11 cols): nr,batch_nr,resample_result,task_id,learner_id,resampling_id...

To start the tuning, we still need to select how the optimization should take place - in other words, we need to choose the optimization algorithm via the Tuner class.

3.1.2 The Tuner Class

The following algorithms are currently implemented in mlr3tuning:

  • Grid Search (TunerGridSearch)
  • Random Search (TunerRandomSearch) (Bergstra and Bengio 2012)
  • Generalized Simulated Annealing (TunerGenSA)

In this example, we will use a simple grid search with a grid resolution of 5:

tuner = tnr("grid_search", resolution = 5)

Since we have only numeric parameters, TunerGridSearch will create a grid of equally spaced values between the respective lower and upper bounds. As we have two hyperparameters with a resolution of 5, the two-dimensional grid consists of \(5^2 = 25\) configurations. Each configuration serves as a hyperparameter setting for the classification tree and triggers the holdout resampling defined above on the task. All configurations are examined by the tuner (in random order) until either all of them are evaluated or the Terminator signals that the budget is exhausted.
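
To get a feel for the candidate configurations, the grid can be generated explicitly with paradox; this is for illustration only, as the tuner constructs the grid internally (output omitted here):

# Preview the 5 x 5 grid over cp and minsplit that the tuner will evaluate;
# generate_design_grid() is provided by the paradox package
generate_design_grid(tune_ps, resolution = 5)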

3.1.3 Triggering the Tuning

To start the tuning, we simply pass the TuningInstance to the $tune() method of the initialized Tuner. The tuner proceeds as follows:

  1. The Tuner proposes at least one hyperparameter configuration to evaluate (the Tuner may propose multiple points at once to improve parallelization; this can be controlled via the setting batch_size, see the short sketch after this list).
  2. For each configuration, the Learner is fitted on the Task using the provided Resampling. The results are combined with the results of previous iterations into a single BenchmarkResult.
  3. The Terminator is queried whether the budget is exhausted. If the budget is not exhausted, the process restarts at 1) until it is.
  4. Determine the configuration with the best observed performance.
  5. Store the best observed configuration and its performance in the TuningInstance, from where they can be queried via instance$result.
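
As a brief sketch of the batch_size setting mentioned in step 1 (assuming it is exposed as a tuner parameter, as in current mlr3tuning), one could ask the grid search to propose five configurations per batch:

# Propose 5 configurations per batch, e.g. to make better use of parallel
# workers; batch_size is assumed to be available as a Tuner parameter here
tuner_batched = tnr("grid_search", resolution = 5, batch_size = 5)

For the run below, we keep the tuner created above with its default batch size and trigger the tuning:
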
result = tuner$tune(instance)
print(result)
## NULL

One can investigate all resamplings which were undertaken, using the $archive() method of the TuningInstance. Here, we just extract the performance values and the hyperparameters:

instance$archive(unnest = "params")[, c("cp", "minsplit", "classif.ce")]
##          cp minsplit classif.ce
##  1: 0.02575        5     0.2109
##  2: 0.00100        5     0.3047
##  3: 0.05050        5     0.2109
##  4: 0.05050        8     0.2109
##  5: 0.02575       10     0.2109
##  6: 0.07525       10     0.2109
##  7: 0.07525        3     0.2109
##  8: 0.02575        8     0.2109
##  9: 0.02575        1     0.2109
## 10: 0.00100        8     0.2617
## 11: 0.00100        1     0.2734
## 12: 0.07525        8     0.2109
## 13: 0.10000        8     0.2383
## 14: 0.02575        3     0.2109
## 15: 0.05050        3     0.2109
## 16: 0.07525        1     0.2109
## 17: 0.05050        1     0.2109
## 18: 0.10000       10     0.2383
## 19: 0.10000        1     0.2383
## 20: 0.00100       10     0.2656

In sum, the grid search evaluated 20 of the 25 grid configurations in random order before the Terminator stopped the tuning.
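
The best configuration found is also stored in the instance. A quick way to inspect it is shown below; the exact structure of $result may differ between mlr3tuning versions, and the output is omitted here:

# Best configuration found by the tuner; the element `params` holds the
# hyperparameter values that are applied to the learner below
instance$result
instance$result$params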

We can now take the previously created Learner, set the returned hyperparameters, and train it on the full dataset.

learner$param_set$values = instance$result$params
learner$train(task)

The trained model can now be used to make predictions on external data, as sketched below. Note that predicting on observations present in the task should be avoided: the model has already seen these observations during tuning, so the resulting performance estimate would be statistically biased (over-optimistic). Instead, to get unbiased performance estimates for the current task, nested resampling is required.
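
As a hedged sketch of predicting on external data, such data can be wrapped in a new task; new_obs below is a hypothetical data.frame with the same columns as the pima data, including the target diabetes:

# 'new_obs' is a hypothetical data.frame of external observations with the
# same columns as the pima data (including the target column "diabetes")
new_task = TaskClassif$new("pima_new", backend = new_obs, target = "diabetes")
prediction = learner$predict(new_task)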

3.1.4 Automating the Tuning

The AutoTuner wraps a learner and augments it with automatic tuning for a given set of hyperparameters. Because the AutoTuner itself inherits from the Learner base class, it can be used like any other learner. Analogously to the previous subsection, we create a new classification tree learner that automatically tunes the parameters cp and minsplit using an inner resampling (holdout). We use a terminator that allows 10 evaluations and a simple random search as the tuning algorithm:

library(paradox)
library(mlr3tuning)

learner = lrn("classif.rpart")
resampling = rsmp("holdout")
measures = msr("classif.ce")
tune_ps = ParamSet$new(list(
  ParamDbl$new("cp", lower = 0.001, upper = 0.1),
  ParamInt$new("minsplit", lower = 1, upper = 10)
))
terminator = term("evals", n_evals = 10)
tuner = tnr("random_search")

at = AutoTuner$new(
  learner = learner,
  resampling = resampling,
  measures = measures,
  tune_ps = tune_ps,
  terminator = terminator,
  tuner = tuner
)
at
## <AutoTuner:classif.rpart.tuned>
## * Model: -
## * Parameters: xval=0
## * Packages: rpart
## * Predict Type: response
## * Feature types: logical, integer, numeric, factor, ordered
## * Properties: importance, missings, multiclass, selected_features,
##   twoclass, weights

We can now use the learner like any other learner, calling the $train() and $predict() methods (a short sketch of this direct usage follows below). This time, however, we pass it to benchmark() to compare the tuned learner to a classification tree without tuning. This way, the AutoTuner performs its tuning resampling on the training set of the respective split of the outer resampling, while the learner predicts on the test set of the outer resampling. This yields unbiased performance measures, as the observations in the test set have not been used during tuning or fitting of the respective learner. This is called nested resampling.
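
For illustration only, a minimal sketch of this direct usage; task is the pima task created at the beginning of this section, and training triggers the inner tuning:

# Training the AutoTuner runs the inner (holdout) tuning on 'task' and then
# refits the classification tree with the best configuration found
at$train(task)
# Predictions work as with any other trained learner; note that predicting on
# the training task itself yields over-optimistic performance estimates
prediction = at$predict(task)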

To compare the tuned learner with the learner using its default settings, we can use benchmark():

grid = benchmark_grid(
  tasks = tsk("pima"),
  learners = list(at, lrn("classif.rpart")),
  resamplings = rsmp("cv", folds = 3)
)
bmr = benchmark(grid)
bmr$aggregate(measures)
##    nr  resample_result task_id          learner_id resampling_id iters
## 1:  1 <ResampleResult>    pima classif.rpart.tuned            cv     3
## 2:  2 <ResampleResult>    pima       classif.rpart            cv     3
##    classif.ce
## 1:     0.2630
## 2:     0.2591

Note that we do not expect any differences here compared to the non-tuned approach for multiple reasons:

  • the task is too easy
  • the task is rather small, and thus prone to overfitting
  • the tuning budget (10 evaluations) is small
  • rpart does not benefit that much from tuning

References

Bergstra, James, and Yoshua Bengio. 2012. “Random Search for Hyper-Parameter Optimization.” J. Mach. Learn. Res. 13: 281–305.