2.6 Benchmarking

Comparing the performance of different learners on multiple tasks and/or different resampling schemes is a recurring operation in machine learning, usually referred to as “benchmarking”. The mlr3 package offers the benchmark() function for convenience.

2.6.1 Design Creation

In mlr3 we require you to supply a “design” of your benchmark experiment. By “design” we essentially mean the table of settings you want to execute: each row of the design consists of a Task, a Learner and a Resampling.

Here, we call benchmark() to perform a single holdout split on a single task and two learners. We use the benchmark_grid() function to create an exhaustive design and instantiate the resampling properly, so that all learners are executed on the same train/test split for each task:
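A minimal sketch of such a call; the choice of the german_credit task and the rpart and featureless learners is an illustrative assumption, any classification task and learners predicting probabilities would work just as well:

```r
library(mlr3)

# exhaustive design: one task, two learners, a single holdout split
design = benchmark_grid(
  tasks = tsk("german_credit"),
  learners = lrns(c("classif.rpart", "classif.featureless"), predict_type = "prob"),
  resamplings = rsmp("holdout")
)

# run the benchmark and aggregate the AUC per learner
bmr = benchmark(design)
bmr$aggregate(msr("classif.auc"))
```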

Instead of using benchmark_grid() you could also create the design manually as a data.table and use the full flexibility of the benchmark() function. The design does not have to be exhaustive, e.g. it can contain a different learner for each task. However, note that benchmark_grid() takes care of instantiating the resamplings for each task. If you create the design manually and do not instantiate the resamplings yourself before creating the design, the train/test splits will differ between rows of the design, even for rows that use the same task.
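A sketch of a manually constructed design, assuming two example tasks with one learner each; every resampling is instantiated on its task before the design is built so that the splits are fixed:

```r
library(mlr3)
library(data.table)

tasks = list(tsk("german_credit"), tsk("sonar"))
learners = list(lrn("classif.rpart"), lrn("classif.featureless"))
resamplings = list(rsmp("holdout"), rsmp("holdout"))

# instantiate each resampling on its respective task so the splits are fixed per row
manual_design = data.table(
  task = tasks,
  learner = learners,
  resampling = Map(function(r, t) r$instantiate(t), resamplings, tasks)
)

# this design can be passed to benchmark() just like one created by benchmark_grid()
benchmark(manual_design)
```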

Let’s construct a more complex design to show the full capabilities of the benchmark() function.
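A sketch of such a design: three binary classification tasks crossed with three learners under 3-fold cross-validation. The particular tasks and learners are an assumption for illustration; classif.ranger additionally requires the mlr3learners package.

```r
library(mlr3)
library(mlr3learners)  # assumed to be installed; provides classif.ranger

# three example binary classification tasks
tasks = lapply(c("german_credit", "sonar", "spam"), tsk)

# three learners, predicting probabilities so that the AUC can be scored later
learners = lrns(
  c("classif.rpart", "classif.ranger", "classif.featureless"),
  predict_type = "prob"
)

# 3-fold cross-validation; benchmark_grid() instantiates it once per task
design = benchmark_grid(tasks, learners, rsmp("cv", folds = 3))
print(design)
```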

2.6.2 Execution and Aggregation of Results

After the benchmark design is ready, we can directly call benchmark():
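Continuing the sketch from above, running the whole design is a single call:

```r
# execute every task/learner/resampling combination in the design
bmr = benchmark(design)
print(bmr)
```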

Note that we did not instantiate the resampling manually. benchmark_grid() took care of it for us: each resampling strategy is instantiated once for each task during the construction of the exhaustive grid.

After the benchmark, we can calculate and aggregate the performance with $aggregate():
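A sketch using the bmr object from above; we aggregate the AUC, which is available because the learners were constructed with predict_type = "prob":

```r
# mean AUC per task/learner/resampling combination, averaged over the folds
tab = bmr$aggregate(msr("classif.auc"))
print(tab)
```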

Subsequently, we can aggregate the results further. For example, we might be interested in which learner performed best across all tasks simultaneously. Simply averaging the performances with the mean is usually not statistically sound. Instead, we calculate the rank statistic for each learner grouped by task, and then aggregate the calculated ranks grouped by learner. Since the AUC needs to be maximized, we multiply by \(-1\) so that the best learner gets a rank of 1.
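A sketch of this rank aggregation with data.table, operating on the aggregated table from above (the column name classif.auc follows the id of the measure used for scoring):

```r
library(data.table)

# rank learners within each task; the sign flip makes the highest AUC rank 1
ranks = tab[, .(learner_id, rank = rank(-classif.auc, ties.method = "average")),
            by = task_id]

# average rank per learner across all tasks, best first
ranks[, .(mean_rank = mean(rank)), by = learner_id][order(mean_rank)]
```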

Unsurprisingly, the featureless learner is outperformed.

2.6.3 Plotting Benchmark Results

Analogously to plotting tasks, predictions or resample results, mlr3viz also provides an autoplot() method for benchmark results.
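A sketch, assuming the benchmark result from above; the default plot type shows the per-iteration scores as boxplots, and passing a measure is assumed here to select the AUC instead of the default classification error:

```r
library(mlr3viz)
library(ggplot2)

# boxplots of the per-fold AUC values, grouped by learner and task
autoplot(bmr, measure = msr("classif.auc"))
```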

We can also plot ROC curves. To do so, we first need to filter the BenchmarkResult to only contain a single Task:
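A sketch, filtering on one of the example tasks from above; the deep clone keeps the original bmr object intact:

```r
# restrict the benchmark result to a single task, then draw the ROC curves
bmr_german = bmr$clone(deep = TRUE)$filter(task_ids = "german_credit")
autoplot(bmr_german, type = "roc")
```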

All available types are listed on the manual page of autoplot.BenchmarkResult().