3.4 Tuning with Hyperband
Besides the more traditional tuning methods, the mlr3 ecosystem offers another procedure for hyperparameter optimization called Hyperband, implemented in the mlr3hyperband package. Hyperband is a budget-oriented procedure that weeds out poorly performing configurations early on during a partially sequential training process, thereby increasing tuning efficiency. For this, it combines incremental resource allocation with early stopping: as optimization progresses, computational resources are increased for more promising configurations, while less promising ones are terminated early.

To give an introductory analogy, imagine two horse trainers who are given eight untrained horses. Both trainers want to win the upcoming race, but they are only given 32 units of food. Given that each horse can be fed up to 8 units of food (the "maximum budget" per horse), there is not enough food for all the horses, so it is critical to identify the most promising horses early and give them enough food to improve. The trainers therefore need a strategy to split up the food in the best possible way.

The first trainer is very optimistic and wants to explore the full capabilities of a horse, because he does not want to pass judgment on a horse's performance unless it has been fully trained. So he divides his budget by the maximum amount he can give to a horse (\(32 / 8 = 4\)) and randomly picks four horses; his budget simply is not enough to fully train more. Those four horses are then trained to their full capabilities, while the rest are set free. This way, the trainer is confident about choosing the best of the four trained horses, but he might have overlooked the horse with the highest potential, since he only focused on half of them.

The second trainer is more creative and develops a different strategy. He reasons that if a horse does not perform well at the beginning, it will also not improve after further training. Based on this assumption, he gives one unit of food to each horse and observes how they develop. After the initial food is consumed, he checks their performance and kicks the slower half out of his training regime. Then he increases the available food for the remaining horses and trains them further until the food is consumed again, only to kick out the worse half once more. He repeats this until the one remaining horse gets the rest of the food. This means only one horse is fully trained, but on the flip side, he was able to start training with all eight horses.

On race day, all the horses are put on the starting line. But which trainer will have the winning horse? The one who fully trained as many horses as his budget allowed, or the one who made assumptions about the training progress of his horses? How the training phases might look is visualized in Figure 3.1.

Figure 3.1: Visualization of how the training processes might look. The left plot corresponds to the non-selective trainer, the right one to the selective trainer.
Hyperband works similarly in some ways, but differently in others. It is not embodied by one of the trainers in our analogy, but rather by the person who pays them. Hyperband consists of several brackets, each bracket corresponding to a trainer, and we do not care about horses but about hyperparameter configurations of a machine learning algorithm. The budget is not measured in food, but in a hyperparameter of the learner that scales in some way with the computational effort. An example is the number of epochs we train a neural network, or the number of iterations in boosting. Furthermore, there are not only two brackets (or trainers) but several, each placed at a unique spot between fully exploring later training stages and being extremely selective, i.e. focusing on early training stages.

The level of selection aggressiveness is handled by a user-defined parameter called \(\eta\): \(1/\eta\) is the fraction of configurations that remain after a bracket removes its worst performing ones, and \(\eta\) is also the factor by which the budget is increased for the next stage. Because a sensible maximum budget per configuration differs between scenarios, the user also has to set it via the \(R\) parameter. No further parameters are required for Hyperband; the full required budget across all brackets is indirectly given by \[(\lfloor \log_{\eta}{R} \rfloor + 1)^2 * R\] (Li et al. 2016). To give an idea of how a full bracket layout might look for a specific \(R\) and \(\eta\), the following table shows the schedule for \(\eta = 2\) and \(R = 8\).
| bracket | stage | number of configurations | budget per configuration |
|:-------:|:-----:|:------------------------:|:------------------------:|
| 3 | 0 | 8 | 1 |
| 3 | 1 | 4 | 2 |
| 3 | 2 | 2 | 4 |
| 3 | 3 | 1 | 8 |
| 2 | 0 | 6 | 2 |
| 2 | 1 | 3 | 4 |
| 2 | 2 | 1 | 8 |
| 1 | 0 | 4 | 4 |
| 1 | 1 | 2 | 8 |
| 0 | 0 | 4 | 8 |
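To make the schedule arithmetic concrete, it can be reproduced with a few lines of base R. This is a minimal sketch following the formulas of Li et al. (2016), not code from mlr3hyperband:

eta = 2
R = 8
s_max = floor(log(R, base = eta)) # index of the most explorative bracket
for (s in s_max:0) {
  n = ceiling((s_max + 1) / (s + 1) * eta^s) # initial number of configurations
  r = R / eta^s                              # initial budget per configuration
  for (i in 0:s) {
    cat("bracket", s, "- stage", i, ":", floor(n / eta^i),
      "configs, budget", r * eta^i, "each\n")
  }
}

Running this prints exactly the stages listed in the table above.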
Of course, early termination based on a performance criterion may be disadvantageous if it is done too aggressively in certain scenarios. A learner whose estimated performance jumps radically during the training phase may see its best configurations canceled too early, simply because they do not improve quickly enough compared to others. In other words, it is often unclear beforehand whether a high number of configurations \(n\), aggressively discarded early, is better than a high budget \(B\) per configuration. This tradeoff is called the "\(n\) versus \(B/n\) problem." To balance selection based on early training performance against exploration of performance in later training stages, \(\lfloor \log_{\eta}{R} \rfloor + 1\) brackets are constructed, each with an associated set of configurations of varying size. Thus, some brackets contain many configurations with a small initial budget; in these, a lot are discarded after having been trained for only a short amount of time, corresponding to the selective trainer in our horse analogy. Others are constructed with fewer configurations, where discarding only takes place after a significant amount of budget has been consumed. The last bracket usually never discards anything, but also starts with only very few configurations; this is equivalent to the trainer exploring later stages. The former case corresponds to high \(n\), the latter to high \(B/n\). Even though the brackets are initialized with different numbers of configurations and different initial budgets, each bracket is assigned (approximately) the same total budget \((\lfloor \log_{\eta}{R} \rfloor + 1) * R\).
The configurations at the start of each bracket are initialized at random, often by uniform sampling. Note that currently all configurations are trained completely from the beginning, so no online updating of models from stage to stage takes place.
To let Hyperband know which budget to operate on, the user has to specify explicitly which hyperparameter of the learner influences the budget, by tagging a single hyperparameter in the ParamSet with tags = "budget", like in the following snippet:
library(paradox)
# Hyperparameter subset of XGBoost
search_space = ParamSet$new(list(
  ParamInt$new("nrounds", lower = 1, upper = 16, tags = "budget"),
  ParamFct$new("booster", levels = c("gbtree", "gblinear", "dart"))
))
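The same search space can also be written with the shorter ps() sugar that is used in the remainder of this section:

search_space = ps(
  nrounds = p_int(lower = 1, upper = 16, tags = "budget"),
  booster = p_fct(levels = c("gbtree", "gblinear", "dart"))
)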
Thanks to the broad ecosystem of the mlr3verse, a learner does not even need a natural budget parameter. A typical example is the decision tree. By using subsampling as a preprocessing step with mlr3pipelines, we can work around the lacking budget parameter: the subsampling fraction then serves as the budget.
library(mlr3tuning)
library(mlr3hyperband)
library(mlr3pipelines)
set.seed(123)
# extend "classif.rpart" with "subsampling" as preprocessing step
ll = po("subsample") %>>% lrn("classif.rpart")

# extend hyperparameters of "classif.rpart" with subsampling fraction as budget
search_space = ps(
  classif.rpart.cp = p_dbl(lower = 0.001, upper = 0.1),
  classif.rpart.minsplit = p_int(lower = 1, upper = 10),
  subsample.frac = p_dbl(lower = 0.1, upper = 1, tags = "budget")
)
We can now plug the new learner with the extended hyperparameter set into a TuningInstanceSingleCrit the same way as usual.
Naturally, Hyperband terminates once all of its brackets are evaluated, so the Terminator in the tuning instance acts only as an upper bound; it should be set to a low value only if one is unsure how long Hyperband will take to finish under the given settings.
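For example, a generous run-time limit can serve as such a safety net (the five minutes here are an arbitrary illustration):

trm("run_time", secs = 300)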
instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = ll,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  terminator = trm("none"), # hyperband terminates itself
  search_space = search_space
)
Now, we initialize a new instance of the mlr3hyperband::TunerHyperband class and start tuning with it.
tuner = tnr("hyperband", eta = 3)
tuner$optimize(instance)
## INFO [20:21:05.476] [bbotk] Starting to optimize 3 parameter(s) with '<TunerHyperband>' and '<TerminatorNone> [list()]'
## INFO [20:21:05.526] [bbotk] Amount of brackets to be evaluated = 3,
## INFO [20:21:05.537] [bbotk] Start evaluation of bracket 1
## INFO [20:21:05.543] [bbotk] Training 9 configs with budget of 0.111111 for each
## INFO [20:21:05.545] [bbotk] Evaluating 9 configuration(s)
## INFO [20:21:07.425] [bbotk] Result of batch 1:
## INFO [20:21:07.428] [bbotk] classif.rpart.cp classif.rpart.minsplit subsample.frac bracket bracket_stage
## INFO [20:21:07.428] [bbotk] 0.02533 3 0.1111 2 0
## INFO [20:21:07.428] [bbotk] 0.07348 5 0.1111 2 0
## INFO [20:21:07.428] [bbotk] 0.08490 3 0.1111 2 0
## INFO [20:21:07.428] [bbotk] 0.05026 6 0.1111 2 0
## INFO [20:21:07.428] [bbotk] 0.03940 4 0.1111 2 0
## INFO [20:21:07.428] [bbotk] 0.02540 7 0.1111 2 0
## INFO [20:21:07.428] [bbotk] 0.01200 4 0.1111 2 0
## INFO [20:21:07.428] [bbotk] 0.03961 4 0.1111 2 0
## INFO [20:21:07.428] [bbotk] 0.05762 6 0.1111 2 0
## INFO [20:21:07.428] [bbotk] budget_scaled budget_real n_configs classif.ce
## INFO [20:21:07.428] [bbotk] 1.111 0.1111 9 0.04
## INFO [20:21:07.428] [bbotk] 1.111 0.1111 9 0.02
## INFO [20:21:07.428] [bbotk] 1.111 0.1111 9 0.02
## INFO [20:21:07.428] [bbotk] 1.111 0.1111 9 0.02
## INFO [20:21:07.428] [bbotk] 1.111 0.1111 9 0.02
## INFO [20:21:07.428] [bbotk] 1.111 0.1111 9 0.42
## INFO [20:21:07.428] [bbotk] 1.111 0.1111 9 0.14
## INFO [20:21:07.428] [bbotk] 1.111 0.1111 9 0.02
## INFO [20:21:07.428] [bbotk] 1.111 0.1111 9 0.02
## INFO [20:21:07.428] [bbotk] uhash
## INFO [20:21:07.428] [bbotk] 020ca627-fc44-4e9e-b7e9-6b9418f43a59
## INFO [20:21:07.428] [bbotk] 21849b1b-3418-425f-bd96-4c57d8f875a4
## INFO [20:21:07.428] [bbotk] 8953be4c-3c3e-4157-8448-296b0411391d
## INFO [20:21:07.428] [bbotk] aa42f568-aa94-4599-8e8f-68024cd8d512
## INFO [20:21:07.428] [bbotk] cf26466f-387f-4c78-b968-cd1f2f2691d3
## INFO [20:21:07.428] [bbotk] 162e9099-e172-471e-8306-6a10e1f53b09
## INFO [20:21:07.428] [bbotk] e3b51a16-0220-410a-a8e9-080bfc1e61fc
## INFO [20:21:07.428] [bbotk] 654664a0-e49b-4b15-a761-93eba1c26827
## INFO [20:21:07.428] [bbotk] 1f234623-abcd-4080-a275-ca63982f1a35
## INFO [20:21:07.429] [bbotk] Training 3 configs with budget of 0.333333 for each
## INFO [20:21:07.431] [bbotk] Evaluating 3 configuration(s)
## INFO [20:21:08.059] [bbotk] Result of batch 2:
## INFO [20:21:08.062] [bbotk] classif.rpart.cp classif.rpart.minsplit subsample.frac bracket bracket_stage
## INFO [20:21:08.062] [bbotk] 0.07348 5 0.3333 2 1
## INFO [20:21:08.062] [bbotk] 0.08490 3 0.3333 2 1
## INFO [20:21:08.062] [bbotk] 0.05026 6 0.3333 2 1
## INFO [20:21:08.062] [bbotk] budget_scaled budget_real n_configs classif.ce
## INFO [20:21:08.062] [bbotk] 3.333 0.3333 3 0.06
## INFO [20:21:08.062] [bbotk] 3.333 0.3333 3 0.04
## INFO [20:21:08.062] [bbotk] 3.333 0.3333 3 0.06
## INFO [20:21:08.062] [bbotk] uhash
## INFO [20:21:08.062] [bbotk] 5de59ca1-9204-43e6-b2ad-ac8ca38dc849
## INFO [20:21:08.062] [bbotk] 78468fb0-6ffa-47f9-9ad1-28632f5dfcb5
## INFO [20:21:08.062] [bbotk] 6227c440-0f9b-4c41-91c8-6d046940d6b8
## INFO [20:21:08.063] [bbotk] Training 1 configs with budget of 1 for each
## INFO [20:21:08.065] [bbotk] Evaluating 1 configuration(s)
## INFO [20:21:08.297] [bbotk] Result of batch 3:
## INFO [20:21:08.299] [bbotk] classif.rpart.cp classif.rpart.minsplit subsample.frac bracket bracket_stage
## INFO [20:21:08.299] [bbotk] 0.0849 3 1 2 2
## INFO [20:21:08.299] [bbotk] budget_scaled budget_real n_configs classif.ce
## INFO [20:21:08.299] [bbotk] 10 1 1 0.04
## INFO [20:21:08.299] [bbotk] uhash
## INFO [20:21:08.299] [bbotk] f00ddcaf-354d-48c6-abae-f0e2e87ea0d8
## INFO [20:21:08.300] [bbotk] Start evaluation of bracket 2
## INFO [20:21:08.304] [bbotk] Training 5 configs with budget of 0.333333 for each
## INFO [20:21:08.305] [bbotk] Evaluating 5 configuration(s)
## INFO [20:21:09.312] [bbotk] Result of batch 4:
## INFO [20:21:09.315] [bbotk] classif.rpart.cp classif.rpart.minsplit subsample.frac bracket bracket_stage
## INFO [20:21:09.315] [bbotk] 0.08650 6 0.3333 1 0
## INFO [20:21:09.315] [bbotk] 0.07491 9 0.3333 1 0
## INFO [20:21:09.315] [bbotk] 0.06716 6 0.3333 1 0
## INFO [20:21:09.315] [bbotk] 0.06218 9 0.3333 1 0
## INFO [20:21:09.315] [bbotk] 0.03785 4 0.3333 1 0
## INFO [20:21:09.315] [bbotk] budget_scaled budget_real n_configs classif.ce
## INFO [20:21:09.315] [bbotk] 3.333 0.3333 5 0.02
## INFO [20:21:09.315] [bbotk] 3.333 0.3333 5 0.06
## INFO [20:21:09.315] [bbotk] 3.333 0.3333 5 0.04
## INFO [20:21:09.315] [bbotk] 3.333 0.3333 5 0.08
## INFO [20:21:09.315] [bbotk] 3.333 0.3333 5 0.06
## INFO [20:21:09.315] [bbotk] uhash
## INFO [20:21:09.315] [bbotk] 3639dfc7-5705-40ba-b157-041698830b25
## INFO [20:21:09.315] [bbotk] ab171d3f-569a-4a64-a083-e2e007fbcad2
## INFO [20:21:09.315] [bbotk] 86994edb-e266-44e3-9c6c-fbe297658cb3
## INFO [20:21:09.315] [bbotk] 2c5b5f48-f5ea-491e-92d4-7e17168b9038
## INFO [20:21:09.315] [bbotk] 7d1bd669-8838-46e1-bbcc-ae89898f200f
## INFO [20:21:09.316] [bbotk] Training 1 configs with budget of 1 for each
## INFO [20:21:09.318] [bbotk] Evaluating 1 configuration(s)
## INFO [20:21:09.548] [bbotk] Result of batch 5:
## INFO [20:21:09.550] [bbotk] classif.rpart.cp classif.rpart.minsplit subsample.frac bracket bracket_stage
## INFO [20:21:09.550] [bbotk] 0.0865 6 1 1 1
## INFO [20:21:09.550] [bbotk] budget_scaled budget_real n_configs classif.ce
## INFO [20:21:09.550] [bbotk] 10 1 1 0.04
## INFO [20:21:09.550] [bbotk] uhash
## INFO [20:21:09.550] [bbotk] dd4b4cdf-8e6c-4dee-9724-753a2b939c25
## INFO [20:21:09.551] [bbotk] Start evaluation of bracket 3
## INFO [20:21:09.554] [bbotk] Training 3 configs with budget of 1 for each
## INFO [20:21:09.556] [bbotk] Evaluating 3 configuration(s)
## INFO [20:21:10.180] [bbotk] Result of batch 6:
## INFO [20:21:10.182] [bbotk] classif.rpart.cp classif.rpart.minsplit subsample.frac bracket bracket_stage
## INFO [20:21:10.182] [bbotk] 0.02724 10 1 0 0
## INFO [20:21:10.182] [bbotk] 0.05689 3 1 0 0
## INFO [20:21:10.182] [bbotk] 0.09141 4 1 0 0
## INFO [20:21:10.182] [bbotk] budget_scaled budget_real n_configs classif.ce
## INFO [20:21:10.182] [bbotk] 10 1 3 0.04
## INFO [20:21:10.182] [bbotk] 10 1 3 0.04
## INFO [20:21:10.182] [bbotk] 10 1 3 0.04
## INFO [20:21:10.182] [bbotk] uhash
## INFO [20:21:10.182] [bbotk] 97c7a72a-ac6d-4428-b9d0-be04421ddd3a
## INFO [20:21:10.182] [bbotk] bb7c0368-b6b6-40fb-a570-49cf1cafcb62
## INFO [20:21:10.182] [bbotk] 498770d9-dbe8-4233-b0bc-7be1d7139dcd
## INFO [20:21:10.199] [bbotk] Finished optimizing after 22 evaluation(s)
## INFO [20:21:10.200] [bbotk] Result:
## INFO [20:21:10.202] [bbotk] classif.rpart.cp classif.rpart.minsplit subsample.frac learner_param_vals
## INFO [20:21:10.202] [bbotk] 0.07348 5 0.1111 <list[6]>
## INFO [20:21:10.202] [bbotk] x_domain classif.ce
## INFO [20:21:10.202] [bbotk] <list[3]> 0.02
## classif.rpart.cp classif.rpart.minsplit subsample.frac learner_param_vals
## 1: 0.07348 5 0.1111 <list[6]>
## x_domain classif.ce
## 1: <list[3]> 0.02
To see the results of each sampled configuration, we simply query the archive of the instance:
as.data.table(instance$archive)[, c(
  "subsample.frac",
  "classif.rpart.cp",
  "classif.rpart.minsplit",
  "classif.ce"
), with = FALSE]
## subsample.frac classif.rpart.cp classif.rpart.minsplit classif.ce
## 1: 0.1111 0.02533 3 0.04
## 2: 0.1111 0.07348 5 0.02
## 3: 0.1111 0.08490 3 0.02
## 4: 0.1111 0.05026 6 0.02
## 5: 0.1111 0.03940 4 0.02
## 6: 0.1111 0.02540 7 0.42
## 7: 0.1111 0.01200 4 0.14
## 8: 0.1111 0.03961 4 0.02
## 9: 0.1111 0.05762 6 0.02
## 10: 0.3333 0.07348 5 0.06
## 11: 0.3333 0.08490 3 0.04
## 12: 0.3333 0.05026 6 0.06
## 13: 1.0000 0.08490 3 0.04
## 14: 0.3333 0.08650 6 0.02
## 15: 0.3333 0.07491 9 0.06
## 16: 0.3333 0.06716 6 0.04
## 17: 0.3333 0.06218 9 0.08
## 18: 0.3333 0.03785 4 0.06
## 19: 1.0000 0.08650 6 0.04
## 20: 1.0000 0.02724 10 0.04
## 21: 1.0000 0.05689 3 0.04
## 22: 1.0000 0.09141 4 0.04
## subsample.frac classif.rpart.cp classif.rpart.minsplit classif.ce
You can access the best found configuration through the instance object.
instance$result
## classif.rpart.cp classif.rpart.minsplit subsample.frac learner_param_vals
## 1: 0.07348 5 0.1111 <list[6]>
## x_domain classif.ce
## 1: <list[3]> 0.02
instance$result_learner_param_vals
## $subsample.frac
## [1] 0.1111
##
## $subsample.stratify
## [1] FALSE
##
## $subsample.replace
## [1] FALSE
##
## $classif.rpart.xval
## [1] 0
##
## $classif.rpart.cp
## [1] 0.07348
##
## $classif.rpart.minsplit
## [1] 5
instance$result_y
## classif.ce
## 0.02
If you are familiar with the original paper (Li et al. 2016), you may have wondered how we just used Hyperband with a budget parameter ranging from 0.1 to 1.0. The answer is: with the help of internal rescaling of the budget parameter. mlr3hyperband automatically divides the budget parameter's boundaries by its lower bound, ending up with a budget range starting again at 1, as is the case in the original algorithm.
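We can retrace this rescaling by hand; the following lines only illustrate the arithmetic and are not part of the package API:

lower = 0.1 # lower and upper bound of subsample.frac
upper = 1
eta = 3
R = upper / lower             # scaled maximum budget: 10
floor(log(R, base = eta)) + 1 # number of brackets: 3, matching the log output
R / eta^2 * lower             # budget_real in the first stage of bracket 2: 0.1111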
If we want an overview of the bracket layout Hyperband created, and of how the rescaling worked in each bracket, we can print a compact table with this information.
unique(as.data.table(instance$archive)[, .(bracket, bracket_stage, budget_scaled, budget_real, n_configs)])
## bracket bracket_stage budget_scaled budget_real n_configs
## 1: 2 0 1.111 0.1111 9
## 2: 2 1 3.333 0.3333 3
## 3: 2 2 10.000 1.0000 1
## 4: 1 0 3.333 0.3333 5
## 5: 1 1 10.000 1.0000 1
## 6: 0 0 10.000 1.0000 3
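Based on this table we can also verify that each bracket consumes (approximately) the same scaled budget \((\lfloor \log_{\eta}{R} \rfloor + 1) * R = 3 * 10 = 30\); the small deviations stem from the rounding in the schedule:

9 * 1.111 + 3 * 3.333 + 1 * 10 # bracket 2: ~30
5 * 3.333 + 1 * 10             # bracket 1: ~26.7
3 * 10                         # bracket 0: 30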
Traditionally, Hyperband samples the configurations at the start of each bracket uniformly at random. But it is also possible to define a custom Sampler for each hyperparameter.
library(mlr3learners)
set.seed(123)
search_space = ps(
  nrounds = p_int(lower = 1, upper = 16, tags = "budget"),
  eta = p_dbl(lower = 0, upper = 1),
  booster = p_fct(levels = c("gbtree", "gblinear", "dart"))
)
instance = TuningInstanceSingleCrit$new(
  task = tsk("iris"),
  learner = lrn("classif.xgboost"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  terminator = trm("none"), # hyperband terminates itself
  search_space = search_space
)
# beta distribution with alpha = 2 and beta = 5
# categorical distribution with custom probabilities
sampler = SamplerJointIndep$new(list(
  Sampler1DRfun$new(search_space$params$eta, function(n) rbeta(n, 2, 5)),
  Sampler1DCateg$new(search_space$params$booster, prob = c(0.2, 0.3, 0.5))
))
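To sanity-check the custom sampler before tuning, we can draw a few configurations from it; sample() returns a paradox Design object whose data field holds the drawn values:

sampler$sample(3)$data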
Then, the defined sampler has to be given as an argument when constructing the tuner. Afterwards, tuning can proceed as usual.
tuner = tnr("hyperband", eta = 2, sampler = sampler)
tuner$optimize(instance)
## INFO [20:21:10.593] [bbotk] Starting to optimize 3 parameter(s) with '<TunerHyperband>' and '<TerminatorNone> [list()]'
## INFO [20:21:10.595] [bbotk] Amount of brackets to be evaluated = 5,
## INFO [20:21:10.596] [bbotk] Start evaluation of bracket 1
## INFO [20:21:10.599] [bbotk] Training 16 configs with budget of 1 for each
## INFO [20:21:10.601] [bbotk] Evaluating 16 configuration(s)
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:12] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:13.269] [bbotk] Result of batch 1:
## INFO [20:21:13.271] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:13.271] [bbotk] 0.16633 gblinear 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.53672 gblinear 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.23163 dart 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.09921 dart 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.32375 dart 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.25848 gblinear 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.28688 gblinear 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.36995 gbtree 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.21663 gblinear 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.43376 dart 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.24324 gblinear 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.35749 dart 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.38180 dart 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.22436 dart 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.57168 dart 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] 0.52773 gbtree 1 4 0 1 1
## INFO [20:21:13.271] [bbotk] n_configs classif.ce uhash
## INFO [20:21:13.271] [bbotk] 16 0.74 d4a198f0-f258-45c7-aa2f-f535fd0bc40b
## INFO [20:21:13.271] [bbotk] 16 0.42 0fea9d47-a89c-4367-897d-9b263190678b
## INFO [20:21:13.271] [bbotk] 16 0.04 82648609-3a6e-49c6-8360-40ae28b9d1fd
## INFO [20:21:13.271] [bbotk] 16 0.04 22ce4a85-79c6-4f8d-862a-abc865f9f6a3
## INFO [20:21:13.271] [bbotk] 16 0.04 f10ea212-98e7-402e-9283-fb2819dda42a
## INFO [20:21:13.271] [bbotk] 16 0.74 7fc7c559-f753-451a-8695-6e48609572a0
## INFO [20:21:13.271] [bbotk] 16 0.74 43fe4924-acae-40b9-b7fe-0a69cbcd6e18
## INFO [20:21:13.271] [bbotk] 16 0.04 aac74ac4-3032-49de-a6f9-c2882877e185
## INFO [20:21:13.271] [bbotk] 16 0.74 3f9abd4c-09c0-47bf-8901-775b57f27d40
## INFO [20:21:13.271] [bbotk] 16 0.04 0b85f7f0-6dd9-42c2-879a-e8038b80fd3b
## INFO [20:21:13.271] [bbotk] 16 0.74 c1080ed9-4f9d-452a-94f6-276a2483698b
## INFO [20:21:13.271] [bbotk] 16 0.04 a74fccb7-c6bd-44b4-a61b-a181eb49958a
## INFO [20:21:13.271] [bbotk] 16 0.04 c4182b79-4326-45bf-92e9-432a1992e545
## INFO [20:21:13.271] [bbotk] 16 0.04 356b1dfd-329d-4569-b7e1-f8c6669251a4
## INFO [20:21:13.271] [bbotk] 16 0.04 5726251a-7627-4eaf-ab42-52550746ba48
## INFO [20:21:13.271] [bbotk] 16 0.04 1bae4d53-554e-49da-9651-f64f9d9a33ab
## INFO [20:21:13.273] [bbotk] Training 8 configs with budget of 2 for each
## INFO [20:21:13.274] [bbotk] Evaluating 8 configuration(s)
## [20:21:13] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:13] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:13] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:13] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:13] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:13] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:13] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:13] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:14.158] [bbotk] Result of batch 2:
## INFO [20:21:14.161] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:14.161] [bbotk] 0.23163 dart 2 4 1 2 2
## INFO [20:21:14.161] [bbotk] 0.09921 dart 2 4 1 2 2
## INFO [20:21:14.161] [bbotk] 0.32375 dart 2 4 1 2 2
## INFO [20:21:14.161] [bbotk] 0.36995 gbtree 2 4 1 2 2
## INFO [20:21:14.161] [bbotk] 0.43376 dart 2 4 1 2 2
## INFO [20:21:14.161] [bbotk] 0.35749 dart 2 4 1 2 2
## INFO [20:21:14.161] [bbotk] 0.38180 dart 2 4 1 2 2
## INFO [20:21:14.161] [bbotk] 0.22436 dart 2 4 1 2 2
## INFO [20:21:14.161] [bbotk] n_configs classif.ce uhash
## INFO [20:21:14.161] [bbotk] 8 0.04 7cfaf104-18c9-467f-bfde-047832d7c39f
## INFO [20:21:14.161] [bbotk] 8 0.04 ce9ad275-5edc-4d3e-8af1-9351b16bd285
## INFO [20:21:14.161] [bbotk] 8 0.04 9bf773d9-8362-474e-836b-8e9b85a37391
## INFO [20:21:14.161] [bbotk] 8 0.04 669ee45f-027e-4dac-a37a-3dc93f7d225e
## INFO [20:21:14.161] [bbotk] 8 0.04 1e7dd3f3-bb30-4d34-b7c2-c7ffc25ba5b2
## INFO [20:21:14.161] [bbotk] 8 0.04 96f6252c-2ab6-4e64-aaa9-866b17261850
## INFO [20:21:14.161] [bbotk] 8 0.04 6d4bdb07-bb00-4951-81a3-caa731b39195
## INFO [20:21:14.161] [bbotk] 8 0.04 8264e9ad-3665-4801-914a-a702b24f61f9
## INFO [20:21:14.162] [bbotk] Training 4 configs with budget of 4 for each
## INFO [20:21:14.164] [bbotk] Evaluating 4 configuration(s)
## [20:21:14] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:14] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:14] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:14] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:14.626] [bbotk] Result of batch 3:
## INFO [20:21:14.628] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:14.628] [bbotk] 0.23163 dart 4 4 2 4 4
## INFO [20:21:14.628] [bbotk] 0.09921 dart 4 4 2 4 4
## INFO [20:21:14.628] [bbotk] 0.32375 dart 4 4 2 4 4
## INFO [20:21:14.628] [bbotk] 0.36995 gbtree 4 4 2 4 4
## INFO [20:21:14.628] [bbotk] n_configs classif.ce uhash
## INFO [20:21:14.628] [bbotk] 4 0.04 57590523-bf70-4f04-ac89-0a741f603e79
## INFO [20:21:14.628] [bbotk] 4 0.04 ca72d48c-30bc-4fc9-8fa9-58dfd8c1fb93
## INFO [20:21:14.628] [bbotk] 4 0.04 d90b7f51-2a0c-440a-8814-59fbc335c0f7
## INFO [20:21:14.628] [bbotk] 4 0.04 74e373e3-fab1-494f-af81-86296c77a0d4
## INFO [20:21:14.629] [bbotk] Training 2 configs with budget of 8 for each
## INFO [20:21:14.631] [bbotk] Evaluating 2 configuration(s)
## [20:21:14] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:14] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:15.108] [bbotk] Result of batch 4:
## INFO [20:21:15.110] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:15.110] [bbotk] 0.23163 dart 8 4 3 8 8
## INFO [20:21:15.110] [bbotk] 0.09921 dart 8 4 3 8 8
## INFO [20:21:15.110] [bbotk] n_configs classif.ce uhash
## INFO [20:21:15.110] [bbotk] 2 0.04 8e9d913d-af87-448e-a1f4-10797f5a8ca1
## INFO [20:21:15.110] [bbotk] 2 0.04 74796088-79be-4a93-ab50-2bcd101e1045
## INFO [20:21:15.111] [bbotk] Training 1 configs with budget of 16 for each
## INFO [20:21:15.113] [bbotk] Evaluating 1 configuration(s)
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:15.248] [bbotk] Result of batch 5:
## INFO [20:21:15.249] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:15.249] [bbotk] 0.2316 dart 16 4 4 16 16
## INFO [20:21:15.249] [bbotk] n_configs classif.ce uhash
## INFO [20:21:15.249] [bbotk] 1 0.04 27d36b5b-21a0-4197-bdeb-87ca6fb29fd1
## INFO [20:21:15.250] [bbotk] Start evaluation of bracket 2
## INFO [20:21:15.253] [bbotk] Training 10 configs with budget of 2 for each
## INFO [20:21:15.254] [bbotk] Evaluating 10 configuration(s)
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:15] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:16.259] [bbotk] Result of batch 6:
## INFO [20:21:16.262] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:16.262] [bbotk] 0.17165 gblinear 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] 0.33565 gbtree 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] 0.30172 gbtree 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] 0.12918 dart 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] 0.27153 dart 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] 0.38573 gblinear 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] 0.29412 gblinear 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] 0.20787 dart 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] 0.03459 gblinear 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] 0.56669 gblinear 2 3 0 2 2
## INFO [20:21:16.262] [bbotk] n_configs classif.ce uhash
## INFO [20:21:16.262] [bbotk] 10 0.72 d2343954-f051-456d-9a4e-0dae16e1b8e6
## INFO [20:21:16.262] [bbotk] 10 0.04 392d0557-2a2c-4744-9198-d30c5998645f
## INFO [20:21:16.262] [bbotk] 10 0.04 7a64f55d-0b0b-47ff-a02c-742fa34ee28b
## INFO [20:21:16.262] [bbotk] 10 0.04 6800d239-6a0f-4986-a504-fa830dcb1746
## INFO [20:21:16.262] [bbotk] 10 0.04 49762c9c-19ff-47b3-8a72-3d83690b4678
## INFO [20:21:16.262] [bbotk] 10 0.42 40ed2546-23cb-4592-aa02-530cb553f3d4
## INFO [20:21:16.262] [bbotk] 10 0.44 b8f473f0-cac3-4b02-a92c-dd98b40fabf8
## INFO [20:21:16.262] [bbotk] 10 0.04 9ac498c3-9e3f-4d0b-ad46-b390341b3c6b
## INFO [20:21:16.262] [bbotk] 10 0.74 949c523f-d278-4202-ac72-109f518df614
## INFO [20:21:16.262] [bbotk] 10 0.42 3e94d47f-ad06-4154-adeb-372bce320a1e
## INFO [20:21:16.263] [bbotk] Training 5 configs with budget of 4 for each
## INFO [20:21:16.265] [bbotk] Evaluating 5 configuration(s)
## [20:21:16] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:16] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:16] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:16] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:16] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:16.794] [bbotk] Result of batch 7:
## INFO [20:21:16.796] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:16.796] [bbotk] 0.3356 gbtree 4 3 1 4 4
## INFO [20:21:16.796] [bbotk] 0.3017 gbtree 4 3 1 4 4
## INFO [20:21:16.796] [bbotk] 0.1292 dart 4 3 1 4 4
## INFO [20:21:16.796] [bbotk] 0.2715 dart 4 3 1 4 4
## INFO [20:21:16.796] [bbotk] 0.2079 dart 4 3 1 4 4
## INFO [20:21:16.796] [bbotk] n_configs classif.ce uhash
## INFO [20:21:16.796] [bbotk] 5 0.04 79f99b33-ba9d-4e67-b6a7-472fb824e0c7
## INFO [20:21:16.796] [bbotk] 5 0.04 4538e6f3-a664-4b77-9782-727f2f043186
## INFO [20:21:16.796] [bbotk] 5 0.04 a479a685-d87a-41fa-b4d6-3a7aa97635ae
## INFO [20:21:16.796] [bbotk] 5 0.04 6b9db389-ad14-4e7b-86fa-5785dcbfa3bd
## INFO [20:21:16.796] [bbotk] 5 0.04 41ebae5a-6d72-4ecd-86ea-688e11107f27
## INFO [20:21:16.797] [bbotk] Training 2 configs with budget of 8 for each
## INFO [20:21:16.799] [bbotk] Evaluating 2 configuration(s)
## [20:21:16] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:16] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:17.040] [bbotk] Result of batch 8:
## INFO [20:21:17.042] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:17.042] [bbotk] 0.3356 gbtree 8 3 2 8 8
## INFO [20:21:17.042] [bbotk] 0.3017 gbtree 8 3 2 8 8
## INFO [20:21:17.042] [bbotk] n_configs classif.ce uhash
## INFO [20:21:17.042] [bbotk] 2 0.04 cc90a7bf-39b1-453c-af05-7f2029504447
## INFO [20:21:17.042] [bbotk] 2 0.04 1f2fc380-b1d6-4cdc-af05-0ed88f1db1c9
## INFO [20:21:17.043] [bbotk] Training 1 configs with budget of 16 for each
## INFO [20:21:17.045] [bbotk] Evaluating 1 configuration(s)
## [20:21:17] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:17.176] [bbotk] Result of batch 9:
## INFO [20:21:17.178] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:17.178] [bbotk] 0.3356 gbtree 16 3 3 16 16
## INFO [20:21:17.178] [bbotk] n_configs classif.ce uhash
## INFO [20:21:17.178] [bbotk] 1 0.04 b43146cc-4775-4242-92f9-914ad0cf162f
## INFO [20:21:17.179] [bbotk] Start evaluation of bracket 3
## INFO [20:21:17.182] [bbotk] Training 7 configs with budget of 4 for each
## INFO [20:21:17.183] [bbotk] Evaluating 7 configuration(s)
## [20:21:17] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:17] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:17] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:17] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:17] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:17] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:17] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:17.921] [bbotk] Result of batch 10:
## INFO [20:21:17.923] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:17.923] [bbotk] 0.41312 gblinear 4 2 0 4 4
## INFO [20:21:17.923] [bbotk] 0.21633 dart 4 2 0 4 4
## INFO [20:21:17.923] [bbotk] 0.52311 dart 4 2 0 4 4
## INFO [20:21:17.923] [bbotk] 0.21596 dart 4 2 0 4 4
## INFO [20:21:17.923] [bbotk] 0.54437 gbtree 4 2 0 4 4
## INFO [20:21:17.923] [bbotk] 0.11852 dart 4 2 0 4 4
## INFO [20:21:17.923] [bbotk] 0.09508 dart 4 2 0 4 4
## INFO [20:21:17.923] [bbotk] n_configs classif.ce uhash
## INFO [20:21:17.923] [bbotk] 7 0.42 d1df3c48-bd6a-4f10-a2f6-9f8bddd072be
## INFO [20:21:17.923] [bbotk] 7 0.04 de0984e4-f206-46be-869e-573ba28b4f5f
## INFO [20:21:17.923] [bbotk] 7 0.04 1c58a9f6-0000-4a22-b7f1-b505c91d79ab
## INFO [20:21:17.923] [bbotk] 7 0.04 c00b2bf3-6129-49a8-8f5e-f3a9e37cdeaa
## INFO [20:21:17.923] [bbotk] 7 0.04 624e71a0-99b1-4d93-b97e-f65628fca5eb
## INFO [20:21:17.923] [bbotk] 7 0.04 78d8be95-67f7-41e2-a48e-b2b96e20c51e
## INFO [20:21:17.923] [bbotk] 7 0.04 c6269964-2ba7-4f36-9562-145827e8842f
## INFO [20:21:17.924] [bbotk] Training 3 configs with budget of 8 for each
## INFO [20:21:17.926] [bbotk] Evaluating 3 configuration(s)
## [20:21:18] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:18] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:18] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:18.270] [bbotk] Result of batch 11:
## INFO [20:21:18.272] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:18.272] [bbotk] 0.2163 dart 8 2 1 8 8
## INFO [20:21:18.272] [bbotk] 0.5231 dart 8 2 1 8 8
## INFO [20:21:18.272] [bbotk] 0.2160 dart 8 2 1 8 8
## INFO [20:21:18.272] [bbotk] n_configs classif.ce uhash
## INFO [20:21:18.272] [bbotk] 3 0.04 0989662e-7935-485e-b80d-ee0f384d5da4
## INFO [20:21:18.272] [bbotk] 3 0.04 3b0761e3-5615-44b8-8308-ff49b0332946
## INFO [20:21:18.272] [bbotk] 3 0.04 f1b0b7b6-a19c-43f1-a3e2-b36977aa5d93
## INFO [20:21:18.274] [bbotk] Training 1 configs with budget of 16 for each
## INFO [20:21:18.276] [bbotk] Evaluating 1 configuration(s)
## [20:21:18] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:18.419] [bbotk] Result of batch 12:
## INFO [20:21:18.421] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:18.421] [bbotk] 0.2163 dart 16 2 2 16 16
## INFO [20:21:18.421] [bbotk] n_configs classif.ce uhash
## INFO [20:21:18.421] [bbotk] 1 0.04 ac536d37-8dae-4a7d-9c6e-24dcc130c9f1
## INFO [20:21:18.422] [bbotk] Start evaluation of bracket 4
## INFO [20:21:18.426] [bbotk] Training 5 configs with budget of 8 for each
## INFO [20:21:18.427] [bbotk] Evaluating 5 configuration(s)
## [20:21:18] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:18] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:18] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:18] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:18] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:18.990] [bbotk] Result of batch 13:
## INFO [20:21:18.992] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:18.992] [bbotk] 0.2462 gbtree 8 1 0 8 8
## INFO [20:21:18.992] [bbotk] 0.5226 gblinear 8 1 0 8 8
## INFO [20:21:18.992] [bbotk] 0.1413 gblinear 8 1 0 8 8
## INFO [20:21:18.992] [bbotk] 0.1950 dart 8 1 0 8 8
## INFO [20:21:18.992] [bbotk] 0.4708 gblinear 8 1 0 8 8
## INFO [20:21:18.992] [bbotk] n_configs classif.ce uhash
## INFO [20:21:18.992] [bbotk] 5 0.04 91c745a8-c26b-4ce7-80af-1a483dabe89c
## INFO [20:21:18.992] [bbotk] 5 0.42 80e0206d-b0b0-44d4-a1df-b5d661d29fc6
## INFO [20:21:18.992] [bbotk] 5 0.42 e9bbb93a-5395-4a1d-964c-2348ba6688fc
## INFO [20:21:18.992] [bbotk] 5 0.04 34affcac-4762-4f12-8774-408129f73479
## INFO [20:21:18.992] [bbotk] 5 0.42 7db83de6-d7f1-4bb1-8186-f4f9b3f18459
## INFO [20:21:18.994] [bbotk] Training 2 configs with budget of 16 for each
## INFO [20:21:18.995] [bbotk] Evaluating 2 configuration(s)
## [20:21:19] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:19] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:19.248] [bbotk] Result of batch 14:
## INFO [20:21:19.250] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:19.250] [bbotk] 0.2462 gbtree 16 1 1 16 16
## INFO [20:21:19.250] [bbotk] 0.1950 dart 16 1 1 16 16
## INFO [20:21:19.250] [bbotk] n_configs classif.ce uhash
## INFO [20:21:19.250] [bbotk] 2 0.04 2bdbf7dd-fce4-476b-86cc-8dd137ab57b4
## INFO [20:21:19.250] [bbotk] 2 0.04 679e733f-dea7-4576-b741-ced1f0a6a3a7
## INFO [20:21:19.251] [bbotk] Start evaluation of bracket 5
## INFO [20:21:19.255] [bbotk] Training 5 configs with budget of 16 for each
## INFO [20:21:19.256] [bbotk] Evaluating 5 configuration(s)
## [20:21:19] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:19] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:19] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:19] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## [20:21:19] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:19.826] [bbotk] Result of batch 15:
## INFO [20:21:19.828] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:19.828] [bbotk] 0.08993 dart 16 0 0 16 16
## INFO [20:21:19.828] [bbotk] 0.42262 dart 16 0 0 16 16
## INFO [20:21:19.828] [bbotk] 0.09600 gbtree 16 0 0 16 16
## INFO [20:21:19.828] [bbotk] 0.17779 dart 16 0 0 16 16
## INFO [20:21:19.828] [bbotk] 0.61866 dart 16 0 0 16 16
## INFO [20:21:19.828] [bbotk] n_configs classif.ce uhash
## INFO [20:21:19.828] [bbotk] 5 0.04 9ed91caf-625c-4b25-b68c-710d9d11ed28
## INFO [20:21:19.828] [bbotk] 5 0.04 98d2e844-d7d9-483b-ba7d-7d5f4ab4d2c9
## INFO [20:21:19.828] [bbotk] 5 0.04 b41a6e03-1ecb-4455-bbf5-a800ad247447
## INFO [20:21:19.828] [bbotk] 5 0.04 aa8f30fd-805c-4707-a0d0-5f24cded316d
## INFO [20:21:19.828] [bbotk] 5 0.04 bbf1a067-afba-4dc9-849f-88c5ff82708c
## INFO [20:21:19.833] [bbotk] Finished optimizing after 72 evaluation(s)
## INFO [20:21:19.834] [bbotk] Result:
## INFO [20:21:19.835] [bbotk] nrounds eta booster learner_param_vals x_domain classif.ce
## INFO [20:21:19.835] [bbotk] 1 0.2316 dart <list[4]> <list[3]> 0.04
## nrounds eta booster learner_param_vals x_domain classif.ce
## 1: 1 0.2316 dart <list[4]> <list[3]> 0.04
instance$result
## nrounds eta booster learner_param_vals x_domain classif.ce
## 1: 1 0.2316 dart <list[4]> <list[3]> 0.04
Furthermore, we extended the original algorithm so that mlr3hyperband can also be used for multi-objective optimization. To do this, simply specify multiple measures in a TuningInstanceMultiCrit and run the rest as usual.
instance = TuningInstanceMultiCrit$new(
  task = tsk("pima"),
  learner = lrn("classif.xgboost"),
  resampling = rsmp("holdout"),
  measures = msrs(c("classif.tpr", "classif.fpr")),
  terminator = trm("none"), # hyperband terminates itself
  search_space = search_space
)
tuner = tnr("hyperband", eta = 4)
tuner$optimize(instance)
## INFO [20:21:20.183] [bbotk] Starting to optimize 3 parameter(s) with '<TunerHyperband>' and '<TerminatorNone> [list()]'
## INFO [20:21:20.196] [bbotk] Amount of brackets to be evaluated = 3,
## INFO [20:21:20.197] [bbotk] Start evaluation of bracket 1
## INFO [20:21:20.200] [bbotk] Training 16 configs with budget of 1 for each
## INFO [20:21:20.201] [bbotk] Evaluating 16 configuration(s)
## [20:21:20] WARNING: amalgamation/../src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
## INFO [20:21:22.152] [bbotk] Result of batch 1:
## INFO [20:21:22.154] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:22.154] [bbotk] 0.20737 gblinear 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.45924 gbtree 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.24150 gblinear 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.11869 gbtree 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.07247 gbtree 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.69099 dart 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.28696 dart 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.14941 dart 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.97243 gbtree 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.41051 gblinear 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.40181 dart 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.64856 dart 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.91631 gblinear 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.21666 gbtree 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.54800 gblinear 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] 0.72005 gblinear 1 2 0 1 1
## INFO [20:21:22.154] [bbotk] n_configs classif.tpr classif.fpr uhash
## INFO [20:21:22.154] [bbotk] 16 0.0000 0.0000 e2d26d15-e8db-42f9-a372-6a16e29db56d
## INFO [20:21:22.154] [bbotk] 16 0.7531 0.2571 09f4c101-9f7b-4d14-b5b6-09666b64549f
## INFO [20:21:22.154] [bbotk] 16 0.0000 0.0000 f4752042-a22b-43b7-a6c7-c4d797f9246f
## INFO [20:21:22.154] [bbotk] 16 0.7407 0.2457 f7a158f8-d121-479f-8647-f877e32478b2
## INFO [20:21:22.154] [bbotk] 16 0.7407 0.2571 bdf58e57-6b6c-49b0-af86-8866fa86c6b6
## INFO [20:21:22.154] [bbotk] 16 0.7407 0.2629 b8c54c04-9575-4abe-8656-d95b951f368a
## INFO [20:21:22.154] [bbotk] 16 0.7531 0.2629 c946ee40-395d-4ebb-b953-42300c2cc833
## INFO [20:21:22.154] [bbotk] 16 0.7407 0.2514 4868773d-6d3e-4759-bd30-0359446b4a37
## INFO [20:21:22.154] [bbotk] 16 0.7531 0.2571 8204c117-04a2-47fd-8c19-89a7912d2fb1
## INFO [20:21:22.154] [bbotk] 16 0.0000 0.0000 b9b3eeb9-25ef-4af3-b125-5f2b9c5aae70
## INFO [20:21:22.154] [bbotk] 16 0.7407 0.2457 ff7d1849-89bf-484f-8163-38931b8141f1
## INFO [20:21:22.154] [bbotk] 16 0.7531 0.2686 0fb2029a-4582-4fdf-8ccb-e8beca3742d9
## INFO [20:21:22.154] [bbotk] 16 0.0000 0.0000 e4ebf029-0e2d-401c-bc6b-78a8a761561c
## INFO [20:21:22.154] [bbotk] 16 0.7407 0.2571 0de9dd28-f14c-499e-b8cb-a08004345265
## INFO [20:21:22.154] [bbotk] 16 0.0000 0.0000 22b6c63b-3b2a-41d6-96d1-8784b002280f
## INFO [20:21:22.154] [bbotk] 16 0.0000 0.0000 acfdafa2-2f41-4914-8675-b12db6b34861
## INFO [20:21:22.156] [bbotk] Training 4 configs with budget of 4 for each
## INFO [20:21:22.159] [bbotk] Evaluating 4 configuration(s)
## INFO [20:21:22.682] [bbotk] Result of batch 2:
## INFO [20:21:22.684] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:22.684] [bbotk] 0.4592 gbtree 4 2 1 4 4
## INFO [20:21:22.684] [bbotk] 0.1187 gbtree 4 2 1 4 4
## INFO [20:21:22.684] [bbotk] 0.5480 gblinear 4 2 1 4 4
## INFO [20:21:22.684] [bbotk] 0.7201 gblinear 4 2 1 4 4
## INFO [20:21:22.684] [bbotk] n_configs classif.tpr classif.fpr uhash
## INFO [20:21:22.684] [bbotk] 4 0.66667 0.18286 bc8c0dbc-53e3-453b-a9f1-bdd7375b6ab1
## INFO [20:21:22.684] [bbotk] 4 0.72840 0.22286 0d7dc104-d994-42f4-bd06-92bb2be317cf
## INFO [20:21:22.684] [bbotk] 4 0.06173 0.02857 f3c6d5bc-0a03-4298-87ac-3885a8c0e641
## INFO [20:21:22.684] [bbotk] 4 0.11111 0.05143 c3bf9ed2-e685-4ef7-811b-c9e474e9e17c
## INFO [20:21:22.685] [bbotk] Training 1 configs with budget of 16 for each
## INFO [20:21:22.687] [bbotk] Evaluating 1 configuration(s)
## INFO [20:21:22.853] [bbotk] Result of batch 3:
## INFO [20:21:22.855] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:22.855] [bbotk] 0.1187 gbtree 16 2 2 16 16
## INFO [20:21:22.855] [bbotk] n_configs classif.tpr classif.fpr uhash
## INFO [20:21:22.855] [bbotk] 1 0.5926 0.1543 fe9dc6d8-8db9-45cd-8b1e-00a3cbe69b69
## INFO [20:21:22.856] [bbotk] Start evaluation of bracket 2
## INFO [20:21:22.860] [bbotk] Training 6 configs with budget of 4 for each
## INFO [20:21:22.862] [bbotk] Evaluating 6 configuration(s)
## INFO [20:21:23.663] [bbotk] Result of batch 4:
## INFO [20:21:23.666] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:23.666] [bbotk] 0.98871 dart 4 1 0 4 4
## INFO [20:21:23.666] [bbotk] 0.06475 gbtree 4 1 0 4 4
## INFO [20:21:23.666] [bbotk] 0.15766 gblinear 4 1 0 4 4
## INFO [20:21:23.666] [bbotk] 0.78535 gbtree 4 1 0 4 4
## INFO [20:21:23.666] [bbotk] 0.54219 dart 4 1 0 4 4
## INFO [20:21:23.666] [bbotk] 0.41655 gblinear 4 1 0 4 4
## INFO [20:21:23.666] [bbotk] n_configs classif.tpr classif.fpr uhash
## INFO [20:21:23.666] [bbotk] 6 0.61728 0.17714 cba5d62b-bbba-4037-8d3d-b466292b7063
## INFO [20:21:23.666] [bbotk] 6 0.64198 0.18286 b514583b-4f97-46d2-b911-074d8a25abd3
## INFO [20:21:23.666] [bbotk] 6 0.00000 0.00000 131dbeb2-4572-458a-b4c2-dc02b77d7e29
## INFO [20:21:23.666] [bbotk] 6 0.66667 0.18857 355e76ec-7ad1-4942-82b1-c46c543703a2
## INFO [20:21:23.666] [bbotk] 6 0.60494 0.15429 41c09ef9-a28a-4bbb-ab1c-21f7b8414e48
## INFO [20:21:23.666] [bbotk] 6 0.03704 0.01143 29cbf257-6eed-4730-aced-ded22a1961e0
## INFO [20:21:23.667] [bbotk] Training 1 configs with budget of 16 for each
## INFO [20:21:23.669] [bbotk] Evaluating 1 configuration(s)
## INFO [20:21:23.855] [bbotk] Result of batch 5:
## INFO [20:21:23.857] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:23.857] [bbotk] 0.7853 gbtree 16 1 1 16 16
## INFO [20:21:23.857] [bbotk] n_configs classif.tpr classif.fpr uhash
## INFO [20:21:23.857] [bbotk] 1 0.6543 0.2114 0be7cca1-80e5-40df-a153-e77505486188
## INFO [20:21:23.858] [bbotk] Start evaluation of bracket 3
## INFO [20:21:23.862] [bbotk] Training 3 configs with budget of 16 for each
## INFO [20:21:23.863] [bbotk] Evaluating 3 configuration(s)
## INFO [20:21:24.308] [bbotk] Result of batch 6:
## INFO [20:21:24.310] [bbotk] eta booster nrounds bracket bracket_stage budget_scaled budget_real
## INFO [20:21:24.310] [bbotk] 0.5221 dart 16 0 0 16 16
## INFO [20:21:24.310] [bbotk] 0.1117 gbtree 16 0 0 16 16
## INFO [20:21:24.310] [bbotk] 0.8860 gblinear 16 0 0 16 16
## INFO [20:21:24.310] [bbotk] n_configs classif.tpr classif.fpr uhash
## INFO [20:21:24.310] [bbotk] 3 0.6420 0.2171 eebedc1d-f734-45f5-b861-1da7babc493c
## INFO [20:21:24.310] [bbotk] 3 0.6543 0.1714 a264670f-557a-4d47-877f-ae1fe80868fa
## INFO [20:21:24.310] [bbotk] 3 0.4815 0.1829 d0e32cbe-8f3b-4569-8f42-4430b8ee5657
## INFO [20:21:24.317] [bbotk] Finished optimizing after 31 evaluation(s)
## INFO [20:21:24.318] [bbotk] Result:
## INFO [20:21:24.320] [bbotk] nrounds eta booster learner_param_vals x_domain classif.tpr classif.fpr
## INFO [20:21:24.320] [bbotk] 1 0.2074 gblinear <list[4]> <list[3]> 0.00000 0.00000
## INFO [20:21:24.320] [bbotk] 1 0.4592 gbtree <list[4]> <list[3]> 0.75309 0.25714
## INFO [20:21:24.320] [bbotk] 1 0.2415 gblinear <list[4]> <list[3]> 0.00000 0.00000
## INFO [20:21:24.320] [bbotk] 1 0.1187 gbtree <list[4]> <list[3]> 0.74074 0.24571
## INFO [20:21:24.320] [bbotk] 1 0.9724 gbtree <list[4]> <list[3]> 0.75309 0.25714
## INFO [20:21:24.320] [bbotk] 1 0.4105 gblinear <list[4]> <list[3]> 0.00000 0.00000
## INFO [20:21:24.320] [bbotk] 1 0.4018 dart <list[4]> <list[3]> 0.74074 0.24571
## INFO [20:21:24.320] [bbotk] 1 0.9163 gblinear <list[4]> <list[3]> 0.00000 0.00000
## INFO [20:21:24.320] [bbotk] 1 0.5480 gblinear <list[4]> <list[3]> 0.00000 0.00000
## INFO [20:21:24.320] [bbotk] 1 0.7201 gblinear <list[4]> <list[3]> 0.00000 0.00000
## INFO [20:21:24.320] [bbotk] 4 0.4592 gbtree <list[4]> <list[3]> 0.66667 0.18286
## INFO [20:21:24.320] [bbotk] 4 0.1187 gbtree <list[4]> <list[3]> 0.72840 0.22286
## INFO [20:21:24.320] [bbotk] 4 0.5480 gblinear <list[4]> <list[3]> 0.06173 0.02857
## INFO [20:21:24.320] [bbotk] 4 0.7201 gblinear <list[4]> <list[3]> 0.11111 0.05143
## INFO [20:21:24.320] [bbotk] 4 0.1577 gblinear <list[4]> <list[3]> 0.00000 0.00000
## INFO [20:21:24.320] [bbotk] 4 0.5422 dart <list[4]> <list[3]> 0.60494 0.15429
## INFO [20:21:24.320] [bbotk] 4 0.4165 gblinear <list[4]> <list[3]> 0.03704 0.01143
## INFO [20:21:24.320] [bbotk] 16 0.1117 gbtree <list[4]> <list[3]> 0.65432 0.17143
## nrounds eta booster learner_param_vals x_domain classif.tpr
## 1: 1 0.2074 gblinear <list[4]> <list[3]> 0.00000
## 2: 1 0.4592 gbtree <list[4]> <list[3]> 0.75309
## 3: 1 0.2415 gblinear <list[4]> <list[3]> 0.00000
## 4: 1 0.1187 gbtree <list[4]> <list[3]> 0.74074
## 5: 1 0.9724 gbtree <list[4]> <list[3]> 0.75309
## 6: 1 0.4105 gblinear <list[4]> <list[3]> 0.00000
## 7: 1 0.4018 dart <list[4]> <list[3]> 0.74074
## 8: 1 0.9163 gblinear <list[4]> <list[3]> 0.00000
## 9: 1 0.5480 gblinear <list[4]> <list[3]> 0.00000
## 10: 1 0.7201 gblinear <list[4]> <list[3]> 0.00000
## 11: 4 0.4592 gbtree <list[4]> <list[3]> 0.66667
## 12: 4 0.1187 gbtree <list[4]> <list[3]> 0.72840
## 13: 4 0.5480 gblinear <list[4]> <list[3]> 0.06173
## 14: 4 0.7201 gblinear <list[4]> <list[3]> 0.11111
## 15: 4 0.1577 gblinear <list[4]> <list[3]> 0.00000
## 16: 4 0.5422 dart <list[4]> <list[3]> 0.60494
## 17: 4 0.4165 gblinear <list[4]> <list[3]> 0.03704
## 18: 16 0.1117 gbtree <list[4]> <list[3]> 0.65432
## classif.fpr
## 1: 0.00000
## 2: 0.25714
## 3: 0.00000
## 4: 0.24571
## 5: 0.25714
## 6: 0.00000
## 7: 0.24571
## 8: 0.00000
## 9: 0.00000
## 10: 0.00000
## 11: 0.18286
## 12: 0.22286
## 13: 0.02857
## 14: 0.05143
## 15: 0.00000
## 16: 0.15429
## 17: 0.01143
## 18: 0.17143
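The stage sizes appearing in this log follow directly from the bracket layout of Li et al. (2016). Below is a minimal sketch that recomputes the layout, assuming \(R = 16\) (the ratio of upper to lower bound of the budget parameter nrounds in the search space) and \(\eta = 4\) as set above; hyperband_schedule is a made-up helper name:

# recompute the Hyperband bracket layout for given R and eta (sketch)
hyperband_schedule = function(R, eta) {
  s_max = floor(log(R, base = eta) + 1e-9) # epsilon guards against floating point
  for (s in s_max:0) {
    n = ceiling((s_max + 1) / (s + 1) * eta^s) # initial configurations
    r = R / eta^s                              # initial budget per configuration
    for (i in 0:s) {
      cat(sprintf("bracket %d, stage %d: %d configs with budget %g\n",
        s_max - s + 1, i, floor(n / eta^i), r * eta^i))
    }
  }
}
hyperband_schedule(R = 16, eta = 4)

Running it reproduces the schedule seen in the log above: brackets 1, 2 and 3 start with 16, 6 and 3 configurations at budgets 1, 4 and 16, respectively.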
Now the result is not a single best configuration but an estimated Pareto front. In the plot below, all red points are configurations that are not dominated by any other configuration with respect to the tpr and fpr performance measures.
instance$result
## nrounds eta booster learner_param_vals x_domain classif.tpr
## 1: 1 0.2074 gblinear <list[4]> <list[3]> 0.00000
## 2: 1 0.4592 gbtree <list[4]> <list[3]> 0.75309
## 3: 1 0.2415 gblinear <list[4]> <list[3]> 0.00000
## 4: 1 0.1187 gbtree <list[4]> <list[3]> 0.74074
## 5: 1 0.9724 gbtree <list[4]> <list[3]> 0.75309
## 6: 1 0.4105 gblinear <list[4]> <list[3]> 0.00000
## 7: 1 0.4018 dart <list[4]> <list[3]> 0.74074
## 8: 1 0.9163 gblinear <list[4]> <list[3]> 0.00000
## 9: 1 0.5480 gblinear <list[4]> <list[3]> 0.00000
## 10: 1 0.7201 gblinear <list[4]> <list[3]> 0.00000
## 11: 4 0.4592 gbtree <list[4]> <list[3]> 0.66667
## 12: 4 0.1187 gbtree <list[4]> <list[3]> 0.72840
## 13: 4 0.5480 gblinear <list[4]> <list[3]> 0.06173
## 14: 4 0.7201 gblinear <list[4]> <list[3]> 0.11111
## 15: 4 0.1577 gblinear <list[4]> <list[3]> 0.00000
## 16: 4 0.5422 dart <list[4]> <list[3]> 0.60494
## 17: 4 0.4165 gblinear <list[4]> <list[3]> 0.03704
## 18: 16 0.1117 gbtree <list[4]> <list[3]> 0.65432
## classif.fpr
## 1: 0.00000
## 2: 0.25714
## 3: 0.00000
## 4: 0.24571
## 5: 0.25714
## 6: 0.00000
## 7: 0.24571
## 8: 0.00000
## 9: 0.00000
## 10: 0.00000
## 11: 0.18286
## 12: 0.22286
## 13: 0.02857
## 14: 0.05143
## 15: 0.00000
## 16: 0.15429
## 17: 0.01143
## 18: 0.17143
# all evaluated configurations; Pareto-optimal ones highlighted in red
plot(classif.tpr ~ classif.fpr, instance$archive$data)
points(classif.tpr ~ classif.fpr, instance$result, col = "red")
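To make the dominance relation concrete, the Pareto front can also be recomputed from the archive by hand. A minimal sketch; nondominated is a made-up helper, tpr is to be maximized and fpr to be minimized:

# a point is dominated if some other point is at least as good in both
# measures and strictly better in at least one of them
nondominated = function(tpr, fpr) {
  vapply(seq_along(tpr), function(i) {
    !any(tpr >= tpr[i] & fpr <= fpr[i] & (tpr > tpr[i] | fpr < fpr[i]))
  }, logical(1))
}

archive = instance$archive$data
keep = with(archive, nondominated(classif.tpr, classif.fpr))
archive[keep, .(classif.tpr, classif.fpr)]

Up to duplicated performance values, the rows selected this way correspond to the points highlighted in red above.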