9 Model Interpretation

In principle, all generic frameworks for model interpretation are applicable to models fitted with mlr3 by simply extracting the fitted model from the Learner object.

However, two of the most popular frameworks, iml and DALEX, additionally come with some convenience functionality for mlr3.

9.1 IML

Author: Shawn Storm

iml is an R package that interprets the behavior and explains predictions of machine learning models. The functions provided in the iml package are model-agnostic, which gives you the flexibility to use any machine learning model.

This chapter provides examples of how to use iml with mlr3. For more information, refer to the IML GitHub repository and the IML book.

9.1.1 Penguin Task

To understand what iml can offer, we start off with a thorough example. The goal of this example is to figure out the species of penguins given a set of features. The palmerpenguins::penguins data set, which is an alternative to the iris data set, will be used. The penguins data set contains 8 variables of 344 penguins:

data("penguins", package = "palmerpenguins")
str(penguins)
## tibble [344 × 8] (S3: tbl_df/tbl/data.frame)
##  $ species          : Factor w/ 3 levels "Adelie","Chinstrap",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ island           : Factor w/ 3 levels "Biscoe","Dream",..: 3 3 3 3 3 3 3 3 3 3 ...
##  $ bill_length_mm   : num [1:344] 39.1 39.5 40.3 NA 36.7 39.3 38.9 39.2 34.1 42 ...
##  $ bill_depth_mm    : num [1:344] 18.7 17.4 18 NA 19.3 20.6 17.8 19.6 18.1 20.2 ...
##  $ flipper_length_mm: int [1:344] 181 186 195 NA 193 190 181 195 193 190 ...
##  $ body_mass_g      : int [1:344] 3750 3800 3250 NA 3450 3650 3625 4675 3475 4250 ...
##  $ sex              : Factor w/ 2 levels "female","male": 2 1 1 NA 1 2 1 2 NA NA ...
##  $ year             : int [1:344] 2007 2007 2007 2007 2007 2007 2007 2007 2007 2007 ...

To get started, run:

library("mlr3")
library("mlr3learners")

penguins = na.omit(penguins)
task_peng = as_task_classif(penguins, target = "species")

penguins = na.omit(penguins) omits the 11 cases with missing values. If they are not omitted, the learner will throw an error for the data points that have NA for some features.
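If you want to verify this count yourself, a quick sanity check with base R's complete.cases() works; note it has to be run before the na.omit() call above, since afterwards no incomplete rows remain:

# count the rows with at least one missing value -- run before na.omit()
sum(!complete.cases(penguins))  # 11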

learner = lrn("classif.ranger")
learner$predict_type = "prob"
learner$train(task_peng)
learner$model
## Ranger result
## 
## Call:
##  ranger::ranger(dependent.variable.name = task$target_names, data = task$data(),      probability = self$predict_type == "prob", case.weights = task$weights$weight,      num.threads = 1L) 
## 
## Type:                             Probability estimation 
## Number of trees:                  500 
## Sample size:                      333 
## Number of independent variables:  7 
## Mtry:                             2 
## Target node size:                 10 
## Variable importance mode:         none 
## Splitrule:                        gini 
## OOB prediction error (Brier s.):  0.0179
library("iml")
x = penguins[which(names(penguins) != "species")]
model = Predictor$new(learner, data = x, y = penguins$species)

As explained in Section 2.3, specific learners can be queried with mlr_learners. In Section 2.5 it is recommended for some classifiers to set the predict_type to "prob" instead of directly predicting a label, which is what is done in this example. penguins[which(names(penguins) != "species")] extracts the data for all of the features, and y is set to penguins$species. learner$train(task_peng) trains the model and learner$model stores the model from the training command. Predictor holds the machine learning model and the data. All interpretation methods in iml need the machine learning model and the data to be wrapped in the Predictor object.
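A quick way to confirm that the wrapper behaves as expected is to query predictions through it; a minimal sketch using the Predictor's predict() method, which here returns a data frame of class probabilities:

# predicted class probabilities for the first three penguins, via the wrapper
model$predict(x[1:3, ])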

Next is the core functionality of iml. In this example, three separate interpretation methods will be used: FeatureEffects, FeatureImp, and Shapley.

9.1.2 FeatureEffects

In addition to the commands above, the following need to be run:

num_features = c("bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g", "year")
effect = FeatureEffects$new(model)
plot(effect, features = num_features)

Figure 9.1: Plot of the results from FeatureEffects. FeatureEffects computes and plots feature effects of prediction models

effect stores the object from the FeatureEffects computation, and the results can then be plotted. In this example, all of the features provided by the penguins data set were used.

All features except for year provide meaningful, interpretable information. It should be clear why year doesn't provide anything of significance. For example, bill_length_mm shows that when the bill length is smaller than roughly 40 mm, there is a high chance that the penguin is an Adelie.
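If a single feature such as bill_length_mm deserves a closer look, iml also offers the FeatureEffect class (singular), which computes the effect of one feature at a time; a minimal sketch:

# partial dependence plus individual conditional expectation curves for one feature
effect_bill = FeatureEffect$new(model, feature = "bill_length_mm", method = "pdp+ice")
plot(effect_bill)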

9.1.3 Shapley

model = Predictor$new(learner, data = penguins, y = "species")
x.interest = data.frame(penguins[1, ])
shapley = Shapley$new(model, x.interest = x.interest)
plot(shapley)

Figure 9.2: Plot of the results from Shapley. \(\phi\) gives the increase or decrease in probability given the values on the vertical axis

The \(\phi\) values provide insight into the increase or decrease in the predicted probability given the feature values on the vertical axis. For example, given that bill_depth_mm = 18.7, the penguin is less likely to be a Gentoo and much more likely to be an Adelie than a Chinstrap.
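The numbers behind the plot can be inspected as well: the Shapley object exposes them via its $results field, a data frame with one \(\phi\) value per feature and class:

# tabular view of the Shapley values underlying the plot
head(shapley$results)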

9.1.4 FeatureImp

effect = FeatureImp$new(model, loss = "ce")
effect$plot(features = num_features)

Figure 9.3: Plot of the results from FeatureImp. FeatureImp visualizes the importance of features given the prediction model

FeatureImp shows the level of importance of the features when classifying the penguins. It is clear that bill_length_mm is of high importance, and one should concentrate on different boundaries of this feature when attempting to classify the three species.
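As with Shapley, the exact importance scores can be read off the object's $results field rather than the plot:

# permutation feature importances as a data frame
effect$results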

9.1.5 Independent Test Data

It is also interesting to see how well the model performs on a test data set. For this section, as recommended in Section 2.4, 80% of the penguin data set will be used for the training set and 20% for the test set:

train_set = sample(task_peng$nrow, 0.8 * task_peng$nrow)
test_set = setdiff(seq_len(task_peng$nrow), train_set)
learner$train(task_peng, row_ids = train_set)
prediction = learner$predict(task_peng, row_ids = test_set)

First, we compare the feature importance on the training and test sets:

# plot on training
model = Predictor$new(learner, data = penguins[train_set, ], y = "species")
effect = FeatureImp$new(model, loss = "ce")
plot_train = plot(effect, features = num_features)

# plot on test data
model = Predictor$new(learner, data = penguins[test_set, ], y = "species")
effect = FeatureImp$new(model, loss = "ce")
plot_test = plot(effect, features = num_features)

# combine into single plot
library("patchwork")
plot_train + plot_test

Figure 9.4: FeatImp on train (left) and test (right)

The feature importance results on the training set are very similar to those on the test set, which is expected. We follow a similar approach to compare the feature effects:

model = Predictor$new(learner, data = penguins[train_set, ], y = "species")
effect = FeatureEffects$new(model)
plot(effect, features = num_features)

Figure 9.5: FeatEffect train data set

model = Predictor$new(learner, data = penguins[test_set, ], y = "species")
effect = FeatureEffects$new(model)
plot(effect, features = num_features)

Figure 9.6: FeatEffect test data set

As is the case with FeatureImp, the test data results show either an over- or underestimate of feature importance / feature effects compared to the results where the entire penguin data set was used. This would be a good opportunity for the reader to attempt to resolve the estimation by varying the number of features and the amount of data used for both the test and train data sets of FeatureImp and FeatureEffects. Be sure not to rerun the line train_set = sample(task_peng$nrow, 0.8 * task_peng$nrow), as doing so will randomly sample the data again.
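One way to keep the split stable across reruns is to fix the random seed before sampling; a minimal sketch (the seed value 42 is arbitrary):

set.seed(42)  # arbitrary seed: makes the train/test split reproducible
train_set = sample(task_peng$nrow, 0.8 * task_peng$nrow)
test_set = setdiff(seq_len(task_peng$nrow), train_set)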

9.2 DALEX

Authors: Przemysław Biecek, Szymon Maksymiuk

9.2.1 Introduction

The DALEX package X-rays any predictive model and helps to explore, explain and visualize its behaviour. The package implements a collection of methods for Explanatory Model Analysis. It is based on a unified grammar summarised in Figure 9.7.

In the following sections, we will present the methods available in the DALEX package based on a random forest model trained to predict football players' worth on the FIFA 20 data. We will show methods that analyze the model both at the level of a single prediction and at the global level, i.e. for the whole data set.

The structure of this chapter is the following:

  • In Section 9.2.2 we introduce the FIFA 20 dataset, and then in Section 9.2.3 we train a random regression forest using the ranger package.
  • Section 9.2.4 introduces general logic beyond DALEX explainers.
  • Section 9.2.5 introduces methods for dataset level model exploration.
  • Section 9.2.6 introduces methods for instance-level model exploration.

Figure 9.7: Taxonomy of methods for model exploration presented in this chapter. The left part gives an overview of methods for instance-level exploration, while the right part relates to dataset-level model exploration.

9.2.2 Read data: FIFA

Examples presented in this chapter are based on data retrieved from the FIFA video game. We will use data scraped from the sofifa website. The raw data is available at kaggle. After some basic data cleaning, the processed data for the top 5000 football players is available in the DALEX package under the name fifa.

library("DALEX")
fifa[1:2, c("value_eur", "age", "height_cm", "nationality", "attacking_crossing")]
##                   value_eur age height_cm nationality attacking_crossing
## L. Messi           95500000  32       170   Argentina                 88
## Cristiano Ronaldo  58500000  34       187    Portugal                 84

For every player, we have 42 features available.

dim(fifa)
## [1] 5000   42

In the table below we overview these 42 features for three selected players. One of the features, called value_eur, is the worth of the footballer in euros. In the next section, we will build a prediction model, which will estimate the worth of the player based on other player characteristics.

feature                       Lionel Messi  Cristiano Ronaldo  Neymar Junior
wage_eur                            565000             405000         290000
age                                     32                 34             27
height_cm                              170                187            175
weight_kg                               72                 83             68
nationality                      Argentina           Portugal         Brazil
overall                                 94                 93             92
potential                               94                 93             92
value_eur                       95 500 000         58 500 000    105 500 000
attacking_crossing                      88                 84             87
attacking_finishing                     95                 94             87
attacking_heading_accuracy              70                 89             62
attacking_short_passing                 92                 83             87
attacking_volleys                       88                 87             87
skill_dribbling                         97                 89             96
skill_curve                             93                 81             88
skill_fk_accuracy                       94                 76             87
skill_long_passing                      92                 77             81
skill_ball_control                      96                 92             95
movement_acceleration                   91                 89             94
movement_sprint_speed                   84                 91             89
movement_agility                        93                 87             96
movement_reactions                      95                 96             92
movement_balance                        95                 71             84
power_shot_power                        86                 95             80
power_jumping                           68                 95             61
power_stamina                           75                 85             81
power_strength                          68                 78             49
power_long_shots                        94                 93             84
mentality_aggression                    48                 63             51
mentality_interceptions                 40                 29             36
mentality_positioning                   94                 95             87
mentality_vision                        94                 82             90
mentality_penalties                     75                 85             90
mentality_composure                     96                 95             94
defending_marking                       33                 28             27
defending_standing_tackle               37                 32             26
defending_sliding_tackle                26                 24             29
goalkeeping_diving                       6                  7              9
goalkeeping_handling                    11                 11              9
goalkeeping_kicking                     15                 15             15
goalkeeping_positioning                 14                 14             15
goalkeeping_reflexes                     8                 11             11

In order to get a more stable model, we remove four variables: nationality, overall, potential, and wage_eur.

fifa[, c("nationality", "overall", "potential", "wage_eur")] = NULL
for (i in 1:ncol(fifa)) fifa[, i] = as.numeric(fifa[, i])

9.2.3 Train a model: Ranger

The DALEX package works for any model regardless of its internal structure. Examples of how this package works are shown on a random forest model implemented in the ranger package.

We use the mlr3 package to build a predictive model. First, let's load the required packages:
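library("mlr3")         # task and learner infrastructure
library("mlr3learners") # provides the regr.ranger learner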

Then we can define the regression task - prediction of the value_eur variable:

fifa_task = as_task_regr(fifa, target = "value_eur")

Finally, we train mlr3's ranger learner with 250 trees. Note that in this example, for brevity, we do not split the data into train and test sets. The model is built on the whole data set.

fifa_ranger = lrn("regr.ranger")
fifa_ranger$param_set$values = list(num.trees = 250)
fifa_ranger$train(fifa_task)
fifa_ranger
## <LearnerRegrRanger:regr.ranger>
## * Model: ranger
## * Parameters: num.trees=250
## * Packages: mlr3, mlr3learners, ranger
## * Predict Type: response
## * Feature types: logical, integer, numeric, character, factor, ordered
## * Properties: hotstart_backward, importance, oob_error, weights

9.2.4 The general workflow

Working with explanations in the DALEX package always consists of three steps schematically shown in the pipe below.

model %>%
  explain_mlr3(data = ..., y = ..., label = ...) %>%
  model_parts() %>%
  plot()
  1. All functions in the DALEX package can work with models of any structure. This is possible because in the first step we create an adapter that allows the downstream functions to access the model in a consistent fashion. In general, such an adapter is created with the DALEX::explain.default() function, but for models created in the mlr3 package it is more convenient to use DALEXtra::explain_mlr3().

  2. Explanations are determined by the functions DALEX::model_parts(), DALEX::model_profile(), DALEX::predict_parts() and DALEX::predict_profile(). Each of these functions takes the model adapter as its first argument. The other arguments describe how the function works. We will present them in the following section.

  3. Explanations can be visualized with the generic function plot() or summarised with the generic function print(). Each explanation is a data frame with an additional class attribute. The plot() function creates graphs using the ggplot2 package, so they can easily be modified with the usual ggplot2 decorators.

We show this cascade of functions based on the FIFA example.

To get started with the exploration of the model behaviour, we need to create an explainer. The DALEX::explain.default() function handles this for all types of predictive models. The DALEXtra package additionally provides specialized versions for the most common ML frameworks; among them, the DALEXtra::explain_mlr3() function works for mlr3 models.

This function performs a series of internal checks, so the output is a bit verbose. Set the verbose = FALSE argument to make it less wordy.

library("DALEX")
library("DALEXtra")

ranger_exp = explain_mlr3(fifa_ranger,
  data     = fifa,
  y        = fifa$value_eur,
  label    = "Ranger RF",
  colorize = FALSE)
## Preparation of a new explainer is initiated
##   -> model label       :  Ranger RF 
##   -> data              :  5000  rows  38  cols 
##   -> target variable   :  5000  values 
##   -> predict function  :  yhat.LearnerRegr  will be used (  default  )
##   -> predicted values  :  No value for predict function target column. (  default  )
##   -> model_info        :  package mlr3 , ver. 0.13.0 , task regression (  default  ) 
##   -> predicted values  :  numerical, min =  469831 , mean =  7473597 , max =  91636967  
##   -> residual function :  difference between y and yhat (  default  )
##   -> residuals         :  numerical, min =  -8296233 , mean =  -310.4 , max =  18253700  
##   A new explainer has been created!

9.2.5 Dataset level exploration

The DALEX::model_parts() function calculates the importance of variables using permutation-based importance.

fifa_vi = model_parts(ranger_exp)
head(fifa_vi)
##              variable mean_dropout_loss     label
## 1        _full_model_           1353358 Ranger RF
## 2           value_eur           1353358 Ranger RF
## 3 goalkeeping_kicking           1415655 Ranger RF
## 4           weight_kg           1418386 Ranger RF
## 5    movement_balance           1420246 Ranger RF
## 6       power_jumping           1422762 Ranger RF

Results can be visualized with the generic plot() function. The chart for all 38 variables would be unreadable, so with the max_vars argument we limit the number of variables on the plot.

plot(fifa_vi, max_vars = 12, show_boxplots = FALSE)

Once we know which variables are most important, we can use Partial Dependence Plots to show how the model, on average, changes with changes in selected variables. In this example, they show the average relation between the particular variables and players’ value.

selected_variables = c("age", "movement_reactions",
  "skill_ball_control", "skill_dribbling")

fifa_pd = model_profile(ranger_exp,
  variables = selected_variables)$agr_profiles
fifa_pd
## Top profiles    : 
##              _vname_   _label_ _x_  _yhat_ _ids_
## 1 skill_ball_control Ranger RF   5 6037295     0
## 2    skill_dribbling Ranger RF   7 6361021     0
## 3    skill_dribbling Ranger RF  11 6353791     0
## 4    skill_dribbling Ranger RF  12 6354969     0
## 5    skill_dribbling Ranger RF  13 6356050     0
## 6    skill_dribbling Ranger RF  14 6356651     0

Again, the result of the explanation can be presented with the generic function plot().

library("ggplot2")
plot(fifa_pd) +
  scale_y_continuous("Estimated value in Euro", labels = scales::dollar_format(suffix = "€", prefix = "")) +
  ggtitle("Partial Dependence profiles for selected variables")

The general trend for most player characteristics is the same: the higher the skills, the higher the player's worth. The single exception is the variable age.

9.2.6 Instance level explanation

Time to see how the model behaves for a single observation/player. This can be done for any player, but in this example we will use Cristiano Ronaldo.

The function predict_parts is an instance-level version of the model_parts function introduced in the previous section. For the background behind that method see the Introduction to Break Down.

ronaldo = fifa["Cristiano Ronaldo", ]
ronaldo_bd_ranger = predict_parts(ranger_exp,
  new_observation = ronaldo)
head(ronaldo_bd_ranger)
##                                       contribution
## Ranger RF: intercept                       7473597
## Ranger RF: movement_reactions = 96        12438174
## Ranger RF: skill_ball_control = 92         6591115
## Ranger RF: attacking_finishing = 94        4657746
## Ranger RF: mentality_positioning = 95      5064647
## Ranger RF: skill_dribbling = 89            4676977

The generic plot() function shows the estimated contribution of variables to the final prediction.

Cristiano is a striker; therefore, the characteristics that influence his worth are those related to attack, like attacking_volleys or skill_dribbling. The only variable with negative attribution is age.

plot(ronaldo_bd_ranger)

Another way to inspect the local behaviour of the model is to use SHapley Additive exPlanations (SHAP). It locally shows the contribution of variables to a single observation, just like Break Down.

ronaldo_shap_ranger = predict_parts(ranger_exp,
  new_observation = ronaldo,
  type = "shap")

plot(ronaldo_shap_ranger) +
  scale_y_continuous("Estimated value in Euro", labels = scales::dollar_format(suffix = "€", prefix = ""))

In the previous section, we introduced a global explanation: Partial Dependence Plots. The Ceteris Paribus profile is the instance-level version of that plot. It shows the response of the model for a single observation when we change only one variable while all others stay unchanged. The blue dot stands for the original value.

selected_variables = c("age", "movement_reactions",
  "skill_ball_control", "skill_dribbling")

ronaldo_cp_ranger = predict_profile(ranger_exp, ronaldo, variables = selected_variables)

plot(ronaldo_cp_ranger, variables = selected_variables) +
  scale_y_continuous("Estimated value of Christiano Ronaldo", labels = scales::dollar_format(suffix = "€", prefix = ""))