7.6 Cost-Sensitive Classification

In regular classification the aim is to minimize the misclassification rate, and thus all types of misclassification errors are deemed equally severe. A more general setting is cost-sensitive classification, which does not assume that the costs caused by different kinds of errors are equal. Its objective is to minimize the expected costs.

Imagine you are an analyst for a big credit institution. Let’s assume that a correct decision by the bank results in a profit of 35% of the loan at the end of a specific period. A correct decision means that the bank predicts that a customer will pay their bills (and hence grants the loan), and the customer indeed has good credit. A wrong decision, on the other hand, means that the bank predicts that the customer’s credit is in good standing, but the opposite is true. This results in a loss of 100% of the given loan.

                            Good Customer (truth)    Bad Customer (truth)
Good Customer (predicted)          +0.35                    -1.0
Bad Customer (predicted)            0                        0

Expressed as costs (instead of profit), we can write down the cost matrix as follows:
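A minimal sketch of how such a cost matrix could be defined in R, assuming the orientation expected by MeasureClassifCosts (predicted response in rows, truth in columns) and taking costs as negative profit:

```r
# costs = negative profit: a correctly granted loan yields a profit of 0.35,
# a loan granted to a bad customer costs the full amount (1.0), declined loans cost nothing
# (orientation assumed: predicted response in rows, truth in columns)
costs = matrix(c(-0.35, 0, 1, 0), nrow = 2)
dimnames(costs) = list(response = c("good", "bad"), truth = c("good", "bad"))
print(costs)
```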

An exemplary data set for such a problem is the German Credit task:
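A sketch of loading this task, assuming the built-in mlr3 task id "german_credit":

```r
library("mlr3")

# load the German credit task shipped with mlr3
task = tsk("german_credit")
print(task)
```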

The data contains 70% good customers who are able to pay back their credit, and 30% bad customers who default on the debt. A manager without any model could decide either to grant everybody a loan or to grant nobody a loan. The resulting costs for the German credit data are:
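As a rough back-of-the-envelope calculation using the profit values from the matrix above (plain R, no mlr3 functionality involved):

```r
# expected cost per applicant for the two trivial strategies,
# assuming 30% bad and 70% good customers
cost_give_all  = 0.3 * 1.0 + 0.7 * (-0.35)  # grant everybody a loan
cost_give_none = 0                          # grant nobody a loan
c(give_all = cost_give_all, give_none = cost_give_none)
```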

If the average loan is $20,000, the credit institution would lose more than one million dollars if it granted everybody a loan:
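For illustration, with the 1,000 applicants of the German credit data and an average loan of $20,000:

```r
# expected cost of 0.055 per dollar lent (see the calculation above),
# scaled to 1000 loans of $20,000 each
0.055 * 1000 * 20000  # = 1,100,000 dollars lost
```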

Our goal is to find a model which minimizes the costs (and thereby maximizes the expected profit).

7.6.1 A First Model

For our first model, we choose an ordinary logistic regression (implemented in the add-on package mlr3learners). We first create a classification task, then resample the model using 10-fold cross-validation and extract the resulting confusion matrix:
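A sketch of this workflow; the learner key "classif.log_reg" is provided by mlr3learners, and predict_type = "prob" is set here because we will need predicted probabilities for thresholding later:

```r
library("mlr3")
library("mlr3learners")

task = tsk("german_credit")
learner = lrn("classif.log_reg", predict_type = "prob")

# 10-fold cross-validation and the combined confusion matrix
rr = resample(task, learner, rsmp("cv", folds = 10))
confusion = rr$prediction()$confusion
print(confusion)
```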

To calculate the average costs as above, we can simply multiply the elements of the confusion matrix by the corresponding elements of the previously introduced cost matrix, sum the values of the resulting matrix, and divide by the number of observations:
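Continuing the sketch, with the cost matrix defined earlier (its rows and columns are assumed to match the response-by-truth orientation of the confusion matrix):

```r
# align the cost matrix with the confusion matrix, multiply element-wise,
# and average over all observations
avg_costs = sum(confusion * costs[rownames(confusion), colnames(confusion)]) / task$nrow
avg_costs
```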

With an average loan of $20,000, the logistic regression yields the following costs:
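For illustration, scaling the per-applicant costs back to dollars:

```r
# total expected result in dollars; negative values correspond to profit
avg_costs * task$nrow * 20000
```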

Instead of losing over $1,000,000, the credit institution can now expect a profit of more than $1,000,000.

7.6.2 Cost-Sensitive Measure

Our natural next step would be to further improve the modeling in order to maximize the profit. For this purpose, we first create a cost-sensitive classification measure which calculates the costs based on our cost matrix. This allows us to conveniently quantify and compare modeling decisions. Fortunately, there already is a predefined Measure for this purpose: MeasureClassifCosts:
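A sketch of constructing this measure via its dictionary key "classif.costs", assuming the costs matrix defined above can be passed as the costs argument:

```r
# cost-sensitive measure based on our cost matrix (assumed constructor argument: costs)
cost_measure = msr("classif.costs", costs = costs)
print(cost_measure)
```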

If we now call resample() or benchmark(), the cost-sensitive measure will be evaluated. We compare the logistic regression to a simple featureless learner and to a random forest from the package ranger:
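A sketch of such a benchmark; the learner keys "classif.featureless" and "classif.ranger" are assumed to be available via mlr3 and mlr3learners, respectively:

```r
library("mlr3learners")

learners = list(
  lrn("classif.log_reg", predict_type = "prob"),
  lrn("classif.featureless", predict_type = "prob"),
  lrn("classif.ranger", predict_type = "prob")
)

# cross-validate all three learners on the credit task and aggregate the costs
design = benchmark_grid(task, learners, rsmp("cv", folds = 10))
bmr = benchmark(design)
bmr$aggregate(cost_measure)
```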

As expected, the featureless learner performs comparatively badly. The logistic regression and the random forest work equally well.

7.6.3 Thresholding

Although we now correctly evaluate the models in a cost-sensitive fashion, the models themselves are unaware of the classification costs. They assume the same costs for both wrong classification decisions (false positives and false negatives). Some learners natively support cost-sensitive classification (e.g., XXX). However, we will concentrate on a more generic approach that works for all models which can predict probabilities for class labels: thresholding.

Most learners can calculate the probability \(p\) of the positive class. If \(p\) exceeds the default threshold of \(0.5\), they predict the positive class, and the negative class otherwise.

For our binary classification case on the credit data, we primarily want to minimize the errors where the model predicts “good” but the truth is “bad” (i.e., the number of false positives), as this is the more expensive error. If we now increase the threshold to values \(> 0.5\), we reduce the number of false positives. Note that we increase the number of false negatives simultaneously, or, in other words, we are trading false negatives for false positives.
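To illustrate this trade-off on the cross-validated predictions from above, a sketch using PredictionClassif (the method $set_threshold() changes the threshold after the fact; “good” is assumed to be the positive class of the task):

```r
pred = rr$prediction()

# default threshold of 0.5
pred$confusion

# a higher threshold makes the model more reluctant to predict "good",
# trading false negatives for fewer (expensive) false positives
pred$set_threshold(0.75)
pred$confusion
```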

Instead of manually trying different threshold values, we can use optimize() to find a good threshold value w.r.t. our performance measure:
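A sketch of such a search, minimizing the cost measure over thresholds in \([0.5, 1]\) (the helper cost_at() is introduced here for illustration; passing the task to $score() is assumed to be sufficient for the cost measure):

```r
# costs of the cross-validated predictions at a given threshold
cost_at = function(th) {
  pred$set_threshold(th)
  pred$score(cost_measure, task = task)
}

# base R's optimize() searches the interval for a minimum
best = optimize(cost_at, interval = c(0.5, 1))
print(best)
```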

The function optimize() is intended for unimodal functions and therefore may converge to a local optimum here. See below for better alternatives to find good threshold values.

7.6.4 Threshold Tuning

More following soon!

Upcoming sections will cover:

  • threshold tuning as pipeline operator
  • joint hyperparameter optimization
