Editors
Bernd Bischl, Raphael Sonabend, Lars Kotthoff, Michel Lang
Contributors
Marc Becker
Przemysław Biecek
Martin Binder
Bernd Bischl
Lukas Burk
Giuseppe Casalicchio
Susanne Dandl
Sebastian Fischer
Natalie Foss
Lars Kotthoff
Michel Lang
Florian Pfisterer
Damir Pulatov
Lennart Schneider
Patrick Schratz
Raphael Sonabend
Janek Thomas
Marvin N. Wright
Welcome to the Machine Learning in R universe. This is the online version of the print book Applied Machine Learning Using mlr3 in R, published by CRC Press; you can buy a copy of the book here. All profits from the book go to the mlr organisation to support future maintenance and development of the mlr universe. This book will teach you about the mlr3 universe of packages, from machine learning methodology to implementations of complex algorithmic pipelines.
We hope you enjoy reading our book and always welcome comments and feedback. If you notice any mistakes, we would appreciate it if you could open an issue in the mlr3book issue tracker.
Licensing
Code chunks in this book are licensed under MIT and all figures generated by code chunks are licensed under CC BY, which means you can copy, adapt, and redistribute this material in any way that you like as long as you reference this book (see citation information just below).
All other content (text, tables, figures not generated from code chunks, etc.) is licensed under CC BY-NC-SA 4.0, which means you can copy, redistribute, and adapt the material however you want, as long as: (1) you reference the book (see citation information below); (2) you do not use any material for commercial purposes; and (3) you use a CC BY-NC-SA 4.0 compatible license for any adapted material.
If you have any questions about licensing just open an issue and we will help you out.
Citation Information
Citation details of packages in the mlr3 ecosystem can be found in their respective GitHub repositories.
When citing this book, please cite chapters directly; citations can be found at the end of each chapter. If you need to reference the full book, please use:
Bischl, B., Sonabend, R., Kotthoff, L., & Lang, M. (Eds.). (2024).
"Applied Machine Learning Using mlr3 in R". CRC Press. https://mlr3book.mlr-org.com
@book{Bischl2024,
title = {Applied Machine Learning Using {m}lr3 in {R}},
editor = {Bernd Bischl and Raphael Sonabend and Lars Kotthoff and Michel Lang},
url = {https://mlr3book.mlr-org.com},
year = {2024},
isbn = {9781032507545},
publisher = {CRC Press}
}
Community Links
The mlr community is open to all and we welcome everybody, from those completely new to machine learning and R to advanced coders and professional data scientists.
The mlr3 GitHub is a good starting point for links to cheatsheets, documentation, videos, slides, overview tables, and pointers to other packages. If you want to chat to us, you can reach us on our Mattermost. For case studies and how-to guides, check out the mlr3gallery.
We appreciate all contributions, whether they are bug reports, feature requests, or pull requests that fix bugs or extend functionality. Each of our GitHub repositories includes issues and pull request templates to ensure we can help you as much as possible to get started. Please make sure you read our code of conduct and contribution guidelines before opening your first issue or pull request.
With so many packages in our universe, it may be hard to keep track of where to open issues. As a general rule:
If you have a question about using any part of the mlr3 ecosystem, ask on StackOverflow and use the tag #mlr3 – one of our team will answer you there. Be sure to include a reproducible example (reprex); if we think you found a bug, we will either refer you to the relevant GitHub repository or open an issue for you.
Issues or pull requests about core functionality (train, predict, etc.) should be opened in the mlr3 GitHub repository.
Issues or pull requests about learners should be opened in the mlr3extralearners GitHub repository.
Issues or pull requests about measures should be opened in the mlr3measures GitHub repository.
Issues or pull requests about specialized functionality (e.g., pipelines and tuning) should be opened in the GitHub repository of the respective package.
Do not worry about opening an issue in the wrong place, we will transfer it to the right one.
Overview
The mlr3 ecosystem is the result of many years of methodological and applied research. This book describes the resulting features and discusses best practices for machine learning (ML), technical implementation details, and in-depth considerations for model optimization. It may be helpful both for practitioners who want to quickly apply ML algorithms and for researchers who want to implement, benchmark, and compare their new methods in a structured environment. While we hope this book is accessible to a wide range of readers and levels of ML expertise, we do assume that readers have taken at least an introductory ML course, or have the equivalent expertise, and have some basic experience with R. A background in computer science or statistics is beneficial for understanding the advanced functionality described in the later chapters, but not required. A comprehensive ML introduction for those new to the field can be found in James et al. (2014); Wickham and Grolemund (2017) provide a comprehensive introduction to data science in R.
The book is split into the following four parts:
Part I: Fundamentals In this part of the book we will teach you the fundamentals of mlr3. This will give you a flavor of the building blocks of the mlr3 universe and the basic tools you will need to tackle most machine learning problems. We recommend that all readers study these chapters to become familiar with mlr3 terminology, syntax, and style. In 2 Data and Basic Modeling we will cover the basic classes in mlr3, including Learner (machine learning implementations), Measure (performance metrics), and Task (machine learning task definitions). 3 Evaluation and Benchmarking will take evaluation a step further to include discussions about resampling – robust strategies for measuring model performance – and benchmarking – experiments for comparing multiple models.
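As a first taste of the syntax introduced in these chapters, the following minimal sketch trains a decision tree on the built-in penguins task and evaluates it on a holdout split (exact defaults may differ slightly between mlr3 versions):

```r
library(mlr3)

task = tsk("penguins")          # Task: penguin species classification
learner = lrn("classif.rpart")  # Learner: a CART decision tree
measure = msr("classif.ce")     # Measure: classification error

split = partition(task)         # default holdout split into train/test row ids
learner$train(task, row_ids = split$train)
prediction = learner$predict(task, row_ids = split$test)
prediction$score(measure)       # estimated classification error on the test set
```

The sugar functions tsk(), lrn(), and msr() retrieve objects from dictionaries by their string id, which is the style used throughout the book.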
Part II: Tuning and Feature Selection In this part of the book, we will look at more advanced methodology that is essential to developing powerful ML models with good predictive ability. 4 Hyperparameter Optimization introduces hyperparameter optimization, which is the process of tuning model hyperparameters to obtain better model performance. Tuning is implemented via the mlr3tuning package, which also includes methods for automating complex tuning processes, including nested resampling. The performance of ML models can be improved by tuning hyperparameters but also by carefully selecting features. 6 Feature Selection introduces feature selection with filters and wrappers implemented in mlr3filters and mlr3fselect. For readers interested in taking a deep dive into tuning, 5 Advanced Tuning Methods and Black Box Optimization discusses advanced tuning methods including error handling, multi-objective tuning, and tuning with Hyperband and Bayesian optimization methods.
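To give a flavor of what is covered there, here is a minimal tuning sketch using the tune() helper from mlr3tuning; the argument names reflect recent package versions and may change:

```r
library(mlr3)
library(mlr3tuning)

# mark the complexity parameter of a decision tree as tunable over a range
learner = lrn("classif.rpart", cp = to_tune(0.001, 0.1))

instance = tune(
  tuner = tnr("grid_search"),
  task = tsk("penguins"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce")
)
instance$result  # best configuration found and its estimated performance
```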
Part III: Pipelines and Preprocessing In Part III we introduce mlr3pipelines, which allows users to implement complex ML workflows easily. In 7 Sequential Pipelines we will show you how to build a pipeline out of discrete configurable operations and how to treat complex pipelines as if they were any other machine learning model. In 8 Non-sequential Pipelines and Tuning we will build on the previous chapter by introducing non-sequential pipelines, which can have multiple branches that carry out operations concurrently. We will also demonstrate how to tune pipelines, including how to tune which operations should be included in the pipeline. Finally, in 9 Preprocessing we will put pipelines into practice by demonstrating how to solve common problems that occur when fitting ML models to messy data.
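As a preview of the pipeline syntax, this short sketch chains a median-imputation step to a decision tree and wraps the result so it can be used like any other learner (the operator id "imputemedian" is the one used by mlr3pipelines at the time of writing):

```r
library(mlr3)
library(mlr3pipelines)

# impute missing numeric values, then fit a classification tree
graph = po("imputemedian") %>>% lrn("classif.rpart")

# the whole pipeline now behaves like a single Learner
graph_learner = as_learner(graph)
graph_learner$train(tsk("penguins"))
graph_learner$predict(tsk("penguins"))
```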
Part IV: Advanced Topics In the final part of the book, we will look at advanced methodology and technical details. This part of the book is more theory-heavy in some sections to help ground the design and implementation decisions. We will begin by looking at advanced technical details in 10 Advanced Technical Aspects of mlr3 that are essential reading for advanced users who require parallelization, custom error handling, or large databases. 11 Large-Scale Benchmarking will build on all preceding chapters to introduce large-scale benchmarking experiments that compare many models, tasks, and measures, including how to make use of mlr3 extension packages for loading data, using high-performance computing clusters, and formal statistical analysis of benchmark experiments. 12 Model Interpretation will discuss different packages that are compatible with mlr3 to provide model-agnostic interpretability for feature importance and local explainability of individual predictions. 13 Beyond Regression and Classification will then delve into detail on domain-specific methods that are implemented in our extension packages including survival analysis, density estimation, spatio-temporal analysis, and more. Readers may choose to selectively read sections in this chapter depending on their use case (i.e., if they have domain-specific problems to tackle), or to use these as introductions to new domains to explore. Finally, 14 Algorithmic Fairness will introduce algorithmic fairness, which includes specialized measures and methods to identify and reduce algorithmic biases.
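A small sketch of the kind of benchmark experiment that is scaled up in later chapters, comparing two learners on two built-in tasks; uncommenting the future::plan() line would run resampling iterations in parallel, assuming the future package is installed:

```r
library(mlr3)
# future::plan("multisession")  # optional: parallelize resampling iterations

design = benchmark_grid(
  tasks = tsks(c("penguins", "sonar")),
  learners = lrns(c("classif.rpart", "classif.featureless")),
  resamplings = rsmp("cv", folds = 3)
)
bmr = benchmark(design)
bmr$aggregate(msr("classif.ce"))  # one row per task/learner combination
```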
Acknowledgments
As well as the editors and contributing authors, many others have contributed to this book. We would like to acknowledge Stefan Coors for creating many of the images in the book, as well as Daniel Saggau, Jakob Richter, and Marvin Böcker for contributions to earlier drafts of the book. We would also like to acknowledge the following organisations that supported various contributors: Munich Center for Machine Learning (MCML), National Science Foundation (NSF), and Mathematical Research Data Initiative (MaRDI).
James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2014. An Introduction to Statistical Learning: With Applications in R. Springer Publishing Company, Incorporated. https://doi.org/10.1007/978-1-4614-7138-7.
Wickham, Hadley, and Garrett Grolemund. 2017. R for Data Science: Import, Tidy, Transform, Visualize, and Model Data. 1st ed. O’Reilly Media. https://r4ds.had.co.nz/.
Preface

The mlr package was first released on CRAN in 2013, with the core design and architecture dating back further. Over time, the addition of many features led to a complex design that made it too difficult for us to extend further. In hindsight, we saw that some design and architecture choices in mlr made it difficult to support new features, in particular with respect to ML pipelines. So in 2018, we set about working on a reimplementation, which resulted in the first release of mlr3 on CRAN in July 2019.

This book is the culmination of many years' worth of software design, coding, writing, and editing. It is very important to us that all our contributors are credited appropriately.

We hope you enjoy reading this book.

Bernd, Raphael, Lars, Michel

Editors

Bernd Bischl is a professor of Statistical Learning and Data Science at LMU Munich and co-director of the Munich Center for Machine Learning. He studied Computer Science, Artificial Intelligence, and Data Science and holds a PhD in Statistics. His research interests include AutoML, model selection, interpretable ML, and the development of statistical software. He wrote the initial version of mlr in 2012 and 2013 and still leads the team of developers of mlr3, now largely focusing on design, code review, and strategic development.

Raphael Sonabend is the CEO and co-founder of OSPO Now and a visiting researcher at Imperial College London. They hold a PhD in statistics, specializing in machine learning applications for survival analysis. They wrote the mlr3 packages mlr3proba and mlr3benchmark.

Lars Kotthoff is an associate professor of Computer Science at the University of Wyoming, US. He has studied and held academic appointments in Germany, the UK, Ireland, and Canada. Lars has been contributing to mlr for about a decade. His research aims to automate machine learning and other areas of AI.

Michel Lang is the scientific coordinator of the Research Center Trustworthy Data Science and Security. He has a PhD in Statistics and has been developing statistical software for over a decade. He joined the mlr team in 2014 and wrote the initial version of mlr3.

Contributor Affiliations

Marc Becker (Ludwig-Maximilians-Universität München)
Przemysław Biecek (MI2.AI, Warsaw University of Technology; University of Warsaw)
Martin Binder (Ludwig-Maximilians-Universität München; Munich Center for Machine Learning (MCML))
Bernd Bischl (Ludwig-Maximilians-Universität München; MCML)
Lukas Burk (Ludwig-Maximilians-Universität München; Leibniz Institute for Prevention Research and Epidemiology - BIPS; MCML)
Giuseppe Casalicchio (Ludwig-Maximilians-Universität München; MCML; Essential Data Science Training GmbH)
Susanne Dandl (Ludwig-Maximilians-Universität München; MCML)
Sebastian Fischer (Ludwig-Maximilians-Universität München)
Natalie Foss (University of Wyoming)
Lars Kotthoff (University of Wyoming)
Michel Lang (Research Center Trustworthy Data Science and Security; TU Dortmund University)
Florian Pfisterer (Ludwig-Maximilians-Universität München)
Damir Pulatov (University of Wyoming)
Lennart Schneider (Ludwig-Maximilians-Universität München; MCML)
Patrick Schratz (Friedrich Schiller University Jena)
Raphael Sonabend (OSPO Now)
Janek Thomas (Ludwig-Maximilians-Universität München; MCML; Essential Data Science Training GmbH)
Marvin N. Wright (Leibniz Institute for Prevention Research and Epidemiology - BIPS; University of Bremen; University of Copenhagen)