References

Baniecki, Hubert, and Przemyslaw Biecek. 2019. “modelStudio: Interactive Studio with Explanations for ML Predictive Models.” Journal of Open Source Software 4 (43): 1798. https://doi.org/10.21105/joss.01798.
Baniecki, Hubert, Wojciech Kretowicz, Piotr Piątyszek, Jakub Wiśniewski, and Przemysław Biecek. 2021. “dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python.” Journal of Machine Learning Research 22 (214): 1–7. http://jmlr.org/papers/v22/20-1473.html.
Bengio, Yoshua, and Yves Grandvalet. 2003. “No Unbiased Estimator of the Variance of k-Fold Cross-Validation.” Advances in Neural Information Processing Systems 16.
Bergstra, James, and Yoshua Bengio. 2012. “Random Search for Hyper-Parameter Optimization.” Journal of Machine Learning Research 13 (10): 281–305. http://jmlr.org/papers/v13/bergstra12a.html.
Biecek, Przemyslaw. 2018. “DALEX: Explainers for Complex Predictive Models in R.” Journal of Machine Learning Research 19 (84): 1–5. http://jmlr.org/papers/v19/18-416.html.
Biecek, Przemyslaw, and Tomasz Burzykowski. 2021. Explanatory Model Analysis. New York: Chapman & Hall/CRC. https://ema.drwhy.ai/.
Binder, Martin, Florian Pfisterer, Michel Lang, Lennart Schneider, Lars Kotthoff, and Bernd Bischl. 2021. “mlr3pipelines - Flexible Machine Learning Pipelines in R.” Journal of Machine Learning Research 22 (184): 1–7. http://jmlr.org/papers/v22/21-0281.html.
Bischl, Bernd, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, et al. 2021. “Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges.” https://doi.org/10.48550/ARXIV.2107.05847.
Bischl, Bernd, Michel Lang, Lars Kotthoff, Julia Schiffner, Jakob Richter, Erich Studerus, Giuseppe Casalicchio, and Zachary M. Jones. 2016. “mlr: Machine Learning in R.” Journal of Machine Learning Research 17 (170): 1–5. http://jmlr.org/papers/v17/15-066.html.
Bischl, Bernd, Olaf Mersmann, Heike Trautmann, and Claus Weihs. 2012. “Resampling Methods for Meta-Model Validation with Recommendations for Evolutionary Computation.” Evolutionary Computation 20 (2): 249–75.
Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Springer.
Bommert, Andrea, Xudong Sun, Bernd Bischl, Jörg Rahnenführer, and Michel Lang. 2020. “Benchmark for Filter Methods for Feature Selection in High-Dimensional Classification Data.” Computational Statistics & Data Analysis 143: 106839. https://doi.org/10.1016/j.csda.2019.106839.
Breiman, Leo. 1996. “Bagging Predictors.” Machine Learning 24 (2): 123–40.
Bücker, Michael, Gero Szepannek, Alicja Gosiewska, and Przemyslaw Biecek. 2022. “Transparency, Auditability, and Explainability of Machine Learning Models in Credit Scoring.” Journal of the Operational Research Society 73 (1): 70–90. https://doi.org/10.1080/01605682.2021.1922098.
Chandrashekar, Girish, and Ferat Sahin. 2014. “A Survey on Feature Selection Methods.” Computers and Electrical Engineering 40 (1): 16–28. https://doi.org/10.1016/j.compeleceng.2013.11.024.
Collett, David. 2014. Modelling Survival Data in Medical Research. 3rd ed. CRC.
Davis, Jesse, and Mark Goadrich. 2006. “The Relationship Between Precision-Recall and ROC Curves.” In Proceedings of the 23rd International Conference on Machine Learning, 233–40.
Demšar, Janez. 2006. “Statistical Comparisons of Classifiers over Multiple Data Sets.” Journal of Machine Learning Research 7 (1): 1–30. https://jmlr.org/papers/v7/demsar06a.html.
Feurer, Matthias, and Frank Hutter. 2019. “Hyperparameter Optimization.” In Automated Machine Learning: Methods, Systems, Challenges, edited by Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren, 3–33. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-05318-5_1.
Guyon, Isabelle, and André Elisseeff. 2003. “An Introduction to Variable and Feature Selection.” Journal of Machine Learning Research 3 (Mar): 1157–82.
Hand, David J, and Robert J Till. 2001. “A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems.” Machine Learning 45: 171–86.
Hansen, Nikolaus, and Anne Auger. 2011. “CMA-ES: Evolution Strategies and Covariance Matrix Adaptation.” In Proceedings of the 13th Annual Conference Companion on Genetic and Evolutionary Computation, 991–1010.
Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2001. The Elements of Statistical Learning. Springer New York. https://doi.org/10.1007/978-0-387-21606-5.
Holzinger, Andreas, Anna Saranti, Christoph Molnar, Przemyslaw Biecek, and Wojciech Samek. 2022. “Explainable AI Methods - a Brief Overview.” International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, 13–38. https://doi.org/10.1007/978-3-031-04083-2_2.
Horst, Allison Marie, Alison Presmanes Hill, and Kristen B Gorman. 2020. palmerpenguins: Palmer Archipelago (Antarctica) penguin data. https://doi.org/10.5281/zenodo.3960218.
James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2014. An Introduction to Statistical Learning: With Applications in R. Springer Publishing Company, Incorporated.
Japkowicz, Nathalie, and Mohak Shah. 2011. Evaluating Learning Algorithms: A Classification Perspective. Cambridge University Press.
Kalbfleisch, John D, and Ross L Prentice. 2011. The Statistical Analysis of Failure Time Data. Vol. 360. John Wiley & Sons.
Karl, Florian, Tobias Pielok, Julia Moosbauer, Florian Pfisterer, Stefan Coors, Martin Binder, Lennart Schneider, et al. 2022. “Multi-Objective Hyperparameter Optimization - an Overview.” https://doi.org/10.48550/ARXIV.2206.07438.
Kim, Ji-Hyun. 2009. “Estimating Classification Error Rate: Repeated Cross-Validation, Repeated Hold-Out and Bootstrap.” Computational Statistics & Data Analysis 53 (11): 3735–45.
Krzyziński, Mateusz, Mikołaj Spytek, Hubert Baniecki, and Przemysław Biecek. 2023. “SurvSHAP(t): Time-Dependent Explanations of Machine Learning Survival Models.” Knowledge-Based Systems 262: 110234. https://doi.org/10.1016/j.knosys.2022.110234.
Lang, Michel. 2017. “checkmate: Fast Argument Checks for Defensive R Programming.” The R Journal 9 (1): 437–45. https://doi.org/10.32614/RJ-2017-028.
Lang, Michel, Martin Binder, Jakob Richter, Patrick Schratz, Florian Pfisterer, Stefan Coors, Quay Au, Giuseppe Casalicchio, Lars Kotthoff, and Bernd Bischl. 2019. “mlr3: A Modern Object-Oriented Machine Learning Framework in R.” Journal of Open Source Software, December. https://doi.org/10.21105/joss.01903.
Li, Lisha, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. 2017. “Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization.” The Journal of Machine Learning Research 18 (1): 6765–6816.
López-Ibáñez, Manuel, Jérémie Dubois-Lacoste, Leslie Pérez Cáceres, Mauro Birattari, and Thomas Stützle. 2016. “The Irace Package: Iterated Racing for Automatic Algorithm Configuration.” Operations Research Perspectives 3: 43–58.
Molinaro, Annette M, Richard Simon, and Ruth M Pfeiffer. 2005. “Prediction Error Estimation: A Comparison of Resampling Methods.” Bioinformatics 21 (15): 3301–7.
O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown Publishing Group.
R Core Team. 2019. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
Romaszko, Kamil, Magda Tatarynowicz, Mateusz Urbański, and Przemysław Biecek. 2019. “modelDown: Automated Website Generator with Interpretable Documentation for Predictive Machine Learning Models.” Journal of Open Source Software 4 (38): 1444. https://doi.org/10.21105/joss.01444.
Ruspini, Enrique H. 1970. “Numerical Methods for Fuzzy Clustering.” Information Sciences 2 (3): 319–50. https://doi.org/10.1016/S0020-0255(70)80056-1.
Schratz, Patrick, Marc Becker, Michel Lang, and Alexander Brenning. 2021. “mlr3spatiotempcv: Spatiotemporal Resampling Methods for Machine Learning in R,” October. http://arxiv.org/abs/2110.12674.
Silverman, Bernard W. 1986. Density Estimation for Statistics and Data Analysis. Vol. 26. CRC press.
Simon, Richard. 2007. “Resampling Strategies for Model Assessment and Selection.” In Fundamentals of Data Mining in Genomics and Proteomics, edited by Werner Dubitzky, Martin Granzow, and Daniel Berrar, 173–86. Boston, MA: Springer US. https://doi.org/10.1007/978-0-387-47509-7_8.
Sonabend, Raphael Edward Benjamin. 2021. “A Theoretical and Methodological Framework for Machine Learning in Survival Analysis: Enabling Transparent and Accessible Predictive Modelling on Right-Censored Time-to-Event Data.” PhD thesis, University College London (UCL). https://discovery.ucl.ac.uk/id/eprint/10129352/.
Sonabend, Raphael, and Andreas Bender. 2023. Machine Learning in Survival Analysis. https://www.mlsabook.com.
Sonabend, Raphael, Andreas Bender, and Sebastian Vollmer. 2022. “Avoiding C-hacking When Evaluating Survival Distribution Predictions with Discrimination Measures.” Edited by Zhiyong Lu. Bioinformatics 38 (17): 4178–84. https://doi.org/10.1093/bioinformatics/btac451.
Sonabend, Raphael, Franz J Király, Andreas Bender, Bernd Bischl, and Michel Lang. 2021. “mlr3proba: An R Package for Machine Learning in Survival Analysis.” Bioinformatics, February. https://doi.org/10.1093/bioinformatics/btab039.
Tsallis, Constantino, and Daniel A Stariolo. 1996. “Generalized Simulated Annealing.” Physica A: Statistical Mechanics and Its Applications 233 (1-2): 395–406.
Wiśniewski, Jakub, and Przemysław Biecek. 2022. “fairmodels: A Flexible Tool for Bias Detection, Visualization, and Mitigation in Binary Classification Models.” The R Journal 14: 227–43. https://doi.org/10.32614/RJ-2022-019.
Wolpert, David H. 1992. “Stacked Generalization.” Neural Networks 5 (2): 241–59. https://doi.org/10.1016/S0893-6080(05)80023-1.
Xiang, Yang, Sylvain Gubian, Brian Suomela, and Julia Hoeng. 2013. “Generalized Simulated Annealing for Global Optimization: The GenSA Package.” The R Journal 5 (1): 13–28.