Interactions – where are you?
This question sends shivers down the poor modeler’s spine… The {hstats} R package introduced in our last post measures the strength of interactions using Friedman’s H-statistics, a collection of statistics based on partial dependence functions. On GitHub, the preview version of {hstats} 1.0.0 is out – I will try to bring it to CRAN in about one week…
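A minimal sketch of what this might look like (assuming the previewed {hstats} interface with hstats(), summary(), h2_pairwise() and plot(); the toy linear model with an interaction term is mine, not from the post):

```r
library(hstats)

# Toy model with an explicit interaction term (hypothetical example)
fit <- lm(Sepal.Length ~ . + Petal.Width:Species, data = iris)

s <- hstats(fit, X = iris[, -1])  # partial-dependence-based statistics
summary(s)                        # overall and pairwise interaction strength
h2_pairwise(s)                    # Friedman's pairwise H^2
plot(s)
```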
-
It’s the interactions
What makes an ML model a black box? It is the interactions. Without any interactions, the ML model is additive and can be described exactly. Studying interaction effects of ML models is challenging. The main XAI approaches are: … This post is mainly about the third approach. Its beauty is that we get information about all interactions…
-
Model Diagnostics in Python
Version 1.0.0 of the new Python package model-diagnostics was just released on PyPI.
-
Geographic SHAP
“R Python” continued… Geographic SHAP
-
Quantiles And Their Estimation
Applied statistics is dominated by the ubiquitous mean. For a change, this post is dedicated to quantiles. I will do my best to provide a good mix of theory and practical examples. While the mean describes only the central tendency of a distribution or random sample, quantiles are able to describe the whole distribution. They…
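As a tiny base R illustration (my own, not from the post): the mean collapses a skewed sample to a single number, while a handful of quantiles already sketch its whole shape.

```r
set.seed(1)
x <- rexp(1000)                                      # a right-skewed sample

mean(x)                                              # central tendency only
quantile(x, probs = c(0.1, 0.25, 0.5, 0.75, 0.9))    # location, spread and skewness
quantile(x, probs = 0.5, type = 1)                   # several estimation types exist
```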
-
SHAP + XGBoost + Tidymodels = LOVE
Using tidymodels and shapviz to explain XGBoost models
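A rough sketch of the workflow (hedged: not necessarily the post’s exact code, and the mtcars example is mine):

```r
library(tidymodels)
library(shapviz)

# Fit a small XGBoost regression via tidymodels
fit <- boost_tree(mode = "regression", trees = 100) |>
  set_engine("xgboost") |>
  fit(mpg ~ ., data = mtcars)

# Explain it with SHAP
X <- data.matrix(mtcars[, -1])                  # predictors as numeric matrix
sv <- shapviz(extract_fit_engine(fit), X_pred = X)
sv_importance(sv)                               # SHAP feature importance
sv_dependence(sv, "wt")                         # SHAP dependence plot for one feature
```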
-
Dplyr-style without dplyr
How to get the “dplyr” feeling without “dplyr”
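One way to get that feeling (a sketch with base R and the native pipe, not necessarily the post’s approach; the placeholder `_` requires R >= 4.2):

```r
# filter() / mutate() / group_by() + summarize(), dplyr-free
mtcars |>
  subset(cyl > 4) |>                                    # filter rows
  transform(hp_per_cyl = hp / cyl) |>                   # add a column
  aggregate(hp_per_cyl ~ gear, data = _, FUN = mean)    # grouped summary
```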
-
Interpret Complex Linear Models with SHAP within Seconds
Peeking into richly parametrized linear models with SHAP? Yes!
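One possible recipe (a hedged sketch, not necessarily the post’s code): combine {kernelshap} with {shapviz} on a linear model full of interactions and transformations.

```r
library(kernelshap)
library(shapviz)

# Richly parametrized linear model (toy example)
fit <- lm(Sepal.Length ~ Petal.Length * Species + log(Sepal.Width), data = iris)

xvars <- c("Petal.Length", "Species", "Sepal.Width")
shap <- kernelshap(fit, X = iris[xvars], bg_X = iris[xvars])  # SHAP values
sv <- shapviz(shap)
sv_importance(sv, kind = "beeswarm")
```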
-
Histograms, Gradient Boosted Trees, Group-By Queries and One-Hot Encoding
This post shows how filling histograms can be done in very different ways, thereby connecting very different areas: from gradient boosted trees to SQL queries to one-hot encoding. Let’s jump into it! Modern gradient boosted trees (GBT) like LightGBM, XGBoost and scikit-learn’s HistGradientBoostingRegressor all use two techniques on top of standard gradient boosting:…
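To make the connection concrete, here is a small sketch (my own toy example, not from the post): filling the gradient histogram of one feature is nothing but a group-by aggregation over its bins.

```r
set.seed(1)
x <- runif(1000)   # one feature
g <- rnorm(1000)   # per-observation gradients (toy values)

bins <- cut(x, breaks = 32, labels = FALSE)   # assign each value to one of 32 bins
hist_sum   <- tapply(g, bins, sum)            # SELECT bin, SUM(g)   ... GROUP BY bin
hist_count <- tabulate(bins, nbins = 32)      # SELECT bin, COUNT(*) ... GROUP BY bin

# A histogram-based GBT scans cumulative sums of these per-bin statistics
# to find the best split point.
cumsum(hist_sum)
```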
-
The Unfairness of AI Fairness
Fairness in Artificial Intelligence (AI) and Machine Learning (ML) is a recent and hot topic. As ML models are used in insurance pricing, the fairness topic also applies there. Just last month, Lindholm, Richman, Tsanakas and Wüthrich published a discussion paper on this subject that sheds new light on established AI fairness criteria. This post…
-
Kernel SHAP in R and Python
“R Python” continued… Kernel SHAP
-
Kernel SHAP
Standard Kernel SHAP has arrived in R. We show how well it plays together with deep learning in Keras.
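The interface is model-agnostic: {kernelshap} only needs a prediction function. A hedged sketch (using a GAM from {mgcv} as a stand-in for the Keras network of the post):

```r
library(kernelshap)
library(mgcv)

fit <- gam(Sepal.Length ~ s(Petal.Length) + Species, data = iris)

xvars <- c("Petal.Length", "Species")
shap <- kernelshap(
  fit,
  X = iris[xvars],      # rows to explain
  bg_X = iris[xvars],   # background data
  pred_fun = function(m, X) as.numeric(predict(m, newdata = X))
)
shap
```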