In a recent post, I introduced the initial version of the “shapviz” package. Its motto: do one thing, but do it well: visualize SHAP values.
The initial community feedback was very positive, and a couple of things have been improved in version 0.2.0. Here are the main changes:
“shapviz” now works with tree-based models of the h2o package in R.
Additionally, it wraps the shapr package, which implements an improved version of Kernel SHAP taking into account feature dependence.
A simple interface to collapse SHAP values of dummy variables was added.
The default importance plot is now a bar plot, instead of the (slower) beeswarm plot. In later releases, the latter might be moved to a separate function sv_summary() for consistency with other packages.
Importance plots and dependence plots now work neatly with ggplotly(). The other plot types cannot be translated with ggplotly() because they use geoms from outside ggplot2. At least I do not know how to do this…
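As a hedged sketch (assuming the plotly package and the “shapviz” object shp created in the example below):
R
# Turn the importance plot interactive (sketch; shp is created further below)
library(plotly)
ggplotly(sv_importance(shp, show_numbers = TRUE))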
Example
Let’s build an H2O gradient boosted trees model to explain diamond prices. Then, we explain the model with our “shapviz” package. Note that H2O itself also offers some SHAP plots. “shapviz” is directly applied to the fitted H2O model. This means you don’t have to write a single superfluous line of code.
R
library(shapviz)
library(tidyverse)
library(h2o)
h2o.init()
set.seed(1)
# Get rid of those darn ordinals
ord <- c("clarity", "cut", "color")
diamonds[, ord] <- lapply(diamonds[, ord], factor, ordered = FALSE)
# Minimally tuned GBM with 260 trees, determined by early-stopping with CV
dia_h2o <- as.h2o(diamonds)
fit <- h2o.gbm(
c("carat", "clarity", "color", "cut"),
y = "price",
training_frame = dia_h2o,
nfolds = 5,
learn_rate = 0.05,
max_depth = 4,
ntrees = 10000,
stopping_rounds = 10,
score_each_iteration = TRUE
)
fit
# SHAP analysis on about 2000 diamonds
X_small <- diamonds %>%
filter(carat <= 2.5) %>%
sample_n(2000) %>%
as.h2o()
shp <- shapviz(fit, X_pred = X_small)
sv_importance(shp, show_numbers = TRUE)
sv_importance(shp, show_numbers = TRUE, kind = "bee")
sv_dependence(shp, "color", "auto", alpha = 0.5)
sv_force(shp, row_id = 1)
sv_waterfall(shp, row_id = 1)
Summary and importance plots
The SHAP importance and SHAP summary plots clearly show that carat is the most important variable. On average, it impacts the prediction by 3247 USD. The effect of “cut” is much smaller. Its impact on the predictions, on average, is plus or minus 112 USD.
SHAP summary plot
SHAP importance plot
SHAP dependence plot
The SHAP dependence plot shows the effect of “color” on the prediction: the better the color (the closer to “D”), the higher the price. Using a correlation-based heuristic, the plot selected carat for the color scale to show that the color effect is highly influenced by carat, in the sense that the impact of color increases with larger diamond weight. This clearly makes sense!
Dependence plot for “color”
Waterfall and force plot
Finally, the waterfall and force plots show how a single prediction is decomposed into contributions from each feature. While this does not tell much about the model itself, it might be helpful to explain what SHAP values are and to debug strange predictions.
Waterfall plot
Force plot
Short wrap-up
Combining “shapviz” and H2O is fun. Okay, that one was subjective :-).
Good visualization of ML models is extremely helpful and reassuring.
SHAP (SHapley Additive exPlanations, Lundberg and Lee, 2017) is an ingenious way to study black box models. SHAP values decompose – as fairly as possible – predictions into additive feature contributions.
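In formulas: a prediction f(x) is decomposed into a baseline plus one SHAP value per feature,
f(x) = \phi_0 + \sum_{j = 1}^p \phi_j,
where \phi_0 denotes the baseline (an average prediction) and \phi_j the contribution of the j-th feature.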
When it comes to SHAP, the Python implementation is the de-facto standard. It not only offers many SHAP algorithms, but also provides beautiful plots. In R, the situation is a bit more confusing: different packages contain implementations of SHAP algorithms, some of them with great visualizations. Plus there is SHAPforxgboost (see my recent post), originally designed to visualize SHAP values calculated from XGBoost, but by now usable more generally.
The shapviz package
In order to disentangle calculation from visualization, the shapviz package was designed. It solely focuses on the visualization of SHAP values. Closely following its README, it currently provides these plots:
sv_waterfall(): Waterfall plots to study single predictions.
sv_force(): Force plots as an alternative to waterfall plots.
sv_importance(): Importance plots (bar and/or beeswarm plots) to study variable importance.
sv_dependence(): Dependence plots to study feature effects (optionally colored by heuristically strongest interacting feature).
They require a “shapviz” object, which is built from two things only:
S: Matrix of SHAP values
X: Dataset with corresponding feature values
Furthermore, a “baseline” can be passed to represent an average prediction on the scale of the SHAP values.
A key feature of the “shapviz” package is that X is used for visualization only. Thus it is perfectly fine to use factor variables, even if the underlying model would not accept these.
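As a minimal sketch with made-up numbers (close to the package README), such an object can be constructed directly from a SHAP matrix and feature data:
R
# Hypothetical SHAP matrix S, feature data X, and baseline
S <- matrix(c(1, -1, -1, 1), ncol = 2, dimnames = list(NULL, c("x", "y")))
X <- data.frame(x = c("a", "b"), y = c(100, 10))
sv <- shapviz(S, X = X, baseline = 4)
sv_waterfall(sv, row_id = 1)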
To further simplify the use of shapviz, direct connectors to packages such as XGBoost and LightGBM are available.
One line of code creates a shapviz object. It contains SHAP values and feature values for the set of observations we are interested in. Note again that X is solely used as explanation dataset, not for calculating SHAP values.
In this example we construct the shapviz object directly from the fitted XGBoost model. Thus we also need to pass a corresponding prediction dataset X_pred used for calculating SHAP values by XGBoost.
R
shp <- shapviz(fit, X_pred = data.matrix(X_small), X = X_small)
Explaining one single prediction
Let’s start by explaining a single prediction by a waterfall plot or, alternatively, a force plot.
R
# Two types of visualizations
sv_waterfall(shp, row_id = 1)
sv_force(shp, row_id = 1)
Waterfall plot
Factor/character variables are kept as they are, even if the underlying XGBoost model required them to be integer encoded.
Force plot
Explaining the model as a whole
We have decomposed 2000 predictions, not just one. This allows us to study variable importance at a global model level by studying average absolute SHAP values as a bar plot or by looking at beeswarm plots of SHAP values.
Beeswarm plot
Bar plot
Beeswarm plot overlaid with bar plot
A scatterplot of the SHAP values of a feature like color against its observed values gives a great impression of the feature effect on the response. Vertical scatter provides additional information on interaction effects. shapviz offers a heuristic to pick the feature with the potentially strongest interaction for the color scale.
R
sv_dependence(shp, v = "color", "auto")
Dependence plot with automatic interaction colorization
Summary
The “shapviz” package has a single purpose: making SHAP plots.
Its interface is optimized for existing SHAP crunching packages and can easily be used in future packages as well.
All plots are highly customizable. Furthermore, they are all written with ggplot and allow corresponding modifications.
There are different R packages devoted to model-agnostic interpretability, DALEX and iml being among the best known. In 2019, I added flashlight to this list for a couple of reasons:
Its explainers work with case weights.
Multiple explainers can be combined to a multi-explainer.
Stratified calculation is possible.
Since almost all plots in flashlight are constructed with ggplot, it is super easy to turn them into interactive plotly objects: just add a simple ggplotly() to the end of the call.
We will use a sweet dataset with more than 20’000 houses to model house prices by a set of derived features such as the logarithmic living area. The location will be represented by the postal code.
Data preparation
We first load the data and prepare some of the columns for modeling. Furthermore, we specify the set of features and the response.
Now, we are ready to inspect our two models regarding performance, variable importance, and effects.
Set up explainers
First, we pack all model-dependent information into flashlights (the explainer objects) and combine them into a multiflashlight. As evaluation dataset, we pass the test data. This ensures that interpretability tools using the response (e.g., performance measures and permutation importance) are not biased by overfitting.
Let’s evaluate model RMSE and R-squared on the hold-out dataset. Here, the mixed-effects model performs a tiny little bit better than the random forest:
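The data preparation and model fitting code is omitted here; a hedged sketch of the explainer setup and performance evaluation could look as follows (fit_lmer, fit_rf, test, and y are assumptions, not code from the post):
R
# Sketch only: fit_lmer, fit_rf, test, and y are assumed to exist
library(flashlight)
library(MetricsWeighted)
fl_mixed <- flashlight(model = fit_lmer, label = "mixed-effects")
fl_rf <- flashlight(
  model = fit_rf,
  label = "random forest",
  predict_function = function(m, X) predict(m, X)$predictions
)
# Combine into a multiflashlight, attaching hold-out data and metrics
fls <- multiflashlight(
  list(fl_mixed, fl_rf),
  data = test,
  y = y,
  metrics = list(RMSE = rmse, `R-squared` = r_squared)
)
# Performance on the hold-out data
light_performance(fls) %>%
  plot(fill = "darkred")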
Next, we inspect the variable strength based on permutation importance. It shows by how much the RMSE is being increased when shuffling a variable before prediction. The results are quite similar between the two models.
R
(light_importance(fls, v = x) %>%
plot(fill = "darkred") +
labs(title = "Permutation importance", y = "Drop in RMSE")) %>%
ggplotly()
Variable importance (png)
ICE plot
To get an impression of the effect of the living area, we select 200 observations and profile their predictions with increasing (log) living area, keeping everything else fixed (Ceteris Paribus). These ICE (individual conditional expectation) plots are vertically centered in order to highlight potential interaction effects. If all curves coincide, there are no interaction effects and we can say that the effect of the feature is modelled in an additive way (no surprise for the additive linear mixed-effects model).
R
(light_ice(fls, v = "log_sqft_living", n_max = 200, center = "middle") %>%
plot(alpha = 0.05, color = "darkred") +
labs(title = "Centered ICE plot", y = "log_price (shifted)")) %>%
ggplotly()
Partial dependence plots
Averaging many uncentered ICE curves provides the famous partial dependence plot, introduced in Friedman’s seminal paper on gradient boosting machines (2001).
R
(light_profile(fls, v = "log_sqft_living", n_bins = 21) %>%
plot(rotate_x = FALSE) +
labs(title = "Partial dependence plot", y = y) +
scale_colour_viridis_d(begin = 0.2, end = 0.8)) %>%
ggplotly()
Partial dependence plots (png)
Multiple effects visualized together
The last figure extends the partial dependence plot with three additional curves, all evaluated on the hold-out dataset:
Average observed values
Average predictions
ALE plot (“accumulated local effects”, an alternative to partial dependence plots with relaxed Ceteris Paribus assumption)
R
(light_effects(fls, v = "log_sqft_living", n_bins = 21) %>%
plot(use = "all") +
labs(title = "Different effect estimates", y = y) +
scale_colour_viridis_d(begin = 0.2, end = 0.8)) %>%
ggplotly()
This is the next article in our series “Lost in Translation between R and Python”. The aim of this series is to provide high-quality R and Python 3 code to achieve some non-trivial tasks. If you want to learn R, check out the R tab below. Similarly, if you want to learn Python, the Python tab will be your friend.
DuckDB
DuckDB is a fantastic in-process SQL database management system written completely in C++. Check its official documentation and other blog posts like this one to get a feeling for its superpowers. It is getting better and better!
Some of the highlights:
Easy installation in R and Python, made possible via language bindings.
Multi-threaded and fast.
Allows working with data larger than RAM.
Can fire SQL queries on R and Pandas tables.
Can fire SQL queries on (multiple!) csv and/or Parquet files.
Additional packages required to run the code of this post are indicated in the code.
A first query
Let’s start by loading a dataset, initializing DuckDB and running a simple query.
The dataset we use here contains information on over 20,000 sold houses in King County. Along with the sale price, different features describe the size and location of the properties. The dataset is available on OpenML.org with ID 42092.
R
Python
library(OpenML)
library(duckdb)
library(tidyverse)
# Load data
df <- getOMLDataSet(data.id = 42092)$data
# Initialize duckdb, register df and materialize first query
con <- dbConnect(duckdb())
duckdb_register(con, name = "df", df = df)
con %>%
dbSendQuery("SELECT * FROM df limit 5") %>%
dbFetch()
import duckdb
import pandas as pd
from sklearn.datasets import fetch_openml
# Load data
df = fetch_openml(data_id=42092, as_frame=True)["frame"]
# Initialize duckdb, register df and fire first query
# If out-of-RAM: duckdb.connect("py.duckdb", config={"temp_directory": "a_directory"})
con = duckdb.connect()
con.register("df", df)
con.execute("SELECT * FROM df limit 5").fetchdf()
Result of first query (from R)
Average price per grade
If you like SQL, then you can do your data preprocessing and simple analyses with DuckDB. Here, we calculate the average house price per grade (the higher the grade, the better the house).
R
Python
query <-
"
SELECT AVG(price) avg_price, grade
FROM df
GROUP BY grade
ORDER BY grade
"
avg <- con %>%
dbSendQuery(query) %>%
dbFetch()
avg
# Average price per grade
query = """
SELECT AVG(price) avg_price, grade
FROM df
GROUP BY grade
ORDER BY grade
"""
avg = con.execute(query).fetchdf()
avg
R output
Highlight: queries to files
The last query will be applied directly to files on disk. To demonstrate this fantastic feature, we first save “df” as a parquet file and “avg” as a csv file.
# Save df and avg to different file types
df.to_parquet("housing.parquet") # pyarrow=7
avg.to_csv("housing_avg.csv", index=False)
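Only the Python version of this step is shown; a hedged R counterpart (assuming the arrow package for Parquet export) could be:
R
# Sketch: save df and avg from R, assuming the arrow package is installed
library(arrow)
write_parquet(df, "housing.parquet")
write.csv(avg, "housing_avg.csv", row.names = FALSE)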
Let’s load some columns of the “housing.parquet” data, but only rows with grades whose average price exceeds one million USD. Agreed, that query does not make too much sense, but I hope you get the idea… 😃
R
Python
# "Complex" query
query2 <- "
SELECT price, sqft_living, A.grade, avg_price
FROM 'housing.parquet' A
LEFT JOIN 'housing_avg.csv' B
ON A.grade = B.grade
WHERE B.avg_price > 1000000
"
expensive_grades <- con %>%
dbSendQuery(query2) %>%
dbFetch()
head(expensive_grades)
# dbDisconnect(con)
# Complex query
query2 = """
SELECT price, sqft_living, A.grade, avg_price
FROM 'housing.parquet' A
LEFT JOIN 'housing_avg.csv' B
ON A.grade = B.grade
WHERE B.avg_price > 1000000
"""
expensive_grades = con.execute(query2).fetchdf()
expensive_grades
# con.close()
R output
Last words
DuckDB is cool!
If you have strong SQL skills but do not know R or Python so well, this is a great way to get used to those programming languages.
If you are unfamiliar with SQL but like R and/or Python, you can use DuckDB for a while and end up being an SQL addict.
If your analysis involves combining many large files during preprocessing, then you can try the trick shown in the last example of this post.
It must have been around the year 2000 when I wrote my first snippet of S-PLUS/R code. One thing I learned back then:
Loops are slow. Replace them with
vectorized calculations or
if vectorization is not possible, use sapply() et al.
Since then, the R core team and the community have invested tons of time in improving R and also making it faster. There are things like Rcpp and parallel computing to speed up loops.
But what still relatively few R users know: loops are not that slow anymore. We want to demonstrate this using two examples.
Example 1: sqrt()
We use three ways to calculate the square root of a vector of random numbers:
Vectorized calculation. This will be the way to go because it is internally optimized in C.
A loop. This must be super slow for large vectors.
vapply() (as a type-safe alternative to sapply()).
The three approaches are then compared via bench::mark() regarding their speed for different vector lengths n. The results are compared first regarding absolute median times, and secondly (using an independent run) on a relative scale (1 = the vectorized approach).
R
library(tidyverse)
library(bench)
# Calculate square root for each element in loop
sqrt_loop <- function(x) {
out <- numeric(length(x))
for (i in seq_along(x)) {
out[i] <- sqrt(x[i])
}
out
}
# Example
sqrt_loop(1:4) # 1.000000 1.414214 1.732051 2.000000
# Compare its performance with two alternatives
sqrt_benchmark <- function(n) {
x <- rexp(n)
mark(
vectorized = sqrt(x),
loop = sqrt_loop(x),
vapply = vapply(x, sqrt, FUN.VALUE = 0.0),
# relative = TRUE
)
}
# Combine results of multiple benchmarks and plot results
multiple_benchmarks <- function(one_bench, N) {
res <- vector("list", length(N))
for (i in seq_along(N)) {
res[[i]] <- one_bench(N[i]) %>%
mutate(n = N[i], expression = names(expression))
}
ggplot(bind_rows(res), aes(n, median, color = expression)) +
geom_point(size = 3) +
geom_line(size = 1) +
scale_x_log10() +
ggtitle(deparse1(substitute(one_bench))) +
theme(legend.position = c(0.8, 0.15))
}
# Apply simulation
multiple_benchmarks(sqrt_benchmark, N = 10^seq(3, 6, 0.25))
Absolute timings
Absolute median times on the “sqrt()” task
Relative timings (using a second run)
Relative median times of a separate run on the “sqrt()” task
We see:
Run times increase quite linearly with vector size.
Vectorization is more than ten times faster than the naive loop.
Most strikingly, vapply() is much slower than the naive loop. Would you have thought this?
Example 2: paste()
For the second example, we use a less simple function, namely
paste("Number", prettyNum(x, digits = 5))
What will our three approaches (vectorized, naive loop, vapply) show on this task?
R
pretty_paste <- function(x) {
paste("Number", prettyNum(x, digits = 5))
}
# Example
pretty_paste(pi) # "Number 3.1416"
# Again, call pretty_paste() for each element in a loop
paste_loop <- function(x) {
out <- character(length(x))
for (i in seq_along(x)) {
out[i] <- pretty_paste(x[i])
}
out
}
# Compare its performance with two alternatives
paste_benchmark <- function(n) {
x <- rexp(n)
mark(
vectorized = pretty_paste(x),
loop = paste_loop(x),
vapply = vapply(x, pretty_paste, FUN.VALUE = ""),
# relative = TRUE
)
}
multiple_benchmarks(paste_benchmark, N = 10^seq(3, 5, 0.25))
Absolute timings
Absolute median times on the “paste()” task
Relative timings (using a second run)
Relative median times of a separate run on the “paste()” task
In contrast to the first example, vapply() is now as fast as the naive loop.
The time advantage of the vectorized approach is much less impressive. In the median, the loop takes only about 50% longer.
Conclusion
Vectorization is fast and easy to read. If available, use this. No surprise.
If you use vapply/sapply/lapply, do it for the style, not for the speed. In some cases, the loop will be faster. And, depending on the situation and the audience, a loop might actually be even easier to read.
Besides the many negative aspects of going through a pandemic, there are also certain positive ones like having time to write short blog posts like this.
This one picks up a topic that was intensively discussed a couple of years ago on Wolfram’s page: Namely that the damped sine wave
f(t) = t sin(t)
can be used to draw a Christmas tree. Throw in some 3D animation using the R package rgl and the tree begins to become virtual reality…
Here is our version using just ten lines of R code:
R
library(rgl)
t <- seq(0, 100, by = 0.7)^0.6
x <- t * c(sin(t), sin(t + pi))
y <- t * c(cos(t), cos(t + pi))
z <- -2 * c(t, t)
color <- rep(c("darkgreen", "gold"), each = length(t))
open3d(windowRect = c(100, 100, 600, 600), zoom = 0.9)
bg3d("black")
spheres3d(x, y, z, radius = 0.3, color = color)
# On screen (skip if export)
play3d(spin3d(axis = c(0, 0, 1), rpm = 4))
# Export (requires 3rd party tool "ImageMagick" resp. magick-package)
# movie3d(spin3d(axis = c(0, 0, 1), rpm = 4), duration = 30, dir = getwd())
Exported as gif using magick
Christian and I wish you a relaxing time over Christmas. Take care of the people you love and stay healthy and safe.
This is the next article in our series “Lost in Translation between R and Python”. The aim of this series is to provide high-quality R and Python 3 code to achieve some non-trivial tasks. If you want to learn R, check out the R tab below. Similarly, if you want to learn Python, the Python tab will be your friend.
Monotonic constraints
On ML competition platforms like Kaggle, complex and unintuitively behaving models dominate. In this respect, reality is completely different: there, the majority of models do not serve as pure prediction machines but rather as a fruitful source of information. Furthermore, even when used as a prediction machine, the users of a model might expect a certain degree of consistency when “playing” with input values.
A classic example are statistical house appraisal models. An additional bathroom or an additional square foot of ground area is expected to raise the appraisal, everything else being fixed (ceteris paribus). The user might lose trust in the model if the opposite happens.
One way to enforce such consistency is to monitor the signs of coefficients of a linear regression model. Another useful strategy is to impose monotonicity constraints on selected model effects.
Trees and monotonic constraints
Monotonicity constraints are especially simple to implement for decision trees. The rule is basically as follows: If a monotonicity constraint would be violated by a split on feature X, it is rejected. (Or a large penalty is subtracted from the corresponding split gain.) This will imply monotonic behavior of predictions in X, keeping all other features fixed.
Tree ensembles like boosted trees or random forests will automatically inherit this property.
Boosted trees
Most implementations of boosted trees offer monotonicity constraints, for example XGBoost and LightGBM (both via the parameter monotone_constraints).
Unfortunately, the picture is completely different for random forests. At the time of writing, I am not aware of any random forest implementation in R or Python offering this useful feature.
Some options
Implement monotonicity-constrained random forests from scratch.
Ask for this feature in existing implementations.
Be creative and use XGBoost to emulate random forests.
For the moment, let’s stick to option 3. In our last R <-> Python blog post, we demonstrated that XGBoost’s random forest mode works essentially as well as standard random forest implementations, at least in regression settings and using sensible defaults.
Warning: Be careful with imposing monotonicity constraints
Ask yourself: does the constraint really make sense for all possible values of other features? You will see that the answer is often “no”.
An example: If your house price model uses the features “number of rooms” and “living area”, then a monotonicity constraint on “living area” might make sense (given any number of rooms), while such a constraint would be nonsensical for the number of rooms. Why? Because having six rooms in a 1200 square feet home is not necessarily better than having just five rooms in an equally sized home.
Let’s try it out
We use a nice dataset containing information on over 20,000 sold houses in King County. Along with the sale price, different features describe the size and location of the properties. The dataset is available on OpenML.org with ID 42092.
Some rows and columns from the Kings County house dataset.
The following R and Python codes
fetch the data,
prepare the ML setting,
fit unconstrained XGBoost random forests using log sales price as response,
and visualize the effect of log ground area by individual conditional expectation (ICE) curves.
An ICE curve for variable X shows how the prediction of one specific observation changes if the value of X changes. Repeating this for multiple observations gives an idea of the effect of X. The average over multiple ICE curves produces the famous partial dependence plot.
R
Python
library(farff)
library(OpenML)
library(dplyr)
library(xgboost)
set.seed(83454)
rmse <- function(y, pred) {
sqrt(mean((y-pred)^2))
}
# Load King County house prices dataset on OpenML
# ID 42092, https://www.openml.org/d/42092
df <- getOMLDataSet(data.id = 42092)$data
head(df)
# Prepare
df <- df %>%
mutate(
log_price = log(price),
log_sqft_lot = log(sqft_lot),
year = as.numeric(substr(date, 1, 4)),
building_age = year - yr_built,
zipcode = as.integer(as.character(zipcode))
)
# Define response and features
y <- "log_price"
x <- c("grade", "year", "building_age", "sqft_living",
"log_sqft_lot", "bedrooms", "bathrooms", "floors", "zipcode",
"lat", "long", "condition", "waterfront")
# random split
ix <- sample(nrow(df), 0.8 * nrow(df))
y_test <- df[[y]][-ix]
# Fit untuned, but good(!) XGBoost random forest
dtrain <- xgb.DMatrix(data.matrix(df[ix, x]),
label = df[ix, y])
params <- list(
objective = "reg:squarederror",
learning_rate = 1,
num_parallel_tree = 500,
subsample = 0.63,
colsample_bynode = 1/3,
reg_lambda = 0,
max_depth = 20,
min_child_weight = 2
)
system.time( # 25 s
unconstrained <- xgb.train(
params,
data = dtrain,
nrounds = 1,
verbose = 0
)
)
pred <- predict(unconstrained, data.matrix(df[-ix, x]))
# Test RMSE: 0.172
rmse(y_test, pred)
# ICE curves via our flashlight package
library(flashlight)
pred_xgb <- function(m, X) predict(m, data.matrix(X[, x]))
fl <- flashlight(
model = unconstrained,
label = "unconstrained",
data = df[ix, ],
predict_function = pred_xgb
)
light_ice(fl, v = "log_sqft_lot", indices = 1:9,
evaluate_at = seq(7, 11, by = 0.1)) %>%
plot()
Figure 1 (R output): ICE curves of log(ground area) for the first nine observations. Many non-monotonic parts are visible.
We clearly see many non-monotonic (and in this case counterintuitive) ICE curves.
What would a model give with monotonically increasing constraint on the ground area?
R
Python
# Monotonic increasing constraint
(params$monotone_constraints <- 1 * (x == "log_sqft_lot"))
system.time( # 179s
monotonic <- xgb.train(
params,
data = dtrain,
nrounds = 1,
verbose = 0
)
)
pred <- predict(monotonic, data.matrix(df[-ix, x]))
# Test RMSE: 0.176
rmse(y_test, pred)
fl_m <- flashlight(
model = monotonic,
label = "monotonic",
data = df[ix, ],
predict_function = pred_xgb
)
light_ice(fl_m, v = "log_sqft_lot", indices = 1:9,
evaluate_at = seq(7, 11, by = 0.1)) %>%
plot()
# One needs to pass the constraints as single string, which is rather ugly
mc = "(" + ",".join([str(int(x == "log_sqft_lot")) for x in xvars]) + ")"
print(mc)
# Modeling - wall time 49 seconds
constrained = XGBRFRegressor(monotone_constraints=mc, **param_dict)
constrained.fit(X_train, y_train)
# Test RMSE: 0.178
pred = constrained.predict(X_test)
print(f"RMSE: {mean_squared_error(y_test, pred, squared=False):.03f}")
# ICE and PDP - wall time 39 seconds
PartialDependenceDisplay.from_estimator(
constrained,
X=X_train,
features=["log_sqft_lot"],
kind="both",
subsample=20,
random_state=1,
)
Figure 2 (R output): ICE curves of the same observations as in Figure 1, but now with monotonic constraint. All curves are monotonically increasing.
We see:
It works! Each ICE curve in log(lot area) is monotonically increasing. This means that predictions are monotonically increasing in lot area, keeping all other feature values fixed.
The model performance is slightly worse. This is the price paid for receiving a more intuitive behaviour in an important feature.
In Python, both models take about the same time to fit (30-40 s on a 4 core i7 CPU laptop). Curiously, in R, the constrained model takes about six times longer to fit than the unconstrained one (170 s vs 30 s).
Summary
Monotonic constraints help to create intuitive models.
Unfortunately, as per now, native random forest implementations do not offer such constraints.
Using XGBoost’s random forest mode is a temporary solution until native random forest implementations add this feature.
Be careful not to add too many constraints: does a constraint really make sense for all other (fixed) choices of feature values?
Recently, together with Yang Liu, I have been investing some time in extending the R package SHAPforxgboost. This package is designed to make beautiful SHAP plots for XGBoost models, using the native TreeSHAP implementation shipped with XGBoost.
Some of the new features of SHAPforxgboost
Added support for LightGBM models, using the native treeshap implementation for LightGBM. So don’t get tricked by the package name “SHAPforxgboost” :-).
The function shap.plot.dependence() has received the option to select the heuristically strongest interacting feature on the color scale, see last section for details.
shap.plot.dependence() now allows jitter and alpha transparency.
The new function shap.importance() returns SHAP importances without plotting them.
An interesting alternative to calculate and plot SHAP values for different tree-based models is the treeshap package by Szymon Maksymiuk et al. Keep an eye on this one – it is actively being developed!
What is SHAP?
A couple of years ago, the concept of Shapley values, originating in game theory of the 1950s, was rediscovered by Scott Lundberg and others as an interesting approach to explain predictions of ML models.
The basic idea is to decompose a prediction in a fair way into additive contributions of features. Repeating the process for many predictions provides a brilliant way to investigate the model as a whole.
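For reference, the exact Shapley value of feature j is the weighted average of its marginal contributions over all subsets S of the other features (v denotes the value function, p the number of features):
\phi_j = \sum_{S \subseteq \{1, \dots, p\} \setminus \{j\}} \frac{|S|! \, (p - |S| - 1)!}{p!} \left( v(S \cup \{j\}) - v(S) \right)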
The main resource on the topic is Scott Lundberg’s site. Besides this, I’d recommend going through these two fantastic blog posts, even if you already know what SHAP values are.
As an example, we will try to model log house prices of 20’000 sold houses in King County. The dataset is available e.g. on OpenML.org under ID 42092.
Some rows and columns from the Kings County house dataset.
Fetch and prepare data
We start by downloading the data and preparing it for modelling.
R
library(farff)
library(OpenML)
library(dplyr)
library(xgboost)
library(ggplot2)
library(SHAPforxgboost)
# Load King County house prices dataset on OpenML
# ID 42092, https://www.openml.org/d/42092
df <- getOMLDataSet(data.id = 42092)$data
head(df)
# Prepare
df <- df %>%
mutate(
log_price = log(price),
log_sqft_lot = log(sqft_lot),
year = as.numeric(substr(date, 1, 4)),
building_age = year - yr_built,
zipcode = as.integer(as.character(zipcode))
)
# Define response and features
y <- "log_price"
x <- c("grade", "year", "building_age", "sqft_living",
"log_sqft_lot", "bedrooms", "bathrooms", "floors", "zipcode",
"lat", "long", "condition", "waterfront")
# random split
set.seed(83454)
ix <- sample(nrow(df), 0.8 * nrow(df))
Fit XGBoost model
Next, we fit a manually tuned XGBoost model to the data.
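The exact tuning is not reproduced here; a hedged sketch of such a fit (with illustrative, assumed parameter values) could look like this:
R
# Sketch of a tuned fit; the parameter values below are assumptions
dtrain <- xgb.DMatrix(data.matrix(df[ix, x]), label = df[ix, y])
dvalid <- xgb.DMatrix(data.matrix(df[-ix, x]), label = df[-ix, y])
params <- list(
  objective = "reg:squarederror",
  learning_rate = 0.05,
  max_depth = 6,
  subsample = 0.8,
  colsample_bynode = 0.8
)
fit_xgb <- xgb.train(
  params,
  data = dtrain,
  watchlist = list(valid = dvalid),
  nrounds = 5000,
  early_stopping_rounds = 20,
  verbose = 0
)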
The resulting model consists of about 600 trees and reaches a validation RMSE of 0.16. This means that about 2/3 of the predictions are within 16% of the observed price, using the empirical rule.
Compact SHAP analysis
ML models are rarely of any use without interpreting their results, so let’s use SHAP to peek into the model.
The analysis includes a first plot with SHAP importances. Then, with decreasing importance, dependence plots are shown to get an impression of the effect of each feature.
R
# Step 1: Select some observations
X <- data.matrix(df[sample(nrow(df), 1000), x])
# Step 2: Crunch SHAP values
shap <- shap.prep(fit_xgb, X_train = X)
# Step 3: SHAP importance
shap.plot.summary(shap)
# Step 4: Loop over dependence plots in decreasing importance
for (v in shap.importance(shap, names_only = TRUE)) {
p <- shap.plot.dependence(shap, v, color_feature = "auto",
alpha = 0.5, jitter_width = 0.1) +
ggtitle(v)
print(p)
}
Some of the plots are shown below. The code actually produces all plots, see the corresponding html output on github.
Figure 1: SHAP importance for the XGBoost model. The results make intuitive sense. Location and size are among the strongest predictors.
Figure 2: SHAP dependence for the second strongest predictor. The larger the living area, the higher the log price. There is not much vertical scatter, indicating that living area acts quite additively on the predictions on the log scale.
Figure 3: SHAP dependence for a less important predictor. The effect of “condition” 4 vs 3 seems to depend on the zipcode (see the color). For some zipcodes, the condition does not have a big effect on the price, while for other zipcodes, the effect is clearly higher.
Same workflow for LightGBM
Let’s try out the SHAPforxgboost package with LightGBM.
Note: LightGBM Version 3.2.1 on CRAN is not working properly under Windows. This will be fixed in the next release of LightGBM. As a temporary solution, you need to build it from the current master branch.
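Again, the tuning code is not reproduced; a hedged sketch (parameter values are assumptions) could be:
R
# Sketch of the LightGBM fit; the parameter values are assumptions
library(lightgbm)
dtrain_lgb <- lgb.Dataset(data.matrix(df[ix, x]), label = df[ix, y])
dvalid_lgb <- lgb.Dataset.create.valid(
  dtrain_lgb, data.matrix(df[-ix, x]), label = df[-ix, y]
)
params <- list(objective = "regression", learning_rate = 0.05, num_leaves = 63)
fit_lgb <- lgb.train(
  params,
  data = dtrain_lgb,
  valids = list(valid = dvalid_lgb),
  nrounds = 5000,
  early_stopping_rounds = 20
)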
Early stopping on the validation data selects about 900 trees as optimal, resulting in a validation RMSE of 0.16 as well.
SHAP analysis
We use exactly the same short snippet to analyze the model by SHAP.
R
X <- data.matrix(df[sample(nrow(df), 1000), x])
shap <- shap.prep(fit_lgb, X_train = X)
shap.plot.summary(shap)
for (v in shap.importance(shap, names_only = TRUE)) {
p <- shap.plot.dependence(shap, v, color_feature = "auto",
alpha = 0.5, jitter_width = 0.1) +
ggtitle(v)
print(p)
}
Again, we only show some of the output and refer to the html of the corresponding rmarkdown. Overall, the model seems to be very similar to the one obtained by XGBoost.
Figure 4: SHAP importance for LightGBM. By chance, the order of importance is the same as for XGBoost.
Figure 5: The dependence plot for the living area also looks identical in shape to the one of the XGBoost model.
How does the dependence plot select the color variable?
By default, Scott’s shap package for Python uses a statistical heuristic to colorize the points in the dependence plot by the variable with the potentially strongest interaction. The heuristic used by SHAPforxgboost is slightly different and directly uses conditional variances. More specifically, the variable X on the x-axis as well as each other feature Z_k is binned into categories. Then, for each Z_k, the conditional variance of the SHAP values across binned X and Z_k is calculated. The Z_k with the highest conditional variance is selected as the color variable.
Note that the heuristic does not depend on “shap interaction values” in order to save time (and because these would not be available for LightGBM).
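As an illustration, here is a minimal sketch of such a conditional-variance heuristic (one plausible reading of the description above; the exact implementation in SHAPforxgboost may differ):
R
# Sketch: pick the feature whose bins explain most of the vertical scatter.
# s: SHAP values of the x-axis feature, x: its feature values,
# Z: data.frame of candidate color features.
strongest_interaction <- function(s, x, Z, bins = 10) {
  x_bin <- cut(rank(x), breaks = bins)
  score <- sapply(Z, function(z) {
    z_bin <- if (is.numeric(z)) cut(rank(z), breaks = bins) else factor(z)
    # Within each x bin: how much does the mean SHAP value vary across z bins?
    per_bin <- tapply(seq_along(s), x_bin, function(i) {
      var(tapply(s[i], z_bin[i], mean), na.rm = TRUE)
    })
    mean(unlist(per_bin), na.rm = TRUE)
  })
  names(which.max(score))
}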
The following simple example shows that it works. First, a dataset is created and a model with three features and a strong interaction between x1 and x2 is fitted. Then, we look at the dependence plots to see whether they are consistent with the model/data situation.
Figure 6: The dependence plots for x1 shows a clear interaction effect with the color variable x2. This is as simulated in the data.Figure 7: The dependence plots for x3 does not show clear interaction effects, consistent with the data situation.
The full R script and rmarkdown file of this post can be found on github.
This is the next article in our series “Lost in Translation between R and Python”. The aim of this series is to provide high-quality R and Python 3 code to achieve some non-trivial tasks. If you want to learn R, check out the R tab below. Similarly, if you want to learn Python, the Python tab will be your friend.
For sure, XGBoost is well known for its excellent gradient boosting trees implementation. Although less obvious, it is no secret that it also offers a way to fit single trees in parallel, emulating random forests, see the great explanations on the official XGBoost page. Still, there seems to be quite some confusion about how to choose certain parameters in order to get good results. The aim of this post is to clarify this.
LightGBM also offers a random forest mode. We will investigate it in a later post.
Why would you want to use XGBoost to fit a random forest?
Interaction & monotonic constraints are available for XGBoost, but typically not for random forest implementations. A separate post will follow to illustrate this in the random forest setting.
XGBoost can natively deal with missing values in an elegant way, unlike many random forest algorithms.
You can stick to the same data preparation pipeline.
I had additional reasons in mind, e.g. using non-standard loss functions, but this did not turn out to work well. This is possibly due to the fact that XGBoost uses a quadratic approximation to the loss, which is exact only for the mean squared error loss (MSE).
How to enable the ominous random forest mode?
Following the official explanations, we would need to set
num_parallel_tree to the number of trees in the forest,
learning_rate and num_boost_round to 1.
There are further valuable tips, e.g. to set row and column subsampling to values below one to resemble true random forests.
Still, most of the regularization parameters of XGBoost tend to favour simple trees, while the idea of a random forest is to aggregate deep, overfitted trees. These regularization parameters have to be changed as well in order to get good results.
So voilà, my suggestions.
Suggestions for parameters
learning_rate=1 (see above)
num_boost_round=1 (see above) Has to be set in train(), not in the parameter list. It is called nrounds in R.
subsample=0.63 A random forest draws a bootstrap sample to fit each tree. This means about 63% of the rows will enter one or multiple times into the model, leaving 37% out. (For large n, the chance that a given row is never drawn in n draws with replacement is (1 - 1/n)^n ≈ exp(-1) ≈ 0.37.) While XGBoost does not offer such sampling with replacement, we can still introduce the necessary randomness in the dataset used to fit a tree by skipping 37% of the rows per tree.
colsample_bynode=floor(sqrt(m))/m Column subsampling per split is the main source of randomness in a random forest. A good default is usually to sample the square root of the number of features m or m/3. XGBoost offers different colsample_by* parameters, but it is important to sample per split resp. per node, not by tree. Otherwise, it might happen that important features are missing in a tree altogether, leading to overall bad predictions.
num_parallel_tree The number of trees. Native implementations of random forests usually use a default value between 100 and 500. The more, the better—but slower.
reg_lambda=0 XGBoost uses a default L2 penalty of 1! This will typically lead to shallow trees, colliding with the idea of a random forest to have deep, wiggly trees. In my experience, leaving this parameter at its default will lead to extremely bad XGBoost random forest fits. Set it to zero or a value close to zero.
max_depth=20 Random forests usually train very deep trees, while XGBoost’s default is 6. A value of 20 corresponds to the default in the h2o random forest, so let’s go for their choice.
min_child_weight=2 The default of XGBoost is 1, which tends to be slightly too greedy in random forest mode. For binary classification, you would need to set it to a value close or equal to 0.
Of course these parameters can be tuned by cross-validation, but one of the reasons to love random forests is their good performance even with default parameters.
Compared to optimized random forests, XGBoost’s random forest mode is quite slow. At the cost of performance, choose
lower max_depth,
higher min_child_weight, and/or
smaller num_parallel_tree.
Let’s try it out with regression
We will use a nice house price dataset, consisting of information on over 20,000 sold houses in King County. Along with the sale price, different features describe the size and location of the properties. The dataset is available on OpenML.org with ID 42092.
Some rows and columns from the Kings County house dataset.
The following R resp. Python codes fetch the data, prepare the ML setting and fit a native random forest with good defaults. In R, we use the ranger package, in Python the implementation of scikit-learn.
The response variable is the logarithmic sales price. A healthy set of 13 variables are used as features.
R
Python
library(farff)
library(OpenML)
library(dplyr)
library(ranger)
library(xgboost)
set.seed(83454)
rmse <- function(y, pred) {
sqrt(mean((y-pred)^2))
}
# Load King County house prices dataset on OpenML
# ID 42092, https://www.openml.org/d/42092
df <- getOMLDataSet(data.id = 42092)$data
head(df)
# Prepare
df <- df %>%
mutate(
log_price = log(price),
year = as.numeric(substr(date, 1, 4)),
building_age = year - yr_built,
zipcode = as.integer(as.character(zipcode))
)
# Define response and features
y <- "log_price"
x <- c("grade", "year", "building_age", "sqft_living",
"sqft_lot", "bedrooms", "bathrooms", "floors", "zipcode",
"lat", "long", "condition", "waterfront")
m <- length(x)
# random split
ix <- sample(nrow(df), 0.8 * nrow(df))
# Fit untuned random forest
system.time( # 3 s
fit_rf <- ranger(reformulate(x, y), data = df[ix, ])
)
y_test <- df[-ix, y]
# Test RMSE: 0.173
rmse(y_test, predict(fit_rf, df[-ix, ])$pred)
# object.size(fit_rf) # 180 MB
# Imports
import numpy as np
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
def rmse(y_true, y_pred):
return np.sqrt(mean_squared_error(y_true, y_pred))
# Fetch data from OpenML
df = fetch_openml(data_id=42092, as_frame=True)["frame"]
print("Shape: ", df.shape)
df.head()
# Prepare data
df = df.assign(
year = lambda x: x.date.str[0:4].astype(int),
zipcode = lambda x: x.zipcode.astype(int)
).assign(
building_age = lambda x: x.year - x.yr_built,
)
# Feature list
xvars = [
"grade", "year", "building_age", "sqft_living",
"sqft_lot", "bedrooms", "bathrooms", "floors",
"zipcode", "lat", "long", "condition", "waterfront"
]
# Data split
y_train, y_test, X_train, X_test = train_test_split(
np.log(df["price"]), df[xvars],
train_size=0.8, random_state=766
)
# Fit scikit-learn rf
rf = RandomForestRegressor(
n_estimators=500,
max_features="sqrt",
max_depth=20,
n_jobs=-1,
random_state=104
)
rf.fit(X_train, y_train) # Wall time 3 s
# Test RMSE: 0.176
print(f"RMSE: {rmse(y_test, rf.predict(X_test)):.03f}")
Both in R and Python, the test RMSE is between 0.17 and 0.18, i.e. about 2/3 of the test predictions are within 18% of the observed value. Not bad! Note: The test performance depends on the split seed, so it does not make sense to directly compare the R and Python performance.
With XGBoost’s random forest mode
Now let’s try to reach the same performance with XGBoost’s random forest implementation using the above parameter suggestions.
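A hedged sketch, mirroring the parameter list above (df, ix, x, y, m, and y_test as defined in the previous code block):
R
# Sketch: XGBoost in random forest mode with the suggested defaults
dtrain <- xgb.DMatrix(data.matrix(df[ix, x]), label = df[ix, y])
params <- list(
  objective = "reg:squarederror",
  learning_rate = 1,
  num_parallel_tree = 500,
  subsample = 0.63,
  colsample_bynode = floor(sqrt(m)) / m,
  reg_lambda = 0,
  max_depth = 20,
  min_child_weight = 2
)
fit_xgb_rf <- xgb.train(params, data = dtrain, nrounds = 1, verbose = 0)
rmse(y_test, predict(fit_xgb_rf, data.matrix(df[-ix, x])))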
The performance of the XGBoost random forest is essentially as good as the native random forest implementations. And all this without any parameter tuning!
XGBoost is much slower than the optimized random forest implementations. If this is a problem, reduce the tree depth, for example. In this example, Python takes almost twice as much time as R. No idea why! The timings were made on a usual 4-core i7 processor.
Disk space required to store the model objects is comparable between XGBoost and native random forest implementations.
What if you would run the same model with XGBoost defaults?
With default reg_lambda=1: The performance would end up at a catastrophic RMSE of 0.35!
With default max_depth=6: The RMSE would be much worse (0.23) as well.
With colsample_bytree instead of colsample_bynode: The RMSE would deteriorate to 0.27.
Thus: It is essential to set some values to a good “random forest” default!
Does it always work that good?
Definitely not in classification settings. However, in regression settings with the MSE loss, XGBoost’s random forest mode is often as accurate as native implementations.
Classification models: In my experience, the XGBoost random forest mode does not work as well as a native random forest for classification, possibly because it uses only a quadratic approximation to the loss function.
Other regression examples: Using the setting of our last “R <–> Python” post (diamond duplicates and grouped sampling) and the same parameters as above, we get the following test RMSEs: with ranger (R code in link below): 0.1043, with XGBoost: 0.1042. Sweet!
Wrap up
With the right default parameters, XGBoost’s random forest mode reaches similar performance on regression problems as native random forest packages. Without any tuning!
For losses other than MSE, it does not work so well.
This is the next article in our series “Lost in Translation between R and Python”. The aim of this series is to provide high-quality R and Python 3 code to achieve some non-trivial tasks. If you want to learn R, check out the R tab below. Similarly, if you want to learn Python, the Python tab will be your friend.
Question: How many times did you use diamonds data to compare regression techniques like random forests and gradient boosting?
Answer: Probably a lot!
The curious fact
We recently stumbled over a curious fact regarding that dataset. 26% of the diamonds are duplicates regarding price and the four “C” variables. Within duplicates, the perspective variables table, depth, x, y, and z would differ as if a diamond had been measured from different angles.
In order to illustrate the issue, let us add the two auxiliary variables
id: group id of diamonds with identical price and four “C”, and
id_size: number of rows in that id
to the dataset and consider a couple of examples. You can view both R and Python code – but the specific output will differ because of language-specific naming of group ids.
R
Python
library(tidyverse)
# We add group id and its size
dia <- diamonds %>%
group_by(carat, cut, clarity, color, price) %>%
mutate(id = cur_group_id(),
id_size = n()) %>%
ungroup() %>%
arrange(id)
# Proportion of duplicates
1 - max(dia$id) / nrow(dia) # 0.26
# Some examples
dia %>%
filter(id_size > 1) %>%
head(10)
# Most frequent
dia %>%
arrange(-id_size) %>%
head(.$id_size[1])
# A random large diamond appearing multiple times
dia %>%
filter(id_size > 3) %>%
arrange(-carat) %>%
head(.$id_size[1])
import numpy as np
import pandas as pd
from plotnine.data import diamonds
# Variable groups
cat_vars = ["cut", "color", "clarity"]
xvars = cat_vars + ["carat"]
all_vars = xvars + ["price"]
print("Shape: ", diamonds.shape)
# Add id and id_size
df = diamonds.copy()
df["id"] = df.groupby(all_vars).ngroup()
df["id_size"] = df.groupby(all_vars)["price"].transform(len)
df.sort_values("id", inplace=True)
print(f'Proportion of dupes: {1 - df["id"].max() / df.shape[0]:.0%}')
print("Random examples")
print(df[df.id_size > 1].head(10))
print("Most frequent")
print(df.sort_values(["id_size", "id"]).tail(13))
print("A random large diamond appearing multiple times")
df[df.id_size > 3].sort_values("carat").tail(6)
Table 1: Some duplicates in the four “C” variables and price (Python output).
Table 2: One of the two(!) diamonds appearing a whopping 43 times (Python output).
Table 3: A large, 2.01 carat diamond appears six times (Python output).
Of course, having the same id does not necessarily mean that the rows really describe the same diamond. price and the four “C”s could coincide purely by chance. Nevertheless: there are exactly six diamonds of 2.01 carat and a price of 16,778 USD in the dataset. And they all have the same color, cut and clarity. This cannot be coincidence!
Why would this be problematic?
In the presence of grouped data, standard validation techniques tend to reward overfitting.
This becomes immediately clear having in mind the 2.01 carat diamond from Table 3. Standard cross-validation (CV) uses random or stratified sampling and would scatter the six rows of that diamond across multiple CV folds. Highly flexible algorithms like random forests or nearest-neighbour regression could exploit this by memorizing the price of this diamond in-fold and do very well out-of-fold. As a consequence, the stated CV performance would be too good and the choice of the modeling technique and its hyperparameters suboptimal.
With grouped data, a good approach is often to randomly sample the whole group instead of single rows. Using such grouped splitting ensures that all rows in the same group would end up in the same fold, removing the above described tendency to overfit.
Note 1. In our case of duplicates, a simple alternative to grouped splitting would be to remove the duplicates altogether. However, the occurrence of duplicates is just one of many situations where grouped or clustered samples appear in reality.
Note 2. The same considerations not only apply to cross-validation but also to simple train/validation/test splits.
Evaluation
What does this mean regarding our diamonds dataset? Using five-fold CV, we will estimate the true root-mean-squared error (RMSE) of a random forest predicting log price by the four “C”. We run this experiment twice: one time, we create the folds by random splitting and the other time by grouped splitting. How heavily will the results from random splitting be biased?
R
Python
library(ranger)
library(splitTools) # one of our packages on CRAN
set.seed(8325)
# We model log(price)
dia <- dia %>%
mutate(y = log(price))
# Helper function: calculate rmse
rmse <- function(obs, pred) {
sqrt(mean((obs - pred)^2))
}
# Helper function: fit model on one fold and evaluate
fit_on_fold <- function(fold, data) {
fit <- ranger(y ~ carat + cut + color + clarity, data = data[fold, ])
rmse(data$y[-fold], predict(fit, data[-fold, ])$pred)
}
# 5-fold CV for different split types
cross_validate <- function(type, data) {
folds <- create_folds(data$id, k = 5, type = type)
mean(sapply(folds, fit_on_fold, data = dia))
}
# Apply and plot
(results <- sapply(c("basic", "grouped"), cross_validate, data = dia))
barplot(results, col = "orange", ylab = "RMSE by 5-fold CV")
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, GroupKFold, KFold
from sklearn.metrics import make_scorer, mean_squared_error
import seaborn as sns
rmse = make_scorer(mean_squared_error, squared=False)
# Prepare y, X
df = df.sample(frac=1, random_state=6345)
y = np.log(df.price)
X = df[xvars].copy()
# Correctly ordered integer encoding
X[cat_vars] = X[cat_vars].apply(lambda x: x.cat.codes)
# Cross-validation
results = {}
rf = RandomForestRegressor(n_estimators=500, max_features="sqrt",
min_samples_leaf=5, n_jobs=-1)
for nm, strategy in zip(("basic", "grouped"), (KFold, GroupKFold)):
results[nm] = cross_val_score(
rf, X, y, cv=strategy(), scoring=rmse, groups=df.id
).mean()
print(results)
res = pd.DataFrame(results.items())
sns.barplot(x=0, y=1, data=res);
Figure 1: Test root-mean-squared error using different splitting methods (R output).
The RMSE of grouped CV (11%) is 8%-10% higher than that of random CV (10%). The standard technique therefore seems to be considerably biased.
Final remarks
The diamonds dataset is not only a brilliant example to demonstrate regression techniques but also a great way to show the importance of a clean validation strategy (in this case: grouped splitting).
Blind or automatic ML would most probably fail to detect non-trivial data structures like in this case and therefore use inappropriate validation strategies. The resulting model would be somewhere between suboptimal and dangerous. Just that nobody would know it!
The first step towards a good model validation strategy is data understanding. This is a mix of knowing the data source, how the data was generated, the meaning of columns and rows, descriptive statistics etc.
This is the next article in our series “Lost in Translation between R and Python”. The aim of this series is to provide high-quality R and Python 3 code to achieve some non-trivial tasks. If you want to learn R, check out the R tab below. Similarly, if you want to learn Python, the Python tab will be your friend.
The last one was a deep dive into historic mortality rates.
No Covid-19, no public data for a change: This post focusses on a real beauty, namely a decomposition of the R-squared in a linear regression model
E(y) = \alpha + \sum_{j = 1}^p x_j \beta_j
fitted by least-squares. If the response y and all p covariables are standardized to variance 1 beforehand, then the R-squared can be obtained as the cross-product of the fitted coefficients and the usual correlations between each covariable and the response:
R^2 = \sum_{j = 1}^p \hat{\beta}_j \, \mathrm{cor}(x_j, y)
Two elegant derivations can be found in this answer to the same question, written by the number 1 contributor to crossvalidated: whuber. Look up a couple of his posts – and statistics will suddenly feel super easy and clear.
Direct consequences of the formula are:
If a covariable is uncorrelated with the response, it cannot contribute to the R-squared, i.e. neither improve nor worsen it. This is not obvious.
A correlated covariable only improves R-squared if its coefficient is non-zero. Put differently: if the effect of a covariable is already fully covered by the other covariables, it does not improve the R-squared. This is somewhat obvious.
Note that all formulas refer to in-sample calculations.
Since we do not want to bore you with math, we simply demonstrate the result with short R and Python codes based on the famous iris dataset.
R
Python
y <- "Sepal.Width"
x <- c("Sepal.Length", "Petal.Length", "Petal.Width")
# Scaled version of iris
iris2 <- data.frame(scale(iris[c(y, x)]))
# Fit model
fit <- lm(reformulate(x, y), data = iris2)
summary(fit) # multiple R-squared: 0.524
(betas <- coef(fit)[x])
# Sepal.Length Petal.Length Petal.Width
# 1.1533143 -2.3734841 0.9758767
# Correlations (scaling does not matter here)
(cors <- cor(iris[, y], iris[x]))
# Sepal.Length Petal.Length Petal.Width
# -0.1175698 -0.4284401 -0.3661259
# The R-squared?
sum(betas * cors) # 0.524
# Import packages
import numpy as np
import pandas as pd
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
# Load data
iris = datasets.load_iris(as_frame=True).data
print("The data:", iris.head(3), sep = "\n")
# Specify response
yvar = "sepal width (cm)"
# Correlations of everyone with response
cors = iris.corrwith(iris[yvar]).drop(yvar)
print("\nCorrelations:", cors, sep = "\n")
# Prepare scaled response and covariables
X = StandardScaler().fit_transform(iris.drop(yvar, axis=1))
y = StandardScaler().fit_transform(iris[[yvar]])
# Fit linear regression
OLS = LinearRegression().fit(X, y)
betas = OLS.coef_[0]
print("\nScaled coefs:", betas, sep = "\n")
# R-squared via scikit-learn: 0.524
print(f"\nUsual R-squared:\t {OLS.score(X, y): .3f}")
# R-squared via decomposition: 0.524
rsquared = betas @ cors.values
print(f"Applying the formula:\t {rsquared: .3f}")
Indeed: the cross-product of coefficients and correlations equals the R-squared of 52%.
This is the third article in our series “Lost in Translation between R and Python”. The aim of this series is to provide high-quality R and Python 3 code to achieve some non-trivial tasks. If you want to learn R, check out the R tab below. Similarly, if you want to learn Python, the Python tab will be your friend.
Before diving into the data analysis, I would like to share with you that writing this text brought back some memories from my first year as an actuary. Indeed, I started in life insurance and soon switched to non-life. While investigating mortality may be seen as a dry matter, it reveals some interesting statistical problems which have to be properly addressed before drawing conclusions.
Similar to Post 2, we use publicly available data, this time from the Human Mortality Database and from the Federal Statistical Office of Switzerland, in order to calculate crude death rates, i.e. the number of deaths per person alive per year. This time, we look at longer time periods of 20 and over 100 years and take all causes of death into account. We caution against any misinterpretation: we show only crude death rates (CDR), which do not take into account any demographic shifts like changing age distributions or effects from measures taken against COVID-19.
Let us start by producing the first figure. We fetch the data from the internet, pick some countries of interest, focus on males and females combined, aggregate, and plot. The Python version uses the visualization library altair, which can generate interactive charts. Unfortunately, we can only show a static version in the blog post. If someone knows how to render Vega-Lite charts in WordPress, I’d be very interested in a secure solution.
R
Python
library(tidyverse)
library(lubridate)
# Fetch data
df_original <- read_csv(
"https://www.mortality.org/Public/STMF/Outputs/stmf.csv",
skip = 2
)
# 1. Select countries of interest and only "both" sexes
# Note: Germany "DEUTNP" and "USA" have short time series
# 2. Change to ISO-3166-1 ALPHA-3 codes
# 3. Create population pro rata temporis (exposure) to ease aggregation
df_mortality <- df_original %>%
filter(CountryCode %in% c("CAN", "CHE", "FRATNP", "GBRTENW", "SWE"),
Sex == "b") %>%
mutate(CountryCode = recode(CountryCode, "FRATNP" = "FRA",
"GBRTENW" = "England & Wales"),
population = DTotal / RTotal,
Year = ymd(Year, truncated = 2))
# Data aggregation per year and country
df <- df_mortality %>%
group_by(Year, CountryCode) %>%
summarise(CDR = sum(DTotal) / sum(population),
.groups = "drop")
ggplot(df, aes(x = Year, y = CDR, color = CountryCode)) +
geom_line(size = 1) +
ylab("Crude Death Rate per Year") +
theme(legend.position = c(0.2, 0.8))
import pandas as pd
import altair as alt
# Fetch data
df_mortality = pd.read_csv(
"https://www.mortality.org/Public/STMF/Outputs/stmf.csv",
skiprows=2,
)
# Select countries of interest and only "both" sexes
# Note: Germany "DEUTNP" and "USA" have short time series
df_mortality = df_mortality[
df_mortality["CountryCode"].isin(["CAN", "CHE", "FRATNP", "GBRTENW", "SWE"])
& (df_mortality["Sex"] == "b")
].copy()
# Change to ISO-3166-1 ALPHA-3 codes
df_mortality["CountryCode"].replace(
{"FRATNP": "FRA", "GBRTENW": "England & Wales"}, inplace=True
)
# Create population pro rata temporis (exposure) to ease aggregation
df_mortality = df_mortality.assign(
population=lambda df: df["DTotal"] / df["RTotal"]
)
# Data aggregation per year and country
df_mortality = (
df_mortality.groupby(["Year", "CountryCode"])[["population", "DTotal"]]
.sum()
.assign(CDR=lambda x: x["DTotal"] / x["population"])
# .filter(items=["CDR"]) # make df even smaller
.reset_index()
.assign(Year=lambda x: pd.to_datetime(x["Year"], format="%Y"))
)
chart = (
alt.Chart(df_mortality)
.mark_line()
.encode(
x="Year:T",
y=alt.Y("CDR:Q", scale=alt.Scale(zero=False)),
color="CountryCode:N",
)
.properties(title="Crude Death Rate per Year")
.interactive()
)
# chart.save("crude_death_rate.html")
chart
Crude death rate (CDR) for Canada (CAN), Switzerland (CHE), England & Wales, France (FRA) and Sweden (SWE). Data as of 07.02.2021.
Note that the y-axis does not start at zero. Nevertheless, we see values between 0.007 and 0.014, i.e. differing by a factor of two. While 2020 shows a clear rise in mortality, for some countries more dramatic than for others, the values of 2021 are still preliminary: the data is still incomplete, and the yearly CDR is based on a short observation period and hence on a smaller population pro rata temporis. On top, there might be effects from seasonality. To sum up, there is a larger uncertainty for 2021 than for previous whole years.
For Switzerland, it is also possible to collect data for over 100 years. As the code for data fetching and preparation becomes a bit lengthy, we won’t bother you with it. You can find it in the notebooks linked below. Note that we added the value of 2020 from the figure above. This seems legit as the CDR of both data sources agree within less than 1% relative error.
Crude death rate (CDR) for Switzerland from 1901 to 2020.
Again, note that the left y-axis does not start at zero, but the right y-axis does. One can see several interesting facts:
The Swiss population has been growing over the last 120 years, with the only exception around 1976.
The Spanish flu between 1918 and 1920 caused by far the largest peak in mortality in the last 120 years.
The Second World War is not visible in the Swiss mortality figures.
This is the next article in our series “Lost in Translation between R and Python”. The aim of this series is to provide high-quality R and Python 3 code to achieve some non-trivial tasks. If you want to learn R, check out the R tab below. Similarly, if you want to learn Python, the Python tab will be your friend.
In Post 2, we use publicly available data from the European Centre for Disease Prevention and Control to calculate Covid-19 deaths per Mio persons over time and across countries. We will use slim Python and R codes to
fetch the data directly from the internet,
prepare and restructure it for plotting and
plot a curve per selected country.
Note that different countries use different definitions of whom to count as Covid-19 death and these definitions might also have changed over time. So be careful with comparisons!
R
Python
library(tidyverse)
# Source and countries
link <- "https://opendata.ecdc.europa.eu/covid19/casedistribution/csv"
countries <- c("Switzerland", "United_States_of_America",
"Germany", "Sweden")
# Import
df0 <- read_csv(link)
# Data prep
df <- df0 %>%
mutate(Date = lubridate::dmy(dateRep),
Deaths = deaths_weekly / (popData2019 / 1e6)) %>%
rename(Country = countriesAndTerritories) %>%
filter(Date >= "2020-03-01",
Country %in% countries)
# Plot
ggplot(df, aes(x = Date, y = Deaths, color = Country)) +
geom_line(size = 1) +
ylab("Weekly deaths per Mio") +
theme(legend.position = c(0.2, 0.85))
This is the first article in our series “Lost in Translation between R and Python”. The aim of this series is to provide high-quality R and Python 3 code to achieve some non-trivial tasks. If you want to learn R, check out the R tab below. Similarly, if you want to learn Python, the Python tab will be your friend.
Let’s start with a little bit of statistics – it won’t be the last time, friends: illustrating the Central Limit Theorem (CLT).
Take a sample of a random variable X with finite variance. The CLT says: No matter how “unnormally” distributed X is, its sample mean will be approximately normally distributed, at least if the sample size is not too small. This classic result is the basis to construct simple confidence intervals and hypothesis tests for the (true) mean of X, check out Wikipedia for a lot of additional information.
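In formulas: if X has expectation \mu and finite variance \sigma^2, then the standardized sample mean \bar{X}_n satisfies
\sqrt{n} \, (\bar{X}_n - \mu) / \sigma \to N(0, 1) \text{ in distribution as } n \to \infty.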
The code below illustrates this famous statistical result by simulation, using a very asymmetrically distributed X, namely X = 1 with probability 0.2 and X = 0 otherwise. X could represent the result of asking a randomly picked person whether they smoke. In such a poll, the mean of the collected sample of results would be a statistical estimate of the proportion of people smoking.
Curiously, by a tiny modification, the same code will also illustrate another key result in statistics – the Law of Large Numbers: For growing sample size, the distribution of the sample mean of X contracts to the expectation E(X).
R
Python
# Fix seed, set constants
set.seed(2006)
sample_sizes <- c(1, 10, 30, 1000)
nsims <- 10000
# Helper function: Mean of one sample of X
one_mean <- function(n, p = c(0.8, 0.2)) {
mean(sample(0:1, n, replace = TRUE, prob = p))
}
# one_mean(10)
# Simulate and plot
par(mfrow = c(2, 2), mai = rep(0.4, 4))
for (n in sample_sizes) {
means <- replicate(nsims, one_mean(n))
hist(means, breaks = "FD",
# xlim = 0:1, # uncomment for LLN
main = sprintf("n=%i", n))
}
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Fix seed, set constants
np.random.seed(100)
sample_sizes = [1, 10, 30, 1000]
nsims = 10_000
# Helper function: Mean of one sample
def one_mean(n, p=0.2):
return np.random.binomial(1, p, n).mean()
# Simulate and plot
fig, axes = plt.subplots(2, 2, figsize=(8, 8))
for i, n in enumerate(sample_sizes):
means = [one_mean(n) for ell in range(nsims)]
ax = axes[i // 2, i % 2]
ax.hist(means, 50)
ax.title.set_text(f'$n = {n}$')
ax.set_xlabel('mean')
# ax.set_xlim(0, 1) # uncomment for LLN
fig.tight_layout()
Result: The Central Limit Theorem
The larger the samples, the closer the histogram of the simulated means resembles a symmetric bell shaped curve (R-Output for illustration).
Result: The Law of Large Numbers
Fixing the x-scale illustrates – for free(!) – the Law of Large Numbers: The distribution of the mean contracts more and more to the expectation 0.2 (R-Output for illustration).