Personal Highlights of Scikit-Learn 1.0

Yes! After more than 10 years, scikit-learn released its 1.0 version on 24 September 2021. In this post, I’d like to point out some personal highlights apart from the release highlights.

1. Feature Names

This one is listed in the release highlights, but deserves to be mentioned again.

from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
import pandas as pd

# Example data; the "noise" column will be dropped by the preprocessor.
df = pd.DataFrame({
    "pet": ["dog", "cat", "fish"],
    "age": [3, 7, 1],
    "noise": [-99, pd.NA, 1e-10],
    "target": [1, 0, 1],
})
y = df.pop("target")
X = df

# Scale "age", one-hot encode "pet" and drop all other columns;
# verbose_feature_names_out=False omits the transformer name prefixes.
preprocessor = ColumnTransformer(
    [
        ("numerical", StandardScaler(), ["age"]),
        ("categorical", OneHotEncoder(), ["pet"]),
    ],
    verbose_feature_names_out=False,
    remainder="drop",
)

pipe = make_pipeline(preprocessor, LogisticRegression())
pipe.fit(X, y)

# Feature names produced by the preprocessing step:
pipe[:-1].get_feature_names_out()
# array(['age', 'pet_cat', 'pet_dog', 'pet_fish'], dtype=object)

This is not yet available for all estimators and transformers, but it is a big step towards SLEP007.
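Closely related and also new in 1.0: estimators fitted on a dataframe record the input column names in a feature_names_in_ attribute. A minimal sketch, reusing the fitted preprocessor from above (assuming the estimator at hand already supports this attribute):

# The fitted ColumnTransformer remembers the columns it saw during fit.
preprocessor.feature_names_in_
# array(['pet', 'age', 'noise'], dtype=object)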

2. ColumnTransformer allows changed order of columns

Before this release, ColumnTransformer recorded the order of the dataframe columns during the fit method and required that a dataframe X passed to transform had exactly the same columns in exactly the same order.

This was a big pain point in production settings because fit and predict of a model pipeline, both calling transform, often get their data from different sources, and SQL, for instance, does not care about the order of columns. On top of that, remainder="drop" forced you to have all dropped columns present in transform as well. This contradicted at least my modelling workflow, as I often specify all meaningful features explicitly and drop the rest via the remainder option. This then led to unwanted surprises when applying predict to new data in the end. It might also happen that one forgets to remove the target variable from the training X and relies on the drop option. Usually, the predictive model pipeline is applied to new data without the target variable, so the resulting error might, in this case, even be considered a good thing.

With pull request (PR) #19263, the ColumnTransformer only cares about the presence and names of the columns, not about their order. With remainder="drop", it only considers the specified columns and ignores all others, even if the dropped columns differ between fit and transform. Note that this only works with pandas dataframes as input (or an object that quacks alike).

# Same pipeline, new data: the column order changed and the dropped
# "noise" column is replaced by a different one; predict still works.
df_new = pd.DataFrame({
    "age": [1, 9, 3],
    "another_noise": [pd.NA, -99, 1e-10],
    "pet": ["cat", "dog", "fish"],
})
pipe.predict(df_new)
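Missing columns that the ColumnTransformer was told to use still fail loudly, which is a good thing. A small sketch (the exact error message may vary between versions):

df_missing = df_new.drop(columns=["pet"])
try:
    pipe.predict(df_missing)
except ValueError as e:
    print(e)  # complains that the "pet" column is missing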

You can find these little code snippets as a notebook at the usual place: https://github.com/lorentzenchr/notebooks/blob/master/blogposts/2021-10-21%20scikit-learn_v1_release_highlights.ipynb.

3. Poisson criterion for random forests

Scikit-learn v0.24 shipped with the new option criterion="poisson" for DecisionTreeRegressor to split nodes based on the reduction of Poisson deviance. Version 1.0 passed this option on to the RandomForestRegressor in PR #19836. Random forests are widely used models and valued for their ease of use. We even like to write blog posts about them.

The Poisson splitting criterion has its place when modelling counts or frequencies. It allows non-negative target values but produces strictly positive predictions. This corresponds to $y_\mathrm{train} \geq 0$ and $y_\mathrm{predict} > 0$.
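A minimal sketch with synthetic count data (the data generation below is made up purely for illustration):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X_counts = rng.uniform(size=(1000, 3))
# Poisson-distributed counts: non-negative targets, as required.
y_counts = rng.poisson(lam=np.exp(X_counts[:, 0]))

forest = RandomForestRegressor(criterion="poisson", random_state=0)
forest.fit(X_counts, y_counts)
forest.predict(X_counts[:3])  # strictly positive predictions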

4. The best example/tutorial of the year

It’s not visible from the release notes, but this deserves to be noted. PR #20281 added a fantastic example, more like a tutorial, on time-related feature engineering. You will find a lot of interesting features, some of them shipped with the 1.0 release, e.g. time-based cross-validation, generation of cyclic B-splines, adding pairwise interactions to a linear model, and usage of native categorical features in the HistGradientBoostingRegressor (see the sketch below for one of them).
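As a taste of one ingredient, the new SplineTransformer with extrapolation="periodic" can encode cyclic features such as the hour of the day. A minimal sketch (the feature values are made up):

import numpy as np
from sklearn.preprocessing import SplineTransformer

# Encode the hour of day with periodic B-splines so that hour 23
# ends up close to hour 0 in feature space.
hours = np.arange(24).reshape(-1, 1)
spline = SplineTransformer(degree=3, n_knots=6, extrapolation="periodic")
hour_features = spline.fit_transform(hours)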

Take a look for yourself at this wonderful example.

