tabular_learner

skrub.tabular_learner(estimator, *, n_jobs=None)

Get a simple machine-learning pipeline for tabular data.

Given a scikit-learn estimator, this function creates a machine-learning pipeline that preprocesses tabular data to extract numeric features and impute missing values if necessary, then applies the estimator.

Instead of an actual estimator, estimator can also be the string 'regressor' or 'classifier' to use a HistGradientBoostingRegressor or a HistGradientBoostingClassifier with default parameters.

tabular_learner returns a scikit-learn Pipeline with several steps:

  • A TableVectorizer transforms the tabular data into numeric features. Its parameters are chosen depending on the provided estimator.

  • An optional SimpleImputer imputes missing values by their mean and adds binary columns that indicate which values were missing. This step is only added if the estimator cannot handle missing values itself.

  • An optional StandardScaler centers and rescales the data. This step is not added (because it is unnecessary) when the estimator is a tree ensemble such as random forest or gradient boosting.

  • The last step is the provided estimator.
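For instance, with a linear model such as LogisticRegression, all three preprocessing steps are added. A quick way to check which steps were chosen is to list the step names; the output below matches the pipeline repr shown in the examples further down:

>>> from sklearn.linear_model import LogisticRegression
>>> from skrub import tabular_learner
>>> pipe = tabular_learner(LogisticRegression())
>>> [name for name, _ in pipe.steps]
['tablevectorizer', 'simpleimputer', 'standardscaler', 'logisticregression']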

Read more in the User Guide.

Note

tabular_learner is a recent addition and the heuristics used to choose appropriate preprocessing for a given estimator may change in future releases.

Parameters:
estimator : {'regressor', 'classifier'} or scikit-learn estimator

The estimator to use as the final step in the pipeline. Based on the type of estimator, the previous preprocessing steps and their respective parameters are chosen. The possible values are:

  • 'regressor': a HistGradientBoostingRegressor with default parameters is used as the final step;

  • 'classifier': a HistGradientBoostingClassifier with default parameters is used as the final step;

  • a scikit-learn estimator object: the provided estimator is used as the final step.

n_jobs : int, default=None

Number of jobs to run in parallel in the TableVectorizer step. None means 1 unless in a joblib parallel_backend context. -1 means using all processors.
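For example, to parallelize the TableVectorizer over all processors (a usage sketch):

>>> from skrub import tabular_learner
>>> pipe = tabular_learner('classifier', n_jobs=-1)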

Returns:
Pipeline

A scikit-learn Pipeline chaining some preprocessing and the provided estimator.
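Because the result is a regular scikit-learn Pipeline, it can be used anywhere an estimator is accepted, for instance with cross-validation (a sketch; X and y stand in for your own data):

>>> from sklearn.model_selection import cross_val_score
>>> scores = cross_val_score(tabular_learner('regressor'), X, y)  # doctest: +SKIP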

Notes

The parameter values for the TableVectorizer may differ depending on the installed scikit-learn version. For example, passing categorical_features='from_dtype' to the gradient-boosting estimators, as shown in the examples below, requires scikit-learn 1.4 or later.

Examples

>>> from skrub import tabular_learner

We can easily get a default pipeline for regression or classification:

>>> tabular_learner('regressor')                                    
Pipeline(steps=[('tablevectorizer',
                 TableVectorizer(high_cardinality=MinHashEncoder(),
                                 low_cardinality=ToCategorical())),
                ('histgradientboostingregressor',
                 HistGradientBoostingRegressor(categorical_features='from_dtype'))])

When requesting a 'regressor', the last step of the pipeline is set to a HistGradientBoostingRegressor.

>>> tabular_learner('classifier')                                   
Pipeline(steps=[('tablevectorizer',
                 TableVectorizer(high_cardinality=MinHashEncoder(),
                                 low_cardinality=ToCategorical())),
                ('histgradientboostingclassifier',
                 HistGradientBoostingClassifier(categorical_features='from_dtype'))])

When requesting a 'classifier', the last step of the pipeline is set to a HistGradientBoostingClassifier.

This pipeline can be applied to rich tabular data:

>>> import pandas as pd
>>> X = pd.DataFrame(
...     {
...         "last_visit": ["2020-01-02", "2021-04-01", "2024-12-05", "2023-08-10"],
...         "medication": [None, "metformin", "paracetamol", "gliclazide"],
...         "insulin_prescriptions": ["N/A", 13, 0, 17],
...         "fasting_glucose": [35, 140, 44, 137],
...     }
... )
>>> y = [0, 1, 0, 1]
>>> X
   last_visit   medication insulin_prescriptions  fasting_glucose
0  2020-01-02         None                   N/A               35
1  2021-04-01    metformin                    13              140
2  2024-12-05  paracetamol                     0               44
3  2023-08-10   gliclazide                    17              137
>>> model = tabular_learner('classifier').fit(X, y)
>>> model.predict(X)
array([0, 0, 0, 0])
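The fitted model exposes the usual scikit-learn prediction API; since the final step is a classifier, class probabilities are available as well (output omitted):

>>> model.predict_proba(X)  # doctest: +SKIP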

Rather than using the default estimator, we can provide our own scikit-learn estimator:

>>> from sklearn.linear_model import LogisticRegression
>>> model = tabular_learner(LogisticRegression())
>>> model.fit(X, y)
Pipeline(steps=[('tablevectorizer', TableVectorizer()),
                ('simpleimputer', SimpleImputer(add_indicator=True)),
                ('standardscaler', StandardScaler()),
                ('logisticregression', LogisticRegression())])

By applying only the first pipeline step we can see the transformed data that is sent to the supervised estimator (see the TableVectorizer documentation for details):

>>> model.named_steps['tablevectorizer'].transform(X)               
   last_visit_year  last_visit_month  ...  insulin_prescriptions  fasting_glucose
0           2020.0               1.0  ...                    NaN             35.0
1           2021.0               4.0  ...                   13.0            140.0
2           2024.0              12.0  ...                    0.0             44.0
3           2023.0               8.0  ...                   17.0            137.0
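The display above is truncated. To list every column produced by the vectorizer, one can query its output feature names (the exact names depend on the data and the skrub version):

>>> model.named_steps['tablevectorizer'].get_feature_names_out()  # doctest: +SKIP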

The parameters of the TableVectorizer depend on the provided estimator.

>>> tabular_learner(LogisticRegression())
Pipeline(steps=[('tablevectorizer', TableVectorizer()),
                ('simpleimputer', SimpleImputer(add_indicator=True)),
                ('standardscaler', StandardScaler()),
                ('logisticregression', LogisticRegression())])

We see that for the LogisticRegression we get the default configuration of the TableVectorizer, which is intended to work well for a wide variety of downstream estimators. Moreover, as the LogisticRegression cannot handle missing values, an imputation step is added. Finally, as many models require the inputs to be centered and on the same scale, a StandardScaler performs centering and standard scaling.
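As with any Pipeline, hyper-parameters of individual steps can be addressed with the usual '<step>__<parameter>' syntax, using the step names shown in the repr above, for instance to tune the regularization strength in a grid search (a sketch):

>>> from sklearn.model_selection import GridSearchCV
>>> search = GridSearchCV(model, {'logisticregression__C': [0.1, 1.0, 10.0]})  # doctest: +SKIP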

For the HistGradientBoostingClassifier (obtained with the string 'classifier'), on the other hand, the configuration differs:

>>> tabular_learner('classifier')                                   
Pipeline(steps=[('tablevectorizer',
                 TableVectorizer(high_cardinality=MinHashEncoder(),
                                 low_cardinality=ToCategorical())),
                ('histgradientboostingclassifier',
                 HistGradientBoostingClassifier(categorical_features='from_dtype'))])

  • A MinHashEncoder is used as the high_cardinality encoder. This encoder performs well when the supervised estimator is based on decision trees or ensembles of trees, as is the case for the HistGradientBoostingClassifier. Unlike the default GapEncoder, the MinHashEncoder does not produce interpretable features, but it is much faster and uses less memory.

  • The low_cardinality encoder does not one-hot encode features. The HistGradientBoostingClassifier has built-in support for categorical data, which is more efficient than one-hot encoding. The selected encoder, ToCategorical, therefore simply ensures that those features have a categorical dtype so that the HistGradientBoostingClassifier recognizes them as such.

  • There is no missing-value imputation because the classifier has its own (better) mechanism for dealing with missing values.

  • There is no standard scaling, which is unnecessary for trees and ensembles of trees.
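For reference, this configuration can also be assembled by hand, mirroring the repr above (a sketch; categorical_features='from_dtype' requires scikit-learn 1.4 or later):

>>> from sklearn.ensemble import HistGradientBoostingClassifier
>>> from sklearn.pipeline import make_pipeline
>>> from skrub import MinHashEncoder, TableVectorizer, ToCategorical
>>> manual = make_pipeline(
...     TableVectorizer(high_cardinality=MinHashEncoder(),
...                     low_cardinality=ToCategorical()),
...     HistGradientBoostingClassifier(categorical_features='from_dtype'),
... )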