Encoding: from a dataframe to a numerical matrix for machine learning#

This example shows how to transform a rich dataframe with columns of various types into a numerical matrix on which machine-learning algorithms can be applied. We study the case of predicting wages using the employee salaries dataset.

Easy learning on a dataframe#

Let’s first retrieve the dataset, using one of the downloaders from the skrub.datasets module. Like all the downloaders, fetch_employee_salaries() returns a dataset with attributes X and y. X is a dataframe which contains the features (aka the design matrix, explanatory variables, or independent variables). y is a column (a pandas Series) which contains the target (aka the dependent or response variable) that we want to learn to predict from X. In this case y is the annual salary.
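A minimal sketch of this first step (the call follows the API described above; the variable names employees and salaries are the ones used in the rest of this example):

from skrub.datasets import fetch_employee_salaries

dataset = fetch_employee_salaries()
employees, salaries = dataset.X, dataset.y
employees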

gender department department_name division assignment_category employee_position_title date_first_hired year_first_hired
0 F POL Department of Police MSB Information Mgmt and Tech Division Records... Fulltime-Regular Office Services Coordinator 09/22/1986 1986
1 M POL Department of Police ISB Major Crimes Division Fugitive Section Fulltime-Regular Master Police Officer 09/12/1988 1988
2 F HHS Department of Health and Human Services Adult Protective and Case Management Services Fulltime-Regular Social Worker IV 11/19/1989 1989
3 M COR Correction and Rehabilitation PRRS Facility and Security Fulltime-Regular Resident Supervisor II 05/05/2014 2014
4 M HCA Department of Housing and Community Affairs Affordable Housing Programs Fulltime-Regular Planning Specialist III 03/05/2007 2007
... ... ... ... ... ... ... ... ...
9223 F HHS Department of Health and Human Services School Based Health Centers Fulltime-Regular Community Health Nurse II 11/03/2015 2015
9224 F FRS Fire and Rescue Services Human Resources Division Fulltime-Regular Fire/Rescue Division Chief 11/28/1988 1988
9225 M HHS Department of Health and Human Services Child and Adolescent Mental Health Clinic Serv... Parttime-Regular Medical Doctor IV - Psychiatrist 04/30/2001 2001
9226 M CCL County Council Council Central Staff Fulltime-Regular Manager II 09/05/2006 2006
9227 M DLC Department of Liquor Control Licensure, Regulation and Education Fulltime-Regular Alcohol/Tobacco Enforcement Specialist II 01/30/2012 2012

9228 rows × 8 columns



Most machine-learning algorithms work with arrays of numbers. The challenge here is that the employees dataframe is a heterogeneous set of columns: some are numerical ('year_first_hired'), some dates ('date_first_hired'), some have a few categorical entries ('gender'), some many ('employee_position_title'). Therefore our table needs to be “vectorized”: processed to extract numeric features.

skrub provides an easy way to build a simple but reliable machine-learning model which includes this step, working well on most tabular data.

from sklearn.model_selection import cross_validate

from skrub import tabular_learner

model = tabular_learner("regressor")
results = cross_validate(model, employees, salaries)
results["test_score"]
array([0.89370447, 0.89279068, 0.92282557, 0.92319094, 0.92162666])

The estimator returned by tabular_learner combines 2 steps:
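a TableVectorizer followed by a HistGradientBoostingRegressor. Displaying the model confirms this:

model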

Pipeline(steps=[('tablevectorizer',
                 TableVectorizer(high_cardinality=MinHashEncoder(),
                                 low_cardinality=ToCategorical())),
                ('histgradientboostingregressor',
                 HistGradientBoostingRegressor(categorical_features='from_dtype'))])


In the rest of this example, we focus on the first step and explore the capabilities of skrub’s TableVectorizer.


More details on encoding tabular data#
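To look at this step in isolation, we can apply a TableVectorizer directly to the dataframe. A minimal sketch (the name vectorized_employees is our choice and is reused below):

from skrub import TableVectorizer

vectorizer = TableVectorizer()
vectorized_employees = vectorizer.fit_transform(employees)
vectorized_employees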

[Output: the vectorized dataframe, 9228 rows × 143 columns. Its columns include one-hot indicators such as gender_F, gender_M, gender_nan and department_BOA ... department_ZAH, GapEncoder topic columns such as 'division: school, health, based' and 'employee_position_title: officer, office, police', the datetime features date_first_hired_year, date_first_hired_month, date_first_hired_day and date_first_hired_total_seconds, and the numeric column year_first_hired.]



From our 8 columns, the TableVectorizer has extracted 143 numerical features. Most of them are one-hot encoded representations of the categorical features. For example, we can see that 3 columns 'gender_F', 'gender_M', 'gender_nan' were created to encode the 'gender' column.

By performing appropriate transformations on our complex data, the TableVectorizer produced numeric features that we can use for machine-learning:
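For example, we can fit a scikit-learn gradient-boosting model directly on the vectorized output, a sketch reusing the vectorized_employees computed above:

from sklearn.ensemble import HistGradientBoostingRegressor

HistGradientBoostingRegressor().fit(vectorized_employees, salaries)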

HistGradientBoostingRegressor()


The TableVectorizer bridges the gap between tabular data and machine-learning pipelines. It allows us to apply a machine-learning estimator to our dataframe without manual data wrangling and feature extraction.

Inspecting the TableVectorizer#

The TableVectorizer distinguishes between 4 basic kinds of columns (more may be added in the future). For each kind, it applies a different transformation, which we can configure. The kinds of columns and the default transformation for each of them are:

  • numeric columns: simply casting to floating-point

  • datetime columns: extracting features such as year, day, hour with the DatetimeEncoder

  • low-cardinality categorical columns: one-hot encoding

  • high-cardinality categorical columns: a simple and effective text representation pipeline provided by the GapEncoder
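Each of these kinds corresponds to a constructor parameter of the TableVectorizer, so the default encoders can be overridden. A hedged sketch of such a configuration (low_cardinality and high_cardinality are the parameter names used later in this example; the exact default encoders may differ between skrub versions):

from sklearn.preprocessing import OneHotEncoder

from skrub import GapEncoder, TableVectorizer

# Illustrative only: spell out encoders for two of the column kinds.
custom_vectorizer = TableVectorizer(
    low_cardinality=OneHotEncoder(sparse_output=False, handle_unknown="ignore"),
    high_cardinality=GapEncoder(),
)

The vectorizer we fitted earlier, with all defaults left unchanged, displays simply as: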

TableVectorizer()


We can inspect which transformation was chosen for each column and retrieve the fitted transformer. vectorizer.kind_to_columns_ provides an overview of how the vectorizer categorized columns in our input:
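vectorizer.kind_to_columns_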

{'numeric': ['year_first_hired'], 'datetime': ['date_first_hired'], 'low_cardinality': ['gender', 'department', 'department_name', 'assignment_category'], 'high_cardinality': ['division', 'employee_position_title'], 'specific': []}

The reverse mapping is given by vectorizer.column_to_kind_:
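vectorizer.column_to_kind_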

{'year_first_hired': 'numeric', 'date_first_hired': 'datetime', 'gender': 'low_cardinality', 'department': 'low_cardinality', 'department_name': 'low_cardinality', 'assignment_category': 'low_cardinality', 'division': 'high_cardinality', 'employee_position_title': 'high_cardinality'}

vectorizer.transformers_ gives us a dictionary which maps column names to the corresponding transformer.

vectorizer.transformers_["date_first_hired"]
DatetimeEncoder()


We can also see which features in the vectorizer’s output were derived from a given input column.

vectorizer.input_to_outputs_["date_first_hired"]
['date_first_hired_year', 'date_first_hired_month', 'date_first_hired_day', 'date_first_hired_total_seconds']
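These output columns can then be selected from the vectorized dataframe (using the vectorized_employees computed earlier):

vectorized_employees[vectorizer.input_to_outputs_["date_first_hired"]]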
date_first_hired_year date_first_hired_month date_first_hired_day date_first_hired_total_seconds
0 1986.0 9.0 22.0 5.277312e+08
1 1988.0 9.0 12.0 5.900256e+08
2 1989.0 11.0 19.0 6.274368e+08
3 2014.0 5.0 5.0 1.399248e+09
4 2007.0 3.0 5.0 1.173053e+09
... ... ... ... ...
9223 2015.0 11.0 3.0 1.446509e+09
9224 1988.0 11.0 28.0 5.966784e+08
9225 2001.0 4.0 30.0 9.885888e+08
9226 2006.0 9.0 5.0 1.157414e+09
9227 2012.0 1.0 30.0 1.327882e+09

9228 rows × 4 columns



Finally, we can go in the opposite direction: given a column in the vectorizer’s output, find out from which input column it was derived.

vectorizer.output_to_input_["department_BOA"]
'department'

Dataframe preprocessing#

Note that "date_first_hired" has been recognized and processed as a datetime ßcolumn.

vectorizer.column_to_kind_["date_first_hired"]
'datetime'

Looking closer at our original dataframe, however, this column was stored as a string.

employees["date_first_hired"]
0       09/22/1986
1       09/12/1988
2       11/19/1989
3       05/05/2014
4       03/05/2007
           ...
9223    11/03/2015
9224    11/28/1988
9225    04/30/2001
9226    09/05/2006
9227    01/30/2012
Name: date_first_hired, Length: 9228, dtype: object

Note the dtype: object in the output above. Before applying the transformers we specify, the TableVectorizer performs a few preprocessing steps.

For example, strings commonly used to represent missing values, such as "N/A", are replaced with actual null values. As we saw above, columns containing strings that represent dates (e.g. '2024-05-15') are detected and converted to proper datetimes.

We can inspect the list of steps that were applied to a given column:
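In recent skrub versions these are exposed through the all_processing_steps_ fitted attribute (assumed here; check your version’s documentation if it differs):

vectorizer.all_processing_steps_["date_first_hired"]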

[CleanNullStrings(), ToDatetime(), DatetimeEncoder(), {'date_first_hired_day': ToFloat32(), 'date_first_hired_month': ToFloat32(), ...}]

These preprocessing steps depend on the column:
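For example, for the 'department' column (same assumed attribute):

vectorizer.all_processing_steps_["department"]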

[CleanNullStrings(), ToStr(), OneHotEncoder(drop='if_binary', dtype='float32', handle_unknown='ignore',
              sparse_output=False), {'department_BOA': ToFloat32(), 'department_BOE': ToFloat32(), ...}]

A simple Pipeline for tabular data#

The TableVectorizer outputs data that can be understood by a scikit-learn estimator. Therefore we can easily build a 2-step scikit-learn Pipeline that we can fit, test or cross-validate and that works well on tabular data.

import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline

from skrub import TableVectorizer

pipeline = make_pipeline(TableVectorizer(), HistGradientBoostingRegressor())

results = cross_validate(pipeline, employees, salaries)
scores = results["test_score"]
print(f"R2 score:  mean: {np.mean(scores):.3f}; std: {np.std(scores):.3f}")
print(f"mean fit time: {np.mean(results['fit_time']):.3f} seconds")
R2 score:  mean: 0.922; std: 0.012
mean fit time: 5.905 seconds

Specializing the TableVectorizer for HistGradientBoosting#

The encoders used by default by the TableVectorizer are safe choices for a wide range of downstream estimators. If we know we want to use it with a HistGradientBoostingRegressor (or classifier), we can make different choices that are well suited only to tree-based models but yield a faster pipeline. We make 2 changes.

The HistGradientBoostingRegressor has built-in support for categorical features, so we do not need to one-hot encode them. We do need to tell it which features should be treated as categorical with the categorical_features parameter. In recent versions of scikit-learn, we can set categorical_features='from_dtype', and it will treat all columns in the input that have a Categorical dtype as such. Therefore we change the encoder for low-cardinality columns: instead of OneHotEncoder, we use skrub’s ToCategorical. This transformer will simply ensure our columns have an actual Categorical dtype (as opposed to string for example), so that they can be recognized by the HistGradientBoostingRegressor.

The second change replaces the GapEncoder with a MinHashEncoder. The GapEncoder is a topic model. It produces interpretable embeddings in a vector space where distances are meaningful, which is great for interpretation and necessary for some downstream supervised learners such as linear models. However fitting the topic model is costly in computation time and memory. The MinHashEncoder produces features that are not easy to interpret, but that decision trees can efficiently use to test for the occurrence of particular character n-grams (more details are provided in its documentation). Therefore it can be a faster and very effective alternative, when the supervised learner is built on top of decision trees, which is the case for the HistGradientBoostingRegressor.

The resulting pipeline is identical to the one produced by default by tabular_learner.

from skrub import MinHashEncoder, ToCategorical

vectorizer = TableVectorizer(
    low_cardinality=ToCategorical(), high_cardinality=MinHashEncoder()
)
pipeline = make_pipeline(
    vectorizer, HistGradientBoostingRegressor(categorical_features="from_dtype")
)

results = cross_validate(pipeline, employees, salaries)
scores = results["test_score"]
print(f"R2 score:  mean: {np.mean(scores):.3f}; std: {np.std(scores):.3f}")
print(f"mean fit time: {np.mean(results['fit_time']):.3f} seconds")
R2 score:  mean: 0.911; std: 0.014
mean fit time: 1.024 seconds

We can see that this new pipeline achieves a similar score but is fitted much faster. This is mostly due to replacing GapEncoder with MinHashEncoder (however this makes the features less interpretable).

Feature importances in the statistical model#

As we just saw, we can fit a MinHashEncoder faster than a GapEncoder. However, the GapEncoder has a crucial advantage: each dimension of its output space is associated with a topic which can be inspected and interpreted. In this section, after training a regressor, we will plot the feature importances.

First, we train another scikit-learn regressor, the RandomForestRegressor:

from sklearn.ensemble import RandomForestRegressor

vectorizer = TableVectorizer()  # now using the default GapEncoder
regressor = RandomForestRegressor(n_estimators=50, max_depth=20, random_state=0)

pipeline = make_pipeline(vectorizer, regressor)
pipeline.fit(employees, salaries)
Pipeline(steps=[('tablevectorizer', TableVectorizer()),
                ('randomforestregressor',
                 RandomForestRegressor(max_depth=20, n_estimators=50,
                                       random_state=0))])


We then retrieve the feature importances:
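A sketch of this step (the names avg_importances, std_importances and indices are the ones expected by the plotting code below):

import numpy as np

# Impurity-based importances averaged over the forest, and their spread
# across individual trees (used as error bars in the plot).
avg_importances = regressor.feature_importances_
std_importances = np.std(
    [tree.feature_importances_ for tree in regressor.estimators_], axis=0
)
indices = np.argsort(avg_importances)[::-1]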

And plotting the results:

import matplotlib.pyplot as plt

top_indices = indices[:20]
labels = vectorizer.get_feature_names_out()[top_indices]

plt.figure(figsize=(12, 9))
plt.barh(
    y=labels,
    width=avg_importances[top_indices],
    xerr=std_importances[top_indices],
    color="b",
)
plt.yticks(fontsize=15)
plt.title("Feature importances")
plt.tight_layout(pad=1)
plt.show()
[Figure: Feature importances — horizontal bar chart of the 20 most important features]

The GapEncoder creates feature names that show the 3 most important words in the topic associated with each feature. As we can see in the plot above, this makes the model easier to inspect. If we had used a MinHashEncoder instead, the features would be much less helpful, with names such as employee_position_title_0, employee_position_title_1, etc.

We can see that features such as the time elapsed since being hired, full-time employment, and the position seem to be the most informative for prediction. However, feature importances must not be over-interpreted: they capture statistical associations rather than causal effects. Moreover, the fast feature importance method used here suffers from biases favouring features with larger cardinality, as illustrated in a scikit-learn example. In general we should prefer permutation_importance(), but it is a slower method.

Conclusion#

In this example, we motivated the need for a simple machine learning pipeline, which we built using the TableVectorizer and a HistGradientBoostingRegressor.

We saw that by default, it works well on a heterogeneous dataset.

To better understand our dataset, and without much effort, we were also able to plot the feature importances.

Total running time of the script: (1 minute 11.934 seconds)
