.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/data_ops/12_multiple_tables.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_data_ops_12_multiple_tables.py>`
        to download the full example code, or to run this example in your
        browser via JupyterLite or Binder.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_data_ops_12_multiple_tables.py:

Multiple tables: building machine learning pipelines with DataOps
==================================================================

In this example, we show how to build a DataOps plan to handle pre-processing,
validation and hyperparameter tuning of a dataset with **multiple tables**.

We consider the credit fraud dataset, which contains two tables: one for
baskets (orders) and one for products. The goal is to predict whether a basket
(a single order placed on the website) is fraudulent or not, based on the
products it contains.

.. currentmodule:: skrub

.. |choose_from| replace:: :func:`skrub.choose_from`
.. |choose_int| replace:: :func:`skrub.choose_int`
.. |choose_float| replace:: :func:`skrub.choose_float`
.. |MinHashEncoder| replace:: :class:`~skrub.MinHashEncoder`
.. |StringEncoder| replace:: :class:`~skrub.StringEncoder`
.. |TableVectorizer| replace:: :class:`~skrub.TableVectorizer`
.. |var| replace:: :func:`skrub.var`
.. |TableReport| replace:: :class:`~skrub.TableReport`
.. |HistGradientBoostingClassifier| replace:: :class:`~sklearn.ensemble.HistGradientBoostingClassifier`
.. |make_randomized_search| replace:: :func:`~skrub.DataOp.skb.make_randomized_search`

.. GENERATED FROM PYTHON SOURCE LINES 32-37

The credit fraud dataset
------------------------

The ``baskets`` table contains a basket ID and a flag indicating if the order
was fraudulent or not (the customer never made the payment).

.. GENERATED FROM PYTHON SOURCE LINES 39-45

.. code-block:: Python

    import skrub
    import skrub.datasets

    dataset = skrub.datasets.fetch_credit_fraud()
    skrub.TableReport(dataset.baskets)

.. raw:: html

    <p>[Interactive skrub TableReport of the baskets table; rendered in the
    HTML documentation.]</p>
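
Fraud-detection targets are usually very imbalanced, so it is worth checking
the class balance before modeling. The quick pandas check below is not part
of the generated example; it only assumes the ``dataset`` object loaded
above:

.. code-block:: Python

    # Share of fraudulent vs. legitimate baskets in the target column.
    print(dataset.baskets["fraud_flag"].value_counts(normalize=True))
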
.. GENERATED FROM PYTHON SOURCE LINES 46-50

The ``products`` table contains information about the products that have been
purchased, and the basket they belong to. A basket contains at least one
product. Products can be associated with the corresponding basket through the
``"basket_ID"`` column.

.. GENERATED FROM PYTHON SOURCE LINES 52-54

.. code-block:: Python

    skrub.TableReport(dataset.products)

.. raw:: html

    <p>[Interactive skrub TableReport of the products table; rendered in the
    HTML documentation.]</p>
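
Before building the plan, it helps to confirm how the two tables line up: the
``"basket_ID"`` column of ``products`` references the ``"ID"`` column of
``baskets``. The quick pandas check below is not part of the generated
example:

.. code-block:: Python

    # Count the products in each basket; every basket holds at least one.
    n_products = dataset.products.groupby("basket_ID").size()
    print(n_products.describe())

    # Check whether every product points to a known basket (the plan built
    # below filters out products that do not match any basket).
    print(dataset.products["basket_ID"].isin(dataset.baskets["ID"]).all())
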
.. GENERATED FROM PYTHON SOURCE LINES 55-81

A data-processing challenge
---------------------------

The general structure of the DataOps plan we want to build looks like this:

.. image:: ../../_static/credit_fraud_diagram.svg
    :width: 300

We want to fit a |HistGradientBoostingClassifier| to predict the fraud flag
(``y``). However, since the features for each basket are stored in the
products table, we need to extract these features, aggregate them at the
basket level, and merge the result with the basket data.

We can use the |TableVectorizer| to vectorize the products, but we then need
to aggregate the resulting vectors to obtain a single row per basket.

Using a scikit-learn Pipeline is tricky because the |TableVectorizer| would
be fitted on a table with a different number of rows than the target ``y``
(the baskets table), which scikit-learn does not allow. While we could fit
the |TableVectorizer| manually, this would forfeit scikit-learn's tooling for
managing transformations, storing fitted estimators, splitting data,
cross-validation, and hyperparameter tuning. We would also have to handle the
aggregation and join ourselves, likely with error-prone pandas code.

Fortunately, skrub DataOps provide a powerful alternative for building
flexible plans that address these problems.

.. GENERATED FROM PYTHON SOURCE LINES 83-88

Building a multi-table DataOps plan
-----------------------------------

We start by creating skrub variables, which are the inputs to our plan. In
our example, we create two skrub |var| objects: ``products`` and ``baskets``:

.. GENERATED FROM PYTHON SOURCE LINES 90-96

.. code-block:: Python

    products = skrub.var("products", dataset.products)
    baskets = skrub.var("baskets", dataset.baskets)

    basket_ids = baskets[["ID"]].skb.mark_as_X()
    fraud_flags = baskets["fraud_flag"].skb.mark_as_y()

.. GENERATED FROM PYTHON SOURCE LINES 97-106

We mark the ``basket_ids`` variable as ``X`` and the ``fraud_flags`` variable
as ``y`` so that DataOps can use their indices for train-test splitting and
cross-validation.

We then build the plan by applying transformations to those inputs. Since our
DataOps expect dataframes for products, baskets and fraud flags, we
manipulate those objects as we would manipulate pandas dataframes. For
instance, we filter products to keep only those that match one of the baskets
in the ``baskets`` table, and then add a column containing the total amount
for each kind of product in a basket:

.. GENERATED FROM PYTHON SOURCE LINES 108-114

.. code-block:: Python

    kept_products = products[products["basket_ID"].isin(basket_ids["ID"])]
    products_with_total = kept_products.assign(
        total_price=kept_products["Nbr_of_prod_purchas"] * kept_products["cash_price"]
    )
    products_with_total

.. raw:: html

    <p>[Interactive view of the DataOp: its computation graph (the 'products'
    and 'baskets' variables, the 'isin' filter, and the 'assign' step that
    multiplies 'Nbr_of_prod_purchas' by 'cash_price' to create 'total_price')
    together with a preview of the resulting dataframe.]</p>
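
DataOps record operations lazily; the preview above is computed from the
example data attached to each variable when it was created. To materialize an
intermediate result yourself, a minimal sketch (assuming skrub's
``.skb.eval()``, which runs the recorded plan on the data bound to its
variables):

.. code-block:: Python

    # Run the recorded operations and get back a concrete dataframe.
    df = products_with_total.skb.eval()
    print(df[["basket_ID", "cash_price", "Nbr_of_prod_purchas", "total_price"]].head())
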
.. GENERATED FROM PYTHON SOURCE LINES 115-126

We then build a skrub ``TableVectorizer`` with different choices of the type
of encoder for high-cardinality categorical or string columns, and of the
number of components it uses.

With skrub, there is no need to specify a separate grid of hyperparameters
outside the pipeline. Instead, within a DataOps plan, we can directly replace
a parameter's value with one of skrub's ``choose_*`` functions, which define
the range of values to consider during hyperparameter selection.

In this example, we use |choose_int| to select the number of components for
the encoder, and |choose_from| to select the type of encoder.

.. GENERATED FROM PYTHON SOURCE LINES 128-138

.. code-block:: Python

    n = skrub.choose_int(5, 15, name="n_components")
    encoder = skrub.choose_from(
        {
            "MinHash": skrub.MinHashEncoder(n_components=n),
            "LSA": skrub.StringEncoder(n_components=n),
        },
        name="encoder",
    )
    vectorizer = skrub.TableVectorizer(high_cardinality=encoder)

.. GENERATED FROM PYTHON SOURCE LINES 139-142

We can restrict the vectorizer to a subset of columns: in our case, we want
to vectorize all columns except the ``"basket_ID"`` column, which is not a
feature but a link to the basket a product belongs to.

.. GENERATED FROM PYTHON SOURCE LINES 144-148

.. code-block:: Python

    vectorized_products = products_with_total.skb.apply(
        vectorizer, exclude_cols="basket_ID"
    )

.. GENERATED FROM PYTHON SOURCE LINES 149-151

We then aggregate the vectorized products by basket ID, and merge the result
with the baskets table.

.. GENERATED FROM PYTHON SOURCE LINES 153-158

.. code-block:: Python

    aggregated_products = (
        vectorized_products.groupby("basket_ID").agg("mean").reset_index()
    )
    augmented_baskets = basket_ids.merge(
        aggregated_products, left_on="ID", right_on="basket_ID"
    ).drop(columns=["ID", "basket_ID"])

.. GENERATED FROM PYTHON SOURCE LINES 159-161

Finally, we add a supervised estimator, and use |choose_float| to tune the
learning rate as a hyperparameter.

.. GENERATED FROM PYTHON SOURCE LINES 163-171

.. code-block:: Python

    from sklearn.ensemble import HistGradientBoostingClassifier

    hgb = HistGradientBoostingClassifier(
        learning_rate=skrub.choose_float(0.01, 0.9, log=True, name="learning_rate")
    )
    predictions = augmented_baskets.skb.apply(hgb, y=fraud_flags)
    predictions

.. raw:: html

    <p>[Interactive view of the DataOp: the full plan graph, from the
    'products' and 'baskets' variables through the TableVectorizer and the
    groupby/agg/merge steps to the final 'Apply HistGradientBoostingClassifier'
    node, together with a preview of the predictions.]</p>
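
The whole plan can also be packaged into a single learner object. The sketch
below is hypothetical in its details: it assumes skrub's
``.skb.make_learner()`` helper and its ``fitted`` flag, and mirrors the
dictionary-based prediction call shown at the end of this example:

.. code-block:: Python

    # Package joins, vectorization and the classifier into one learner,
    # fitted here on the data bound to the variables at creation time.
    learner = predictions.skb.make_learner(fitted=True)

    # The learner consumes an environment dict mapping variable names to
    # dataframes, exactly like best_learner_ below.
    proba = learner.predict_proba(
        {"baskets": dataset.baskets, "products": dataset.products}
    )
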
.. GENERATED FROM PYTHON SOURCE LINES 172-177

And our DataOps plan is complete! We can now use |make_randomized_search| to
perform hyperparameter tuning and find the best hyperparameter combination
for our model. Below, we display the hyperparameter combinations that define
our search space.

.. GENERATED FROM PYTHON SOURCE LINES 179-181

.. code-block:: Python

    print(predictions.skb.describe_param_grid())

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    - learning_rate: choose_float(0.01, 0.9, log=True, name='learning_rate')
      encoder: 'MinHash'
      n_components: choose_int(5, 15, name='n_components')
    - learning_rate: choose_float(0.01, 0.9, log=True, name='learning_rate')
      encoder: 'LSA'
      n_components: choose_int(5, 15, name='n_components')

.. GENERATED FROM PYTHON SOURCE LINES 182-184

|make_randomized_search| returns a :class:`~skrub.ParamSearch` object, which
contains our search results and some plotting logic.

.. GENERATED FROM PYTHON SOURCE LINES 184-189

.. code-block:: Python

    search = predictions.skb.make_randomized_search(
        scoring="roc_auc", n_iter=8, n_jobs=4, random_state=0, fitted=True
    )
    search.results_

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

       n_components  encoder  learning_rate  mean_test_score
    0            10      LSA       0.067286         0.887210
    1            12      LSA       0.052436         0.884805
    2            15  MinHash       0.056148         0.884057
    3             9      LSA       0.038147         0.884011
    4            13  MinHash       0.086695         0.877958
    5            15      LSA       0.013766         0.871156
    6            11  MinHash       0.446581         0.730944
    7            13      LSA       0.501435         0.721205
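
The table reports, for each sampled hyperparameter combination, the mean ROC
AUC on the search's internal validation splits. To sanity-check the full plan
end to end with a fixed configuration, we can also cross-validate the DataOps
plan itself; this sketch assumes skrub's ``.skb.cross_validate()`` helper,
with every ``choose_*`` left at its default outcome:

.. code-block:: Python

    # Cross-validate the whole plan (filter, vectorize, aggregate, join,
    # classify); returns one test score per fold.
    cv_results = predictions.skb.cross_validate(scoring="roc_auc")
    print(cv_results["test_score"].mean())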


.. GENERATED FROM PYTHON SOURCE LINES 190-191

We can also display the results of the search in a parallel coordinates plot:

.. GENERATED FROM PYTHON SOURCE LINES 191-193

.. code-block:: Python

    search.plot_results()

.. raw:: html

    <p>[Interactive parallel coordinates plot of the search results.]</p>

.. GENERATED FROM PYTHON SOURCE LINES 194-199

It seems here that using LSA as the encoder brings better test scores, but at
the expense of training and scoring time. We can get the best-performing
:class:`~skrub.SkrubLearner` via ``best_learner_``, and use it for inference
on new data with:

.. GENERATED FROM PYTHON SOURCE LINES 199-217

.. code-block:: Python

    import pandas as pd

    new_baskets = pd.DataFrame([dict(ID="abc")])
    new_products = pd.DataFrame(
        [
            dict(
                basket_ID="abc",
                item="COMPUTER",
                cash_price=200,
                make="APPLE",
                model="XXX-X",
                goods_code="239246782",
                Nbr_of_prod_purchas=1,
            )
        ]
    )
    search.best_learner_.predict_proba({"baskets": new_baskets, "products": new_products})

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    array([[9.99799160e-01, 2.00839553e-04]])

.. GENERATED FROM PYTHON SOURCE LINES 218-230

Conclusion
----------

In this example, we have shown how to build a multi-table machine learning
pipeline with skrub DataOps. We have seen how DataOps allow us to use familiar
pandas operations to manipulate dataframes, and how we can build a DataOps
plan that works with multiple tables and performs hyperparameter tuning on the
resulting pipeline.

If you want to learn more about tuning hyperparameters using skrub DataOps,
see the :ref:`Tuning Pipelines example ` for an in-depth tutorial.

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (4 minutes 10.441 seconds)

.. _sphx_glr_download_auto_examples_data_ops_12_multiple_tables.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: binder-badge

      .. image:: images/binder_badge_logo.svg
        :target: https://mybinder.org/v2/gh/skrub-data/skrub/0.6.2?urlpath=lab/tree/notebooks/auto_examples/data_ops/12_multiple_tables.ipynb
        :alt: Launch binder
        :width: 150 px

    .. container:: lite-badge

      .. image:: images/jupyterlite_badge_logo.svg
        :target: ../../lite/lab/index.html?path=auto_examples/data_ops/12_multiple_tables.ipynb
        :alt: Launch JupyterLite
        :width: 150 px

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: 12_multiple_tables.ipynb <12_multiple_tables.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: 12_multiple_tables.py <12_multiple_tables.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: 12_multiple_tables.zip <12_multiple_tables.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_