Using the skrub choose_* functions to tune hyperparameters
skrub provides a convenient way to declare ranges of possible values, and tune those choices to keep the values that give the best predictions on a validation set.
Rather than specifying a grid of hyperparameters separately from the pipeline, we simply insert special skrub objects in place of the value.
We define the same set of operations as before:
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> import skrub
>>> diabetes_df = load_diabetes(as_frame=True)["frame"]
>>> data = skrub.var("data", diabetes_df)
>>> X = data.drop(columns="target", errors="ignore").skb.mark_as_X()
>>> y = data["target"].skb.mark_as_y()
>>> pred = X.skb.apply(Ridge(), y=y)
Now, we can replace the hyperparameter alpha (which should be a float) with a range created by skrub.choose_float(). skrub can use it to select the best value for alpha.
>>> pred = X.skb.apply(
... Ridge(alpha=skrub.choose_float(0.01, 10.0, log=True, name="α")), y=y
... )
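For contrast, the classical scikit-learn workflow declares the search space separately from the estimator, which is exactly what the inline skrub choices avoid. A minimal sklearn-only sketch of the same tuning (the grid values here are illustrative, not part of the skrub example above):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X, y = load_diabetes(return_X_y=True)

# The candidate values live outside the estimator, unlike skrub's
# inline choose_float(0.01, 10.0, log=True).
param_grid = {"alpha": np.logspace(-2, 1, 10)}  # 0.01 ... 10.0, log-spaced

search = GridSearchCV(Ridge(), param_grid).fit(X, y)
best_alpha = search.best_params_["alpha"]
```

The grid and the model are two separate objects that must be kept in sync by hand; skrub's choose_* objects remove that duplication.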
Warning

When we call .skb.make_learner(), the pipeline we obtain does not perform any hyperparameter tuning: it uses the default value of each choice. For numeric choices this is the middle of the range, and for choose_from() it is the first option we give it.
To get a pipeline that runs an internal cross-validation to select the best hyperparameters, we must use .skb.make_grid_search() or .skb.make_randomized_search().
Here are the different kinds of choices, along with their default outcome when we are not using hyperparameter search:

Choosing function | Description | Default outcome |
---|---|---|
choose_from([10, 20]) | Choose between the listed options (10 and 20). | First outcome in the list: 10 |
choose_from({"a": 10, "b": 20}) | Choose between the listed options (10 and 20). Dictionary keys serve as names for the options. | First outcome in the dictionary: 10 |
optional(value) | Choose between the provided value and None. | The provided value |
choose_bool() | Choose between True and False. | True |
choose_float(low, high) | Sample a floating-point number in a range. | The middle of the range |
choose_int(low, high) | Sample an integer in a range. | The integer closest to the middle of the range |
choose_float(low, high, log=True) | Sample a float in a range on a logarithmic scale. | The middle of the range on a log scale |
choose_int(low, high, log=True) | Sample an integer in a range on a logarithmic scale. | The integer closest to the middle of the range on a log scale |
choose_float(low, high, n_steps=n) | Sample a float on a grid. | The step closest to the middle of the range |
choose_int(low, high, n_steps=n) | Sample an integer on a grid. | The step closest to the middle of the range |
choose_float(low, high, log=True, n_steps=n) | Sample a float on a logarithmically spaced grid. | The step closest to the middle of the range on a log scale |
choose_int(low, high, log=True, n_steps=n) | Sample an integer on a logarithmically spaced grid. | The step closest to the middle of the range on a log scale |
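The grid variants (with n_steps) pick their default as the step nearest the middle of the range. A quick pure-Python sketch of that rule (the bounds and step count are arbitrary, and this is an illustration of the computation, not skrub's internal code):

```python
low, high, n_steps = 1.0, 100.0, 5

# Evenly spaced grid over [low, high].
steps = [low + i * (high - low) / (n_steps - 1) for i in range(n_steps)]

# Default outcome: the step closest to the middle of the range.
middle = (low + high) / 2
default = min(steps, key=lambda s: abs(s - middle))
print(steps)    # [1.0, 25.75, 50.5, 75.25, 100.0]
print(default)  # 50.5
```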
The default choices for a DataOp, i.e. those used when calling .skb.make_learner(), can be inspected with .skb.describe_defaults():
>>> pred.skb.describe_defaults()
{'α': 0.316...}
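The default of 0.316 is the midpoint of the range on a log scale, i.e. the geometric mean of the two bounds. A small sanity check (plain Python, not skrub internals):

```python
import math

low, high = 0.01, 10.0            # bounds passed to choose_float(..., log=True)
default = math.sqrt(low * high)   # log-scale midpoint = geometric mean
print(round(default, 3))          # 0.316
```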
We can then find the best hyperparameters.
>>> search = pred.skb.make_randomized_search(fitted=True)
>>> search.results_
mean_test_score α
0 0.478338 0.141359
1 0.476022 0.186623
2 0.474905 0.205476
3 0.457807 0.431171
4 0.456808 0.443038
5 0.439670 0.643117
6 0.420917 0.866328
7 0.380719 1.398196
8 0.233172 4.734989
9 0.168444 7.780156
Rather than fitting a randomized or grid search to find the best combination, it is also possible to obtain an iterator over the different parameter combinations, to inspect their outputs or to take manual control over the model selection, using .skb.iter_learners_grid() or .skb.iter_learners_randomized(). Those yield the candidate pipelines explored by the grid search and the randomized search, respectively.
A human-readable description of the parameters of a pipeline can be obtained with SkrubLearner.describe_params():
>>> search.best_learner_.describe_params()
{'α': 0.054...}
It is also possible to use ParamSearch.plot_results() to visualize the results of the search as a parallel coordinates plot.
A full example of how to use hyperparameter search is available in Hyperparameter tuning with DataOps.
Feature selection with skrub SelectCols and DropCols
It is possible to combine SelectCols and DropCols with choose_from() to perform feature selection: dropping specific columns and evaluating how this affects the downstream performance.
Consider this example. We first define the variable:
>>> import pandas as pd
>>> import skrub.selectors as s
>>> from sklearn.preprocessing import StandardScaler, OneHotEncoder
>>> df = pd.DataFrame({"text": ["foo", "bar", "baz"], "number": [1, 2, 3]})
>>> X = skrub.X(df)
Then, we use the skrub selectors to encode each column with a different transformer:
>>> X_enc = X.skb.apply(StandardScaler(), cols=s.numeric()).skb.apply(
... OneHotEncoder(sparse_output=False), cols=s.string()
... )
>>> X_enc
<Apply OneHotEncoder>
Result:
―――――――
number text_bar text_baz text_foo
0 -1.224745 0.0 0.0 1.0
1 0.000000 1.0 0.0 0.0
2 1.224745 0.0 1.0 0.0
Now we can use skrub.DropCols to define two alternative selection strategies: either we drop the column number, or we drop all columns whose names start with text_. We rely again on the skrub selectors for this:
>>> from skrub import DropCols
>>> drop = DropCols(cols=skrub.choose_from(
... {"number": s.cols("number"), "text": s.glob("text_*")})
... )
>>> X_enc.skb.apply(drop)
<Apply DropCols>
Result:
―――――――
text_bar text_baz text_foo
0 0.0 0.0 1.0
1 1.0 0.0 0.0
2 0.0 1.0 0.0
We can see the generated parameter grid with DataOp.skb.describe_param_grid():
>>> X_enc.skb.apply(drop).skb.describe_param_grid()
"- choose_from({'number': …, 'text': …}): ['number', 'text']\n"
A more advanced application of this technique is used in this tutorial on forecasting timeseries, along with the feature engineering required to prepare the columns, and the analysis of the results.