Using the skrub choose_* functions to tune hyperparameters
skrub provides a convenient way to declare ranges of possible values and to tune those choices, keeping the values that give the best predictions on a validation set.
Rather than specifying a grid of hyperparameters separately from the pipeline, we simply insert special skrub objects in place of the values.
We define the same set of operations as before:
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> import skrub
>>> diabetes_df = load_diabetes(as_frame=True)["frame"]
>>> data = skrub.var("data", diabetes_df)
>>> X = data.drop(columns="target", errors="ignore").skb.mark_as_X()
>>> y = data["target"].skb.mark_as_y()
>>> pred = X.skb.apply(Ridge(), y=y)
Now, we can
replace the hyperparameter alpha (which should be a float) with a range
created by skrub.choose_float(). skrub can use it to select the best value
for alpha.
>>> pred = X.skb.apply(
... Ridge(alpha=skrub.choose_float(1e-6, 10.0, log=True, name="α")), y=y
... )
Warning
The pipeline obtained with .skb.make_learner() does not perform any hyperparameter tuning: by default it uses the default outcome of each choice. For numeric choices this is the middle of the range (unless an explicit default has been set when creating the choice), and for choose_from() it is the first option we give it. We can also obtain random choices, or choices suggested by an Optuna trial, by passing the choose parameter.
To get a pipeline that runs an internal cross-validation to select the best
hyperparameters, we must use .skb.make_grid_search() or .skb.make_randomized_search(). We can also use Optuna to choose the best hyperparameters as shown
in this example.
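For example, a grid search needs an enumerable search space, so a continuous choose_float() range would first be discretized with n_steps. A minimal sketch (not part of the example above; we assume make_grid_search() accepts fitted=True like its randomized counterpart):
>>> pred_grid = X.skb.apply(
...     Ridge(alpha=skrub.choose_float(1e-6, 10.0, log=True, n_steps=5, name="α")),
...     y=y,
... )
>>> grid_search = pred_grid.skb.make_grid_search(fitted=True)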
Here are the different kinds of choices, along with their default outcome when we are not using hyperparameter search (the signatures in the first column are illustrative):

| Choosing function | Description | Default outcome |
|---|---|---|
| choose_from([10, 20]) | Choose between the listed options (10 and 20). | First outcome in the list: 10 |
| choose_from({"a": 10, "b": 20}) | Choose between the listed options (10 and 20). Dictionary keys serve as names for the options. | First outcome in the dictionary: 10 |
| optional(10) | Choose between the provided value and None. | The provided value: 10 |
| choose_bool() | Choose between True and False. | True |
| choose_float(1.0, 10.0) | Sample a floating-point number in a range. | The middle of the range: 5.5 |
| choose_int(1, 10) | Sample an integer in a range. | The integer closest to the middle of the range |
| choose_float(1.0, 100.0, log=True) | Sample a float in a range on a logarithmic scale. | The middle of the range on a log scale: 10.0 |
| choose_int(1, 100, log=True) | Sample an integer in a range on a logarithmic scale. | The integer closest to the middle of the range on a log scale: 10 |
| choose_float(1.0, 10.0, n_steps=4) | Sample a float on a grid. | The step closest to the middle of the range |
| choose_int(1, 10, n_steps=4) | Sample an integer on a grid. | The step closest to the middle of the range |
| choose_float(1.0, 100.0, log=True, n_steps=4) | Sample a float on a logarithmically spaced grid. | The step closest to the middle of the range on a log scale |
| choose_int(1, 100, log=True, n_steps=4) | Sample an integer on a logarithmically spaced grid. | The step closest to the middle of the range on a log scale |
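Choices of different kinds can be combined freely in one pipeline, and each named choice becomes a separate dimension of the search space. A minimal sketch on the same data (the gradient-boosting estimator and the tuned parameters are purely illustrative):
>>> from sklearn.ensemble import HistGradientBoostingRegressor
>>> pred_hgb = X.skb.apply(
...     HistGradientBoostingRegressor(
...         learning_rate=skrub.choose_float(0.01, 0.9, log=True, name="learning_rate"),
...         max_leaf_nodes=skrub.choose_int(8, 64, log=True, name="max_leaf_nodes"),
...         early_stopping=skrub.choose_bool(name="early_stopping"),
...     ),
...     y=y,
... )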
The default choices for a DataOp, those that get used when calling
.skb.make_learner(), can be inspected with
.skb.describe_defaults():
>>> pred.skb.describe_defaults()
{'α': 0.00316...}
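If an explicit default was set when creating the choice, it replaces the midpoint. A sketch, assuming the default parameter of choose_float():
>>> pred_default = X.skb.apply(
...     Ridge(alpha=skrub.choose_float(1e-6, 10.0, log=True, default=1.0, name="α")), y=y
... )
>>> pred_default.skb.describe_defaults()
{'α': 1.0}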
We can then fit a randomized search to find the best hyperparameters:
>>> search = pred.skb.make_randomized_search(fitted=True)
>>> search.results_
α mean_test_score
0 0.000480 0.482327
1 0.000287 0.482327
2 0.000014 0.482317
3 0.000012 0.482317
4 0.000006 0.482317
5 0.134157 0.478651
6 0.249613 0.472019
7 0.612327 0.442312
8 2.664713 0.308492
9 3.457901 0.275007
A human-readable description of parameters for a pipeline can be obtained with
SkrubLearner.describe_params():
>>> search.best_learner_.describe_params()
{'α': 0.000479...}
It is also possible to use ParamSearch.plot_results() to visualize the results
of the search using a parallel coordinates plot.
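For instance (a sketch; the interactive plot itself is not reproduced here):
>>> fig = search.plot_results()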
The search could also be done with Optuna, either by passing backend='optuna' to
DataOp.skb.make_randomized_search(), or by using Optuna directly:
>>> import optuna
>>> def objective(trial):
... learner = pred.skb.make_learner(choose=trial)
... cv_results = skrub.cross_validate(learner, pred.skb.get_data())
... return cv_results['test_score'].mean()
>>> study = optuna.create_study(direction="maximize")
>>> study.optimize(objective, n_trials=10)
>>> best_learner = pred.skb.make_learner(choose=study.best_trial)
>>> best_learner.describe_params()
{'α': 0.0006391165935023005}
Rather than fitting a randomized or grid search to find the best combination, it
is also possible to obtain an iterator over different parameter combinations, to
inspect their outputs or to keep manual control over the model selection. This can
be done with .skb.iter_learners_grid() or .skb.iter_learners_randomized()
(which yield the candidate pipelines explored by the grid and randomized
search, respectively), or with the choose parameter of .skb.make_learner().
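A minimal sketch of such manual model selection, assuming the iterator can simply be truncated with itertools.islice (cross_validate and describe_params are used as above):
>>> import itertools
>>> for learner in itertools.islice(pred.skb.iter_learners_randomized(), 3):
...     cv_results = skrub.cross_validate(learner, pred.skb.get_data())
...     print(learner.describe_params(), cv_results["test_score"].mean())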
A full example of how to use hyperparameter search is available in Hyperparameter tuning with DataOps, and a full example using Optuna is in Tuning DataOps with Optuna.
Feature selection with skrub SelectCols and DropCols
It is possible to combine SelectCols and DropCols with
choose_from() to perform feature selection by dropping specific columns
and evaluating how this affects the downstream performance.
Consider this example. We first define the variable:
>>> import pandas as pd
>>> import skrub.selectors as s
>>> from sklearn.preprocessing import StandardScaler, OneHotEncoder
>>> df = pd.DataFrame({"text": ["foo", "bar", "baz"], "number": [1, 2, 3]})
>>> X = skrub.X(df)
Then, we use the skrub selectors to encode each column with a different transformer:
>>> X_enc = X.skb.apply(StandardScaler(), cols=s.numeric()).skb.apply(
... OneHotEncoder(sparse_output=False), cols=s.string()
... )
>>> X_enc
<Apply OneHotEncoder>
Result:
―――――――
number text_bar text_baz text_foo
0 -1.224745 0.0 0.0 1.0
1 0.000000 1.0 0.0 0.0
2 1.224745 0.0 1.0 0.0
Now we can use skrub.DropCols to define two alternative selection strategies:
one drops the column "number", the other drops all columns whose names start with
"text". We rely again on the skrub selectors for this:
>>> from skrub import DropCols
>>> drop = DropCols(cols=skrub.choose_from(
... {"number": s.cols("number"), "text": s.glob("text_*")})
... )
>>> X_enc.skb.apply(drop)
<Apply DropCols>
Result:
―――――――
text_bar text_baz text_foo
0 0.0 0.0 1.0
1 1.0 0.0 0.0
2 0.0 1.0 0.0
We can see the generated parameter grid with DataOp.skb.describe_param_grid().
>>> X_enc.skb.apply(drop).skb.describe_param_grid()
"- choose_from({'number': …, 'text': …}): ['number', 'text']\n"
A more advanced application of this technique is used in this tutorial on forecasting timeseries, along with the feature engineering required to prepare the columns, and the analysis of the results.