Encoding: creating feature matrices

Encoding (also called vectorizing) creates numerical features from the data, converting dataframes, strings, dates… Different encoders are suited to different types of data.

Encoding string columns

Non-normalized entries and dirty categories

String columns can be seen as categories for statistical analysis, but standard tools for representing categories fail if these strings are not normalized into a small number of well-identified forms, if they have typos, or if there are too many categories.

Skrub provides encoders that give good representations of open-ended strings or dirty categories, e.g. as replacements for OneHotEncoder (a usage sketch follows the list):

  • GapEncoder: infers latent categories and represents the data on them. Very interpretable, though sometimes slow

  • MinHashEncoder: a very scalable encoding of strings that captures their similarities. Particularly useful on large databases and well suited to learners such as trees (boosted trees or random forests)

  • SimilarityEncoder: a simple encoder that represents each string by its similarities to all the categories in the data. Useful when there are few categories but we still want to capture the links between them (e.g. “west”, “north”, “north-west”)
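As a minimal sketch, GapEncoder and MinHashEncoder can be applied directly to a string column; this assumes the single-column API of recent skrub versions (a pandas Series in, a feature matrix out), and the column values are made up for illustration:

```python
import pandas as pd
from skrub import GapEncoder, MinHashEncoder

# Hypothetical dirty-category column: job titles in inconsistent forms.
positions = pd.Series(
    [
        "Police Officer II",
        "Police Officer III",
        "Fire Fighter",
        "Firefighter/Rescuer",
        "Office Services Coordinator",
    ],
    name="position",
)

# GapEncoder infers latent categories from substrings shared across
# entries; n_components sets how many latent categories to extract.
gap_features = GapEncoder(n_components=3).fit_transform(positions)

# MinHashEncoder hashes character n-grams so that similar strings get
# similar representations; it pairs well with tree-based learners.
minhash_features = MinHashEncoder(n_components=10).fit_transform(positions)
```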

Text with diverse entries

When the strings in a column are not dirty categories but rather diverse entries of text (names, open-ended or free-flowing text), it is useful to use language models of various sizes to represent string columns as embeddings. Depending on the task and dataset, this approach may lead to significant improvements in prediction quality, albeit with increased memory usage and computation time.

Skrub integrates these language models as scikit-learn transformers, allowing them to be easily plugged into TableVectorizer and Pipeline.
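For instance, such an encoder (here the TextEncoder introduced below) can be passed to the TableVectorizer and chained with a scikit-learn estimator. A hedged sketch, assuming the `high_cardinality` parameter name of recent skrub releases:

```python
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.pipeline import make_pipeline
from skrub import TableVectorizer, TextEncoder

# Use a language-model encoder for high-cardinality string columns;
# the other columns are handled by the TableVectorizer defaults.
model = make_pipeline(
    TableVectorizer(high_cardinality=TextEncoder()),
    HistGradientBoostingRegressor(),
)
# model.fit(df, y)  # df: a dataframe with text columns, y: the target
```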

These language models are pre-trained deep-learning encoders that have been fine-tuned specifically for embedding tasks. Note that skrub does not provide a simple way to fine-tune language models directly on your dataset.

Warning

These encoders require installing additional dependencies around torch. See the “deep learning dependencies” section in the Install guide for more details.

With TextEncoder, a wrapper around the sentence-transformers package, you can use any sentence embedding model available on the HuggingFace Hub or locally stored on your disk. This means you can fine-tune a model using the sentence-transformers library and then use it with the TextEncoder like any other pre-trained model. For more information, see the sentence-transformers fine-tuning guide.
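As an illustration, here is a sketch of the TextEncoder applied to a column of free-flowing text; the model name below is just one example of a sentence-embedding model available on the HuggingFace Hub, not a recommendation:

```python
import pandas as pd
from skrub import TextEncoder

comments = pd.Series(
    [
        "The room was clean and the staff was friendly.",
        "Terrible experience: the food was cold.",
        "Great location, close to the city center.",
    ],
    name="comment",
)

# model_name can be any sentence-embedding model from the HuggingFace
# Hub, or a path to a model stored on disk.
encoder = TextEncoder(model_name="sentence-transformers/all-MiniLM-L6-v2")
embeddings = encoder.fit_transform(comments)  # one embedding row per entry
```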

Encoding dates

The DatetimeEncoder encodes dates and times: it represents them as the time in seconds since a fixed date, but also adds features useful for capturing regularities: day of the week, month of the year…
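A minimal sketch, assuming the parameter names of recent skrub releases (`add_weekday`, `add_total_seconds`):

```python
import pandas as pd
from skrub import DatetimeEncoder

logins = pd.Series(
    pd.to_datetime(
        ["2024-01-15 08:30", "2024-05-02 17:45", "2024-11-30 23:10"]
    ),
    name="login",
)

# Extract calendar features (year, month, day, hour…) along with the
# day of the week and the total seconds since a fixed reference date.
encoder = DatetimeEncoder(add_weekday=True, add_total_seconds=True)
features = encoder.fit_transform(logins)
```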