Release history#
Release 0.4.1#
Changes#
A new parameter verbose has been added to the TableReport to toggle on or off the printing of progress information when a report is being generated. #1182 by Priscilla Baah.
A parameter verbose has been added to patch_display() to toggle on or off the printing of progress information when a table report is being generated. #1188 by Priscilla Baah.
tabular_learner() accepts the alias "regression" for the option "regressor" and "classification" for "classifier". #1180 by Mojdeh Rastgoo.
Bug fixes#
Generating a TableReport could have an effect on the matplotlib configuration which could cause plots not to display inline in jupyter notebooks any more. This has been fixed in skrub in #1172 by Jérôme Dockès and the matplotlib issue can be tracked here.
The labels on bar plots in the TableReport for columns of object dtypes that have a repr spanning multiple lines could be unreadable. This has been fixed in #1196 by Jérôme Dockès.
Improve the performance of deduplicate() by removing some unnecessary computations. #1193 by Jérôme Dockès.
Maintenance#
Make skrub compatible with scikit-learn 1.6. #1169 by Guillaume Lemaitre.
Release 0.4.0#
Highlights#
The TextEncoder can extract embeddings from a string column with a deep learning language model (possibly downloaded from the HuggingFace Hub).
Several improvements to the TableReport, such as better support for scripts other than the Latin alphabet in the bar plot labels, smaller report sizes, and clipping of outliers to better see the details of distributions in histograms. See the full changelog for details.
The TableVectorizer can now drop columns that contain a fraction of null values above a user-chosen threshold.
New features#
The TextEncoder is now available to encode string columns with diverse entries. It allows the representation of table entries as embeddings computed by a deep learning language model. The weights of this model can be fetched locally or from the HuggingFace Hub. #1077 by Vincent Maladiere.
The column_associations() function has been added. It computes a pairwise measure of statistical dependence between all columns in a dataframe (the same as shown in the TableReport). #1109 by Jérôme Dockès.
The patch_display() function has been added. It changes the display of pandas and polars dataframes in jupyter notebooks to replace them with a TableReport. This can be undone with unpatch_display(). #1108 by Jérôme Dockès.
Major changes#
AggJoiner, AggTarget and MultiAggJoiner now require the operations argument. They no longer split columns by type, but apply operations on all selected columns. "median" is now supported; "hist" and "value_counts" are no longer supported. #1116 by Théo Jolivet.
AggTarget no longer supports y inputs of type list. #1116 by Théo Jolivet.
Minor changes#
The column filter selection dropdown in the tablereport is smaller and its label has been removed to save space. #1107 by Jérôme Dockès.
The TableReport now uses the font size of its parent element when inserted into another page. This makes it smaller in pages that use a smaller font size than the browser default such as VSCode in some configurations. It also makes it easier to control its size when inserting it in a web page by setting the font size of its parent element. A few other small adjustments have also been made to make it a bit more compact. #1098 by Jérôme Dockès.
Display of labels in the plots of the TableReport has improved, especially for scripts other than the Latin alphabet:
Before, some characters could be missing and replaced by empty boxes.
Before, when the text was truncated, the ellipsis “…” could appear on the wrong side for right-to-left scripts.
Moreover, when the text contains line breaks it now appears all on one line. Note this only affects the labels in the plots; the rest of the report did not have these problems. #1097 and #1138 by Jérôme Dockès.
In the TableReport it is now possible, before clicking any of the cells, to reach the dataframe sample table and activate a cell with tab key navigation. #1101 by Jérôme Dockès.
The “Column name” column of the “summary statistics” table in the TableReport is now always visible when scrolling the table. #1102 by Jérôme Dockès.
Added parameter drop_null_fraction to TableVectorizer to drop columns based on whether they contain a fraction of nulls larger than the given threshold. #1115 and #1149 by Riccardo Cappuzzo.
The TableReport now provides more helpful output for columns of dtype TimeDelta / Duration. #1152 by Jérôme Dockès.
The TableReport now also reports the number of unique values for numeric columns. #1154 by Jérôme Dockès.
The TableReport, when plotting histograms, now detects outliers and clips the range of data shown in the histogram. This allows seeing more detail in the shown distribution. #1157 by Jérôme Dockès.
Bug fixes#
The TableReport could raise an exception when one of the columns contained datetimes with time zones and missing values; this has been fixed in #1114 by Jérôme Dockès.
In scikit-learn versions older than 1.4 the TableVectorizer could fail on polars dataframes when used with the default parameters. This has been fixed in #1122 by Jérôme Dockès.
The TableReport would raise an exception when the input (pandas) dataframe contained several columns with the same name. This has been fixed in #1125 by Jérôme Dockès.
The TableReport would raise an exception when a column contained infinite values. This has been fixed in #1150 and #1151 by Jérôme Dockès.
Release 0.3.1#
Minor changes#
For tree-based models, tabular_learner() now adds handle_unknown='use_encoded_value' to the OrdinalEncoder, to avoid errors with new categories in the test set. This is consistent with the setting of the OneHotEncoder used by default in the TableVectorizer. #1078 by Gaël Varoquaux.
The reports created by TableReport, when inserted in an html page (or displayed in a notebook), now use the same font as the surrounding page. #1038 by Jérôme Dockès.
The content of the dataframe corresponding to the currently selected table cell in the TableReport can be copied without actually selecting the text (as in a spreadsheet). #1048 by Jérôme Dockès.
The selection of content displayed in the TableReport’s copy-paste boxes has been removed. Now they always display the value of the selected item. When copied, the repr of the selected item is copied to the clipboard. #1058 by Jérôme Dockès.
A “stats” panel has been added to the TableReport, showing summary statistics for all columns (number of missing values, mean, etc. – similar to pandas.info()) in a table. It can be sorted by each column. #1056 and #1068 by Jérôme Dockès.
The credit fraud dataset is now available with the fetch_credit_fraud() function. #1053 by Vincent Maladiere.
Added zero padding for column names in MinHashEncoder to improve column ordering consistency. #1069 by Shreekant Nandiyawar.
The selection in the TableReport’s sample table can now be manipulated with the keyboard. #1065 by Jérôme Dockès.
The TableReport now displays the pandas (multi-)index, and has a better display of & interaction with pandas columns when the columns are a MultiIndex. #1083 by Jérôme Dockès.
It is possible to control the number of rows displayed by the TableReport in the “sample” tab panel by specifying n_rows. #1083 by Jérôme Dockès.
The TableReport used to raise an exception when the dataframe contained unhashable types such as python lists. This has been fixed in #1087 by Jérôme Dockès.
The HTML representation of the fitted TableVectorizer now displays the column names. This has been fixed in #1093 by Shreekant Nandiyawar.
AggTarget now works even when y is a Series and no longer raises an error. This has been fixed in #1094 by Shreekant Nandiyawar.
Release 0.3.0#
Highlights#
Polars dataframes are now supported across all skrub estimators.
The TableReport generates an interactive report for a dataframe. This page regroups some precomputed examples.
Major changes#
The InterpolationJoiner now supports polars dataframes. #1016 by Théo Jolivet.
The TableReport provides an interactive report on a dataframe’s contents: an overview, summary statistics and plots, statistical associations between columns. It can be displayed in a jupyter notebook, a browser tab or saved as a static HTML page. #984 by Jérôme Dockès.
Minor changes#
Joiner and fuzzy_join() used to raise an error when columns with the same name appeared in the main and auxiliary table (after adding the suffix). This is now allowed and a random string is inserted in the duplicate column name to ensure all names are unique. #1014 by Jérôme Dockès.
AggJoiner and AggTarget could produce outputs whose column names varied across calls to transform in some cases in the presence of duplicate column names; now the output names are always the same. #1013 by Jérôme Dockès.
In some cases AggJoiner and AggTarget inserted a column in the output named “index” containing the pandas index of the auxiliary table. This has been corrected. #1020 by Jérôme Dockès.
Release 0.2.0#
Major changes#
The Joiner has been adapted to support polars dataframes. #945 by Théo Jolivet.
The TableVectorizer now consistently applies the same transformation across different calls to transform. There have also been some breaking changes to its functionality: (i) all transformations are now applied independently to each column, i.e. it does not perform multivariate transformations; (ii) in specific_transformers the same column may not be used twice (go through 2 different transformers). #902 by Jérôme Dockès.
Some parameters of TableVectorizer have been renamed: high_cardinality_transformer → high_cardinality, low_cardinality_transformer → low_cardinality, datetime_transformer → datetime, numeric_transformer → numeric. #947 by Jérôme Dockès.
The GapEncoder and MinHashEncoder are now single-column transformers: their fit, fit_transform and transform methods accept a single column (a pandas or polars Series). Dataframes and numpy arrays are not accepted. #920 and #923 by Jérôme Dockès.
Added the MultiAggJoiner that allows augmenting a main table with multiple auxiliary tables. #876 by Théo Jolivet.
AggJoiner now only accepts a single table as an input, and some of its parameters were renamed to be consistent with the MultiAggJoiner. It now has a key parameter that allows joining main and auxiliary tables that share the same column names. #876 by Théo Jolivet.
tabular_learner() has been added to easily create a supervised learner that works well on tabular data. #926 by Jérôme Dockès.
Minor changes#
GapEncoder and MinHashEncoder used to modify their input in-place, replacing missing values with a string. They no longer do so. Their parameter handle_missing has been removed; now missing values are always treated as the empty string. #930 by Jérôme Dockès.
The minimum supported python version is now 3.9. #939 by Jérôme Dockès.
Skrub supports numpy 2. #946 by Jérôme Dockès.
fetch_ken_embeddings() now adds the suffix even with the default value for the parameter pca_components. #956 by Guillaume Lemaitre.
Joiner now performs some preprocessing (the same as done by the TableVectorizer, e.g. trying to parse dates, converting pandas object columns with mixed types to a single type) on the joining columns before vectorizing them. #972 by Jérôme Dockès.
skrub release 0.1.1#
This is a bugfix release to adapt to the most recent versions of pandas (2.2) and scikit-learn (1.5). There are no major changes to the functionality of skrub.
skrub release 0.1.0#
Major changes#
TargetEncoder has been removed in favor of sklearn.preprocessing.TargetEncoder, available since scikit-learn 1.3.
Joiner and fuzzy_join() support several ways of rescaling distances; match_score has been replaced by max_dist; bugs which prevented the Joiner from consistently vectorizing inputs and accepting or rejecting matches across calls to transform have been fixed. #821 by Jérôme Dockès.
InterpolationJoiner was added to join two tables by using machine-learning to infer the matching rows from the second table. #742 by Jérôme Dockès.
Pipelines including TableVectorizer can now be grid-searched, since we can now call set_params on the default transformers of TableVectorizer. #814 by Vincent Maladiere.
to_datetime() is now available to support pandas.to_datetime over dataframes and 2d arrays. #784 by Vincent Maladiere.
Some parameters of Joiner have changed. The goal is to harmonize parameters across all estimators that perform join(-like) operations, as discussed in #751. #757 by Jérôme Dockès.
dataframe.pd_join(), dataframe.pd_aggregate(), dataframe.pl_join() and dataframe.pl_aggregate() are now available in the dataframe submodule. #733 by Vincent Maladiere.
FeatureAugmenter is renamed to Joiner. #674 by Jovan Stojanovic.
fuzzy_join() and FeatureAugmenter can now join on datetime columns. #552 by Jovan Stojanovic.
Joiner now supports joining on multiple column keys. #674 by Jovan Stojanovic.
The signatures of all encoders and functions have been revised to enforce cleaner calls. This means that some arguments that could previously be passed positionally now have to be passed as keywords. #514 by Lilian Boulard.
Parallelized the GapEncoder column-wise. Parameters n_jobs and verbose added to the signature. #582 by Lilian Boulard.
Introducing AggJoiner, a transformer performing aggregation on auxiliary tables followed by left-joining on a base table. #600 by Vincent Maladiere.
Introducing AggTarget, a transformer performing aggregation on the target y, followed by left-joining on a base table. #600 by Vincent Maladiere.
Added the SelectCols and DropCols transformers that allow selecting a subset of a dataframe’s columns inside of a pipeline. #804 by Jérôme Dockès.
Minor changes#
DatetimeEncoder doesn’t remove constant features anymore. It also supports an ‘errors’ argument to raise or coerce errors during transform, and an ‘add_total_seconds’ argument to include the number of seconds since Epoch. #784 by Vincent Maladiere.
Scaling of matching_score in fuzzy_join() is now between 0 and 1; it used to be between 0.5 and 1. Moreover, the division by 0 error that occurred when all rows had a perfect match has been fixed. #802 by Jérôme Dockès.
TableVectorizer is now able to apply parallelism at the column level rather than the transformer level. This is the default for univariate transformers, like MinHashEncoder and GapEncoder. #592 by Leo Grinsztajn.
inverse_transform in SimilarityEncoder now works as expected; it used to raise an exception. #801 by Jérôme Dockès.
TableVectorizer propagates the n_jobs parameter to the underlying transformers, unless the underlying transformer already sets n_jobs explicitly. #761 by Leo Grinsztajn, Guillaume Lemaitre, and Jerome Dockes.
Parallelized the deduplicate() function. Parameter n_jobs added to the signature. #618 by Jovan Stojanovic and Lilian Boulard.
Functions datasets.fetch_ken_embeddings(), datasets.fetch_ken_table_aliases() and datasets.fetch_ken_types() have been renamed. #602 by Jovan Stojanovic.
Make pyarrow an optional dependency to facilitate the integration with pyodide. #639 by Guillaume Lemaitre.
Bumped minimal required Python version to 3.10. #606 by Gael Varoquaux
Bumped minimal required versions for the dependencies: numpy >= 1.23.5, scipy >= 1.9.3, scikit-learn >= 1.2.1, pandas >= 1.5.3. #613 by Lilian Boulard
You can now pass column-specific transformers to TableVectorizer using the specific_transformers argument. #583 by Lilian Boulard.
Do not support 1-D array (and pandas Series) in TableVectorizer. Pass a 2-D array (or a pandas DataFrame) with a single column instead. This change is for compliance with the scikit-learn API. #647 by Guillaume Lemaitre.
Fixes a bug in TableVectorizer with remainder: it is now cloned if it’s a transformer so that the same instance is not shared between different transformers. #678 by Guillaume Lemaitre.
GapEncoder speedup. #680 by Leo Grinsztajn.
Improved GapEncoder’s early stopping logic. The parameters tol and min_iter have been removed. The parameter max_no_improvement can now be used to control the early stopping. #663 by Simona Maggio, #593 by Lilian Boulard, #681 by Leo Grinsztajn.
Implementation improvement leading to a ~x5 speedup for each iteration.
Better default hyperparameters: batch_size now defaults to 1024, and max_iter_e_steps to 1.
Removed the most_frequent and k-means strategies from the SimilarityEncoder. These strategies were used for scalability reasons, but we recommend using the MinHashEncoder or the GapEncoder instead. #596 by Leo Grinsztajn.
Removed the similarity argument from the SimilarityEncoder constructor, as we only support the ngram similarity. #596 by Leo Grinsztajn.
Added the analyzer parameter to the SimilarityEncoder to allow word counts for similarity measures. #619 by Jovan Stojanovic.
skrub now uses modern type hints introduced in PEP 585. #609 by Lilian Boulard.
Some bug fixes for TableVectorizer (#579):
check_is_fitted now looks at “transformers_” rather than “columns_”.
The default of the remainder parameter in the docstring is now “passthrough” instead of “drop” to match the implementation.
uint8 and int8 dtypes are now considered as numerical columns.
Removed the leading “<” and trailing “>” symbols from KEN entities and types. #601 by Jovan Stojanovic
Add get_feature_names_out method to MinHashEncoder. #616 by Leo Grinsztajn.
Removed requests from the requirements. #613 by Lilian Boulard.
TableVectorizer now handles mixed types columns without failing by converting them to string before type inference. #623 by Leo Grinsztajn.
Moved the default storage location of data to the user’s home folder. #652 by Felix Lefebvre and Gael Varoquaux.
Fixed bug when using TableVectorizer’s transform method on categorical columns with missing values. #644 by Leo Grinsztajn.
TableVectorizer never outputs a sparse matrix by default. This can be changed by increasing the sparse_threshold parameter. #646 by Leo Grinsztajn.
TableVectorizer doesn’t fail anymore if an inferred type doesn’t work during transform. The new entries not matching the type are replaced by missing values. #666 by Leo Grinsztajn.
Dataset fetcher datasets.fetch_employee_salaries() now has a parameter overload_job_titles to allow overloading the job titles (employee_position_title) with the column underfilled_job_title, which provides some more information about the job title. #581 by Lilian Boulard.
Fixed bugs in DatetimeEncoder which were triggered when extract_until was “year”, “month”, “microseconds” or “nanoseconds”, and added the option to set it to None to only extract total_time, the time from epoch. #743 by Leo Grinsztajn.
Before skrub: dirty_cat#
Skrub was born from the dirty_cat package.
Dirty-cat Release 0.4.1#
Major changes#
fuzzy_join() and FeatureAugmenter can now join on numerical columns based on the euclidean distance. #530 by Jovan Stojanovic.
fuzzy_join() and FeatureAugmenter can perform many-to-many joins on lists of numerical or string key columns. #530 by Jovan Stojanovic.
GapEncoder.transform() will not continue fitting of the instance anymore. It makes functions that depend on it (get_feature_names_out(), score(), etc.) deterministic once fitted. #548 by Lilian Boulard.
fuzzy_join() and FeatureAugmenter now perform joins on missing values as in pandas.merge but raise a warning. #522 and #529 by Jovan Stojanovic.
Added get_ken_table_aliases() and get_ken_types() for exploring KEN embeddings. #539 by Lilian Boulard.
Minor changes#
Improvement of date column detection and date format inference in TableVectorizer. The format inference now tries to find a format which works for all non-missing values of the column, and only tries pandas default inference if it fails. #543 and #587 by Leo Grinsztajn.
Dirty-cat Release 0.4.0#
Major changes#
SuperVectorizer is renamed as TableVectorizer; a warning is raised when using the old name. #484 by Jovan Stojanovic.
New experimental feature: joining tables using fuzzy_join() by approximate key matching. Matches are based on string similarities and the nearest neighbors matches are found for each category. #291 by Jovan Stojanovic and Leo Grinsztajn.
New experimental feature: FeatureAugmenter, a transformer that augments with fuzzy_join() the number of features in a main table by using information from auxiliary tables. #409 by Jovan Stojanovic.
Unnecessary API has been made private: everything (files, functions, classes) starting with an underscore shouldn’t be imported in your code. #331 by Lilian Boulard.
The MinHashEncoder now supports a n_jobs parameter to parallelize the hashes computation. #267 by Leo Grinsztajn and Lilian Boulard.
New experimental feature: deduplicating misspelled categories using deduplicate() by clustering string distances. This function works best when there are significantly more duplicates than underlying categories. #339 by Moritz Boos.
Minor changes#
Add example Wikipedia embeddings to enrich the data. #487 by Jovan Stojanovic
datasets.fetching contains a new function get_ken_embeddings() that can be used to download Wikipedia embeddings and filter them by type.
datasets.fetching contains a new function fetch_world_bank_indicator() that can be used to download indicators from the World Bank Open Data platform. #291 by Jovan Stojanovic.
Removed example “Fitting scalable, non-linear models on data with dirty categories”. #386 by Jovan Stojanovic.
MinHashEncoder’s minhash() method is no longer public. #379 by Jovan Stojanovic.
Fetching functions now have an additional argument directory, which can be used to specify where to save and load datasets from. #432 and #453 by Lilian Boulard.
The TableVectorizer’s default OneHotEncoder for low cardinality categorical variables now defaults to handle_unknown=”ignore” instead of handle_unknown=”error” (for sklearn >= 1.0.0). This means that categories seen only at test time will be encoded by a vector of zeroes instead of raising an error. #473 by Leo Grinsztajn.
Bug fixes#
The MinHashEncoder now considers None and empty strings as missing values, rather than raising an error. #378 by Gael Varoquaux.
Dirty-cat Release 0.3.0#
Major changes#
New encoder: DatetimeEncoder can transform a datetime column into several numerical columns (year, month, day, hour, minute, second, …). It is now the default transformer used in the TableVectorizer for datetime columns. #239 by Leo Grinsztajn.
The TableVectorizer has seen some major improvements and bug fixes:
Fixes the automatic casting logic in transform.
To avoid dimensionality explosion when a feature has two unique values, the default encoder (OneHotEncoder) now drops one of the two vectors (see parameter drop=”if_binary”).
fit_transform and transform can now return unencoded features, like the ColumnTransformer’s behavior. Previously, a RuntimeError was raised.
Backward-incompatible change in the TableVectorizer: to apply remainder to features (with the *_transformer parameters), the value 'remainder' must be passed, instead of None in previous versions. None now indicates that we want to use the default transformer. #303 by Lilian Boulard.
Support for Python 3.6 and 3.7 has been dropped. Python >= 3.8 is now required. #289 by Lilian Boulard.
Bumped minimum dependencies:
scikit-learn>=0.23
scipy>=1.4.0
numpy>=1.17.3
pandas>=1.2.0 #299 and #300 by Lilian Boulard
Dropped support for Jaro, Jaro-Winkler and Levenshtein distances.
The SimilarityEncoder now exclusively uses ngram for similarities, and the similarity parameter is deprecated. It will be removed in 0.5. #282 by Lilian Boulard.
Notes#
The transformers_ attribute of the TableVectorizer now contains column names instead of column indices for the “remainder” columns. #266 by Leo Grinsztajn.
Dirty-cat Release 0.2.2#
Bug fixes#
Fixed a bug in the TableVectorizer causing a FutureWarning when using the get_feature_names_out() method. #262 by Lilian Boulard.
Dirty-cat Release 0.2.1#
Major changes#
Improvements to the TableVectorizer:
Type detection works better: it handles dates, numeric columns encoded as strings, and numeric columns containing strings for missing values.
get_feature_names() becomes get_feature_names_out(), following changes in the scikit-learn API. get_feature_names() is deprecated in scikit-learn > 1.0. #241 by Gael Varoquaux.
Improvements to the MinHashEncoder:
It is now possible to fit multiple columns simultaneously with the MinHashEncoder. Very useful when using, for instance, the make_column_transformer() function on multiple columns.
Bug-fixes#
Fixed a bug that resulted in the GapEncoder ignoring the analyzer argument. #242 by Jovan Stojanovic.
GapEncoder’s get_feature_names_out now accepts all iterators, not just lists. #255 by Lilian Boulard.
Fixed DeprecationWarning raised by the usage of distutils.version.LooseVersion. #261 by Lilian Boulard.
Notes#
Removed trailing imports in the MinHashEncoder.
Fixed typos and updated links for the website.
Documentation of the TableVectorizer and the SimilarityEncoder improved.
Dirty-cat Release 0.2.0#
Also see pre-release 0.2.0a1 below for additional changes.
Major changes#
Bump minimum dependencies:
scikit-learn (>=0.21.0) #202 by Lilian Boulard
pandas (>=1.1.5) ! NEW REQUIREMENT ! #155 by Lilian Boulard
datasets.fetching - backward-incompatible changes to the example datasets fetchers:
The backend has changed: we now exclusively fetch the datasets from OpenML. End users should not see any difference regarding this.
The frontend, however, changed a little: the fetching functions stay the same but their return values were modified in favor of a more Pythonic interface. Refer to the docstrings of functions dirty_cat.datasets.fetch_* for more information.
The example notebooks were updated to reflect these changes. #155 by Lilian Boulard
Backward incompatible change to MinHashEncoder: the MinHashEncoder now only supports two-dimensional inputs of shape (N_samples, 1). #185 by Lilian Boulard and Alexis Cvetkov.
Update handle_missing parameters:
GapEncoder: the default value “zero_impute” becomes “empty_impute” (see doc).
MinHashEncoder: the default value “” becomes “zero_impute” (see doc).
#210 by Alexis Cvetkov.
Add a method “get_feature_names_out” for the GapEncoder and the TableVectorizer, since get_feature_names will be deprecated in scikit-learn 1.2. #216 by Alexis Cvetkov.
Notes#
Removed hard-coded CSV file dirty_cat/data/FiveThirtyEight_Midwest_Survey.csv.
Improvements to the TableVectorizer:
Missing values are not systematically imputed anymore.
Type casting and per-column imputation are now learnt during fitting.
Several bugfixes.
Dirty-cat Release 0.2.0a1#
Version 0.2.0a1 is a pre-release. To try it, you have to install it manually using:
pip install --pre dirty_cat==0.2.0a1
or from the GitHub repository:
pip install git+https://github.com/dirty-cat/dirty_cat.git
Major changes#
Bump minimum dependencies:
Python (>= 3.6)
NumPy (>= 1.16)
SciPy (>= 1.2)
scikit-learn (>= 0.20.0)
TableVectorizer: Added automatic transform through the TableVectorizer class. It transforms columns automatically based on their type. It provides a replacement for scikit-learn’s ColumnTransformer that is simpler to use on heterogeneous pandas DataFrames. #167 by Lilian Boulard.
Backward incompatible change to GapEncoder: the GapEncoder now only supports two-dimensional inputs of shape (n_samples, n_features). Internally, features are encoded by independent GapEncoder models, and are then concatenated into a single matrix. #185 by Lilian Boulard and Alexis Cvetkov.
Bug-fixes#
Fix get_feature_names for scikit-learn > 0.21. #216 by Alexis Cvetkov
Dirty-cat Release 0.1.1#
Major changes#
Bug-fixes#
Fixed RuntimeWarnings due to overflow in GapEncoder. #161 by Alexis Cvetkov.
Dirty-cat Release 0.1.0#
Major changes#
GapEncoder: Added online Gamma-Poisson factorization through the GapEncoder class. This method discovers latent categories formed via combinations of substrings, and encodes string data as combinations of these categories. To be used if interpretability is important. #153 by Alexis Cvetkov.
Bug-fixes#
Multiprocessing exception in notebook. #154 by Lilian Boulard
Dirty-cat Release 0.0.7#
MinHashEncoder: Added minhash_encoder.py and fast_hash.py files that implement minhash encoding through the MinHashEncoder class. This method allows for fast and scalable encoding of string categorical variables.
datasets.fetch_employee_salaries: Changed the origin of download for employee_salaries:
The function now returns a bunch with a dataframe under the field “data”, and not the path to the csv file.
The field “description” has been renamed to “DESCR”.
SimilarityEncoder: Fixed a bug when using the Jaro-Winkler distance as a similarity metric. Our implementation now accurately reproduces the behaviour of the python-Levenshtein implementation.
SimilarityEncoder: Added a handle_missing attribute to allow encoding with missing values.
TargetEncoder: Added a handle_missing attribute to allow encoding with missing values.
MinHashEncoder: Added a handle_missing attribute to allow encoding with missing values.
Dirty-cat Release 0.0.6#
SimilarityEncoder: Accelerated SimilarityEncoder.transform by:
computing the vocabulary count vectors in fit instead of transform;
computing the similarities in parallel using joblib. This option can be turned on/off via the n_jobs attribute of the SimilarityEncoder.
SimilarityEncoder: Fixed a bug that was preventing a SimilarityEncoder from being created when categories was a list.
SimilarityEncoder: Set the dtype passed to the ngram similarity to float32, which reduces memory consumption during encoding.
Dirty-cat Release 0.0.5#
SimilarityEncoder: Change the default ngram range to (2, 4) which performs better empirically.
SimilarityEncoder: Added a most_frequent strategy to define prototype categories for large-scale learning.
SimilarityEncoder: Added a k-means strategy to define prototype categories for large-scale learning.
SimilarityEncoder: Added the possibility to use hashing ngrams for stateless fitting with the ngram similarity.
SimilarityEncoder: Performance improvements in the ngram similarity.
SimilarityEncoder: Expose a get_feature_names method.