FeatureSelectionCV

GAFeatureSelectionCV(estimator[, cv, ...])

Evolutionary optimization for feature selection.

GAFeatureSelectionCV.decision_function(X)

Call decision_function on the estimator with the best found parameters.

GAFeatureSelectionCV.fit(X, y[, callbacks])

Main method of GAFeatureSelectionCV; starts the optimization procedure to find the best feature set for the given estimator.

GAFeatureSelectionCV.get_params([deep])

Get parameters for this estimator.

GAFeatureSelectionCV.inverse_transform(Xt)

Call inverse_transform on the estimator with the best found params.

GAFeatureSelectionCV.predict(X)

Call predict on the estimator with the best found parameters.

GAFeatureSelectionCV.predict_proba(X)

Call predict_proba on the estimator with the best found parameters.

GAFeatureSelectionCV.score(X[, y])

Return the score on the given data, if the estimator has been refit.

GAFeatureSelectionCV.score_samples(X)

Call score_samples on the estimator with the best found parameters.

GAFeatureSelectionCV.set_params(**params)

Set the parameters of this estimator.

GAFeatureSelectionCV.transform(X)

Call transform on the estimator with the best found parameters.

class sklearn_genetic.GAFeatureSelectionCV(estimator, cv=3, scoring=None, population_size=10, generations=40, crossover_probability=0.8, mutation_probability=0.1, tournament_size=3, elitism=True, verbose=True, keep_top_k=1, criteria='max', algorithm='eaMuPlusLambda', refit=True, n_jobs=1, pre_dispatch='2*n_jobs', error_score=nan, return_train_score=False, log_config=None)[source]

Evolutionary optimization for feature selection.

GAFeatureSelectionCV implements a “fit” and a “score” method. It also implements “predict”, “predict_proba”, “decision_function” and “predict_log_proba” if they are implemented in the estimator used. The features (variables) used by the estimator are found by optimizing the cv-scores and by minimizing the number of features.
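A minimal usage sketch (the dataset, estimator, and parameter values below are illustrative assumptions, not recommendations):

# Minimal sketch of running an evolutionary feature selection search
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn_genetic import GAFeatureSelectionCV

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

evolved_selector = GAFeatureSelectionCV(
    estimator=DecisionTreeClassifier(random_state=0),
    cv=3,
    scoring="accuracy",
    population_size=10,
    generations=20,
    n_jobs=-1,
)
evolved_selector.fit(X_train, y_train)

# Boolean mask over the original feature columns
print(evolved_selector.best_features_)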

Parameters
estimator : estimator object

Estimator object implementing ‘fit’; the object to use to fit the data.

cv : int, cross-validation generator or an iterable, default=3

Determines the cross-validation splitting strategy. Possible inputs for cv are:

  • None, to use the default 5-fold cross validation,

  • int, to specify the number of folds in a (Stratified)KFold,

  • CV splitter,

  • An iterable yielding (train, test) splits as arrays of indices.

For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False so the splits will be the same across calls.

population_size : int, default=10

Size of the initial population to sample randomly generated individuals.

generations : int, default=40

Number of generations or iterations to run the evolutionary algorithm.

crossover_probability : float, default=0.8

Probability of crossover operation between two individuals.

mutation_probability : float, default=0.1

Probability of child mutation.

tournament_size : int, default=3

Number of individuals to perform tournament selection.

elitism : bool, default=True

If True, takes the tournament_size best solutions to the next generation.

scoring : str or callable, default=None

A str (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y) which should return only a single value.

n_jobs : int, default=1

Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the cross-validation splits. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors.

verbose : bool, default=True

If True, shows the metrics on the optimization routine.

keep_top_k : int, default=1

Number of best solutions to keep in the hof object. If a callback stops the algorithm before k iterations, it will return only one set of parameters per iteration.

criteria : {‘max’, ‘min’}, default=’max’

max if a higher scoring metric is better, min otherwise.

algorithm : {‘eaMuPlusLambda’, ‘eaMuCommaLambda’, ‘eaSimple’}, default=’eaMuPlusLambda’

Evolutionary algorithm to use. See more details in the deap algorithms documentation.

refit : bool, default=True

Refit an estimator using the best found parameters on the whole dataset. If False, it is not possible to make predictions using this GAFeatureSelectionCV instance after fitting.

pre_dispatch : int or str, default=’2*n_jobs’

Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:

  • None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs

  • An int, giving the exact number of total jobs that are spawned

  • A str, giving an expression as a function of n_jobs, as in ‘2*n_jobs’

error_score : ‘raise’ or numeric, default=np.nan

Value to assign to the score if an error occurs in estimator fitting. If set to 'raise', the error is raised. If a numeric value is given, FitFailedWarning is raised.

return_train_score : bool, default=False

If False, the cv_results_ attribute will not include training scores. Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off. However computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance.

log_config : MLflowConfig, default=None

Configuration to log metrics and models to MLflow; if None, no MLflow logging will be performed.

Attributes
logbook : DEAP.tools.Logbook

Contains the logs of every set of hyperparameters fitted with its average scoring metric.

history : dict

Dictionary of the form: {“gen”: [], “fitness”: [], “fitness_std”: [], “fitness_max”: [], “fitness_min”: []}

gen returns the index of the evaluated generations. Each entry in the other lists represents the average metric for that generation.

cv_results_ : dict of numpy (masked) ndarrays

A dict with keys as column headers and values as columns, that can be imported into a pandas DataFrame.

best_estimator_ : estimator

Estimator that was chosen by the search, i.e. the estimator which gave the highest score on the left-out data. Not available if refit=False.

best_features_ : list

List of bool; each index represents one feature in the same order the data was fed. True (1) means the feature was selected, False (0) means the feature was discarded (see the usage sketch after this attributes list).

scorer_ : function or a dict

Scorer function used on the held out data to choose the best parameters for the model.

n_splits_ : int

The number of cross-validation splits (folds/iterations).

refit_time_ : float

Seconds used for refitting the best model on the whole dataset. This is present only if refit is not False.
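A hedged sketch of inspecting these attributes, reusing the evolved_selector fitted in the class-level example above:

import numpy as np

# Per-generation statistics collected during the optimization
history = evolved_selector.history
for gen, mean_fit, best_fit in zip(history["gen"], history["fitness"], history["fitness_max"]):
    print(f"generation {gen}: mean fitness {mean_fit:.4f}, best fitness {best_fit:.4f}")

# Boolean mask of the selected features, applied to the original feature matrix
mask = np.asarray(evolved_selector.best_features_, dtype=bool)
print(f"{mask.sum()} of {mask.size} features selected")
X_test_selected = X_test[:, mask]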

decision_function(X)

Call decision_function on the estimator with the best found parameters.

Only available if refit=True and the underlying estimator supports decision_function.

Parameters
X : indexable, length n_samples

Must fulfill the input assumptions of the underlying estimator.

Returns
y_score : ndarray of shape (n_samples,) or (n_samples, n_classes) or (n_samples, n_classes * (n_classes-1) / 2)

Result of the decision function for X based on the estimator with the best found parameters.

fit(X, y, callbacks=None)[source]

Main method of GAFeatureSelectionCV; starts the optimization procedure to find the best feature set.

Parameters
X : array-like of shape (n_samples, n_features)

The data to fit. Can be for example a list, or an array.

y : array-like of shape (n_samples,) or (n_samples, n_outputs), default=None

The target variable to try to predict in the case of supervised learning.

callbacks : list or callable

One or a list of the callback methods available in the callbacks module. The callbacks are evaluated after fitting the estimators of each generation, starting from generation 1.
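A hedged sketch of passing a callback to fit; ConsecutiveStopping is one of the callbacks shipped with sklearn-genetic-opt, but its exact arguments should be checked against the installed version:

from sklearn_genetic.callbacks import ConsecutiveStopping

# Stop the search if the fitness does not improve for 5 consecutive generations
stop_if_stalled = ConsecutiveStopping(generations=5, metric="fitness")
evolved_selector.fit(X_train, y_train, callbacks=stop_if_stalled)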

get_params(deep=True)

Get parameters for this estimator.

Parameters
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns
params : dict

Parameter names mapped to their values.

inverse_transform(Xt)

Call inverse_transform on the estimator with the best found params.

Only available if the underlying estimator implements inverse_transform and refit=True.

Parameters
Xt : indexable, length n_samples

Must fulfill the input assumptions of the underlying estimator.

Returns
X : {ndarray, sparse matrix} of shape (n_samples, n_features)

Result of the inverse_transform function for Xt based on the estimator with the best found parameters.

predict(X)

Call predict on the estimator with the best found parameters.

Only available if refit=True and the underlying estimator supports predict.

Parameters
X : indexable, length n_samples

Must fulfill the input assumptions of the underlying estimator.

Returns
y_pred : ndarray of shape (n_samples,)

The predicted labels or values for X based on the estimator with the best found parameters.
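A sketch of predicting after fitting; because the refit estimator was trained on the selected features only, the best_features_ mask is applied to X explicitly here (this mirrors the library's feature-selection examples, but verify the behavior of your installed version):

import numpy as np

mask = np.asarray(evolved_selector.best_features_, dtype=bool)
y_pred = evolved_selector.predict(X_test[:, mask])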

predict_log_proba(X)

Call predict_log_proba on the estimator with the best found parameters.

Only available if refit=True and the underlying estimator supports predict_log_proba.

Parameters
X : indexable, length n_samples

Must fulfill the input assumptions of the underlying estimator.

Returns
y_pred : ndarray of shape (n_samples,) or (n_samples, n_classes)

Predicted class log-probabilities for X based on the estimator with the best found parameters. The order of the classes corresponds to that in the fitted attribute classes_.

predict_proba(X)

Call predict_proba on the estimator with the best found parameters.

Only available if refit=True and the underlying estimator supports predict_proba.

Parameters
X : indexable, length n_samples

Must fulfill the input assumptions of the underlying estimator.

Returns
y_pred : ndarray of shape (n_samples,) or (n_samples, n_classes)

Predicted class probabilities for X based on the estimator with the best found parameters. The order of the classes corresponds to that in the fitted attribute classes_.

score(X, y=None)

Return the score on the given data, if the estimator has been refit.

This uses the score defined by scoring where provided, and the best_estimator_.score method otherwise.

Parameters
X : array-like of shape (n_samples, n_features)

Input data, where n_samples is the number of samples and n_features is the number of features.

y : array-like of shape (n_samples, n_output) or (n_samples,), default=None

Target relative to X for classification or regression; None for unsupervised learning.

Returns
score : float

The score defined by scoring if provided, and the best_estimator_.score method otherwise.
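A hedged sketch of scoring on held-out data, with the selected-feature mask applied first (an assumption consistent with the predict example above; newer releases may apply the mask internally):

import numpy as np

mask = np.asarray(evolved_selector.best_features_, dtype=bool)
print(evolved_selector.score(X_test[:, mask], y_test))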

score_samples(X)

Call score_samples on the estimator with the best found parameters.

Only available if refit=True and the underlying estimator supports score_samples.

New in version 0.24.

Parameters
X : iterable

Data to predict on. Must fulfill input requirements of the underlying estimator.

Returns
y_score : ndarray of shape (n_samples,)

The best_estimator_.score_samples method.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters
**params : dict

Estimator parameters.

Returns
self : estimator instance

Estimator instance.
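A small sketch of the standard scikit-learn parameter API on this class (the values are arbitrary):

# Update the evolutionary settings, then read them back
evolved_selector.set_params(generations=30, mutation_probability=0.2)
print(evolved_selector.get_params()["generations"])  # 30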

transform(X)

Call transform on the estimator with the best found parameters.

Only available if the underlying estimator supports transform and refit=True.

Parameters
X : indexable, length n_samples

Must fulfill the input assumptions of the underlying estimator.

Returns
Xt : {ndarray, sparse matrix} of shape (n_samples, n_features)

X transformed in the new space based on the estimator with the best found parameters.