pypunisher.metrics.criterion.aic(model, X_train, y_train)

Compute the Akaike Information Criterion (AIC).
AIC’s objective is to prevent model overfitting by adding a penalty term that penalizes more complex models. Its formal definition is:

\[\mathrm{AIC} = 2k - 2\ln(L)\]
where \(L\) is the maximized value of the likelihood function and \(k\) is the number of parameters. A smaller AIC value suggests that the model is a better fit for the data, relative to competing models.
| Parameters: | **model** – a fitted sklearn model; **X_train** – training feature matrix; **y_train** – training response vector |
|---|---|
| Returns: | the computed AIC value |
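To make the formula above concrete, here is a minimal stand-alone sketch (not part of pypunisher, which derives the log-likelihood from the model itself) applying \(2k - 2\ln(L)\) to two hypothetical candidate models:

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * log_likelihood

# Two hypothetical models fit to the same data:
# Model A: 3 parameters, log-likelihood -120.5
# Model B: 6 parameters, log-likelihood -118.0
aic_a = aic(-120.5, 3)   # 247.0
aic_b = aic(-118.0, 6)   # 248.0
# Model A has the smaller AIC, so it is preferred despite its slightly
# lower likelihood: the 2k penalty outweighs the likelihood gain.
```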
pypunisher.metrics.criterion.bic(model, X_train, y_train)

Compute the Bayesian Information Criterion (BIC).
BIC’s objective is to prevent model over-fitting by adding a penalty term that penalizes more complex models. Its formal definition is:

\[\mathrm{BIC} = k\ln(n) - 2\ln(L)\]
where \(L\) is the maximized value of the likelihood function, \(k\) is the number of parameters, and \(n\) is the number of observations. A smaller BIC value suggests that the model is a better fit for the data, relative to competing models.
| Parameters: | **model** – a fitted sklearn model; **X_train** – training feature matrix; **y_train** – training response vector |
|---|---|
| Returns: | the computed BIC value |
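The same two hypothetical models as above can be scored with BIC; this stand-alone sketch (again, not pypunisher's own implementation) shows how the \(k\ln(n)\) penalty brings the sample size into play:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L)."""
    return k * math.log(n) - 2 * log_likelihood

# Same hypothetical models, fit on n = 200 observations:
bic_a = bic(-120.5, 3, 200)   # ~256.9
bic_b = bic(-118.0, 6, 200)   # ~267.8
# BIC's k*ln(n) penalty grows with the sample size, so once n exceeds
# e^2 (about 7.4) it punishes extra parameters harder than AIC's flat 2k.
```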
pypunisher.selection_engines.selection.Selection(model, X_train, y_train, X_val, y_val, criterion=None, verbose=True)

Forward and Backward Selection Algorithms.
| Parameters: | **model** – a sklearn model; **X_train**, **y_train** – training features and response; **X_val**, **y_val** – validation features and response; **criterion** – model selection criterion to score candidate feature subsets (default None); **verbose** – if True, print progress (default True) |
|---|---|
forward(n_features=0.5, min_change=None, **kwargs)

Perform Forward Selection on a Sklearn model.
| Parameters: | **n_features** – the number of features to select; a float in (0, 1) is interpreted as a proportion of all features (default 0.5); **min_change** – the smallest improvement required to continue selecting (default None); **\*\*kwargs** – additional keyword arguments |
|---|---|
| Returns: | the selected features |
| Raises: | if both `n_features` and `min_change` are supplied (only one stopping criterion may be active) |
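The greedy strategy behind forward selection can be sketched independently of sklearn. In this illustrative sketch, `score` is a hypothetical stand-in for whatever criterion (validation score, AIC, or BIC) drives the real implementation, and only the `n_features` stopping rule is modeled:

```python
def forward_select(score, n_total, n_features):
    """Greedy forward selection sketch: start with no features and
    repeatedly add the single feature that most improves score(subset),
    stopping once n_features features have been selected."""
    selected = []
    remaining = set(range(n_total))
    while len(selected) < n_features and remaining:
        best = max(remaining, key=lambda j: score(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: features 0 and 3 are genuinely useful, the rest add noise.
useful = {0: 5.0, 3: 3.0}
toy_score = lambda subset: sum(useful.get(j, -1.0) for j in subset)

forward_select(toy_score, n_total=5, n_features=2)  # → [0, 3]
```

A `min_change` variant would instead stop as soon as the best candidate addition improves the score by less than the threshold.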
backward(n_features=0.5, min_change=None, **kwargs)

Perform Backward Selection on a Sklearn model.
| Parameters: | **n_features** – the number of features to retain; a float in (0, 1) is interpreted as a proportion of all features (default 0.5); **min_change** – the smallest improvement required to continue eliminating features (default None); **\*\*kwargs** – additional keyword arguments |
|---|---|
| Returns: | the retained features |
| Raises: | if both `n_features` and `min_change` are supplied (only one stopping criterion may be active) |
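Backward selection runs the same greedy loop in reverse: start from the full feature set and repeatedly drop the least useful feature. As with the forward sketch above, `score` is a hypothetical criterion, not pypunisher's internal scoring:

```python
def backward_select(score, n_total, n_features):
    """Greedy backward elimination sketch: start with every feature and
    repeatedly drop the feature whose removal best preserves (or improves)
    score(subset), stopping once only n_features remain."""
    selected = list(range(n_total))
    while len(selected) > n_features:
        # Drop the feature whose removal yields the highest score.
        worst = max(selected,
                    key=lambda j: score([f for f in selected if f != j]))
        selected.remove(worst)
    return selected

# Same toy score as the forward example: 0 and 3 are the useful features.
useful = {0: 5.0, 3: 3.0}
toy_score = lambda subset: sum(useful.get(j, -1.0) for j in subset)

backward_select(toy_score, n_total=5, n_features=2)  # → [0, 3]
```

Both sketches reach the same subset here, but in general the two directions can disagree, which is why the class exposes both.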