pycalib.scoring Module

Scoring functions and metrics for classification models.
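A minimal usage sketch (assuming each metric takes a label vector y and a predicted-probability matrix p_pred with one column per class and returns a scalar; the toy data below is purely illustrative):

    import numpy as np
    from pycalib.scoring import accuracy, brier_score, expected_calibration_error

    # Toy ground-truth labels and predicted class probabilities (rows sum to 1).
    y = np.array([0, 1, 1, 0, 1])
    p_pred = np.array([
        [0.9, 0.1],
        [0.2, 0.8],
        [0.6, 0.4],   # misclassified: predicted class 0, true class 1
        [0.7, 0.3],
        [0.3, 0.7],
    ])

    print("accuracy:", accuracy(y, p_pred))
    print("Brier score:", brier_score(y, p_pred))
    print("ECE:", expected_calibration_error(y, p_pred))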

Functions

accuracy(y, p_pred)

Computes the accuracy.

average_confidence(y, p_pred)

Computes the average confidence in the prediction.

brier_score(y, p_pred)

Computes the Brier score.

error(y, p_pred)

Computes the classification error.

expected_calibration_error(y, p_pred[, …])

Computes the expected calibration error ECE_p.
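The ECE summarizes miscalibration by binning predictions by confidence and averaging the gap between per-bin accuracy and per-bin confidence. The NumPy sketch below illustrates the common equal-width-binned estimator with the absolute (p = 1) norm; pycalib's expected_calibration_error exposes additional options through its optional arguments, so treat this only as a conceptual reference, not the library's exact implementation:

    import numpy as np

    def binned_ece(y, p_pred, n_bins=15):
        """Equal-width-binned ECE with the absolute (p=1) norm. Illustrative only."""
        conf = p_pred.max(axis=1)            # confidence = max predicted probability
        pred = p_pred.argmax(axis=1)         # predicted class
        correct = (pred == y).astype(float)

        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (conf > lo) & (conf <= hi)
            if in_bin.any():
                gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
                ece += in_bin.mean() * gap   # weight by fraction of samples in bin
        return ece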

odds_correctness(y, p_pred)

Computes the odds of making a correct prediction.

overconfidence(y, p_pred)

Computes the overconfidence of a classifier.

precision(y, p_pred, **kwargs)

Computes the precision.

ratio_over_underconfidence(y, p_pred)

Computes the ratio of over- and underconfidence of a classifier.

recall(y, p_pred, **kwargs)

Computes the recall.

sharpness(y, p_pred[, ddof])

Computes the empirical sharpness of a classifier.

underconfidence(y, p_pred)

Computes the underconfidence of a classifier.
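In the calibration literature, overconfidence is often defined as the average confidence on misclassified inputs and underconfidence as the average uncertainty (one minus the confidence) on correctly classified inputs. The sketch below follows those common definitions purely for illustration; consult overconfidence, underconfidence, and ratio_over_underconfidence for the exact quantities the library reports:

    import numpy as np

    def over_under_confidence(y, p_pred):
        """Average confidence on errors and average uncertainty on correct predictions.

        Follows a common definition from the calibration literature; illustrative only.
        """
        conf = p_pred.max(axis=1)
        correct = p_pred.argmax(axis=1) == y
        over = conf[~correct].mean() if (~correct).any() else 0.0
        under = (1.0 - conf[correct]).mean() if correct.any() else 0.0
        return over, under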

weighted_abs_conf_difference(y, p_pred)

Computes the weighted absolute difference between over- and underconfidence.

Classes

MultiScorer(metrics, plots)

Use this class to encapsulate and/or aggregate multiple scoring functions so that the aggregate can be passed as the scoring argument to scikit-learn's cross_val_score function.
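A hypothetical usage sketch: the constructor signature MultiScorer(metrics, plots) comes from the summary above, but the structure assumed here for metrics (a name-to-(callable, kwargs) mapping) and plots (an empty list), as well as how aggregated results are retrieved afterwards, are assumptions; check the class documentation for the exact expected format:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    from pycalib.scoring import MultiScorer, accuracy, brier_score

    X, y = make_classification(n_samples=200, random_state=0)
    clf = LogisticRegression()

    # Assumed metrics format: {name: (metric_function, kwargs)}; plots left empty.
    scorer = MultiScorer(metrics={"accuracy": (accuracy, {}),
                                  "brier": (brier_score, {})},
                         plots=[])

    # Passed as the scoring argument; per-fold results are collected by the scorer.
    cross_val_score(clf, X, y, cv=5, scoring=scorer)

Retrieving the aggregated per-fold results afterwards is done through whatever accessor the class provides; that part is not shown here.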