pycalib.scoring Module
Scoring functions and metrics for classification models.
Functions

- Computes the accuracy.
- Computes the average confidence in the prediction.
- Computes the Brier score.
- Computes the classification error.
- Computes the expected calibration error ECE_p.
- Computes the odds of making a correct prediction.
- Computes the overconfidence of a classifier.
- Computes the precision.
- Computes the ratio of over- and underconfidence of a classifier.
- Computes the recall.
- Computes the empirical sharpness of a classifier.
- Computes the underconfidence of a classifier.
- Computes the weighted absolute difference between over- and underconfidence.
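Several of the calibration metrics listed above can be made concrete with short NumPy sketches. The functions below are illustrative reimplementations under common definitions, not pycalib's actual API: the names, the equal-width binning in the ECE estimate, and the choice p=1 are assumptions.

```python
import numpy as np


def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Binned ECE estimate (p=1): the average gap between accuracy and
    confidence over equal-width confidence bins, weighted by bin size.
    NOTE: illustrative sketch, not pycalib's implementation."""
    confidences = y_prob.max(axis=1)
    correct = (y_prob.argmax(axis=1) == y_true).astype(float)

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # fraction of samples in the bin times |accuracy - mean confidence|
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece


def overconfidence(y_true, y_prob):
    """Average confidence on the misclassified samples."""
    wrong = y_prob.argmax(axis=1) != y_true
    return y_prob.max(axis=1)[wrong].mean() if wrong.any() else 0.0


def underconfidence(y_true, y_prob):
    """Average uncertainty (1 - confidence) on the correctly classified samples."""
    right = y_prob.argmax(axis=1) == y_true
    return (1.0 - y_prob.max(axis=1)[right]).mean() if right.any() else 0.0
```

A perfectly calibrated classifier has zero ECE; overconfidence and underconfidence separate the miscalibration on wrong versus correct predictions.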
Classes
- Use this class to encapsulate and/or aggregate multiple scoring functions so that they can be passed as the scoring argument to scikit-learn's cross_val_score function.
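The exact interface of pycalib's wrapper class is not shown here, but scikit-learn itself supports aggregating several scoring functions: cross_val_score accepts only a single metric, while cross_validate takes a dict of scorers. A minimal sketch using built-in scorer strings (the metric names chosen are just examples):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Toy binary classification problem and model (illustrative choices).
X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Evaluate several metrics in one cross-validation run.
results = cross_validate(
    clf, X, y, cv=3,
    scoring={"accuracy": "accuracy", "brier": "neg_brier_score"},
)
# results contains one array per metric, e.g. results["test_accuracy"]
```

Each entry in the returned dict holds one score per fold, so aggregated metrics can be compared side by side after a single fitting pass.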