underconfidence

pycalib.scoring.underconfidence(y, p_pred)[source]

Computes the underconfidence of a classifier.
Computes the empirical underconfidence of a classifier on a test sample as the average uncertainty over the correctly classified predictions.
- Parameters
  y (array-like) – Ground truth labels
  p_pred (array-like) – Array of confidence estimates
- Returns
  Underconfidence of the classifier
- Return type
  float
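A minimal sketch of the computation described above, assuming p_pred holds one row of class probabilities per sample and that uncertainty is one minus the confidence of the predicted class. The function name underconfidence_sketch and this NumPy implementation are illustrative assumptions, not pycalib's actual code::

    import numpy as np

    def underconfidence_sketch(y, p_pred):
        """Average uncertainty (1 - confidence) over the correctly
        classified samples. Hypothetical re-implementation for
        illustration only."""
        y = np.asarray(y)
        p_pred = np.asarray(p_pred)
        y_hat = np.argmax(p_pred, axis=1)  # predicted class per sample
        conf = np.max(p_pred, axis=1)      # confidence of each prediction
        correct = y_hat == y               # mask of correct predictions
        # Mean uncertainty on the correct predictions only.
        return float(np.mean(1.0 - conf[correct]))

    # Toy example: three correct predictions, one incorrect.
    y = np.array([0, 1, 1, 0])
    p_pred = np.array([[0.9, 0.1],
                       [0.4, 0.6],
                       [0.2, 0.8],
                       [0.3, 0.7]])
    print(underconfidence_sketch(y, p_pred))  # mean of [0.1, 0.4, 0.2] = 0.2333...

A perfectly confident classifier has underconfidence 0; higher values mean the classifier hedges even when it is right.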