overconfidence
pycalib.scoring.overconfidence(y, p_pred)

Computes the overconfidence of a classifier.

Computes the empirical overconfidence of a classifier on a test sample by evaluating the average confidence on the false predictions.
- Parameters
y (array-like) – Ground truth labels
p_pred (array-like) – Array of confidence estimates
- Returns
Overconfidence
- Return type
float
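The quantity above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the library's source: it assumes `p_pred` is an `(n_samples, n_classes)` array of predicted class probabilities, takes the predicted label as the argmax, and averages the maximum probability (the confidence) over the misclassified samples.

```python
import numpy as np

def overconfidence_sketch(y, p_pred):
    """Average confidence on the false predictions (illustrative sketch)."""
    y = np.asarray(y)
    p_pred = np.asarray(p_pred)
    y_pred = np.argmax(p_pred, axis=1)   # predicted labels
    confidence = np.max(p_pred, axis=1)  # confidence of each prediction
    wrong = y_pred != y                  # mask of false predictions
    # Note: undefined (NaN) if the classifier makes no mistakes on the sample.
    return float(np.mean(confidence[wrong]))

# Example: the second sample is misclassified with confidence 0.8,
# so the overconfidence is 0.8.
p = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.3, 0.7]])
y = np.array([0, 1, 1])
print(overconfidence_sketch(y, p))
```

A perfectly calibrated classifier would be maximally uncertain when it errs; a large value here indicates the model assigns high confidence to wrong predictions.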