ratio_over_underconfidence

pycalib.scoring.ratio_over_underconfidence(y, p_pred)

Computes the ratio of over- and underconfidence of a classifier.

Computes the empirical ratio of the classifier's overconfidence to its underconfidence, estimated on the given test sample. A usage sketch is given below.

Parameters
  • y (array-like) – Ground truth labels

  • p_pred (array-like) – Array of confidence estimates

Returns

Ratio of the classifier's overconfidence to its underconfidence

Return type

float
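
Example

The following is a minimal usage sketch. It assumes pycalib is installed and importable. The ratio_sketch function shows what the metric is assumed to compute: overconfidence (mean confidence on misclassified samples) divided by underconfidence (mean uncertainty on correctly classified samples), with confidence taken as the maximum predicted probability. This reading is an assumption based on the function name and the library's related overconfidence/underconfidence scores, not a copy of the library's internals.

    import numpy as np
    from pycalib.scoring import ratio_over_underconfidence

    # Hypothetical data: 4 samples, 3 classes; each row of p_pred sums to 1.
    y = np.array([0, 1, 2, 1])
    p_pred = np.array([[0.7, 0.2, 0.1],    # predicted class 0, correct
                       [0.4, 0.5, 0.1],    # predicted class 1, correct
                       [0.6, 0.3, 0.1],    # predicted class 0, wrong (y = 2)
                       [0.2, 0.6, 0.2]])   # predicted class 1, correct

    print(ratio_over_underconfidence(y, p_pred))

    # Assumed reference computation (a sketch, not the library's code):
    def ratio_sketch(y, p_pred):
        y_pred = np.argmax(p_pred, axis=1)            # predicted classes
        conf = np.max(p_pred, axis=1)                 # confidence = max probability
        correct = y_pred == y
        overconfidence = conf[~correct].mean()        # mean confidence on errors
        underconfidence = (1 - conf[correct]).mean()  # mean uncertainty on hits
        return overconfidence / underconfidence

On this toy sample the sketch yields 0.6 / 0.4 = 1.5; under the assumed definition, a ratio above 1 indicates the classifier is more overconfident on its errors than it is uncertain on its correct predictions.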