weighted_abs_conf_difference

pycalib.scoring.weighted_abs_conf_difference(y, p_pred)

Computes the weighted absolute difference between over- and underconfidence.

Parameters
  • y (array-like) – Ground truth labels. Used here only as a dummy argument so the function matches the signature expected by cross-validation utilities.

  • p_pred (array-like) – Array of confidence estimates.

Returns

weighted_abs_diff – Accuracy-weighted absolute difference between over- and underconfidence.

Return type

float
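
The quantity can be sketched as follows. This is an illustrative re-implementation, not the pycalib source: it assumes overconfidence is the mean confidence on misclassified samples, underconfidence is the mean uncertainty (one minus confidence) on correctly classified samples, and the weighting is by error rate and accuracy respectively, i.e. |(1 - acc) * overconf - acc * underconf|. It also assumes p_pred is an (n_samples, n_classes) array of class probabilities and, unlike the documented signature, actually uses y to determine correctness.

```python
import numpy as np

def weighted_abs_conf_difference(y, p_pred):
    """Sketch: accuracy-weighted absolute difference between
    over- and underconfidence (illustrative, assumed definitions)."""
    y = np.asarray(y)
    p_pred = np.asarray(p_pred)
    y_hat = np.argmax(p_pred, axis=1)   # predicted classes
    conf = np.max(p_pred, axis=1)       # confidence estimates
    correct = y_hat == y
    acc = correct.mean()
    # Overconfidence: mean confidence on misclassified samples.
    o = conf[~correct].mean() if (~correct).any() else 0.0
    # Underconfidence: mean uncertainty on correctly classified samples.
    u = (1.0 - conf[correct]).mean() if correct.any() else 0.0
    # Weight by error rate and accuracy, take the absolute difference.
    return abs((1.0 - acc) * o - acc * u)
```

Under these assumed definitions the returned value equals |mean confidence - accuracy|, so it can be read as a single-number summary of miscalibration.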