There seems to be a bug in how NaN values are handled.

From `ivtmetrics/recognition.py`:
```python
def resolve_nan(self, classwise):
    equiv_nan = ['-0', '-0.', '-0.0', '-.0']
    classwise = list(map(str, classwise))
    classwise = [np.nan if x in equiv_nan else x for x in classwise]
    classwise = np.array(list(map(float, classwise)))
    return classwise
```
Newer sklearn versions (tested with 1.7.2) return 0.0 for classes that have no ground-truth samples, and the string '0.0' does not match any entry in `equiv_nan`, so those classes are no longer converted to NaN.
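For illustration, here is a minimal sketch of the string comparison `resolve_nan` performs (the `equiv_nan` list is copied from the code above; that older sklearn versions returned -0.0 for such classes is my presumption of why the list was written this way):

```python
import numpy as np

equiv_nan = ['-0', '-0.', '-0.0', '-.0']

# Presumably older sklearn returned -0.0 for classes without ground truth:
# str(-0.0) == '-0.0' is in equiv_nan, so the class AP became NaN.
old_style = [1.0, -0.0]
print([np.nan if str(x) in equiv_nan else float(x) for x in old_style])  # [1.0, nan]

# Newer sklearn returns 0.0 instead:
# str(0.0) == '0.0' is NOT in equiv_nan, so the class AP stays 0.0.
new_style = [1.0, 0.0]
print([np.nan if str(x) in equiv_nan else float(x) for x in new_style])  # [1.0, 0.0]
```

The sklearn side of the mismatch can be reproduced directly: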
```python
from sklearn.metrics import average_precision_score

y_true = [[1, 0, 0], [1, 0, 0], [0, 0, 0]]  # class 0 has positives, classes 1-2 don't
y_pred = [[0.9, 0.5, 0.3], [0.8, 0.4, 0.2], [0.1, 0.6, 0.7]]
ap = average_precision_score(y_true, y_pred, average=None)
# Result: [1.0, 0.0, 0.0]
#          ^    ^    ^
#          |    |    +-- class 2: no GT, AP = 0.0
#          |    +------- class 1: no GT, AP = 0.0
#          +------------ class 0: has GT, AP = 1.0
```

This significantly drags down the metrics: the 0.0 values pass through `resolve_nan` unchanged and are then included in the average.
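One possible workaround until this is fixed in the library (a minimal sketch, not part of the ivtmetrics API; `classwise_ap_with_nan` is a hypothetical helper): mask out classes that have no positive ground-truth labels explicitly, instead of relying on string matching against negative zero.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def classwise_ap_with_nan(y_true, y_pred):
    # Hypothetical helper: compute per-class AP, then set AP to NaN for
    # classes with no positive ground truth so they can be excluded later.
    y_true = np.asarray(y_true)
    ap = np.asarray(average_precision_score(y_true, y_pred, average=None), dtype=float)
    ap[y_true.sum(axis=0) == 0] = np.nan  # no positives -> AP is undefined
    return ap

ap = classwise_ap_with_nan(y_true, y_pred)
print(ap)              # [1.0, nan, nan]
print(np.nanmean(ap))  # 1.0 -- empty classes no longer drag the mean down
```

With `np.nanmean`, the mean AP is computed only over classes that actually appear in the ground truth, which seems to be what `resolve_nan` was originally trying to achieve.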