libauc/losses/auc.py (+15 −12)
@@ -53,8 +53,9 @@ class AUCMLoss(torch.nn.Module):
 
     args:
         margin (float): margin for squared-hinge surrogate loss (default: ``1.0``).
-        imratio (float, optional): the ratio of the number of positive samples to the number of total samples in the training dataset.
-            If this value is not given, the mini-batch statistics will be used instead.
+        imratio (float, optional): the ratio of the number of positive samples to the number of total samples in the training dataset, i.e., :math:`p` in the above formulation.
+            If this value is not given, it will be automatically calculated with mini-batch samples.
+            This value is ignored when ``version`` is set to ``'v2'``.
         version (str, optional): whether to include prior :math:`p` in the objective function (default: ``'v1'``).
 
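To illustrate the revised ``imratio`` semantics, a minimal sketch against the constructor documented above (the value ``0.1`` is only an illustration, not a recommended setting):

>>> from libauc.losses import AUCMLoss
>>> loss_fn = AUCMLoss(margin=1.0, imratio=0.1)  # p supplied explicitly
>>> loss_fn = AUCMLoss(margin=1.0)               # p estimated from each mini-batch
>>> loss_fn = AUCMLoss(version='v2')             # imratio is ignored in v2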
@@ -65,13 +66,12 @@ class AUCMLoss(torch.nn.Module):
         >>> loss = loss_fn(preds, target)
         >>> loss.backward()
 
-    .. note::
-        To use ``v2`` of AUCMLoss, plesae set ``version='v2'``. Otherwise, the default version is ``v1``. The ``v2`` version requires the use of :obj:`~libauc.sampler.DualSampler`.
 
     .. note::
         Practial Tips:
 
-        - ``epoch_decay`` is a regularization parameter similar to `weight_decay` that can be tuned in the same range.
+        - It is recommended to use ``v2`` of AUCMLoss by setting ``version='v2'`` to get better performance. The ``v2`` version requires the use of :obj:`~libauc.sampler.DualSampler`.
+        - ``epoch_decay`` is a regularization parameter similar to `weight_decay` that can be tuned in the same range.
         - For complex tasks, it is recommended to use regular loss to pretrain the model, and then switch to AUCMLoss for finetuning with a smaller learning rate.
 
     Reference:
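Since the relocated note ties ``v2`` to :obj:`~libauc.sampler.DualSampler`, here is a hedged sketch of the pairing; the dataset variable, batch size, and ``sampling_rate`` value are placeholder assumptions, not values prescribed by this change:

>>> import torch
>>> from libauc.losses import AUCMLoss
>>> from libauc.sampler import DualSampler
>>> sampler = DualSampler(train_dataset, batch_size=64, sampling_rate=0.5)  # placeholder settings
>>> loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, sampler=sampler)
>>> loss_fn = AUCMLoss(version='v2')  # no imratio needed; the sampler controls the positive ratio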
@@ -140,7 +140,9 @@ class CompositionalAUCLoss(torch.nn.Module):
 
     args:
         margin (float): margin for squared-hinge surrogate loss (default: ``1.0``).
-        imratio (float, optional): the ratio of the number of positive samples to the number of total samples in the training dataset. If this value is not given, the mini-batch statistics will be used instead.
+        imratio (float, optional): the ratio of the number of positive samples to the number of total samples in the training dataset, i.e., :math:`p` in the above formulation.
+            If this value is not given, it will be automatically calculated with mini-batch samples.
+            This value is ignored when ``version`` is set to ``'v2'``.
         k (int, optional): number of steps for inner updates. For example, when k is set to 2, the optimizer will alternately execute two steps optimizing :obj:`~libauc.losses.losses.CrossEntropyLoss` followed by a single step optimizing :obj:`~libauc.losses.auc.AUCMLoss` during training (default: ``1``).
         version (str, optional): whether to include prior :math:`p` in the objective function (default: ``'v1'``).
 
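To make the ``k`` alternation concrete, a minimal sketch using the ``k=2`` case from the parameter description (``version='v2'`` follows the recommendation in the note in the next hunk):

>>> from libauc.losses import CompositionalAUCLoss
>>> # k=2: two CrossEntropyLoss steps, then one AUCMLoss step, repeated during training
>>> loss_fn = CompositionalAUCLoss(margin=1.0, k=2, version='v2')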
@@ -152,7 +154,8 @@ class CompositionalAUCLoss(torch.nn.Module):
         >>> loss.backward()
 
     .. note::
-        As CompositionalAUCLoss is built on AUCMLoss, there are also two versions of CompositionalAUCLoss. To use ``v2`` version, plesae set ``version='v2'``. Otherwise, the default version is ``v1``.
+        As CompositionalAUCLoss is built on AUCMLoss, there are also two versions of CompositionalAUCLoss.
+        It is recommended to use ``v2`` version by setting ``version='v2'`` to get better performance.