We are excited to announce that TorchMetrics v0.6 is now publicly available. TorchMetrics v0.6 does not focus on a specific domain but instead adds a ton of new metrics across several domains, bringing the total number of metrics in the repository to over 60! Not only does v0.6 add metrics within already covered domains, it also adds support for two new ones: pairwise metrics and detection.

Pairwise Metrics

TorchMetrics v0.6 offers a new set of metrics in its functional backend for calculating pairwise distances. Given a tensor X with shape [N, d] (N observations, each in d dimensions), a pairwise metric calculates the [N, N] matrix of all possible combinations between the rows of X.
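As a small sketch of the new functional interface, here are two of the pairwise functions listed in the changelog below:

```python
import torch
from torchmetrics.functional import pairwise_cosine_similarity, pairwise_euclidean_distance

# X has shape [N, d]: N observations, each with d features
x = torch.randn(10, 5)

# Each call returns an [N, N] matrix with one entry per pair of rows in X
cos_sim = pairwise_cosine_similarity(x)
dist = pairwise_euclidean_distance(x)
print(cos_sim.shape, dist.shape)  # torch.Size([10, 10]) torch.Size([10, 10])
```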
Detection
TorchMetrics v0.6 now includes a detection package that provides the MAP metric. The implementation essentially wraps pycocotools, ensuring that we get correct values, with the added benefit of being able to scale to multiple devices (like any other metric in TorchMetrics).
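A rough sketch of how the metric can be used is shown below. The per-image dictionary keys (boxes, scores, labels) and the import path are assumptions based on the documentation of this release and may differ in later versions; pycocotools must be installed.

```python
import torch
from torchmetrics.detection.map import MAP  # requires pycocotools

# One dictionary per image, for predictions and for ground truth
preds = [
    dict(
        boxes=torch.tensor([[258.0, 41.0, 606.0, 285.0]]),  # [xmin, ymin, xmax, ymax]
        scores=torch.tensor([0.536]),
        labels=torch.tensor([0]),
    )
]
target = [
    dict(
        boxes=torch.tensor([[214.0, 41.0, 562.0, 285.0]]),
        labels=torch.tensor([0]),
    )
]

metric = MAP()
metric.update(preds, target)
print(metric.compute())  # overall mAP plus breakdowns (e.g. mAP@50, mAP@75, per object size)
```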
New additions
In the audio package, we have two new metrics: Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI). Both metrics can be used to assess speech quality.
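A hedged sketch of how the two metrics might be called in this release: the class names PESQ and STOI and their import paths are assumptions tied to v0.6, and both metrics rely on optional backends (the pesq and pystoi packages).

```python
import torch
from torchmetrics.audio.pesq import PESQ  # needs the `pesq` package installed
from torchmetrics.audio.stoi import STOI  # needs the `pystoi` package installed

fs = 8000                 # sample rate in Hz
preds = torch.randn(fs)   # one second of (fake) degraded/enhanced speech
target = torch.randn(fs)  # one second of (fake) reference speech

nb_pesq = PESQ(fs=fs, mode="nb")  # narrow-band PESQ
stoi = STOI(fs=fs)

print(nb_pesq(preds, target), stoi(preds, target))
```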
In the retrieval package, we also have two new metrics: R-precision and hit rate. R-precision corresponds to the recall at the R-th position of the query, where R is the number of relevant documents for that query. The hit rate measures, for each query, whether at least one relevant document appears among the top-k retrieved documents, averaged over all queries.
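As with the other retrieval metrics, predictions are grouped into queries via an indexes tensor. A small sketch follows; the k argument and the calling convention shown here are assumptions based on the existing retrieval API.

```python
import torch
from torchmetrics import RetrievalHitRate, RetrievalRPrecision

# Two queries (index 0 and 1), each with a few retrieved documents
indexes = torch.tensor([0, 0, 0, 1, 1, 1, 1])
preds = torch.tensor([0.2, 0.3, 0.5, 0.1, 0.3, 0.5, 0.2])  # predicted relevance scores
target = torch.tensor([False, False, True, False, True, False, True])  # true relevance

hit_rate = RetrievalHitRate(k=2)   # hit within the top-2 results per query
r_precision = RetrievalRPrecision()

print(hit_rate(preds, target, indexes=indexes))
print(r_precision(preds, target, indexes=indexes))
```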
The text package also receives an update in the form of two new metrics: SacreBLEU score and character error rate. SacreBLEU provides a more systematic way of comparing BLEU scores across tasks. The character error rate is similar to the word error rate, but measures how well a given algorithm has predicted a sentence based on a character-by-character comparison instead of a word-by-word one.
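A minimal sketch of the character error rate, assuming the top-level import and the predictions-first calling convention:

```python
from torchmetrics import CharErrorRate

cer = CharErrorRate()
# predictions first, references second
score = cer(["hello wrld", "good morning"], ["hello world", "good morning"])
print(score)  # fraction of character edits needed relative to the references (~0.04 here)
```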
The regression package gets a single new metric in the form of the Tweedie deviance score. Deviance scores are generally a better measure of fit than measures such as squared error when modelling data coming from highly skewed distributions.
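A minimal sketch, assuming the metric is exposed as TweedieDevianceScore with a power argument (power=0 corresponds to squared error, power=1 to Poisson deviance, and power=2 to Gamma deviance):

```python
import torch
from torchmetrics import TweedieDevianceScore

preds = torch.tensor([2.0, 0.5, 1.0, 4.0])
target = torch.tensor([1.5, 1.0, 2.0, 3.0])

# power between 1 and 2 corresponds to compound Poisson-Gamma data;
# for this range predictions must be strictly positive
deviance = TweedieDevianceScore(power=1.5)
print(deviance(preds, target))
```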
Finally, we have added five new metrics for simple aggregation: SumMetric, MeanMetric, MinMetric, MaxMetric, and CatMetric. All five metrics take a single input (either native Python floats or torch.Tensor) and keep track of the sum, average, min, etc. These new aggregation metrics are especially useful in combination with self.log from Lightning if you want to log something other than the average of the metric you are tracking.
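A short sketch of the aggregation API:

```python
import torch
from torchmetrics import MaxMetric, MeanMetric

mean_loss = MeanMetric()
best_acc = MaxMetric()

for loss, acc in [(0.9, 0.55), (0.7, 0.61), (0.5, 0.68)]:
    mean_loss.update(loss)              # plain Python floats work ...
    best_acc.update(torch.tensor(acc))  # ... and so do tensors

print(mean_loss.compute())  # mean of the values seen so far -> tensor(0.7000)
print(best_acc.compute())   # maximum value seen so far -> tensor(0.6800)
```

Inside a LightningModule, the same pattern pairs naturally with self.log, for example logging best_acc.compute() at the end of a validation epoch to track the best score rather than the running average.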
[0.6.0] - 2021-10-28
https://devblog.pytorchlightning.ai/torchmetrics-v0-6-more-metrics-than-ever-e98c3983621e
Detail changes
Added
- RetrievalRPrecision (Implemented R-Precision for IR #577)
- RetrievalHitRate (Implemented HitRate for IR #576)
- SacreBLEUScore (Add SacreBLEUScore #546)
- CharErrorRate (Character Error Rate #575)
- MAP (mean average precision) metric to new detection package (Add mean average precision metric for object detection #467)
- nDCG metric (Add float target support to class & functional NDCG #437)
- average argument to AveragePrecision metric for reducing multi-label and multi-class problems (Adds average argument to AveragePrecision metric #477)
- MultioutputWrapper (Implement MultioutputWrapper #510)
- higher_is_better as constant attribute (Metric sweeping #544)
- higher_is_better to rest of codebase (Add missing higher_is_better attribute to metrics #584)
- SumMetric, MeanMetric, CatMetric, MinMetric, MaxMetric (Simple aggregation metrics #506)
- pairwise_cosine_similarity
- pairwise_euclidean_distance
- pairwise_linear_similarity
- pairwise_manhatten_distance

Changed
- AveragePrecision will now by default output the macro average for multilabel and multiclass problems (Adds average argument to AveragePrecision metric #477)
- half, double, float will no longer change the dtype of the metric states. Use metric.set_dtype instead (Fix dtype issues #493)
- AverageMeter renamed to MeanMetric (Simple aggregation metrics #506)
- is_differentiable changed from property to a constant attribute (make is_differentiable as attribute #551)
- ROC and AUROC will no longer throw an error when either the positive or negative class is missing. Instead, they return 0 scores and give a warning

Deprecated
- torchmetrics.functional.self_supervised.embedding_similarity in favour of the new pairwise submodule

Removed
- dtype property (Fix dtype issues #493)

Fixed
- F1 with average='macro' and ignore_index != None (Fix f1 score for macro and ignore index #495)
- pit by using the returned first result to initialize device and type (make metric_mtx type and device correct #533)
- SSIM metric using too much memory (Fix SSIM memory #539)
- device property was not properly updated when the metric was a child of a module (Fix child device #542)

Contributors
@an1lam, @Borda, @karthikrangasai, @lucadiliello, @mahinlma, @obus, @quancs, @SkafteNicki, @stancld, @tkupek
If we forgot someone due to not matching commit email with GitHub account, let us know :]