
Faster collection and more metrics!


@Borda released this 15 Apr 01:07

We are excited to announce that TorchMetrics v0.8 is now available. The release includes several new metrics in the classification and image domains, as well as performance improvements for those working with metric collections.

Metric collections just got faster

Common wisdom dictates that you should never evaluate your models with a single metric but instead with a collection of metrics. For example, in classification it is common to simultaneously evaluate accuracy, precision, recall, and the F1 score. TorchMetrics has long provided the MetricCollection object for chaining such metrics together, giving an easy interface to calculate them all at once. However, the metrics in such a collection often share some of the underlying computations, which until now were repeated for every metric in the collection. TorchMetrics v0.8 introduces the concept of compute_groups to MetricCollection, which by default auto-detects and groups together metrics that share some of the same computations.

Thus, if you are using MetricCollection in your code, upgrading to TorchMetrics v0.8 should automatically make your code run faster without any code changes.
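As a rough illustration, here is a minimal sketch of what this looks like in practice, assuming the v0.8 classification API (Accuracy, Precision, Recall with their default arguments). The compute_groups flag is spelled out explicitly below only to highlight the new argument; it is already the default, so existing code benefits without changes.

```python
import torch
from torchmetrics import MetricCollection, Accuracy, Precision, Recall

# compute_groups=True is the default in v0.8. Metrics that share underlying
# state (here the stat-score counts behind Accuracy/Precision/Recall) are
# grouped so the shared computation runs once per update instead of once
# per metric.
collection = MetricCollection(
    [Accuracy(), Precision(), Recall()],
    compute_groups=True,
)

preds = torch.randint(0, 3, (100,))
target = torch.randint(0, 3, (100,))

collection.update(preds, target)
print(collection.compute())  # {'Accuracy': ..., 'Precision': ..., 'Recall': ...}
```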

Many exciting new metrics

TorchMetrics v0.8 includes several new metrics within the classification and image domains, for both the functional and modular APIs. We refer to the documentation for the full description of all metrics if you want to learn more about them.

  • SpectralAngleMapper or SAM was added to the image package. This metric can calculate the spectral similarity between given reference spectra and estimated spectra.
  • CoverageError was added to the classification package. This metric can be used when you are working with multi-label data. It works similarly to its scikit-learn counterpart and computes how far you need to go down the ranked scores to cover all true labels.
  • LabelRankingAveragePrecision and LabelRankingLoss were added to the classification package. Both metrics are used in multi-label ranking problems, where the goal is to give a better rank to the labels associated with each sample. Each metric gives a measure of how well your model is doing this.
  • ErrorRelativeGlobalDimensionlessSynthesis or ERGAS was added to the image package. This metric can be used to calculate the accuracy of pan-sharpened images, considering the normalized average error of each band of the resulting image.
  • UniversalImageQualityIndex was added to the image package. This metric assesses the difference between two images by considering three factors: loss of correlation, luminance distortion, and contrast distortion.
  • ClasswiseWrapper was added to the wrapper package. This wrapper can be used in combination with metrics that return multiple values (such as classification metrics with the average=None argument). The wrapper will unwrap the result into a dict with a label for each value; see the sketch after this list.
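To make the last point concrete, here is a minimal sketch of ClasswiseWrapper, assuming the v0.8 Accuracy API with average=None. The label names are made up for illustration, and the labels argument is optional (without it the class index is used).

```python
import torch
from torchmetrics import Accuracy, ClasswiseWrapper

# Wrap a metric that returns one value per class so the result comes back
# as a dict keyed by class label instead of a single tensor.
metric = ClasswiseWrapper(
    Accuracy(num_classes=3, average=None),
    labels=["horse", "fish", "dog"],  # hypothetical label names
)

preds = torch.randn(10, 3).softmax(dim=-1)
target = torch.randint(0, 3, (10,))

print(metric(preds, target))
# e.g. {'accuracy_horse': tensor(...), 'accuracy_fish': tensor(...), 'accuracy_dog': tensor(...)}
```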

[0.8.0] - 2022-04-14

Added

  • Added WeightedMeanAbsolutePercentageError to regression package (#948)
  • Added new classification metrics:
    • CoverageError (#787)
    • LabelRankingAveragePrecision and LabelRankingLoss (#787)
  • Added new image metric:
    • SpectralAngleMapper (#885)
    • ErrorRelativeGlobalDimensionlessSynthesis (#894)
    • UniversalImageQualityIndex (#824)
    • SpectralDistortionIndex (#873)
  • Added support for MetricCollection in MetricTracker (#718)
  • Added support for 3D image and uniform kernel in StructuralSimilarityIndexMeasure (#818)
  • Added smart update of MetricCollection (#709)
  • Added ClasswiseWrapper for better logging of classification metrics with multiple output values (#832)
  • Added **kwargs argument for passing additional arguments to base class (#833)
  • Added negative ignore_index for the Accuracy metric (#362)
  • Added adaptive_k for the RetrievalPrecision metric (#910)
  • Added reset_real_features argument to image quality assessment metrics (#722)
  • Added new keyword argument compute_on_cpu to all metrics (#867)

Changed

  • Made num_classes in jaccard_index a required argument (#853, #914)
  • Added normalizer, tokenizer to ROUGE metric (#838)
  • Improved shape checking of permutation_invariant_training (#864)
  • Allowed reduction None (#891)
  • MetricTracker.best_metric will now give a warning when computing on a metric that does not have a best value (#913)

Deprecated

  • Deprecated argument compute_on_step (#792)
  • Deprecated passing dist_sync_on_step, process_group, and dist_sync_fn as direct arguments (#833)

Removed

  • Removed support for versions of Lightning lower than v1.5 (#788)
  • Removed deprecated functions and warnings in Text (#773)
    • WER and functional.wer
  • Removed deprecated functions and warnings in Image (#796)
    • SSIM and functional.ssim
    • PSNR and functional.psnr
  • Removed deprecated functions and warnings in classification and regression (#806)
    • FBeta and functional.fbeta
    • F1 and functional.f1
    • Hinge and functional.hinge
    • IoU and functional.iou
    • MatthewsCorrcoef
    • PearsonCorrcoef
    • SpearmanCorrcoef
  • Removed deprecated functions and warnings in detection and pairwise (#804)
    • MAP and functional.pairwise.manhatten
  • Removed deprecated functions and warnings in Audio (#805)
    • PESQ and functional.audio.pesq
    • PIT and functional.audio.pit
    • SDR and functional.audio.sdr and functional.audio.si_sdr
    • SNR and functional.audio.snr and functional.audio.si_snr
    • STOI and functional.audio.stoi

Fixed

  • Fixed device mismatch for MAP metric in specific cases (#950)
  • Improved testing speed (#820)
  • Fixed compatibility of ClasswiseWrapper with the prefix argument of MetricCollection (#843)
  • Fixed BestScore on GPU (#912)
  • Fixed Lsum computation for ROUGEScore (#944)

Contributors

@ankitaS11, @ashutoshml, @Borda, @hookSSi, @justusschock, @lucadiliello, @quancs, @rusty1s, @SkafteNicki, @stancld, @vumichien, @weningerleon, @yassersouri

If we forgot someone because your commit email does not match your GitHub account, let us know :]