

@Borda released this 20 Apr 19:34

Information Retrieval

Information retrieval (IR) metrics are used to evaluate how well a system retrieves information from a database or a collection of documents. This is the case with search engines, where a user's query is compared with many possible results, some of which are relevant and some are not.

When you query a search engine, you hope that results that could be useful are ranked higher on the results page. However, each query is usually compared with a different set of documents. For this reason, we had to implement a mechanism to allow users to easily compute the IR metrics in cases where each query is compared with a different number of possible candidates.

To support this, the IR metrics take an additional argument called indexes that indicates which query each prediction refers to. All query-document pairs are then grouped by query index, and the final result is computed as the average of the metric over the groups.
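
Below is a minimal sketch of how the indexes argument is used, following the pattern in the torchmetrics documentation; RetrievalMAP (introduced below) serves as the example metric.

```python
import torch
from torchmetrics import RetrievalMAP

# indexes assigns each prediction to the query it was retrieved for:
# the first three pairs belong to query 0, the last four to query 1
indexes = torch.tensor([0, 0, 0, 1, 1, 1, 1])
preds = torch.tensor([0.2, 0.3, 0.5, 0.1, 0.3, 0.5, 0.2])
target = torch.tensor([False, False, True, False, True, False, True])

rmap = RetrievalMAP()
# pairs are grouped by query index; the result is the average of the
# per-query Average Precision values
print(rmap(preds, target, indexes=indexes))  # tensor(0.7917)
```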

In total, six new metrics have been added for information retrieval:

  • RetrievalMAP (Mean Average Precision)
  • RetrievalMRR (Mean Reciprocal Rank)
  • RetrievalPrecision (Precision for IR)
  • RetrievalRecall (Recall for IR)
  • RetrievalNormalizedDCG (Normalized Discounted Cumulative Gain)
  • RetrievalFallOut (Fall Out rate for IR)

Special thanks go to @lucadiliello for implementing all the IR metrics.

Expanding and improving the collection

In addition to expanding our collection to the field of information retrieval, this release also includes new metrics for classification and correlation:

  • BootStrapper, a wrapper that can be placed around any other metric in our collection for easy computation of confidence intervals (see the sketch after this list).
  • CohenKappa, a statistic used to measure inter-rater reliability for qualitative (categorical) items.
  • MatthewsCorrcoef, or phi coefficient, used in machine learning as a measure of the quality of binary (two-class) classifications.
  • Hinge loss, used for "maximum-margin" classification, most notably for support vector machines.
  • PearsonCorrcoef, a metric for measuring the linear correlation between two sets of data.
  • SpearmanCorrcoef, a metric for measuring the rank correlation between two sets of data. It assesses how well the relationship between two variables can be described using a monotonic function.
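
As a quick illustration of the BootStrapper wrapper, the sketch below wraps Accuracy and reports a bootstrapped mean and standard deviation; the num_bootstraps argument and the dict output follow the torchmetrics documentation and should be treated as an assumption for this exact release.

```python
import torch
from torchmetrics import Accuracy, BootStrapper

# wrap any metric to estimate the spread of its value via resampling
bootstrap = BootStrapper(Accuracy(), num_bootstraps=20)

preds = torch.randn(100, 5).softmax(dim=-1)   # fake class probabilities
target = torch.randint(5, (100,))             # fake labels

bootstrap.update(preds, target)
# compute() returns a dict with the bootstrapped 'mean' and 'std'
print(bootstrap.compute())
```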

Binned metrics

The current implementations of AveragePrecision and PrecisionRecallCurve have the drawback that they store all predictions and targets in memory to calculate the metric value exactly. These metrics now have binned counterparts that evaluate the value at a fixed set of thresholds. This is less precise than the original implementations, but also much more memory efficient.
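
A rough sketch of the trade-off, assuming the binned variant is exposed as BinnedAveragePrecision with num_classes and num_thresholds arguments (these names are an assumption; check the API reference for the release you are using):

```python
import torch
from torchmetrics import AveragePrecision, BinnedAveragePrecision

preds = torch.rand(1000)
target = torch.randint(2, (1000,))

# exact version: stores every (pred, target) pair until compute()
exact = AveragePrecision(pos_label=1)

# binned version: only accumulates counts at a fixed number of thresholds,
# trading a little precision for a constant memory footprint
binned = BinnedAveragePrecision(num_classes=1, num_thresholds=100)

print(exact(preds, target))
print(binned(preds, target))
```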

Special thanks go to @SkafteNicki for making all this happen.

https://devblog.pytorchlightning.ai/torchmetrics-v0-3-0-information-retrieval-metrics-and-more-c55265e9b94f

[0.3.0] - 2021-04-20

Added

  • Added BootStrapper to easily calculate confidence intervals for metrics (#101)
  • Added Binned metrics (#128)
  • Added metrics for Information Retrieval:
    • Added RetrievalMAP (PL^5032)
    • Added RetrievalMRR (#119)
    • Added RetrievalPrecision (#139)
    • Added RetrievalRecall (#146)
    • Added RetrievalNormalizedDCG (#160)
    • Added RetrievalFallOut (#161)
  • Added other metrics:
    • Added CohenKappa (#69)
    • Added MatthewsCorrcoef (#98)
    • Added PearsonCorrcoef (#157)
    • Added SpearmanCorrcoef (#158)
    • Added Hinge (#120)
  • Added average='micro' as an option in AUROC for multilabel problems (#110)
  • Added multilabel support to ROC metric (#114)
  • Added testing for half precision (#77, #135)
  • Added AverageMeter for ad-hoc averages of values (#138)
  • Added prefix argument to MetricCollection (#70)
  • Added __getitem__ as metric arithmetic operation (#142)
  • Added property is_differentiable to metrics and test for differentiability (#154)
  • Added support for average, ignore_index and mdmc_average in Accuracy metric (#166)
  • Added postfix arg to MetricCollection (#188)

Changed

  • Changed ExplainedVariance from storing all preds/targets to tracking 5 statistics (#68)
  • Changed behavior of ConfusionMatrix for multilabel data to better match multilabel_confusion_matrix from sklearn (#134)
  • Updated FBeta arguments (#111)
  • Changed reset method to use detach().clone() instead of deepcopy when resetting to default (#163)
  • Metrics passed as dict to MetricCollection will now always be in deterministic order (#173)
  • Allowed MetricCollection to accept metrics passed as positional arguments (#176)

Deprecated

  • Renamed argument is_multiclass -> multiclass (#162)

Removed

  • Pruned remaining deprecated code (#92)

Fixed

  • Fixed _stable_1d_sort to work when n >= N (PL^6177)
  • Fixed _computed attribute not being correctly reset (#147)
  • Fixed BLEU score (#165)
  • Fixed backwards compatibility for logging with older version of pytorch-lightning (#182)

Contributors

@alanhdu, @arvindmuralie77, @bhadreshpsavani, @Borda, @ethanwharris, @lucadiliello, @maximsch2, @SkafteNicki, @thomasgaudelet, @victorjoos

If we forgot someone due to not matching commit email with GitHub account, let us know :]