Information Retrieval
Information retrieval (IR) metrics are used to evaluate how well a system retrieves information from a database or a collection of documents. This is the case for search engines, where a user-provided query is compared against many possible results, some of which are relevant and some of which are not.
When you query a search engine, you hope that the most useful results are ranked highest on the results page. However, each query is usually compared against a different set of documents. For this reason, we needed a mechanism that lets users easily compute IR metrics even when each query is compared against a different number of candidates.
To support this, the IR metrics take an additional argument called indexes that indicates which query each prediction belongs to. All query-document pairs are grouped by query index, and the final result is computed as the average of the metric over the groups (a minimal sketch follows the list below).
In total, 6 new metrics have been added for information retrieval:
- RetrievalMAP (Mean Average Precision)
- RetrievalMRR (Mean Reciprocal Rank)
- RetrievalPrecision (Precision for IR)
- RetrievalRecall (Recall for IR)
- RetrievalNormalizedDCG (Normalized Discounted Cumulative Gain)
- RetrievalFallOut (Fall Out rate for IR)
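As a rough sketch of how the indexes argument groups predictions by query, here is an example with RetrievalMAP; the import path and call signature shown reflect typical usage and may differ slightly between versions, so treat this as illustrative rather than canonical:

```python
import torch
from torchmetrics import RetrievalMAP

# Three query-document pairs belong to query 0, two belong to query 1.
indexes = torch.tensor([0, 0, 0, 1, 1])
preds = torch.tensor([0.9, 0.3, 0.5, 0.8, 0.2])  # predicted relevance scores
target = torch.tensor([1, 0, 0, 1, 1])           # ground-truth relevance (binary)

rmap = RetrievalMAP()
# Pairs are grouped by query index; the result is the mean of the
# per-query Average Precision values.
print(rmap(preds, target, indexes=indexes))
```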
Special thanks go to @lucadiliello for implementing all the IR metrics.
Expanding and improving the collection
In addition to expanding our collection to the field of information retrieval, this release also includes new metrics for classification and regression:
- BootStrapper is a wrapper that can be placed around any other metric in our collection for easy computation of confidence intervals (see the sketch after this list)
- CohenKappa is a statistic that is used to measure inter-rater reliability for qualitative (categorical) items
- MatthewsCorrcoef or phi coefficient is used in machine learning as a measure of the quality of binary (two-class) classifications
- Hinge loss is used for "maximum-margin" classification, most notably for support vector machines.
- PearsonCorrcoef is a metric for measuring the linear correlation between two sets of data
- SpearmanCorrcoef is a metric for measuring the rank correlation between two sets of data. It assesses how well the relationship between two variables can be described using a monotonic function.
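Below is a minimal sketch of wrapping a metric with BootStrapper. The import path, the num_bootstraps argument, and the dict-style output are assumptions based on the API at the time of this release; note also that Accuracy() without a task argument matches this release, while newer versions require one.

```python
import torch
from torchmetrics import Accuracy
from torchmetrics.wrappers import BootStrapper

# Wrap an existing metric; num_bootstraps controls how many resampled
# copies of the metric are tracked internally.
bootstrap = BootStrapper(Accuracy(), num_bootstraps=20)

preds = torch.randint(0, 2, (100,))
target = torch.randint(0, 2, (100,))

bootstrap.update(preds, target)
# compute() returns aggregate statistics (e.g. mean and std) over the
# bootstrapped copies, which can serve as a rough confidence interval.
print(bootstrap.compute())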
Binned metrics
The current implementations of AveragePrecision and PrecisionRecallCurve have the drawback that they store all predictions and targets in memory in order to calculate the metric value exactly. These metrics now have binned versions that evaluate the value at a fixed number of thresholds. This is slightly less precise than the original implementations but far more memory efficient, as in the rough sketch below.
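A sketch of the binned variant follows. The class name BinnedAveragePrecision and the num_thresholds argument reflect the API introduced in this release; later versions folded the binned behavior into the regular classes, so double-check the documentation for your version.

```python
import torch
from torchmetrics import BinnedAveragePrecision

# Binned variant: instead of storing every (pred, target) pair, the metric
# keeps running counts at a fixed grid of thresholds, so memory stays constant.
metric = BinnedAveragePrecision(num_classes=1, num_thresholds=100)

preds = torch.rand(1000)
target = torch.randint(0, 2, (1000,))

metric.update(preds, target)
print(metric.compute())  # approximate value, evaluated only at the fixed thresholds
```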
Special thanks go to @SkafteNicki for making all of this happen.
[0.3.0] - 2021-04-20
Added
- Added `BootStrapper` to easily calculate confidence intervals for metrics (#101)
- Added Binned metrics (#128)
- Added metrics for Information Retrieval:
  - `RetrievalMAP`
  - `RetrievalMRR`
  - `RetrievalPrecision`
  - `RetrievalRecall`
  - `RetrievalNormalizedDCG`
  - `RetrievalFallOut`
- Added other metrics:
  - `CohenKappa`
  - `MatthewsCorrcoef`
  - `PearsonCorrcoef`
  - `SpearmanCorrcoef`
  - `Hinge`
- Added `average='micro'` as an option in AUROC for multilabel problems (#110)
- Added multilabel support to `ROC` metric (#114)
- Added testing for `half` precision (#77, #135)
- Added `AverageMeter` for ad-hoc averages of values (#138)
- Added `prefix` argument to `MetricCollection` (#70)
- Added `__getitem__` as metric arithmetic operation (#142)
- Added property `is_differentiable` to metrics and test for differentiability (#154)
- Added support for `average`, `ignore_index` and `mdmc_average` in `Accuracy` metric (#166)
- Added `postfix` arg to `MetricCollection` (#188)
Changed
- Changed `ExplainedVariance` from storing all preds/targets to tracking 5 statistics (#68)
- Changed behavior of `confusionmatrix` for multilabel data to better match `multilabel_confusion_matrix` from sklearn (#134)
- Updated FBeta arguments (#111)
- Changed `reset` method to use `detach().clone()` instead of `deepcopy` when resetting to default (#163)
- Metrics passed as dict to `MetricCollection` will now always be in deterministic order (#173)
- Allowed `MetricCollection` to accept metrics passed as arguments (#176)
Deprecated
- Renamed argument `is_multiclass` -> `multiclass` (#162)
Removed
- Pruned remaining deprecated code (#92)
Fixed
- Fixed `_stable_1d_sort` to work when `n >= N` (PL^6177)
- Fixed `_computed` attribute not being correctly reset (#147)
- Fixed BLEU score (#165)
- Fixed backwards compatibility for logging with older versions of pytorch-lightning (#182)
Contributors
@alanhdu, @arvindmuralie77, @bhadreshpsavani, @Borda, @ethanwharris, @lucadiliello, @maximsch2, @SkafteNicki, @thomasgaudelet, @victorjoos
If we forgot someone due to not matching commit email with GitHub account, let us know :]