---
title: Specificity
tags:
- evaluate
- metric
description: >-
  Specificity is the fraction of the negative examples that were correctly
  labeled by the model as negative. It can be computed with the equation:
  Specificity = TN / (TN + FP), where TN is the number of true negatives and
  FP is the number of false positives.
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
---
# Metric Card for Specificity

## Metric Description
Specificity is the fraction of the negative examples that were correctly labeled by the model as negative. It can be computed with the equation:

$$\text{Specificity} = \frac{TN}{TN + FP}$$

where $TN$ is the number of true negatives and $FP$ is the number of false positives.
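As a quick sanity check of the formula, the sketch below computes specificity directly from a confusion matrix with scikit-learn (the library cited at the bottom of this card). The variable names and the use of `confusion_matrix` are purely illustrative; this is not the metric's actual implementation, but it reproduces Example 1 further down.

```python
# Illustrative sketch only: specificity computed by hand as TN / (TN + FP).
from sklearn.metrics import confusion_matrix

references = [0, 0, 1, 1, 1]   # same data as Example 1 below
predictions = [0, 1, 0, 1, 1]

# With the default neg_label=0, class 0 is the negative class, so the 2x2
# confusion matrix unravels as (tn, fp, fn, tp).
tn, fp, fn, tp = confusion_matrix(references, predictions, labels=[0, 1]).ravel()
print(tn / (tn + fp))  # 0.5
```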
## How to Use

At minimum, this metric takes as input two lists, each containing ints: `predictions` and `references`.
```python
>>> specificity_metric = evaluate.load('nevikw39/specificity')
>>> results = specificity_metric.compute(references=[0, 1], predictions=[0, 1])
>>> print(results)
{'specificity': 1.0}
```
### Inputs

- **predictions** (`list` of `int`): The predicted labels.
- **references** (`list` of `int`): The ground truth labels.
- **labels** (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`, and their order when `average` is `None`. Labels present in the data can be excluded in this input, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `y_true` and `y_pred` are used in sorted order. Defaults to `None`.
- **neg_label** (`int`): The class label to use as the 'negative class' when calculating the specificity; see the sketch after this list for how it relates to the recall of the negative class. Defaults to `0`.
- **average** (`string`): This parameter is required for multiclass/multilabel targets. If `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
    - `'binary'`: Only report results for the class specified by `neg_label`. This is applicable only if the target labels and predictions are binary.
    - `'micro'`: Calculate metrics globally by counting the total true negatives, false positives, and false negatives.
    - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
    - `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. Note that it can result in an F-score that is not between sensitivity and specificity.
    - `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
- **sample_weight** (`list` of `float`): Sample weights. Defaults to `None`.
- **zero_division** (`'warn'`, `0`, or `1`): Sets the value to return when there is a zero division. Defaults to `'warn'`.
    - `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.
    - `0`: If there is a zero division, the return value is `0`.
    - `1`: If there is a zero division, the return value is `1`.
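As a rough mental model for `neg_label` (an assumption inferred from the examples below, not a description of this metric's internals), binary specificity is simply the recall of the negative class, so scikit-learn's `recall_score` with `pos_label` set to the negative label reproduces the binary examples in this card:

```python
# Hedged sketch: binary specificity equals the recall of the negative class.
# Illustrates the neg_label parameter; this is not the metric's own code.
from sklearn.metrics import recall_score

references = [0, 0, 1, 1, 1]
predictions = [0, 1, 0, 1, 1]

# neg_label=0 (the default): treat class 0 as the class whose recall we want.
print(recall_score(references, predictions, pos_label=0))  # 0.5, as in Example 1
# neg_label=1: treat class 1 as the negative class instead.
print(recall_score(references, predictions, pos_label=1))  # 0.666..., as in Example 2
```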
### Output Values

- **specificity** (`float`, or `array` of `float`): Either the general specificity score, or the specificity scores for individual classes, depending on the values input to `labels` and `average`. Minimum possible value is 0. Maximum possible value is 1. A higher specificity means that more of the negative examples have been labeled correctly; therefore, a higher specificity is generally considered better.
#### Values from Popular Papers
### Examples
Example 1 - A simple example with some errors:

```python
>>> specificity_metric = evaluate.load('nevikw39/specificity')
>>> results = specificity_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1])
>>> print(results)
{'specificity': 0.5}
```
Example 2 - The same example as Example 1, but with `neg_label=1` instead of the default `neg_label=0`:

```python
>>> specificity_metric = evaluate.load('nevikw39/specificity')
>>> results = specificity_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], neg_label=1)
>>> print(results)
{'specificity': 0.6666666666666666}
```
Example 3 - The same example as Example 1, but with `sample_weight` included:

```python
>>> specificity_metric = evaluate.load('nevikw39/specificity')
>>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8]
>>> results = specificity_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight)
>>> print(results)
{'specificity': 0.8181818181818181}
```
Example 4 - A multiclass example, using different averages:

```python
>>> specificity_metric = evaluate.load('nevikw39/specificity')
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = specificity_metric.compute(predictions=predictions, references=references, average='macro')
>>> print(results)
{'specificity': 0.3333333333333333}
>>> results = specificity_metric.compute(predictions=predictions, references=references, average='micro')
>>> print(results)
{'specificity': 0.3333333333333333}
>>> results = specificity_metric.compute(predictions=predictions, references=references, average='weighted')
>>> print(results)
{'specificity': 0.3333333333333333}
>>> results = specificity_metric.compute(predictions=predictions, references=references, average=None)
>>> print(results)
{'specificity': array([1., 0., 0.])}
```
## Limitations and Bias
## Citation

```bibtex
@article{scikit-learn,
  title={Scikit-learn: Machine Learning in {P}ython},
  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal={Journal of Machine Learning Research},
  volume={12},
  pages={2825--2830},
  year={2011}
}
```