nevikw39 committed
Commit dc76a90
1 Parent(s): bc9d259

implement specificity

Files changed (3)
  1. README.md +75 -18
  2. specificity.py +91 -52
  3. tests.py +3 -13
README.md CHANGED
@@ -1,11 +1,9 @@
  ---
  title: Specificity
- datasets:
- -
  tags:
  - evaluate
  - metric
- description: "TODO: add a description here"
  sdk: gradio
  sdk_version: 3.19.1
  app_file: app.py
@@ -14,37 +12,96 @@ pinned: false

  # Metric Card for Specificity

- ***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.*
-
  ## Metric Description
- *Give a brief overview of this metric, including what task(s) it is usually used for, if any.*

  ## How to Use
- *Give general statement of how to use the metric*

- *Provide simplest possible example for using the metric*

  ### Inputs
- *List all input arguments in the format below*
- - **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*

- ### Output Values

- *Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu" : 6.02}*

- *State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*

  #### Values from Popular Papers
- *Give examples, preferrably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.*

  ### Examples
- *Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*

  ## Limitations and Bias
- *Note any known limitations or biases that the metric has, with links and references if possible.*

  ## Citation
- *Cite the source where this metric was introduced.*

  ## Further References
- *Add any useful further references.*
  ---
  title: Specificity
  tags:
  - evaluate
  - metric
+ description: "Specificity is the fraction of negative examples that the model correctly labels as negative. It can be computed with the equation: Specificity = TN / (TN + FP), where TN is the number of true negatives and FP is the number of false positives."
  sdk: gradio
  sdk_version: 3.19.1
  app_file: app.py

  # Metric Card for Specificity

  ## Metric Description
+
+ Specificity is the fraction of negative examples that the model correctly labels as negative. It can be computed with the equation:
+ $$\text{Specificity} = \frac{TN}{TN + FP}$$
+ where $TN$ is the number of true negatives and $FP$ is the number of false positives.

  ## How to Use

+ At minimum, this metric takes two lists of integers as input: the predictions and the references.
+
+ ```python
+ >>> specificity_metric = evaluate.load('nevikw39/specificity')
+ >>> results = specificity_metric.compute(references=[0, 1], predictions=[0, 1])
+ >>> print(results)
+ {'specificity': 1.0}
+ ```

  ### Inputs

+ - **predictions** (`list` of `int`): The predicted labels.
+ - **references** (`list` of `int`): The ground truth labels.
+ - **labels** (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`, and their order when `average` is `None`. Labels present in the data can be excluded, for example to calculate a multiclass average that ignores a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in the references and predictions are used in sorted order. Defaults to `None`.
+ - **neg_label** (`int`): The class label to use as the 'negative class' when calculating specificity. Defaults to `0`.
+ - **average** (`string`): This parameter is required for multiclass/multilabel targets. If `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
+     - `'binary'`: Only report results for the class specified by `neg_label`. This is applicable only if the target labels and predictions are binary.
+     - `'micro'`: Calculate metrics globally by counting the total true negatives, false positives, and false negatives.
+     - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
+     - `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance.
+     - `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
+ - **sample_weight** (`list` of `float`): Sample weights. Defaults to `None`.
+ - **zero_division** (`'warn'`, `0`, or `1`): Sets the value to return when there is a zero division. Defaults to `'warn'`.
+     - `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.
+     - `0`: If there is a zero division, the return value is `0`.
+     - `1`: If there is a zero division, the return value is `1`.

+ ### Output Values

+ - **specificity** (`float`, or `array` of `float`): Either the overall specificity score, or the specificity scores for individual classes, depending on the values input to `labels` and `average`. The minimum possible value is 0 and the maximum possible value is 1. A higher specificity means that more of the negative examples have been labeled correctly, so a higher specificity is generally considered better.

  #### Values from Popular Papers

  ### Examples
+
+ Example 1 - A simple example with some errors:
+ ```python
+ >>> specificity_metric = evaluate.load('nevikw39/specificity')
+ >>> results = specificity_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1])
+ >>> print(results)
+ {'specificity': 0.5}
+ ```
+ Example 2 - The same example as Example 1, but with `neg_label=1` instead of the default `neg_label=0`:
+ ```python
+ >>> specificity_metric = evaluate.load('nevikw39/specificity')
+ >>> results = specificity_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], neg_label=1)
+ >>> print(results)
+ {'specificity': 0.6666666666666666}
+ ```
+ Example 3 - The same example as Example 1, but with `sample_weight` included:
+ ```python
+ >>> specificity_metric = evaluate.load('nevikw39/specificity')
+ >>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8]
+ >>> results = specificity_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight)
+ >>> print(results)
+ {'specificity': 0.8181818181818181}
+ ```
+ Example 4 - A multiclass example, using different averages:
+ ```python
+ >>> specificity_metric = evaluate.load('nevikw39/specificity')
+ >>> predictions = [0, 2, 1, 0, 0, 1]
+ >>> references = [0, 1, 2, 0, 1, 2]
+ >>> results = specificity_metric.compute(predictions=predictions, references=references, average='macro')
+ >>> print(results)
+ {'specificity': 0.3333333333333333}
+ >>> results = specificity_metric.compute(predictions=predictions, references=references, average='micro')
+ >>> print(results)
+ {'specificity': 0.3333333333333333}
+ >>> results = specificity_metric.compute(predictions=predictions, references=references, average='weighted')
+ >>> print(results)
+ {'specificity': 0.3333333333333333}
+ >>> results = specificity_metric.compute(predictions=predictions, references=references, average=None)
+ >>> print(results)
+ {'specificity': array([1., 0., 0.])}
+ ```

  ## Limitations and Bias

  ## Citation
+
+ ```
+ @article{scikit-learn, title={Scikit-learn: Machine Learning in {P}ython}, author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, journal={Journal of Machine Learning Research}, volume={12}, pages={2825--2830}, year={2011}}
+ ```

  ## Further References
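
As a quick sanity check of the metric card's Example 1 (not part of this commit), specificity can be computed directly from the confusion matrix. The sketch below uses only `sklearn.metrics`, which the new module already depends on:

```python
from sklearn.metrics import confusion_matrix, recall_score

references = [0, 0, 1, 1, 1]
predictions = [0, 1, 0, 1, 1]

# With labels=[0, 1] and 0 treated as the negative class, the matrix is
# [[TN, FP],
#  [FN, TP]].
tn, fp, fn, tp = confusion_matrix(references, predictions, labels=[0, 1]).ravel()
print(tn / (tn + fp))  # 0.5

# Equivalently, specificity is the recall of the class chosen as negative.
print(recall_score(references, predictions, pos_label=0))  # 0.5
```

The second call is the identity the new module relies on: recall of the negative class equals TN / (TN + FP).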
 
specificity.py CHANGED
@@ -11,85 +11,124 @@
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
- """TODO: Add a description here."""

  import evaluate
  import datasets


- # TODO: Add BibTeX citation
  _CITATION = """\
- @InProceedings{huggingface:module,
- title = {A great new module},
- authors={huggingface, Inc.},
- year={2020}
- }
  """

- # TODO: Add description of the module here
  _DESCRIPTION = """\
- This new module is designed to solve this great ML task and is crafted with a lot of care.
  """


- # TODO: Add description of the arguments of the module here
  _KWARGS_DESCRIPTION = """
- Calculates how good are predictions given some references, using certain scores
  Args:
-     predictions: list of predictions to score. Each predictions
-         should be a string with tokens separated by spaces.
-     references: list of reference for each prediction. Each
-         reference should be a string with tokens separated by spaces.
  Returns:
-     accuracy: description of the first score,
-     another_score: description of the second score,
  Examples:
-     Examples should be written in doctest format, and should illustrate how
-     to use the function.
-
-     >>> my_new_module = evaluate.load("my_new_module")
-     >>> results = my_new_module.compute(references=[0, 1], predictions=[0, 1])
-     >>> print(results)
-     {'accuracy': 1.0}
  """

- # TODO: Define external resources urls if needed
- BAD_WORDS_URL = "http://url/to/external/resource/bad_words.txt"
-

  @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
  class Specificity(evaluate.Metric):
-     """TODO: Short description of my evaluation module."""

      def _info(self):
-         # TODO: Specifies the evaluate.EvaluationModuleInfo object
          return evaluate.MetricInfo(
-             # This is the description that will appear on the modules page.
              module_type="metric",
              description=_DESCRIPTION,
              citation=_CITATION,
              inputs_description=_KWARGS_DESCRIPTION,
-             # This defines the format of each prediction and reference
-             features=datasets.Features({
-                 'predictions': datasets.Value('int64'),
-                 'references': datasets.Value('int64'),
-             }),
-             # Homepage of the module for documentation
-             homepage="http://module.homepage",
-             # Additional links to the codebase or references
-             codebase_urls=["http://github.com/path/to/codebase/of/new_module"],
-             reference_urls=["http://path.to.reference.url/new_module"]
          )

-     def _download_and_prepare(self, dl_manager):
-         """Optional: download external resources useful to compute the scores"""
-         # TODO: Download external resources if needed
-         pass
-
-     def _compute(self, predictions, references):
-         """Returns the scores"""
-         # TODO: Compute the different scores of the module
-         accuracy = sum(i == j for i, j in zip(predictions, references)) / len(predictions)
-         return {
-             "accuracy": accuracy,
-         }
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
+ """Specificity metric."""

  import evaluate
  import datasets

+ from sklearn.metrics import recall_score
+

  _CITATION = """\
+ @article{scikit-learn, title={Scikit-learn: Machine Learning in {P}ython}, author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, journal={Journal of Machine Learning Research}, volume={12}, pages={2825--2830}, year={2011}}
  """

+
  _DESCRIPTION = """\
+ Specificity is the fraction of negative examples that the model correctly labels as negative. It can be computed with the equation:
+ Specificity = TN / (TN + FP)
+ where TN is the number of true negatives and FP is the number of false positives.
  """


  _KWARGS_DESCRIPTION = """
  Args:
+ - **predictions** (`list` of `int`): The predicted labels.
+ - **references** (`list` of `int`): The ground truth labels.
+ - **labels** (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`, and their order when `average` is `None`. Labels present in the data can be excluded, for example to calculate a multiclass average that ignores a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in the references and predictions are used in sorted order. Defaults to `None`.
+ - **neg_label** (`int`): The class label to use as the 'negative class' when calculating specificity. Defaults to `0`.
+ - **average** (`string`): This parameter is required for multiclass/multilabel targets. If `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
+     - `'binary'`: Only report results for the class specified by `neg_label`. This is applicable only if the target labels and predictions are binary.
+     - `'micro'`: Calculate metrics globally by counting the total true negatives, false positives, and false negatives.
+     - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
+     - `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance.
+     - `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
+ - **sample_weight** (`list` of `float`): Sample weights. Defaults to `None`.
+ - **zero_division** (`'warn'`, `0`, or `1`): Sets the value to return when there is a zero division. Defaults to `'warn'`.
+     - `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.
+     - `0`: If there is a zero division, the return value is `0`.
+     - `1`: If there is a zero division, the return value is `1`.
  Returns:
+ - **specificity** (`float`, or `array` of `float`): Either the overall specificity score, or the specificity scores for individual classes, depending on the values input to `labels` and `average`. The minimum possible value is 0 and the maximum possible value is 1. A higher specificity means that more of the negative examples have been labeled correctly, so a higher specificity is generally considered better.
  Examples:
+     Example 1 - A simple example with some errors
+         >>> specificity_metric = evaluate.load('nevikw39/specificity')
+         >>> results = specificity_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1])
+         >>> print(results)
+         {'specificity': 0.5}
+     Example 2 - The same example as Example 1, but with `neg_label=1` instead of the default `neg_label=0`.
+         >>> specificity_metric = evaluate.load('nevikw39/specificity')
+         >>> results = specificity_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], neg_label=1)
+         >>> print(results)
+         {'specificity': 0.6666666666666666}
+     Example 3 - The same example as Example 1, but with `sample_weight` included.
+         >>> specificity_metric = evaluate.load('nevikw39/specificity')
+         >>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8]
+         >>> results = specificity_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight)
+         >>> print(results)
+         {'specificity': 0.8181818181818181}
+     Example 4 - A multiclass example, using different averages.
+         >>> specificity_metric = evaluate.load('nevikw39/specificity')
+         >>> predictions = [0, 2, 1, 0, 0, 1]
+         >>> references = [0, 1, 2, 0, 1, 2]
+         >>> results = specificity_metric.compute(predictions=predictions, references=references, average='macro')
+         >>> print(results)
+         {'specificity': 0.3333333333333333}
+         >>> results = specificity_metric.compute(predictions=predictions, references=references, average='micro')
+         >>> print(results)
+         {'specificity': 0.3333333333333333}
+         >>> results = specificity_metric.compute(predictions=predictions, references=references, average='weighted')
+         >>> print(results)
+         {'specificity': 0.3333333333333333}
+         >>> results = specificity_metric.compute(predictions=predictions, references=references, average=None)
+         >>> print(results)
+         {'specificity': array([1., 0., 0.])}
  """


  @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
  class Specificity(evaluate.Metric):
+     """Specificity metric."""

      def _info(self):
          return evaluate.MetricInfo(
              module_type="metric",
              description=_DESCRIPTION,
              citation=_CITATION,
              inputs_description=_KWARGS_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "predictions": datasets.Sequence(datasets.Value("int32")),
+                     "references": datasets.Sequence(datasets.Value("int32")),
+                 }
+                 if self.config_name == "multilabel"
+                 else {
+                     "predictions": datasets.Value("int32"),
+                     "references": datasets.Value("int32"),
+                 }
+             ),
+             reference_urls=[
+                 "https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html"
+             ],
          )

+     def _compute(
+         self,
+         predictions,
+         references,
+         labels=None,
+         neg_label=0,
+         average="binary",
+         sample_weight=None,
+         zero_division="warn",
+     ):
+         score = recall_score(
+             references,
+             predictions,
+             labels=labels,
+             pos_label=neg_label,
+             average=average,
+             sample_weight=sample_weight,
+             zero_division=zero_division,
+         )
+         return {"specificity": float(score) if score.size == 1 else score}
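
The new `_compute` is a thin wrapper around `sklearn.metrics.recall_score` with `pos_label=neg_label`, since the recall of the negative class is exactly TN / (TN + FP). The sketch below is an illustration, not part of the commit: it calls scikit-learn directly to show where the `average=None` array in the docstring's multiclass example comes from.

```python
from sklearn.metrics import recall_score

# Multiclass data from Example 4 in the docstring.
references = [0, 1, 2, 0, 1, 2]
predictions = [0, 2, 1, 0, 0, 1]

# average=None returns one score per class, in sorted label order [0, 1, 2]:
# class 0 is always predicted correctly, classes 1 and 2 never are.
print(recall_score(references, predictions, average=None))     # [1. 0. 0.]

# 'macro' is the unweighted mean of those per-class scores.
print(recall_score(references, predictions, average="macro"))  # 0.333...
```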
tests.py CHANGED
@@ -1,17 +1,7 @@
  test_cases = [
      {
-         "predictions": [0, 0],
-         "references": [1, 1],
-         "result": {"metric_score": 0}
      },
-     {
-         "predictions": [1, 1],
-         "references": [1, 1],
-         "result": {"metric_score": 1}
-     },
-     {
-         "predictions": [1, 0],
-         "references": [1, 1],
-         "result": {"metric_score": 0.5}
-     }
  ]
  test_cases = [
      {
+         "predictions": [0, 0, 1, 1, 1],
+         "references": [0, 1, 0, 1, 1],
+         "result": {"specificity": 0.5}
      },
  ]
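
The updated test case mirrors Example 1 in the metric card (with predictions and references swapped, which also yields 0.5). The diff does not show how `test_cases` is executed; a minimal, hypothetical runner might look like the following, assuming the module id `nevikw39/specificity` used in the card.

```python
import evaluate

# Hypothetical runner for tests.py; the actual test harness is not shown in this diff.
test_cases = [
    {
        "predictions": [0, 0, 1, 1, 1],
        "references": [0, 1, 0, 1, 1],
        "result": {"specificity": 0.5},
    },
]

metric = evaluate.load("nevikw39/specificity")
for case in test_cases:
    result = metric.compute(
        predictions=case["predictions"], references=case["references"]
    )
    assert result == case["result"], (result, case["result"])
```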