lvwerra (HF staff) committed
Commit c94788e
1 Parent(s): d72cd42

Upload 4 files

Files changed (4)
  1. README.md +128 -0
  2. accuracy_score.py +154 -0
  3. app.py +6 -0
  4. requirements.txt +2 -0
README.md ADDED
@@ -0,0 +1,128 @@
+ ---
+ title: accuracy_score
+ emoji: 🤗
+ colorFrom: blue
+ colorTo: orange
+ tags:
+ - evaluate
+ - metric
+ - sklearn
+ description: >-
+   "Accuracy classification score."
+ sdk: gradio
+ sdk_version: 3.12.0
+ app_file: app.py
+ pinned: false
+ ---
+
+ This metric is part of the scikit-learn integration into 🤗 Evaluate. You can find all available metrics in the [scikit-learn organization](https://huggingface.co/scikit-learn) on the Hugging Face Hub.
+
+ <p align="center">
+   <img src="https://raw.githubusercontent.com/scikit-learn/scikit-learn/main/doc/logos/1280px-scikit-learn-logo.png" width="400"/>
+ </p>
+
+ # Metric Card for `sklearn.metrics.accuracy_score`
+
+ ## Input Convention
+
+ To be consistent with the `evaluate` input conventions, the scikit-learn inputs are renamed:
+
+ - `y_true`: `references`
+ - `y_pred`: `predictions`
+
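+ As a quick illustration (label values here are invented), the two call styles line up as follows:
+
+ ```python
+ from sklearn.metrics import accuracy_score
+ import evaluate
+
+ y_true = [0, 1, 1, 2]  # ground-truth labels, in scikit-learn's naming
+ y_pred = [0, 0, 1, 2]  # predicted labels, in scikit-learn's naming
+
+ # Plain scikit-learn call:
+ accuracy_score(y_true=y_true, y_pred=y_pred)           # 0.75
+
+ # The same computation through evaluate, with the renamed keywords:
+ metric = evaluate.load("sklearn/accuracy_score")
+ metric.compute(references=y_true, predictions=y_pred)  # {'accuracy_score': 0.75}
+ ```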
+
+ ## Usage
+
+ ```python
+ import evaluate
+
+ metric = evaluate.load("sklearn/accuracy_score")
+ results = metric.compute(references=references, predictions=predictions)
+ ```
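+
+ Keyword arguments beyond `references` and `predictions` are forwarded to scikit-learn (the module's `_compute` accepts `normalize` and `sample_weight`), so, for example (labels invented for illustration):
+
+ ```python
+ import evaluate
+
+ metric = evaluate.load("sklearn/accuracy_score")
+
+ references = [0, 1, 2, 3]
+ predictions = [0, 1, 1, 3]
+
+ # Default: fraction of correctly classified samples.
+ metric.compute(references=references, predictions=predictions)
+ # {'accuracy_score': 0.75}
+
+ # normalize=False: raw count of correctly classified samples.
+ metric.compute(references=references, predictions=predictions, normalize=False)
+ # {'accuracy_score': 3}
+ ```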
+
+ ## Description
+
+ Accuracy classification score.
+
+ In multilabel classification, this function computes subset accuracy:
+ the set of labels predicted for a sample must *exactly* match the
+ corresponding set of labels in y_true.
+
+ Read more in the scikit-learn [User Guide](https://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score).
+
+ Parameters
+ ----------
+ y_true : 1d array-like, or label indicator array / sparse matrix
+     Ground truth (correct) labels.
+
+ y_pred : 1d array-like, or label indicator array / sparse matrix
+     Predicted labels, as returned by a classifier.
+
+ normalize : bool, default=True
+     If ``False``, return the number of correctly classified samples.
+     Otherwise, return the fraction of correctly classified samples.
+
+ sample_weight : array-like of shape (n_samples,), default=None
+     Sample weights.
+
+ Returns
+ -------
+ score : float
+     If ``normalize == True``, return the fraction of correctly
+     classified samples (float), else returns the number of correctly
+     classified samples (int).
+
+     The best performance is 1 with ``normalize == True`` and the number
+     of samples with ``normalize == False``.
+
+ See Also
+ --------
+ balanced_accuracy_score : Compute the balanced accuracy to deal with
+     imbalanced datasets.
+ jaccard_score : Compute the Jaccard similarity coefficient score.
+ hamming_loss : Compute the average Hamming loss or Hamming distance between
+     two sets of samples.
+ zero_one_loss : Compute the Zero-one classification loss. By default, the
+     function will return the percentage of imperfectly predicted subsets.
+
+ Notes
+ -----
+ In binary classification, this function is equal to the `jaccard_score`
+ function.
+
+ Examples
+ --------
+ >>> from sklearn.metrics import accuracy_score
+ >>> y_pred = [0, 2, 1, 3]
+ >>> y_true = [0, 1, 2, 3]
+ >>> accuracy_score(y_true, y_pred)
+ 0.5
+ >>> accuracy_score(y_true, y_pred, normalize=False)
+ 2
+
+ In the multilabel case with binary label indicators:
+
+ >>> import numpy as np
+ >>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
+ 0.5
+
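+ The subset-accuracy behaviour carries over to the loaded module as well; a small sketch (label values invented for illustration):
+
+ ```python
+ import evaluate
+
+ metric = evaluate.load("sklearn/accuracy_score")
+
+ # Multilabel, binary label indicators: a sample counts as correct only when
+ # its full predicted label set matches the reference set exactly.
+ references = [[0, 1], [1, 1]]
+ predictions = [[1, 1], [1, 1]]
+
+ metric.compute(references=references, predictions=predictions)
+ # {'accuracy_score': 0.5}  (only the second sample matches exactly)
+ ```
+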
+ ## Citation
+
+ ```bibtex
+ @article{scikit-learn,
+   title={Scikit-learn: Machine Learning in {P}ython},
+   author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
+     and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
+     and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
+     Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
+   journal={Journal of Machine Learning Research},
+   volume={12},
+   pages={2825--2830},
+   year={2011}
+ }
+ ```
+
+ ## Further References
+
+ - Docs: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html
accuracy_score.py ADDED
@@ -0,0 +1,154 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Scikit-learn's `accuracy_score` exposed as a 🤗 Evaluate metric module."""
+
+ import datasets
+ from sklearn.metrics import accuracy_score
+
+ import evaluate
+
+
+ _CITATION = """\
+ @article{scikit-learn,
+   title={Scikit-learn: Machine Learning in {P}ython},
+   author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
+     and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
+     and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
+     Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
+   journal={Journal of Machine Learning Research},
+   volume={12},
+   pages={2825--2830},
+   year={2011}
+ }
+ """
+
+ _DESCRIPTION = """\
+ Accuracy classification score.
+ """
+
+
+ _KWARGS_DESCRIPTION = """
+ Note: To be consistent with the `evaluate` input conventions, the scikit-learn inputs are renamed:
+ - `y_true`: `references`
+ - `y_pred`: `predictions`
+
+ Scikit-learn docstring:
+ Accuracy classification score.
+
+ In multilabel classification, this function computes subset accuracy:
+ the set of labels predicted for a sample must *exactly* match the
+ corresponding set of labels in y_true.
+
+ Read more in the :ref:`User Guide <accuracy_score>`.
+
+ Parameters
+ ----------
+ y_true : 1d array-like, or label indicator array / sparse matrix
+     Ground truth (correct) labels.
+
+ y_pred : 1d array-like, or label indicator array / sparse matrix
+     Predicted labels, as returned by a classifier.
+
+ normalize : bool, default=True
+     If ``False``, return the number of correctly classified samples.
+     Otherwise, return the fraction of correctly classified samples.
+
+ sample_weight : array-like of shape (n_samples,), default=None
+     Sample weights.
+
+ Returns
+ -------
+ score : float
+     If ``normalize == True``, return the fraction of correctly
+     classified samples (float), else returns the number of correctly
+     classified samples (int).
+
+     The best performance is 1 with ``normalize == True`` and the number
+     of samples with ``normalize == False``.
+
+ See Also
+ --------
+ balanced_accuracy_score : Compute the balanced accuracy to deal with
+     imbalanced datasets.
+ jaccard_score : Compute the Jaccard similarity coefficient score.
+ hamming_loss : Compute the average Hamming loss or Hamming distance between
+     two sets of samples.
+ zero_one_loss : Compute the Zero-one classification loss. By default, the
+     function will return the percentage of imperfectly predicted subsets.
+
+ Notes
+ -----
+ In binary classification, this function is equal to the `jaccard_score`
+ function.
+
+ Examples
+ --------
+ >>> from sklearn.metrics import accuracy_score
+ >>> y_pred = [0, 2, 1, 3]
+ >>> y_true = [0, 1, 2, 3]
+ >>> accuracy_score(y_true, y_pred)
+ 0.5
+ >>> accuracy_score(y_true, y_pred, normalize=False)
+ 2
+
+ In the multilabel case with binary label indicators:
+
+ >>> import numpy as np
+ >>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
+ 0.5
+
+ """
+
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class AccuracyScore(evaluate.Metric):
+     """Accuracy classification score."""
+
+     def _info(self):
+         return evaluate.MetricInfo(
+             # This is the description that will appear on the modules page.
+             module_type="metric",
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             inputs_description=_KWARGS_DESCRIPTION,
+             # This defines the format of each prediction and reference
+             features=[
+                 datasets.Features(
+                     {
+                         "predictions": datasets.Sequence(datasets.Value("int32")),
+                         "references": datasets.Sequence(datasets.Value("int32")),
+                     }
+                 ),
+                 datasets.Features(
+                     {"predictions": datasets.Sequence(datasets.Value("int32")), "references": datasets.Value("int32")}
+                 ),
+                 datasets.Features(
+                     {"predictions": datasets.Value("int32"), "references": datasets.Sequence(datasets.Value("int32"))}
+                 ),
+                 datasets.Features({"predictions": datasets.Value("int32"), "references": datasets.Value("int32")}),
+             ],
+             # Homepage of the module for documentation
+             homepage="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html",
+             # Additional links to the codebase or references
+             codebase_urls=["https://github.com/scikit-learn/scikit-learn"],
+             reference_urls=["https://scikit-learn.org/stable/index.html"],
+         )
+
+     def _compute(self, predictions, references, normalize=True, sample_weight=None):
+         """Returns the scores"""
+
+         score = accuracy_score(y_true=references, y_pred=predictions, normalize=normalize, sample_weight=sample_weight)
+
+         return {"accuracy_score": score}
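+
+ # Illustration: the four `datasets.Features` variants above accept each
+ # example's label as either a single int32 or a sequence of int32, e.g.:
+ #   metric.compute(references=[0, 1], predictions=[0, 1])      # single-label
+ #   metric.compute(references=[[0, 1]], predictions=[[0, 1]])  # multilabel rows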
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
+ module = evaluate.load("sklearn/accuracy_score")
+ launch_gradio_widget(module)
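+
+ # Note: `python app.py` also launches this widget locally; that assumes
+ # `gradio` is installed (on Spaces it is provided via the `sdk: gradio`
+ # front-matter setting rather than requirements.txt).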
requirements.txt ADDED
@@ -0,0 +1,2 @@
+ evaluate
+ scikit-learn