Felipehonorato committed
Commit
92feaeb
1 Parent(s): 1745649

Improved readme

Files changed (2):
  1. README.md +11 -22
  2. eer.py +0 -1
README.md CHANGED
@@ -17,34 +17,23 @@ pinned: false
 ***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.*
 
 ## Metric Description
-*Give a brief overview of this metric, including what task(s) it is usually used for, if any.*
+*Equal Error Rate (EER) is a measure of the performance of a biometric system, such as fingerprint or facial recognition. It is the operating point at which the system's False Acceptance Rate (letting the wrong person in) and False Rejection Rate (blocking the right person) are equal. The lower the EER, the better the system performs.*
 
-## How to Use
-*Give general statement of how to use the metric*
+*EER is used in security applications, such as airports, banks, and personal devices like smartphones and laptops, to evaluate how effectively a biometric system identifies users.*
 
-*Provide simplest possible example for using the metric*
+## How to Use
+```python
+eer = evaluate.load("Felipehonorato/eer")
+```
 
 ### Inputs
 *List all input arguments in the format below*
-- **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*
-
-### Output Values
-
-*Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu" : 6.02}*
-
-*State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*
-
-#### Values from Popular Papers
-*Give examples, preferrably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.*
-
-### Examples
-*Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*
-
-## Limitations and Bias
-*Note any known limitations or biases that the metric has, with links and references if possible.*
+- **predictions** *(list of float): Prediction scores output by your model.*
+- **references** *(list of int): Ground truth labels.*
+- **pos_label** *(int): The label id that corresponds to the positive class. Defaults to 1.*
 
 ## Citation
-*Cite the source where this metric was introduced.*
+*It is difficult to pinpoint the exact paper that introduced the Equal Error Rate (EER); the term and its significance for biometric systems gained currency during the late 1990s and early 2000s. One notable early work discussing EER is the book "Biometrics: Personal Identification in Networked Society" by Anil K. Jain, Ruud M. Bolle, and Sharath Pankanti (1999), which surveys metrics used to assess the performance of biometric systems, including EER.*
 
 ## Further References
-*Add any useful further references.*
+[Codebase](https://github.com/YuanGongND/python-compute-eer)
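
For illustration beyond what the diff shows, here is a minimal usage sketch of the new metric. It assumes the standard `evaluate` API (`compute` with `predictions` and `references`, matching the `_compute` signature in the eer.py diff below); the scores and labels are made-up toy values.

```python
# Minimal usage sketch; assumes the standard `evaluate` compute() API.
import evaluate

eer = evaluate.load("Felipehonorato/eer")

predictions = [0.1, 0.2, 0.8, 0.9]  # model scores (higher = more positive)
references = [0, 0, 1, 1]           # ground-truth labels

results = eer.compute(predictions=predictions, references=references, pos_label=1)
print(results)
```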
eer.py CHANGED
@@ -58,7 +58,6 @@ class EER(evaluate.Metric):
 
     def _compute(self, predictions, references, pos_label=1):
         """Returns EER the scores"""
-        eer_score_list = []
 
         fpr, tpr, threshold = sklearn.metrics.roc_curve(references, predictions, pos_label=pos_label)
         fnr = 1 - tpr
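
The hunk only shows the top of `_compute`; the rest of the function is outside the diff. As a sketch of what the surrounding code computes, here is a self-contained version that assumes the common convention of reading the EER off at the point where FPR and FNR cross (the `compute_eer` helper name and the toy data are illustrative only):

```python
# Self-contained EER sketch; the crossing-point rule below is an assumption,
# since the tail of _compute is not shown in this hunk.
import numpy as np
import sklearn.metrics

def compute_eer(predictions, references, pos_label=1):
    # ROC curve gives FPR and TPR for every score threshold.
    fpr, tpr, threshold = sklearn.metrics.roc_curve(
        references, predictions, pos_label=pos_label
    )
    fnr = 1 - tpr  # false negative rate, as in the diff
    # EER is where FPR == FNR; take the threshold minimizing their gap
    # and average the two rates there.
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2

print(compute_eer([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]))  # 0.0 for this toy data
```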