EduardoPacheco committed
Commit 538be00 • 1 Parent(s): 0beafc3

update: name of metric to DetailedWER

Files changed (3):
  1. README.md +5 -5
  2. app.py +1 -1
  3. argwer.py → detailed_wer.py +2 -2
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-title: ArgWER
+title: DetailedWER
 tags:
 - evaluate
 - metric
@@ -12,17 +12,17 @@ app_file: app.py
 pinned: false
 ---
 
-# Metric Card for ArgWER
+# Metric Card for DetailedWER
 
 ## Metric Description
-ArgWER is an enhanced version of the Word Error Rate (WER) metric used for evaluating speech recognition systems. While it calculates the standard WER score, it also provides detailed information about different types of errors (insertions, deletions, and substitutions) when requested. This makes it particularly useful for detailed analysis of speech recognition system performance.
+DetailedWER is an enhanced version of the Word Error Rate (WER) metric used for evaluating speech recognition systems. While it calculates the standard WER score, it also provides detailed information about different types of errors (insertions, deletions, and substitutions) when requested. This makes it particularly useful for detailed analysis of speech recognition system performance.
 
 ## How to Use
 The metric can be loaded and used through the `evaluate` library:
 
 ```python
 import evaluate
-wer = evaluate.load("EduardoPacheco/argwer")
+wer = evaluate.load("argmaxinc/detailed-wer")
 predictions = ["this is the prediction", "there is an other sample"]
 references = ["this is the reference", "there is another one"]
 wer_score = wer.compute(predictions=predictions, references=references)
@@ -59,7 +59,7 @@ Basic usage:
 ```python
 predictions = ["this is the prediction", "there is an other sample"]
 references = ["this is the reference", "there is another one"]
-wer = evaluate.load("EduardoPacheco/argwer")
+wer = evaluate.load("argmaxinc/detailed-wer")
 
 # Basic WER score
 wer_score = wer.compute(predictions=predictions, references=references)
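The metric description in the README above promises a per-type error breakdown (insertions, deletions, substitutions) on request. As a minimal sketch of what those counts look like for the card's example sentences, the snippet below uses `jiwer` directly, the alignment library that the standard `evaluate` WER metric wraps; the DetailedWER-specific compute option itself is not shown in this diff.

```python
import jiwer  # backend used by the standard `evaluate` WER metric

predictions = ["this is the prediction", "there is an other sample"]
references = ["this is the reference", "there is another one"]

# process_words aligns each hypothesis against its reference and
# tallies the per-type errors that DetailedWER reports on request.
out = jiwer.process_words(references, predictions)
print(out.wer)            # 0.5  (4 errors / 8 reference words)
print(out.substitutions)  # 3
print(out.insertions)     # 1
print(out.deletions)      # 0
```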
app.py CHANGED
@@ -2,5 +2,5 @@ import evaluate
 from evaluate.utils import launch_gradio_widget
 
 
-module = evaluate.load("EduardoPacheco/argwer")
+module = evaluate.load("argmaxinc/detailed-wer")
 launch_gradio_widget(module)
argwer.py → detailed_wer.py RENAMED
@@ -66,14 +66,14 @@ Examples:
 
 >>> predictions = ["this is the prediction", "there is an other sample"]
 >>> references = ["this is the reference", "there is another one"]
->>> wer = evaluate.load("EduardoPacheco/argwer")
+>>> wer = evaluate.load("argmaxinc/detailed-wer")
 >>> wer_score = wer.compute(predictions=predictions, references=references)
 >>> print(wer_score)
 0.5
 """
 
 @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
-class ArgWER(evaluate.Metric):
+class DetailedWER(evaluate.Metric):
     """TODO: Short description of my evaluation module."""
 
     def _info(self):
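For completeness, a hedged usage sketch against the renamed load path: the plain score matches the doctest above, while the keyword for requesting the detailed breakdown is not visible in this diff, so it appears only as a hypothetical comment.

```python
import evaluate

# Load path matches the rename in this commit.
wer = evaluate.load("argmaxinc/detailed-wer")

predictions = ["this is the prediction", "there is an other sample"]
references = ["this is the reference", "there is another one"]

# Plain score, as in the doctest above.
print(wer.compute(predictions=predictions, references=references))  # 0.5

# Hypothetical: the README says per-type error counts are returned
# "when requested"; the exact keyword is not shown in this diff.
# detailed = wer.compute(predictions=predictions, references=references,
#                        detailed=True)
```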