Configuration error
add local perplexity
README.md CHANGED

```diff
@@ -1,50 +1,19 @@
-
-
-
-
-tags:
-- evaluate
-- metric
-description: "TODO: add a description here"
+title: Perplexity
+emoji: 🤗
+colorFrom: blue
+colorTo: red
 sdk: gradio
 sdk_version: 3.19.1
 app_file: app.py
 pinned: false
-
-
-
-
-
-
-
-
-
-
-
-
-*Provide simplest possible example for using the metric*
-
-### Inputs
-*List all input arguments in the format below*
-- **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*
-
-### Output Values
-
-*Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu" : 6.02}*
-
-*State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*
-
-#### Values from Popular Papers
-*Give examples, preferrably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.*
-
-### Examples
-*Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*
-
-## Limitations and Bias
-*Note any known limitations or biases that the metric has, with links and references if possible.*
-
-## Citation
-*Cite the source where this metric was introduced.*
-
-## Further References
-*Add any useful further references.*
+tags:
+- evaluate
+- metric
+description: >-
+  Perplexity (PPL) is one of the most common metrics for evaluating language
+  models. It is defined as the exponentiated average negative log-likelihood of
+  a sequence, calculated with exponent base `e`.
+
+  For more information on perplexity, see [this
+  tutorial](https://huggingface.co/docs/transformers/perplexity).
+Metric Card for Perplexity
```
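The `description` added in this commit states the definition in words. Written out as an equation, this is a sketch following the notation of the linked transformers tutorial, with `p_θ` the model's next-token distribution and `X = (x_1, …, x_t)` a tokenized sequence:

```latex
% The definition stated in the description, written out:
% the exponentiated average negative log-likelihood of a
% tokenized sequence X = (x_1, ..., x_t), exponent base e.
% Notation follows the linked transformers tutorial.
\[
\mathrm{PPL}(X) = \exp\!\left( -\frac{1}{t} \sum_{i=1}^{t} \log p_\theta\left(x_i \mid x_{<i}\right) \right)
\]
```

Lower values are better; a model that assigns probability 1 to every observed token reaches the minimum value of 1.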
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
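Since the commit tags this Space as an `evaluate` metric, here is a minimal usage sketch. It assumes the module keeps the interface of the upstream `perplexity` metric in the `evaluate` library (a `model_id` plus a list of `predictions`); the local module added by this commit may expose different arguments.

```python
# Minimal usage sketch, assuming this Space keeps the interface of the
# upstream `perplexity` metric in the `evaluate` library; the "local
# perplexity" module added in this commit may differ.
import evaluate

# Load the metric (a local Space can also be loaded by its repo id).
perplexity = evaluate.load("perplexity", module_type="metric")

# Score each input text under a causal language model such as gpt2.
results = perplexity.compute(
    model_id="gpt2",
    predictions=["lorem ipsum", "Happy Birthday!"],
)

print(results["mean_perplexity"])  # average over all inputs
print(results["perplexities"])     # one value per input text
```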