yu-val-weiss committed
Commit · 2338f58
Parent(s): c54bac9
make static
README.md CHANGED

@@ -3,9 +3,7 @@ title: BLiMP
 emoji: 🎈
 colorFrom: blue
 colorTo: red
-sdk:
-sdk_version: 5.20.1
-app_file: app.py
+sdk: static
 pinned: false
 tags:
 - evaluate
@@ -24,11 +22,13 @@ description: >-
 # Metric Card for BLiMP
 
 ## Metric Description
+
 BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets,
 each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics.
 The data is automatically generated according to expert-crafted grammars.
 
 ## Intended Uses
+
 Any language generation task.
 
 ## How to Use
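Note: the description above is easiest to grasp with a concrete pair in hand. The snippet below pulls one example from the BLiMP dataset on the Hugging Face Hub; the dataset id `nyu-mll/blimp`, the config name `adjunct_island`, and the `sentence_good`/`sentence_bad` field names are assumptions about the Hub copy, not something defined in this repository.

```python
from datasets import load_dataset

# Assumed Hub dataset id and config name; each of the 67 configs holds 1,000 pairs.
pairs = load_dataset("nyu-mll/blimp", "adjunct_island", split="train")

example = pairs[0]
# Assumed field names for the acceptable / unacceptable sentence of a minimal pair.
print(example["sentence_good"])
print(example["sentence_bad"])
```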
@@ -42,11 +42,13 @@ results = blimp.compute(model_id='pico-lm/pico-decoder')
 ```
 
 ### Inputs
+
 - **model_id** (str): model used for calculating BLiMP.
 - **batch_size** (int): the batch size to run texts through the model. Defaults to 16.
 - **device** (str): device to run on, defaults to `cuda` when available
 
 ### Output Values
+
 This metric outputs a dictionary with the BLiMP scores for each subdataset.
 If one of the input texts is longer than the max input length of the model, then it is truncated to the max length for the perplexity computation.
 
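Note: putting the hunk header's `blimp.compute(...)` call together with the documented inputs gives a runnable sketch. The load path mirrors the deleted app.py below; treating the result as a flat dict keyed by sub-dataset name is an assumption based on the Output Values wording, not something this README guarantees.

```python
import evaluate

# Load the metric from this repository's blimp.py (same call as the removed app.py);
# adjust the path if running outside a checkout of this repo.
blimp = evaluate.load("./blimp.py", module_type="metric")

# model_id, batch_size and device are the inputs documented above.
results = blimp.compute(
    model_id="pico-lm/pico-decoder",
    batch_size=16,   # documented default
    device="cuda",   # or "cpu"; defaults to cuda when available
)

# Assumption: one score per BLiMP sub-dataset, keyed by its name.
for subdataset, score in sorted(results.items()):
    print(f"{subdataset}: {score}")
```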
@@ -57,7 +59,9 @@ If one of the input texts is longer than the max input length of the model, then
 The range of this metric is [0, inf). A lower score is better.
 
 ### Examples
+
 Calculating perplexity on predictions defined here:
+
 ```python
 perplexity = evaluate.load("perplexity", module_type="metric")
 input_texts = ["lorem ipsum", "Happy Birthday!", "Bienvenue"]
@@ -71,7 +75,9 @@ print(round(results["mean_perplexity"], 2))
 print(round(results["perplexities"][0], 2))
 >>>32.25
 ```
+
 Calculating perplexity on predictions loaded in from a dataset:
+
 ```python
 perplexity = evaluate.load("perplexity", module_type="metric")
 input_texts = datasets.load_dataset("wikitext",
@@ -105,4 +111,4 @@ print(round(results["perplexities"][0], 2))
 abstract = { We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP),1 a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs—that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and human aggregate agreement with the labels is 96.4\%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands. }
 }
 
-```
+```
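Note: the abstract in the citation spells out how pairs are scored: a model is credited with a pair when it assigns the acceptable sentence a higher probability than the unacceptable one. The sketch below makes that comparison concrete for a single pair using `transformers`; the GPT-2 checkpoint and the example pair are illustrative and are not taken from this repository's implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch of the minimal-pair comparison described in the abstract;
# GPT-2 is used only because it is one of the models evaluated in the paper.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability of a sentence under the causal LM."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean negative log-likelihood over the predicted tokens,
    # so multiply by their count to recover the (negative) total.
    num_predicted = enc["input_ids"].shape[1] - 1
    return -out.loss.item() * num_predicted

# Hypothetical minimal pair in the style of BLiMP (subject-verb agreement).
acceptable = "The cats annoy Tim."
unacceptable = "The cats annoys Tim."

# The model gets the pair right if the acceptable sentence is more probable.
print(sentence_log_prob(acceptable) > sentence_log_prob(unacceptable))
```

The hosted metric presumably runs this comparison in batches over all 67 sub-datasets; the sketch only isolates the per-pair decision.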
app.py DELETED

@@ -1,5 +0,0 @@
-import evaluate
-from evaluate.utils import launch_gradio_widget
-
-module = evaluate.load("./blimp.py", module_type="metric")
-launch_gradio_widget(module)