mwhanna committed (verified)
Commit ef6a551 · Parent(s): dab0c2a

Update README.md

Files changed (1): README.md (+100, -22)

README.md CHANGED
@@ -1,22 +1,100 @@
- ---
- license: mit
- dataset_info:
-   features:
-   - name: clean
-     dtype: string
-   - name: corrupted
-     dtype: string
-   - name: correct_idx
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 1233552
-     num_examples: 10000
-   download_size: 204075
-   dataset_size: 1233552
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- ---
+ ---
+ license: mit
+ dataset_info:
+   features:
+   - name: clean
+     dtype: string
+   - name: corrupted
+     dtype: string
+   - name: correct_idx
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1233552
+     num_examples: 10000
+   download_size: 204075
+   dataset_size: 1233552
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+ language:
+ - en
+ ---
+
+ # Dataset Card for Greater-Than
+
+ This is a dataset of examples from the greater-than circuit task.
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ - **Curated by:** Michael Hanna
+ - **Language(s) (NLP):** English
+ - **License:** MIT
+
+ ### Dataset Sources
+
+ - **Repository:** [https://github.com/hannamw/gpt2-greater-than](https://github.com/hannamw/gpt2-greater-than)
+ - **Paper:** [How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model](https://openreview.net/forum?id=p4PckNQR8k)
+
+ ## Uses
+
+ This dataset is intended to be a model-agnostic version of the greater-than task.
+ The original task consisted of examples like `The war lasted from the year 1742 to the year 17`, based on the fact that GPT-2 small tokenizes 4-digit years into two two-digit tokens.
+ One would then compute model performance as the probability assigned to years greater than 42, minus that assigned to years less than or equal to 42.
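+
+ For the original GPT-2 setup, this metric can be computed along the following lines. This is a minimal sketch, relying on the fact that every two-digit string `00`–`99` is a single token in GPT-2's vocabulary:
+
+ ```python
+ import torch
+ from transformers import GPT2LMHeadModel, GPT2Tokenizer
+
+ tok = GPT2Tokenizer.from_pretrained("gpt2")
+ model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
+
+ prompt = "The war lasted from the year 1742 to the year 17"
+ true_decade = 42
+
+ with torch.no_grad():
+     logits = model(**tok(prompt, return_tensors="pt")).logits[0, -1]
+ probs = torch.softmax(logits, dim=-1)
+
+ # Probability of each two-digit continuation "00" ... "99".
+ decade_ids = [tok.convert_tokens_to_ids(f"{d:02d}") for d in range(100)]
+ decade_probs = probs[decade_ids]
+
+ # p(year > 42) - p(year <= 42), restricted to two-digit continuations.
+ prob_diff = decade_probs[true_decade + 1:].sum() - decade_probs[:true_decade + 1].sum()
+ print(f"prob diff: {prob_diff.item():.3f}")
+ ```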
+
+ Newer models tokenize years differently: Llama 3 tokenizes 1742 as `[174][2]`, and Gemma 2 tokenizes it as `[1][7][4][2]`.
+ You can still compute the probability assigned to good and bad decades; for example (a generic implementation is sketched after this list):
+ - For Llama 3, if y1 is the token at the `[174]` position and y2 is the token at the `[2]` position, you want to compute p(y1 > 174) + p(y1 = 174) * p(y2 > 2) - (p(y1 < 174) + p(y1 = 174) * p(y2 <= 2))
+ - For Gemma 2, if y1 is the token at the `[4]` position and y2 is the token at the `[2]` position, you want to compute p(y1 > 4) + p(y1 = 4) * p(y2 > 2) - (p(y1 < 4) + p(y1 = 4) * p(y2 <= 2))
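+
+ These expressions can be implemented generically. In the sketch below (names are illustrative), `p1` is the model's distribution over integer values of the first chunk, and `p2_given_t1` is the distribution over second-chunk values after appending the true first chunk to the prompt:
+
+ ```python
+ import numpy as np
+
+ def two_chunk_prob_diff(p1: np.ndarray, p2_given_t1: np.ndarray, t1: int, t2: int) -> float:
+     """p(y1 > t1) + p(y1 = t1) * p(y2 > t2) - (p(y1 < t1) + p(y1 = t1) * p(y2 <= t2)).
+
+     p1[i] is the probability of value i for the first chunk; p2_given_t1[j]
+     is the probability of value j for the second chunk, given y1 == t1.
+     """
+     good = p1[t1 + 1:].sum() + p1[t1] * p2_given_t1[t2 + 1:].sum()
+     bad = p1[:t1].sum() + p1[t1] * p2_given_t1[:t2 + 1].sum()
+     return float(good - bad)
+
+ # E.g. for Llama 3 and the year 1742: t1 = 174 (chunk "174"), t2 = 2 (chunk "2").
+ ```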
+
+ For these purposes, it's easier to have the full string, e.g. `The war lasted from the year 1742 to the year 1743`, rather than the shortened version `The war lasted from the year 1742 to the year 17`.
+
+ ## Dataset Structure
+
+ `clean`: The original greater-than example sentences.
+
+ `corrupted`: The corrupted version of the corresponding sentence in `clean`, with the start-year decade set to `01`.
+
+ `correct_idx`: The start year's decade (e.g. `42`) from the corresponding sentence in `clean`.
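+
+ Loading the data follows the standard `datasets` pattern; a sketch (the Hub id below is an assumption, so substitute the id of the repository this card belongs to):
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical id; replace with this dataset's actual Hub id.
+ ds = load_dataset("mwhanna/greater-than", split="train")
+ print(ds[0])  # a dict with the `clean`, `corrupted`, and `correct_idx` fields
+ ```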
+
+ ## Dataset Creation
+
+ ### Source Data
+
+ As described in the paper, this dataset was created automatically using the template `The [event] lasted from the year [XX][YY] to the year [XX]`.
+ Michael Hanna and Ollie Liu developed the list of nouns used as `[event]`.
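+
+ Roughly, generation might look like the following sketch. The noun list here is illustrative (the real list and year ranges come from the paper's code), and the `correct_idx` semantics are an assumption:
+
+ ```python
+ import random
+
+ # Illustrative noun list; the real list was curated by the dataset authors.
+ EVENTS = ["war", "siege", "expedition", "famine", "dynasty"]
+
+ def make_example(rng: random.Random) -> dict:
+     century = rng.randint(10, 18)   # the [XX] part of the template
+     decade = rng.randint(2, 98)     # the [YY] part; endpoints avoided
+     event = rng.choice(EVENTS)
+     start = century * 100 + decade
+     end = rng.randint(start + 1, century * 100 + 99)  # end year > start year
+     return {
+         "clean": f"The {event} lasted from the year {start} to the year {end}",
+         "corrupted": f"The {event} lasted from the year {century * 100 + 1} to the year {end}",
+         "correct_idx": f"{decade:02d}",  # assumed: the start-year decade, as a string
+     }
+
+ print(make_example(random.Random(0)))
+ ```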
+
+ ## Citation
+
+ [How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model](https://openreview.net/forum?id=p4PckNQR8k)
+
+ **BibTeX:**
+ ```bibtex
+ @inproceedings{hanna2023how,
+   title={How does {GPT}-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model},
+   author={Michael Hanna and Ollie Liu and Alexandre Variengien},
+   booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
+   year={2023},
+   url={https://openreview.net/forum?id=p4PckNQR8k}
+ }
+ ```
+
+ ## Dataset Card Authors
+
+ Michael Hanna
+
+ ## Dataset Card Contact
+