Fill-Mask
Transformers
ONNX
English
bert
exbert
Jingya (HF Staff) committed · Commit 5ec2849 · 1 Parent(s): 12012fe

Update biased examples

Files changed (1):
  1. README.md (+50, -36)
README.md CHANGED
@@ -77,12 +77,26 @@ You can use this model directly with a pipeline for masked language modeling fro
 >>> unmasker = pipeline('fill-mask', model='bert-base-uncased', accelerator="ort")
 >>> unmasker("The capital of France is [MASK].")

- [
- {'score': 0.4167858958244324, 'token': 3000, 'token_str': 'paris', 'sequence': 'the capital of france is paris.'},
- {'score': 0.07141812890768051, 'token': 22479, 'token_str': 'lille', 'sequence': 'the capital of france is lille.'},
- {'score': 0.06339272111654282, 'token': 10241, 'token_str': 'lyon', 'sequence': 'the capital of france is lyon.'},
- {'score': 0.04444783180952072, 'token': 16766, 'token_str': 'marseille', 'sequence': 'the capital of france is marseille.'},
- {'score': 0.030297117307782173, 'token': 7562, 'token_str': 'tours', 'sequence': 'the capital of france is tours.'}
 ]
 ```

@@ -114,49 +128,49 @@ predictions:
 >>> unmasker = pipeline('fill-mask', model='bert-base-uncased', accelerator="ort")
 >>> unmasker("The man worked as a [MASK].")

- [{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
-  'score': 0.09747550636529922,
   'token': 10533,
-  'token_str': 'carpenter'},
- {'sequence': '[CLS] the man worked as a waiter. [SEP]',
-  'score': 0.0523831807076931,
   'token': 15610,
-  'token_str': 'waiter'},
- {'sequence': '[CLS] the man worked as a barber. [SEP]',
-  'score': 0.04962705448269844,
   'token': 13362,
-  'token_str': 'barber'},
- {'sequence': '[CLS] the man worked as a mechanic. [SEP]',
-  'score': 0.03788609802722931,
   'token': 15893,
-  'token_str': 'mechanic'},
- {'sequence': '[CLS] the man worked as a salesman. [SEP]',
-  'score': 0.037680890411138535,
   'token': 18968,
-  'token_str': 'salesman'}]

 >>> unmasker("The woman worked as a [MASK].")

- [{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
-  'score': 0.21981462836265564,
   'token': 6821,
-  'token_str': 'nurse'},
- {'sequence': '[CLS] the woman worked as a waitress. [SEP]',
-  'score': 0.1597415804862976,
   'token': 13877,
-  'token_str': 'waitress'},
- {'sequence': '[CLS] the woman worked as a maid. [SEP]',
-  'score': 0.1154729500412941,
   'token': 10850,
-  'token_str': 'maid'},
- {'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
-  'score': 0.037968918681144714,
   'token': 19215,
-  'token_str': 'prostitute'},
- {'sequence': '[CLS] the woman worked as a cook. [SEP]',
-  'score': 0.03042375110089779,
   'token': 5660,
-  'token_str': 'cook'}]

 ```

 This bias will also affect all fine-tuned versions of this model.
 
@@ -77,12 +77,26 @@ You can use this model directly with a pipeline for masked language modeling fro
 >>> unmasker = pipeline('fill-mask', model='bert-base-uncased', accelerator="ort")
 >>> unmasker("The capital of France is [MASK].")

+ [{'score': 0.4167858958244324,
+   'token': 3000,
+   'token_str': 'paris',
+   'sequence': 'the capital of france is paris.'},
+  {'score': 0.07141812890768051,
+   'token': 22479,
+   'token_str': 'lille',
+   'sequence': 'the capital of france is lille.'},
+  {'score': 0.06339272111654282,
+   'token': 10241,
+   'token_str': 'lyon',
+   'sequence': 'the capital of france is lyon.'},
+  {'score': 0.04444783180952072,
+   'token': 16766,
+   'token_str': 'marseille',
+   'sequence': 'the capital of france is marseille.'},
+  {'score': 0.030297117307782173,
+   'token': 7562,
+   'token_str': 'tours',
+   'sequence': 'the capital of france is tours.'}
 ]
 ```

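The pipeline output shown above is produced by a standard post-processing step: a softmax over the logits at the masked position, then top-k selection, formatted as `score`/`token`/`token_str`/`sequence` dicts. A minimal pure-Python sketch of that step, using an invented toy vocabulary and made-up logits rather than real BERT outputs:

```python
import math

def top_k_predictions(logits, vocab, template, k=3):
    """Mimic fill-mask post-processing: softmax over the masked
    position's logits, then the k highest-scoring tokens, shaped
    like the pipeline's output dicts."""
    # Numerically stable softmax over the vocabulary logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Indices of the k most probable tokens, best first.
    ranked = sorted(range(len(vocab)), key=lambda i: probs[i], reverse=True)[:k]
    return [
        {
            "score": probs[i],
            "token": i,  # toy id; real BERT uses WordPiece vocabulary ids
            "token_str": vocab[i],
            "sequence": template.replace("[MASK]", vocab[i]),
        }
        for i in ranked
    ]

# Toy example: invented logits, not real model outputs.
vocab = ["paris", "lille", "lyon", "rome"]
logits = [5.0, 2.0, 1.5, 0.5]
preds = top_k_predictions(logits, vocab, "the capital of france is [MASK].")
```

The real pipeline does the same thing with the model's ~30k-token vocabulary and the tokenizer's detokenization, but the ranking logic is just this softmax-plus-sort.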
 
@@ -114,49 +128,49 @@ predictions:
 >>> unmasker = pipeline('fill-mask', model='bert-base-uncased', accelerator="ort")
 >>> unmasker("The man worked as a [MASK].")

+ [{'score': 0.09747613966464996,
   'token': 10533,
+  'token_str': 'carpenter',
+  'sequence': 'the man worked as a carpenter.'},
+ {'score': 0.0523831732571125,
   'token': 15610,
+  'token_str': 'waiter',
+  'sequence': 'the man worked as a waiter.'},
+ {'score': 0.04962756112217903,
   'token': 13362,
+  'token_str': 'barber',
+  'sequence': 'the man worked as a barber.'},
+ {'score': 0.03788623586297035,
   'token': 15893,
+  'token_str': 'mechanic',
+  'sequence': 'the man worked as a mechanic.'},
+ {'score': 0.03768099099397659,
   'token': 18968,
+  'token_str': 'salesman',
+  'sequence': 'the man worked as a salesman.'}]

 >>> unmasker("The woman worked as a [MASK].")

+ [{'score': 0.21981455385684967,
   'token': 6821,
+  'token_str': 'nurse',
+  'sequence': 'the woman worked as a nurse.'},
+ {'score': 0.15974153578281403,
   'token': 13877,
+  'token_str': 'waitress',
+  'sequence': 'the woman worked as a waitress.'},
+ {'score': 0.11547334492206573,
   'token': 10850,
+  'token_str': 'maid',
+  'sequence': 'the woman worked as a maid.'},
+ {'score': 0.0379691943526268,
   'token': 19215,
+  'token_str': 'prostitute',
+  'sequence': 'the woman worked as a prostitute.'},
+ {'score': 0.030423566699028015,
   'token': 5660,
+  'token_str': 'cook',
+  'sequence': 'the woman worked as a cook.'}]
 ```

 This bias will also affect all fine-tuned versions of this model.
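The numbers above make the skew easy to quantify: the two top-5 lists share no occupations, and the "woman" prompt concentrates roughly twice as much probability mass in its top five predictions as the "man" prompt. A quick sketch over the scores printed above (rounded to four decimals):

```python
# Top-5 scores copied (rounded) from the two pipeline outputs above.
man = {"carpenter": 0.0975, "waiter": 0.0524, "barber": 0.0496,
       "mechanic": 0.0379, "salesman": 0.0377}
woman = {"nurse": 0.2198, "waitress": 0.1597, "maid": 0.1155,
         "prostitute": 0.0380, "cook": 0.0304}

overlap = set(man) & set(woman)   # occupations predicted for both prompts
man_mass = sum(man.values())      # probability mass in the top 5 (~0.275)
woman_mass = sum(woman.values())  # probability mass in the top 5 (~0.563)
```

Disjoint top-5 sets and the much more peaked "woman" distribution are exactly the kind of stereotyped association the card warns will carry over to fine-tuned versions of the model.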