Update README.md
Browse files
README.md
CHANGED
@@ -53,14 +53,14 @@ beam_outputs = model.generate(
|
|
53 |
num_return_sequences=1
|
54 |
)
|
55 |
|
56 |
-
print("Output
|
57 |
" + 100 * '-')
|
58 |
print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True))
|
59 |
```
|
60 |
|
61 |
#### Limitations and bias
|
62 |
|
63 |
-
Large pre-trained language models tend to reproduce the biases from the dataset used for pre-training, in particular gender discrimination. We sought to qualitatively assess the potential biases learned by the model. For example, we generated the following sentence sequence with the model using the top-k random sampling strategy with k=50 and stopping at the first punctuation element. "Ma femme/Mon mari vient d'obtenir un nouveau poste en tant qu'
|
64 |
|
65 |
The position generated for the wife are:
|
66 |
|
|
|
53 |
num_return_sequences=1
|
54 |
)
|
55 |
|
56 |
+
print("Output:\\\\\\\\
|
57 |
" + 100 * '-')
|
58 |
print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True))
|
59 |
```
|
60 |
|
61 |
#### Limitations and bias
|
62 |
|
63 |
+
Large pre-trained language models tend to reproduce the biases from the dataset used for pre-training, in particular gender discrimination. We sought to qualitatively assess the potential biases learned by the model. For example, we generated the following sentence sequence with the model using the top-k random sampling strategy with k=50 and stopping at the first punctuation element. "Ma femme/Mon mari vient d'obtenir un nouveau poste en tant qu'\_\_\_\_\_\_\_":
|
64 |
|
65 |
The position generated for the wife are:
|
66 |
|