repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 2,791 | closed | Create BERT-of-Theseus model card | 02-10-2020 01:38:30 | 02-10-2020 01:38:30 | ||
transformers | 2,790 | closed | Is there any way that I can directly feed the hidden output of the embedding layer into each of the transformer's layers? | Hello,
For an original sequence ```X``` that has a length of ```n```, I am interested in feeding the embedding of the original sequence ```X``` (```E```) as an input to the self-attention block of each layer of ```GPT2LMHeadModel``` (here, layer = self-attention block + feedforward block), and examining the layer output generated by ```E```.
Is there any way that I can carry out this task with the HuggingFace ```GPT2LMHeadModel``` in transformers?
Thank you, | 02-09-2020 19:06:39 | 02-09-2020 19:06:39 | How about manually handling the embeddings and attention layers?
```py
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
sequence = tokenizer.encode("Try this out", return_tensors="pt")
embeds = model.get_input_embeddings()(sequence)  # embed the encoded sequence
first_layer_output, first_layer_attentions = model.transformer.h[0](embeds)
```<|||||>Hello,
Thanks again for your reply!
1. Just to make sure that I am understanding this correctly, is the line
```python
model.transformer.h[0]
```
used to access the first layer of the transformer, so that I can access the second layer, third layer, etc., with ```model.transformer.h[1], model.transformer.h[2]``` and so on?
2. To access the output head of the transformer, do I simply do:
```python
model.transformer.h[-1]
```
?
Thank you!<|||||>Yes, in GPT-2 the layers can be accessed via the `h` attribute. You're correct in your assumption regarding accessing the second and third layers.
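For example, something along these lines (an illustrative sketch reusing `model` and `embeds` from the snippet above; calling the blocks directly like this skips the position embeddings, attention-mask handling and the final layer norm that `GPT2Model` normally applies in its own forward pass):
```py
# Run the embeddings through every block in order; each block returns a tuple
# whose first element is the new hidden states.
hidden_states = embeds
for block in model.transformer.h:
    hidden_states = block(hidden_states)[0]

# A single block can also be called directly, e.g. the last one:
last_block_output = model.transformer.h[-1](embeds)[0]
```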
This gives you the output of the MLP, which is of dimension `(batch_size, sequence_length, hidden_dim)`.<|||||>Hello,
Thank you for your reply.
I am having some trouble understanding the MLP function, which is found [here](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L200).
Q1. For MLP, why are we setting the n_state to be equal to 3072, which is 4 * n_embd?
Q2. Below is the forward function for the MLP class:
```python
def forward(self, x):
h = self.act(self.c_fc(x))
h2 = self.c_proj(h)
return self.dropout(h2)
```
in the forward function above, what exactly do the lines ``` h = self.act(self.c_fc(x))``` and ``` h2 = self.c_proj(h)``` do?
Thank you,<|||||>> Yes, in GPT-2 the layers can be accessed via the `h` attribute. You're correct in your assumption regarding accessing the second and third layers.
>
> This gives you the output of the MLP, which is of dimension `(batch_size, sequence_length, hidden_dim)`.
How would you feed input directly into a particular Keras Bert layer? Is there a way to automatically feed inputs at one layer, and have the rest be processed starting at that layer?
Purpose: I would like to feed the hidden states of one transformer, into another, so I would need to bypass the inputID->embedding layer.
I did some tinkering and tried this
```
testt = tf.random.uniform([3, 5,768], minval=-1, maxval=1, dtype=tf.dtypes.float32, seed=None, name=None)
model.layers[0].encoder.layer[3]((testt, None, None))
```
Seems promising, since output shapes are (3, 5, 768).
Edit:
Maybe I can create a new model from these individual layers.
```
testt = tf.random.uniform([3, 5, 768], minval=-1, maxval=1, dtype=tf.dtypes.float32)
def get_new_model():
inputHiddenVals = tf.keras.Input(shape=[None, 768], dtype=tf.float32, name='input_Q',
batch_size=None)
hidden1 = model.layers[0].encoder.layer[3]((inputHiddenVals, None, None))
hidden2 = model.layers[0].encoder.layer[4]((hidden1[0], None, None))
hidden3 = model.layers[0].encoder.layer[5]((hidden2[0], None, None))
modelNew = tf.keras.Model(inputs=inputHiddenVals, outputs=hidden3)
return modelNew
nModel = get_new_model()
nModel(testt)
```
Seems to work
<|||||>Update, doesn't seem to work. The copied layers have parameters missing.
```
from transformers import TFBertModel, AutoModel, TFRobertaModel
import tensorflow as tf
import tensorflow_addons as tfa
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import os
from copy import deepcopy
logger = tf.get_logger()
logger.info(tf.__version__)
def get_mini_models():
tempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)
layer9 = deepcopy(tempModel.layers[0].encoder.layer[8])
layer10 = deepcopy(tempModel.layers[0].encoder.layer[9])
inputHiddenVals = tf.keras.Input(shape=[None, 768], dtype=tf.float32, name='input_Q',
batch_size=None)
hidden1 = layer9((inputHiddenVals, None, None), training=True)
hidden2 = layer10((hidden1[0], None, None), training=True)
modelNew = tf.keras.Model(inputs=inputHiddenVals, outputs=hidden2)
del tempModel
return modelNew
@tf.function
def loss_fn(_, probs):
bs = tf.shape(probs)[0]
labels = tf.eye(bs, bs)
return tf.losses.categorical_crossentropy(labels,
probs,
from_logits=True)
model = get_mini_models()
# model.layers[2].trainable = False
model.compile(loss=loss_fn,
optimizer=tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-5,
epsilon=1e-06))
tempModel = TFRobertaModel.from_pretrained('bert-base-uncased', from_pt=True)
layer9 = deepcopy(tempModel.layers[0].encoder.layer[8])
for i, var in enumerate(model.weights):
print(model.weights[i].name)
```
> tf_roberta_model/roberta/encoder/layer_._8/attention/self/query/kernel:0
> tf_roberta_model/roberta/encoder/layer_._8/attention/self/query/bias:0
> tf_roberta_model/roberta/encoder/layer_._8/attention/self/key/kernel:0
> tf_roberta_model/roberta/encoder/layer_._8/attention/self/key/bias:0
> tf_roberta_model/roberta/encoder/layer_._8/attention/self/value/kernel:0
> tf_roberta_model/roberta/encoder/layer_._8/attention/self/value/bias:0
It's missing a layer, and not even all the weights for the first layer were transferred
```
for i, var in enumerate(layer9.weights):
print(layer9.weights[i].name)
```
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/query/kernel:0
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/query/bias:0
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/key/kernel:0
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/key/bias:0
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/value/kernel:0
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/self/value/bias:0
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/output/dense/kernel:0
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/output/dense/bias:0
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/output/LayerNorm/gamma:0
> tf_roberta_model_1/roberta/encoder/layer_._8/attention/output/LayerNorm/beta:0
> tf_roberta_model_1/roberta/encoder/layer_._8/intermediate/dense/kernel:0
> tf_roberta_model_1/roberta/encoder/layer_._8/intermediate/dense/bias:0
> tf_roberta_model_1/roberta/encoder/layer_._8/output/dense/kernel:0
> tf_roberta_model_1/roberta/encoder/layer_._8/output/dense/bias:0
> tf_roberta_model_1/roberta/encoder/layer_._8/output/LayerNorm/gamma:0
> tf_roberta_model_1/roberta/encoder/layer_._8/output/LayerNorm/beta:0
Here's a colab notebook if you want to play around with it
https://colab.research.google.com/drive/1XoESTWyo4qr4uApIai7Ac4tUDAeLDEI-?usp=sharing |
transformers | 2,789 | closed | Is there any way that I can extract the hidden output from the self-attention layer? | Hello,
From my understanding, for the ```GPT2LMHeadModel```, the output ```past``` allows me to retrieve the key and value vectors that are used in the self-attention block (which is prior to the feedforward block).
Is there any way I can extract the output of the self-attention block **at a particular head of a single layer** of ```GPT2LMHeadModel``` (if I am understanding this correctly, the output ```hidden_states``` only returns the output after the input had gone into the feedforward block... but what I want is to extract the output from the self-attention block, which happens before the feedforward block).
Thank you, | 02-09-2020 18:43:09 | 02-09-2020 18:43:09 | That would be the attentions, which you can output by specifying `output_attentions=True` in your configuration object.<|||||>Hello,
Thank you very much for your reply.
What I want to obtain though, is not the individual attention weights themselves but rather the final product of the self-attention layer at each head (the transformed embeddings that the self-attention layer produces, before they go into the feedforward layer for final processing).
Is there any way that I can get this final product of the self-attention layer at each head?
Thank you,<|||||>You mean you want to obtain the result after the softmax multiplied by the value vector?<|||||>Hello,
I would like to obtain the result produced after the sum of (value) * (softmax) is multiplied by the matrix H (i.e. the final output embedding of the self-attention layer from a single head).
Thank you,<|||||>Then that is literally the attentions I mentioned earlier, see in the [source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L163-L164).<|||||>Hello,
so the output of a single attention-head, which is the thing I want to extract, can be formulated as the following:
O = AE(W^V)H
where
O = output of a single attention-head
A = matrix that stores attention weights for all tokens in sequence
E = matrix that stores the embeddings of all tokens in a sequence
W^V = matrix that we multiply with E to generate the value vector of all tokens of the sequence
H = projection matrix that is used to generate the final product of a single attention-head
If I am not mistaken, ```attention``` gives out the matrix A...
but what I am looking to get is the output O.....
Is there any way that I can get the output O? Or does ```attention``` give out the output O, like you described before?
Thank you and sorry for the long question, your help is much appreciated.
<|||||>To be even more clear, I just want the output of each head within the layers of transformer. Is there any way that I can get the output of each individual head?
Thank you,<|||||>Hello, if I can't get the output of individual attention-head explicitly, is there any way that I can retrieve the matrix H, where H is from the formula below:
O = AE(W^V)H
O = output of a single attention-head
A = matrix that stores attention weights for all tokens in sequence
E = matrix that stores the embeddings of all tokens in a sequence
W^V = matrix that we multiply with E to generate the value vector of all tokens of the sequence
H = projection matrix that is used to generate the final product of a single attention-head
Thank you,<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@h56cho
Hello
I also want to know if I can get such hidden outputs
Do you have any progress with it?
Thank you in advance |
transformers | 2,788 | closed | SQuAD preprocessing not working for roberta (wrong p_mask) | **Description**
The pipeline for QA crashes for roberta models.
It's loading the model and tokenizer correctly, but the SQuAD preprocessing produces a wrong `p_mask` leading to no possible prediction and the error message below.
The observed `p_mask` for a roberta model is
```[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...] ```
while it should only mask the question tokens like this
``` [0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, ...]```
I think the deeper root cause here is that roberta's `token_type_ids` returned from `encode_plus` are now all zeros (introduced in https://github.com/huggingface/transformers/pull/2432) and the creation of `p_mask` in `squad_convert_example_to_features` relies on this information:
https://github.com/huggingface/transformers/blob/520e7f211926e07b2059bc8e21b668db4372e4db/src/transformers/data/processors/squad.py#L189-L202
Haven't checked yet, but this might also affect training/eval if `p_mask` is used there.
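A quick way to see that premise (using the slow RoBERTa tokenizer from the 2.x release this report is about):
```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
enc = tokenizer.encode_plus("What is roberta?", "Roberta is a language model.")
print(enc["token_type_ids"])  # all zeros, so the question/context boundary is lost when building p_mask
```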
**How to reproduce?**
```
model_name = "deepset/roberta-base-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
res = nlp({
'question': 'What is roberta?',
'context': 'Roberta is a language model that was trained for a longer time, on more data, without NSP'
})
```
results in
```
File "/home/mp/deepset/dev/transformers/src/transformers/pipelines.py", line 847, in __call__
for s, e, score in zip(starts, ends, scores)
File "/home/mp/deepset/dev/transformers/src/transformers/pipelines.py", line 847, in <listcomp>
for s, e, score in zip(starts, ends, scores)
KeyError: 0
```
**Environment**
- Ubuntu 18.04
- Python 3.7.6
- PyTorch 1.3.1 | 02-09-2020 08:32:36 | 02-09-2020 08:32:36 | I think I have a related problem regarding training/evaluation using run_squad.py.
I wanted to train a roberta model on my own Q&A dataset mixed with the SQuAD dataset by running:
`python ./examples/run_squad.py --output_dir=/home/jupyter/sec_roberta/roberta-base-mixed-quad --model_type=roberta --model_name_or_path=roberta-large --do_train --train_file=../sec_roberta/financial_and_squad2_train.json --do_eval --predict_file=../sec_roberta/financial_and_squad2_dev.json --learning_rate=1.5e-5 --num_train_epochs=2 --max_seq_length 384 --doc_stride 128 --overwrite_output_dir --per_gpu_train_batch_size=6 --per_gpu_eval_batch_size=6 --warmup_steps 500 --weight_decay 0.01 --version_2_with_negative`
I ran into this error:
```
02/12/2020 08:22:38 - INFO - __main__ - Creating features from dataset file at .
--
0%\| \| 0/542 [00:00<?, ?it/s]
Traceback (most recent call last): File "./examples/run_squad.py", line 853, in <module> main() File "./examples/run_squad.py", line 791, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
File "./examples/run_squad.py", line 474, in load_and_cache_examples
examples = processor.get_train_examples(args.data_dir, filename=args.train_file)
File "/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 501, in get_train_examples
return self._create_examples(input_data, "train")
File "/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 559, in _create_examples
answers=answers,
File "/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 633, in __init__
self.start_position = char_to_word_offset[start_position_character]
IndexError: list index out of range
```
I tested my dataset on roberta-base and it works, so I don't necessarily think my dataset is the issue.
Also, I ran the same code using the SQuAD 2.0 dataset on roberta large and also on a lm-finetuned version of roberta large and both work, so this is all very mysterious to me.
I thought it could be related.<|||||>Update: a fresh install of transformers fixed it for me...
I run into a similar error when trying to use the run_squad.py example to train roberta-large on SQuAD 2.0.
When I run:
`export DATA_DIR=./data
python ./transformers/examples/run_squad.py \
--model_type roberta \
--model_name_or_path roberta-large \
--do_train \
--do_eval \
--version_2_with_negative \
--train_file $DATA_DIR/squad2/train-v2.0.json \
--predict_file $DATA_DIR/squad2/dev-v2.0.json \
--per_gpu_eval_batch_size=6 \
--per_gpu_train_batch_size=6 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--overwrite_output_dir \
--overwrite_cache \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 100000 \
--output_dir ./roberta_squad/`
I get the following error:
> Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/anaconda3/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/home/joshua_wagner/.local/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 198, in
squad_convert_example_to_features
p_mask = np.array(span["token_type_ids"])
KeyError: 'token_type_ids'
Environment:
- Debian GNU/Linux 9.11
- Python 3.7
- PyTorch 1.4.0<|||||>same error as @joshuawagner93 <|||||>@joshuawagner93 @HenrykBorzymowski, this issue should have been patched with #3439. Could you install the latest release and let me know if it fixes your issue?<|||||>@LysandreJik works perfectly fine! Thx <|||||>@LysandreJik reinstall fixed the issue, thank you<|||||>@LysandreJik Unfortunately, we still face the same issue when we try to use roberta in the pipeline for inference. #3439 didn't seem to help for this. <|||||>Hi @tholor, indeed, it seems I thought this issue was resolved when it really wasn't. I just opened #4049 which should fix the issue.<|||||>Awesome, thanks for working on this @LysandreJik!<|||||>@tholor, the PR should be merged soon, thank you for your patience!<|||||>Great, thank you! Looking forward to it :) |
transformers | 2,787 | closed | Distillation code loss functions | # ❓ Questions & Help
Why compute cross entropy loss from the hard labels in distillation code?
```python
if self.alpha_clm > 0.0:
    shift_logits = s_logits[..., :-1, :].contiguous()
    shift_labels = lm_labels[..., 1:].contiguous()
    loss_clm = self.lm_loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
    loss += self.alpha_clm * loss_clm
```
The model outputs the loss when passed the labels.
**A link to original question on Stack Overflow**: | 02-09-2020 05:21:33 | 02-09-2020 05:21:33 | Hello @snaik2016,
The part of code you're referring to is not a distillation loss. It's the "classic" causal language modeling loss.
Victor<|||||>Not referring to the "distillation loss", just the part of the code where the loss is computed in the distillation code. The exact same quantity is returned by the model output when labels are passed.<|||||>Oh yes, you are right, this could be factorized in.
Just note that you have to be careful with the `ignore_index` and make sure it's coherent with your processing (if I remember correctly, at one point, not all the models were using the same `ignore_index` in the loss computation).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,786 | closed | SequenceSummary: config.summary_activation = 'relu' would be ignored | This isn't a bug, but merely an unintuitive argument name.
`summary_activation` sounds like it can be general, e.g. "relu" or "gelu" or something, but if it's not "tanh" it's ignored.
Since I assume it's annoying to go through all the configs and rename a field to use_tanh=True, I propose that we raise if summary_activation is a string that's not tanh, instead of silently just using no activation.
Another approach could integrate an `ACT2FN` dictionary (see https://github.com/huggingface/transformers/issues/1347)
to actually support the other activation functions.
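As a rough sketch of what that second option could look like (hypothetical helper names, not the actual `SequenceSummary` code):
```python
import torch.nn as nn

# Hypothetical mapping, in the spirit of the per-model ACT2FN dicts.
ACT2FN = {"relu": nn.ReLU(), "gelu": nn.GELU(), "tanh": nn.Tanh()}

def resolve_summary_activation(summary_activation):
    """Turn config.summary_activation into a module, raising on unknown strings."""
    if not summary_activation:
        return nn.Identity()
    if summary_activation in ACT2FN:
        return ACT2FN[summary_activation]
    raise ValueError(f"Unsupported summary_activation: {summary_activation!r}")
```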
Happy to do either approach if others think it would be useful.
| 02-08-2020 22:11:47 | 02-08-2020 22:11:47 | I think both approaches are reasonable. @LysandreJik @thomwolf?<|||||>I agree with both approaches as well. The second one would probably be the most useful.<|||||>Yes, like Lysandre |
transformers | 2,785 | closed | Create README.md | Albert xxlarge version 1 language model fine-tuned on SQuAD2.0 with the following results:
```
{'exact': 85.65653162637918,
'f1': 89.260458954177,
'total': 11873,
'HasAns_exact': 82.6417004048583,
'HasAns_f1': 89.85989020967376,
'HasAns_total': 5928,
'NoAns_exact': 88.66274179983179,
'NoAns_f1': 88.66274179983179,
'NoAns_total': 5945,
'best_exact': 85.65653162637918,
'best_exact_thresh': 0.0,
'best_f1': 89.2604589541768,
'best_f1_thresh': 0.0}
```
with script:
```
python -m torch.distributed.launch --nproc_per_node=2 ${RUN_SQUAD_DIR}/run_squad.py \
--model_type albert \
--model_name_or_path albert-xxlarge-v1 \
--do_train \
--train_file ${SQUAD_DIR}/train-v2.0.json \
--predict_file ${SQUAD_DIR}/dev-v2.0.json \
--version_2_with_negative \
--num_train_epochs 3 \
--max_steps 8144 \
--warmup_steps 814 \
--do_lower_case \
--learning_rate 3e-5 \
--max_seq_length 512 \
--doc_stride 128 \
--save_steps 2000 \
--per_gpu_train_batch_size 1 \
--gradient_accumulation_steps 24 \
--output_dir ${MODEL_PATH}
CUDA_VISIBLE_DEVICES=0 python ${RUN_SQUAD_DIR}/run_squad.py \
--model_type albert \
--model_name_or_path ${MODEL_PATH} \
--do_eval \
--train_file ${SQUAD_DIR}/train-v2.0.json \
--predict_file ${SQUAD_DIR}/dev-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--max_seq_length 512 \
--per_gpu_eval_batch_size 48 \
--output_dir ${MODEL_PATH}
```
using the following system & software:
```
OS/Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
GPU/CPU: 2 x NVIDIA 1080Ti / Intel i7-8700
Transformers: 2.3.0
PyTorch: 1.4.0
TensorFlow: 2.1.0
Python: 3.7.6
```
Inferencing/prediction works with the current Transformers v2.4.1
Access this `albert_xxlargev1_sqd2_512` fine-tuned model with "tried & true" code:
```
config_class, model_class, tokenizer_class = \
AlbertConfig, AlbertForQuestionAnswering, AlbertTokenizer
model_name_or_path = "ahotrod/albert_xxlargev1_squad2_512"
config = config_class.from_pretrained(model_name_or_path)
tokenizer = tokenizer_class.from_pretrained(model_name_or_path, do_lower_case=True)
model = model_class.from_pretrained(model_name_or_path, config=config)
```
or the AutoModels (AutoConfig, AutoTokenizer & AutoModel) should also work, however I
have yet to use them in my app & confirm:
```
from transformers import AutoConfig, AutoTokenizer, AutoModel
model_name_or_path = "ahotrod/albert_xxlargev1_squad2_512"
config = AutoConfig.from_pretrained(model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, do_lower_case=True)
model = AutoModel.from_pretrained(model_name_or_path, config=config)
``` | 02-08-2020 21:32:17 | 02-08-2020 21:32:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=h1) Report
> Merging [#2785](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/520e7f211926e07b2059bc8e21b668db4372e4db?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2785 +/- ##
=======================================
Coverage 75.13% 75.13%
=======================================
Files 93 93
Lines 15249 15249
=======================================
Hits 11457 11457
Misses 3792 3792
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=footer). Last update [520e7f2...48a103f](https://codecov.io/gh/huggingface/transformers/pull/2785?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Just used code fences for consistency with other model cards.
Thanks for sharing! |
transformers | 2,784 | closed | ERROR:CUDA out of memory when using GPT2 tour | # ❓ Questions & Help
I followed the tour in the documentation. Everything was OK. When I started the training, an error occurred.
RuntimeError:
CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 6.00 GiB total capacity; 4.44 GiB already allocated; 3.06 MiB free; 4.57 GiB reserved in total by PyTorch)
I'm using a GTX 2060 (6 GB); I'd like to know whether this GPU is adequate for this workload.
Thanks | 02-08-2020 10:13:12 | 02-08-2020 10:13:12 | I've already tried to change batch_size to 1. This doesn't seem to be effective |
transformers | 2,783 | closed | Features proposals to simplify training Tensorflow model | # 🚀 Feature request
Hello,
On my side I thought to implement some features to simplify the way we can train Tensorflow models (I think it can certainly be adapted to Pytorch as well), and I wanted to know if it might be useful for you. Here a non exhaustive list of features I have in mind:
1. Augment the training pipeline with some useful functions such as:
- An [LR finder](https://docs.fast.ai/callbacks.lr_finder.html) that will try to find the best LR for a specific dataset
- [Approach to help to set better hyperparameters](https://arxiv.org/abs/1803.09820)
- [Cyclical LR during training](https://arxiv.org/abs/1506.01186)
- Augment the config.json file of each model with specific training parameters (epochs, LR, batch size, GPUs, etc...) in order to better reproduce a specific training.
2. Modify the `TFPreTrainedModel` class a bit in order to better handle:
- multiple GPU training
- Custom training loop
- Custom optimizer creation
- Gradient Accumulation
- Add a checkpoint manager
- Handle Tensorboard
3. Modify a few model classes to add custom loss computation, such as for NER, as I have done in the TF example.
I don't know if it sounds interesting for you @thomwolf, @julien-c and @LysandreJik ?
| 02-08-2020 10:05:55 | 02-08-2020 10:05:55 | I think that we are trying to keep model logic and training logic very separate. So if you want to try to make the tf **examples** (where training happens) simpler, I'd recommend sending a PR that does that for `examples/run_tf_glue.py` or `example_run_tf_ner.py`, or even adding a new Tensorflow example for a task you care about.<|||||>Cool, thanks a lot for your reply!!
All these features are not only focus on Tensorflow, I have written Tensorflow because it is the framework I know, but I'm sure we can totally apply them on the Pytorch part as well.
I think I haven't been clear enough and this is my fault, sorry. What I meant is two kind of features:
The point 1. is specific to the [training part of the core pipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/commands/train.py). I also do agree that modifying the config file can be confusing, maybe to create a separate config file specifically for training, I don't know, I'm still open to suggestions. My thought is to have some metadata on the training itself in order to be able to easily reproduce it without giving yourself the values to the network as parameters but just by sharing a file that we can upload to the models hubstore.
The point 2 is more like some utilities to simplify how to handle models.
The point 3 is to have something similar to the Pytorch models where the loss is directly computed in the forward method, I was thinking it could be a good idea to have the same facility for Tensorflow.
I have already started to work on 2 and 3 to see how it can be, and the pros/cons on the existing examples. I will certainly do a PR later this week or next week to see if you have any review on it.
(What I gonna say below is just my own opinion)
I'm suggesting all this because when I talk with most of my colleagues or friends (that are not very familiar with Deep Learning) they don't have the knowledge to create the NER example either in TF or in Pytorch, but would like to train a NER model for their work/project, same thing for simple text classification and they don't want to be bored by writing all this very technical code or to know the meaning of each parameter of the scripts. And in companies I think that more and more people want to train their model without having any ML knowledge.<|||||>Thanks, and I agree very much with your vision! Looking forward to your PR!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,782 | closed | RoBERTaMultiChoice does not work with `roberta-large` | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
**roberta-large**
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [x] the official example scripts:
https://github.com/huggingface/transformers/tree/6c1b23554f8bb5b5e1f6c80969acab764c755678/examples#multiple-choice
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: **SWAG**
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
just follow the example code for running the SWAG dataset, but with `roberta-large` instead of `roberta-base` (which works well)
```
export SWAG_DIR=~/swagaf-master/data/
python ./examples/run_multiple_choice.py \
--model_type roberta \
--task_name swag \
--model_name_or_path roberta-large \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $SWAG_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir tmp/swag_base \
--per_gpu_eval_batch_size=16 \
--per_gpu_train_batch_size=16 \
--gradient_accumulation_steps 2 \
--overwrite_output
```
And it will say:
02/08/2020 00:46:23 - INFO - transformers.modeling_utils - **Weights of RobertaForMultipleChoice not initialized from pretrained model:** ['classifier.weight', 'classifier.bias']
02/08/2020 00:46:23 - INFO - transformers.modeling_utils - **Weights from pretrained model not used in RobertaForMultipleChoice:** ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']
Consequently, the script learns a model from scratch instead of fine-tuning the pre-trained roberta-large.
## Expected behavior
It should load the pre-trained weights of roberta-large model.
## Environment info
- `transformers` version: 2.4.1
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?): n/a
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 02-08-2020 08:50:47 | 02-08-2020 08:50:47 | @yuchenlin May I ask if you figured out the bug? |
transformers | 2,781 | closed | Flaky TF pipelines test on CircleCI | Environment: CircleCI
Test: `tests/test_pipelines.py::MultiColumnInputTestCase::test_tf_question_answering`
Traceback: https://circleci.com/gh/huggingface/transformers/15691?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
Diff where I changed nothing relevant and the test started passing: https://github.com/huggingface/transformers/pull/2745/commits/a4edf2e878d23346f45715ac213f1f870ae8ec0c
Happy to look deeper if helpful! | 02-08-2020 00:28:04 | 02-08-2020 00:28:04 | Indeed, this is a recurring error. I have not yet found the time to dive into it yet, we should do so shortly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,780 | closed | Pipelines- if initial model download is interrupted, everything is ruined | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): pipeline('ner') and pipeline('feature-extraction')
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
- [x] my own modified scripts: (give details below)
The tasks I am working on is:
- [x] (mostly NA) my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. fresh install transformers from source
2. run:
```python
from transformers import pipeline
model =pipeline('feature-extraction')
```
3. interrupt download. rerun #2
Error on reload:
```python
Downloading: 100%
230/230 [00:01<00:00, 136B/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~/miniconda3/envs/hugging/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
466 try:
--> 467 state_dict = torch.load(resolved_archive_file, map_location="cpu")
468 except Exception:
~/miniconda3/envs/hugging/lib/python3.7/site-packages/torch/serialization.py in load(f, map_location, pickle_module)
357 try:
--> 358 return _load(f, map_location, pickle_module)
359 finally:
~/miniconda3/envs/hugging/lib/python3.7/site-packages/torch/serialization.py in _load(f, map_location, pickle_module)
548 assert key in deserialized_objects
--> 549 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
550 offset = None
RuntimeError: unexpected EOF. The file might be corrupted.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-26-2fd4b689c1db> in <module>
----> 1 featify=pipeline('feature-extraction')
~/miniconda3/envs/hugging/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, modelcard, **kwargs)
1084 "Trying to load the model with Tensorflow."
1085 )
-> 1086 model = model_class.from_pretrained(model, config=config, **model_kwargs)
1087
1088 return task(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, **kwargs)
~/miniconda3/envs/hugging/lib/python3.7/site-packages/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
375 for config_class, model_class in MODEL_MAPPING.items():
376 if isinstance(config, config_class):
--> 377 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
378 raise ValueError(
379 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
~/miniconda3/envs/hugging/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
468 except Exception:
469 raise OSError(
--> 470 "Unable to load weights from pytorch checkpoint file. "
471 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "
472 )
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
## Expected behavior
Model should (download and) load.
## Environment info
- `transformers` version: 2.4.1
- Platform: WSL
- Python version: 3.7.6.final.0
- PyTorch version (GPU?): 0.4.1 (no)
- Tensorflow version (GPU?): none
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 02-07-2020 23:40:37 | 02-07-2020 23:40:37 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,779 | closed | configuration from custom config file not working | I am trying to get the configuration from a custom config file by the following line :
`config = GPT2Config.from_pretrained("./lm/gpt2-xl/lm/my_config.json")`
This is similar to the example on this [page](https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig). But I am getting the following error:
```
OSError: Model name './lm/gpt2-xl/lm/my_config.json' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/./lm/gpt2-xl/lm/my_config.json/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
(https://huggingface.co/transformers/main_classes/configuration.html#transformers.PretrainedConfig)
```
Am I missing something? | 02-07-2020 21:51:11 | 02-07-2020 21:51:11 | Did you try using `GPT2Config.from_json_file('xxx.json')`? `from_pretrained` should be used when pointing to a directory containing a `config.json` file.<|||||>yes it is working now <|||||>Great to hear! |
transformers | 2,778 | closed | Preserve spaces in GPT-2 tokenizers | **The issue**: The GPT-2 and RoBERTa tokenizers are incorrectly stripping whitespace following special characters, preventing the BPE encoder from correctly encoding spaces in tokens following RoBERTa `<mask>` and `<unk>` tokens.
```
tokenizer.convert_ids_to_tokens(tokenizer.encode('She likes <mask> cats.'))
# output: ['<s>', 'She', 'Ġlikes', '<mask>', 'cats', '.', '</s>']
# should be: ['<s>', 'ĠShe', 'Ġlikes', '<mask>', 'Ġcats', '.', '</s>']
```
This makes the model inputs (and therefore outputs) incorrect. This issue manifests itself in the `fill-mask` pipeline where the model erroneously thinks the mask is a prefix to the following word when using RoBERTa:
```
roberta_fillmask = pipeline("fill-mask")
sentence = "She likes <mask> cats."
roberta_fillmask(sentence)
# top predictions: "She likes bobcats.", "She likes pussycats."
```
This PR makes the following changes:
- Preserves trailing whitespace following special tokens
- Inserts a space after the prepended start token when `add_special_tokens` is `True` in `encode()` so that the user doesn't have to include a leading space in the string. This can be overriden with the `add_prefix_space` argument.
- Adds a `framework` argument to the `pipeline` factory function, allowing users to easily specify TF vs PyTorch
After making these changes, the top predictions from the above example become 'She likes cute cats.' and 'She likes her cats.' | 02-07-2020 20:27:21 | 02-07-2020 20:27:21 | You're correct about the GPT-2 tokenizer – I failed to consider that GPT2 doesn't have a BOS token. I've pushed an alternative solution that defines a base `prepare_for_tokenization` method which children can override to make changes to the text before tokenization.
As for your second point, the changes are made where sequences are encoded in different ways and then compared. The clearest example is probably [here](https://github.com/huggingface/transformers/pull/2778/files#diff-1ca2285a5350e3d634978637356a9bdbR266-R267). The first encode is done with `add_special_tokens=False` whereas the second is done with `add_special_tokens=True`. Since adding special tokens now also adds a prefix space by default in RoBERTa, it's necessary to add `add_prefix_space=False` in the second encode so that the results are consistent.<|||||>Cool! I believe the `run_tests_tf` is failing due to a tokenization error (linked with your PR).<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=h1) Report
> Merging [#2778](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/73368963b200f2d70d2267bd49a3fa794850b3ff?src=pr&el=desc) will **decrease** coverage by `1.05%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2778 +/- ##
==========================================
- Coverage 75.09% 74.03% -1.06%
==========================================
Files 93 93
Lines 15250 15263 +13
==========================================
- Hits 11452 11300 -152
- Misses 3798 3963 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.26% <100%> (+0.05%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `86.1% <100%> (+0.41%)` | :arrow_up: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `71.42% <100%> (-0.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.94% <0%> (-2.28%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.06% <0%> (-1.33%)` | :arrow_down: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/2778/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=footer). Last update [7336896...a7bacfa](https://codecov.io/gh/huggingface/transformers/pull/2778?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks like it was an incorrect pipelines test which was treating each test sentence as one concat'd sequence, [here](https://github.com/huggingface/transformers/pull/2778/files#diff-ca5a8abd41d5c7bd3e6da1636c531976R97).<|||||>Yap at the moment on the Rust side, there is also an issue regarding the tokenization of Roberta, but which is slightly different from the one here.
When this PR lands on master I'll rebase the tokenizers-v2 branch to run all the new tests that the branch brings and see if nothing breaks :).
It looks great to me 👍 |
transformers | 2,777 | closed | distilbert-base-cased | - Weights
- Readmes and docs
- Previous omissions
weights are uploaded on s3, along with modelcards.
@LysandreJik Could you make sure I didn't forget anything?
@mfuntowicz Could have a check on the pipeline part? | 02-07-2020 19:22:21 | 02-07-2020 19:22:21 | do we want to actually change the model used in the pipeline?<|||||>> do we want to actually change the model used in the pipeline?
I'm not sure I understand the rationale behind the question.
Purely from a perf point of view, it's the same inf speed, while having better metrics than before.<|||||>Nevermind the failing test, it's a Heisenbug. Merge when ready. |
transformers | 2,776 | closed | Pipeline for text classification | # 🚀 Feature request
Could you please add a text classification pipeline?
| 02-07-2020 17:15:01 | 02-07-2020 17:15:01 | Did you check the README?
grep `text-classification: Initialize a TextClassificationPipeline directly, or see sentiment-analysis for an example.
`<|||||>Sorry, missed this somehow. Thanks for adding it!
On Fri, Feb 7, 2020 at 12:38 PM Julien Chaumond <[email protected]>
wrote:
> Did you check the README?
>
> grep text-classification: Initialize a TextClassificationPipeline
> directly, or see sentiment-analysis for an example.
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/2776?email_source=notifications&email_token=AALSLVVTFNJWUYUPOVQ7PVLRBWMBJA5CNFSM4KRR6SN2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOELD46DA#issuecomment-583520012>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AALSLVWTPQIT7XNKXLWZSYLRBWMBJANCNFSM4KRR6SNQ>
> .
>
<|||||>@julien-c trying that throws:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-7-b612b71d2864> in <module>
1 sentiment_analysis = pipeline('sentiment-analysis')
----> 2 text_classification = pipeline('text-classification')
~/SpacedOut/engage-sentiment/.venv/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, modelcard, framework, **kwargs)
1024 # Retrieve the task
1025 if task not in SUPPORTED_TASKS:
-> 1026 raise KeyError("Unknown task {}, available tasks are {}".format(task, list(SUPPORTED_TASKS.keys())))
1027
1028 framework = framework or get_framework(model)
KeyError: "Unknown task text-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask']"
```<|||||>you should import and use `TextClassificationPipeline` directly (i.e. there isn't a shortcut to use in `pipeline()`)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I'm going to ask the stupid question, and say there are no tutorial or code examples for `TextClassificationPipeline`. I mean I can dig up the source code, but documentation without examples is never my thing. Would be helpful if I know the data format for `run_tf_text_classification.py` as well. I guess what I'm asking is to finetune a text classification model, but the example at https://huggingface.co/transformers/custom_datasets.html is way too long. Quoting a meme, "ain't nobody got time for that". |
transformers | 2,775 | closed | Using fast tokenizers with pipelines | # 🚀 Feature request
Currently tokenizers are not working with QA pipeline, because they do not have the tokenize method implemented. Speeding up the tokenization would be really beneficial for my application.
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 02-07-2020 17:13:41 | 02-07-2020 17:13:41 | Hi @AlecS12,
I'm currently working on integrating tokenizers library inside transformers with pipelines support.
It should not be long now before it lang on master / get released.
You can track the development here: https://github.com/huggingface/transformers/pull/2674
If you want to checkout out the branch **tokenizers-v2** and give it a try, I'm more than happy to get your feedback.
Morgan<|||||>Hi @mfuntowicz,
That's great news. I checked out tokenizers-v2 and tried it in a web server (flask) and jupyterlab. In both cases got the same error. Could you please look into this?
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='bert-large-uncased-whole-word-masking-finetuned-squad')
nlp({
'question': 'Where is the cookie?',
'context': 'I keep cookies in a red plastic container.'
})
...
I0212 15:15:53.546577 140155558979392 modeling_utils.py:456] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-pytorch_model.bin from cache at /home/a652726/.cache/torch/transformers/ca2ac20761877486c1e2204d99653106b9adacf9a5eb18ec71b41d2dbef42103.2db7ae79c41a184c87600faabafa1369db2b16457723fd154ca3b436c4172807
convert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s]
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/a652726/miniconda3/envs/nlp2/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/a652726/miniconda3/envs/nlp2/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/data/home/a652726/transformers/src/transformers/data/processors/squad.py", line 141, in squad_convert_example_to_features
truncation_strategy="only_second" if tokenizer.padding_side == "right" else "only_first",
File "/data/home/a652726/transformers/src/transformers/tokenization_utils.py", line 1741, in encode_plus
**kwargs,
File "/data/home/a652726/transformers/src/transformers/tokenization_utils.py", line 1676, in batch_encode_plus
tokens = self._tokenizer.encode(*batch_text_or_text_pairs[0])
File "/home/a652726/miniconda3/envs/nlp2/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py", line 131, in encode
return self._tokenizer.encode(sequence, pair)
TypeError
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-1-c0bdf7f90854> in <module>
3 nlp({
4 'question': 'Where is the cookie?',
----> 5 'context': 'I keep cookies in the red plastic container.'
6 })
...
```
<|||||>I will definitively have a look, and will keep you posted.
Thanks for reporting<|||||>Hi @mfuntowicz,
I installed the latest 2.5.1 release and the pipeline error is still there. Had to roll back to 2.4.1.<|||||>release 2.5.1 does not produce the error by default anymore, because it changed the default Autotokenizer to slow, but the bug is still there:
```python
import transformers
from transformers import pipeline
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
nlp = pipeline('question-answering', model='bert-large-uncased-whole-word-masking-finetuned-squad', tokenizer=tokenizer)
nlp({
'question': 'Where is the cookie?',
'context': 'I keep cookies in the red plastic container.'
})
nlp({
'question': 'Where is the cookie?',
'context': 'I keep cookies in the red plastic container.'
})
convert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s]W0226 10:50:27.573424 140277524375360 tokenization_utils.py:1782] Fast tokenizers add special tokens by default. To remove special tokens, please specify`add_special_tokens=False` during the initialisation rather than when calling `encode`,`encode_plus` or `batch_encode_plus`.
W0226 10:50:27.576760 140277524375360 tokenization_utils.py:1782] Fast tokenizers add special tokens by default. To remove special tokens, please specify`add_special_tokens=False` during the initialisation rather than when calling `encode`,`encode_plus` or `batch_encode_plus`.
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/a652726/miniconda3/envs/nlp2/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/a652726/miniconda3/envs/nlp2/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/data/home/a652726/transformers/src/transformers/data/processors/squad.py", line 141, in squad_convert_example_to_features
truncation_strategy="only_second" if tokenizer.padding_side == "right" else "only_first",
File "/data/home/a652726/transformers/src/transformers/tokenization_utils.py", line 1889, in encode_plus
**kwargs,
File "/data/home/a652726/transformers/src/transformers/tokenization_utils.py", line 1815, in batch_encode_plus
tokens = self._tokenizer.encode(*batch_text_or_text_pairs[0])
File "/home/a652726/miniconda3/envs/nlp2/lib/python3.7/site-packages/tokenizers/implementations/base_tokenizer.py", line 141, in encode
return self._tokenizer.encode(sequence, pair)
TypeError
"""
'''
<|||||>Hi @AlecS12,
Thanks for trying out 2.5.1. The issue is still there because for the question-answering pipeline we're relying on a method from the squad data processor, `squad_convert_example_to_feature`, which is not compatible with the fast tokenizers.
I'll soon have a look at this to make it compatible with the fast tokenizers.
Sorry for the inconvenience. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi @mfuntowicz,,
The problem is still there in 2.10.1. Could you please reopen the issue and fix it?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,774 | closed | embedding index getting out of range while running gpt2-xl model | I am trying to run [hugginface][1] gpt2-xl model. I ran code from the [quickstart][2] page that load the small gpt2 model and generate text by the following code:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')
generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None
for i in range(100):
print(i)
output, past = model(context, past=past)
token = torch.argmax(output[0, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
sequence = tokenizer.decode(generated)
print(sequence)
```
This is running perfectly. Then I try to run `gpt2-xl` model.
I changed `tokenizer` and `model` loading code like following:
```
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
```
The `tokenizer` and `model` loaded perfectly. But I am getting an error on the following line:
` output, past = model(context, past=past)`
The error is:
`RuntimeError: index out of range: Tried to access index 204483 out of table with 50256 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418`
Looking at the error, it seems that the embedding size is not correct. So I wrote the following line to specifically fetch the config file of `gpt2-xl`:
` config = GPT2Config.from_pretrained("gpt2-xl")`
But, here `vocab_size:50257`
So I changed explicitly the value by:
` config.vocab_size=204483`
Then after printing the `config`, I can see that the previous line took effect in the configuration. But still, I am getting the same error. | 02-07-2020 16:53:29 | 02-07-2020 16:53:29 | Indeed, there was an error in the code, thank you for letting us know! I've patched it with fd639e5be31f83447c37cf79023fd98bac29f86c.
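For reference, the corrected loop looks roughly like this (a sketch reusing the `tokenizer` and `model` objects from the snippet above); the key change is taking the argmax only over the logits of the last position instead of over the flattened output:
```python
generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None

for i in range(100):
    output, past = model(context, past=past)
    # output has shape (batch, sequence_length, vocab_size); pick the next token
    # from the last position's logits only.
    token = torch.argmax(output[..., -1, :])
    generated += [token.tolist()]
    context = token.unsqueeze(0)

print(tokenizer.decode(generated))
```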
It is now [updated in the docs](https://huggingface.co/transformers/quickstart.html#using-the-past). Thanks! |
transformers | 2,773 | closed | How to load a pretrained TF model using AutoModel? | Run the following code:
```
import tensorflow as tf
from transformers import AutoModel, TFBertModel
auto_model = AutoModel.from_pretrained("bert-base-uncased")
tfbert_model = TFBertModel.from_pretrained("bert-base-uncased")
print(auto_model.__class__)
print(tfbert_model.__class__)
```
Then the output is:
```
<class 'transformers.modeling_bert.BertModel'>
<class 'transformers.modeling_tf_bert.TFBertModel'>
```
It seems that AutoModel loads the pretrained PyTorch models by default, but how can I use it to load a pretrained TF model? | 02-07-2020 10:55:18 | 02-07-2020 10:55:18 | Hi @erikchwang, you should use `TFAutoModel` instead<|||||>Is this TFAutoModel mentioned in the documentation? I cannot find it...<|||||>I'll add it to the model pages soon. Thanks!<|||||>When I load a TF model with AutoModel as described in your documentation, I get many errors, like this:
`model = AutoModel.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_model.ckpt.index', from_tf=True, config=config)`

when I used TFAutoModel to load a model, I get this:
`model = TFAutoModel.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_model.ckpt.index', config=config)`

I tried many of the functions in your documentation for loading a pretrained TF model; most of them raised errors.
<|||||>I can't load the model with model = TFAutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
or
TFAutoModel.from_pretrained('microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext')
Can anyone help?<|||||>> I can't able to load model for model = TFAutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
> and
> TFAutoModel
> .from_pretrained('microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext')
> anyone ?
Hi @blmali, I had the same issue when trying to load "emilyalsentzer/Bio_Discharge_Summary_BERT". I solved it by passing the `from_pt` argument as `True`:
`model = TFAutoModel.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT", from_pt=True)`.
I hope this helps. |
transformers | 2,772 | closed | How to generate different suggestions with GPT2 or XLNet like Write With Transformers? | Hello,
I want run_generation to produce several different suggestions for the next words, preferably with variable length and different terms or synonyms, like it is done in Write With Transformer.
Any suggestion or idea on how to achieve this?
Thanks | 02-07-2020 09:03:01 | 02-07-2020 09:03:01 | I closed this issue since I found it useful to set the `do_sample` argument to True, as mentioned in this issue: https://github.com/huggingface/transformers/issues/2415 |
transformers | 2,771 | closed | export to onnx issue | Hi experts,
I got an error when running the onnx model after conversion. Can anyone please help to take a look?
code:
```python
torch.onnx.export(model,
                  (input_ids, attention_mask, token_type_ids),
                  "bert.onnx",
                  input_names=['input_ids', 'attention_mask', 'token_type_ids'],
                  export_params=True, verbose=True)
```
```python
sess = rt.InferenceSession("bert.onnx")
inputs = {'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids}
outputs = sess.run(None, inputs)
```
error:
```
Traceback (most recent call last):
  File "test.py", line 29, in <module>
    outputs = sess.run(None, inputs)
  File "/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/session.py", line 142, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'' Status Message: indices element out of data bounds, idx=1 must be within the inclusive range [-1,0]
```
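For reference, this is the kind of re-export I plan to try next (just a sketch, untested), in case the failure comes from shapes being frozen to the example inputs: declaring dynamic axes and feeding numpy arrays to onnxruntime (reusing `rt`, `model` and the input tensors from above).
```python
import torch

# Sketch only: re-export with dynamic batch/sequence axes.
torch.onnx.export(
    model,
    (input_ids, attention_mask, token_type_ids),
    "bert.onnx",
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "token_type_ids": {0: "batch", 1: "sequence"},
    },
    export_params=True,
)

sess = rt.InferenceSession("bert.onnx")
# onnxruntime expects numpy arrays rather than torch tensors:
inputs = {
    "input_ids": input_ids.numpy(),
    "attention_mask": attention_mask.numpy(),
    "token_type_ids": token_type_ids.numpy(),
}
outputs = sess.run(None, inputs)
```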
| 02-07-2020 06:53:51 | 02-07-2020 06:53:51 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,770 | closed | The prediction output is random | When I use the official example scripts to make predictions with a sentence classification model, I found that the output is different every time.
```
from transformers import BertTokenizer, BertForSequenceClassification
import torch
import numpy as np
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
model.eval()
input_ids = torch.tensor(tokenizer.encode("my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
labels = torch.tensor([1]).unsqueeze(0) # Batch size 1
outputs= model(input_ids)
print(outputs)
```
first result: (tensor([[-0.1939, 0.1449]], grad_fn=<AddmmBackward>),)
second result: (tensor([[-0.2425, -0.2737]], grad_fn=<AddmmBackward>),)
third result: (tensor([[ 0.0494, -0.7208]], grad_fn=<AddmmBackward>),)
......
I expected the outputs to be the same... am I doing this wrong? | 02-07-2020 05:13:27 | 02-07-2020 05:13:27 | Hi @Mozen,
You will need to train your model for sequence classification first.
The pre-trained models are not yet trained for the downstream task. So right now, you have an untrained sequence classification head on top of Bert.
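A small illustration of that point: the classification head weights come from the random initializer, so seeding PyTorch before loading makes the (still untrained) logits reproducible; meaningful predictions still require fine-tuning.
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

torch.manual_seed(42)  # makes the randomly initialized classification head reproducible
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
model.eval()

input_ids = torch.tensor(tokenizer.encode("my dog is cute", add_special_tokens=True)).unsqueeze(0)
with torch.no_grad():
    print(model(input_ids))  # identical across runs with the same seed, but still untrained
```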
I could not find where it was mentioned in the docs, but have a look at [this comment](https://github.com/huggingface/transformers/issues/1979#issuecomment-559597512).<|||||>@jwallat ok, thanks a lot!<|||||>Please close the question if your question is answered. |
transformers | 2,769 | closed | Model download: tf-xlm-roberta-large "tf_model.h5" file missing | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): tf-xlm-roberta-large
The "tf_model.h5" file for tf-xlm-roberta-large appears to be missing as the following url from model hub is returning "NoSuchKey" errors: https://s3.amazonaws.com/models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5
If intentional, is it being reupload soon? | 02-07-2020 02:58:48 | 02-07-2020 02:58:48 | same for roberta-base and roberta-large<|||||>cc @jplu :)<|||||>@jplu Renamed the file using `aws s3 mv s3://models.huggingface.co/bert/jplu/tf-xlm-roberta-large/xlm-roberta-large-tf_model.h5 s3://models.huggingface.co/bert/jplu/tf-xlm-roberta-large/tf_model.h5`
Does it work now @paradc2 ?
@RichJackson Which exact models are you talking about?<|||||>humm this is weird I was sure to have properly named the models... This is certainly my bad then. I'm really sorry guys!
Here what I have when I `ls` my repo:
```
(transformers) ┌─[jplu@robinson] - [~/transformers] - [ven. févr. 07, 16:05]
└─[$] <git:(fix-tf-distil*)> ./transformers-cli s3 ls
Filename LastModified ETag Size
------------------------------------ ------------------------ -------------------------------------- ----------
tf-camembert-base/config.json 2020-01-31T23:00:26.000Z "da462af1da162d7145bf47f066533574" 596
tf-camembert-base/tf_model.h5 2020-01-30T12:25:25.000Z "fbce3cf6602dbb56daf6ea2b9642eefc" 545172724
tf-flaubert-base-cased/config.json 2020-01-31T23:00:26.000Z "b1bb00ff27331cee714b82d659b18d0e" 942
tf-flaubert-base-cased/tf_model.h5 2020-01-31T16:53:31.000Z "1418889252dda2462c2e8b8b0b74010d" 764558620
tf-flaubert-base-uncased/config.json 2020-01-31T23:00:26.000Z "b88f774bef4f4ab20748b728441fd03e" 942
tf-flaubert-base-uncased/tf_model.h5 2020-01-31T16:54:12.000Z "db954070da0d1435e07ae67713de63c3" 757260944
tf-flaubert-large-cased/config.json 2020-01-31T23:00:26.000Z "e0a5f3081bbb858a0096daa18a55157d" 1030
tf-flaubert-large-cased/tf_model.h5 2020-01-31T16:55:28.000Z "10b53d7cec21cc2d5a28a8d6a225e0ad" 1775057844
tf-flaubert-small-cased/config.json 2020-01-31T23:00:27.000Z "b4fe61d6ed58fbbc00d3f5aca3a23829" 1007
tf-flaubert-small-cased/tf_model.h5 2020-01-31T16:54:58.000Z "a8c6e15d7434dca7d49f1666b4933f2a" 358615548
tf-xlm-roberta-base/config.json 2020-01-31T23:00:27.000Z "3bb4d32c4818bf4ce53021f6ce7839df" 737
tf-xlm-roberta-base/tf_model.h5 2020-01-30T10:30:20.000Z "248f95f776e119c46132860f11085c2d" 1885418496
tf-xlm-roberta-large/config.json 2020-01-31T23:00:27.000Z "d6f295d68b0414208f5fc1cbc2f0dce6" 738
tf-xlm-roberta-large/tf_model.h5 2020-02-07T14:51:57.000Z "44602b7afc746bc6971e793f4534dcf0-390" 3271420488
```<|||||>here's the exception:
```
02/07/2020 15:03:17 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json from cache at /home/kxfv271/.cache/torch/transformers/d0c5776499adc1ded22493fae699da0971c1ee4c2587111707a4d177d20257a2.ef00af9e673c7160b4d41cfda1f48c5f4cba57d5142754525572a846a1ab1b9b
02/07/2020 15:03:17 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt from cache at /home/kxfv271/.cache/torch/transformers/b35e7cd126cd4229a746b5d5c29a749e8e84438b14bcdb575950584fe33207e8.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
02/07/2020 15:03:18 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base/pytorch_model.bin from cache at None
Traceback (most recent call last):
File "<masked>lib/python3.7/site-packages/torch/serialization.py", line 289, in _check_seekable
f.seek(f.tell())
AttributeError: 'NoneType' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<masked>/lib/python3.7/site-packages/transformers/modeling_utils.py", line 467, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location="cpu")
File "<masked/>lib/python3.7/site-packages/torch/serialization.py", line 525, in load
with _open_file_like(f, 'rb') as opened_file:
File "<masked>/lib/python3.7/site-packages/torch/serialization.py", line 217, in _open_file_like
return _open_buffer_reader(name_or_buffer)
File "<masked>/lib/python3.7/site-packages/torch/serialization.py", line 202, in __init__
_check_seekable(buffer)
File "<masked>/lib/python3.7/site-packages/torch/serialization.py", line 292, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "<masked>/lib/python3.7/site-packages/torch/serialization.py", line 285, in raise_err_msg
raise type(e)(msg)
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/data/home/kxfv271/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/data/home/kxfv271/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/datadrive/pycharm_project_817/aznlp_tools/rbert_paper/rbert_ablations.py", line 1182, in <module>
main()
File "/datadrive/pycharm_project_817/aznlp_tools/rbert_paper/rbert_ablations.py", line 1108, in main
cache_dir=args.cache_dir if args.cache_dir else None
File "<masked>/lib/python3.7/site-packages/transformers/modeling_utils.py", line 470, in from_pretrained
"Unable to load weights from pytorch checkpoint file. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
looks like the vocab and config files are still available?<|||||>@RichJackson, this is a different error to @paradc2. Could you show us which command raised this error?<|||||>I'm running a (modified) version of the run_glue.py example. I think the problem is on [this line](https://github.com/huggingface/transformers/blob/73368963b200f2d70d2267bd49a3fa794850b3ff/examples/run_glue.py#L634). If you don't provide a --cache-dir argument, this evaluates to None? Hence the above log line:
```
02/07/2020 15:03:18 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base/pytorch_model.bin from cache at None
```
i.e. model links seem to be ok<|||||>do you mind opening a new issue for this?<|||||>@julien-c yes, the tf-xlm-roberta-large download works as expected now. Thanks!<|||||>>
>
> I'm running a (modified) version of the run_glue.py example. I think the problem is on [this line](https://github.com/huggingface/transformers/blob/73368963b200f2d70d2267bd49a3fa794850b3ff/examples/run_glue.py#L634). If you don't provide a --cache-dir argument, this evaluates to None? Hence the above log line:
>
> ```
> 02/07/2020 15:03:18 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base/pytorch_model.bin from cache at None
> ```
>
> i.e. model links seem to be ok
Hi, I have the same problem with CamemBERT while fine-tuning on FQuAD. Any solutions?
transformers | 2,768 | closed | why take the first hidden state for sequence classification (DistilBertForSequenceClassification) | In the last few layers of sequence classification [here][1], the first hidden state along the sequence length of the transformer output is used for classification.
```python
hidden_state = distilbert_output[0]                 # (bs, seq_len, dim) <-- transformer output
pooled_output = hidden_state[:, 0]                  # (bs, dim) <-- first hidden state
pooled_output = self.pre_classifier(pooled_output)  # (bs, dim)
pooled_output = nn.ReLU()(pooled_output)            # (bs, dim)
pooled_output = self.dropout(pooled_output)         # (bs, dim)
logits = self.classifier(pooled_output)             # (bs, dim)
```
Is there any benefit to taking the first hidden state over the last, average, or even the use of a Flatten layer instead?
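For concreteness, this is the masked mean-pooling alternative I have in mind (a standalone sketch with dummy tensors; in the real model `hidden_state` would be the transformer output and `attention_mask` the usual padding mask):
```python
import torch

bs, seq_len, dim = 2, 7, 768
hidden_state = torch.randn(bs, seq_len, dim)   # stand-in for the transformer output above
attention_mask = torch.ones(bs, seq_len)       # 1 for real tokens, 0 for padding
attention_mask[:, 5:] = 0

mask = attention_mask.unsqueeze(-1)            # (bs, seq_len, 1)
mean_pooled = (hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)  # (bs, dim)
print(mean_pooled.shape)
```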
I've also asked this question on [Stack Overflow](https://stackoverflow.com/questions/60087613/why-take-the-first-hidden-state-for-sequence-classification-distilbertforsequen)
[1]: https://github.com/huggingface/transformers/blob/33d3072e1c54bcd235447b98c6dea1b4cb71234c/src/transformers/modeling_distilbert.py#L634 | 02-07-2020 02:44:19 | 02-07-2020 02:44:19 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,767 | closed | Adapter-BERT is missing in transformers library? | Adapter BERT obtain comparable results to BERT on several NLP tasks while achieving parameter efficiency. ( https://github.com/google-research/adapter-bert ) @thomwolf
I think, it will be useful if adapter-bert is also included in the library.
| 02-07-2020 01:26:13 | 02-07-2020 01:26:13 | Up, I think this is an awesome idea<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,766 | closed | Fix documentation in ProjectedAdaptiveLogSoftmax | The shape of outputs for forward in ProjectedAdaptiveLogSoftmax is flipped in the documentation: it should be log probabilities when `labels` is `None` and NLLs otherwise. This is what the code does, but the docstring has them flipped. | 02-07-2020 01:04:05 | 02-07-2020 01:04:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=h1) Report
> Merging [#2766](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d3072e1c54bcd235447b98c6dea1b4cb71234c?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2766 +/- ##
======================================
Coverage 75.1% 75.1%
======================================
Files 93 93
Lines 15249 15249
======================================
Hits 11452 11452
Misses 3797 3797
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/2766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `53.33% <ø> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=footer). Last update [33d3072...8725c54](https://codecov.io/gh/huggingface/transformers/pull/2766?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Perfect, thank you!! |
transformers | 2,765 | closed | Add option to `cached_path` to automatically extract archives | Slight modification to `cached_path` so that zip and tar archives can be automatically extracted.
- archives are extracted in the same directory as the (possibly downloaded) archive, in a created extraction directory named after the archive.
- automatic extraction is activated by setting `extract_compressed_file=True` when calling `cached_path`.
- the extraction directory is re-used to avoid extracting the archive again, unless we set `force_extract=True`, in which case the cached extraction directory is removed and the archive is extracted again.
Currently not added to the `from_pretrained` methods. Probably better to have the user control this explicitly at this level (by first extracting the archive) => open to discussion though.
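A usage sketch of the option described above (the URL is just a placeholder):
```python
from transformers.file_utils import cached_path

archive_url = "https://example.com/some-corpus.tar.gz"  # placeholder URL

# Download (and cache) the archive, then extract it into a cached extraction directory:
extracted_dir = cached_path(archive_url, extract_compressed_file=True)

# Re-running reuses the cached extraction directory unless extraction is forced:
extracted_dir = cached_path(archive_url, extract_compressed_file=True, force_extract=True)
```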
Also include a simple proposal to add TF/PT compatibility in hf_buckets (cc @julien-c) | 02-06-2020 23:10:13 | 02-06-2020 23:10:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=h1) Report
> Merging [#2765](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2c12464a20160061a8b436b4939e8d5fa2437a15?src=pr&el=desc) will **decrease** coverage by `0.36%`.
> The diff coverage is `31.03%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2765 +/- ##
==========================================
- Coverage 75.09% 74.73% -0.37%
==========================================
Files 93 93
Lines 15250 15273 +23
==========================================
- Hits 11452 11414 -38
- Misses 3798 3859 +61
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.26% <100%> (-0.56%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.26% <100%> (-0.07%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `67.74% <25.92%> (-5.36%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `52.94% <0%> (-21.57%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.62% <0%> (-3.32%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2765/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `84.87% <0%> (-0.82%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=footer). Last update [2c12464...c6c5c3f](https://codecov.io/gh/huggingface/transformers/pull/2765?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,764 | closed | [examples] rename run_lm_finetuning to run_language_modeling | And corresponding doc updates | 02-06-2020 20:10:31 | 02-06-2020 20:10:31 | Great!! |
transformers | 2,763 | closed | Add albert-base-v3 to pretrained models? | # 🚀 Feature request
Albert v3 was recently released on TFHub [here](https://tfhub.dev/google/albert_base/3). Could you please add it to the list of available pretrained models [here](https://huggingface.co/transformers/pretrained_models.html)?
## Motivation
Would provide the community with the most up-to-date albert version.
| 02-06-2020 19:47:23 | 02-06-2020 19:47:23 | Hi, as mentioned in their changelog, the only difference between the v2 and v3 is the compatibility with TF 1.15 as they removed the `einsum` operation.
It won't change anything for the huggingface/transformers users as the models available here are only for TF2.

|
transformers | 2,762 | closed | Add contributors snapshot | powered by https://github.com/sourcerer-io/hall-of-fame | 02-06-2020 19:18:01 | 02-06-2020 19:18:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=h1) Report
> Merging [#2762](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d3072e1c54bcd235447b98c6dea1b4cb71234c?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2762 +/- ##
======================================
Coverage 75.1% 75.1%
======================================
Files 93 93
Lines 15249 15249
======================================
Hits 11452 11452
Misses 3797 3797
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.21% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2762/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.39% <0%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=footer). Last update [33d3072...8b6a98e](https://codecov.io/gh/huggingface/transformers/pull/2762?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Nice! |
transformers | 2,761 | closed | [docs] Add menu w/ links to other pages on hf.co | 02-06-2020 18:49:53 | 02-06-2020 18:49:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=h1) Report
> Merging [#2761](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d3072e1c54bcd235447b98c6dea1b4cb71234c?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2761 +/- ##
======================================
Coverage 75.1% 75.1%
======================================
Files 93 93
Lines 15249 15249
======================================
Hits 11452 11452
Misses 3797 3797
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=footer). Last update [33d3072...e6944e6](https://codecov.io/gh/huggingface/transformers/pull/2761?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>btw @LysandreJik mkdocs looks really cool :)<|||||>Yeah I really like mkdocs as well |
|
transformers | 2,760 | closed | build: add poetry, an alternative to setup.py with dependency versions tracked | Hello,
I have written a `pyproject.toml` so your project can be set up using [poetry](https://github.com/python-poetry/poetry).
That way, requirement versions can easily be tracked by people who want to use **poetry** (it is optional).
# Example
```bash
# first setup your virtual environment, then:
pip install poetry
poetry install # this is equivalent to 'pip install .' but with versions tracked
poetry install --extras testing # pip install -e ".[testing]"
poetry install --extras examples # pip install -r examples/requirements.txt
poetry install --extras torch # pip install -e ".[torch]"
poetry install --extras tf # pip install -e ".[tf]"
# edit: updating dependencies to the latest possible:
poetry update
# adding new dependencies
poetry add MyPyModule
```
# Notes
This does not change any python code, i.e. everything still works :)
| 02-06-2020 16:43:36 | 02-06-2020 16:43:36 | Not sure why this breaks the CI?
Shouldn't we _not_ version control the .lock file?<|||||>To me, one interesting point is tracking `.lock`: that way you are always certain to have a working set of versions, i.e. dependency versions that match together.
Indeed I am not sure why the CI fails :/<|||||>Pinging @aaugustin our Python-ecosystem/packaging expert on this, but I don’t think we want to commit to maintaining multiple different install systems<|||||>https://circleci.com/gh/huggingface/transformers/15309?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
```
builder = WheelBuilder(poetry, SystemEnv(Path(sys.prefix)), NullIO())
```
The CI now uses `poetry` for `wheels` which was unexpected to me :/
`poetry build -f wheel` broken for C extensions
#1332 : https://github.com/python-poetry/poetry/issues/1332
I am not trying to change the way you build, only to give users the option to manage dependencies using `poetry` :wink:
<|||||>I have squashed my commits (sorry for the multiple CI runs)
Some issues were:
- `python 3.5` is needed (I used 3.6 so it complies with `black`), so it matches your `setup.py`
- email address of one authors was not compliant (needs to be: `"author <email>"`)
New error:
```
The following workers failed to return coverage data, ensure that pytest-cov is installed on these workers.
```
I am investigating.
Edit:
Could it be because I push once more?
Because it is present in `.circleci/config.yml` and seems to be installed during the `ci` :thinking:
And it works for few tests having also `pytest-cov`
Edit 2: **all good**, must have been committing once more<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=h1) Report
> Merging [#2760](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33d3072e1c54bcd235447b98c6dea1b4cb71234c?src=pr&el=desc) will **decrease** coverage by `25.36%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2760 +/- ##
===========================================
- Coverage 75.1% 49.73% -25.37%
===========================================
Files 93 93
Lines 15249 15249
===========================================
- Hits 11452 7584 -3868
- Misses 3797 7665 +3868
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `0% <0%> (-100%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `0% <0%> (-100%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `0% <0%> (-100%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `0% <0%> (-97.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `0% <0%> (-96.55%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `0% <0%> (-96.06%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `0% <0%> (-95.85%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `0% <0%> (-95.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `0% <0%> (-94.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `0% <0%> (-92.83%)` | :arrow_down: |
| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2760/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=footer). Last update [33d3072...4ab71c3](https://codecov.io/gh/huggingface/transformers/pull/2760?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Like @julien-c said, we don't want to maintain both poetry and setuptools configurations, because this is likely to create confusion and waste everyone's time (which already started with CI in this PR).
Switching to poetry could be a good move, but then we should ditch `setup.py` entirely and make sure all workflows are still operational.<|||||>> Like @julien-c said, we don't want to maintain both poetry and setuptools configurations, because this is likely to create confusion and waste everyone's time (which already started with CI in this PR).
>
> Switching to poetry could be a good move, but then we should ditch `setup.py` entirely and make sure all workflows are still operational.
I am not against you ditching `setup.py`, but that's your call :wink:
As for not wanting to maintain both `poetry` and `setuptools`, maybe there will be people wanting to use `poetry` and will maintain it themselves. This would not mean any extra work for people not wanting to update `poetry` (even if I think maintaining `poetry` does not require much effort, there was some at the beginning and it has been done :wink: )
<|||||>Replying to the discussion about the lock file: in pipenv projects I never share the lock file. Yes, you get better (locked) version control but in practice this does not work cross platform at all. Hashes for slightly more complex packages are mostly platform dependent. Installations between colleagues failed because of this. The lock file is a good idea in practice or for in-house deployment but is not useful in the real world, I think. <|||||>On the lock file discussion, I think it's not worth it to version it in git for libraries, in general. The pro is an almost reproducible environment. Then con is having to constantly keep it up-to-date for new versions of all the dependencies (including the transitive ones), even for little changes (e.g., tqdm from 4.36.0 to 4.36.1). You could also avoid updating it, but then you'd never catch bugs on new versions. So I think it's good to keep reproducibility on python projects that are not libraries, especially when you want to make sure your code works on production as similar to your env as possible.
As an outsider, I see moving to poetry as a good idea. Pros: it works well and fast, specifying the test/dev/docs dependencies, simpler and fewer package configuration files (in theory, only `pyproject.toml`), can check if your env complies with the config file, can specify Python versions, can publish the package easily, more flexibility when specifying the dependencies' versions. The only con I see, apart from learning the tool which should be fast, is that `pip install --editable` wouldn't work as of today for the users.<|||||>> in practice this does not work cross platform at all
I agree that's an argument against it.
> Then con is having to constantly keep it up-to-date for new versions of all the dependencies (including the transitive ones), even for little changes (e.g., tqdm from 4.36.0 to 4.36.1).
I see no reason why you would need to keep it up-to-date. To me it is simply a (near) guarantee to be able to have a working environment to develop on the project. No matter if you don't have all the latest updates. Most little changes from dependencies have little to no impact on your own development. (library or project)
Anyhow, feel free to tell me to remove the `.lock` or to close this issue & PR 😉
<|||||>> I see no reason why you would need to keep it up-to-date. To me it is simply a (near) guarantee to be able to have a working environment to develop on the project. No matter if you don't have all the latest updates. Most little changes from dependencies have little to no impact on your own development. (library or project)
The problem I see is that some dependency versions are gonna stall forever, while actually the latest ones haven't been tried and are more likely to break the codebase.<|||||>> The problem I see is that some dependency versions are gonna stall forever, while actually the latest ones haven't been tried and are more likely to break the codebase.
It does not seem to be a good behavior to add breaking new dependencies 🤔 (especially in a `lib` with 22k ⭐️ )
As for stalling ones, `poetry update` most of the time will do the trick (updating your state to a newer working state) and I suppose there should be 1 or several people interested with having a most up to date setup and could contribute it though I may be naive 😇
<|||||>> It does not seem to be a good behavior to add breaking new dependencies (especially in a `lib` with 22k )
Not breaking dependencies, but a dependency version update makes your codebase to break, especially when you have an old version because the one in the codebase is quite old.
> As for stalling ones, `poetry update` most of the time will do the trick (updating your state to a newer working state) and I suppose there should be 1 or several people interested with having a most up to date setup and could contribute it though I may be naive
There's dependabot. But I think it's not worth it, that's my point.<|||||>A compromise could be that dependabot sends PRs monthly so it's less overwhelming. But I still don't see keeping the test env reproducible as an advantage (it doesn't reflect users' env).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,759 | closed | Loss is calculated on all tokens, including padding, in the LM fine-tuning example | # 🐛 Bug
The BERT fine-tuning example uses a special index to mark ignored locations for the loss function:
`loss_fct = CrossEntropyLoss(ignore_index=-1)`
While in the same example, the masking function that samples locations to be included or excluded uses a different index: -100 (which is the default ignored index for the cross-entropy loss function, if one is not supplied):
`labels[~masked_indices] = -100 # We only compute loss on masked tokens`
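For reference, a minimal snippet showing that -100 is indeed the default ignored index of PyTorch's cross-entropy loss:
```python
import torch
from torch.nn import CrossEntropyLoss

loss_fct = CrossEntropyLoss()             # default ignore_index is -100
logits = torch.randn(4, 10)               # (num_tokens, vocab_size)
labels = torch.tensor([1, -100, 3, -100])
# Only the positions labelled 1 and 3 contribute; the -100 positions are skipped entirely.
print(loss_fct(logits, labels))
```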
Model I am using (Bert, XLNet ...): All models.
Language I am using the model on (English, Chinese ...): All languages
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: LM Finetuning
* [ ] my own task or dataset: (give details below)
## Expected behavior
Loss should be computed only on the 15% (mlm_probability) of sampled tokens.
- `transformers` version: 2.3, 2.4
| 02-06-2020 15:22:16 | 02-06-2020 15:22:16 | Hello, `BertForMaskedLM` [does not use an ignore index set to -1](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1018), nor does any other models.
It was updated in [2.4.0](https://github.com/huggingface/transformers/releases/tag/v2.4.0). The scripts should be run against the latest version of the library.
If you want to run against v2.3.0, please use a script from v2.3.0, for example [run_lm_finetuning](https://github.com/huggingface/transformers/blob/v2.3.0/examples/run_lm_finetuning.py).<|||||>You're right, I mixed up the versions. closing the issue. |
transformers | 2,758 | closed | TFRoberta output with attention_mask changes in version 2.3.0 vs 2.4.1 | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): not relevant
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```python
import tensorflow as tf
tf.get_logger().setLevel('CRITICAL')
import transformers
print(transformers.__version__)
from transformers import TFRobertaModel, RobertaConfig
from numpy.testing import assert_allclose
config = RobertaConfig()
model = TFRobertaModel(config)
input1 = tf.constant([[5, 3, 4, 8, 7, 1, 6]])
attention_mask1 = tf.constant([[1, 1, 1, 1, 1, 0, 1]])
out1, _ = model({'input_ids': input1, 'attention_mask': attention_mask1})
input2 = tf.constant([[5, 3, 4, 8, 7, 5, 6]])
attention_mask2 = tf.constant([[1, 1, 1, 1, 1, 0, 1]])
out2, _ = model({'input_ids': input2, 'attention_mask': attention_mask2})
assert_allclose(out1.numpy()[:, :5, :], out2.numpy()[:, :5, :])
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: I am using dummy token ids
## To reproduce
Steps to reproduce the behavior:
1. make a new virtualenv
2. install tensorflow
3. pip install transformers=2.3.0
4. save the script in test_mask.py
5. run python test_mask.py
6. repeat 1-5, but in 2 install the latest release: pip install transformers
In case of transformers==2.3.0, the test **passes**, giving the following output:
```
2.3.0
```
In case of transformers==2.4.1, the test **fails**:
```
2.4.1
Traceback (most recent call last):
File "test_attention_mask.py", line 21, in <module>
assert_allclose(out1.numpy()[:, :5, :], out2.numpy()[:, :5, :])
File "/home/bartosz/.pyenv/versions/aphp-django/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 1533, in assert_allclose
verbose=verbose, header=header, equal_nan=equal_nan)
File "/home/bartosz/.pyenv/versions/aphp-django/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 846, in assert_array_compare
raise AssertionError(msg)
AssertionError:
Not equal to tolerance rtol=1e-07, atol=0
Mismatched elements: 3840 / 3840 (100%)
Max absolute difference: 0.43364888
Max relative difference: 337.9916
x: array([[[ 0.742064, -1.048889, -1.133795, ..., 1.208201, -0.110544,
-1.556664],
[-0.307906, -0.545374, -1.124657, ..., 0.067571, -0.857922,...
y: array([[[ 0.718682, -0.995075, -1.105745, ..., 1.380688, -0.071943,
-1.627201],
[-0.390375, -0.534317, -1.113236, ..., 0.178188, -0.822041,...
```
## Expected behavior
In my understanding, the test should pass, because the only difference between inputs `input1` and `input2` is the token at index 5, which is masked in both `attention_masks` (i.e. `input1[5]==1` while `input2[5]==5`). Note that I don't include the embedding for this token in the comparison of the outputs.
## Environment info
- `transformers` version: 2.4.1
- Platform: linux
- Python version: 3.7.4
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): 2.1.0 (no GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-06-2020 15:17:38 | 02-06-2020 15:17:38 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,757 | closed | Cannot reproduce SQUAD Example | I'm not able to reproduce the squad experimentation (via the example). I tried the command line;
python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
That gave very weird results.
Then I read the forum a little and tried:
python3 run_squad.py \
--model_type bert \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--do_lower_case \
--version_2_with_negative \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 10000 \
--output_dir debug_squad/ \
--overwrite_output_dir
(I also tried with the v2 dataset; it does not work either). Can you give me some leads on reproducing the results given in the git readme, or the branch to use?
| 02-06-2020 12:14:54 | 02-06-2020 12:14:54 | 1. What do you mean "weird results"?
2. What do you mean "v2 dataset it not works anymore"?
3. Please provide all the information required in the template, i.e. python version, transformers version, torch version etc<|||||>1. F1 & Exact match:~18 (should be 88/81 no ?)
2. Squad 2.0
3. Python 3.6, last transformer version (clone yesterday), torch 1.4.0
Tensorboard :

|
transformers | 2,756 | closed | BERT decoder: Fix failure with the default attention mask. | PyTorch < 1.3 requires multiplication operands to be of the same type. This was violated when using default attention mask (i.e.., `attention_mask=None` in arguments) given BERT in the decoder mode.
In particular, this was breaking `Model2Model` and made a tutorial from quickstart.md fail.
A test is included, but here is a minimal snippet to reproduce:
```python
import torch
from transformers import BertModel
model = BertModel.from_pretrained("bert-base-uncased", is_decoder=True)
inputs = torch.LongTensor([[1, 2, 3]])
model(inputs) # no `attention_mask` provided
```
On PyTorch 1.2 or older this was failing with
```
Traceback (most recent call last):
...
File "/home/oleksiy.syvokon/transformers/src/transformers/modeling_bert.py", line 735, in forward
extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
RuntimeError: expected device cpu and dtype Float but got device cpu and dtype Long
``` | 02-06-2020 11:56:21 | 02-06-2020 11:56:21 | Thanks for the feedback! That's a valid concern. I made config handling consistent, at least for the BERT tests. But if you decide that it's too much change for such a trivial fix, I can revert the changes in tests.<|||||>(Seems like CircleCI tests failure is transient and unrelated to the PR)<|||||>That's one way of solving the issue, but now it makes the BERT tests incoherent with the rest of the tests, which all use tuples instead of dictionaries. For this PR, I believe the most simple would be to revert to using tuples and use this tuple in `test_bert_model_as_decoder_with_default_input_mask`. What do you think?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=h1) Report
> Merging [#2756](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1f5db9a13c8932e02e6e7d599a16dc262b1570bf?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2756 +/- ##
=======================================
Coverage 75.02% 75.02%
=======================================
Files 93 93
Lines 15275 15275
=======================================
Hits 11460 11460
Misses 3815 3815
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.9% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `100% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.82% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.54% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.05% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.84% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `95.11% <0%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.66% <0%> (ø)` | :arrow_up: |
| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/2756/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=footer). Last update [1f5db9a...b5b92ed](https://codecov.io/gh/huggingface/transformers/pull/2756?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@LysandreJik, I agree. Reverted unnecessary changes in tests.<|||||>Great, thanks @asivokon !! |
transformers | 2,755 | closed | Multi-text files support for run_lm_finetuning | # 🚀 Feature request
Support multi-text files for run_lm_finetuning.
## Motivation
Currently, you support training from scratch, but only from a single file. Usually, when we train from scratch we train a model on multiple text files, not a single text file.
It would be great to support multiple text files, and maybe to separate the fine-tuning script from the training-from-scratch script.
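In the meantime, a small sketch of the workaround I use today: merging the corpus files into the single file the script expects (file names below are placeholders).
```python
from pathlib import Path

parts = ["corpus_part1.txt", "corpus_part2.txt", "corpus_part3.txt"]  # placeholder file names
with open("merged_corpus.txt", "w", encoding="utf-8") as out:
    for name in parts:
        out.write(Path(name).read_text(encoding="utf-8"))
        out.write("\n")
```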
| 02-06-2020 09:13:47 | 02-06-2020 09:13:47 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,754 | closed | Changed vocabulary save function. Variable name was inconsistent | Caused an error to be thrown when passing a file name instead of a directory.
UnboundLocalError: local variable 'vocab_file' referenced before assignment
Associated with issue #2753
| 02-06-2020 09:06:30 | 02-06-2020 09:06:30 | Great, thank you for taking the time to fix it! |
transformers | 2,753 | closed | Saving tokenizer vocabulary throws error when passing file name instead of directory. | # 🐛 Bug
## Information
Using transfo-xl-wt103
UnboundLocalError: local variable 'vocab_file' referenced before assignment
## To reproduce
Steps to reproduce the behavior:
```
from transformers import TransfoXLTokenizer, TFTransfoXLLMHeadModel
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
tokenizer.save_vocabulary('vocab.txt')
```
## Pull Request
https://github.com/huggingface/transformers/pull/2754
| 02-06-2020 08:56:58 | 02-06-2020 08:56:58 | Closed by #2754 |
transformers | 2,752 | closed | Fix multi-gpu evaluation in run_glue.py example | Fix multi-gpu evaluation while training in `examples/run_glue.py` | 02-06-2020 07:23:04 | 02-06-2020 07:23:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=h1) Report
> Merging [#2752](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d87eafd118739a4c121d69d7cff425264f01e1c?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2752 +/- ##
=======================================
Coverage 74.51% 74.51%
=======================================
Files 87 87
Lines 14920 14920
=======================================
Hits 11117 11117
Misses 3803 3803
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=footer). Last update [9d87eaf...31218ea](https://codecov.io/gh/huggingface/transformers/pull/2752?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks! |
transformers | 2,751 | closed | Sentence pair classification | # ❓ Questions & Help
Hi,
I want to do sentence pair classification on the Quora Questions Dataset by fine-tuning BERT. I am new to this and do not know where to start. Can anyone let me know how I can get started with this?
## Details
**A link to original question on Stack Overflow**: | 02-06-2020 03:10:58 | 02-06-2020 03:10:58 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,750 | closed | default output of BertModel.from_pretrained('bert-base-uncased') | By default `output = BertModel.from_pretrained('bert-base-uncased')` is a 2-tuple where `output[0]` is the hidden states of the last layer, but how is `output[1]` computed? It doesn't seem to be average of the last layer hidden states vectors over multiple tokens. I am trying to leverage output as sentence embedding, not sure if I should use `output[1]`. Thank you! | 02-05-2020 22:55:13 | 02-05-2020 22:55:13 | See https://huggingface.co/transformers/v1.2.0/_modules/pytorch_transformers/modeling_bert.html#BertModel
I think this is your output[1]:
"Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during Bert pretraining. This output is usually **not** a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence."<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,749 | closed | Upgrade run_generation | The script `run_generation` has a few issues that I aim to fix in this PR:
- [x] The XLNet and XLM generations are broken (crash)
- [x] An end-of-sequence token is added to all sequences, even for models that don't have that token, which results in odd sequence endings.
- [x] No way to generate multiple sequences at a time as it was possible before
- [x] The `length` parameter doesn't take into account the prompt length.
- [x] The prompt is concatenated to the generated sequence, which results in concatenating the initial text for XLNet.
- [x] Actually implement languages for XLM | 02-05-2020 21:17:28 | 02-05-2020 21:17:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=h1) Report
> Merging [#2749](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ada24def22199459d8c1decc311dfe8dae7a7d8c?src=pr&el=desc) will **decrease** coverage by `0.02%`.
> The diff coverage is `25%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2749 +/- ##
==========================================
- Coverage 75.1% 75.07% -0.03%
==========================================
Files 93 93
Lines 15249 15255 +6
==========================================
Hits 11452 11452
- Misses 3797 3803 +6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `60.7% <0%> (-0.63%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2749/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.46% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=footer). Last update [ada24de...f2bcc91](https://codecov.io/gh/huggingface/transformers/pull/2749?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Changes have been added in PR #2885. |
transformers | 2,748 | closed | TFAlbertModelTest::test_pt_tf_model_equivalence -> Fatal Python Error on Mac | Running the unit tests locally on mac, I get "Fatal Python error: Aborted"
To reproduce, try `pytest tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_pt_tf_model_equivalence `
### Environment Info
- `transformers` version: 2.4.1
- Platform: Darwin-19.0.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.3.1 (False)
- Tensorflow version (GPU?): 2.0.0 (False)
### Traceback
```
tests/test_modeling_tf_albert.py Fatal Python error: Aborted
Current thread 0x0000000110d2adc0 (most recent call first):
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/functional.py", line 1372 in linear
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87 in forward
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541 in __call__
File "/Users/shleifer/transformers_fork/src/transformers/modeling_albert.py", line 321 in forward
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541 in __call__
File "/Users/shleifer/transformers_fork/src/transformers/modeling_albert.py", line 566 in forward
File "/Users/shleifer/miniconda3/envs/nb/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541 in __call__
File "/Users/shleifer/transformers_fork/tests/test_modeling_tf_common.py", line 111 in test_pt_tf_model_equivalence
```
https://github.com/huggingface/transformers/issues/2240 has a different error message from a similar test.
Thanks! | 02-05-2020 19:50:59 | 02-05-2020 19:50:59 | #2240 was an error with DistilBERT and was fixed with https://github.com/huggingface/transformers/commit/ea2600bd5f1d36f2fb61958be21db5b901e33884
Does this error happen every time you run the test suite?<|||||>yes!<|||||>I'm running on Darwin 19.2, Python 3.7.5, torch 1.3.1, tensorflow 2.0.0 and transformers from source and I can't replicate this bug 😕
I'm thinking this may be due to a memory issue but it's hard to say given the cryptic error message<|||||>I bumped tensorflow to 2.1 and cant replicate this failure **or** the flaky CircleCI test #2781
- `transformers` version: 2.4.1
- Platform: Darwin-19.0.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in><|||||>I also just tried to use python 3.5 and can't replicate.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,747 | closed | Arxiv README | 02-05-2020 18:44:18 | 02-05-2020 18:44:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=h1) Report
> Merging [#2747](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2184f87003c18ad8a172ecab9a821626522cf8e7?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2747 +/- ##
======================================
Coverage 75.1% 75.1%
======================================
Files 93 93
Lines 15249 15249
======================================
Hits 11452 11452
Misses 3797 3797
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=footer). Last update [2184f87...69d18f4](https://codecov.io/gh/huggingface/transformers/pull/2747?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LOTM
(= Looks outstanding to me)<|||||>I really did my best |
|
transformers | 2,746 | closed | Added CamembertForQuestionAnswering | 02-05-2020 17:42:23 | 02-05-2020 17:42:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=h1) Report
> Merging [#2746](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2184f87003c18ad8a172ecab9a821626522cf8e7?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2746 +/- ##
=========================================
+ Coverage 75.1% 75.1% +<.01%
=========================================
Files 93 93
Lines 15249 15253 +4
=========================================
+ Hits 11452 11456 +4
Misses 3797 3797
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/2746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.87% <ø> (ø)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `29.18% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=footer). Last update [2184f87...74c277a](https://codecov.io/gh/huggingface/transformers/pull/2746?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@julien-c Could you please review ;) |
|
transformers | 2,745 | closed | Add BartModel | This ports BART, a "sequence-to-sequence model trained with denoising as pretraining objective." from https://github.com/pytorch/fairseq/tree/master/examples/bart
The decoder is left-to-right and the encoder is bidirectional. As such, the code only uses a causal attention mask in the decoder.
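For readers, a hedged illustration of what a decoder-side causal mask looks like (illustrative only, not the exact code in this PR):
```python
import torch

def causal_mask(seq_len: int) -> torch.Tensor:
    # Position i may only attend to positions <= i: strictly upper-triangular entries are -inf.
    mask = torch.full((seq_len, seq_len), float("-inf"))
    return torch.triu(mask, diagonal=1)

print(causal_mask(4))
```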
### TODO:
- [x] conversion of pretrained weights
- [x] some unit testing
- [x] inference produces the same results as the fairseq version.
- [x] decide on signature/splitting of encoder, decoder arguments (see https://github.com/huggingface/transformers/blob/808bbd5a6abe5b26656ffd809ce0e753495c912a/src/transformers/modeling_encoder_decoder.py#L240
)
- [x] Docstrings
- [x] More comments for code readers
### Future PRs
- [ ] example with correct pretraining objective
- [ ] `BartForSummarization.from_pretrained('bart-large-cnn')`
| 02-05-2020 17:10:03 | 02-05-2020 17:10:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=h1) Report
> Merging [#2745](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/564fd75d65e66d3ac2a7c39558aa1079c9845152?src=pr&el=desc) will **increase** coverage by `0.76%`.
> The diff coverage is `84.39%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2745 +/- ##
=========================================
+ Coverage 75.34% 76.1% +0.76%
=========================================
Files 94 98 +4
Lines 15440 15946 +506
=========================================
+ Hits 11633 12136 +503
- Misses 3807 3810 +3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `26.66% <ø> (+1.36%)` | :arrow_up: |
| [src/transformers/utils\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlsc19lbmNvZGVyX2RlY29kZXIucHk=) | `0% <0%> (ø)` | |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <100%> (-0.07%)` | :arrow_down: |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <100%> (+0.03%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `73.6% <100%> (+12.27%)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <100%> (+0.23%)` | :arrow_up: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100% <100%> (ø)` | |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `100% <100%> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100% <100%> (ø)` | |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.29% <100%> (+0.07%)` | :arrow_up: |
| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/2745/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=footer). Last update [564fd75...6db143e](https://codecov.io/gh/huggingface/transformers/pull/2745?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I marked some things "resolved" that I've done locally so that I can keep track. Pls advise if it is confusing/not the correct style!<|||||>> I marked some things "resolved" that I've done locally so that I can keep track. Pls advise if it is confusing/not the correct style!
It's ok but obviously I can't discuss the new changes then. |
transformers | 2,744 | closed | Albert language model fine tuning not running run_lm_finetuning.py | # ❓ Questions & Help
## Information
Model I am using (Albert(all types)):
Language I am using the model on (English):
The problem arises when using:
* [ ] the official example scripts: (give details below)
The code hits memory allocation problems when run with any version of ALBERT. I tried reducing the sequence length and batch size to a minimum, but the issue still arises. Both my setting and the minimized setting run normally with BERT or RoBERTa; the issue only appears when I change the model to ALBERT.
an example:
`tcmalloc: large alloc 1951195136 bytes == 0x7f750f664000 @ 0x7f76efbf8887 0x7f764c2a1b79 0x7f764c29fb0f 0x7f764c29fc33 0x7f764c26a155 0x7f764c26837e 0x7f764c26bbb1 0x7f764c2606df 0x50a8af 0x50c5b9 0x509d48 0x50aa7d 0x50c5b9 0x508245 0x509642 0x595311 0x5a067e 0x50d966 0x58efc9 0x4c9546 0x5886f4 0x58892e 0x551b81 0x5aa6ec 0x50abb3 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50c5b9 0x508245`
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
language model finetuning for albert
## To reproduce
Steps to reproduce the behavior:
1. in run_lm_finetuning add:
```python
from transformers import (
    AlbertConfig,
    AlbertForMaskedLM,
    AlbertTokenizer,
)
```
2. add to MODEL_CLASSES dictionary:
```python
"albert": (AlbertConfig, AlbertForMaskedLM, AlbertTokenizer),
```
3. add file text.txt, a similar txt file to the wiki dataset that's mentioned in the docs.
4. run the finetuning script:
```bash
python transformers/examples/run_lm_finetuning.py \
  --output_dir=output \
  --model_type=albert \
  --model_name_or_path=albert-base-v1 \
  --do_train \
  --train_data_file test.txt \
  --block_size 50 \
  --per_gpu_train_batch_size 2 \
  --max_steps 520000 \
  --weight_decay 0.01 \
  --logging_steps 5000 \
  --mlm
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment
* OS: Google colab
* Python version: 3.7
* PyTorch version: 1.3.1
* `transformers` version (or branch): latest
* Using GPU ? yes
* Distributed or parallel setup ? no
* Any other relevant information:
| 02-05-2020 12:19:43 | 02-05-2020 12:19:43 | @thomwolf can you give any insights regarding this?<|||||>how much lines in `test.txt`?<|||||>1,041,130 line<|||||>I have a similar issue finetuning the language model with bert. In the end, I had to scale down my training to ~200,000 lines to make it work, which is a very small proportion of my original dataset.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,743 | closed | PreTrainedEncoderDecoder keeps giving me the same next token | Hi, I am trying to use PreTrainedEncoderDecoder to train a seq2seq model. I have a working training code, but I'm not sure if I'm doing things correctly because the decoded token is always the same token during inference.
The data is paired input and target sentence pairs.
The dataset class looks like this:
```
class LineByLineLabelledTextDataset(Dataset):
"""Labelled text dataset where a line corresponds to a sample."""
def __init__(self,
lines,
tokenizer,
sep="|||",
max_seqlen=512):
self.lines = lines
self.tokenizer = tokenizer
self.sep = sep
self.max_seqlen = max_seqlen
def __len__(self):
return len(self.lines)
def __getitem__(self, i):
splitted = self.lines[i].split(self.sep)
input, target = splitted[0], splitted[1]
# target += " [GEN_STOP]"
input_dict = self.tokenizer.encode_plus(input,
max_length=self.max_seqlen,
pad_to_max_length=True)
target_dict = self.tokenizer.encode_plus(target,
max_length=self.max_seqlen,
pad_to_max_length=True)
return torch.tensor(input_dict["input_ids"]), torch.tensor(target_dict["input_ids"]), torch.tensor(input_dict["attention_mask"]), torch.tensor(target_dict["attention_mask"])
```
The training function for one step looks like this:
```
def train_batch(batch, model, optimizer, device, phase="train"):
input_ids = batch[0].to(device)
target_ids = batch[1].to(device)
input_attention_mask = batch[2].to(device)
target_attention_mask = batch[3].to(device)
optimizer.zero_grad()
with torch.set_grad_enabled(phase == "train"):
outputs = model(input_ids, target_ids,
encoder_attention_mask=input_attention_mask,
decoder_attention_mask=target_attention_mask,
decoder_lm_labels=target_ids)
lm_loss = outputs[0]
loss = lm_loss
loss.backward()
optimizer.step()
return loss
```
The decode function looks like this
```
def decode(encoder_input_text, model, tokenizer, max_length=20):
model.eval()
text = encoder_input_text
generated_text = "[CLS]"
while len(generated_text.split()) < max_length:
encoder_input_ids = tokenizer.encode(text)
encoder_input_tensor = torch.tensor([encoder_input_ids])
print(f"encoder_input_tensor: {encoder_input_tensor}")
decoder_input_ids = tokenizer.encode(generated_text, add_special_tokens=False)
decoder_input_tensor = torch.tensor([decoder_input_ids])
print(f"decoder_input_tensor: {decoder_input_tensor}")
with torch.no_grad():
outputs = model(encoder_input_ids=encoder_input_tensor, decoder_input_ids=decoder_input_tensor)
predictions = outputs[0]
predicted_index = torch.argmax(predictions[0, -1]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
generated_text += " " + predicted_token
print(generated_text)
print(len(generated_text.split()))
if len(generated_text.split()) >= max_length:
break
return generated_text
```
I see the training loss goes down a bit during training. I don't know what I'm doing wrong. | 02-05-2020 11:54:32 | 02-05-2020 11:54:32 | I'm using both "bert-base-uncased" for both encoder and decoder.<|||||>> I think I found the problem. I moved the code inside the train_step outside to the enclosing function and it seems to work.
Hi, I am having the same problem, what solved it for you?<|||||>> > I think I found the problem. I moved the code inside the train_step outside to the enclosing function and it seems to work.
>
> Hi, I am having the same problem, what solved it for you?
Hi, the last time I tried, I got it to work by training at a lower learning rate and for more iterations. Try troubleshooting the code by lowering the number of samples, and try to overfit the training set by training for more iterations. The loss should go down to below 0.6. Proceed with the full dataset only when things work. |
transformers | 2,742 | closed | do_lower_case strips accents! | # 🐛 Bug
When calling BertTokenizer with do_lower_case=True, the tokenizer also strips accents, a misleading behavior that is not indicated in the name of the parameter. We suggest creating another parameter that indicates whether or not to strip accents, separate from do_lower_case! This also happens in AutoTokenizer. For some languages, like Spanish, this is crucial (hacia is not the same as hacía). Moreover, it's set to True by default.
https://github.com/huggingface/transformers/blob/2184f87003c18ad8a172ecab9a821626522cf8e7/src/transformers/tokenization_bert.py#L346
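A quick reproduction sketch (the checkpoint name is just an example; the exact wordpieces depend on the vocabulary, but both words end up identical after lowercasing):
```python
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-multilingual-cased", do_lower_case=True)
# do_lower_case=True also strips the accent, so both inputs produce the same tokens
print(tok.tokenize("hacía"))
print(tok.tokenize("hacia"))
```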
| 02-05-2020 11:47:20 | 02-05-2020 11:47:20 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Well - is this really a bug or just an improvement of the documentation?<|||||>In my opinion it's a bug, as it's misleading. If you want to lowercase, that doesn't mean you want to strip accents too. Those are two separate actions which the user should decide on separately. |
transformers | 2,741 | closed | XLM-Roberta mask filling error | # XLM-Roberta mask token filling error
Hi! I am trying to use XLM-Roberta for Masked LM task, but the error occurs when the model fills masked token in a test sentence.
**The code is:**
```
config.model_name = 'xlm-roberta-base'
tokenizer: tr.XLMRobertaTokenizer = tr.XLMRobertaTokenizer.from_pretrained(config.model_name)
model: tr.XLMRobertaForMaskedLM = tr.XLMRobertaForMaskedLM.from_pretrained(config.model_name)
input_ids = tokenizer.encode_plus("I want to <mask> New York!",
max_length=config.max_length)['input_ids']
x = np.full((config.max_length), fill_value=tokenizer.pad_token_id)
attn = np.zeros_like(x)
for i, tok in enumerate(input_ids):
x[i] = tok
attn[i] = 1
x = torch.tensor(x).unsqueeze(0).to(device)
attn = torch.tensor(attn).unsqueeze(0).to(device)
outputs = model(x, attention_mask=attn, masked_lm_labels=x)
```
**The error is**
```
RuntimeError: cublas runtime error : library not initialized at ../aten/src/THC/THCGeneral.cpp:216
```
When I try Albert for similar task everything works fine, but the Roberta family doesn't.
Could you please help with this issue? | 02-05-2020 09:28:07 | 02-05-2020 09:28:07 | Found solution in [#2509](https://github.com/huggingface/transformers/pull/2509).<|||||>Hi, indeed this is an error. This will be fixed once #3198 is merged. |
transformers | 2,740 | closed | T5 | the code is here
```python
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5WithLMHeadModel.from_pretrained('t5-small')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
print(tokenizer._tokenize("Hello, my dog is cute"))
print(input_ids)
print(input_ids.shape)
outputs = model(input_ids=input_ids, attention_mask=torch.tensor([5.]))
print(outputs[0].shape)
```
and the error is
```
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype)  # fp16 compatibility
UnboundLocalError: local variable 'extended_attention_mask' referenced before assignment
```
| 02-05-2020 08:09:37 | 02-05-2020 08:09:37 | Your attention_mask should be torch.tensor([[1, 1, 1, 1, 1, 1]]), i.e. one mask value per input token.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
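Put differently, a minimal corrected call (a sketch; `input_ids` as built in the snippet above):
```python
attention_mask = torch.ones_like(input_ids)  # one mask value per token, same shape as input_ids
outputs = model(input_ids=input_ids, attention_mask=attention_mask)
```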
|
transformers | 2,739 | closed | Development Infrastructure for ML Projects | Hi guys, I am sorry to post an issue that is a bit outside the scope of this project. But I have been a consistent watcher of the transformers project and it is excellent on how I can collaborate and coordinate with people and develop something.
But I am a rookie when it comes to working with people and get going with a project and I have been assigned a task to create reliable and scalable infrastructure, where a team can have space for research, development, test, deploy.
I have been dabbling around with bitbucket pipelines and docker but it would be helpful to get your opinions on it. | 02-05-2020 05:46:31 | 02-05-2020 05:46:31 | This is an interesting subject but is way too broad for discussing here<|||||>@julien-c any forum you know for having this discussion? |
transformers | 2,738 | closed | Fix GPT2 config set to trainable | There's currently a bug in the GPT2 model which prevents it from being saved. This is caused by setting the trainable parameter to the GPT2 config, which cannot be packaged later in the save pipeline. Gotta love python...
Here is a simple script which you can use to reproduce this bug (and check the fix):
```
from transformers import (TFGPT2Model)
if __name__ == '__main__':
_base_model = TFGPT2Model.from_pretrained("gpt2")
print(base_model._layers[0].trainable)
``` | 02-04-2020 22:12:23 | 02-04-2020 22:12:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=h1) Report
> Merging [#2738](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9e5b549b4d47678bdc74bc8f650e82cf25bfc245?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2738 +/- ##
=======================================
Coverage 74.09% 74.09%
=======================================
Files 93 93
Lines 15249 15249
=======================================
Hits 11298 11298
Misses 3951 3951
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2738/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.66% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=footer). Last update [9e5b549...5346295](https://codecov.io/gh/huggingface/transformers/pull/2738?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,737 | closed | Version 2.4.1 breaks run_lm_finetuning.py, version 2.3.0 runs fine | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
Maksed language modeling
https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py
The tasks I am working on is:
* [ x] my own task or dataset: (give details below)
I am running on this dataset (though I doubt the issue is with the dataset, just use any text file)
https://drive.google.com/open?id=18oogYKR-VCQlFyUaYcGfgDiKTrFtkTHn
## To reproduce
Steps to reproduce the behavior:
```
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
python run_lm_finetuning.py --train_data_file train.raw --output_dir /output --model_type 'bert' --mlm --model_name_or_path 'bert-base-uncased' --do_train
```
without cuda for different error message
```
python run_lm_finetuning.py --train_data_file train.raw --output_dir /output --model_type 'bert' --mlm --model_name_or_path 'bert-base-uncased' --do_train --no_cuda
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Error message when using CUDA
```
Epoch: 0% 0/1 [00:00<?, ?it/s]
Iteration: 0% 0/17 [00:00<?, ?it/s]/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=110 error=710 : device-side assert triggered
Traceback (most recent call last):
File "HFpretrain.py", line 771, in <module>
main()
File "HFpretrain.py", line 721, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "HFpretrain.py", line 325, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 1019, in forward
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2021, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1838, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:110
```
CPU error message
```
Epoch: 0% 0/1 [00:00<?, ?it/s]
Iteration: 0% 0/17 [00:00<?, ?it/s]Traceback (most recent call last):
File "HFpretrain.py", line 771, in <module>
main()
File "HFpretrain.py", line 721, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "HFpretrain.py", line 325, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 1019, in forward
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2021, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1838, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target -1 is out of bounds.
```
## Expected behavior
Should train as usual
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.4.1
- Platform: google colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): n/a
- Using GPU in script?: both cpu and gpu Tesla T4
- Using distributed or parallel set-up in script?: o
| 02-04-2020 21:53:39 | 02-04-2020 21:53:39 | Hi, can you check that the current version of `run_lm_finetuning` crashes on your side by pulling the latest repo version? The `run_lm_finetuning` script was updated 30 minutes ago in regard to that error.<|||||>Ah yes, it works after switching this line
`labels[~masked_indices] = -1 `
to this line
`labels[~masked_indices] = -100`<|||||>Glad it works, thanks for checking.<|||||>I wonder what this means for the rest of the code base. Are masked tokens now -100 instead of -1?<|||||>Yes, since [v2.4.0](https://github.com/huggingface/transformers/releases/tag/v2.4.0). The reason is explained in the "Ignored indices in PyTorch loss computing" section in the previous link.<|||||>> Yes, since [v2.4.0](https://github.com/huggingface/transformers/releases/tag/v2.4.0). The reason is explained in the "Ignored indices in PyTorch loss computing" section in the previous link.
where is the link?<|||||>The link is the [v2.4.0](https://github.com/huggingface/transformers/releases/tag/v2.4.0). You can click on it. |
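For reference, a minimal sketch of why `-100` is skipped (PyTorch's `CrossEntropyLoss` defaults to `ignore_index=-100`; the numbers are made up):
```python
import torch
import torch.nn as nn

loss_fct = nn.CrossEntropyLoss()                  # ignore_index defaults to -100
logits = torch.randn(4, 30522)                    # (num_tokens, vocab_size)
labels = torch.tensor([2023, -100, -100, 2003])   # positions set to -100 do not contribute
print(loss_fct(logits, labels))
```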
transformers | 2,736 | closed | TensorFlow XLM doesn't accept NumPy arrays for the attention mask | Convert NumPy attention mask to a TensorFlow tensor so that the mask creation doesn't crash
closes #2729 | 02-04-2020 20:24:15 | 02-04-2020 20:24:15 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=h1) Report
> Merging [#2736](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c67196b83a824df577742d32d38e9121d8a9285?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2736 +/- ##
==========================================
+ Coverage 74.09% 74.09% +<.01%
==========================================
Files 93 93
Lines 15249 15251 +2
==========================================
+ Hits 11298 11300 +2
Misses 3951 3951
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2736/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `90.47% <100%> (+0.06%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=footer). Last update [9c67196...3c9a47e](https://codecov.io/gh/huggingface/transformers/pull/2736?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi, any update on this PR?<|||||>After discussing it with @thomwolf, it seems I was mistaken when believing that our TensorFlow models should accept numpy inputs. They should be converted to TensorFlow inputs. We should update the documentation to reflect this. Closing this PR as unrelated to the doc changes. |
transformers | 2,735 | closed | test_attention_weights cleanup | No logic changes, just uses getattr to make code more readable. | 02-04-2020 19:01:18 | 02-04-2020 19:01:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=h1) Report
> Merging [#2735](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/86a0bb6e2117ad98141d92b700964aa0e73f8f49?src=pr&el=desc) will **decrease** coverage by `0.27%`.
> The diff coverage is `5.4%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2735 +/- ##
==========================================
- Coverage 74.09% 73.82% -0.28%
==========================================
Files 93 93
Lines 15248 15249 +1
==========================================
- Hits 11298 11257 -41
- Misses 3950 3992 +42
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `83.33% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `95.85% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `86.41% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `65.25% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.21% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.32% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `74.78% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `79.14% <ø> (ø)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.39% <ø> (ø)` | :arrow_up: |
| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/2735/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=footer). Last update [86a0bb6...ce4241a](https://codecov.io/gh/huggingface/transformers/pull/2735?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks @sshleifer! |
transformers | 2,734 | closed | pass langs parameter to certain XLM models | Adding an argument that specifies the language the SQuAD dataset is in so language-sensitive XLMs (e.g. `xlm-mlm-tlm-xnli15-1024`) don't default to language `0`.
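For context, a hedged sketch of how a `langs` tensor is usually built for XLM (assuming English data; the exact wiring inside the SQuAD script may differ):
```python
import torch
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
input_ids = torch.tensor([tokenizer.encode("Who wrote Hamlet?")])
langs = torch.full_like(input_ids, tokenizer.lang2id["en"])  # same language id at every position
# outputs = model(input_ids, langs=langs)
```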
Allows resolution of issue #1799 . | 02-04-2020 18:18:19 | 02-04-2020 18:18:19 | This seems to be failing a line length check, but my lines are not the longest in the file -- let me know if I should edit (the whole file) to conform.<|||||>Hi, thanks for opening this pull request! For the code quality to pass, you can check what's wrong with `make quality` at the root of the repo, and fix the black/isort issues with `make style`. Do you mind running the latter command and pushing your changes?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=h1) Report
> Merging [#2734](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c67196b83a824df577742d32d38e9121d8a9285?src=pr&el=desc) will **decrease** coverage by `1.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2734 +/- ##
=========================================
- Coverage 74.09% 73% -1.09%
=========================================
Files 93 93
Lines 15249 15249
=========================================
- Hits 11298 11133 -165
- Misses 3951 4116 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `55.39% <0%> (-9.86%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.94% <0%> (-2.28%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.06% <0%> (-1.33%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=footer). Last update [9c67196...6070974](https://codecov.io/gh/huggingface/transformers/pull/2734?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for introducing me to new code check tools! Looks like we're good?<|||||>Great, thank you for doing the changes! |
transformers | 2,733 | closed | Save model wrapped in Keras | Hi all,
Sorry for my naive question but I am trying to save my keras model (<class 'tensorflow.python.keras.engine.training.Model'>) in which I use TFBertModel() function as an hidden layer. To do that I use the save() function provided by the tf.keras package.
But I got this error:
```python
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-13-3b315f7219da> in <module>()
----> 1 model.save('model_weights.h5')
8 frames
/tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/network.py in get_config(self)
915 def get_config(self):
916 if not self._is_graph_network:
--> 917 raise NotImplementedError
918 return copy.deepcopy(get_network_config(self))
919
NotImplementedError:
```
The error can be reproduce from my colab : https://colab.research.google.com/drive/18HYwffkXCylPqeA-8raL82vfwOjb-aLP
And another question is how should I call this model for prediction ?
Thx for your help! | 02-04-2020 17:26:38 | 02-04-2020 17:26:38 | Same problem.<|||||>On which version are you running? Is it possible that [this fix](https://github.com/huggingface/transformers/pull/3103) fixed your issue? Can you try installing from master to check?<|||||>This doesn't look like the same thing I was fixing in #3103 so I doubt that that helped.<|||||>In particular, from `Network` docstring:
```
Two types of `Networks` exist: Graph Networks and Subclass Networks. Graph
networks are used in the Keras Functional and Sequential APIs. Subclassed
networks are used when a user subclasses the `Model` class. In general,
more Keras features are supported with Graph Networks than with Subclassed
Networks, specifically:
- Model cloning (`keras.models.clone`)
- Serialization (`model.get_config()/from_config`, `model.to_json()/to_yaml()`
- Whole-model saving (`model.save()`)
```
Based on the traceback, apparently the model is a subclass model, so it needs to override `get_config` in order to support serialization. (The fix in #3103 is for a problem with using `TF*MainLayer` classes within a Keras model, so it doesn't address this.)<|||||>@gthb so is there any way to save the models wrapped in keras?<|||||>> @gthb so is there any way to save the models wrapped in keras?
I'm sure there's _some_ way, just a question of how much custom work you have to do (probably some, given the above quote).
But are you sure you need to be using `TFBertModel` and not `TFBertMainLayer`, for your hidden layer? `TFBertModel` is literally just this (plus docstrings):
```python
class TFBertModel(TFBertPreTrainedModel):
def __init__(self, config, *inputs, **kwargs):
super().__init__(config, *inputs, **kwargs)
self.bert = TFBertMainLayer(config, name="bert")
def call(self, inputs, **kwargs):
outputs = self.bert(inputs, **kwargs)
return outputs
```
... so unless you need something in particular from `TFBertModel`'s superclasses, maybe using `TFBertMainLayer` directly would simplify things for you?<|||||>Thanks @gthb for your reply. I've updated my colab and now it works after I changed the following line:
`model=TFBertModel.from_pretrained('bert-base-cased', config=config)`
to:
`model=TFBertMainLayer(config=config)`
however I can't call the function from_pretrained. Is the class implicitly set by providing the config options from BERTConfig ?
Another point, I am facing a problem during the training of the model when it wraps in keras.
Using:
`embedding = model([word_inputs, mask_inputs, seg_inputs])[0]`
I get:
`tensorflow:Gradients do not exist for variables ['tf_bert_main_layer/pooler/dense/kernel:0', 'tf_bert_main_layer/pooler/dense/bias:0'] when minimizing the loss.`
I would like to use layers from transformers combined with a CNN (require 3D tensors as input) but in order to keep weights learned by the model I tried the pooler output (which provides 2D tensors): `model([word_inputs, mask_inputs, seg_inputs])[1]`
but it doesn't fit with CNN:
`ValueError: Input 0 of layer input is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 768]`
Do you have an idea how I should reshape it to fit with a conv1D layer ?
The error can be reproduce from my colab : https://colab.research.google.com/drive/18HYwffkXCylPqeA-8raL82vfwOjb-aLP<|||||>> I can't call the function from_pretrained. Is the class implicitly set by providing the config options from BERTConfig ?
I'm guessing you mean that `TFBertMainLayer` does not have a `from_pretrained` method. Yep, but `BertConfig` does, so this works:
```
from transformers import BertConfig, TFBertMainLayer
config_name = "bert-base-uncased" # for instance
config = BertConfig.from_pretrained(config_name)
main_layer = TFBertMainLayer(config)
```
> Do you have an idea how I should reshape it to fit with a conv1D layer ?
Isn't your Conv1D layer intended to convolve over the token sequence? The pooled output produces a single vector representing the whole sequence, not separate vectors for each token of the sequence. So you are probably mistaken in trying to use the pooled output (or I'm not understanding your intent).<|||||>Yes you've right I've misunderstood the nature of the pooler output (probably I've been misleaded by these related topics:[#2256](https://github.com/huggingface/transformers/issues/2256) and [#1727](https://github.com/huggingface/transformers/issues/1727)). So when I am using the last_hidden_state I am getting this warning:
`
tensorflow:Gradients do not exist for variables ['tf_bert_main_layer/pooler/dense/kernel:0', 'tf_bert_main_layer/pooler/dense/bias:0'] when minimizing the loss.`
but the model seems train however, when I load it I am getting:
```
File "/home/X/", line 69, in train
loaded_model = tf.keras.models.load_model(dirModel+self.options.t+'cnn.h5')
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/save.py", line 146, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 193, in load_model_from_hdf5
model._make_train_function()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 2057, in _make_train_function
params=self._collected_trainable_weights, loss=self.total_loss)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 503, in get_updates
grads = self.get_gradients(loss, params)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py", line 397, in get_gradients
"K.argmax, K.round, K.eval.".format(param))
ValueError: Variable <tf.Variable 'tf_bert_main_layer_1/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
```
Here is the model I used:
```
# Define inputs
word_inputs = tf.keras.layers.Input(shape=(max_seq_length,), name='word_inputs', dtype='int32')
mask_inputs = tf.keras.layers.Input(shape=(max_seq_length,), name='mask_inputs', dtype='int32')
seg_inputs = tf.keras.layers.Input(shape=(max_seq_length,), name='seg_inputs', dtype='int32')
# Call BERT model
config_name = "bert-base-uncased" # for instance
config = BertConfig.from_pretrained(config_name)
main_layer = TFBertMainLayer(config)
embedding = main_layer([word_inputs, mask_inputs, seg_inputs])[0]
conv=tf.keras.layers.Conv1D(128, kernel_size=5, activation='relu', name="input")(embedding)
pooling = tf.keras.layers.MaxPooling1D()(conv)
lstm = tf.keras.layers.LSTM(128)(pooling)
dense = tf.keras.layers.Dense(64, activation='relu')(lstm)
# Final output
outputs = tf.keras.layers.Dense(1, activation='sigmoid', name='outputs')(dense)
# Compile model
model = tf.keras.Model(inputs=[word_inputs, mask_inputs, seg_inputs], outputs=outputs)
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
model.save('cnn.h5')
loaded_model = tf.keras.models.load_model('cnn.h5')
```
So what am I doing wrong?<|||||>@gthb
> ... so unless you need something in particular from TFBertModel's superclasses, maybe using TFBertMainLayer directly would simplify things for you?
Simply initializing `TFBertMainLayer` as
```
main_layer = TFBertMainLayer(config)
```
won't load pretrained parameters as opposed to `TFBertModel.from_pretrained(...)`, right?
<|||||>> won't load pretrained parameters as opposed to TFBertModel.from_pretrained(...), right?
Oops, yes, there's that little thing! 😄 You can load the weights e.g. like this:
```python
bert_weights_file = TFBertPreTrainedModel.pretrained_model_archive_map[config_name]
bert_weights_file = cached_path(bert_weights_file)
model.load_weights(bert_weights_file, by_name=True)
```<|||||>> > won't load pretrained parameters as opposed to TFBertModel.from_pretrained(...), right?
>
> Oops, yes, there's that little thing! You can load the weights e.g. like this:
>
> ```python
> bert_weights_file = TFBertPreTrainedModel.pretrained_model_archive_map[config_name]
> bert_weights_file = cached_path(bert_weights_file)
> model.load_weights(bert_weights_file, by_name=True)
> ```
I'm getting this error, using transformers 2.11.0 version :
```python
AttributeError: type object 'TFBertPreTrainedModel' has no attribute 'pretrained_model_archive_map'
```
I'm using this syntax in my code :
```python
config = BertConfig.from_pretrained(config_name)
bert_weights_file = TFBertPreTrainedModel.pretrained_model_archive_map[config_name]
```<|||||>@PoriNiki yeah, from a quick `git log -S pretrained_model_archive_map` that attribute went away in https://github.com/huggingface/transformers/pull/4636 “Kill model archive maps” — merged to master in https://github.com/huggingface/transformers/commit/d4c2cb402d6674211726fd5f4803d1090664e438 and first released in v2.11.0.
By staring at `TFPreTrainedModel.from_pretrained` a bit, the right way ought to be something like:
```
from transformers.file_utils import hf_bucket_url, TF2_WEIGHTS_NAME
bert_weights_file_url = hf_bucket_url(config_name, filename=TF2_WEIGHTS_NAME)
bert_weights_file = cached_path(bert_weights_file_url)
```
(not tested)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I still have this issue. Can't save my model, only saving weight<|||||>For other people (@ch-hristov) still having trouble with this, I wrote up an explanation and workarounds on stackoverflow: https://stackoverflow.com/questions/62482511/tfbertmainlayer-gets-less-accuracy-compared-to-tfbertmodel/64000378#64000378
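One way to get pretrained weights into a Keras-saveable model is roughly the following (a sketch only; `bert-base-uncased`, the sequence length and the binary head are placeholders):
```python
import tensorflow as tf
from transformers import TFBertModel

# Reuse the pretrained main layer so the weights are loaded, but the surrounding
# graph is a plain Keras model that can be saved and loaded normally.
main_layer = TFBertModel.from_pretrained("bert-base-uncased").bert  # a TFBertMainLayer

input_ids = tf.keras.layers.Input(shape=(128,), dtype="int32", name="input_ids")
attention_mask = tf.keras.layers.Input(shape=(128,), dtype="int32", name="attention_mask")

sequence_output = main_layer([input_ids, attention_mask])[0]
pooled = tf.keras.layers.GlobalMaxPooling1D()(sequence_output)
output = tf.keras.layers.Dense(1, activation="sigmoid")(pooled)

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=output)
```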
It seems like it would be useful to smooth out this workflow, as many people using keras will run into this issue when they try to save their model. @gthb What do you think about adding something like `from_pretrained` to `MainLayer`, and pulling out the logic from `TFPreTrainedModel.from_pretrained` to support both? <|||||>Sounds good, but I have just switched jobs and am not using transformers, don't really have the cycles to help, sorry! <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi,
I'm also encountering this issue and couldn't make the solution by @dmlicht work yet.
Can anyone provide further feedback on this?
Also, will this issue be addressed by the HF team? |
transformers | 2,732 | closed | Error for run_lm_finetuning.py (CUDA error: device-side assert triggered) | ### Reporting Error
I updated transformers from 2.3.x to 2.4.1 today, and I'm facing a runtime error which is RuntimeError: CUDA error: device-side assert triggered.
I reviewed recent updates and found out that the commit [Follow up 213] is causing the error.
Below are the changes from the commits:
- labels[~masked_indices] = -100 # We only compute loss on masked tokens
+ labels[~masked_indices] = -1 # We only compute loss on masked tokens
The changes are related to the calculation of masked language model loss, so the problem seems to occur when args.mlm is True. (If I change the value -1 to -100, it works fine)
Any suggestions?
### Sys Info
OS: Windows 10
Transformers: 2.4.1
PyTorch: 1.4.0
Tensorflow: 2.1.0
### Full Stack Trace
C:\Users\USER\Anaconda3\python.exe C:/Users/USER/PycharmProjects/Testing/huggingface/run_lm_finetuning.py --output_dir=output --model_type=roberta --model_name_or_path=roberta-base --do_train --train_data_file=../data/wikitext-2/wiki.train.raw --do_eval --eval_data_file=../data/wikitext-2/wiki.test.raw --evaluate_during_training --mlm --per_gpu_train_batch_size=1 --per_gpu_eval_batch_size=1
2020-02-04 10:46:01.194260: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
02/04/2020 10:46:05 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
02/04/2020 10:46:05 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-config.json from cache at C:\Users\USER\.cache\torch\transformers\e1a2a406b5a05063c31f4dfdee7608986ba7c6393f7f79db5e69dcd197208534.a7ab0e5de2d8321d6d6a15b199110f2c99be72976b7d151423cb8d8c261a13b6
02/04/2020 10:46:05 - INFO - transformers.configuration_utils - Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"do_sample": false,
"eos_token_ids": 0,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 1,
"use_bfloat16": false,
"vocab_size": 50265
}
02/04/2020 10:46:05 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json from cache at C:\Users\USER\.cache\torch\transformers\d0c5776499adc1ded22493fae699da0971c1ee4c2587111707a4d177d20257a2.ef00af9e673c7160b4d41cfda1f48c5f4cba57d5142754525572a846a1ab1b9b
02/04/2020 10:46:05 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt from cache at C:\Users\USER\.cache\torch\transformers\b35e7cd126cd4229a746b5d5c29a749e8e84438b14bcdb575950584fe33207e8.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
02/04/2020 10:46:05 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-pytorch_model.bin from cache at C:\Users\USER\.cache\torch\transformers\228756ed15b6d200d7cb45aaef08c087e2706f54cb912863d2efe07c89584eb7.49b88ba7ec2c26a7558dda98ca3884c3b80fa31cf43a1b1f23aef3ff81ba344e
02/04/2020 10:46:10 - INFO - transformers.modeling_utils - Weights of RobertaForMaskedLM not initialized from pretrained model: ['lm_head.decoder.bias']
02/04/2020 10:46:12 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=510, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='../data/wikitext-2/wiki.test.raw', evaluate_during_training=True, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='roberta-base', model_type='roberta', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=1, per_gpu_train_batch_size=1, save_steps=500, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name=None, train_data_file='../data/wikitext-2/wiki.train.raw', warmup_steps=0, weight_decay=0.0)
02/04/2020 10:46:12 - INFO - __main__ - Loading features from cached file ../data/wikitext-2\roberta_cached_lm_510_wiki.train.raw
02/04/2020 10:46:12 - INFO - __main__ - ***** Running training *****
02/04/2020 10:46:12 - INFO - __main__ - Num examples = 4740
02/04/2020 10:46:12 - INFO - __main__ - Num Epochs = 1
02/04/2020 10:46:12 - INFO - __main__ - Instantaneous batch size per GPU = 1
02/04/2020 10:46:12 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 1
02/04/2020 10:46:12 - INFO - __main__ - Gradient Accumulation steps = 1
02/04/2020 10:46:12 - INFO - __main__ - Total optimization steps = 4740
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/4740 [00:00<?, ?it/s]C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "C:/Users/USER/PycharmProjects/Testing/huggingface/run_lm_finetuning.py", line 790, in <module>
main()
File "C:/Users/USER/PycharmProjects/Testing/huggingface/run_lm_finetuning.py", line 740, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "C:/Users/USER/PycharmProjects/Testing/huggingface/run_lm_finetuning.py", line 356, in train
loss.backward()
File "C:\Users\USER\Anaconda3\lib\site-packages\torch\tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\USER\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA error: device-side assert triggered
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/4740 [00:00<?, ?it/s]
Process finished with exit code 1
| 02-04-2020 15:56:50 | 02-04-2020 15:56:50 | Hi, thank you for your report. As discussed in #2719, 3bf5417 should have fixed it. Please let me know if it fixes your issue.<|||||>Yes, if I change `labels[~masked_indices] = -1` to `labels[~masked_indices] = -100`, then it works fine for both lm (GPT) and mlm (BERT-like).
But I'm worried about 3bf5417 because I think these changes were made to fix #2718, which is _Masked indices should have -1 and not -100_.
<|||||>The [PyTorch CrossEntropyLoss](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss) method has a default `ignore_index` set to -100. When no `ignore_index` is specified, it is correct to assume it is set to -100.
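For illustration, a minimal sketch of that behaviour:
```python
import torch
import torch.nn as nn

loss_fct = nn.CrossEntropyLoss()            # ignore_index defaults to -100
logits = torch.randn(4, 10)                 # (num_positions, num_classes)
labels = torch.tensor([3, -100, 7, -100])   # positions labeled -100 contribute no loss
loss = loss_fct(logits, labels)
```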
None of the CrossEntropy losses defined in DistilBERT have a different `ignore_index` specified, so it is correct to assume that `-100` should be used in all cases. This is the case for all models in the library since v2.4.0.<|||||>Then, I think this case is cleared. Thanks for your help :) |
transformers | 2,731 | closed | Masked LM and TFBertForSequenceClassification | Hello,
Is it correct to say that fine-tuning a TFBertForSequenceClassification model is the same as fine-tuning BERT's MLM and, in addition, a classification layer at the same time?
Thanks! | 02-04-2020 15:29:46 | 02-04-2020 15:29:46 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,730 | closed | QuickStart code error | In the model2model quickstart example, I'm getting an error here:
`outputs = model(question_tensor, answer_tensor, decoder_lm_labels=labels_tensor)`
With the following message:
`RuntimeError: The size of tensor a (8) must match the size of tensor b (768) at non-singleton dimension 3`
Any ideas? | 02-04-2020 15:19:56 | 02-04-2020 15:19:56 | Hi! Indeed this was an error, it should have been fixed with https://github.com/huggingface/transformers/commit/90ab15cb7a8fcf8bf58c05453ddf1aa6a4fa00c1.
Could you try installing from source:
```py
pip install git+https://github.com/huggingface/transformers
```
and let me know if it fixes your issue?<|||||>Install was successful.
But now I get the following:
> Traceback (most recent call last):
> File "model2model.py", line 82, in <module>
> model = Model2Model.from_pretrained('fine-tuned-weights')
> File "/path/venvs/nlp/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py", line 323, in from_pretrained
> raise ValueError("Only the Bert model is currently supported.")
> ValueError: Only the Bert model is currently supported.
Traded one error for another.
<|||||>```py
model = Model2Model.from_pretrained('fine-tuned-weights')
```
Do you have a folder called `fine-tuned-weights` in your directory?<|||||>To wrap this up: Just to confirm, there is no existing 'fine-tuned-weights' pretrained model.
'fine-tuned-weights' is just a name for a hypothetical pretrained model.<|||||>Thank you, this commit works fine and fixes the issue.<|||||>I'm still a bit confused following along with the Quickstart guide and trying to get a fine-tuned Model2Model to work.
First of all, as a minor note, the suggested line in the guide
`model = Model2Model.from_pretrained('fine-tuned-weights')`
won't work _even if a folder with that name exists_, as `from_pretrained` actually checks if this model path or name contains the string "bert" (among other things, see [here](https://github.com/huggingface/transformers/blob/e693cd1e877aa191d3317faed33e87d1558c9406/src/transformers/modeling_encoder_decoder.py#L282)). I understand that this is more of a placeholder name than anything else, but it might still be confusing.
Then, let's assume I saved a fine-tuned Model2Model instance via `model.save_pretrained(PATH)` (where this PATH now contains the string "bert"). The suggested loading of this via `from_pretrained`will still fail: A saved Model2Model is actually split into encoder and decoder, so simply using the top directory containing both for loading will obviously fail. Thus, I only have the option of either loading the encoder _or_ decoder model, which will then, in the newly loaded Model2Model instance, be used as _both the encoder and decoder_, as this is how Model2Model is loaded: a single (BERT-)model used as encoder and decoder. But that can't be correct for _fine-tuned_ versions of this model, can it? Or am I just missing something obvious here?
<|||||>Hi @redfarg, thanks for your comment. This is misleading indeed. We're in the process of adding BART to the library (@sshleifer); improving the experience with encoder-decoder architectures/Model2Model is part of the roadmap. |
transformers | 2,729 | closed | Attention Mask for TFXLM Model doesn't work | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLM
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import *
import numpy as np
tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-enfr-1024')
model = TFXLMModel.from_pretrained('xlm-mlm-enfr-1024')
text = "Good evening."
input_ids = tokenizer.encode(text, add_special_tokens=True)
last_hidden_states = model(np.array([input_ids]), attention_mask=np.ones_like(np.array(input_ids)))
```
Error output:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/transformers/modeling_tf_xlm.py", line 589, in call
outputs = self.transformer(inputs, **kwargs)
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/transformers/modeling_tf_xlm.py", line 348, in call
mask, attn_mask = get_masks(slen, lengths, self.causal, padding_mask=attention_mask)
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/transformers/modeling_tf_xlm.py", line 88, in get_masks
tf.debugging.assert_equal(shape_list(mask), [bs, slen])
File "/Users/daksh/miniconda3/envs/rasa-tf2/lib/python3.6/site-packages/transformers/modeling_tf_utils.py", line 546, in shape_list
static = x.shape.as_list()
AttributeError: 'tuple' object has no attribute 'as_list'
```
Works fine if I attention mask is removed
## Expected behavior
`last_hidden_states` is a tuple of type `tf.Tensor`
## Environment info
- `transformers` version: 2.3.0
- Platform: OSX
- Python version: 3.6.5
- Tensorflow version (GPU?): 2.1.0(CPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-04-2020 14:47:32 | 02-04-2020 14:47:32 | Hi! There seems to be an error in the current implementation where it doesn't accept NumPy arrays, only TensorFlow arrays. I'm working on it in [the branch fix-tf-xlm](https://github.com/huggingface/transformers/tree/fix-tf-xlm). In the meantime, you can use a tf.Tensor instead and it should work fine.
Please be aware that your attention mask should be defined as `np.ones_like(np.array([input_ids]))` instead of your current `np.ones_like(np.array(input_ids))` or else it'll be a dimension short.
The following code is your code modified to run:
```py
from transformers import *
import numpy as np
import tensorflow as tf
tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-enfr-1024')
model = TFXLMModel.from_pretrained('xlm-mlm-enfr-1024')
text = "Good evening."
input_ids = tokenizer.encode(text, add_special_tokens=True)
last_hidden_states = model(np.array([input_ids]), attention_mask=tf.constant(np.ones_like(np.array([input_ids]))))
```<|||||>Hi @LysandreJik When can we expect your fix to be merged and released in an official release?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi, is this bug now fixed?
Thanks! |
transformers | 2,728 | closed | RuntimeError: expected dtype Float but got dtype Long - run_lm_finetuning.py | # 🐛 Bug
## Information
I'm using my Camembert-based language model on Italian language (built from scratch).
I'm trying to use [run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) to fine-tune my language model on a dataset.
@julien-c suggested my to add `--line_by_line` to my launch script, beacuse without that flag, the program blocked on the tokenization of the training set. That advide let the program work. But after some hours, the program crashes with a strange Runtime Error: in the assignment of 10 % of random words to masks at [line 218](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L218) in the _mask_tokens()_ function:
```python
# 10% of the time, we replace masked input tokens with random word
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random] #it crashes here
```
The error was this:
`RuntimeError: expected dtype Float but got dtype Long
`
It's a strange error, beacuse it crashes after several minutes or hours after the launch. The time of the proper functioning of program seems random: sometimes the script completes 3/4 epochs and then it crashes, sometimes it crashes before the end of the first epoch.
## To reproduce
I launched this:
```bash
python3 run_lm_finetuning.py \
--train_data_file /path/to/train.txt \
--eval_data_file /path/to/eval.txt \
--output_dir /path/to/output \
--mlm \
--do_train \
--do_eval \
--model_type camembert \
--model_name_or_path /path/to/my/model \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 8 \
--overwrite_output_dir \
--overwrite_cache \
--max_steps 500000 \
--block_size 128 \
--save_steps 50000 \
--eval_all_checkpoints \
--line_by_line
```
I got this error in the middle of 6th epoch:
```
File "run_lm_finetuning.py", line 801, in <module>51:22, 4.94it/s]
main()
File "run_lm_finetuning.py", line 750, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 342, in train
inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch)
File "run_lm_finetuning.py", line 222, in mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: expected dtype Float but got dtype Long
Epoch: 55%|█████▍ | 6/11 [20:26:33<17:02:07, 12265.60s/it]
Iteration: 69%|██████▊ | 33378/48603 [1:47:45<49:09, 5.16it/s]
```
I'm managing to run the code anyway, restarting the program using the flag `--model_name_or_path` and giving the last saved checkpoint rather then the original language model every time it crashes.
I printed `inputs[indices_random]` and `random_words[indices_random]` beacause are the two variables in the line in which the program crashes:
- The code crashes with this 2 variables:
```
inputs[indices_random] = tensor([1173.])
Random_words[indices_random] = tensor([4220])
Traceback (most recent call last):
File "run_lm_finetuning.py", line 797, in <module>
main()
File "run_lm_finetuning.py", line 747, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 349, in train
inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch)
File "run_lm_finetuning.py", line 229, in mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: expected dtype Float but got dtype Long
Epoch: 60%|██████ | 3/5 [14:31:21<9:40:54, 17427.18s/it]
```
- while before the crash the code enters the _mask_tokens()_ function correctly and prints lines like these:
```
inputs[indices_random] = tensor([19807, 78, 51, 1204])
Random_words[indices_random] = tensor([14538, 15381, 30255, 3778])
```
In my opinion the only difference is that **tensor([1173.])** in the crash example contains a non-integer value (there is a '.' at the end of the number), while in all the other cases it does not. Maybe a cast of `inputs[indices_random]` would make it work.
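Just to make the idea concrete, the kind of cast I mean would be something like this (a sketch, not tested):
```python
# keep the token-id tensor integer-typed before the masked assignment
inputs = inputs.long()  # equivalent to inputs.type(dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
```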
## Environment info
- `transformers` version: 2.4.1
- Platform: Linux-4.4.0-108-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.3.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No | 02-04-2020 11:49:18 | 02-04-2020 11:49:18 | @paulthemagno thanks for creating this. My environment is exactly the same, except I am running the lm fine tuner on a Python 3.7.3 environment. @LysandreJik asked for more information. A couple of more inputs although I am not sure if this is going to help. This problem is not happening when I subset my dataset and run the code. Neither on training nor evaluation. So this tells me there is a data problem somewhere.
So I caught the runtime error and output the results of those data objects for my dataset on line 120-122 that is posted above. Only unusual thing I see is that one of the examples for this batch where the code errors out is all zeros for the input tensor.
```
tensor([[ 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
[ 2004., 2019., 5587., 10497., 2819., 2000., 1996., 2194., 2015.,
2128., 8569., 28200., 10896., 1010., 3531., 2421., 1037., 2862.,
1997., 2169., 8875., 2073., 1996., 2194., 103., 2015., 2007.,
1996., 103., 1997., 1996., 2060., 4243., 2000., 103., 8946.,
3388., 1012., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.],
...
```
My data is split as 1) each sentence in one line, 2) and the documents are split by an empty line as recommended in the documentation. I looked at that particular document but I do not see anything unusual but maybe some of this provide some clues. My hunch was that there is a non-ascii character which I am cleaning up, or maybe a dash or underscore repeating many many times for that particular example but if my eyes are not failing me, I can't find that in the dataset for that batch.
Thank you all...
<|||||>Hi, I'm looking into this problem right now, thank you for providing so much helpful information!
I could set up an experiment where I would get the same error, and I patched it with a cast as @paulthemagno recommended.
It is visible in 1ebfeb7. This should hopefully patch your issue, but as I don't have your particular dataset I can't verify first hand. Do you mind letting me know if it fixes it?<|||||>> Hi, I'm looking into this problem right now, thank you for providing so much helpful information!
> I could set up an experiment where I would get the same error, and I patched it with a cast as @paulthemagno recommended.
>
> It is visible in [9c67196](https://github.com/huggingface/transformers/commit/9c67196b83a824df577742d32d38e9121d8a9285). This should hopefully patch your issue, but as I don't have your particular dataset I can't verify first hand. Do you mind letting me know if it fixes it?
Thanks, that patch or code block does not reflect the change, maybe a typo on the commit hash? <|||||>Indeed, sorry, edited.<|||||>@LysandreJik happy to confirm that it worked. I patched my own script with the added line for casting the inputs and it ran through the whole corpus 160K records and outputted 5.6 perplexity score. I am assuming it worked. Thank you very much...
Ozan<|||||>Fantastic! Thank you for checking!<|||||>> Fantastic! Thank you for checking!
You're welcome. I am glad I could help. By the way, out of topic, could you shed some light on why my input tensors are truncated to length = 51 as you can see in my original post above. I don't see where I set that to 51 nor a hard code somewhere. Here are my script arguments:
```
python run_lm_finetuning.py \
--train_data_file /path/to/data \
--eval_data_file /path/to/eval_file \
--output_dir /path/fine_tuned/bert_uncased_lm \
--mlm \
--do_train \
--do_eval \
--cache_dir /cache_dir \
--model_type bert \
--model_name_or_path bert-base-uncased \
--per_gpu_train_batch_size 16 \
--gradient_accumulation_steps 2 \
--per_gpu_eval_batch_size 16 \
--block_size 256 \
--eval_all_checkpoints \
--line_by_line \
--fp16
```
As far as I understand, block_size is the max sequence length after tokenization, so where is 51 coming from? This might be a stupid question, but I am just trying to avoid making a gross error and get a little bit more of an understanding of the code.
Ozan
<|||||>That seems weird, indeed, but it's hard for me to debug without having more information about your dataset. Since you're using the `--line_by_line` flag, it should be building tensors according to the line returns in your dataset. Is it possible 51 is the maximum length of a sequence for that specific batch, so it pads up to 51 for the rest of the batch?<|||||>Yes, that must be it, I checked some random batches and the length for the input tensors varies from batch to batch. I apologize for sidetracking this thread. Seemed like while I had you and the data above, I would get a quick answer. thank you again. <|||||>No worries, glad I could help.<|||||>> Hi, I'm looking into this problem right now, thank you for providing so much helpful information!
> I could set up an experiment where I would get the same error, and I patched it with a cast as @paulthemagno recommended.
>
> It is visible in [1ebfeb7](https://github.com/huggingface/transformers/commit/1ebfeb79469d544a2bd817aa32c77e0514485ff9). This should hopefully patch your issue, but as I don't have your particular dataset I can't verify first hand. Do you mind letting me know if it fixes it?
Thanks to all. I had already launched the code before you wrote this message, with the additional line `inputs = inputs.type(dtype=torch.long)` without the _clone_ method. It has worked, but I think it is better to restart from 0. Also because, re-launching the code from the last saved checkpoint (before the crash), I noticed that the first new checkpoint had a spike in the perplexity and after that it went back to decreasing, so better to restart.
Anyway the code worked with my change, so I think also with yours, which is more correct :) |
transformers | 2,727 | closed | XLM Roberta token_type_ids bug with batch_encode_plus | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLM Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import XLMRobertaTokenizer, XLMRobertaModel
name = "xlm-roberta-base"
tokenizer = XLMRobertaTokenizer.from_pretrained(name)
model = XLMRobertaModel.from_pretrained(name)
x = tokenizer.batch_encode_plus(
["foo", "bar bar bar"], add_special_tokens=True, return_tensors="pt"
)
model(**x)
```
<details><summary>Output</summary>
<p>
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-33-3743974223ad> in <module>
7 ["foo", "bar bar bar"], add_special_tokens=True, return_tensors="pt"
8 )
----> 9 model(**x)
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
797
798 embedding_output = self.embeddings(
--> 799 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
800 )
801 encoder_outputs = self.encoder(
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
62
63 return super().forward(
---> 64 input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
65 )
66
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
189 inputs_embeds = self.word_embeddings(input_ids)
190 position_embeddings = self.position_embeddings(position_ids)
--> 191 token_type_embeddings = self.token_type_embeddings(token_type_ids)
192
193 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/Library/Caches/pypoetry/virtualenvs/camphr-v19AnSgn-py3.7/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: index out of range: Tried to access index 1 out of table with 0 rows. at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
</p>
</details>
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
No error
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.4.1
- Platform: OS X
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (no GPU)
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 02-04-2020 11:19:12 | 02-04-2020 11:19:12 | # Note
- No error occurs for other models (e.g. `bert-base-cased`)
- I think the configuration of `xlm-roberta-base` is incorrect:
```
>>> cfg = XLMRobertaConfig.from_pretrained("xlm-roberta-base")
>>> cfg.type_vocab_size
1 # 2 is correct?
```<|||||>> * I think the configuration of `xlm-roberta-base` is incorrect:
>
>
> ```
> >>> cfg = XLMRobertaConfig.from_pretrained("xlm-roberta-base")
> >>> cfg.type_vocab_size
> 1 # 2 is correct?
> ```
No the configuration is correct. The offical [XLM-RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/xlmr) doesn't have any token_type_ids:
```
...
(decoder): RobertaEncoder(
(sentence_encoder): TransformerSentenceEncoder(
(embed_tokens): Embedding(250002, 1024, padding_idx=1)
(embed_positions): LearnedPositionalEmbedding(514, 1024, padding_idx=1)
(layers)
...
```
The problem here is that encode_plus produces model independent token_type_ids. I'm currently working on a fix (#2702). You can just replace the produced token_type_ids for now with:
`x = {key:value for (key,value) in x.items() if key != 'token_type_ids'}`
<|||||>Hi! This will work once #3198 is merged. Please note, however, that the following:
```py
x = tokenizer.batch_encode_plus(
["foo", "bar bar bar"], add_special_tokens=True, return_tensors="pt"
)
```
will not work as your two sequences, "foo" and "bar bar bar", once tokenized, are not of equal length. To ensure this gets tokenized, you will need to pass `pad_to_max_length=True` to `batch_encode_plus`:
```py
x = tokenizer.batch_encode_plus(
["foo", "bar bar bar"], add_special_tokens=True, return_tensors="pt", pad_to_max_length=True
)
``` |
transformers | 2,726 | closed | KeyError at ids.append(self.vocab[token]) in convert_tokens_to_ids(self, tokens) | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
KeyError Traceback (most recent call last)
<ipython-input-20-dfb0a32a8e67> in <module>()
19 for mask_pos in mask_positions:
20 candidates = options[num]
---> 21 candidates_ids = tokenizer.convert_tokens_to_ids(candidates)
22 token_ids = tokenizer.convert_tokens_to_ids(tokenized_text)
23 tokens_tensor = torch.tensor([token_ids])
~/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py in convert_tokens_to_ids(self, tokens)
119 ids = []
120 for token in tokens:
--> 121 ids.append(self.vocab[token])
122 if len(ids) > self.max_len:
123 logger.warning(
KeyError: 'persuading'
How can I solve this KeyError problem? Thank you. | 02-04-2020 09:41:20 | 02-04-2020 09:41:20 | The token that you are trying to convert to ids doesn't exist. 'persuading' is a long word, so it's likely that it is not present as such in the vocab. Instead you'll have to tokenize it first into subword units.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
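As a small illustration of that suggestion (a sketch; the exact subword split depends on the vocab):
```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

tokens = tokenizer.tokenize("persuading")       # the word is split into in-vocab subword pieces
ids = tokenizer.convert_tokens_to_ids(tokens)   # no KeyError, since every piece is in the vocab
```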
|
transformers | 2,725 | closed | add TinyBERT? | # 🌟 New model addition
## Model description
TinyBERT is a smaller version of the base BERT model. It uses transformer distillation (a type of knowledge distillation) to transfer the knowledge encoded in a large "teacher" BERT to a small "student" TinyBERT. It is empirically effective and achieves more than 96% of the performance of the teacher BERT-base on the GLUE benchmark, while being 7.5x smaller and 9.4x faster at inference. TinyBERT is also significantly better than state-of-the-art baselines on BERT distillation, with only ~28% of their parameters and ~31% of their inference time. Here I have a feature request: add the pretrained weights of TinyBERT after general learning from https://github.com/huawei-noah/Pretrained-Language-Model, and the model for both TF 2.0 and PyTorch. I think the transformer distillation method should be introduced too.
https://arxiv.org/pdf/1909.10351.pdf
<!-- Important information -->
## Open source status
* [x] the model implementation is available: (give details)
https://github.com/huawei-noah/Pretrained-Language-Model, only pytorch is available in my knowledge at the moment
https://github.com/koursaros-ai/nboost
* [x] the model weights are available: (give details)
https://github.com/huawei-noah/Pretrained-Language-Model
* [x] who are the authors: (mention them, if possible by @gh-username)
https://github.com/huawei-noah/Pretrained-Language-Model @jacobrxz https://github.com/jacobrxz
| 02-04-2020 02:01:08 | 02-04-2020 02:01:08 | tinybert just a normal bert with smaller parameters. I am not sure whether huggingface team will create new object called `TinyBert`. I think you can simply contact `huawei-noah` first to get permission to upload tinybert using your personal account.<|||||>Or you could ask them if they would create an [org account](https://huggingface.co/organizations) and upload TinyBert there.
I'll also ping them as it would be really great (cc @jacobrxz)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,724 | closed | sequence labeling for sentences and not tokens | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I have sentences that belong to a paragraph. Each sentence has a label.
[s1,s2,s3,..], [l1,l2,l3,...]
I understand that I have to encode each sentence using an encoder e.g bert, and then use sequence labeling. Could you guide me on how I could do that, combining them?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/60048900/sequence-labeling-for-sentences-and-not-tokens | 02-03-2020 23:58:53 | 02-03-2020 23:58:53 | Hi! You could leverage one of the `XXXForSequenceClassification` models for this. Their purpose is to classify sequences into a given number of labels. You would need to initialize a model from a pre-trained checkpoint:
```py
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("bert-base-cased")
```
This instantiates the base transformer model, but doesn't instantiate the classifier layer on top, you would need to train that with a fine-tuning on your own specific task. <|||||>Does the fact that I want to classify entire sentences and not words, makes any difference? And if yes what is this difference? Is there any example with this specific use case?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
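Not an official example, but a minimal sketch of that use case (the checkpoint name and the 3-label head are placeholders, and the head still needs fine-tuning before its predictions mean anything):
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)
model.eval()

sentences = ["First sentence of the paragraph.", "Second sentence.", "Third sentence."]
predictions = []
with torch.no_grad():
    for sentence in sentences:
        input_ids = tokenizer.encode(sentence, return_tensors="pt")
        logits = model(input_ids)[0]              # one prediction per sentence, not per token
        predictions.append(int(logits.argmax(dim=-1)))
```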
|
transformers | 2,723 | closed | Improved testing | Adding some tests for some models that were not tested. | 02-03-2020 23:32:31 | 02-03-2020 23:32:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=h1) Report
> Merging [#2723](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6c1b23554f8bb5b5e1f6c80969acab764c755678?src=pr&el=desc) will **increase** coverage by `0.93%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2723 +/- ##
==========================================
+ Coverage 74.09% 75.03% +0.93%
==========================================
Files 93 93
Lines 15248 15248
==========================================
+ Hits 11298 11441 +143
+ Misses 3950 3807 -143
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100% <0%> (+25%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (+30.51%)` | :arrow_up: |
| [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `83.82% <0%> (+55.14%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=footer). Last update [6c1b235...74b1cb3](https://codecov.io/gh/huggingface/transformers/pull/2723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,722 | closed | Bert and Roberta models cannot be converted to TFLite | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert / Roberta
Language I am using the model on (English, Chinese ...): N/A
The problem arises when using:
* [ ] the official example scripts: (give details below)
Sort of. Using the tflite conversion script provided here:
https://github.com/huggingface/tflite-android-transformers/blob/master/models_generation/distilbert.py
The tasks I am working on is: Converting models to tflite format
## To reproduce
Steps to reproduce the behavior:
I first tried the example script provided above to convert a distilbert model to tflite, and it worked fine. The GPT conversion also works great.
Next, I modified the above script to the following:
```
import tensorflow as tf
from transformers import TFRobertaModel
model = TFRobertaModel.from_pretrained('roberta-base')
input_spec = [tf.TensorSpec([1, 128], tf.int32), tf.TensorSpec([1, 128], tf.int32)]
model._set_inputs(input_spec, training=False)
print(model.inputs)
print(model.outputs)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For conversion with hybrid quantization:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.experimental_new_converter = True
tflite_model = converter.convert()
```
Note that the above can be replaced with "TFBertModel" and "bert-base-cased" with 3 input tensors with the same result below.
## Expected behavior
No errors, creates tflite model.
## Actual behavior
Error for both BERT and Roberta:
```
[<tf.Tensor 'input_1_11:0' shape=(None, 128) dtype=int32>, <tf.Tensor 'input_2_9:0' shape=(None, 128) dtype=int32>]
[<tf.Tensor 'tf_roberta_model_1/Identity:0' shape=(None, 128, 768) dtype=float32>, <tf.Tensor 'tf_roberta_model_1/Identity_1:0' shape=(None, 768) dtype=float32>]
---------------------------------------------------------------------------
ConverterError Traceback (most recent call last)
<ipython-input-15-1f63532e8b87> in <module>
26 converter.experimental_new_converter = True
27
---> 28 tflite_model = converter.convert()
29
30 open("distilbert-squad-384.tflite", "wb").write(tflite_model)
c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\python\lite.py in convert(self)
444 input_tensors=input_tensors,
445 output_tensors=output_tensors,
--> 446 **converter_kwargs)
447
448 if self._is_calibration_quantize():
c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\python\convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
447 input_data.SerializeToString(),
448 debug_info_str=debug_info_str,
--> 449 enable_mlir_converter=enable_mlir_converter)
450 return data
451
c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
198 stdout = _try_convert_to_unicode(stdout)
199 stderr = _try_convert_to_unicode(stderr)
--> 200 raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
201 finally:
202 # Must manually cleanup files.
ConverterError: See console for info.
2020-02-03 14:16:20.869205: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2020-02-03 14:16:25.853657: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Cumsum
2020-02-03 14:16:25.854123: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.854715: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.855259: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.855869: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.856324: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.856863: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.857394: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.857914: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.858543: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.859107: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.859552: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:25.860084: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: Erf
2020-02-03 14:16:26.060782: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1517 operators, 2651 arrays (0 quantized)
2020-02-03 14:16:26.149298: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1517 operators, 2651 arrays (0 quantized)
2020-02-03 14:16:26.149831: F tensorflow/lite/toco/graph_transformations/resolve_strided_slice_attributes.cc:95] Check failed: start_indices_size <= num_input_axes (4 vs. 2)StridedSlice op requires no more than 2 start indices
Fatal Python error: Aborted
Current thread 0x000013c4 (most recent call first):
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\toco\python\toco_from_protos.py", line 52 in execute
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\absl\app.py", line 250 in _run_main
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\absl\app.py", line 299 in run
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\python\platform\app.py", line 40 in run
File "c:\drive\projects\ml-notebooks\pycharm-venv\lib\site-packages\tensorflow_core\lite\toco\python\toco_from_protos.py", line 89 in main
File "C:\drive\projects\ml-notebooks\pycharm-venv\Scripts\toco_from_protos.exe\__main__.py", line 9 in <module>
File "C:\python\lib\runpy.py", line 85 in _run_code
File "C:\python\lib\runpy.py", line 193 in _run_module_as_main
```
## Environment info
- `transformers` version:
- Platform: Windows 10
- Python version: 3.6.8
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.0.0-dev20191002 (gpu=yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: N/A
| 02-03-2020 21:27:32 | 02-03-2020 21:27:32 | cc @Pierrci <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,721 | closed | Is transformers overwriting tokenizers? | Hello. I haven't been able to use the tokenizers package since Friday.
It seems that if I install transformers via pip it overwrites tokenizer installation with a version that doesn't work.
If I get a new instance and do that:
`pip install transformers`
Then, when I do this:
`pip install tokenizers`
I got the following msg:
> Requirement already satisfied: tokenizers in /usr/local/lib/python3.7/site-packages (0.0.11)
And if I tried to import, I got this error msg:
> ImportError: cannot import name 'BertWordPieceTokenizer' from 'tokenizers'
I was wondering if it is a problem related to the new Transformers you released last Friday.
| 02-03-2020 15:31:48 | 02-03-2020 15:31:48 | Hi, I believe your issue was solved with https://github.com/huggingface/tokenizers/issues/120<|||||>Sure. I will close this case. |
transformers | 2,720 | closed | Add READMEs to Tensorflow versions of CamemBERT and XLM-RoBERTa | Add model cards. | 02-03-2020 12:28:27 | 02-03-2020 12:28:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=h1) Report
> Merging [#2720](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ba147ecffa28e5a4f96eebd09dcd642117dedae?src=pr&el=desc) will **decrease** coverage by `0.26%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2720 +/- ##
==========================================
- Coverage 74.09% 73.82% -0.27%
==========================================
Files 93 93
Lines 15248 15248
==========================================
- Hits 11298 11257 -41
- Misses 3950 3991 +41
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `52.94% <0%> (-21.57%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.79% <0%> (-3.33%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `84.87% <0%> (-0.82%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2720/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.3% <0%> (-0.52%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=footer). Last update [2ba147e...312b0d4](https://codecov.io/gh/huggingface/transformers/pull/2720?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks @jplu! By the way (b/c I saw you uploaded a README to S3),
- we might support pushing READMEs from the S3 bucket to the repo automatically.
- we definitely will find a system for users to get merge rights on their model cards (via a GitHub bot maybe)<|||||>Yep, at first I intuitively thought that the method was the first bullet point you proposed, and then I finally saw that I had to do a PR.
Your second bullet point, I think, might be feasible with GitHub Actions. |
transformers | 2,719 | closed | Error when running run_lm_finetuning.py | I'm getting the following error when trying to fine-tune BERT for the Armenian language:
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at C:\w\1\s\windows\pytorch\aten\src\THNN/generic/ClassNLLCriterion.c:97
| 02-03-2020 09:48:39 | 02-03-2020 09:48:39 | Hi, we would need more information to help you (all the information required in the bug template): transformers version, the full error trace, the script version.
This error is probably due to a version mismatch between your script and the transformers version you have installed.<|||||>I'm having a similar issue too.
I updated transformers from 2.3.x to 2.4.1 today, and I'm facing a runtime error which is RuntimeError: CUDA error: device-side assert triggered.
I reviewed the recent updates and found that the commit [Follow up 213] is causing the error.
Below are the changes from that commit:
- labels[~masked_indices] = -100 # We only compute loss on masked tokens
+ labels[~masked_indices] = -1 # We only compute loss on masked tokens
The changes are related to the calculation of masked language model loss, so the problem seems to occur when args.mlm is True.
Any suggestions?
============================================
The full error trace
============================================
2020-02-03 21:36:34.839995: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
02/03/2020 21:36:38 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
02/03/2020 21:36:39 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-config.json from cache at C:\Users\*****\.cache\torch\transformers\e1a2a406b5a05063c31f4dfdee7608986ba7c6393f7f79db5e69dcd197208534.a7ab0e5de2d8321d6d6a15b199110f2c99be72976b7d151423cb8d8c261a13b6
02/03/2020 21:36:39 - INFO - transformers.configuration_utils - Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"do_sample": false,
"eos_token_ids": 0,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 1,
"use_bfloat16": false,
"vocab_size": 50265
}
02/03/2020 21:36:39 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json from cache at C:\Users\*****\.cache\torch\transformers\d0c5776499adc1ded22493fae699da0971c1ee4c2587111707a4d177d20257a2.ef00af9e673c7160b4d41cfda1f48c5f4cba57d5142754525572a846a1ab1b9b
02/03/2020 21:36:39 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt from cache at C:\Users\*****\.cache\torch\transformers\b35e7cd126cd4229a746b5d5c29a749e8e84438b14bcdb575950584fe33207e8.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
02/03/2020 21:36:39 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-pytorch_model.bin from cache at C:\Users\*****\.cache\torch\transformers\228756ed15b6d200d7cb45aaef08c087e2706f54cb912863d2efe07c89584eb7.49b88ba7ec2c26a7558dda98ca3884c3b80fa31cf43a1b1f23aef3ff81ba344e
02/03/2020 21:36:44 - INFO - transformers.modeling_utils - Weights of RobertaForMaskedLM not initialized from pretrained model: ['lm_head.decoder.bias']
02/03/2020 21:36:46 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=510, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='../data/wikitext-2/wiki.test.raw', evaluate_during_training=True, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=100, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='roberta-base', model_type='roberta', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='save', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=100, save_total_limit=1, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name=None, train_data_file='../data/wikitext-2/wiki.train.raw', warmup_steps=0, weight_decay=0.0)
02/03/2020 21:36:46 - INFO - __main__ - Loading features from cached file ../data/wikitext-2\roberta_cached_lm_510_wiki.train.raw
02/03/2020 21:36:46 - INFO - __main__ - ***** Running training *****
02/03/2020 21:36:46 - INFO - __main__ - Num examples = 4740
02/03/2020 21:36:46 - INFO - __main__ - Num Epochs = 1
02/03/2020 21:36:46 - INFO - __main__ - Instantaneous batch size per GPU = 4
02/03/2020 21:36:46 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4
02/03/2020 21:36:46 - INFO - __main__ - Gradient Accumulation steps = 1
02/03/2020 21:36:46 - INFO - __main__ - Total optimization steps = 1185
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/1185 [00:00<?, ?it/s]C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "C:/Users/*****/PycharmProjects/*****/huggingface/run_lm_finetuning.py", line 818, in <module>
main()
File "C:/Users/*****/PycharmProjects/*****/huggingface/run_lm_finetuning.py", line 768, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "C:/Users/*****/PycharmProjects/*****/huggingface/run_lm_finetuning.py", line 356, in train
loss.backward()
File "C:\Users\*****\Anaconda3\lib\site-packages\torch\tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\*****\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA error: device-side assert triggered
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%| | 0/1185 [00:00<?, ?it/s]
Process finished with exit code 1
===================================================
<|||||> Hi @LysandreJik thank you for help I updated transformers version from 2.3.0 to 2.4.1 and it started to work<|||||>Hi @gjgjgjik, this error shouldn't happen if you have transformers v2.4.1 and you have the updated script. Are you running the `run_lm_finetuning` script after the commit you mentioned, and against the v2.4.1 library?<|||||>Hi @LysandreJik, I uninstalled and re-installed transformers v2.4.1 using `pip install git+https://github.com/huggingface/transformers`, but it still happens. The `run_lm_finetuning` script that I have used is the latest one because it contains the changes from [Follow up 213]. I simply copied the whole source code from the repository. I'm still able to run GPT which is not a masked language model though.
FYI,
OS: Windows 10
Transformers: 2.4.1
PyTorch: 1.4.0
Tensorflow: 2.1.0<|||||>Alright @gjgjgjik, I'm looking into it.<|||||>Indeed @gjgjgjik, I got confused on this -100/-1 fix. The correct value should be -100, and I updated it in 3bf5417. |
transformers | 2,718 | closed | DistilBertForMaskedLM is not passing ignore_index to loss fct nn.CrossEntropyLoss | # 🐛 Bug
I'm running `run_lm_finetuning.py` and got the error below:
```
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=110 error=710 : device-side assert triggered
Traceback (most recent call last):
File "run_lm_finetuning.py", line 795, in <module>
main()
File "run_lm_finetuning.py", line 745, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 349, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/tamvm/.local/lib/python3.6/site-packages/transformers/modeling_distilbert.py", line 550, in forward
masked_lm_labels.view(-1))
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 2016, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/tamvm/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1842, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
```
Looking through the code, I realized that `-100` is used as the label for indices that are not masked. However, `DistilBertForMaskedLM` is not passing `ignore_index=-100` to `nn.CrossEntropyLoss`, which makes the loss function compute the loss on the `-100` labels as well, and hence the error.
[https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L510](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L510)
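For reference, a minimal, self-contained sketch of what the corrected loss computation looks like (the batch size, sequence length, and vocabulary size below are illustrative, not taken from the model):

```python
import torch
import torch.nn as nn

vocab_size = 30522
logits = torch.randn(2, 8, vocab_size)              # (batch, seq_len, vocab)
labels = torch.full((2, 8), -100, dtype=torch.long)
labels[0, 3] = 42                                    # only masked positions carry a real label

# ignore_index=-100 makes the loss skip the non-masked positions
loss_fct = nn.CrossEntropyLoss(ignore_index=-100)
loss = loss_fct(logits.view(-1, vocab_size), labels.view(-1))
print(loss.item())
```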
## Information
Model I am using (Bert, XLNet ...): DistilBert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* run_lm_finetuning.py
The tasks I am working on is:
* Fine tune masked language model
## To reproduce
Steps to reproduce the behavior:
```bash
python run_lm_finetuning.py \
--output_dir=finetune_output \
--model_type=distilbert \
--model_name_or_path=distilbert-base-multilingual-cased \
--do_train \
--train_data_file=./finetune_data/train.raw.txt \
--do_eval \
--eval_data_file=./finetune_data/val.raw.txt \
--mlm \
--block_size=128
```
## Expected behavior
Model should start training process without problem.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0
- Platform: Ubuntu 18
- Python version: 3.6.9
- PyTorch version (GPU): 1.3.1
| 02-03-2020 09:47:38 | 02-03-2020 09:47:38 | You're absolutely correct, this was a bug. I've updated it in 239dd23.<|||||>Thank you :). Will close this bug. |
transformers | 2,717 | closed | error while training distilbert multilingual model | hi,
I am trying to fine-tune the DistilBERT multilingual cased model, but I am getting an error while training the model,
while with the same code using DistilBERT uncased there is no such error.
Can you please check if there is some problem with the DistilBERT multilingual cased model?
The error is:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 8 array(s), for inputs ['output_1', 'output_2', 'output_3', 'output_4', 'output_5', 'output_6', 'output_7', 'output_8'] but instead got the following list of 1 arrays: [<tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=int64>]
| 02-03-2020 06:30:39 | 02-03-2020 06:30:39 | pls reply to above<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,716 | closed | Added README.md to Swedish BERT models from National Library of Sweden | Following the lead of others these are not actual model cards but rather the README.md-files from https://github.com/Kungbib/swedish-bert-models | 02-02-2020 22:12:12 | 02-02-2020 22:12:12 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=h1) Report
> Merging [#2716](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ba147ecffa28e5a4f96eebd09dcd642117dedae?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2716 +/- ##
=======================================
Coverage 74.09% 74.09%
=======================================
Files 93 93
Lines 15248 15248
=======================================
Hits 11298 11298
Misses 3950 3950
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=footer). Last update [2ba147e...e46c8bf](https://codecov.io/gh/huggingface/transformers/pull/2716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks @marma!
As also mentioned on https://github.com/huggingface/transformers/pull/2720#issuecomment-581430234, we'll find a way for users to get merge rights on their model cards (via a GitHub bot maybe)
|
transformers | 2,715 | closed | Optimize causal mask using torch.where | Instead of multiplying by 1.0 float mask, use torch.where with a bool mask for increased performance. | 02-02-2020 21:11:03 | 02-02-2020 21:11:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=h1) Report
> Merging [#2715](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33ef7002e17fe42b276dc6d36c07a3c39b1f09ed?src=pr&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2715 +/- ##
==========================================
- Coverage 77.8% 77.79% -0.02%
==========================================
Files 100 100
Lines 17051 17052 +1
==========================================
- Hits 13267 13266 -1
- Misses 3784 3786 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.2% <100%> (+0.04%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.15% <0%> (-0.18%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2715/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0%> (-0.14%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=footer). Last update [33ef700...a54a418](https://codecov.io/gh/huggingface/transformers/pull/2715?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks – what's the PyTorch compatibility on this?<|||||>Not sure about that, where can I find more info on compatibility? I think it only relies on torch.where (introduced <= 1.0.0) and tensors of dtype torch.bool (introduced in 1.2.0). Does the None (newaxis) slicing introduce compatibility issues?
If we want to maintain compatibility with 1.0.0, I think we can use torch.uint8 instead of torch.bool.<|||||>Hi, I'd recommend to make the following changes:
1. Keep the original shape of the _bias_ buffer (because otherwise it breaks loading of already trained models) and make its dtype torch.uint8, so it'd be compatible with PyTorch 1.0.0, where no torch.bool type is available:
`self.register_buffer("bias", torch.tril(torch.ones((n_ctx, n_ctx), dtype=torch.uint8)).view(1, 1, n_ctx, n_ctx))`
2. Keep the -1e4 constant in a buffer to reduce allocations on each _attn call and make it work automatically with different devices (CPU and CUDA):
`self.register_buffer("masked_bias", torch.tensor(-1e4))`
3. Keep the `b = self.bias[:, :, ns - nd : ns, :ns]` line, as the _bias_ buffer has the original shape now.
4. The _where_ statement should then look like `w = torch.where(b, w, self.masked_bias)` (a consolidated sketch of the four changes follows).
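Putting the four suggestions together, a sketch of the proposed masking (written against a recent PyTorch, hence the explicit `.bool()` cast on the mask; on 1.0.0 the `uint8` mask would be passed to `torch.where` directly, and in the real `Attention` module the mask and fill value would be registered buffers):

```python
import torch

n_ctx = 8
# causal mask and fill value; in the Attention module these are registered buffers
bias = torch.tril(torch.ones((n_ctx, n_ctx), dtype=torch.uint8)).view(1, 1, n_ctx, n_ctx)
masked_bias = torch.tensor(-1e4)

w = torch.randn(2, 12, n_ctx, n_ctx)       # attention scores: (batch, heads, query, key)
nd, ns = w.size(-2), w.size(-1)
b = bias[:, :, ns - nd : ns, :ns]
w = torch.where(b.bool(), w, masked_bias)  # keep scores on/below the diagonal, fill the rest
```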
As a result, overall speedup will be at 10-15% here as I measured, and the code should be 100% compatible with pytorch 1.0.0<|||||>Hi @Akababa,
Thanks for the PR. I think this is a great change. I checked and it does lead to a significant speed-up :-)
Could you fix the tests and I think then we can merge (see https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md)
1) You should fetch the master branch and rebase your branch on top of it.
2) Make sure to run `make style` in the root folder before pushing to pass the "check_code_quality" test.<|||||>Great work @Akababa - this looks good to me!
@LysandreJik @thomwolf - could you check and merge? <|||||>Checked slow hardcoded GPT2 tests and it looks all good! |
transformers | 2,714 | closed | How to add Dense layer on top of TFBertForSequenceClassification model? | I am having a really hard time adding the dense layers on the top of this model. I have tried to add the layers of `TFBertForSequenceClassification` in a sequential model with some dense layers like this:
```
bert_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased", config=config)
model = keras.models.Sequential()
model.add(bert_model.layers[0])
model.add(keras.layers.Dense(10, activation='relu'))
model.add(keras.layers.Dense(1, activation='sigmoid'))
```
But when I fit the model using:
```
model.fit(
[padded, attention_mask],
[np.array(df[1][:2000])],
epochs=100,
)
```
I am getting this error:
```AttributeError: 'list' object has no attribute 'shape'```
I have also tried to use the layers of the `TFBertForSequenceClassification` in `keras.models.Model() class`. But, again there is no way to get the input layer. For example using `bert_model.layers[0].input_shape` gives the following error:
```
1571 """
1572 if not self._inbound_nodes:
-> 1573 raise AttributeError('The layer has never been called '
1574 'and thus has no defined input shape.')
1575 all_input_shapes = set(
AttributeError: The layer has never been called and thus has no defined input shape.
```
What is the right way to add layers on top of this model? | 02-02-2020 20:21:48 | 02-02-2020 20:21:48 | Found the solution on [1936](https://github.com/huggingface/transformers/issues/1936). Closing.<|||||>Can you write down the solution here? @sainimohit23 |
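For readers landing here, a commonly used pattern (a sketch, not necessarily the exact solution referenced in #1936) is to build on the base `TFBertModel` with the Keras functional API instead of reusing the layers of `TFBertForSequenceClassification`; the sequence length and layer sizes below are illustrative:

```python
import tensorflow as tf
from transformers import TFBertModel

max_len = 128
bert = TFBertModel.from_pretrained("bert-base-cased")

input_ids = tf.keras.layers.Input(shape=(max_len,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.layers.Input(shape=(max_len,), dtype=tf.int32, name="attention_mask")

sequence_output = bert([input_ids, attention_mask])[0]   # (batch, seq_len, hidden)
cls_token = sequence_output[:, 0, :]                     # representation of the [CLS] token
x = tf.keras.layers.Dense(10, activation="relu")(cls_token)
output = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```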
transformers | 2,713 | closed | Weights of FlaubertForQuestionAnswering not initialized from pretrained model | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
`Flaubert`
Language I am using the model on (English, Chinese ...):
`French`
The tasks I am working on is: **Fine-tune Flaubert on French-translated SQuAD**
The problem arises when using:
```
python3 ./examples/run_squad.py \
--model_type flaubert \
--model_name_or_path flaubert-base-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file SQuAD-v1.1-train_fr_ss999_awstart2_net.json \
--predict_file SQuAD-v1.1-dev_fr_ss999_awstart2_net.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir output \
--per_gpu_eval_batch_size=3 \
--per_gpu_train_batch_size=3
```
For some reason the downloaded weights from pre-trained model `flaubert-base-uncased` are not initialized for training:
```python-traceback
2/02/2020 15:10:53 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
02/02/2020 15:10:53 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/flaubert/flaubert_base_uncased/config.json from cache at /root/.cache/torch/transformers/d1cf66823bb82e0ef671e7bae75bf86161cbf8ca218f893bc0129599e6e40c2a.e40562626242ae71bf0ce9aa0832297b724c4859407a09771341048981bb3736
02/02/2020 15:10:53 - INFO - transformers.configuration_utils - Model config FlaubertConfig {
"amp": 1,
"architectures": [
"FlaubertWithLMHeadModel"
],
"asm": false,
"attention_dropout": 0.1,
"bos_index": 0,
"bos_token_id": 0,
"bptt": 512,
"causal": false,
"clip_grad_norm": 5,
"do_sample": false,
"dropout": 0.1,
"emb_dim": 768,
"embed_init_std": 0.02209708691207961,
"encoder_only": true,
"end_n_top": 5,
"eos_index": 1,
"eos_token_ids": 0,
"finetuning_task": null,
"fp16": true,
"gelu_activation": true,
"group_by_size": true,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"id2lang": {
"0": "fr"
},
"init_std": 0.02,
"is_decoder": false,
"is_encoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"lang2id": {
"fr": 0
},
"lang_id": 0,
"langs": [
"fr"
],
"layer_norm_eps": 1e-12,
"layerdrop": 0.0,
"length_penalty": 1.0,
"lg_sampling_factor": -1,
"lgs": "fr",
"mask_index": 5,
"mask_token_id": 0,
"max_batch_size": 0,
"max_length": 20,
"max_position_embeddings": 512,
"max_vocab": -1,
"mlm_steps": [
[
"fr",
null
]
],
"model_type": "flaubert",
"n_heads": 12,
"n_langs": 1,
"n_layers": 12,
"num_beams": 1,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_index": 2,
"pad_token_id": 0,
"pre_norm": false,
"pruned_heads": {},
"repetition_penalty": 1.0,
"sample_alpha": 0,
"share_inout_emb": true,
"sinusoidal_embeddings": false,
"start_n_top": 5,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "first",
"summary_use_proj": true,
"temperature": 1.0,
"tokens_per_batch": -1,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"unk_index": 3,
"use_bfloat16": false,
"use_lang_emb": true,
"vocab_size": 67542,
"word_blank": 0,
"word_dropout": 0,
"word_keep": 0.1,
"word_mask": 0.8,
"word_mask_keep_rand": "0.8,0.1,0.1",
"word_pred": 0.15,
"word_rand": 0.1,
"word_shuffle": 0
}
02/02/2020 15:10:53 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/flaubert/flaubert_base_uncased/vocab.json from cache at /root/.cache/torch/transformers/8f54ff51875f0422a9c265ab77515058f2655b901caa5f8ff19954c8a126a2fe.4dbbb80764d7ce5ea8639cef2ffdf2c6be3c491192c042bba9651d56b917d49c
02/02/2020 15:10:53 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/flaubert/flaubert_base_uncased/merges.txt from cache at /root/.cache/torch/transformers/42f0fe2cd5eebb0c450bd936b0104b27c21e33138b445e9c7124094e05df02f6.5e19e4f2e2e9e11ecde5cc44c2c65f0dc11671ff5dfcd0066699e64bbc7c5a8d
02/02/2020 15:10:53 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/flaubert/flaubert_base_uncased/pytorch_model.bin from cache at /root/.cache/torch/transformers/2931084022a5d35320c07628cb7de631bdefe38f0e87d5d48a9e04be799ce0ef.8a02ed26eb9bc391a8fd64b6acce3b2167eb7a01cd4365502dca3a5980918425
02/02/2020 15:11:00 - INFO - transformers.modeling_utils - Weights of FlaubertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.start_logits.dense.bias', 'qa_outputs.start_logits.dense.weight', 'qa_outputs.end_logits.dense_0.bias', 'qa_outputs.end_logits.dense_0.weight', 'qa_outputs.end_logits.LayerNorm.bias', 'qa_outputs.end_logits.LayerNorm.weight', 'qa_outputs.end_logits.dense_1.bias', 'qa_outputs.end_logits.dense_1.weight', 'qa_outputs.answer_class.dense_0.bias', 'qa_outputs.answer_class.dense_0.weight', 'qa_outputs.answer_class.dense_1.weight']
02/02/2020 15:11:00 - INFO - transformers.modeling_utils - Weights from pretrained model not used in FlaubertForQuestionAnswering: ['pred_layer.proj.bias', 'pred_layer.proj.weight']
02/02/2020 15:11:06 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir=None, device=device(type='cuda'), do_eval=True, do_lower_case=True, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=3e-05, local_rank=-1, logging_steps=500, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=384, max_steps=-1, model_name_or_path='flaubert-base-uncased', model_type='flaubert', n_best_size=20, n_gpu=1, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=2.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=3, per_gpu_train_batch_size=3, predict_file='SQuAD-v1.1-dev_fr_ss999_awstart2_net.json', save_steps=500, seed=42, server_ip='', server_port='', threads=1, tokenizer_name='', train_file='SQuAD-v1.1-train_fr_ss999_awstart2_net.json', verbose_logging=False, version_2_with_negative=False, warmup_steps=0, weight_decay=0.0)
02/02/2020 15:11:06 - INFO - __main__ - Creating features from dataset file at .
100%|██████████████████████████████████████████████████████████████████████████████████████| 442/442 [00:42<00:00, 10.49it/s]
convert squad examples to features: 4%|█▉ | 3457/84943 [02:18<37:30, 36.21it/s]
```
## To reproduce
Steps to reproduce the behavior:
Edit `run_squad.py` to support `flaubert`:
```python
MODEL_CLASSES = {
"bert": (BertConfig, BertForQuestionAnswering, BertTokenizer),
"roberta": (RobertaConfig, RobertaForQuestionAnswering, RobertaTokenizer),
"xlnet": (XLNetConfig, XLNetForQuestionAnswering, XLNetTokenizer),
"xlm": (XLMConfig, XLMForQuestionAnswering, XLMTokenizer),
"distilbert": (DistilBertConfig, DistilBertForQuestionAnswering, DistilBertTokenizer),
"albert": (AlbertConfig, AlbertForQuestionAnswering, AlbertTokenizer),
"flaubert": (FlaubertConfig, FlaubertForQuestionAnswering, FlaubertTokenizer),
}
```
I had to do some other little edits. Then execute script:
```bash
python3 ./examples/run_squad.py \
--model_type flaubert \
--model_name_or_path flaubert-base-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file SQuAD-v1.1-train_fr_ss999_awstart2_net.json \
--predict_file SQuAD-v1.1-dev_fr_ss999_awstart2_net.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir output \
--per_gpu_eval_batch_size=3 \
--per_gpu_train_batch_size=3
```
Dataset available here: https://github.com/Alikabbadj/French-SQuAD
## Expected behavior
Load weights from pre-trained `flaubert-base-uncased` model to fine-tune on FR SQuAD train then use new trained weights to evaluate model on FR SQuAD dev.
## Environment info
- `transformers` version: `transformers==2.4.1`
- Platform: `Deep Learning AMI (Ubuntu 16.04) Version 26.0 `
| 02-02-2020 15:35:47 | 02-02-2020 15:35:47 | The Flaubert checkpoints contain the base transformer model, not the weights for question answering (similar to most checkpoints). The point of the `run_squad` script is to fine-tune the weights of the additional question answering head to the specific task (french squad in your case).<|||||>Hi @LysandreJik
1. Actually I had the intuition something was wrong, because I had this "missing weights" message again after QA training, during the evaluation step, and all evaluation metrics were equal to 0... as if the learned weights from QA training were not loaded at the evaluation step? How do I make sure the eval step loads the learned weights?
2. I just retried running the command above (training + eval) from a fresh env and now I have a new issue:
```python-traceback
convert squad examples to features: 100%|█████████████████████████████████████████████████████████████████████████████████$
| 84943/84943 [52:24<00:00, 27.01it/s]
add example index and unique id: 100%|█████████████████████████████████████████████████████████████████████████████████| 8$
943/84943 [00:00<00:00, 587500.52it/s]
02/09/2020 14:42:05 - INFO - __main__ - Saving features into cached file ./cached_train_flaubert-base-uncased_384
02/09/2020 14:44:49 - INFO - __main__ - ***** Running training *****
02/09/2020 14:44:49 - INFO - __main__ - Num examples = 87016
02/09/2020 14:44:49 - INFO - __main__ - Num Epochs = 2
02/09/2020 14:44:49 - INFO - __main__ - Instantaneous batch size per GPU = 3
02/09/2020 14:44:49 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 3
02/09/2020 14:44:49 - INFO - __main__ - Gradient Accumulation steps = 1
02/09/2020 14:44:49 - INFO - __main__ - Total optimization steps = 58012
Epoch: 0%|
| 0/2 [00:00<?, ?it/sTraceback (most recent call last):
| 0/29006 [00:00<?, ?it/s]
File "./examples/run_squad.py", line 857, in <module>
main()
File "./examples/run_squad.py", line 796, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "./examples/run_squad.py", line 231, in train
outputs = model(**inputs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/transformers/modeling_xlm.py", line 1036, in forward
inputs_embeds=inputs_embeds,
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/transformers/modeling_flaubert.py", line 235, in forward
tensor = tensor + self.lang_embeddings(langs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'FlaubertModel' object has no attribute 'lang_embeddings'
Epoch: 0%|
| 0/2 [00:00<?, ?it/s]
Iteration: 0%|
| 0/29006 [00:00<?, ?it/s]
```
It seems `lang_embeddings` is not available in `FlaubertModel`:
https://github.com/huggingface/transformers/blob/d426b58b9e32a2ffc8c8a1196143270e22054a46/src/transformers/modeling_flaubert.py#L229-L240
It is declared in XLM:
https://github.com/huggingface/transformers/blob/d426b58b9e32a2ffc8c8a1196143270e22054a46/src/transformers/modeling_xlm.py#L358-L365
Do you have any idea how to fix this? Thanks!<|||||>Hi,
1) Indeed, you should not have gotten these warnings if the model loaded was the one that you just trained.
2) This should have been fixed with https://github.com/huggingface/transformers/commit/cfb7d108bd4ad067a03faf15255a6ea55a6c8d39, could you install from source and let me know if it fixes your issue?<|||||>@LysandreJik It's fixed now! Thank you 👍 <|||||>Great to hear! |
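Regarding the first question above, one way to make sure the evaluation uses the fine-tuned weights is to load them from the directory written by `run_squad.py` rather than from the original checkpoint name (a sketch; the directory name matches the `--output_dir` used in the command above):

```python
from transformers import FlaubertForQuestionAnswering, FlaubertTokenizer

# "output" is the --output_dir passed to run_squad.py
model = FlaubertForQuestionAnswering.from_pretrained("output")
tokenizer = FlaubertTokenizer.from_pretrained("output")
```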
transformers | 2,712 | closed | a problem occurs when I train a Chinese distilgpt2 model | ### When I was training a new model from scratch, the following problem appeared; please help me answer it, thank you very much!


C:\Users\gaochangkuan\Desktop\transformers-master\examples\distillation>python train.py --student_type gpt2 --student_config training_configs/distilgpt2.json --teacher_type gpt2 --teacher_name distilgpt2 --alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0 --mlm --freeze_pos_embs --data_file data/binarized_text.bert-base-chinese.pickle --token_counts data/token_counts.bert-base-chinese.pickle --dump_path model --force
02/02/2020 22:26:54 - INFO - transformers.file_utils - PID: 27864 - PyTorch version 1.4.0+cpu available.
02/02/2020 22:27:02 - INFO - utils - PID: 27864 - Experiment will be dumped and logged in model
02/02/2020 22:27:02 - INFO - utils - PID: 27864 - Param: Namespace(adam_epsilon=1e-06, alpha_ce=5.0, alpha_clm=0.0, alpha_cos=1.0, alpha_mlm=2.0, alpha_mse=0.0, batch_size=5, checkpoint_interval=4000, data_file='data/binarized_text.bert-base-chinese.pickle', dump_path='model', force=True, fp16=False, fp16_opt_level='O1', freeze_pos_embs=True, freeze_token_type_embds=False, gradient_accumulation_steps=50, group_by_size=True, initializer_range=0.02, is_master=True, learning_rate=0.0005, local_rank=0, log_interval=500, master_port=-1, max_grad_norm=5.0, mlm=True, mlm_mask_prop=0.15, mlm_smoothing=0.7, multi_gpu=False, n_epoch=3, n_gpu=0, restrict_ce_to_mask=False, seed=56, student_config='training_configs/distilgpt2.json', student_pretrained_weights=None, student_type='gpt2', teacher_name='distilgpt2', teacher_type='gpt2', temperature=2.0, token_counts='data/token_counts.bert-base-chinese.pickle', warmup_prop=0.05, weight_decay=0.0, word_keep=0.1, word_mask=0.8, word_rand=0.1)
Using cache found in C:\Users\gaochangkuan/.cache\torch\hub\huggingface_pytorch-pretrained-BERT_master
02/02/2020 22:27:12 - INFO - transformers.configuration_utils - PID: 27864 - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-config.json from cache at C:\Users\gaochangkuan\.cache\torch\transformers\8a3b1cfe5da58286e12a0f5d7d182b8d6eca88c08e26c332ee3817548cf7e60a.3767c74c8ed285531d04153fe84a0791672aff52f7249b27df341dbce09b8305
02/02/2020 22:27:12 - INFO - transformers.configuration_utils - PID: 27864 - Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"directionality": "bidi",
"do_sample": false,
"eos_token_ids": 0,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 21128
}
02/02/2020 22:27:22 - INFO - transformers.tokenization_utils - PID: 27864 - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt from cache at C:\Users\gaochangkuan\.cache\torch\transformers\8a0c070123c1f794c42a29c6904beb7c1b8715741e235bee04aca2c7636fc83f.9b42061518a39ca00b8b52059fd2bede8daa613f8a8671500e518a8c29de8c00
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Special tokens {'unk_token': 100, 'sep_token': 102, 'pad_token': 0, 'cls_token': 101, 'mask_token': 103}
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Loading data from data/binarized_text.bert-base-chinese.pickle
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Loading token counts from data/token_counts.bert-base-chinese.pickle (already pre-computed)
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Splitting 124 too long sequences.
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Remove 2840 too short (<=11 tokens) sequences.
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Remove 0 sequences with a high level of unknown tokens (50%).
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - 30807 sequences
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Data loader created.
02/02/2020 22:27:22 - INFO - utils - PID: 27864 - Loading student config from training_configs/distilgpt2.json
02/02/2020 22:27:22 - INFO - transformers.configuration_utils - PID: 27864 - loading configuration file training_configs/distilgpt2.json
02/02/2020 22:27:22 - INFO - transformers.configuration_utils - PID: 27864 - Model config GPT2Config {
"architectures": null,
"attn_pdrop": 0.1,
"bos_token_id": 0,
"do_sample": false,
"embd_pdrop": 0.1,
"eos_token_ids": 0,
"finetuning_task": null,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 6,
"n_positions": 1024,
"num_beams": 1,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 21128
}
02/02/2020 22:27:24 - INFO - utils - PID: 27864 - Student loaded.
02/02/2020 22:27:24 - INFO - transformers.configuration_utils - PID: 27864 - loading configuration file E:\GPT2_Text_generation\GPT2-Chinese-master\GPT2Model\config.json
02/02/2020 22:27:24 - INFO - transformers.configuration_utils - PID: 27864 - Model config GPT2Config {
"architectures": null,
"attn_pdrop": 0.1,
"bos_token_id": 0,
"do_sample": false,
"embd_pdrop": 0.1,
"eos_token_ids": 0,
"finetuning_task": null,
"id2label": {
"0": "LABEL_0"
},
"initializer_range": 0.02,
"is_decoder": false,
"label2id": {
"LABEL_0": 0
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 10,
"n_positions": 1024,
"num_beams": 1,
"num_labels": 1,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": true,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"use_bfloat16": false,
"vocab_size": 21128
}
02/02/2020 22:27:24 - INFO - transformers.modeling_utils - PID: 27864 - loading weights file E:\GPT2_Text_generation\GPT2-Chinese-master\GPT2Model\pytorch_model.bin
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Teacher loaded from distilgpt2.
21128 21128
768 768
1024 1024
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Initializing Distiller
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Using [0, 3, 7, 11, 15, 19, 23, 27, 31, 35, 39, 43, 47, 51, 55, 59, 63, 67, 71, 75, 79, 83, 87, 91, 95, 99, 103, 107, 111, 115, 119, 123, 127, 131, 135, 139, 143, 147, 151, 155, 159, 163, 167, 171, 175, 179, 183, 187, 191, 195, 199, 203, 207, 211, 215, 219, 223, 227, 231, 235, 239, 243, 247, 251, 255, 259, 263, 267, 271, 275, 279, 283, 287, 291, 295, 299, 303, 307, 311, 315, 319, 323, 327, 331, 335, 339, 343, 347, 351, 355, 359, 363, 367, 371, 375, 379, 383, 387, 391, 395, 399, 403, 407, 411, 415, 419, 423, 427, 431, 435, 439, 443, 447, 451, 455, 459, 463, 467, 471, 475, 479, 483, 487, 491, 495, 499, 503, 507, 511, inf] as bins for aspect lengths quantization
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Count of instances per bin: [1267 1907 1866 1702 1584 1483 1380 1237 1205 1101 1047 974 882 854
758 672 598 583 593 519 492 453 444 414 371 338 352 305
290 298 260 250 260 214 210 200 189 190 149 153 121 125
124 106 116 105 87 100 78 103 73 70 74 78 65 70
52 43 46 51 48 38 49 28 32 41 34 27 29 31
28 39 28 23 25 26 17 25 23 12 20 17 17 20
8 12 15 16 8 11 11 10 13 11 3 9 8 5
9 5 6 6 6 4 10 4 6 3 3 3 2 4
3 6 4 3 7 2 6 9 1 2 6 2 3 134]
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Using MLM loss for LM step.
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - --- Initializing model optimizer
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - ------ Number of trainable parameters (student): 58755072
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - ------ Number of parameters (student): 59541504
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - --- Initializing Tensorboard
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - Starting training
02/02/2020 22:27:27 - INFO - utils - PID: 27864 - --- Starting epoch 0/2
-Iter: 0%| | 0/6162 [00:00<?, ?it/s]Traceback (most recent call last):
File "train.py", line 329, in <module>
main()
File "train.py", line 324, in main
distiller.train()
File "C:\Users\gaochangkuan\Desktop\transformers-master\examples\distillation\distiller.py", line 355, in train
self.step(input_ids=token_ids, attention_mask=attn_mask, lm_labels=lm_labels)
File "C:\Users\gaochangkuan\Desktop\transformers-master\examples\distillation\distiller.py", line 385, in step
input_ids=input_ids, attention_mask=attention_mask
**ValueError: too many values to unpack (expected 2)** | 02-02-2020 14:36:33 | 02-02-2020 14:36:33 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,711 | closed | TypeError: apply_gradients() missing 1 required positional argument: 'clip_norm' | # 🐛 Bug
## Information
Model I am using (TFBertModel):
Language I am using the model on (English):
Also I'm using `tensorflow==2.1.0` and `transformers==2.3.0`
The problem arises when using:
* [x] the official example scripts: (give details below)
I'm trying to use the `optimization_tf.create_optimizer` from the source code.
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
Just a text classification task.
## To reproduce
Steps to reproduce the behavior:
1. Try to use the model the regular way
2. When running the model with "optimization_tf.create_optimizer"
## Environment info
```
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py in _process_single_batch(model, inputs, targets, output_loss_metrics, sample_weights, training)
271 loss_scale_optimizer.LossScaleOptimizer):
272 grads = model.optimizer.get_unscaled_gradients(grads)
--> 273 model.optimizer.apply_gradients(zip(grads, trainable_weights))
274 else:
275 logging.warning('The list of trainable weights is empty. Make sure that'
TypeError: apply_gradients() missing 1 required positional argument: 'clip_norm'
```
## How I am able to run
On the class `AdamWeightDecay` and the method `apply_gradients` I just call the super function like this:
```
def apply_gradients(self, grads_and_vars, name=None):
return super().apply_gradients(grads_and_vars)
```
but as you can see I'm not using the `clip_norm` as the source example uses.
Is there a way to use the original source function as described in the source code? | 02-02-2020 13:46:22 | 02-02-2020 13:46:22 | Hi, indeed this optimizer `AdamWeightDecay` requires an additional argument for truncating the gradient norm.
It essentially feeds the `clip_norm` argument (which is the second required argument in `apply_gradients`) to [tf.clip_by_global_norm](https://www.tensorflow.org/api_docs/python/tf/clip_by_global_norm).
You can see a usage example in our [run_tf_ner.py example](https://github.com/huggingface/transformers/blob/master/examples/run_tf_ner.py#L203)<|||||>I wasn't able to implement this fix on my problem but I think this answer closes the issue, thank!<|||||>This problem occurs if you don't specify `clip_norm` when calling `apply_gradients`.
If using a custom training loop, the fix is easy :)
If you are using `keras.model.fit`, you can do it the following way:
```
from functools import partialmethod
AdamWeightDecay.apply_gradients = partialmethod(AdamWeightDecay.apply_gradients, clip_norm=1.0)
optimizer = create_optimizer(p.learning_rate, num_train_steps=total_steps, num_warmup_steps=warmup_steps)
``` |
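And for the custom-training-loop case mentioned above, a small runnable sketch (the toy model, data, and step counts exist only to exercise the optimizer; `create_optimizer` here is the helper from `transformers.optimization_tf` discussed in this thread):

```python
import tensorflow as tf
from transformers.optimization_tf import create_optimizer

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
x, y = tf.random.normal((4, 3)), tf.random.normal((4, 1))

optimizer = create_optimizer(5e-5, num_train_steps=10, num_warmup_steps=1)

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
grads = tape.gradient(loss, model.trainable_variables)
# clip_norm is the extra positional argument AdamWeightDecay expects
optimizer.apply_gradients(zip(grads, model.trainable_variables), clip_norm=1.0)
```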
transformers | 2,710 | closed | Removed unused fields in DistilBert TransformerBlock | A few fields in the TransformerBlock are unused - this small PR cleans it up.
| 02-02-2020 09:10:58 | 02-02-2020 09:10:58 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=h1) Report
> Merging [#2710](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2ba147ecffa28e5a4f96eebd09dcd642117dedae?src=pr&el=desc) will **decrease** coverage by `0.27%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2710 +/- ##
==========================================
- Coverage 74.09% 73.81% -0.28%
==========================================
Files 93 93
Lines 15248 15243 -5
==========================================
- Hits 11298 11252 -46
- Misses 3950 3991 +41
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `95.79% <ø> (-0.07%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `52.94% <0%> (-21.57%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `68.79% <0%> (-3.33%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `84.87% <0%> (-0.82%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2710/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.3% <0%> (-0.52%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=footer). Last update [2ba147e...d40db22](https://codecov.io/gh/huggingface/transformers/pull/2710?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 2,709 | closed | DistributedDataParallel for multi-gpu single-node runs in run_lm_finetuning.py | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Modify `run_lm_finetuning.py` with DDP for multi-gpu single-node jobs.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
In its current state, `run_lm_finetuning.py` does not run with DDP for multi-GPU single-node training jobs. This results in all but the first GPU having very low utilization (as low as 50%, when the first one is in the high 80%) due to the way simple DP works. Once implemented, the load would be more evenly balanced across all the GPUs.
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I can help implementing this feature, but would need guidance on what should/shouldn't be modified to get this working properly. | 02-02-2020 07:19:49 | 02-02-2020 07:19:49 | As far as I can see, the script fully supports DDP:
https://github.com/huggingface/transformers/blob/2ba147ecffa28e5a4f96eebd09dcd642117dedae/examples/run_lm_finetuning.py#L282-L286
I haven't run the script myself, but looking at the source this should work with the [torch launch](https://pytorch.org/docs/stable/distributed.html#launch-utility) utility. Your command would then look like this when using a single node with four GPUs.
```bash
python -m torch.distributed.launch --nproc_per_node 4 run_lm_finetuning.py [arguments]
```
<|||||>Ah ok, I wasn't aware that it had to be launched this way. I was looking at the code and thought DDP would happen only when the process was launched across multiple nodes.
Thanks for the help @BramVanroy |
transformers | 2,708 | closed | Can't pickle local object using the finetuning example. | I was testing out the finetuning example from the repo:
`python run_lm_finetuning.py --train_data_file="finetune-output/KantText.txt" --output_dir="finetune-output/hugkant" --model_type=gpt2 --model_name_or_path=gpt2 --do_train --block_size=128`
While saving the checkpoint, it gives the following error:
```
Traceback (most recent call last):
File "run_lm_finetuning.py", line 790, in <module>
main()
File "run_lm_finetuning.py", line 740, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 398, in train
torch.save(scheduler.state_dict(), os.path.join(output_dir, "scheduler.pt"))
File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 209, in save
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 134, in _with_file_like
return body(f)
File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 209, in <lambda>
return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
File "D:\Software\Python\lib\site-packages\torch\serialization.py", line 282, in _save
pickler.dump(obj)
AttributeError: Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda'
``` | 02-01-2020 22:48:47 | 02-01-2020 22:48:47 | Do you mind specifying which versions of everything you're using, as detailed in the [bug report issue template](https://github.com/huggingface/transformers/issues/new/choose)?<|||||>Hi @Normand-1024
were you able to fix this error?
as i am getting the same error while trying to run glue task (QQP) but works fine when i run MRPC.<|||||>Hi,
I was able to get rid of this error by upgrading the torch version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Still having this issue with transformers `2.11.0`.<|||||>@lucadiliello did you manage to fix it?<|||||>Not yet. A solution would be to use `dill` instead of `pickle`... but I'm not sure how to do it.<|||||>Getting the same error,
Not sure how to fix this error.<|||||>same error, with all newest version<|||||>I solved by reimplementing all the schedulers without lambda functions. [here](https://github.com/iKernels/transformers-lightning) I published many schedulers.<|||||>Same error with all newest version too.

<|||||>UP.
Is it a package version related issue?<|||||>Having the same issue:
`Can't pickle local object 'get_linear_schedule_with_warmup.<locals>.lr_lambda'`<|||||>For running the example scripts passing `--no_multi_process` solved it for me.
I haven't looked into the huggingface code yet but I could imagine that [this](https://stackoverflow.com/questions/52265120/python-multiprocessing-pool-attributeerror) is the bug here. I think it only shows up when `spawn` instead of `fork` is used to create new processes, which is why the developers might have missed it.<|||||>I set the `gpus=1`, and it works. <|||||>Well, this seems that it is a local object that can not be forked, you may define it at each forked process. This may work well. However, somebody should fix it. |
transformers | 2,707 | closed | Fix typo in examples/utils_ner.py | `"%s-%d".format()` -> `"{}-{}".format()` | 02-01-2020 16:02:26 | 02-01-2020 16:02:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=h1) Report
> Merging [#2707](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ddb6f9476b58ed9bf4433622ca9aa49932929bc0?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2707 +/- ##
=======================================
Coverage 74.25% 74.25%
=======================================
Files 92 92
Lines 15216 15216
=======================================
Hits 11298 11298
Misses 3918 3918
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=footer). Last update [ddb6f94...dd19c80](https://codecov.io/gh/huggingface/transformers/pull/2707?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Good catch, thanks |
transformers | 2,706 | closed | Load from tf2.0 checkpoint fail | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Download tf2.0 checkpoint from https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12.tar.gz
2. unpack the model tar.gz to `bert_models` folder
3. start an iPython console and type following codes:
```python
import tensorflow as tf
from transformers import TFBertModel, BertConfig
config = BertConfig.from_json_file("./bert_models/uncased_L-12_H-768_A-12/bert_config.json")
model = TFBertModel.from_pretrained("./bert_models/uncased_L-12_H-768_A-12/bert_model.ckpt.index", config=config)
```
4. I check the original code from tf2.0 and found they didn't implement model.load_weights when by_name is True. Error is following:
NotImplementedError: Weights may only be loaded based on topology into Models when loading TensorFlow-formatted weights (got by_name=True to load_weights)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment
* OS: CentOS Linux release 7.4.1708 (Core)
* Python version: 3.7.6
* PyTorch version: 1.3.1
* `transformers` version (or branch):
* Using GPU ? Yes
* Distributed or parallel setup ? No
* Any other relevant information:
| 02-01-2020 15:24:18 | 02-01-2020 15:24:18 | Hi, in order to convert an official checkpoint to a checkpoint readable by `transformers`, you need to use the script `convert_bert_original_tf_checkpoint_to_pytorch`. You can then load it in a `BertModel` (PyTorch) or a `TFBertModel` (TensorFlow), by specifying the argument `from_pt=True` in your `from_pretrained` method.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,705 | closed | What is the input for TFBertForSequenceClassification? | # ❓ Questions & Help
What is the input for TFBertForSequenceClassification?
## Details
I have a simple multiclass text data on which I want to train the BERT model.
From docs I have found the input format of data:
```a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])```
In my understanding:
`input_ids` - tokenized sentences, generated by the BERT tokenizer.
`attention_mask` - as the name suggests, it is the attention mask. I should use it to mask out padding tokens. Please correct me if I am wrong.
Now what is `token_type_ids`? Is it necessary?
When I tried to print the output_shape of the model, I got:
`AttributeError: The layer has never been called and thus has no defined output shape.`
So, let's say my dataset has 5 classes. Does this model expect one-hot encoded vector of shape [BATCH_SIZE, CLASSES] for .fit() method?
Also if I don't use .from_pretrained() method, will it load an untrained model? | 02-01-2020 10:20:29 | 02-01-2020 10:20:29 | Have a look at an example, for instance [`run_tf_glue.py`](https://github.com/huggingface/transformers/blob/master/examples/run_tf_glue.py). To better understand all the arguments, I advise you to read the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertmodel). You'll find that token_type_ids are
> Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token
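For a concrete picture, this is roughly what the tokenizer gives back for a sentence pair (a quick sketch using `bert-base-uncased`):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer.encode_plus("This is sentence A.", "This is sentence B.")
# 0s mark the first segment (including its special tokens), 1s mark the second segment
print(encoded["token_type_ids"])
```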
So they're only practically useful if your input contains two sequences (for instance if you wish to model some relationship between sentence A and sentence B). In your case, it's probably not needed.<|||||>Hi @BramVanroy, as you said I tried to run the code from run_tf_glue.py. Yesterday it was working fine on Google Colab, but today when I tried to rerun the script I am getting the following error:
```
ImportError Traceback (most recent call last)
<ipython-input-7-63fb7d040ab0> in <module>()
4 import tensorflow_datasets
5
----> 6 from transformers import (
7 BertConfig,
8 BertForSequenceClassification,
ImportError: cannot import name 'TFBertForSequenceClassification'
```<|||||>Looks like there was some issue in colab session. So, closing this.<|||||>@sainimohit23 Getting similar issue in local Jupyter notebook.
"AttributeError: module 'transformers' has no attribute 'TFBertForSequenceClassification' "
Looks like there is some changes in transformers package.
let me know if this is fixed..
<|||||>> @sainimohit23 Getting similar issue in local Jupyter notebook.
> "AttributeError: module 'transformers' has no attribute 'TFBertForSequenceClassification' "
> Looks like there is some changes in transformers package.
>
> let me know if this is fixed..
Are you using the latest version of transformers? Try updating, because it is right there in the source code:
https://github.com/huggingface/transformers/blob/5c3d441ee1dc9150ccaf1075eb0168bbfe28c7f9/src/transformers/modeling_tf_bert.py#L875<|||||>@BramVanroy Using latest version of transformers. Double checked.
Let me know if there is any other issue.
Please find below details useful.
```
`AttributeError Traceback (most recent call last)
<ipython-input-6-5c0ab52ed729> in <module>
----> 1 model = BertForSequenceClassification.from_pretrained('sentiment_model/',from_tf=True) # re-load
2 tokenizer = BertTokenizer.from_pretrained('sentiment_model/')
~\AppData\Roaming\Python\Python37\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
485 from transformers import load_tf2_checkpoint_in_pytorch_model
486
--> 487 model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)
488 except ImportError:
489 logger.error(
~\AppData\Roaming\Python\Python37\site-packages\transformers\modeling_tf_pytorch_utils.py in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys)
223 # Instantiate and load the associated TF 2.0 model
224 tf_model_class_name = "TF" + pt_model.__class__.__name__ # Add "TF" at the beggining
--> 225 tf_model_class = getattr(transformers, tf_model_class_name)
226 tf_model = tf_model_class(pt_model.config)
227
AttributeError: module 'transformers' has no attribute 'TFBertForSequenceClassification'`
```<|||||>I went through the source code, and this should work _unless_ Tensorflow is not installed in your environment. In such a case, the Tensorflow models are not imported in __init__. Make sure that Tensorflow is installed.
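A quick way to verify that from the same environment (a two-line sketch):
```python
import transformers
print(transformers.is_tf_available())  # should print True if TF 2.x is importable
```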
https://github.com/huggingface/transformers/blob/0dbddba6d2c5b2c6fc08866358c1994a00d6a1ff/src/transformers/__init__.py#L287-L313<|||||>Hi @BramVanroy, Thanks for the help there was a miss match of tensorflow version, but it looks like the issue is something different.
`RuntimeError: storage has wrong size: expected -273778883 got 768`
Either the fine-tuned model is corrupted or there is some other issue.
Thanks<|||||>Can you post the full trace?<|||||>@BramVanroy Please find the below details useful.
Let me know what can be the issue.
```
RuntimeError Traceback (most recent call last)
<ipython-input-14-d609d3be6585> in <module>
2 # model = BertForSequenceClassification.from_pretrained('sentiment_model/')
3
----> 4 model = BertForSequenceClassification.from_pretrained("sentiment_model/", num_labels=2)
5 tokenizer = BertTokenizer.from_pretrained('sentiment_model/')
~\AppData\Roaming\Python\Python37\site-packages\pytorch_pretrained_bert\modeling.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
601 if state_dict is None and not from_tf:
602 weights_path = os.path.join(serialization_dir, WEIGHTS_NAME)
--> 603 state_dict = torch.load(weights_path, map_location='cpu')
604 if tempdir:
605 # Clean up temp dir
~\AppData\Roaming\Python\Python37\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
384 f = f.open('rb')
385 try:
--> 386 return _load(f, map_location, pickle_module, **pickle_load_args)
387 finally:
388 if new_fd:
~\AppData\Roaming\Python\Python37\site-packages\torch\serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
578 for key in deserialized_storage_keys:
579 assert key in deserialized_objects
--> 580 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
581 if offset is not None:
582 offset = f.tell()
RuntimeError: storage has wrong size: expected -273778883 got 768
```
<|||||>@vijender412 i found this [comment](https://github.com/pytorch/pytorch/issues/12042#issuecomment-426466826) useful <|||||>I don't use Tensorflow, but the documentation suggests that you should load your model like this:
https://github.com/huggingface/transformers/blob/20fc18fbda3669c2f4a3510e0705b2acd54bff07/src/transformers/modeling_utils.py#L366-L368<|||||>@ArashHosseini gone through that but was not able to link my code.
@BramVanroy
The fine tuned model was saved using'
```
model.save_pretrained('./sentiment_model/')
tokenizer.save_pretrained('./sentiment_model/')
```
And files created were (config.json,pytorch_model.bin,special_tokens_map.json,tokenizer_config.json,vocab.txt) So no checkpoint were created wrt to tensorflow.
Now as per documentation the loading should be
```
model = BertForSequenceClassification.from_pretrained("sentiment_model/", num_labels=2)
tokenizer = BertTokenizer.from_pretrained('sentiment_model/')
```
The tokenizer is getting loaded but getting issues while loading model.
"RuntimeError: storage has wrong size: expected -273778883 got 768"
<|||||>Then why did you say in your original comment that you had a Tensorflow mismatch?
I am not sure why this happens. Please open your own topic, and provide all necessary information from the template.<|||||>@BramVanroy Earlier I was getting this issue
`AttributeError: module 'transformers' has no attribute 'TFBertForSequenceClassification'`
which got resolved by changing tensorflow version to 2.0.
**For current issue l will create a new issue after tracing out my code from scratch.**<|||||>> @ArashHosseini gone through that but was not able to link my code.
>
> @BramVanroy
> The fine tuned model was saved using'
>
> ```
> model.save_pretrained('./sentiment_model/')
> tokenizer.save_pretrained('./sentiment_model/')
> ```
>
> And files created were (config.json,pytorch_model.bin,special_tokens_map.json,tokenizer_config.json,vocab.txt) So no checkpoint were created wrt to tensorflow.
>
> Now as per documentation the loading should be
>
> ```
> model = BertForSequenceClassification.from_pretrained("sentiment_model/", num_labels=2)
> tokenizer = BertTokenizer.from_pretrained('sentiment_model/')
> ```
>
> The tokenizer is getting loaded but getting issues while loading model.
> "RuntimeError: storage has wrong size: expected -273778883 got 768"
Hi, I met the same issue (can't load the state_dict after saving it). Have you solved it? |
transformers | 2,704 | closed | How to make transformers examples use GPU? | # ❓ Questions & Help
I'm training the run_lm_finetuning.py with wiki-raw dataset. The training seems to work fine, but it is not using my GPU. Is there any flag which I should set to enable GPU usage?
## Details
I'm training the run_lm_finetuning.py with wiki-raw dataset. The training seems to work fine, but it is not using my GPU. Is there any flag which I should set to enable GPU usage?
| 02-01-2020 04:02:21 | 02-01-2020 04:02:21 | GPU should be used by default and can be disabled with the `no_cuda` flag. If your GPU is not being used, that means that PyTorch can't access your CUDA installation.
What is the output of running this in your Python interpreter?
```python
import torch
torch.cuda.is_available()
```<|||||>Thanks for the response. The output is True. Looks like it is using the GPU. But the utilization never crosses 10%. <|||||>And how is your CPU usage? Which GPU are you using? Which settings are you using? (Batch size, seq len...)<|||||>CPU Usage also is less than 10%. I'm using a Ryzen 3700X with Nvidia 2080 ti. I did not change any default settings of the batch size (4) and sequence length. <|||||>@abhijith-athreya What was the issue? I am facing the same issue. I am encoding the sentences using bert model but it's quite slow and not using GPU too.
<|||||>You need to post some sample code @monk1337, also https://discuss.huggingface.co will be more suited<|||||>@julien-c
It's working now.
from transformers import BertTokenizer, BertModel, BertForMaskedLM
def assign_GPU(Tokenizer_output):
tokens_tensor = Tokenizer_output['input_ids'].to('cuda:0')
token_type_ids = Tokenizer_output['token_type_ids'].to('cuda:0')
attention_mask = Tokenizer_output['attention_mask'].to('cuda:0')
output = {'input_ids' : tokens_tensor,
'token_type_ids' : token_type_ids,
'attention_mask' : attention_mask}
return output
```
sentence = 'Hello World!'
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
model = BertModel.from_pretrained('bert-large-uncased')
inputs = assign_GPU(tokenizer(sentence, return_tensors="pt"))
model = model.to('cuda:0')
outputs = model(**inputs)
outputs
```<|||||>> @julien-c
>
> It's working now.
>
> from transformers import BertTokenizer, BertModel, BertForMaskedLM
> def assign_GPU(Tokenizer_output):
>
> ```
> tokens_tensor = Tokenizer_output['input_ids'].to('cuda:0')
> token_type_ids = Tokenizer_output['token_type_ids'].to('cuda:0')
> attention_mask = Tokenizer_output['attention_mask'].to('cuda:0')
>
> output = {'input_ids' : tokens_tensor,
> 'token_type_ids' : token_type_ids,
> 'attention_mask' : attention_mask}
>
> return output
> ```
>
> ```
> sentence = 'Hello World!'
> tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
> model = BertModel.from_pretrained('bert-large-uncased')
>
> inputs = assign_GPU(tokenizer(sentence, return_tensors="pt"))
> model = model.to('cuda:0')
> outputs = model(**inputs)
> outputs
> ```
Hey, I just want to complement here. The current version of transformers does support the call to `to()` for the `BatchEncoding` returned by the tokenizer, making it much more cleaner:
```python
> device = "cuda:0" if torch.cuda.is_available() else "cpu"
> sentence = 'Hello World!'
> tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
> model = BertModel.from_pretrained('bert-large-uncased')
> inputs = tokenizer(sentence, return_tensors="pt").to(device)
> model = model.to(device)
> outputs = model(**inputs)
```<|||||>wanted to add that in the new version of transformers, the Pipeline instance can also be run on GPU using as in the following example:
```python
pipeline = pipeline(TASK,
model=MODEL_PATH,
device=1, # to utilize GPU cuda:1
device=0, # to utilize GPU cuda:0
device=-1) # default value which utilize CPU
```<|||||>> wanted to add that in the new version of transformers, the Pipeline instance can also be run on GPU using as in the following example:
>
> ```python
> pipeline = pipeline(TASK,
> model=MODEL_PATH,
> device=1, # to utilize GPU cuda:1
> device=0, # to utilize GPU cuda:0
> device=-1) # default value which utilize CPU
> ```
And about work with multiple GPUs? |
transformers | 2,703 | closed | run_lm_finetuning.py on bert-base-uncased with wikitext-2-raw does not work | # 🐛 Bug
## Running run_lm_finetuning.py on bert-base-uncased with wikitext-2-raw does not work.
Model I am using (Bert, XLNet ...): Bert - bert-base-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [*] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [*] an official GLUE/SQUaD task: train language model on wikitext
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Installed Transformers from the source (git pull and then pip install). Downloaded Wikitext-2 raw dataset.
2. Ran this command ""python run_lm_finetuning.py --output_dir=output --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=E:\\Code\\data\\wikitext-2-raw\\wiki.train.raw --do_eval --eval_data_file=E:\\Code\\data\\wikitext-2-raw\\wiki.test.raw --mlm""
3. This fails in train() method. I haven't touched the code. Stacktraces below:
python run_lm_finetuning.py --output_dir=output --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=E:\\Code\\data\\wikitext-2-raw\\wiki.train.raw --do_eval --eval_data_file=E:\\Code\\data\\wikitext-2-raw\\wiki.test.raw --mlm
2020-01-31 21:51:38.831236: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
01/31/2020 21:51:40 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
01/31/2020 21:51:40 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json not found in cache or force_download set to True, downloading to C:\Users\athre\AppData\Local\Temp\tmp91_kkef0
01/31/2020 21:51:40 - INFO - transformers.file_utils - copying C:\Users\athre\AppData\Local\Temp\tmp91_kkef0 to cache at C:\Users\athre\.cache\torch\transformers\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.8f56353af4a709bf5ff0fbc915d8f5b42bfff892cbb6ac98c3c45f481a03c685
01/31/2020 21:51:40 - INFO - transformers.file_utils - creating metadata file for C:\Users\athre\.cache\torch\transformers\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.8f56353af4a709bf5ff0fbc915d8f5b42bfff892cbb6ac98c3c45f481a03c685
01/31/2020 21:51:40 - INFO - transformers.file_utils - removing temp file C:\Users\athre\AppData\Local\Temp\tmp91_kkef0
01/31/2020 21:51:40 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json from cache at C:\Users\athre\.cache\torch\transformers\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.8f56353af4a709bf5ff0fbc915d8f5b42bfff892cbb6ac98c3c45f481a03c685
01/31/2020 21:51:40 - INFO - transformers.configuration_utils - Model config {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30522
}
01/31/2020 21:51:40 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt not found in cache or force_download set to True, downloading to C:\Users\athre\AppData\Local\Temp\tmpx8hth4qr
01/31/2020 21:51:41 - INFO - transformers.file_utils - copying C:\Users\athre\AppData\Local\Temp\tmpx8hth4qr to cache at C:\Users\athre\.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
01/31/2020 21:51:41 - INFO - transformers.file_utils - creating metadata file for C:\Users\athre\.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
01/31/2020 21:51:41 - INFO - transformers.file_utils - removing temp file C:\Users\athre\AppData\Local\Temp\tmpx8hth4qr
01/31/2020 21:51:41 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at C:\Users\athre\.cache\torch\transformers\26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
01/31/2020 21:51:41 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin not found in cache or force_download set to True, downloading to C:\Users\athre\AppData\Local\Temp\tmpy8kf8hkd
01/31/2020 21:54:14 - INFO - transformers.file_utils - copying C:\Users\athre\AppData\Local\Temp\tmpy8kf8hkd to cache at C:\Users\athre\.cache\torch\transformers\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
01/31/2020 21:54:14 - INFO - transformers.file_utils - creating metadata file for C:\Users\athre\.cache\torch\transformers\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
01/31/2020 21:54:14 - INFO - transformers.file_utils - removing temp file C:\Users\athre\AppData\Local\Temp\tmpy8kf8hkd
01/31/2020 21:54:14 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin from cache at C:\Users\athre\.cache\torch\transformers\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
01/31/2020 21:54:16 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
01/31/2020 21:54:18 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=510, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='E:\\\\Code\\\\data\\\\wikitext-2-raw\\\\wiki.test.raw', evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='bert-base-uncased', model_type='bert', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='output', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=500, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name=None, train_data_file='E:\\\\Code\\\\data\\\\wikitext-2-raw\\\\wiki.train.raw', warmup_steps=0, weight_decay=0.0)
01/31/2020 21:54:18 - INFO - __main__ - Loading features from cached file E:\\Code\\data\\wikitext-2-raw\bert_cached_lm_510_wiki.train.raw
01/31/2020 21:54:18 - INFO - __main__ - ***** Running training *****
01/31/2020 21:54:18 - INFO - __main__ - Num examples = 4664
01/31/2020 21:54:18 - INFO - __main__ - Num Epochs = 1
01/31/2020 21:54:18 - INFO - __main__ - Instantaneous batch size per GPU = 4
01/31/2020 21:54:18 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4
01/31/2020 21:54:18 - INFO - __main__ - Gradient Accumulation steps = 1
01/31/2020 21:54:18 - INFO - __main__ - Total optimization steps = 1166
Epoch: 0%| | 0/1 [00:00<?, ?it/s]C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed. | 0/1166 [00:00<?, ?it/s]
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "run_lm_finetuning.py", line 790, in <module>
main()
File "run_lm_finetuning.py", line 740, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 356, in train
loss.backward()
File "E:\Code\torch_env\lib\site-packages\torch\tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "E:\Code\torch_env\lib\site-packages\torch\autograd\__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA error: device-side assert triggered
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
Iteration: 0%|
## Expected behavior
Training should start.
## Environment
* OS: Windows 10
* Python version: 3.7
* PyTorch version: 1.4 stable
* `transformers` version (or branch): Latest (Jan-31-2020)
* Using GPU ? Yes
* Distributed or parallel setup ? Only 1 GPU
* Any other relevant information:
| 02-01-2020 03:03:08 | 02-01-2020 03:03:08 | Hi, did you manage to fix your issue?<|||||>Hi,
Yes, I took the latest build, and it worked without any changes. |
transformers | 2,702 | closed | DistilBERT does not support token type ids, but the tokenizers produce them | ```Python
>>> tokenizer = transformers.AutoTokenizer.from_pretrained("distilbert-base-uncased-distilled-squad")
>>> tokenized = tokenizer.encode_plus("I ate a clock yesterday.", "It was very time consuming.")
>>> tokenized
{'input_ids': [101, 1045, 8823, 1037, 5119, 7483, 1012, 102, 2009, 2001, 2200, 2051, 15077, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
>>> model = transformers.AutoModel.from_pretrained("distilbert-base-uncased-distilled-squad")
>>> model(**tokenized)
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
```
In contrast, RoBERTa also does not support token type ids, but its forward method still takes the parameter, and its tokenizer produces type ids that are all zero. | 02-01-2020 01:03:37 | 02-01-2020 01:03:37 | RoBERTa accepts token typen ids because RoBERTa is basically the same architecture as BERT. (The "innovation" lies in how it's pretrained, not architectural changes.) It's literally nothing more than this:
https://github.com/huggingface/transformers/blob/ddb6f9476b58ed9bf4433622ca9aa49932929bc0/src/transformers/modeling_roberta.py#L149-L169
Distilbert's changes are more intricate.
Looking at your example, I agree that it'd be nice that all forward methods have the same signature for easier use of the `AutoModel`s.
T5 does something different, it just accepts `**kwargs`. That would solve the issue that there is now for Distilbert, but it has some adverse, non-pythonic side effects (imo): less readability, no IDE autocomplete, default values need to be set inside the method rather than declaration (in `pop`). I'm not a big fan of this.
It's best that the maintainers make a suggestion on how to continue with this.<|||||>> RoBERTa accepts token typen ids because RoBERTa is basically the same architecture as BERT. (The "innovation" lies in how it's pretrained, not architectural changes.)
The fairseq RoBERTa doesn't accept token type ids and doesn't even have a layer for those:
```
TransformerSentenceEncoder(
(embed_tokens): Embedding(50265, 768, padding_idx=1)
(embed_positions): LearnedPositionalEmbedding(514, 768, padding_idx=1)
```
The huggingface implementation of RoBERTa accepts token type ids because RobertaModel inherits from BertModel and the layer is inherited by RobertaEmbeddings from BertEmbeddings:
```
RobertaEmbeddings(
(word_embeddings): Embedding(50265, 768, padding_idx=1)
(position_embeddings): Embedding(514, 768, padding_idx=1)
(token_type_embeddings): Embedding(1, 768)
(LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
```
The huggingface RoBERTa still matches the fairseq implementation because of the dictionary size of the token_type_embeddings layer: it only accepts one value (e.g. 0) while BERT accepts two values (e.g. 0 and 1).
Back to the original topic. Call [encode_plus](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus) with return_token_type_ids=False and you won't get them.<|||||>Yes, I was talking about the transformers implementation where Roberta is subclassing the Bertmodel.
Of course it's possible to just change the argument when encoding, but you'd want a unified approach so that you can just use automodel/autotokenizer, encode your input, and feed the encoded inputs to the forward method *for any input to automodel without having to change anything else*. In that respect this is more a usability question.
As an alternative to unifying the signature of all models, Distilbert's Tokenizer can be changed to not return the token type ids. <|||||>Okay, in case the OP is looking for a generic solution I think it is cleaner to get the parameters from the model itself by calling `model.forward.__code__.co_varnames`. This will return a tuple of parameters names and can be used with a dictionary comprehension like below:
```
from transformers import DistilBertModel
tokenized = {'input_ids': [101, 1045, 8823, 1037, 5119, 7483, 1012, 102, 2009, 2001, 2200, 2051, 15077, 1012, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
model = DistilBertModel.from_pretrained('distilbert-base-uncased-distilled-squad')
tokenized = {key:value for (key,value) in tokenized.items() if key in model.forward.__code__.co_varnames}
model(**tokenized)
```<|||||>That's cool, but it would be a lot better if this were streamlined on the library-side rather than users having to implement this themselves. Options are, as far as I can see:
- make sure the signature for all models' forward models are the same, with None-values where unexpected values occur
- ensure that tokenizers only return the features that their respective models use
The second one seems like the way to go imo.<|||||>On the fact that our RoBERTa implem takes (inoperant by default) `token_type_ids`, maybe we should actually remove them from the implem. If you want to train some, you can always subclass RoBERTa and add them back (but I'm not 100% sure a lot of people use them). Thoughts?<|||||>> On the fact that our RoBERTa implem takes (inoperant by default) `token_type_ids`, maybe we should actually remove them from the implem. If you want to train some, you can always subclass RoBERTa and add them back (but I'm not 100% sure a lot of people use them). Thoughts?
Agreed. Sticking close to original implementations or particularly their descriptions in paper (i.e. no token_type_ids in this case) seems a good idea. Users who read the paper or saw examples in other implementations would expect that. As you say, if required it's not that hard to add them again.
On top of that, a one-on-one relationship between the output of `encode_plus` and the input of the corresponding model seems a neat improvement, too, so that the issue of OP doesn't ever occur. What this means is that using an `AutoModel` and `AutoTokenizer` can always be used like this without running into type errors.
```python
encoded = tokenizer.encode_plus(...)
out = model(**encoded )
```
As I mentioned before, I am not a big fan of how this is done in T5, which accepts anything in its forward (`**kwargs`). It is easier to implement, but it has many drawbacks in terms of usability and perhaps maintenance.<|||||>I use type ids. I just recently built a model that relies on them. I even monkey-patched a bigger embedding matrix into RoBERTa to get the ability back. But maybe a cleaner implementation would be if `forward()` took another tensor of shape `(batch, tokens, hidden_size)` that just gets added to the word piece embedding.
Either way though, it's more important to me that the output of the tokenizer matches the input of the model.<|||||>@julien-c : I'm not sure how you handle that, but I would like to work on both issues (RoBERTa token_type embedding layer and encode_plus should only output model related tokens). Can you please assign this issue to me? Should I create a separate issue regarding RoBERTa? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This is an old issue, but I thought I'd try to ask here.
As of `transformers>=4.18`, the `**kwargs` argument was removed from the `call` methods of all models. Thus, an error occurs if you supply `token_type_ids` to `TFDistilbertForSequenceClassification.call` method.
What is the recommended way to programmatically determine whether or not a model accepts the `token_type_id` parameter in the latest version of **transformers**?<|||||>You can just inspect its signature with the `inspect` module.<|||||>Thanks, Sylvain.
For anyone stumbling on this issue, the recommended solution continues to be to simply examine the signature of the `call` method (for TensorFlow models) or the `forward` method (in PyTorch models) with the `inspect` module or (as shown above in 2020) with something like this:
```python
from transformers import TFAutoModelForSequenceClassification
model = TFAutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased')
uses_token_type_ids = ("token_type_ids" in model.call.__code__.co_varnames)
print(uses_token_type_ids)
# prints False
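# an equivalent check via inspect (added sketch, not part of the original snippet):
import inspect
print("token_type_ids" in inspect.signature(model.call).parameters)  # also False here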
``` |
transformers | 2,701 | closed | Store Model cards in the repo | 01-31-2020 22:13:22 | 01-31-2020 22:13:22 | Thanks for importing the readme files :heart: <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=h1) Report
> Merging [#2701](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d426b58b9e32a2ffc8c8a1196143270e22054a46?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2701 +/- ##
=======================================
Coverage 74.25% 74.25%
=======================================
Files 92 92
Lines 15216 15216
=======================================
Hits 11298 11298
Misses 3918 3918
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=footer). Last update [d426b58...d126da9](https://codecov.io/gh/huggingface/transformers/pull/2701?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 2,700 | closed | Add TF2 version of FlauBERT | Hello,
I worked today to add the new FlauBERT model in TF2 version. Translated models are available in:
```
jplu/tf-flaubert-base-cased
jplu/tf-flaubert-large-cased
jplu/tf-flaubert-small-cased
jplu/tf-flaubert-base-uncased
``` | 01-31-2020 16:56:50 | 01-31-2020 16:56:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@bae644c`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `75%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2700 +/- ##
=========================================
Coverage ? 73.78%
=========================================
Files ? 93
Lines ? 15351
Branches ? 0
=========================================
Hits ? 11326
Misses ? 4025
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2700/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.32% <75%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=footer). Last update [bae644c...a39b8de](https://codecov.io/gh/huggingface/transformers/pull/2700?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Did we mean to delete camembert imports @LysandreJik . That's why the tests on HEAD are breaking afaict<|||||>I think they were a duplicate |
transformers | 2,699 | closed | CLI script to gather environment info | I noticed that all too often people leave the "Environment" section in their issue empty. However, things such as the version number of PT/TF and `transformers` itself are very useful to know when trying to debug things.
This PR adds a small script to the existing CLI workflow. Running `python transformers-cli info` will output something like this:
```
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 2.4.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.8
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
Note that GPU availability through the DL framework (`GPU?`) is detected automatically, but users still have to specify whether or not they are actually using the GPU.
In addition, the relevant issue templates have been updated to direct users to the script. | 01-31-2020 16:47:31 | 01-31-2020 16:47:31 | This is really cool!<|||||>> This is really cool!
Props to spaCy, since I basically stole [the idea](https://github.com/explosion/spaCy/blob/master/spacy/cli/info.py) from them. <|||||>LGTM but I've also pinged @mfuntowicz as he will have good insight<|||||>Is there a way to see which tests are run in `check_code_quality`? I'm curious as to why it fails.<|||||>It fails because the file `/home/circleci/transformers/src/transformers/commands/info.py` would be reformatted by black.
You can run `make style` at the root to set everything to black style.<|||||>> It fails because the file `/home/circleci/transformers/src/transformers/commands/info.py` would be reformatted by black.
>
> You can run `make style` at the root to set everything to black style.
Thanks. Is that different from running `black .`? I did that, and it formats all files (not only info.py). What I mean is that that suggests that all previously committed files must have also failed the test (since black changes them when I run the command) but during the test only info.py fails. Perhaps you are using a specific stylesheet.
**EDIT**: never mind, found the actual command in the [Makefile](https://github.com/huggingface/transformers/blob/master/Makefile). It's the same result indeed.<|||||>The black command we used uses a specific line-length which is different to the default (we use a line length of 119, we like it better). We also set it to be based on Python 3.5.
I believe the test now fails because of isort; that's weird, it should have been triggered by the `make style` as well and should have fixed the imports on its own.<|||||>> The black command we used uses a specific line-length which is different to the default (we use a line length of 119, we like it better). We also set it to be based on Python 3.5.
>
> I believe the test now fails because of isort; that's weird, it should have been triggered by the `make style` as well and should have fixed the imports on its own.
I am now manually running black/isort (without the 'check' flag) and pushing those commits in hopes that the tests will then pass. But, correct me if I'm wrong, isn't circleCI supposed to apply these runs (black, isort etc) before running the tests?<|||||>Did you install isort with the exact version that's pinned in `CONTRIBUTING.md`?
If you do, both `make style` and `make quality` should reliably pass.<|||||>It's a very cool feature to provide through the CLI.
I may suggest to rename the command from `info` to `env` as we may want to keep `info` for exposing information about models through cards / config.
What do you think ? @BramVanroy @julien-c @LysandreJik <|||||>> Did you install isort with the exact version that's pinned in `CONTRIBUTING.md`?
>
> If you do, both `make style` and `make quality` should reliably pass.
Ah, I missed the note on isort. Fixed it now.
> It's a very cool feature to provide through the CLI.
>
> I may suggest to rename the command from `info` to `env` as we may want to keep `info` for exposing information about models through cards / config.
>
> What do you think ? @BramVanroy @julien-c @LysandreJik
Good suggestion and future-proof! I renamed the CLI method to env, so the full command is
```
python transformers-cli env
```
Issue templates have been updated and this command has been added to CONTRIBUTING.md as well.
<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=h1) Report
> Merging [#2699](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/161c88f0861e71e757bd4516369e836555cd3ded?src=pr&el=desc) will **decrease** coverage by `0.15%`.
> The diff coverage is `56.78%`.
[](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2699 +/- ##
==========================================
- Coverage 74.24% 74.09% -0.16%
==========================================
Files 92 93 +1
Lines 15215 15247 +32
==========================================
Hits 11297 11297
- Misses 3918 3950 +32
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <ø> (ø)` | :arrow_up: |
| [src/transformers/configuration\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2NhbWVtYmVydC5weQ==) | `100% <ø> (ø)` | :arrow_up: |
| [src/transformers/commands/env.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9lbnYucHk=) | `0% <0%> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.23% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `34.56% <100%> (ø)` | :arrow_up: |
| [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100% <100%> (ø)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.69% <100%> (ø)` | :arrow_up: |
| [src/transformers/modeling\_tf\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <100%> (ø)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `100% <100%> (ø)` | :arrow_up: |
| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/2699/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=footer). Last update [161c88f...da9cf7c](https://codecov.io/gh/huggingface/transformers/pull/2699?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM, thanks a lot @BramVanroy |
transformers | 2,698 | closed | Typo on markdown link in README.md | 01-31-2020 15:40:24 | 01-31-2020 15:40:24 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=h1) Report
> Merging [#2698](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0aa40e9569a71306036de3a217eed55521699604?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #2698 +/- ##
=======================================
Coverage 74.24% 74.24%
=======================================
Files 92 92
Lines 15215 15215
=======================================
Hits 11297 11297
Misses 3918 3918
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=footer). Last update [0aa40e9...9a11e6f](https://codecov.io/gh/huggingface/transformers/pull/2698?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks! |
|
transformers | 2,697 | closed | Albert language model fine tuning not running run_lm_finetuning.py | # 🐛 Bug
## Information
Model I am using (Albert(all types)):
Language I am using the model on (English):
The problem arises when using:
* [ ] the official example scripts: (give details below)
The code returns memory allocation problems when run with any version of ALBERT. I tried to reduce the sequence length and batch size to a minimum setting, but the issue still arises. My setting and the minimized setting both run normally with BERT or RoBERTa; the issue arises only when I change the model to ALBERT.
an example:
`tcmalloc: large alloc 1951195136 bytes == 0x7f750f664000 @ 0x7f76efbf8887 0x7f764c2a1b79 0x7f764c29fb0f 0x7f764c29fc33 0x7f764c26a155 0x7f764c26837e 0x7f764c26bbb1 0x7f764c2606df 0x50a8af 0x50c5b9 0x509d48 0x50aa7d 0x50c5b9 0x508245 0x509642 0x595311 0x5a067e 0x50d966 0x58efc9 0x4c9546 0x5886f4 0x58892e 0x551b81 0x5aa6ec 0x50abb3 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50c5b9 0x508245`
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
language model finetuning for albert
## To reproduce
Steps to reproduce the behavior:
1. in run_lm_finetuning add:
` from transformers import (AlbertConfig,
AlbertForMaskedLM,
AlbertTokenizer,
)`
2.add to MODEL_CLASSES dictionary:
` "albert": (AlbertConfig, AlbertForMaskedLM, AlbertTokenizer),`
3. add file text.txt, a similar txt file to the wiki dataset that's mentioned in the docs.
4.run the finetuning script:
```bash
python transformers/examples/run_lm_finetuning.py \
    --output_dir=output \
    --model_type=albert \
    --model_name_or_path=albert-base-v1 \
    --do_train \
    --train_data_file test.txt \
    --block_size 50 \
    --per_gpu_train_batch_size 2 \
    --max_steps 520000 \
    --weight_decay 0.01 \
    --logging_steps 5000 \
    --mlm
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment
* OS: Google colab
* Python version: 3.7
* PyTorch version: 1.3.1
* `transformers` version (or branch): latest
* Using GPU ? yes
* Distributed or parallel setup ? no
* Any other relevant information:
| 01-31-2020 15:00:27 | 01-31-2020 15:00:27 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I am facing the same problem with BERT fine tuning for a masked language modeling fine tuning task. Can someone please help? I am exactly following https://github.com/huggingface/transformers/tree/master/examples/language-modeling |
transformers | 2,696 | closed | Missing `do_sample` argument for run_generation example | # ❓ Questions & Help
It seems the arguments `k`, `p`, and `temperature` are disabled because `do_sample` is set to False by default. Thus, [run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py) will always use greedy decoding no matter how `k`, `p`, and `temperature` are set, which is misleading.
I think the `do_sample` argument should be included in the code. | 01-31-2020 14:53:27 | 01-31-2020 14:53:27 | You're absolutely correct, I pushed a fix with 7365f01!<|||||>Ran into this issue myself by accident.
IMO, `do_sample=True` should be the default behavior for `generate()` since that's more in line with user expectations.<|||||>I agree with you, the default should be set to `True`. I've changed the default in 6c1b235.<|||||>Decision reverted in #3298 (see the PR for discussion and details).
New default to `do_sample==False`. |
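As an aside, here is a minimal sketch of what explicitly enabling sampling looks like with the `generate()` API discussed above; the model choice and parameter values are illustrative, not taken from the thread:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The meaning of life is", return_tensors="pt")

# With do_sample=False (the default), generate() performs greedy decoding and
# top_k / top_p / temperature have no effect; pass do_sample=True to use them.
output = model.generate(
    input_ids,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.9,
    max_length=50,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```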
transformers | 2,695 | closed | get_linear_schedule_with_warmup method can't be found in optimization.py | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
My transformers package was installed from Anaconda Cloud using the command "conda install -c conda-forge transformers". When I tried to use AdamW as shown in the example, I found there is no **get_linear_schedule_with_warmup** in the **transformers.optimization.py** file for me to create a scheduler with.
The tasks I am working on are:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment
* OS:
* Python version:
* PyTorch version:
* `transformers` version (or branch):
* Using GPU ?
* Distributed or parallel setup ?
* Any other relevant information:
| 01-31-2020 14:37:51 | 01-31-2020 14:37:51 | There clearly is:
https://github.com/huggingface/transformers/blob/0aa40e9569a71306036de3a217eed55521699604/src/transformers/optimization.py#L47-L59
Please fill out the complete template - it's there for a reason. If you had shown us which version you're working with, we could probably tell you that your version is too old, or at least dig further. Now we can't help at all; it's just guesswork.
Give code, full error trace, and your PyTorch/Tensorflow version.<|||||>I thought my transformers was the latest version, but I found it's not when I checked on Anaconda Cloud "https://anaconda.org/conda-forge/transformers". The problem has been solved after reinstalling. Thanks for your reply :)<|||||>In the future, please fill out the complete template. Please close this question if you don't have any more questions.<|||||>OK |
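For context, here is a minimal sketch of how `AdamW` is typically paired with `get_linear_schedule_with_warmup`; the learning rate, step counts, and the `model`/`dataloader` objects below are illustrative placeholders, not code from the thread:
```python
from transformers import AdamW, get_linear_schedule_with_warmup

# model is any PyTorch transformers model and dataloader yields training
# batches (including labels); both are assumed to be defined elsewhere.
optimizer = AdamW(model.parameters(), lr=5e-5)

# Warm the learning rate up linearly for 100 steps, then decay it linearly
# to 0 over the remainder of a 1000-step schedule.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000
)

for batch in dataloader:
    loss = model(**batch)[0]
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```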
transformers | 2,694 | closed | AutoModel fails to load FlauBERT with `output_hidden_states` | MWE:
```python
import transformers
model = transformers.AutoModel.from_pretrained("flaubert-base-cased", output_hidden_states=True)
```
Tested on rev 5a6b138 fails with
```console
Traceback (most recent call last):
File "mwe.py", line 3, in <module>
model = transformers.AutoModel.from_pretrained("flaubert-base-cased", output_hidden_states=True)
File "<redacted>/transformers/modeling_auto.py", line 377, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "<redacted>/transformers/modeling_utils.py", line 463, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
TypeError: __init__() got an unexpected keyword argument 'output_hidden_states'
```
This works when loading directly from `transformers.FlaubertModel`. | 01-31-2020 13:38:43 | 01-31-2020 13:38:43 | Hi! the `output_hidden_states` should be specified in the configuration when loading from `AutoModel` classes. Doing the following is necessary to instantiate a class with hidden states:
```py
import transformers
config = transformers.AutoConfig.from_pretrained("flaubert-base-cased", output_hidden_states=True)
model = transformers.AutoModel.from_pretrained("flaubert-base-cased", config=config)
```
However, your issue showed me there was a bug with the loading of FlauBERT models with AutoModels, which I patched in https://github.com/huggingface/transformers/commit/ff6f1492e8296f511682fd56fcf62be0854723a2.
Please install from source to have the fix: `pip install git+https://github.com/huggingface/transformers`, I'll push a pypi patch for this soon.<|||||>Oh, okay, thanks. From what I understood of [AutoModel](https://huggingface.co/transformers/model_doc/auto.html#transformers.AutoModel.from_pretrained) doc, I thought all `**kwargs` in `AutoModel.from_pretrained` were passed to the config.<|||||>Indeed, the documentation seems misleading in that regard. I'm updating it.<|||||>`AutoTokenizer` seems to have the same problem as the one you fixed in `AutoModel`
```python
transformers.AutoTokenizer.from_pretrained("flaubert-base-uncased")
```
results in
```console
OSError: Model name 'flaubert-base-uncased' was not found in tokenizers model name list (xlm-mlm-en-2048, xlm-mlm-ende-1024, xlm-mlm-enfr-1024, xlm-mlm-enro-1024, xlm-mlm-tlm-xnli15-1024, xlm-mlm-xnli15-1024, xlm-clm-enfr-1024, xlm-clm-ende-1024, xlm-mlm-17-1280, xlm-mlm-100-1280)
```<|||||>Indeed it does, thanks @Evpok !<|||||>Should have been patched and tested with 1e82cd8.<|||||>Thanks for the quick response ♥<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I think this change should be highlighted more prominently in the documentation, as I ran into this problem too. |
transformers | 2,693 | closed | Input file format for examples/run_lm_finetuning.py | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I wanted to use the `examples/run_lm_finetuning.py` script from the transformers repository on a pretrained BERT model. However, from the documentation it is not evident how a corpus file should be structured (apart from the reference to the Wiki-2 dataset). I've tried
- One document per line (multiple sentences)
- One sentence per line. Documents are separated by a blank line (this I found in some older pytorch-transformers documentation)
Looking at the code of `examples/run_lm_finetuning.py`, it is not directly evident how sequence pairs for the Next Sentence Prediction objective are formed. Would the --line-by-line option help here? I'd be grateful if someone could give me some hints on how a text corpus file should look.
Many thanks and cheers,
nminds
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
[SO link](https://stackoverflow.com/questions/60001698/how-exactly-should-the-input-file-be-formatted-for-the-language-model-finetuning) | 01-31-2020 10:41:34 | 01-31-2020 10:41:34 | Hi, two datasets are available in `run_lm_finetuning.py`:
- `TextDataset`, which just splits your data into chunks with no attention whatsoever to the line returns or separators
- `LineByLineTextDataset`, which splits your data into chunks, being careful not to overstep line returns as each line is interpreted as a document.
None of those datasets, nor the run_lm_finetuning script in itself handle the next sentence prediction objective. It handles masked language modeling when the `--mlm` flag option is passed, and the causal language modeling when no `--mlm` flag option is passed.<|||||>@LysandreJik
[this](https://github.com/huggingface/transformers/blob/33d3072e1c54bcd235447b98c6dea1b4cb71234c/examples/run_lm_finetuning.py#L135) will drop tokens beyond len of `512`?<|||||>> Hi, two datasets are available in `run_lm_finetuning.py`:
>
> * `TextDataset`, which just splits your data into chunks with no attention whatsoever to the line returns or separators
> * `LineByLineTextDataset`, which splits your data into chunks, being careful not to overstep line returns as each line is interpreted as a document.
>
> None of those datasets, nor the run_lm_finetuning script in itself handle the next sentence prediction objective. It handles masked language modeling when the `--mlm` flag option is passed, and the causal language modeling when no `--mlm` flag option is passed.
Hi, Is there any particular reason to exclude the next sentence prediction objective? <|||||>@nauman-chaudhary it will drop the tokens beyond the maximum input size of the model. For BERT, it is indeed 512. Feel free to implement a more complex behavior if your dataset has a lot lines that go over the 512 token limit.
@fajri91 Yes, for a couple of reasons:
- Having a simple MLM/CLM objective is simpler, both to understand (user) and to maintain (maintainer)
- The RoBERTa paper has proven that the NSP objective was not particularly helpful
- Only BERT has the class (`BertForPreTraining`) to manage the NSP objective, whereas `run_lm_finetuning` supports several models available in the library
- If anyone wants to implement the NSP objective, it is very easy for them to change the dataset/training loop to do so.<|||||>@LysandreJik given the fact that tokenizer drops input size above 512 is it worth to prepare the input dataset by using sliding window over documents? What I mean by that is instead of dropping a lot of text, I will transform long document into i.e 4 sentences in each line, with sliding window over the whole document.<|||||>Yes, this is a reasonable strategy.<|||||>Thanks for clarification, it's super helpful to know this!<|||||>> Hi, two datasets are available in `run_lm_finetuning.py`:
>
> * `TextDataset`, which just splits your data into chunks with no attention whatsoever to the line returns or separators
> * `LineByLineTextDataset`, which splits your data into chunks, being careful not to overstep line returns as each line is interpreted as a document.
>
> None of those datasets, nor the run_lm_finetuning script in itself handle the next sentence prediction objective. It handles masked language modeling when the `--mlm` flag option is passed, and the causal language modeling when no `--mlm` flag option is passed.
hello @LysandreJik
if i realized TextDataset correctly , it makes a long sentence of all the corpus and cut them 512 by 512 and gives it to the model (if the max_len is supposed to be 512 ) and then we will have no padding
but LineByLineTextDataset , pad each line to reach 512 and gives it to model
by which one we will get better results in downstream tasks ?
(and in downstream tasks we are unforced to do padding)
thanks!<|||||>@marrrcin @LysandreJik thank you for the comments! I have a domain specific corpus; Geology, which is about 2GB text file. I prepared an input file where, I scanned a 4-sentences window on my raw text, and wrote each 4-sentence window onto a new line.
So, my modified input file has 4 sentences per line ending with \n. Now I am training BERT MLM training with BertWordPieceTokenizer from scratch. run_language_model.py gets to LineByLineTextDataSet and takes almost 2 hours to process my input file. I feel this is quite slow for only 2GB file.
My input looks something like this, there is no space between lines and there are 2 millions of lines:
{
first sentence starts here. second sentence. now third sentence. and forth sentence
new line with fifth sentence. sixth sentence here, then seventh sentence. finally eight sentence
new line with night sentence, then tenth sentence ...
...
}
is there anyway to speed this up? <|||||>just upgraded HF to v.2.9 and now it took 37 minutes instead of 108 minutes. thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 2,692 | closed | Regarding distlbert uncased model's size | I finetuned the distlbert uncased model, and I had a thought that since it is a lower layer model, it should have less weight. But, to my surprise, I find that the model generated after finetuning, I saved it as :
tf.saved_model.save(model, "./tempdir/distilbert/2/")
and the tf model got saved.
this model has a very high size(of 810 Mb), although it should be less .And the original bert mdoel which is a large model has a size of 410 Mb.
Please look into the matter | 01-31-2020 09:54:32 | 01-31-2020 09:54:32 | please reply<|||||>You're comparing two formats that are different, so the comparison doesn't really make sense. The BERT model weighs 410MB when saved as a HDF5 file, whereas DistilBERT weighs 810MB when saved as a SavedModel, which also contains the graph and variables.
Saving both files in HDF5:
```py
from transformers import TFBertModel, TFDistilBertModel
bert = TFBertModel.from_pretrained("bert-base-uncased")
distilbert = TFDistilBert.from_pretrained("distilbert-base-uncased")
bert.save_pretrained("bert")
distilbert.savePretrained("distilbert")
```
`ls` in "bert" -> 414MB for the `tf_model.h5`
`ls` in "distilbert" -> 254MB for the `tf_model.h5`<|||||>I got your answer, that's correct.But, since I need to serve the model using tfserving,so I need a SavedModel format only.
1)Or is there any way to serve .h5 models through tfserving?
1) Is there any way to serve .h5 models through TF Serving?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.