repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 9,110 | closed | Not able to train RoBERTa language model from scratch | I tried training a RoBERTa language model from scratch using
```
!python ./run_mlm.py \
    --model_name_or_path roberta-base \
    --train_file './data_lm.txt' \
    --do_train \
    --line_by_line \
    --num_train_epochs 3 \
    --output_dir ./roberta
```
Due to the limits on the colab storage, I deleted all the checkpoints generated during the training process. So finally my model directory has the following files -
1. config.json
2. merges.txt
3. pytorch_model.bin
4. special_tokens_map.json
5. tokenizer.config
6. vocab.json
But after loading my trained model using RobertaTokenizer, RobertaForSequenceClassification and fine-tuning it for my classification task, I am receiving almost the same accuracy as by loading and fine-tuning the readily available 'roberta-base'.
Also, when I try loading it as
`model = RobertaModel.from_pretrained('./roberta')`
I get the warning:
```
Some weights of RobertaModel were not initialized from the model checkpoint at ./roberta/ and are newly initialized: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
So, my question is: is there something wrong in the training procedure of the language model on my dataset, or in the loading process? Or were the checkpoints that I deleted important?
| 12-14-2020 21:04:15 | 12-14-2020 21:04:15 | The `run_mlm` script trains a `RobertaForMaskedLM` model, which does not have a pooler layer. That's why you get this warning when using this pretrained model to initialize a `RobertaModel`.<|||||>Thanks @sgugger! Now I get where the problem was.
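For anyone else hitting this, here is a minimal sketch of the distinction (the `./roberta` path is the checkpoint directory from this thread; everything else is illustrative):
```python
from transformers import RobertaForMaskedLM, RobertaForSequenceClassification

# Loading with the same head that run_mlm.py trained -> no warning about newly initialized weights
mlm_model = RobertaForMaskedLM.from_pretrained("./roberta")

# Loading with a different head -> its new layers are freshly initialized,
# which is exactly what the warning is reporting.
clf_model = RobertaForSequenceClassification.from_pretrained("./roberta", num_labels=2)
```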
Moreover, I found some good tutorials on it. Here are the links for those who need them.
https://zablo.net/blog/post/training-roberta-from-scratch-the-missing-guide-polish-language-model/
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb |
transformers | 9,109 | closed | Cannot disable logging from trainer module | @sgugger @stas00
- `transformers` version: 3.2.0
- Platform:
- Python version: 3.7.6
- PyTorch version (GPU?): 1.6.0, Tesla V100
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Parallel
I am using the Hugging Face Trainer class for NER fine-tuning. Whenever I turn evaluation on with the `--do_eval` argument, my console gets overwhelmed with a printed dictionary that appears to be coming from evaluation that is happening inside of the Trainer.
The dictionary has the keys:
` 'eval_loss', 'eval_accuracy_score', 'eval_precision', 'eval_recall', 'eval_f1', 'eval_class_report', 'eval_predictions' `
It's especially hard to read the console with NER predictions, because `eval_predictions` is a list with each token receiving an IOB tag.
I tried suppressing logging from transformers with this solution https://github.com/huggingface/transformers/issues/3050. I also tried disabling all logging below CRITICAL level. The problem persisted, and I noticed that the console output of the evaluation dictionary appeared to be coming from a print statement.
I tried suppressing all print statements from the Trainer's `train(...)` method, using this solution https://stackoverflow.com/questions/977840/redirecting-fortran-called-via-f2py-output-in-python/978264#978264. That worked, but now I have no logging of training at all :(.
| 12-14-2020 19:55:26 | 12-14-2020 19:55:26 | Could you elaborate on the script you're using? `run_ner.py` does not report `eval_class_report` or `eval_predictions` and there are no print statements in it, nor are there any in the `Trainer.train` method.<|||||>Yes, it's a custom script. The script creates an `AutoModelForTokenClassification`, passes it to the `Trainer` and calls the `Trainer.train` method.
We are using WandB to plot a confusion matrix, so we define our own `compute_metrics` function which we also pass to the `Trainer` (sorry, I should have stated this earlier). `compute_metrics` does return a dictionary with the keys `'predictions', 'class_report' and 'target'`. It looks like the one that gets output from `Trainer` has the prefix `'eval_'` in front of each key produced by `compute_metrics`.
I can't find anywhere in our script or our custom dependencies where we print or log this dictionary, and when I suppress console output from `Trainer` then the problem stops.
<|||||>Is it just a matter of changing the log level? `run_ner.py` sets it to `INFO` for the main process (it does it twice - once for the root logger and once for the transformers logger):
https://github.com/huggingface/transformers/blob/251eb70c979d74d3823e999236ff3621b07510a1/examples/token-classification/run_ner.py#L158-L168
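For reference, a minimal sketch of lowering those log levels in a custom script (which loggers actually need quieting depends on your setup):
```python
import logging
import transformers

# Silence the transformers logger and whatever was configured on the root logger
transformers.logging.set_verbosity_error()
logging.getLogger().setLevel(logging.ERROR)
```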
<|||||>Maybe your `Trainer` ends up with a `PrinterCallback` that prints all the logs. You can remove this with
```
from transformers.trainer_callback import PrinterCallback
trainer.remove_callback(PrinterCallback)
```<|||||>Fixed! I had to upgrade to 4.0. Ended up having to upgrade to import the PrinterCallback, but I believe the upgrade itself fixed the problem. <|||||>Even better if the upgrade fixes the problem! There were printing statements in older versions indeed.<|||||>Same issue here - how do I stop all prints coming from `trainer.predict()`? |
transformers | 9,108 | closed | Time for second encoding is much higher than first time | Hi,
using a BERT model on a single GPU to encode multiple times, one after another, like
```
bert_model = TFBertModel.from_pretrained('bert-base-cased',output_hidden_states=True)
input = tokenizer(data , max_length=MAX_SEQ_lEN,padding="max_length",truncation=True, return_tensors="tf")
outputs1 = bert_model(input)
###time1 : 0.1 seconds
outputs2 = bert_model(input)
### time2: 1.7 seconds
```
gives a disproportionately high time for the second encoding. If the first encoding takes just 0.1 seconds, I would expect each subsequent encoding to also take about 0.1 seconds. I ran this multiple times and it seems to be a pattern that the encodings after the first one are significantly slower.
Can someone explain this behaviour? I assume it is due to the GPU.
```
Env: win 10, python: 3.6, tensorflow 2.3, transformers 3.3.1
GPU: Nvidia mx 150
```
| 12-14-2020 18:36:55 | 12-14-2020 18:36:55 | Could you show the entire code, including how you instantiate the model, as well as your environment information, as mentioned in the template? Thank you.<|||||>that's ok?<|||||>With the following code:
```py
from transformers import TFBertModel, BertTokenizer
from time import time
import tensorflow as tf
print("GPU Available:", tf.test.is_gpu_available())
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
bert_model = TFBertModel.from_pretrained('bert-base-cased',output_hidden_states=True)
input = tokenizer("Hey is this slow?" * 100 , max_length=512,padding="max_length",truncation=True, return_tensors="tf")
for i in range(100):
start = time()
outputs = bert_model(input)
print(time() - start)
```
Running on CPU doesn't increase the time for me:
```
GPU Available: False
0.5066382884979248
0.5038580894470215
0.5125613212585449
0.5018391609191895
0.4927494525909424
0.5066125392913818
0.49803781509399414
0.5140326023101807
0.501518726348877
0.49771928787231445
0.5038976669311523
```
[Running on GPU, no problem either:](https://colab.research.google.com/drive/1tfynzpOiQJKkEi0vkpKaDXTwhB5-k6Td?usp=sharing)
```
GPU Available: True
0.09349918365478516
0.09653115272521973
0.09893131256103516
0.10591268539428711
0.09297466278076172
0.09105610847473145
0.10088920593261719
0.0935661792755127
0.09639692306518555
0.10130929946899414
0.0947415828704834
0.09380221366882324
```<|||||>Thanks. If I run exactly your code I observe an increasing time! Very strange, but I assume this has something to do with the GPU memory not being released?
```
0.07779383659362793
0.20029330253601074
0.2085282802581787
0.22140789031982422
0.23041844367980957
0.22839117050170898
0.23337340354919434
0.22336935997009277
0.22971582412719727
0.22768259048461914
0.22839140892028809
0.22934865951538086
0.23038363456726074
0.22646212577819824
0.23062443733215332
0.22713351249694824
0.24032235145568848
0.24936795234680176
0.24984216690063477
0.2523007392883301
0.2481672763824463
0.2532966136932373
0.24833273887634277
0.2513241767883301
0.2522923946380615
0.2536492347717285
0.25013017654418945
0.25212621688842773
0.24585843086242676
0.25535058975219727
0.2563152313232422
0.2423419952392578
0.6144394874572754
0.647824764251709
0.6494302749633789
0.6406776905059814
0.6507377624511719
0.6411724090576172
0.6513652801513672
0.6484384536743164
0.6489207744598389
0.6405856609344482
0.6493120193481445
0.6484384536743164
0.6372919082641602
0.6494011878967285
0.6433298587799072
0.65077805519104
0.6475985050201416
0.6383304595947266
0.6525297164916992
0.6413178443908691
0.6475212574005127
0.6485188007354736
0.64430832862854
0.6478779315948486
0.6457436084747314
0.7288320064544678
0.6573460102081299
0.6572368144989014
0.5861053466796875
0.6324939727783203
0.722456693649292
0.6353938579559326
0.6324222087860107
0.6373186111450195
0.6216456890106201
0.6627655029296875
0.7275354862213135
0.6035926342010498
0.6590445041656494
0.5936176776885986
0.6416335105895996
0.6400752067565918
1.1317992210388184
1.2438006401062012
1.2430295944213867
1.2435650825500488
1.2585129737854004
1.2704930305480957
1.2204067707061768
1.2424969673156738
1.2366819381713867
1.2533769607543945
1.2510595321655273
1.2426464557647705
1.2566087245941162
1.2392685413360596
```
If you don't mind, my issue regarding the usage for the input of the tokenizer is still open. :)
https://github.com/huggingface/transformers/issues/7674<|||||>Indeed, I'll try and check the issue ASAP. Thanks for the reminder!<|||||>@LysandreJik . Thank you! But, do you have any idea for my issue? It seems it is a gpu issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,107 | closed | Fix T5 model parallel test | The model was defined in the wrong model tester. | 12-14-2020 18:32:01 | 12-14-2020 18:32:01 | |
transformers | 9,106 | closed | Cannot load community model on local machine | ## Environment info
- `transformers` version: 3.4.0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Probably @LysandreJik
## Information
Model I am using: https://huggingface.co/huggingtweets/xinqisu
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior: (this is the instruction on the model page)
```
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/xinqisu')
generator("My dream is", num_return_sequences=5)
```
It gives me
```
OSError: Can't load config for 'huggingtweets/xinqisu'. Make sure that:
- 'huggingtweets/xinqisu' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'huggingtweets/xinqisu' is the correct path to a directory containing a config.json file
```
## Expected behavior
The generator should work with the snippet above. I have trained other `huggingtweets` models and they still work with the same code; for example, the following still works and downloads the model successfully.
```
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/billgates')
generator("My dream is", num_return_sequences=5)
```
| 12-14-2020 15:37:08 | 12-14-2020 15:37:08 | Hello! I believe this is so because this model uses the new weights system, which was introduced in v3.5.1. Please upgrade your transformers version to at least v3.5.1; we recommend the latest (v4.0.1):
```
pip install -U transformers==4.0.1
```<|||||>@LysandreJik Thanks for the quick reply! It works now!
transformers | 9,105 | closed | Added TF OpenAi GPT1 Sequence Classification | This PR implements Sequence classification for TF OpenAi GPT1 model.
TFOpenAIGPTForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. Transformer-XL, GPT-2) do.
Fixes #7623
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik @jplu
Not sure why test cases are failing for 'run_tests_tf'. @LysandreJik .
Let me know if any action is required from my side.<|||||>Can you rebase on master? A fix has been recently merged.<|||||>The tests have already been fixed on `master`, merging! Thanks a lot @spatil6 |
transformers | 9,104 | closed | Cannot load custom tokenizer for Trainer | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-5.9.13-zen1-1-zen-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
Model I am using: My own
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] my own task or dataset: (give details below)
I want to fine-tune a model on my own dataset. For now it doesn't matter if I fine-tune BERT, DistilBERT or another model, I just want good embeddings for text similarity (cosine distance).
## To reproduce
Steps to reproduce the behavior:
1. Read the [How to train tutorial](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb)
2. Train your own tokenizer (working perfectly) and got `model-vocab.json` and `model-merges.txt`
3. Load and encode with :
```
tokenizer = ByteLevelBPETokenizer("./models/custom/my_model-vocab.json", "./models/custom/my_model-merges.txt")
```
This works nicely!
4. Try to do the same with a DistilBertTokenizerFast to use with the `Trainer` class
```
tokenizer = DistilBertTokenizerFast.from_pretrained('./models/custom', max_len=512)
```
5. Get the error `check that './models/custom' is the correct path to a directory containing relevant tokenizer files`
Note: I also tried to add a `config.json` file next to the merges and vocab files, which seemed to be missing, but it doesn't change anything.
I also tried a RobertaTokenizerFast (and the 'not fast' version) but got the same problem.
## Expected behavior
Train a custom tokenizer and be able to load it with a ModelTokenizer for the Trainer.
(The BPE tokenizer that works does not have the `mask_token` attribute needed by the dataset loader.)
| 12-14-2020 14:20:31 | 12-14-2020 14:20:31 | After playing with it more and changing files, names etc... I managed to make it work with Roberta. I guess it was a stupid name error... Sorry for taking 5 minutes of your time reading this.
I realize it couldn't work with DistilBERT (BERT) as the tokenizers are different.
In the end, the model is training.
Maybe it will help someone else one day.
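For reference, a minimal sketch of the naming that works (paths and training parameters below are assumptions, not the exact setup used here):
```python
from tokenizers import ByteLevelBPETokenizer
from transformers import RobertaTokenizerFast

# Train the tokenizer and save it WITHOUT a custom prefix: save_model() then writes
# exactly vocab.json and merges.txt, the file names from_pretrained() looks for.
tok = ByteLevelBPETokenizer()
tok.train(files=["./data.txt"], vocab_size=30_000, min_frequency=2,
          special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
tok.save_model("./models/custom")

tokenizer = RobertaTokenizerFast.from_pretrained("./models/custom", max_len=512)
```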
Have a good day. <|||||>Glad you could resolve your issue! |
transformers | 9,103 | closed | Seq2Seq training calculate_rouge with precision and recall | ## Environment info
- `transformers` version: master
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): pytorch-lightning==1.0.4
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
Trainer: @sgugger
examples/seq2seq: @patil-suraj
## Information
Model I am using (Bert, XLNet ...): bart-base
The tasks I am working on is:
summarization on XSUM
## To reproduce
Change `calculate_rouge` function in `utils.py` with `return_precision_and_recall=True`.
Fine-tune any seq2seq model with the official script `finetune.py`:
```
!python3 $finetune_script \
--model_name_or_path facebook/bart-base \
--tokenizer_name facebook/bart-base \
--data_dir $data_dir \
--learning_rate 3e-5 --label_smoothing 0.1 --num_train_epochs 2 \
--sortish_sampler --freeze_embeds --adafactor \
--task summarization \
--do_train \
--max_source_length 1024 \
--max_target_length 60 \
--val_max_target_length 60 \
--test_max_target_length 100 \
--n_train 8 --n_val 2 \
--train_batch_size 2 --eval_batch_size 2 \
--eval_beams 2 \
--val_check_interval 0.5 \
--log_every_n_steps 1 \
--logger_name wandb \
--output_dir $output_dir \
--overwrite_output_dir \
--gpus 1
```
Throws the error
```
Validation sanity check: 100%|ββββββββββ| 1/1 [00:01<00:00, 1.67s/it]Traceback (most recent call last):
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 443, in <module>
main(args)
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 418, in main
logger=logger,
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/lightning_base.py", line 389, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 54, in train
results = self.train_or_test()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 68, in train_or_test
results = self.trainer.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 462, in train
self.run_sanity_check(self.get_model())
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 650, in run_sanity_check
_, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 597, in run_evaluation
num_dataloaders=len(dataloaders)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 196, in evaluation_epoch_end
deprecated_results = self.__run_eval_epoch_end(num_dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 247, in __run_eval_epoch_end
eval_results = model.validation_epoch_end(eval_results)
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 190, in validation_epoch_end
k: np.array([x[k] for x in outputs]).mean() for k in self.metric_names + ["gen_time", "gen_len"]
File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 190, in <dictcomp>
k: np.array([x[k] for x in outputs]).mean() for k in self.metric_names + ["gen_time", "gen_len"]
File "/usr/local/lib/python3.6/dist-packages/numpy/core/_methods.py", line 163, in _mean
ret = ret / rcount
TypeError: unsupported operand type(s) for /: 'dict' and 'int'
```
From my understanding self.metric_names should be a list. | 12-14-2020 13:57:17 | 12-14-2020 13:57:17 | Hi there. Please note that this script is not maintained anymore and is provided as is. We only maintain the `finetune_trainer.py` script now.<|||||>Ok, I will switch to that one. Thank you |
transformers | 9,102 | closed | Unexpected logits shape on prediction with TFRobertaForSequenceClassification | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-4.9.0-11-amd64-x86_64-with-debian-9.11
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0a0+bf2bbd9 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Distributed
I am using TFRobertaForSequenceClassification to create a classifier. According to the documentation, the logits output should have a shape of (batch_size, num_labels), which makes sense. However, I get (batch_size, seq_length, num_labels).
Code to reproduce:
```
from transformers import TFRobertaForSequenceClassification, RobertaConfig
import numpy as np
seq_len = 512
classifier = TFRobertaForSequenceClassification(RobertaConfig())
#create random inputs for demo
input_ids = np.random.randint(0,10000, size=(seq_len,))
attention_mask = np.random.randint(0,2, size=(seq_len,))
token_type_ids = np.random.randint(0,2, size=(seq_len,))
#make a prediction with batch_size of 1
output = classifier.predict([input_ids, attention_mask, token_type_ids])
print(output.logits.shape)  # -> prints out (512, 2)
```
## Expected behavior
Logits in the shape of (batch_size, num_labels), i.e. (1, 2)
| 12-14-2020 13:39:42 | 12-14-2020 13:39:42 | Hello! The main issue here is that your arrays are of shape `(seq_length)`, whereas they should be of shape `(batch_size, seq_length)`, even if the batch size is 1.
Updating your code to reflect that:
```py
from transformers import TFRobertaForSequenceClassification, RobertaConfig
import numpy as np
bs = 1
seq_len = 510
classifier = TFRobertaForSequenceClassification(RobertaConfig())
#create random inputs for demo
input_ids = np.random.randint(0,10000, size=(bs, seq_len,))
attention_mask = np.random.randint(0,2, size=(bs, seq_len,))
token_type_ids = np.random.randint(0,2, size=(bs, seq_len,))
#make a prediction with batch_size of 1
output = classifier.predict([input_ids, attention_mask, token_type_ids])
print(output.logits.shape) # -> outputs (1, 2)
```
However, there seems to be an error as the model cannot handle a sequence length of 512 when used this way. @jplu running the above code with a sequence length of 512 results in the following error:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,510] = 512 is not in [0, 512)
[[node tf_roberta_for_sequence_classification/roberta/embeddings/position_embeddings/embedding_lookup (defined at /home/lysandre/Workspaces/Python/transformers/src/transformers/models/roberta/modeling_tf_roberta.py:199) ]] [Op:__inference_predict_function_8030]
Errors may have originated from an input operation.
Input Source operations connected to node tf_roberta_for_sequence_classification/roberta/embeddings/position_embeddings/embedding_lookup:
tf_roberta_for_sequence_classification/roberta/embeddings/add (defined at /home/lysandre/Workspaces/Python/transformers/src/transformers/models/roberta/modeling_tf_roberta.py:122)
Function call stack:
predict_function
```
Using a smaller sequence length doesn't raise the error. Do you mind weighing in on the issue?<|||||>Yep, you are limited to 510 tokens + 2 extra tokens (beginning + end)<|||||>After talking about it a bit offline with @jplu we realize there might be an issue with the `predict` method when passing in the values as a list. Could you try passing them as a dictionary instead?
Doing this instead:
```py
output = classifier.predict({"input_ids": input_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids})
```<|||||>Hei! Thank you for the feedback. I passed the parameters as a dict with everything else unchanged but still get the output as (seq_len, num_labels) unfortunately. <|||||>Can you try this:
```
from transformers import TFRobertaForSequenceClassification, RobertaConfig
import numpy as np
bs = 1
seq_len = 510
classifier = TFRobertaForSequenceClassification(RobertaConfig())
input_ids = np.random.randint(0,10000, size=(bs, seq_len,))
attention_mask = np.random.randint(0,2, size=(bs, seq_len,))
token_type_ids = np.zeros(shape=(bs, seq_len,))
classifier.predict({"input_ids": input_ids, "attention_mask": attention_mask, "token_type_ids": token_type_ids})
```<|||||>Yes, this works with seq_len = 510. It might help to state this behaviour in the docs, or perhaps to raise an error or show a warning when one tries to input an unbatched sample. It is also a bit confusing that seq_len needs to be 510 and not 512 to account for the extra tokens (and the error received when one tries with 512 is a bit murky). Anyway, thanks for the help. I'll go ahead and close this. |
transformers | 9,101 | closed | Fix a broken link in documentation | # What does this PR do?
Fixes a broken link to the BERTology example in documentation
Fixes #9100
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
documentation: @sgugger
| 12-14-2020 13:00:25 | 12-14-2020 13:00:25 | |
transformers | 9,100 | closed | Link to BERTology example is broken | Link to BERTology example is broken in Documentation (https://huggingface.co/transformers/bertology.html)
| 12-14-2020 12:57:36 | 12-14-2020 12:57:36 | |
transformers | 9,099 | closed | bug with _load_optimizer_and_scheduler in trainer.py | ## Environment info
- `transformers` version: 3.5.1
- Platform: GPU
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer: @sgugger
Text Generation: @patrickvonplaten
## Information
I am using finetune_trainer.py. What I am observing is that in trainer.py, when `_load_optimizer_and_scheduler` is called and the model_path folder exists, it ignores the learning rate the user set, meaning it continues with the saved learning rate rather than the one the user specified. Could you have a look please? Thanks.
| 12-14-2020 11:00:20 | 12-14-2020 11:00:20 | Hi @rabeehk ,
In `_load_optimizer_and_scheduler` if `model_path` exists and `optimizer` and `scheduler` `state_dict` is found then that means you are loading from a saved checkpoint and continue training from there, so the `lr` is read from the `scheduler` and used instead of the set LR. This is expected behaviour.<|||||>Hi Suraj,
Thanks for the reply. I have a couple of questions on this. 1) I see this is ignoring the training epochs when loading from the saved checkpoints, so it does not train for the number of epochs set; how could I resolve it? Also, if I want to change the LR, could I load from the checkpoint but change the LR? Could you give me some information on how loading the trained optimizer could help?
To explain better: I train a model for X epochs, then I want to fine-tune it on other datasets for an extra Y epochs with a different learning rate. For this I pass the updated model to the trainer, but then should I pass the model_path so it loads from the saved optimizer checkpoint? And why does this ignore the set number of epochs?
Thanks
<|||||>If you want to fine-tune the saved checkpoint on another dataset then you could save it in a different path or remove the saved `optimizer` and `scheduler` files.
Also @sgugger might have a better answer here.<|||||>Hi Suraj,
I am having a similar problem here.
When the trainer continues from a checkpoint, i.e. `trainer.train(check_point_path)`, I notice a peak in the learning curve. I suspect that is related to what Rabeeh has mentioned.
Please have a look at the learning curve I got after I had to resume the training twice.

Any ideas?<|||||>> this I pass the updated model to trainer, but then should I pass the model_path so it loads from the saved checkpoint of optimizer? and why this is ignoring the set number of epochs?
Passing a `model_path` to the train method is done when you want to resume an interrupted training, which is why it does not do all epochs (it resumes the training from where you were). If you want to do a new training, you should not use that argument and should manually pass the optimizer/scheduler you want to use at init.
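A minimal sketch of that second case (the model, dataset and step counts below are placeholders):
```python
import torch
from transformers import Trainer, TrainingArguments, get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # the new LR you actually want
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=1000)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./new_run", num_train_epochs=3),
    train_dataset=new_train_dataset,
    optimizers=(optimizer, scheduler),
)
trainer.train()  # no model_path argument, so nothing is restored from an old checkpoint
```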
@abdullah-alnahas I have no idea what your plot is since you haven't told us how you generated it. <|||||>Thanks for your response @sgugger , and sorry for not making myself clear.
I am training an Electra model from scratch using the [`Trainer` API](https://huggingface.co/transformers/main_classes/trainer.html). I have interrupted the trainer twice, then resumed training with `trainer.train(latest_checkpoint_path)`.
After that, I have generated the learning curve plot from `{latest_checkpoint_path}/trainer_state.json`'s `log_history` using `step` as the x axis, and `loss` as the y axis.
My question: Is it normal that the learning curve peaks after resuming the training from a checkpoint after an interruption?<|||||>The loss is reinitialized to 0 (it's not saved with the checkpoints) so it could come from this. There were also some recent changes in how the loss is logged so having your transformers version would help. The CI tests the final values of the weights of a (small) model are the same with a full training or resumed training, so I think this is just some weird reporting of the loss.<|||||>thanks Suraj and everyone, makes sense not to initialize the optimizers.<|||||>Hi @sgugger,
I encountered the same issue on Transformers 4.3.0. I think the problem is not the loss being reinitialized as 0, but that the model is not being loaded from model_path. Only `TrainerState` is loaded but not the model weights. I looked through the code before concluding this, but as a sanity check, the current code will run even if `pytorch_model.bin` is not in the checkpoint directory, confirming that it's not being loaded at all. It's odd that the CI tests are passing...
Anyway I modified `trainer.py:train()` under the code block:
```
# Check if continuing training from a checkpoint
if model_path and os.path.isfile(os.path.join(model_path, "trainer_state.json")):
...
self._globalstep_last_logged = self.state.global_step
if isinstance(self.model, PreTrainedModel):
model = model.from_pretrained(model_path)
if not self.is_model_parallel:
model = model.to(self.args.device)
else:
state_dict = torch.load(os.path.join(model_path, WEIGHTS_NAME))
model.load_state_dict(state_dict)
```
`self._globalstep_last_logged = self.state.global_step` ensures the first logging of the loss is correct. `self._globalstep_last_logged` should not be 0 (that line is removed in the later part of the code)
The training is properly resumed after this.
<|||||>`Trainer` does not handle the reloading of the model indeed, which can be confusing. So l'll add that functionality this afternoon!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,098 | closed | [RAG, Bart] Align RAG, Bart cache with T5 and other models of transformers | # What does this PR do?
In Transformers, the cache should always have the same structure. This becomes especially important for composite models like `RAG` and `EncoderDecoder` that expect all models to have the same cache.
Bart and T5 had different caches with Bart being most different from the standard cache of the library.
This PR aligns the `past_key_values` cache of Bart/Rag with all other models in the library. In general, the philosophy should be:
the past_key_value should have exactly one level for each layer, no matter whether the model is a decoder-only a.k.a. GPT2 or BART. This was not correctly refactored in BART (it should have been implemented 1-to-1 as in T5). No breaking changes here though.
- `past_key_value` tuple for each layer should always be a tuple of tensors, **not** a tuple of a tuple
- for decoder-only models (GPT2), the tuple for each layer contains 2 tensors: key and value states
- for seq2seq (BART/T5), the tuple for each layer contains 4 tensors: key and value states of uni-directional self-attention, saved key and value states for cross-attention
This doesn't break any backward compatibility and should fix some RAG problems (@ratthachat). All RAG, Bart slow tests are passing and changes correspond just to the tuple structure.
PR is blocking me for TFBart refactor -> will merge already.
cc @LysandreJik, @sgugger, @patil-suraj for info.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-14-2020 10:36:53 | 12-14-2020 10:36:53 | |
transformers | 9,097 | closed | Is the LayoutLM working now? | I am getting endless errors when trying to use LayoutLMForTokenClassification from transformers for an NER task. Is it just me doing something wrong, or is the function still a work in progress?
I would really appreciate it if anyone could give some information. | 12-14-2020 09:25:13 | 12-14-2020 09:25:13 | Hi @shaonanqinghuaizongshishi
Could you please post the code snippet, stack trace and your env info so that we can take a look ?<|||||>I am working on:
ubuntu 16.04
torch 1.5.0
transformers 3.4.0
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokenizer = LayoutLMTokenizer.from_pretrained(model_path)
model = LayoutLMForTokenClassification.from_pretrained(model_path, num_labels=config.num_labels).to(device)
outputs = model(b_input_ids, bbox=b_boxes, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
```
Then I ran `CUDA_LAUNCH_BLOCKING=1 python layoutLM.py` and got the following error:
```
Traceback (most recent call last):
File "layoutLM.py", line 275, in <module>
train(train_dataloader, validation_dataloader)
File "layoutLM.py", line 162, in train
attention_mask=b_input_mask, labels=b_labels)
File "/Classification/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Classification/lib/python3.6/site-packages/transformers/modeling_layoutlm.py", line 864, in forward
return_dict=return_dict,
File "/Classification/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Classification/lib/python3.6/site-packages/transformers/modeling_layoutlm.py", line 701, in forward
inputs_embeds=inputs_embeds,
File "/Classification/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/Classification/lib/python3.6/site-packages/transformers/modeling_layoutlm.py", line 118, in forward
+ token_type_embeddings
RuntimeError: CUDA error: an illegal memory access was encountered
```
<|||||>Hi there!
I have been investigating the model by making [integration tests](https://github.com/NielsRogge/transformers/blob/e5431da34ab2d03d6114303f18fd70192c880913/tests/test_modeling_layoutlm.py#L318), and turns out it outputs the same tensors as the original repository on the same input data, so there are no issues (tested this both for the base model - `LayoutLMModel` as well as the models with heads on top - `LayoutLMForTokenClassification` and `LayoutLMForSequenceClassification`).
However, the model is poorly documented in my opinion, I needed to first look at the original repository to understand everything. I made a demo notebook that showcases how to fine-tune HuggingFace's `LayoutLMForTokenClassification` on the FUNSD dataset (a sequence labeling task): https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb
Let me know if this helps you!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,096 | closed | Fix variable name in TrainingArguments docstring | # What does this PR do?
Corrects a var name in the docstring for `TrainingArguments` (there is no `ignore_skip_data`)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger | 12-14-2020 07:51:55 | 12-14-2020 07:51:55 | |
transformers | 9,095 | closed | [TorchScript] Received several warning during Summarization model conversion | ## Environment info
Using Transformers 4.0.1 and PyTorch 1.6.0.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-6-6")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-cnn-6-6")
# model = BartModel.from_pretrained("sshleifer/bart-tiny-random")
input_ids = decoder_input_ids = torch.tensor([19 * [1] + [model.config.eos_token_id]])
traced_model = torch.jit.trace(model, (input_ids, decoder_input_ids), strict=False)
traced_model.save("distilbart.pt")
```
I have to disable the strict checking in order to pass. (Error message without disabling the strict flag):
```
RuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
```
Here are the warning messages:
```
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:232: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if not padding_mask.any():
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:175: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if decoder_padding_mask is not None and decoder_padding_mask.shape[1] > 1:
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:716: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert key_padding_mask is None or key_padding_mask.shape == (bsz, src_len)
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:718: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_weights.size() == (bsz * self.num_heads, tgt_len, src_len)
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:736: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert attn_output.size() == (bsz * self.num_heads, tgt_len, self.head_dim)
/Users/qingla/PycharmProjects/pytorch/venv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py:287: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if torch.isinf(x).any() or torch.isnan(x).any():
```
If these warnings are indicated correctly, then the model I traced is highly tied to the dummy input I provided, which would give inaccurate inference results... Any thoughts on how to improve it? @sshleifer Thanks! | 12-14-2020 07:14:20 | 12-14-2020 07:14:20 | Have you tried removing the `strict=False`, and instead specifying `return_dict=False` when you initialize the model with `from_pretrained`? Can you let me know if this fixes your issue?<|||||>> Have you tried removing the `strict=False`, and instead specifying `return_dict=False` when you initialize the model with `from_pretrained`? Can you let me know if this fixes your issue?
Thanks. It seems the error message is gone. However, I still receive the warning messages. Is there any way I can modify the script and make it work without warnings?<|||||>Usually these do not impact the result, as they are Python values that do not change over time. Have you seen an error in prediction?<|||||>@LysandreJik Sounds good. Haven't seen anything wrong yet. :)
transformers | 9,094 | closed | head mask issue transformers==3.5.1 | ## Environment info
- `transformers` version: 3.5.1
- Platform: windows & linux
- Python version: python 3.7
- PyTorch version (GPU?): 1.7.0
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: Yes both CPU and GPU
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Hi
I am using a tiny ALBERT Chinese model as an encoder, and I've also tried to use the ALBERT transformer directly in my code.
The thing is, I have to change the source code a little bit to avoid some head mask issues.
Look at transformers/modeling_albert.py:
line 387: layer_output = albert_layer(hidden_states, attention_mask, **head_mask[layer_index]**, output_attentions)
However, a few lines above, the default head_mask is None, so **TypeError: 'NoneType' object is not subscriptable** is raised.
It's not a deep bug and can easily be avoided by passing a torch.ones head_mask. I just want to bring it up so it might help others who encounter the same problem.
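A minimal sketch of that workaround (the config and inputs below are just illustrative, not the reporter's checkpoint):
```python
import torch
from transformers import AlbertConfig, AlbertModel

config = AlbertConfig()
model = AlbertModel(config)

# Pass an explicit all-ones head mask instead of relying on the default None,
# so head_mask[layer_index] is always defined inside the encoder.
head_mask = torch.ones(config.num_hidden_layers, config.num_attention_heads)
input_ids = torch.tensor([[2, 45, 98, 3]])
outputs = model(input_ids, head_mask=head_mask)
```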
| 12-14-2020 06:56:53 | 12-14-2020 06:56:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,093 | closed | Not able to load T5 tokenizer | Transformers==4.0.0
torch == 1.7.0+cu101
tensorflow == 2.3.0
Platform = Colab notebook
@julien-c @patrickvonplaten
Not able to load T5 tokenizer using
`tokenizer = T5Tokenizer.from_pretrained('t5-base')`
Getting error -

I am able to download the pre-trained model though. | 12-14-2020 06:55:44 | 12-14-2020 06:55:44 | Hey @adithyaan-creator,
as the error message says you need to install the sentence piece library :-)
If you run:
```
pip install sentencepiece==0.1.91
```
before, it should work.<|||||>Thanks @patrickvonplaten . <|||||>Hi @patrickvonplaten
I installed sentencepiece, but it still doesn't seem to be working for me. Please see the snapshot below. Please help.

<|||||>@DesiKeki try sentencepiece version 0.1.94.<|||||>Thanks @adithyaan-creator , it worked!<|||||>Hello @patrickvonplaten
I have gone through the issue and the suggestions given above. However, I am facing the same issue and for some reason, none of the above solutions are proving fruitful.
The issue I am facing is exactly the same as the one stated above:
`from transformers import T5Tokenizer,T5ForConditionalGeneration,Adafactor`
`!pip install sentencepiece==0.1.91`
`tokenizer = T5Tokenizer.from_pretrained("t5-base")`
`print(tokenizer)`
The output of the above code is: None.
I tried using other versions of sentencepiece as well (such as the 0.1.94 suggested above, among others). But it is still not working.

<|||||>Did you restart your kernel after installing `sentencepiece`? See conversation in https://github.com/huggingface/transformers/issues/10797<|||||>> Did you restart your kernel after installing `sentencepiece`? See conversation in #10797
it works for me, thank you<|||||>> Did you restart your kernel after installing `sentencepiece`? See conversation in #10797
It works for me. Thanks a lot. |
transformers | 9,092 | closed | Patch *ForCausalLM model with TF resize_token_embeddings | cc @jplu | 12-14-2020 05:30:42 | 12-14-2020 05:30:42 | |
transformers | 9,091 | closed | Chinese | ## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
| 12-14-2020 02:46:27 | 12-14-2020 02:46:27 | |
transformers | 9,090 | closed | run_clm example gives `CUDA out of memory. Tried to allocate` error | ## Environment info
Google Colab with GPU runtime.
- Python version: 3.6.9
## Information
I'm trying to run the GPT2 training example from `https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py`.
The problem arises when using:
- Run CLM language modeling example.
## To reproduce
Steps to reproduce the behavior:
1. Open Google Colab with GPU on
2. Run
```
!git clone https://github.com/huggingface/transformers
%cd transformers
!pip install .
%cd examples
%cd language-modeling
!pip install -r requirements.txt
```
```
!python run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir test
```
Log:
```
[INFO|trainer.py:668] 2020-12-14 02:09:02,049 >> ***** Running training *****
[INFO|trainer.py:669] 2020-12-14 02:09:02,049 >> Num examples = 2318
[INFO|trainer.py:670] 2020-12-14 02:09:02,049 >> Num Epochs = 3
[INFO|trainer.py:671] 2020-12-14 02:09:02,049 >> Instantaneous batch size per device = 8
[INFO|trainer.py:672] 2020-12-14 02:09:02,049 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:673] 2020-12-14 02:09:02,049 >> Gradient Accumulation steps = 1
[INFO|trainer.py:674] 2020-12-14 02:09:02,049 >> Total optimization steps = 870
0% 0/870 [00:00<?, ?it/s]Traceback (most recent call last):
File "run_clm.py", line 357, in <module>
main()
File "run_clm.py", line 327, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 767, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1096, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1120, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 895, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 740, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 295, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 239, in forward
attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 166, in _attn
w = torch.matmul(q, k)
RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 15.90 GiB total capacity; 14.75 GiB already allocated; 185.88 MiB free; 14.81 GiB reserved in total by PyTorch)
0% 0/870 [00:00<?, ?it/s]
```
## Expected behavior
Outputs the model in the output_dir with no memory error.
| 12-14-2020 02:17:31 | 12-14-2020 02:17:31 | You should try to reduce the batch size. This will reduce the memory usage.<|||||>Yup. What @LysandreJik said is correct. Use the following:
`--per_device_train_batch_size x \`
`--per_device_eval_batch_size x \`
Replace x with your preferred batch size, I would recommend the highest power of 2 your GPU memory allows.<|||||>It worked! With the Colab's GPU memory size of 12.72GB, the batch size worked at:
`--per_device_train_batch_size 2 \`
`--per_device_eval_batch_size 16 \`
Thanks for the quick response guys. |
transformers | 9,089 | closed | Fix a bug in eval_batch_retrieval of eval_rag.py | # What does this PR do?
Following the instructions in [RAG example](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag#retrieval-evaluation), I was trying to evaluate retrieval against DPR evaluation data.
`pipenv run python eval_rag.py --model_name_or_path facebook/rag-sequence-nq --model_type rag_sequence --evaluation_set output/biencoder-nq-dev.questions --gold_data_path output/biencoder-nq-dev.pages --predictions_path output/retrieval_preds.tsv --eval_mode retrieval --k 1`
With the above command, I faced the following error and confirmed that `question_enc_outputs` is a tuple whose length is 1.
```
...
loading weights file https://huggingface.co/facebook/rag-sequence-nq/resolve/main/pytorch_model.bin from cache at /home/ubuntu/.cache/huggingface/transformers/9456ce4ba210322153f704e0f26c6228bd6c0caad60fe1b3bdca001558adbeca.ee816b8e716f9741a2ac602bb9c6f4d84eff545b0b00a6c5353241bea6dec221
All model checkpoint weights were used when initializing RagSequenceForGeneration.
All the weights of RagSequenceForGeneration were initialized from the model checkpoint at facebook/rag-sequence-nq.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RagSequenceForGeneration for predictions without further training.
initializing retrieval
Loading index from https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/
loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/a481b3aaed56325cb8901610e03e76f93b47f4284a1392d85e2ba5ce5d40d174.a382b038f1ea97c4fbad3098cd4a881a7cd4c5f73902c093e0c560511655cc0b
loading file https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/hf_bert_base.hnswSQ8_correct_phi_128.c_index.index_meta.dpr from cache at /home/ubuntu/.cache/huggingface/transformers/bb9560964463bc761c682818cbdb4e1662e91d25a9407afb102970f00445678c.f8cbe3240b82ffaad54506b5c13c63d26ff873d5cfabbc30eef9ad668264bab4
7it [00:00, 212.77it/s]
Traceback (most recent call last):
File "eval_rag.py", line 315, in <module>
main(args)
File "eval_rag.py", line 301, in main
answers = evaluate_batch_fn(args, model, questions)
File "eval_rag.py", line 99, in evaluate_batch_retrieval
question_enc_pool_output = question_enc_outputs.pooler_output
AttributeError: 'tuple' object has no attribute 'pooler_output'
```
With this simple change (`question_enc_outputs.pooler_output` -> `question_enc_outputs[0]`), I was able to run the evaluation code and confirmed
`INFO:__main__:Precision@1: 70.74`
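For readers skimming the PR, the whole change boils down to one line in `evaluate_batch_retrieval` (a sketch based on the traceback above, not the complete function):
```python
# eval_rag.py, evaluate_batch_retrieval() -- sketch of the fix:
# the question encoder returns a plain tuple here, so the pooled output has to be
# taken by index instead of via a `.pooler_output` attribute.

# before (raises AttributeError: 'tuple' object has no attribute 'pooler_output'):
# question_enc_pool_output = question_enc_outputs.pooler_output

# after:
question_enc_pool_output = question_enc_outputs[0]
```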
## Environments
- Ubuntu 18.04 LTS
- Python 3.7.7
- transformers 4.0.1
- torch: 1.7.1
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@ola13 (confirmed by `git blame`) @patrickvonplaten @lhoestq | 12-14-2020 01:49:45 | 12-14-2020 01:49:45 | @lhoestq - feel free to merge if you're ok with the PR |
transformers | 9,088 | closed | run_clm.py Early stopping with ^C | - `transformers` version: 4.0.1
- Platform: Colab
- Python version: 3.6.9
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
## Information
Model I am using: GPT2
The problem arises when using:
`run_clm.py`
## To reproduce
`!python ./transformers/examples/language-modeling/run_clm.py \
--model_name_or_path ./GPT2_PRETRAINED_LOCAL \
--dataset_name bookcorpusopen \
--dataset_config_name plain_text \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--block_size 128 \
--gradient_accumulation_steps 1 \
--overwrite_output_dir \
--do_train \
--do_eval \
--num_train_epochs 20 \
--save_steps 50000 \
--save_total_limit 1 \
--output_dir ./GPT2-trained-save`
Output:
`2020-12-13 20:02:51.391764: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
12/13/2020 20:02:53 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False
12/13/2020 20:02:53 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='./RPT-trained-save', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=2, per_device_eval_batch_size=2, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=20.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Dec13_20-02-53_7d34d2e22bee', logging_first_step=False, logging_steps=500, save_steps=50000, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./RPT-trained-save', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None)
Downloading: 4.00kB [00:00, 2.73MB/s]
Downloading: 2.10kB [00:00, 1.47MB/s]
Downloading and preparing dataset book_corpus_open/plain_text (download: 2.24 GiB, generated: 6.19 GiB, post-processed: Unknown size, total: 8.43 GiB) to /root/.cache/huggingface/datasets/book_corpus_open/plain_text/1.0.0/5cc3e4620a202388e77500f913b37532be8b036287436f3365e066671a1bd97e...
Downloading: 100% 2.40G/2.40G [02:41<00:00, 14.9MB/s]
9990 examples [01:04, 149.50 examples/s]^C`
The ^C automatically appears and the script stops.
## Expected behavior
The training process takes place as normal.
| 12-13-2020 20:19:17 | 12-13-2020 20:19:17 | ^C means you have hit Ctrl + C on your machine and stops the command running. You should re-run the command without hitting Ctrl + C.<|||||>Yup I am aware that ^C is a halt command. I am running this on colab and I have tried to run this 5-7 times now, not hitting Ctrl+C once. For some reason it appears itself and halts the execution. <|||||>There might be something in colab that aborts bash command after some time then, or it happens when the session disconnects. But there is absolutely nothing in the script that triggers a cancel like this, so there is nothing we can do to fix this.
Note that the scripts are not meant to be run on Colab, we have [notebook versions](https://github.com/huggingface/notebooks/tree/master/examples) of them for that.<|||||>I think I have figured out the issue. This is happening because the dataset is large and when the full thing is loaded, colab crashes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,087 | closed | BertForSequenceClassification finetune training loss and accuracy have some problem | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: v4.0.0
- Platform: colab pro
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
@sgugger
@JetRunner
## Information
I follow the paper https://arxiv.org/pdf/2003.02245.pdf to do augmentation and test the performance
Model I am using Bert for BertTokenizer, BertForMaskedLM, BertForSequenceClassification
The problem arises when using:
Using Trainer to fine-tune on both the training set and the concatenation of the training set and the augmentation set, the training loss is logged as "No log" or 0.683592, and the accuracy is always 0.8
The tasks I am working on is:
An official GLUE task: SST-2, loaded via the huggingface datasets package
The details:
Trainer setting: I follow examples/text_classification.ipynb to build the compute_metrics function and the tokenize mapping function, but the training loss and accuracy look buggy
my tokenized datasets format:

compute_metrics function, slightly modified from examples/text_classification.ipynb

bert_finetuned_setting

fine_tuned result

| 12-13-2020 18:39:28 | 12-13-2020 18:39:28 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests, rather than help with training.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 9,086 | closed | Getting a 404 error when loading TFXLMRobertaModel from 'xlm-roberta-large' | Getting a 404 when trying to load the model.
Manually checked the https://huggingface.co repository for xlm-roberta-large and was only able to find the PyTorch models. Why aren't the TF models available for this, and if they are not, why is it not explicitly mentioned in the documentation? | 12-13-2020 17:42:20 | 12-13-2020 17:42:20 | Could you try with the flag `from_pt=True` when using `from_pretrained`? <|||||>That worked, thanks |
transformers | 9,085 | closed | Adding to docs how to train CTRL Model with control codes. | # 🚀 Feature request
At the moment there is no explanation in the docs of how to train a CTRL model with user-defined `control codes`.
## Motivation
At the moment there is no explanation in the docs of how to train a CTRL model with user-defined `control codes`. I think it should be added because control codes are an important part of the CTRL model.
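One possible shape for this (a rough sketch under the assumption that a user-defined control code is simply a token prepended to every training example; not an official recipe):
```python
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")

# "MyCode" is a made-up, user-defined control code added to the vocabulary.
tokenizer.add_tokens(["MyCode"])
model.resize_token_embeddings(len(tokenizer))

# Prepend the control code to each training example and train with the usual
# causal LM objective (labels are the input ids themselves).
text = "MyCode Some training text that belongs to this control code."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
```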
## Your contribution
I am currently struggling to come up with ideas on how to do that using the Transformers interface, but I'd love to open a PR once I understand how to do it. | 12-13-2020 15:20:18 | 12-13-2020 15:20:18 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Did anyone figure out how to do this? |
transformers | 9,084 | closed | Problem with Token Classification models | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1
- Platform: Win10
- Python version:3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3 CPU
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
examples/token-classification: @stefan-it
## Information
I followed this tutorial https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities for token classification, but the results were really bad. So I changed the dataset to conll2003 and simplified the data a little (removed sentences without entities, kept only sentences with a certain length), as I had seen that the model should perform well on this data. Unfortunately, the results are still bad; for example, after epoch two with the BERT model set to trainable=True:
Confusion matrix (rows are predictions, columns are the labels):
[[ 1 3 21 10 5 10 1 16 172]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 1 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0]
[ 2 7 7 13 7 6 0 9 162]]
Classification report
(' precision recall f1-score support\n'
'\n'
' 0 0.33 0.00 0.01 239\n'
' 1 0.00 0.00 0.00 0\n'
' 2 0.00 0.00 0.00 0\n'
' 3 0.00 0.00 0.00 1\n'
' 4 0.00 0.00 0.00 0\n'
' 5 0.00 0.00 0.00 0\n'
' 6 0.00 0.00 0.00 0\n'
' 7 0.00 0.00 0.00 0\n'
' 8 0.49 0.76 0.59 213\n'
'\n'
' accuracy 0.36 453\n'
' macro avg 0.09 0.08 0.07 453\n'
'weighted avg 0.40 0.36 0.28 453\n')
I tried a lot of things and checked pre-processing and post-processing multiple times and can't find a bug in there.
The model is close to the tutorial's (in the tutorial it's a DistilBERT model, but since it performed in the same manner I switched to the bigger brother), yet it seems like it's not learning at all, even though it should perform well on CoNLL data and this model has shown good results in other tutorials (for example: https://www.depends-on-the-definition.com/named-entity-recognition-with-bert/).
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Here's the code I use (the preprocessing shown is not complete):
```
import numpy as np
import tensorflow as tf
from pprint import pprint
# metrics presumably from scikit-learn (matches the report format shown above)
from sklearn.metrics import classification_report, confusion_matrix
from transformers import BertTokenizerFast, TFBertForTokenClassification

tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
train_encodings = tokenizer(train_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True)
val_encodings = tokenizer(val_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True)

train_dataset = tf.data.Dataset.from_tensor_slices((dict(train_encodings), train_labels))
val_dataset = tf.data.Dataset.from_tensor_slices((dict(val_encodings), val_labels))

model = TFBertForTokenClassification.from_pretrained('bert-base-cased', num_labels=len(unique_tags))  # unique_tags are inferred from the training data
model.layers[0].trainable = True
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss)  # or tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

for epoch in range(8):
    model.fit(train_dataset.shuffle(64).batch(16), batch_size=16, verbose=1, epochs=10)
    predictions = model.predict(val_dataset)

# some post-processing. predictions holds all the logits. for calculating the metrics I only
# consider the tags that are not -100 (which are supposed to be ignored).
good_indexes = [i for i, l in enumerate(val_labels) if l != -100]
list_preds = []
for logi in predictions['logits']:
    list_preds.append(np.argmax(logi))
pred_post = [list_preds[j] for j in good_indexes]
print(confusion_matrix(pred_post, label_post))  # label_post: gold labels after the same filtering (built elsewhere, not shown)
report = classification_report(pred_post, label_post)
pprint(report)
```
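One thing worth double-checking in the post-processing: `predictions['logits']` has shape `(num_examples, seq_len, num_labels)`, so the argmax has to be taken over the last axis for every token before filtering out the `-100` positions. A hedged sketch of that step (assuming `val_labels` is a list of per-token label lists aligned with the padded inputs; variable names follow the snippet above):
```python
# per-token argmax, then drop the positions labelled -100
preds = np.argmax(predictions["logits"], axis=-1)   # shape: (num_examples, seq_len)

flat_preds, flat_labels = [], []
for pred_seq, label_seq in zip(preds, val_labels):
    for p, l in zip(pred_seq, label_seq):
        if l != -100:
            flat_preds.append(int(p))
            flat_labels.append(int(l))

print(confusion_matrix(flat_labels, flat_preds))
print(classification_report(flat_labels, flat_preds))
```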
## Expected behavior
Better performance i.e. reasonable F1 Scores (for example: https://www.depends-on-the-definition.com/named-entity-recognition-with-bert/) | 12-13-2020 12:29:21 | 12-13-2020 12:29:21 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests, rather than help with training-related issues.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead? You'll have better answers over there.
Thanks! |
transformers | 9,083 | closed | Image rendering not working in example notebook | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: n/a
- Platform: n/a
- Python version: n/a
- PyTorch version (GPU?): n/a
- Tensorflow version (GPU?): n/a
- Using GPU in script?: n/a
- Using distributed or parallel set-up in script?: n/a
### Who can help
As advised looking at the git blame @mfuntowicz @n1t0 could you advise?
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Go to https://github.com/huggingface/transformers/blob/master/notebooks/02-transformers.ipynb
2. Go to section `Want it lighter? Faster? Let's talk distillation!`
3. You should see that there is an image which is not rendering like the below

<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Should be an image to explain something
<!-- A clear and concise description of what you would expect to happen. -->
| 12-12-2020 21:36:15 | 12-12-2020 21:36:15 | |
transformers | 9,082 | closed | Add parallelization support for T5EncoderModel | # What does this PR do?
Extend T5EncoderModel to support model parallelization across different GPUs.
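A quick usage sketch, assuming the same `parallelize()`/`deparallelize()` API and `device_map` format as the other T5 classes (the checkpoint and the 2-GPU map below are only examples):
```python
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-3b")
model = T5EncoderModel.from_pretrained("t5-3b")

# Example map spreading the 24 encoder blocks of t5-3b over two GPUs.
device_map = {0: list(range(0, 12)), 1: list(range(12, 24))}
model.parallelize(device_map)        # or model.parallelize() for an even split

inputs = tokenizer("Studies have shown that owning a dog is good for you", return_tensors="pt").to("cuda:0")
last_hidden_state = model(**inputs).last_hidden_state

model.deparallelize()                # move everything back to the CPU
```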
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
T5: @patrickvonplaten
| 12-12-2020 21:14:18 | 12-12-2020 21:14:18 | Very cool! Could you also enable the parallelization tests for these models? You can check how it was done in the initial model parallel PR, [here's the commit](https://github.com/huggingface/transformers/pull/8696/commits/cde47f0d110176d3834b736dac27bc9bc2a4de43) related to the tests. You can just add the `T5EncoderModel` to the `all_parallelizable_model_classes` attribute of the `T5ModelTester` class.<|||||>> Very cool! Could you also enable the parallelization tests for these models? You can check how it was done in the initial model parallel PR, [here's the commit](https://github.com/huggingface/transformers/pull/8696/commits/cde47f0d110176d3834b736dac27bc9bc2a4de43) related to the tests. You can just add the `T5EncoderModel` to the `all_parallelizable_model_classes` attribute of the `T5ModelTester` class.
Thanks for the tip.
Done, please let me know if anything else is needed from my side.<|||||>Also it would be great if you could run `make style && make quality` or `make fixup` to solve the quality issues.<|||||>> This LGTM. Looking into it it seems we have an error in `T5Stask` as it is creating the device map with `torch.cuda.device_count()`, rather than the `range` of that value like you're doing it here. Since we're always passing the device map to `T5Stack` (it's never used as a standalone model) we don't see it, but it doesn't seem correct.
>
> What do you think? If you think this is true, do you mind adding a `range` in `T5Stack` so that we can merge it together? Thanks!
Yes, you are correct, T5Stack should also use `range`, since the `get_device_map` function applies `len` to it.
I have updated T5Stack to use `range`.<|||||>> Also it would be great if you could run `make style && make quality` or `make fixup` to solve the quality issues.
Done and passed the code quality testing.<|||||>Wonderful! |
transformers | 9,081 | closed | Segmentation fault (core dumped) running run_qa.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
## Information
Model I am using (Bert, XLNet ...): distilbert-base-uncased (but other bert variants do the same)
The problem arises when using:
* [ X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.mkdir squad
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O squad/train-v2.0.json
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O squad/dev
2. python run_qa.py \
--model_name_or_path distilbert-base-uncased \
--do_train \
--train_file ./squad/train-v2.0.json \
--per_device_train_batch_size 2 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./models/ \
--overwrite_output_dir
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
12/12/2020 21:22:50 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
12/12/2020 21:22:50 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='./models/', overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=2, per_device_eval_batch_size=8, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=2.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Dec12_21-22-50_piero-laptop', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./models/', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None)
Using custom data configuration default
Downloading and preparing dataset json/default-0b904584a9578d6f (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/piero/.cache/huggingface/datasets/json/default-0b904584a9578d6f/0.0.0/70d89ed4db1394f028c651589fcab6d6b28dddcabbe39d3b21b4d41f9a708514...
0 tables [00:00, ? tables/s]Segmentation fault (core dumped)
## Note:
I would like to test the script on the downloaded SQuAD dataset before applying it to my own dataset. If I run it as below, everything works fine:
python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 4 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./models \
--overwrite_output_dir
 | 12-12-2020 20:30:08 | 12-12-2020 20:30:08 | Same problem here. What is going on? I run GLUE smoothly, so it seems that the problem is related to the script itself.<|||||>The problem is in your JSON file. The squad v2 JSON file is not in a format the datasets library can directly preprocess, so you need to make it compliant with it. You should take this issue to the [`datasets`](https://github.com/huggingface/datasets) library and explain what your need is.
You can also check the mock data file used in the [tests](https://github.com/huggingface/transformers/blob/master/tests/fixtures/tests_samples/SQUAD/sample.json) to see the expected format. A datasets expert would know better than me but I think the problem is that the squad JSON file has lists of dicts for the "answers" field when datasets expects a dictionary keys to list.<|||||>The problem is not the JSON file that I have and I was able to solve it by using Transformers 3.x with no issues.<|||||>Transformers v4 does not support training on SQuAD v2 via its example training script. For now, you have to use Transformers v3.<|||||>Yes you could run it with the older script which was parsing the JSON differently. The new version uses the datasets library and requires the JSON to be organized differently (for compatibility with Arrow). <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,080 | closed | Fine tune GPT-2 pytorch | Hello,
I want to fine tune GPT-2 (PyTorch version) on a custom dataset. Words or small phrases of the dataset are marked, for example:
_some text [ss] word / small phrase [se] some other text._
I want to generate this kind of text with GPT-2, so firstly I thought to add [ss] and [se] as special tokens.
I am looking for a training script sample for GPT-2, to see how to prepare data as input for the model (if preprocessing, or specific format is needed), which type of loss to use, etc. but I cannot find any. I also looked through the [library](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) and also didn't find the training script.
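A rough sketch of the special-token part (assuming the `[ss]`/`[se]` markers are simply registered as extra special tokens and the embedding matrix resized; this is not an official recipe):
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Register the markers and grow the embedding matrix accordingly.
tokenizer.add_special_tokens({"additional_special_tokens": ["[ss]", "[se]"]})
model.resize_token_embeddings(len(tokenizer))

text = "some text [ss] word / small phrase [se] some other text."
inputs = tokenizer(text, return_tensors="pt")
# For causal LM fine-tuning the labels are the input ids themselves;
# the model shifts them internally when computing the loss.
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
```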
Are there any suggestions?
Thank you in advance.
P.s. if this is not the appropriate place for this question, feel free to direct me accordingly. | 12-12-2020 15:21:19 | 12-12-2020 15:21:19 | This repo(https://github.com/kifish/GPT4NLG) will help you out.<|||||>Have you taken a look at the [`run_clm.py` script](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in the examples? It seems to be doing exactly what you're looking for.
Erratum: I just realized that your link to `library` was actually a link to `run_clm.py`. What do you mean you didn't find the training script? Do you mean to say you would like an understandable guide on how to fine-tune GPT-2? Then [this guide](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) on DialoGPT, which is based off of the GPT-2 architecture, may be helpful.<|||||>@kifish thank you for your reply, it is an interesting repository.
I am doing this for the first time so I am looking for something more simple, although I got some ideas reading the code which I will use.<|||||>@LysandreJik thank you , for the helpful tutorial.
When I mentioned that I didn't find the training script, I meant that in line 327 of the `run_clm.py`, the `main` method calls the `train` method of a `Trainer`, but I couldn't find the code of the `train` method.
In addition, for the data preparation as input for the model, besides the extra tokens that I mentioned in my initial post, I thought of also adding a `bos` and an `eos` token at the beginning and at the end of each text respectively, so that the model learns when a text starts and ends. The GPT-2 tokenizer already has these tokens, but they share the same id: they both have the id 50256. What is the reason behind this?
In order to deal with this, another way to prepare data is to use just the `eos` token to denote the end of a text, since the model should basically learn when a text ends. Can you please explain briefly?
Thank you in advance.
<|||||>The code for the `train` method is in the `Trainer` class that you can find [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L565). It's a bit encapsulated so it might be a bit hard to follow - but we're working on simpler examples that show a basic PyTorch training loop and which should be out in a few months.
You may also find [this notebook interesting, which goes into finer detail on how to train a language model.](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb)
Regarding your question on the extra tokens, adding a `bos` and `eos` token depends on how the model was pre-trained. BERT requires these as they were used during its pre-training. However, GPT-2 has very few special tokens during pre-training: a single `<|endoftext|>` token that was placed between sequences. I recommend you read the GPT-2 paper to get an idea of their pre-processing; we try to stay as close to the original implementation as possible.<|||||>@LysandreJik thank you for your reply.
I am writing the code for training GPT-2 and firstly, as you suggested, I concatenated the input texts separated by `<|endoftext|>` token and then split it into fixed lengths, which is the model input.
In GPT-2 causal language modeling, the model input is also the labels of the model, so to compute the loss, the input and labels arguments of the loss function are the same. The [tutorial](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling.ipynb) that you mentioned in the previous post, when it references the model input and labels, talks about shifting to the left or to the right.
To calculate the loss correctly, are the model input and labels arguments of the loss function the same, or am I missing something?<|||||>If you pass the labels to the model, you should pass exactly the same value as the input IDs. The model then shifts the label IDs and calculates the loss on its own.<|||||>@LysandreJik I have a question regarding training.
For a language model, which metrics are indicative to monitor training? For now I just consider the loss; in classification, for example, the F1 score is usually considered.
Additionally, in the tutorial that you shared, no early stopping was used in training, and the model was trained for three epochs. I have a small dataset of ~8K small texts (2MB). Do you have any suggestions on how to train the model, i.e. if there is no early stopping, how to decide when to stop training and how can I evaluate training?
You have been very helpful,
regards.<|||||>Hello @kifish
the GPT4NLG repo that you have shared back then was very helpful but I am no longer able to see it. Can you do anything about that?
Thank you in advance.<|||||>> Hello @kifish
>
> the GPT4NLG repo that you have shared back then was very helpful but I am no longer able to see it. Can you do anything about that?
>
> Thank you in advance.
https://github.com/kifish/GPT4NLG/tree/github<|||||> https://github.com/kifish/GPT4NLG/tree/github
Thank you @kifish. If possible, let it visible for some days.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,079 | closed | T5 fails on many datasets with [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted | ## Environment info
- `transformers` version: 3.5.1
- Platform: GPU
- Python version: 3.7
- PyTorch version (GPU?): 1.0.4
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
tokenizers: @mfuntowicz
Trainer: @sgugger
TextGeneration: @TevenLeScao
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
examples/seq2seq: @patil-suraj
## Information
Hi
I am testing a seq2seq model with T5 on different datasets and I always get the following bug. This is really blocking me, as it fails for many datasets. Could you have a look please? Thanks.
```
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) >= (0):
Aborted
```
To reproduce the error please run on 1 GPU:
```
git clone [email protected]:rabeehk/debug-seq2seq.git
python setup.py develop
cd seq2seq
python finetune_t5_trainer.py temp.json
```
Full output of the program:
```
(internship) rkarimi@vgnh008:/idiap/user/rkarimi/dev/debug-seq2seq/seq2seq$ python finetune_t5_trainer.py temp.json
2020-12-12 15:38:16.234542: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2020-12-12 15:38:16.234598: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
12/12/2020 15:38:32 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
12/12/2020 15:38:32 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='outputs/test', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=64, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.01, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=2, max_steps=-1, warmup_steps=500, logging_dir='runs/Dec12_15-38-32_vgnh008', logging_first_step=True, logging_steps=200, save_steps=200, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=200, dataloader_num_workers=0, past_index=-1, run_name='outputs/test', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, label_smoothing=0.1, sortish_sampler=False, predict_with_generate=True, adafactor=False, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear', fixed_length_emb=None, encoder_projection=None, encoder_pooling=None, projection_length=None, only_projection_bottleneck=False, concat_projection_token=False, gcs_bucket='ruse-xcloud-bucket', temperature=10, train_adapters=True, do_finetune=True, parametric_task_embedding=False, eval_output_dir='outputs/finetune-adapter/test-n-1-lr-1e-02-e-20')
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 
'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 
'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 
'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 
'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 
'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 
'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.bias', 
'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 
'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 
'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 
'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
Using custom data configuration default
12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Reusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)
12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Loading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-6810ece2a440c3be.arrow
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
Using custom data configuration default
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Reusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)
12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Loading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-9a2822394a3a4e34.arrow
12/12/2020 15:38:45 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b464cc20> for task boolq
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - ***** Running training *****
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num examples = 10
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num Epochs = 2
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1
12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2
{'loss': 529.79443359375, 'learning_rate': 2e-05, 'epoch': 1.0}
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 2.37it/s]12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer -
Training completed. Do not forget to share your model on huggingface.co/models =)
{'epoch': 2.0}
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 2.43it/s]
12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/test
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock
Using custom data configuration default
12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Reusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)
12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock
Loading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-164dd1d57e9fa69a.arrow
12/12/2020 15:38:59 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b40c67a0> for task boolq
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - ***** Running training *****
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num examples = 1
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num Epochs = 2
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from checkpoint, will skip to saved global_step
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from epoch 2
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from global step 2
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Will skip the first 0 steps in the first epoch
0%| | 0/2 [00:00<?, ?it/s]12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer -
Training completed. Do not forget to share your model on huggingface.co/models =)
{'epoch': 2.0}
0%| | 0/2 [00:00<?, ?it/s]
12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/finetune-adapter/test-n-1-lr-1e-02-e-20/boolq
12/12/2020 15:39:07 - INFO - seq2seq.utils.utils - using task specific params for boolq: {'max_length': 3}
12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****
12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Num examples = 3269
12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Batch size = 64
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 52/52 [00:12<00:00, 4.86it/s][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) >= (0):
Aborted
```
| 12-12-2020 14:41:00 | 12-12-2020 14:41:00 | Hi
I traced the error and this is happening in this line:
label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)
https://github.com/rabeehk/debug-seq2seq/blob/1bcadb4b5497a0cbab6c2778e87335c5edcbd0a2/seq2seq/metrics/metrics.py#L99
here is the format of label_ids
```
pred.label_ids [[10747 7 15 1]
[10998 1 0 0]
[10998 1 0 0]
...
[10998 1 0 -100]
[10998 1 0 -100]
[10998 1 0 -100]] (3269, 4)
```
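As a temporary workaround on my side, masking out the `-100` ignore index before decoding avoids the crash (rough sketch; it assumes `-100` only appears as the loss ignore index and that `tokenizer` is the T5 tokenizer from the script):
```python
import numpy as np

safe_label_ids = np.where(pred.label_ids != -100, pred.label_ids, tokenizer.pad_token_id)
label_str = tokenizer.batch_decode(safe_label_ids, skip_special_tokens=True)
```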
Could you please have a look? This is really blocking me, as the T5 tokenizer fails for many datasets. Thanks. <|||||>I understood the issue now: previously the boolq dataset had labels of 0/1 => max decoding length of 3; now they changed it to True/False => max decoding length of 4, which causes the bug in my decoding code since the max decoding length was set to 3. This is solved now. Thanks @lhoestq <|||||>Glad you resolved your issue. |
transformers | 9,078 | closed | Add Definition of a transformer to the glossary | Thought it may be helpful to have an easy to understand definition for what a transformer is in the [glossary](https://huggingface.co/transformers/glossary.html) for any new joiners.
@sgugger any thoughts?
Happy to add a definition if provided with one in the open pull request #8949 | 12-12-2020 14:23:02 | 12-12-2020 14:23:02 | Are you looking for https://huggingface.co/transformers/model_summary.html ?<|||||>@cronoik thanks for taking a look! Wasn't looking for that page, was trying to find just a general definition of what a transformer is in terms of the general concept<|||||>Do you have something like this in mind: https://github.com/huggingface/transformers/blob/master/notebooks/02-transformers.ipynb ?<|||||>No, wasn't looking for a notebook, just a one line description/explanation of what a transformer is
How would you describe it to someone who doesn't know about transformers?<|||||>Maybe simply as self-attention based deep learning model architecture.<|||||>@cronoik That's a good start, `self-attention` & `deep learning` aren't yet defined in the glossary
How would you define those?<|||||>`self-attention`: each element of the input finds out which other elements of the input it should attend to.
`deep learning`: machine learning algorithms which use neural networks with several layers.
@darigovresearch <|||||>@cronoik thanks for that! Would you like to put in a pull request so that your definitions go into the transformers glossary and the set of flashcards that we built or would you like us to do it?
I'm sure those definitions would be welcome and easily merged by the maintainers
https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards<|||||>Thanks @sgugger for merging the pull request, @cronoik your definitions are now on the glossary page and I have also added them to the flashcards so this issue can now be closed. Thank you both for your help!
Glossary https://huggingface.co/transformers/glossary.html
Flashcards https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards<|||||>Thanks for your commits ;). |
transformers | 9,076 | closed | Clarify use of TrainingArguments.disable_tqdm in Jupyter Notebooks | # What does this PR do?
Closes #8831 and adds some minor tweaks / improvements to the `TrainingArguments` classes.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger
| 12-12-2020 11:00:35 | 12-12-2020 11:00:35 | Thanks for the suggestions! I've included them so should be good to go :)<|||||>Thanks a lot! |
transformers | 9,075 | closed | Zero Shot Classification Pipeline gives poor results locally than online demo | ## Environment info
- `transformers` version: 4.0.1
- Platform: Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0 Yes
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@julien-c @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-mnli
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I have a small dataset of 26 examples and I want to classify them into 2 classes. I first ran all the examples in the [online demo](https://huggingface.co/zero-shot/) and got around 80% accuracy.
2. Then I ran the code on Colab and got only 53% accuracy which I think is just a random answer between the labels.
3. I am aware of the fact that this issue has been opened before and resolved but it isn't working for me. ([This is the previous issue](https://github.com/huggingface/transformers/issues/8122))
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import numpy as np
import pandas as pd
from tqdm import tqdm
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
classifier = pipeline(task='zero-shot-classification', model=model, tokenizer=tokenizer)
hypothesis_template = 'This text is about {}.'
labels = ['Single Patient', 'Multiple Patient']
def predict(sequence, labels, hypothesis_template):
results = classifier(sequence, labels,
hypothesis_template=hypothesis_template)
pred_idx = np.array(results['scores']).argmax()
pred_cls = labels[pred_idx]
return pred_idx, pred_cls
def evaluate(dataset, labels, hypothesis_template):
n_correct = 0
for sequence, label in tqdm(dataset.values):
_, pred = predict(sequence, labels, hypothesis_template)
n_correct += (pred == label)
acc = n_correct / len(dataset)
print('Accuracy:', acc)
patients = pd.read_csv('patient_classification.csv')
evaluate(patients, labels, hypothesis_template)
```
While loading the model I get this warning message.
```
Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartForSequenceClassification: ['model.encoder.version', 'model.decoder.version']
- This IS expected if you are initializing BartForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BartForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The results of the online demo and my local code (Colab) are supposed to be the same. | 12-12-2020 10:59:34 | 12-12-2020 10:59:34 | Maybe @joeddav has an idea!<|||||>The pipeline output is sorted from highest to lowest scores, so in your code `pred_idx` will always be `0` and `pred_cls` will always be `"Single Patient"`. Instead you want,
```python
pred_cls = results['labels'][0]
pred_idx = labels.index(pred_cls)
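
# (Illustrative addition, not part of the original reply: the same fix applied to
#  the earlier predict() helper; assumes `classifier` and `labels` are defined as
#  in the snippet above.)
def predict(sequence, labels, hypothesis_template):
    results = classifier(sequence, labels, hypothesis_template=hypothesis_template)
    pred_cls = results['labels'][0]    # labels come back sorted by score, best first
    pred_idx = labels.index(pred_cls)  # map back to the original label order
    return pred_idx, pred_cls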
```<|||||>Oh lol, I didn't know it was that simple xD. Thanks @joeddav that increased the accuracy to 73% (though less than online demo) which is good enough. Thank you so much! |
transformers | 9,072 | closed | get type error when I run the example code of token classification | ## Environment info
- `transformers` version:4.0.0
- Platform:linux and macos
- Python version:3.7 and 3.8
- PyTorch version (GPU?):1.7.0
### Who can help
## Information
The problem arises when using:
hf_argparser, at line 64: a field typed `Optional[bool]` never enters this branch
`elif field.type is bool or field.type is Optional[bool]`
but after I change `is` to `==`, the error disappears.
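A standalone way to see what I mean (illustrative snippet, not the library code itself):
```python
from typing import Optional

field_type = Optional[bool]
# Equality is the reliable way to compare typing constructs;
# identity depends on interpreter/typing-module caching.
print(field_type == Optional[bool])  # True
print(field_type is Optional[bool])  # not guaranteed, may be False in some environments
```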
| 12-12-2020 09:53:44 | 12-12-2020 09:53:44 | Hi, could you provide the full error, as well as the command you use to launch the script? Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,071 | closed | attention_mask size | # π Feature request
current attention_mask argument is a tensor of shape [batch_size, sequence_length],
I'd like it to be a tensor of shape [batch_size, from_seq_length, to_seq_length], as I want to set a different attention mask for a different position.
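For illustration, this is the kind of mask I have in mind (sketch only, plain PyTorch):
```python
import torch

batch_size, from_seq_length, to_seq_length = 2, 5, 5
# One mask row per query position, so each position can attend to a different set of keys.
attention_mask = torch.ones(batch_size, from_seq_length, to_seq_length, dtype=torch.long)
attention_mask[:, 0, 3:] = 0  # e.g. query position 0 may not attend to key positions 3 and 4
```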
| 12-12-2020 03:32:21 | 12-12-2020 03:32:21 | have implemented that by myself.
If needed, I can make a pull request<|||||>Hi, I'm also trying to utilize such a customized mask. Would you mind sharing your implementation? Thank you! |
transformers | 9,070 | closed | [CI doc] safely testing experimental CI features | After causing a few CI workflow disruptions with my recent attempts to figure out how to get CircleCI to do something new (skip heavy builds on doc-only PRs), I realized that future such experiments can be much smoother and lead to close to zero annoyance for anybody involved in submitting and handling PRs.
This PR documents my idea on how to do it given the current limitations of CircleCI and GithubActions, so that we could continue doing such experiments in the future and not interfere with anything.
That said, please vote here:
* Github Actions: https://github.com/actions/runner/issues/2347
* CircleCI: https://ideas.circleci.com/ideas/CCI-I-344 (unfortunately requires a free account to vote)
to get a much simpler support for being able to have a failing step that shouldn't impact the overall PR status.
@LysandreJik, @sgugger
| 12-11-2020 23:09:14 | 12-11-2020 23:09:14 | |
transformers | 9,069 | closed | Fix some typos | 12-11-2020 21:20:22 | 12-11-2020 21:20:22 | @patil-suraj this pull request can also be closed!
Most of the typos were already fixed, the remaining ones were fixed in [this pull request](https://github.com/huggingface/transformers/pull/10989)
|
|
transformers | 9,068 | closed | [wip] [ci] experiment for documentation | please ignore for now. thanks.
| 12-11-2020 20:49:44 | 12-11-2020 20:49:44 | It is clear now - ready to document how to do it right: https://github.com/huggingface/transformers/pull/9070
|
transformers | 9,067 | closed | Fix min_null_pred in the run_qa script | # What does this PR do?
The `min_null_prediction` variable in the `run_qa` script was actually the maximum because the < was in the wrong direction... | 12-11-2020 18:04:19 | 12-11-2020 18:04:19 | |
transformers | 9,066 | closed | Add BartForCausalLM analogs to `ProphetNetForCausalLM` | # π Feature request
Bart is a seq2seq model, but there might be applications where one would like to use only the pre-trained BartDecoder in an EncoderDecoder setting with a "long" encoder, such as
```python
from transformers import EncoderDecoderModel
model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-large-4096", "facebook/bart-large")
# fine-tune model ...
```
This is already possible for ProphetNet:
```python
from transformers import EncoderDecoderModel
import torch
model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-large-4096", "microsoft/prophetnet-large-uncased")
input_ids = torch.tensor([10 * [1]])
labels = torch.tensor([10 * [0]])
loss = model(input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward()
```
, but not yet for Bart. This "Good first/second issue" is about implemented a `BartForCausalLM` analogs to the one in ProphetNet here:
https://github.com/huggingface/transformers/blob/9cc9f4122e2a1027a6011951e3c6629a0f1b6c3e/src/transformers/models/prophetnet/modeling_prophetnet.py#L1882
To verify that the feature works as expected, one should make sure that the following tests are added:
- A `BartStandaloneDecoderModelTest` class as is done in https://github.com/huggingface/transformers/blob/9cc9f4122e2a1027a6011951e3c6629a0f1b6c3e/tests/test_modeling_prophetnet.py#L1072
- And an encoder-decoder test class as it's done here: https://github.com/huggingface/transformers/blob/9cc9f4122e2a1027a6011951e3c6629a0f1b6c3e/tests/test_modeling_encoder_decoder.py#L758
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
As mentioned above: warm-starting long-range seq2seq models, for example.
## Your contribution
I'm more than happy to guide someone through this issue!
It's a bit more advanced so I'll give it both "Good first issue" and "Good second issue".
You can claim the issue by writing it below and/or opening a PR :-)
| 12-11-2020 16:26:12 | 12-11-2020 16:26:12 | I dont know if @MeRajat claimed this issue, However if not **I want to take this issue** <|||||>Usually one opens a PR to claim the issue (the PR does not have to be finished) - so I think it's still open. <|||||>@patrickvonplaten This FR is opened for some time,so thought of working on it, almost completed development, should I raise PR?
As @sadakmed is also working on it, so thought of asking.<|||||>Hey @spatil6,
I think the PR is already in an advanced stage, so I hope the PR is finished by next week. If not, I'll ping you again :-) |
transformers | 9,065 | closed | Remove docs only check | Remove the docs only check as it can result in [crashes](https://app.circleci.com/pipelines/github/huggingface/transformers/17209/workflows/d3807ea6-9697-4699-a114-98e6b4d2c4d0/jobs/135576).
Will revert if you disagree @stas00. | 12-11-2020 15:24:51 | 12-11-2020 15:24:51 | |
transformers | 9,064 | closed | Embedding documents on multi-GPU single-node Docker using pretrained models of huggingface transformers and pytorch DistributedDataParallel | Hi,
This is a question.
I am trying to embed some documents, each containing a couple of sentences, using huggingface transformers models. I have a multi-GPU single node and I want to do the embedding in parallel, distributed across all 8 GPUs. I tried to use pytorch DistributedDataParallel, but I think all sentences are being sent to all GPUs, and for all sentences it returns one tensor. This is a sample code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import time
import argparse
import os
from transformers import AlbertTokenizer, AlbertModel
import numpy
from tqdm import tqdm
from torch.utils.data import DataLoader,TensorDataset
def parse_args():
parse = argparse.ArgumentParser()
parse.add_argument(
'--local_rank',
dest = 'local_rank',
type = int,
default = 0,
)
parse.add_argument("--gpu", type=str, default='None',
help="choose gpu device.")
return parse.parse_args()
def train():
args = parse_args()
if not args.gpu == 'None':
device = torch.device("cuda")
os.environ["CUDA_VISIBLE_DEVICES"]=args.gpu
else:
device = torch.device("cpu")
torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(
backend='nccl',
init_method='env://',
)
tokenizer = AlbertTokenizer.from_pretrained('albert-xxlarge-v2')
sentences=['I love tea',
'He hates tea',
'We love tea',
'python coder',
'geeksforgeeks',
'coder in geeksforgeeks']
sentence_tokens = []
for sent in (sentences):
token_id = tokenizer.encode(sent, max_length=128, add_special_tokens=True, pad_to_max_length=True)
sentence_tokens.append(token_id)
original_sentences = torch.tensor(sentence_tokens)
train_dataset = TensorDataset(original_sentences)
#setup training sampler
train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset,num_replicas=len(sentences))
#setup training data loader with the train sampler setup
train_dataloader = DataLoader(train_dataset, batch_size=16,sampler=train_sampler, shuffle=False)
model = AlbertModel.from_pretrained('albert-xxlarge-v2', return_dict=True)
model = model.to(device)
model = nn.parallel.DistributedDataParallel(model,
device_ids = [args.local_rank, ],
output_device = args.local_rank,\
find_unused_parameters=True
)
for batch in (train_dataloader):
batch_input_tensors = batch[0].to('cuda')
outputs = model(batch_input_tensors)
last_hidden_states = outputs.last_hidden_state
average= torch.mean(last_hidden_states,dim=1)
if __name__ == "__main__":
train()
All of the sentences are being sent to all 8 GPUs, and the output `last_hidden_states` is only one tensor. I take the average of the tensor elements because I thought they should end up the same, but they aren't. How can I do this in a distributed way, so that the sentences are split across the GPUs and embedded there, and so that I finally get one feature-vector tensor per sentence (or, for my real case, per document)? Thanks.
| 12-11-2020 15:11:53 | 12-11-2020 15:11:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,063 | closed | Fix T5 and BART for TF | # What does this PR do?
This PR fixes the TensorFlow implementation of T5 and BART to make them graph compilation+execution compliant, so that a SavedModel can be created for each of them.
The slow tests `test_saved_model_with_hidden_states_output` and `test_saved_model_with_attentions_output` are now passing for both models.
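For context, the kind of export these tests exercise boils down to something like this (illustrative sketch, not the actual test code):
```python
import tensorflow as tf
from transformers import TFT5Model

model = TFT5Model.from_pretrained("t5-small")
# Only works once the call signatures are graph-compilable, which is what this PR addresses.
tf.saved_model.save(model, "/tmp/tf_t5_saved_model")
```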
| 12-11-2020 15:10:33 | 12-11-2020 15:10:33 | I should have addressed everybody's comments :) |
transformers | 9,062 | closed | Bump notebook from 6.1.4 to 6.1.5 in /examples/research_projects/movement-pruning/lxmert | Bumps [notebook](https://github.com/jupyter/jupyterhub) from 6.1.4 to 6.1.5.
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/jupyter/jupyterhub/commits">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 12-11-2020 15:08:06 | 12-11-2020 15:08:06 | |
transformers | 9,061 | open | CharacterBERT | # π New model addition
## Model description
**CharacterBERT** is a **variant of BERT** that uses a CharacterCNN module **instead of** WordPieces. As a result, the model:
1. Does not require/rely on a WordPiece vocabulary
2. Produces a single embedding for any (reasonable) input token
3. Is more robust to misspellings
Paper: https://www.aclweb.org/anthology/2020.coling-main.609/
<!-- Important information -->
## Open source status
* [x] the model implementation is available: https://github.com/helboukkouri/character-bert
* [x] the model weights are available: https://github.com/helboukkouri/character-bert/blob/main/download.py#L16
* [x] who are the authors: @helboukkouri @osf9018 @Jekub @hiroshinoji @PierreZweigenbaum and Junichi Tsujii
I am willing to work on a PR but I will probably need some guidance π | 12-11-2020 14:26:37 | 12-11-2020 14:26:37 | After reading the paper again, I'm really excited to pre-train models for another domain π€ do you know when the pre-training code will be available π€<|||||>@stefan-it glad to hear that you enjoyed our work. I haven't released the pre-training code yet as it is not as user friendly as I would want it to be. But it just happens that I'm planning to work on releasing a first version some time **this week**, so good timing π.
You can subscribe to the following issue if you want to be notified: https://github.com/helboukkouri/character-bert/issues/4
Cheers!<|||||>Sounds great @helboukkouri! Let us know if we can help in any way, we'd love to see character BERT in `transformers`!<|||||>Hey @helboukkouri , really cool PR for the upcoming model integration :hugs:
I've already looked at it, and have a question about the `CharacterMapper` implementation. So in the current implementation it supports a maximum word length of 50 (so all word representations are padded to this length, if I'm correctly reading it). Do you think it would decrease training (and later fine-tuning) time, when using a smaller value :thinking:
So e.g. in German we could have really long words such as "Bezirksschornsteinfegermeister", but 50 is really long (but I think this is [language-dependend](https://arxiv.org/pdf/1207.2334.pdf)).<|||||>Hey @stefan-it, thanks! π
> Do you think it would decrease training (and later fine-tuning) time, when using a smaller value π€
When we compute some stats around model speed, we find that while CharacterBERT is twice as slow as BERT during pre-training (108% slower), it is not as slow during downstream task fine-tuning (19% on avg.) This means that most of the "slowness" happens during pre-training, which makes us think that the Masked Language Modeling output layer is at fault here. In particular, the differences with BERT are: (1) no parameter sharing between the wordpiece embedding matrix and the output layer and (2) a larger output layer (we use top 100k tokens in the pre-training corpus as a vocabulary) since we want to be able to predict a reasonably high number of tokens so that MLM can be beneficial.
So to answer your question: reducing the maximum word length might help the overall speed a bit, but the change will probably be negligible when compared to the effects listed above.
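(For intuition, the mapping itself is just fixed-length character-id padding per token; a toy sketch of the idea, not the actual `CharacterMapper` code:)
```python
MAX_WORD_LENGTH = 50
PAD_ID = 0

def word_to_char_ids(word, max_len=MAX_WORD_LENGTH, pad_id=PAD_ID):
    # Truncate overly long words, then pad every word to the same fixed length.
    char_ids = [ord(char) + 1 for char in word[:max_len]]
    return char_ids + [pad_id] * (max_len - len(char_ids))
```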
You may wonder why we used 50 character long representations. To be honest, we didn't want to tweak this `CharacterCNN` too much as it is originally the same layer that is used in ELMo. We just trusted the guys from AllenAI to have done a good work choosing the architecture and just re-used it π<|||||>Hi @helboukkouri thanks for your detailed answer! This explains the whole training time/speed topic really great :hugs: <|||||>> After reading the paper again, I'm really excited to pre-train models for another domain π€ do you know when the pre-training code will be available π€
Code is out! Feel free to open issues if you have any problems using it.<|||||>Hi @helboukkouri, I have read the paper with great interest. I am currently working on the same topic. I tried to reproduce the result with our custom data. We could complete phase 1. Now we are heading towards fine-tuning of pretrained model for MLM and NSP tasks. Would you consider sharing research materials for the same. <|||||>Hi @pradeepsinghnitk, thanks for your interest.
Could you be more specific about what you mean by `phase 1` and also if by `fine-tuning of pretrained model for MLM and NSP tasks` you mean pre-training or actual task-specific finetuning (e.g. on text classification tasks)?
In any case, check this code as it gives basic context for loading a model and running an inference. Fine-tuning it on any task should be straightforward (as you would with BERT basically) : https://github.com/helboukkouri/character-bert
And for NSP and MLM (which is usually what is called `pre-training`), the code is here: https://github.com/helboukkouri/character-bert-pretraining
Unfortunately, the import of CharacterBERT in the `transformers` library did not really succeed. It's been a while but if I remember well the issues were related to the different tests failing due to character-based tokenization being not very well supported at the time.
I'll notify everybody if I ever go back to working on this again.
Cheers!<|||||>Thank you for your response.
To be specific about phase 1: `bash $WORKDIR/bash_scripts/run_pretraining.character_bert.step_1.sh` (phase 1: maximum input length of 128 and maximum number of masked tokens per input of 20). We could successfully execute this for char_bert pretraining and also for bert_based pretraining.
Now, we would like to reproduce https://github.com/helboukkouri/character-bert-finetuning. But there was no code uploaded here.
"And for NSP and MLM (which is usually what is called pre-training), the code is here: https://github.com/helboukkouri/character-bert-pretraining". this part of the scripts we have already executed
<|||||>Looking forward to this integration since December 2020!<|||||>@stefan-it Hi Stefan, I saw on your Twitter account that you finished training a German version of CharacterBERT. It is not on Huggingface yet, but I am writing my master thesis on OCR post-correction on a historical German corpus, and could really use it! Can you tell me how I can get access to your model? Thank you so much! Greetings from Stuttgart! |
transformers | 9,060 | closed | ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler' - SAVE_STATE_WARNING has been removed from pytorch | ERROR:
..../my37/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 40, in <module>
from torch.optim.lr_scheduler import SAVE_STATE_WARNING
ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler' (......./my37/lib/python3.7/site-packages/torch/optim/lr_scheduler.py)
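Until such an update lands, a guarded import along these lines is one way the incompatibility could be handled (a sketch only, not necessarily the exact patch that shipped):
```python
# Guard the removed constant so the module imports under both torch < 1.8 and >= 1.8.
try:
    from torch.optim.lr_scheduler import SAVE_STATE_WARNING  # still exists in older torch
except ImportError:
    SAVE_STATE_WARNING = ""  # removed upstream; fall back to a harmless local value
```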
Please update transformers to be compatible with the latest pytorch source code (built from the master branch):
'SAVE_STATE_WARNING' was removed from pytorch a few days ago. | 12-11-2020 13:30:30 | 12-11-2020 13:30:30 | I can see you have fixed this in the source code.<|||||>I just upgraded to torch 1.8 and I got this error.
```
ImportError while importing test module '/home/dwalter/Documents/projects/lm/lm_ml/modules/quantization/tests/test_quantize.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/home/dwalter/anaconda3/envs/lm-torch1.8/lib/python3.6/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_quantize.py:7: in <module>
import quantization.fused_nn as qnni
quantization/__init__.py:2: in <module>
from .fused_nn import ConvNL2d
quantization/fused_nn.py:4: in <module>
from .nn import Conv2d
quantization/nn/__init__.py:8: in <module>
import transformers.modeling_bert as bert
/home/dwalter/anaconda3/envs/lm-torch1.8/lib/python3.6/site-packages/transformers/__init__.py:626: in <module>
from .trainer import Trainer
/home/dwalter/anaconda3/envs/lm-torch1.8/lib/python3.6/site-packages/transformers/trainer.py:69: in <module>
from .trainer_pt_utils import (
/home/dwalter/anaconda3/envs/lm-torch1.8/lib/python3.6/site-packages/transformers/trainer_pt_utils.py:40: in <module>
from torch.optim.lr_scheduler import SAVE_STATE_WARNING
E ImportError: cannot import name 'SAVE_STATE_WARNING'
```
Is there something I need to fix in my code or did I not upgrade correctly?
upgraded with `pip install --upgrade torch`<|||||>@dwalterlm you're probably on an older Transformers version. This was fixed in https://github.com/huggingface/transformers/pull/8979, could you try upgrading to a more recent version, like `v4.3.0`?<|||||>>
the version of torch is too high; try using torch 1.7.1 |
transformers | 9,059 | open | overflow_to_sample_mapping missing in documentation | In the [documentation](https://huggingface.co/transformers/master/main_classes/tokenizer.html#transformers.PreTrainedTokenizerFast.__call__) of the fast tokenizer, the `overflow_to_sample_mapping` field is missing.
Instead the `overflowing_tokens` is listed there, which is only part of the base tokenizer.
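For reference, a small hedged illustration of the field in question (fast tokenizers only; the model name is chosen arbitrarily):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
enc = tok(["a fairly long sentence " * 50], truncation=True, max_length=32,
          return_overflowing_tokens=True, stride=8)
# maps every overflowing chunk back to the index of the input sample it came from
print(enc["overflow_to_sample_mapping"])
```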
| 12-11-2020 12:37:46 | 12-11-2020 12:37:46 | Indeed! Do you want to open a PR with a fix?<|||||>No not really.
But I've found that the documentation is the same because the `PreTrainedTokenizerFast` inherits the `__call__` method as well as the documentation from `PreTrainedTokenizerBase`.
The `__call__` method documentation is a concatenation of two different documentations.
So changing that single line of documentation is more complicated than I expected.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I think that it does? |
transformers | 9,058 | closed | "resize_token_embeddings" in BertForMaskedLM won't change last linear layer "output dimension" | ## Environment info
- `transformers` version: 4.0.1
### Who can help
albert, bert, GPT2, XLM: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): BertForMaskedLM
## To reproduce
`resize_token_embeddings` cannot change decoder output feature dimension.
```
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained(tokenizer_path) # len(tokenizer) == 30541 (I add some new tokens)
model.bert.resize_token_embeddings(len(tokenizer))
>>>
BertForMaskedLM(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30541, 768) ######################### This is correct, but the decoder is wrong.
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
...
...
(cls): BertOnlyMLMHead(
(predictions): BertLMPredictionHead(
(transform): BertPredictionHeadTransform(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(decoder): Linear(in_features=768, out_features=30522, bias=True) ########## out_features not changed
)
)
)
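# (editorial note, a hedged sketch of the likely fix): call resize on the head model instead
# of on `model.bert`, so the tied MLM decoder is resized as well:
#     model.resize_token_embeddings(len(tokenizer))
#     # -> (decoder): Linear(in_features=768, out_features=30541, bias=True)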
``` | 12-11-2020 12:28:25 | 12-11-2020 12:28:25 | Sorry, My mistakes. |
transformers | 9,057 | closed | Having to specify too many `ignore_keys` in `Trainer.prediction_step` | Since all model output dicts that have logits give it with the key `logits` I think this code could be simplified to just use the `logits` key. (Rather than having to specify a bunch of `ignore_keys`.)
from:
https://github.com/huggingface/transformers/blob/e20ac6611df97f66148ce8b7886f01ffe9d17484/src/transformers/trainer.py#L1471-L1473
to:
```python
if isinstance(outputs, dict):
loss = outputs["loss"].mean().detach()
logits = (outputs.get('logits', None),)
```
This prevents other keys from being sent to `nested_concat`, causing an error:
https://github.com/huggingface/transformers/blob/e20ac6611df97f66148ce8b7886f01ffe9d17484/src/transformers/trainer.py#L1367-L1368
I'd be happy to make this change, let me know if I'm missing something here. | 12-11-2020 11:47:02 | 12-11-2020 11:47:02 | No, this is too simplistic. First of all, all question-answering models return two logits called `start_logits` and `end_logits`. Then a user might want to get the predictions for all their `all_hidden_states` or `all_attentions` when the model has the proper config keys, which is why the `Trainer` gather all the tensors different from the loss.<|||||>Ahh I see, guess there's no simple answer here. Thanks for the info! |
transformers | 9,056 | closed | Token classification example (run_ner.py) should work without fast tokenizers | # 🚀 Feature request
The token classification example ( [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py) ) calls the Tokenizer with `return_offsets_mapping=True` (line 279).
This is not allowed for Python tokenizers and raises the error `NotImplementedError: return_offset_mapping is not available when using Python tokenizers. To use this feature, change your tokenizer to one deriving from transformers.PreTrainedTokenizerFast.`
run_ner.py should align tokens and labels even if a fast tokenizer is not available.
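For illustration, a rough sketch of offset-free alignment (in the spirit of the legacy token-classification utilities; the `words`/`tags` names are assumptions, not the script's actual variables):
```python
def align_labels(tokenizer, words, tags, ignore_index=-100):
    tokens, label_ids = [], []
    for word, tag in zip(words, tags):
        pieces = tokenizer.tokenize(word)
        if not pieces:  # some slow tokenizers return nothing for exotic characters
            continue
        tokens.extend(pieces)
        # label only the first sub-token, ignore the rest in the loss
        label_ids.extend([tag] + [ignore_index] * (len(pieces) - 1))
    return tokens, label_ids
```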
## Motivation
There isn't a fast tokenizer available for `vinai/bertweet-base`, and I guess this may apply to a few other models as well.
Passing `vinai/bertweet-base` as `model_name_or_path` to `run_ner.py` instantly raises `NotImplementedError`. | 12-11-2020 10:55:46 | 12-11-2020 10:55:46 | Yes, this script only supports models that have a fast tokenizer (there is now a clear assert of that after the tokenizer is loaded). The old script will work with models that only have a slow tokenizer.<|||||>> Yes, this script only supports models that have a fast tokenizer (there is no a clear assert of that after the tokenizer is loaded). The old script will work with models that only have a slow tokenizer.
Somebody wrote an assert four days ago. There are certain inconveniences to the old script, e.g. it doesn't utilize the `datasets` library. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am facing same problem in BioGPT
|
transformers | 9,055 | closed | Can't load mt5 model after resizing token embedding | ## Environment info
- `transformers` version: 4.0.1
- Platform: macOS-10.15.6-x86_64-i386-64bit
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
@patrickvonplaten
## Description
I am having issues reloading a saved mt5 model when the token embedding has been resized. This error doesn't appear with the t5 model. I receive the following error:
`Error(s) in loading state_dict for MT5ForConditionalGeneration:
size mismatch for lm_head.weight: copying a param with shape torch.Size([250112, 768]) from checkpoint, the shape in current model is torch.Size([250102, 768]).`
Is there something different between the models that I am missing ?
## To reproduce :
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer, T5ForConditionalGeneration
model_class = MT5ForConditionalGeneration #T5ForConditionalGeneration
model_path = "google/mt5-base" # "t5-base"
model = model_class.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
tokenizer.add_tokens(['<tok1>', '<tok2>'])
model.resize_token_embeddings(len(tokenizer))
SAVING_PATH = "/tmp/test_model"
model.save_pretrained(SAVING_PATH)
tokenizer.save_pretrained(SAVING_PATH)
new_model = model_class.from_pretrained(SAVING_PATH)
```
| 12-11-2020 10:15:32 | 12-11-2020 10:15:32 | Hey @alecoutre1 I think this was fixed very recently.
I cannot reproduce your error on master -> could you try to pip install the master version and see if the error persists?
```
pip install git+https://github.com/huggingface/transformers
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,054 | closed | [Flax] Align FlaxBertForMaskedLM with BertForMaskedLM, implement from_pretrained, init | # What does this PR do?
This PR:
1) Implements a flax from_pretrained, save_pretrained method and let's `FlaxPreTrainedModel.from_pretained()` default to Flax instead of PyTorch. Tests are added and `bert-base-cased` and `roberta-base` model weights have been uploaded to the model hub. I gave the flax model file the name `flax_model.msgpack` similar to `pytorch_model.bin`.
2) Corrects FlaxBertForMaskedLM to align it with BertForMaskedLM: Some weights were incorrectly transposed and the activation function was different to Bert.
3) Adds `FlaxBertPretrainedModel` to Bert (and Roberta resp.) as it's done in PT.
4) Refactors the tests a bit. It's relatively easy to init a FlaxModel now I think without going over PyTorch (see tests).
5) Enforces naming convention that every model has a corresponding `Module` class. As discussed with @mfuntowicz in Flax it does not seem to be possible to make `PreTrainedModel` a `nn.Module` because `nn.Module` should by design be state-less and not contain a `self.params` attribute and thus we always require a `....Module` in addition to every `....Model` class in Flax (@mfuntowicz can probably better explain why I think and I guess we should have an offline discussion about it). I started this design principle now in the PR. Let me know what you think @mfuntowicz @sgugger @LysandreJik @thomwolf
Might be a good idea to go into the PR and play with the tests a bit.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-11-2020 10:02:59 | 12-11-2020 10:02:59 | Update: The init and `.from_pretrained()` should now be more aligned in Flax.
Notably, the user never has to call `init(...)` himself. This is done automatically in `FlaxBertModel(config)` just as it's done in PT.
The `from_pretrained(...)` method now yields explicit warnings which weights are randomly initialized and which ones were correctly loaded, just as it's done in PT.
Would be awesome if @mfuntowicz @sgugger @LysandreJik, you guys could do a second review. If this design is good for you, I'd be keen to merge this PR and think about a more general convert method.
```python
from transformers import FlaxBertModel, BertConfig
model = FlaxBertModel(BertConfig())
hid_states = model(np.ones((1, 1))) # init was done automatically
# one can also add the input shape used for the init to keep flexibility
model = FlaxBertModel(BertConfig(), input_shape=(16, 128))
hid_states = model(np.ones((1, 1))) # init was done automatically
# also the from_pretrained method now yields an explicit warning when weights are loaded, just as it's done in PT:
model = FlaxBertModel.from_pretrained("roberta-base")
# -> rnd initializes 'bias', 'dense.kernel', 'dense.bias', 'layer_norm.bias', 'decoder.weight', 'layer_norm.weight' with explicit warning
```
|
transformers | 9,053 | closed | TFTrainingArguments | ## Environment info
- `transformers` version: 4.0.1
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
@sgugger @jplu @stefan-it
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
[ ] the official example scripts: (give details below)
[x] my own modified scripts: (give details below)
The tasks I am working on is:
[ ] an official GLUE/SQUaD task: (give the name)
[x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. specify training args.
2. run trainer
3. raise Exception where `evaluation_strategy` in training_args becomes `evaluate_strategy`
```python
training_args = TFTrainingArguments(
output_dir="/root/Data/marco-passage-ranking/results",
overwrite_output_dir=True,
do_train=True,
do_eval=True,
do_predict=False,
evaluation_strategy="no",
eval_steps=1000,
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
learning_rate=1e-6,
max_steps=400000,
warmup_steps=40000,
logging_dir="./tmp/log",
logging_steps=1000,
save_steps=1000,
fp16=False,
# eval_steps=1000,
xla =False
)
trainer = TFTrainer(
model=model,
args=training_args,
train_dataset=train_ds.take(100000),
eval_dataset=dev_ds.take(10000),
compute_metrics=compute_metrics,
)
trainer.train()
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-19-25eb465360cc> in <module>
7 )
8
----> 9 trainer.train()
~/Softwares/anaconda3/envs/tf2.0/lib/python3.7/site-packages/transformers/trainer_tf.py in train(self)
562 if (
563 self.args.eval_steps > 0
--> 564 and self.args.evaluate_strategy == EvaluationStrategy.STEPS
565 and self.global_step % self.args.eval_steps == 0
566 ):
AttributeError: 'TFTrainingArguments' object has no attribute 'evaluate_strategy'
```
I think this might be a bug where the inconsistency of eval_strategy name raises Exception. Any advice? | 12-11-2020 09:34:21 | 12-11-2020 09:34:21 | Oh this is a typo, do you want to open a PR to fix it? |
transformers | 9,052 | closed | Add caching mechanism to BERT/RoBERTa/GPT2 for Seq2Seq accelerated generation | # 🚀 Feature request
All Seq2Seq models that make use of `generate()` usually allow `past_key_values` to be cached for both the cross-attention layer and the uni-directional decoder self-attention layer. For this feature request we should implement the feature for Bert2Bert and Roberta2Roberta.
We should implement this feature analogously to how it is implemented in Bart. This means that we should:
- 1) add the caching mechanism in the AttentionLayer as shown here for Bart: https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L234
- 2) pass the `past_key_values` as tuple through the layers, making sure that it's optional for the cross-attention layer: https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L433
- 3) Adapt the mask correspondingly. The easiest option is probably to just copy how it's done in Bart and remove the old attention_masking logic (making sure that all tests pass): https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L91 and https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_bart.py#L76
- 4) Add a test for `BertLMHeadModel` and `RobertaForCausalLM` that verifies that the caching mechanism works as expected:
https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/tests/test_modeling_bart.py#L287
- 5) "Turn on" caching for Encoder-Decoder (this should be the last step and this might cause some other problems - happy to help here!): https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L427
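To make step 1) above concrete, here is a rough, framework-level sketch of what cached self-attention boils down to (single head, no masking, dropout or multi-head reshaping; not the Bart implementation itself):
```python
import torch

def cached_self_attention_step(q_proj, k_proj, v_proj, hidden_states, past_key_value=None):
    # hidden_states: (batch, 1, dim) -- only the newly generated token is fed in
    query, key, value = q_proj(hidden_states), k_proj(hidden_states), v_proj(hidden_states)
    if past_key_value is not None:
        key = torch.cat([past_key_value[0], key], dim=1)      # reuse keys from earlier steps
        value = torch.cat([past_key_value[1], value], dim=1)  # reuse values from earlier steps
    scores = query @ key.transpose(-1, -2) / key.shape[-1] ** 0.5
    attn_output = torch.softmax(scores, dim=-1) @ value
    return attn_output, (key, value)  # the returned tuple becomes next step's past_key_value
```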
This might be a good issue for you @patil-suraj if interested :-)
## Motivation
## Your contribution
| 12-11-2020 09:30:36 | 12-11-2020 09:30:36 | @patrickvonplaten Hi, I'd like to ask why the decoding only reuses previous key and values but no query. Since if the model parameter rests static, the query vector can be reused as well.
Appreciate for your reply.<|||||>Hi @liyucheng09
Good question!
We don't need to reuse query states because when caching is enabled we just need the query states for the current last token since only the last query vector is needed to predict the next token.
Hope this makes it clear.<|||||>These blogs might actually help as well:
- https://huggingface.co/blog/encoder-decoder
- https://jalammar.github.io/illustrated-gpt2/
to better understand the difference between query, key and value :-) |
transformers | 9,051 | closed | update tatoeba workflow | # What does this PR do?
Update the tatoeba model upload workflow for our new git-based system.
| 12-11-2020 09:09:15 | 12-11-2020 09:09:15 | |
transformers | 9,050 | closed | yuk | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-11-2020 08:26:54 | 12-11-2020 08:26:54 | |
transformers | 9,049 | closed | New version of flax requires frozen dicts | Small update to maintain compatibility with new version of flax.
@mfuntowicz | 12-11-2020 04:47:13 | 12-11-2020 04:47:13 | Hey @KristianHolsheimer,
Sorry I saw your PR a bit too late...I think this is solved now on master no?<|||||>Okay no worries. Sorry I should've tagged you. Thanks for the reply |
transformers | 9,048 | closed | 🐛 [TFBART] LayerDrop not working on TPU | ## Environment info
- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.19.0-13-cloud-amd64-x86_64-with-debian-10.7
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No (TPU)
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFBart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## To reproduce
When I try to run TFBart on TPU, I'm getting the following error :
> ValueError: "attn" is None at the end of the TRUE branch.
It seems to come from the LayerDrop operation :
https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_tf_bart.py#L387-L391
<details>
<summary> Full stack trace (click to expand...)</summary>
>2020/12/11 00:35:34 - INFO - transformers_addons.trainer_tf - ***** Running Evaluation *****
2020/12/11 00:35:34 - INFO - transformers_addons.trainer_tf - Batch size = 8
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
Traceback (most recent call last):
File "train.py", line 203, in <module>
main()
File "train.py", line 194, in main
result = trainer.evaluate()
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 281, in evaluate
output = self._prediction_loop(eval_dataset, description="Evaluation", prediction_loss_only=prediction_loss_only)
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 207, in _prediction_loop
loss, logits = self._evaluate_steps(features, labels)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3299, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
>
> /home/remondnicola/text-summarization/transformers_addons/trainer_tf.py:169 _evaluate_steps *
per_replica_loss, per_replica_logits = self.args.strategy.experimental_run_v2(
train.py:29 _run_model *
out = self.model(features, training=training, **labels)
/home/remondnicola/text-summarization/transformers_addons/models/bart/modeling_tf_bart.py:88 call *
outputs = super().call(inputs["input_ids"],
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:1110 call *
outputs = self.model(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:977 call *
inputs["encoder_outputs"] = self.encoder(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:388 call *
if training and (dropout_probability < self.layerdrop): # skip the layer
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:924 if_stmt
basic_symbol_names, composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:962 tf_if_stmt
error_checking_orelse)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/control_flow_ops.py:1177 cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/cond_v2.py:91 cond_v2
op_return_value=pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:981 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:958 error_checking_orelse
basic_symbol_names + composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:298 _verify_tf_cond_vars
functools.partial(_verify_single_cond_var, name), body_var, orelse_var)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 map_structure
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 <listcomp>
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:242 _verify_single_cond_var
raise ValueError('"{}" is None at the end of the TRUE branch.'.format(name))
>
> ValueError: "attn" is None at the end of the TRUE branch.
</details>
| 12-11-2020 00:53:09 | 12-11-2020 00:53:09 | It seems to work if I completely remove the `LayerDrop` (by commenting out the `if` clause, in both encoder and decoder).<|||||>Hey @astariul-colanim,
I think a fix in #9029 (replacing the if-else by a `continue` statement) should do the trick.
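For context, a rough sketch of that pattern (simplified, not the exact diff in #9029):
```python
import random

def encode_with_layerdrop(layers, x, attention_mask, layerdrop, training):
    all_attentions = []
    for layer in layers:
        if training and random.uniform(0, 1) < layerdrop:
            continue  # skip the whole layer; no tf.cond with mismatched branch outputs is built
        x, attn = layer(x, attention_mask)
        all_attentions.append(attn)
    return x, all_attentions
```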
Could you try again from the branch and let me know?
Thanks!<|||||>So far #9029 seems working perfectly !
Let's close this issue when #9029 is merged :)
Thanks for the fix!<|||||>@patrickvonplaten In the end, the model crashes during evaluation..
<details>
<summary> Full stack trace (click to view)</summary>
```
2020/12/15 01:19:09 - INFO - transformers_addons.trainer_tf - ***** Running Evaluation *****
2020/12/15 01:19:09 - INFO - transformers_addons.trainer_tf - Batch size = 8
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=Tru
e)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=Tru
e)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=Tru
e)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
Traceback (most recent call last):
File "train.py", line 190, in main
trainer.train()
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 339, in train
result = self.evaluate()
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 281, in evaluate
output = self._prediction_loop(eval_dataset, description="Evaluation", prediction_loss_only=prediction_loss_only)
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 207, in _prediction_loop
loss, logits = self._evaluate_steps(features, labels)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3299, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py:169 _evaluate_steps *
per_replica_loss, per_replica_logits = self.args.strategy.experimental_run_v2(
train.py:29 _run_model *
out = self.model(features, training=training, **labels)
/home/remondnicola/text-summarization/transformers_addons/models/bart/modeling_tf_bart.py:97 call *
outputs = super().call(inputs["input_ids"],
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:1222 call *
outputs = self.model(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:1062 call *
inputs["encoder_outputs"] = self.encoder(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:719 call *
for encoder_layer in self.layers:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:924 if_stmt
basic_symbol_names, composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:962 tf_if_stmt
error_checking_orelse)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/control_flow_ops.py:1177 cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/cond_v2.py:91 cond_v2
op_return_value=pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:981 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:958 error_checking_orelse
basic_symbol_names + composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:298 _verify_tf_cond_vars
functools.partial(_verify_single_cond_var, name), body_var, orelse_var)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 map_structure
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 <listcomp>
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:242 _verify_single_cond_var
raise ValueError('"{}" is None at the end of the TRUE branch.'.format(name))
ValueError: "all_attentions" is None at the end of the TRUE branch.
```
</details><|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,047 | closed | Change nn.dropout to layer.Dropout in TFBart | # What does this PR do?
This PR changes all the `tf.nn.dropout` calls in `modeling_tf_bart.py` to use `tf.keras.layers.Dropout` instead.
This is more consistent with `modeling_tf_roberta.py`.
Fixes #9045
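For illustration, a hedged before/after sketch of the pattern this PR moves to (the dropout rate and tensor shapes below are made up):
```python
import tensorflow as tf

x = tf.ones((2, 4, 8))

# before: functional dropout whose rate is picked by a Python conditional
# x = tf.nn.dropout(x, rate=0.1 if training else 0)

# after: a Keras Dropout layer created once and handed the training flag at call time
dropout = tf.keras.layers.Dropout(rate=0.1)
y = dropout(x, training=True)   # dropout applied
z = dropout(x, training=False)  # identity at inference time
```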
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten | 12-11-2020 00:41:27 | 12-11-2020 00:41:27 | Hey @astariul-colanim
Thanks for the fix! Looks good to me if it solves the error on TPU. Also cc @jplu |
transformers | 9,046 | closed | BlenderBot RuntimeError: CUDA error: device-side assert triggered | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-5.4.0-56-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes (GTX 1060 6GB)
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): I am using the BlenderbotForConditionalGeneration ('facebook/blenderbot-90M') along with the relevant small tokenizer.
The problem arises when using:
I am using my own trainer implementation. I think that the problem has to do with the indexes of the labels. More specifically when I am using:
```outputs = self.model(input_ids=inputs, attention_mask=inputs_att, labels=pad_targets, return_dict=True)```
everything works fine as the "pad_targets" are the targets using 0 as the index for masked (padded) tokens.
However when I am using:
```outputs = self.model(input_ids=inputs, attention_mask=inputs_att, labels=repl_targets, return_dict=True)```
and then printing `outputs['loss']`, the following error occurs:
`RuntimeError: CUDA error: device-side assert triggered`
as the "repl_targets" are the targets using the -100 as the index for masked (padded) tokens.
The aforementioned error also occurs when using the argument:
`decoder_input_ids=repl_targets`
The tasks I am working on is:
Dialogue generation in Empathetic Dialogues dataset.
## Expected behavior
I think that there is a problem with the -100 padding token. But I am not sure :) | 12-11-2020 00:35:24 | 12-11-2020 00:35:24 | Hey @manzar96,
It would be awesome if you could provide a full code snippet that I can copy paste and run to reproduce the error. I am not able to do so with your code above.
Thanks a lot!<|||||>I made an example:
```python
import torch
from transformers import BlenderbotSmallTokenizer, \
BlenderbotForConditionalGeneration
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
model = BlenderbotForConditionalGeneration.from_pretrained('facebook/blenderbot-90M')
model.to(DEVICE)
inputs = torch.tensor([[14, 49, 42, 626, 2727, 1063, 5, 0, 0, 0, 0, 0, 0, 0],
[14, 1322, 7, 1427, 13, 7, 153, 384, 5, 14,
18, 64, 7261, 5]], device=DEVICE)
inputs_att = torch.tensor([[1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]],
device=DEVICE)
repl_targets = torch.tensor([[ 46, 15, 3283, 20, -100, -100, -100, -100, -100, -100, -100, -100,
-100, -100],
[ 121, 54, 37, 53, 60, 12, 447, 10, 1427, 15, 51, 11,
598, 20]], device=DEVICE)
pad_targets = torch.tensor([[ 46, 15, 3283, 20, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0],
[ 121, 54, 37, 53, 60, 12, 447, 10, 1427, 15, 51, 11,
598, 20]], device=DEVICE)
outputs=model.forward(input_ids=inputs, attention_mask=inputs_att,
labels=repl_targets, return_dict=True)
import ipdb;ipdb.set_trace()
```
If you try printing the outputs['loss'] the error occurs. However, if you replace the `repl_targets` with the `pad_targets` variable everything works fine (but the loss does not mask 0, so that's not always correct for use).<|||||>@patrickvonplaten
This is a bug, in bart `decoder_input_ids` are prepared by shifting the `labels` to right, but it doesn't replace -100 with `pad_token_id`.
https://github.com/huggingface/transformers/blob/6587cf9f8448b5573cf4a1c639ef4857472d1da0/src/transformers/models/bart/modeling_bart.py#L65-L73
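For illustration, a hedged sketch of the kind of change that would address this: replace -100 before the shifted labels are used as decoder inputs, in the spirit of the T5 behaviour mentioned next.
```python
import torch

def shift_tokens_right(labels, pad_token_id, decoder_start_token_id):
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    # -100 only marks positions to ignore in the loss; it must never reach the embedding layer
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted
```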
In T5 we automatically replace -100 with `pad_token_id` when preparing `decoder_input_ids`.
https://github.com/huggingface/transformers/blob/6587cf9f8448b5573cf4a1c639ef4857472d1da0/src/transformers/models/t5/modeling_t5.py#L740-L756<|||||>You're right @patil-suraj - do you want to open a PR to fix it in Bart? :-) <|||||>Yeah! |
transformers | 9,045 | closed | 🐛 [TF_BART] "<internal expr>" has dtype float32 in the TRUE branch, but dtype=int32 in the FALSE branch | ## Environment info
- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.19.0-13-cloud-amd64-x86_64-with-debian-10.7
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No (TPU)
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFBart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## To reproduce
When I try to run TF_Bart on TPU, I'm getting the following error :
> TypeError: "<internal expr>" has dtype float32 in the TRUE branch, but dtype=int32 in the FALSE branch. TensorFlow control flow requires that they are the same.
It seems to come from the dropout operation :
https://github.com/huggingface/transformers/blob/b01ddc9577b87f057e163d49563ee3f74f4810cf/src/transformers/models/bart/modeling_tf_bart.py#L373
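For reference, a hedged sketch of why the two branches clash and of a dtype-consistent variant (the change eventually merged in #9047 goes further and uses a `tf.keras.layers.Dropout` layer instead):
```python
import tensorflow as tf

x = tf.ones((2, 3))
dropout_rate = 0.1

def forward(x, training):
    # both branches are float now, so the tf.cond autograph builds on TPU no longer
    # sees float32 in the TRUE branch and int32 in the FALSE branch
    return tf.nn.dropout(x, rate=dropout_rate if training else 0.0)

print(forward(x, training=False))
```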
<details>
<summary> Full stack trace (click to expand...)</summary>
>2020/12/11 00:00:55 - INFO - transformers_addons.trainer_tf - ***** Running Evaluation *****
2020/12/11 00:00:55 - INFO - transformers_addons.trainer_tf - Batch size = 8
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
Traceback (most recent call last):
File "train.py", line 203, in <module>
main()
File "train.py", line 194, in main
result = trainer.evaluate()
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 281, in evaluate
output = self._prediction_loop(eval_dataset, description="Evaluation", prediction_loss_only=prediction_loss_only)
File "/home/remondnicola/text-summarization/transformers_addons/trainer_tf.py", line 207, in _prediction_loop
loss, logits = self._evaluate_steps(features, labels)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 627, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
*args, **kwds))
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3299, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
>
> /home/remondnicola/text-summarization/transformers_addons/trainer_tf.py:169 _evaluate_steps *
per_replica_loss, per_replica_logits = self.args.strategy.experimental_run_v2(
train.py:29 _run_model *
out = self.model(features, training=training, **labels)
/home/remondnicola/text-summarization/transformers_addons/models/bart/modeling_tf_bart.py:88 call *
outputs = super().call(inputs["input_ids"],
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:1110 call *
outputs = self.model(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:977 call *
inputs["encoder_outputs"] = self.encoder(
/home/remondnicola/.venv/summarization/lib/python3.7/site-packages/transformers/models/bart/modeling_tf_bart.py:373 call *
x = tf.nn.dropout(x, rate=self.dropout if training else 0)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:924 if_stmt
basic_symbol_names, composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:962 tf_if_stmt
error_checking_orelse)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/control_flow_ops.py:1177 cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/cond_v2.py:91 cond_v2
op_return_value=pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:981 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:958 error_checking_orelse
basic_symbol_names + composite_symbol_names)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:298 _verify_tf_cond_vars
functools.partial(_verify_single_cond_var, name), body_var, orelse_var)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 map_structure
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:617 <listcomp>
structure[0], [func(*x) for x in entries],
/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/operators/control_flow.py:267 _verify_single_cond_var
orelse_var.dtype.name))
>
> TypeError: "<internal expr>" has dtype float32 in the TRUE branch, but dtype=int32 in the FALSE branch. TensorFlow control flow requires that they are the same.
</details> | 12-11-2020 00:10:24 | 12-11-2020 00:10:24 | |
transformers | 9,044 | closed | XLNet ONNX model giving error: "Attempting to broadcast an axis by a dimension other than 1" | ## Environment info
- `transformers` version: 3.5.1
- Platform: Linux-4.14.193-113.317.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
@TevenLeScao @mfuntowicz @patil-suraj
## Information
Model I am using (Bert, XLNet ...): XLNet
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Trained HuggingFace Transformers model XLNetForSequenceClassification on custom dataset with PyTorch backend.
2. Used provided `convert_graph_to_onnx.py` script to convert model (from saved checkpoint) to ONNX format.
3. Loaded the model with ONNXRuntime
4. When feeding in int64 numpy arrays `input_ids` and `attention_masks`, the model returns the following error except when both inputs have shape (x, 1) or (x, 6). There is nothing in the configuration of the model or the structure of the training data from my end that would require shape (x, 1) or (x, 6).
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Add node. Name:'Add_26' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/math/element_wise_ops.h:361 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 5 by 6
```
## Expected behavior
The expected behavior is for the model to return predictions successfully (i.e. probabilities for all classes). | 12-10-2020 22:21:37 | 12-10-2020 22:21:37 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,043 | closed | The example code does not work | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.4.0-154-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.12
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @sgugger
## Information
When I run the code about Question Answering from the documentation https://huggingface.co/transformers/task_summary.html, there is an error reported.
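For readers on v3.x: the model call in the snippet below returns a plain tuple unless `return_dict=True` is passed, which is what produces the `AttributeError` reported further down. A hedged adjustment (reusing the `model`/`inputs` names from the snippet) is:
```python
outputs = model(**inputs, return_dict=True)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
# equivalently, keep the tuple output and unpack it:
# answer_start_scores, answer_end_scores = model(**inputs)
```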
## To reproduce
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
text = r"""
π€ Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNetβ¦) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
if __name__ == "__main__":
questions = [
"How many pretrained models are available in π€ Transformers?",
"What does π€ Transformers provide?",
"π€ Transformers provides interoperability between which frameworks?",
]
for question in questions:
inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]
text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits
answer_start = torch.argmax(
answer_start_scores
) # Get the most likely beginning of answer with the argmax of the score
answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
print(f"Question: {question}")
print(f"Answer: {answer}")
```
## Error message
File "/home/pxf109/LegalContractModel/example.py", line 22, in <module>
answer_start_scores = outputs.start_logits
AttributeError: 'tuple' object has no attribute 'start_logits
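(Side note on the failing lines: under transformers 3.x the model call returns a plain tuple rather than an output object, so a version-agnostic variant of the two failing lines might look like the sketch below, assuming the usual `(start_logits, end_logits)` ordering when no labels are passed.)
```python
# Sketch only: integer indexing works for both tuple outputs (v3.x) and ModelOutput objects (v4.x)
outputs = model(**inputs)
answer_start_scores = outputs[0]  # start logits
answer_end_scores = outputs[1]    # end logits
```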
| 12-10-2020 22:01:35 | 12-10-2020 22:01:35 | The example code in the documentation of version 4 works with transformers version 4. You can find the examples for older versions (since you seem to be running v3.3.1) by clicking on the navigation bar at the left of the documentation pages. [Here](https://huggingface.co/transformers/v3.3.1/) is a direct link to v3.3.1. |
transformers | 9,042 | closed | [finetune_trainer] enhancements and fixes | The main need was to add speed metrics to perform speed performance regressions. But on the way a bunch of other things got worked on. Hopefully you will find the proposed changes useful.
This PR changes `trainer`:
* [x] adds an optional `metric_key_prefix` for `evaluate` and `predict` functions to return metrics with a prefix key set by the user rather than the default `eval_`.
This PR changes `finetune_trainer`:
* [x] utils: sort json keys when dumping to filesystem
* [x] renames s/eval/val/ for the validation dataset results
* [x] adds speed metrics for all: train/eval/test (samples_per_second/runtime/n_objs)
* [x] refactors logging/saving code for each mode
* [x] renames internal vars to tell which is metrics and which is output that is more than just metrics
* [x] fixes a bug where all_results.json wasn't getting saved in the right place
* [x] rounds up loss values to 4 decimals - before it was `"eval_loss": 368.2950744628906,` - not sure if it's better done upstream in the trainer?
Here is a sample of `all_results.json` after this change:
```
{
"epoch": 1.0,
"test_bleu": 22.8548,
"test_gen_len": 35.9,
"test_loss": 734.8612,
"test_n_ojbs": 10,
"test_runtime": 2.5185,
"test_samples_per_second": 3.971,
"train_n_ojbs": 200,
"train_runtime": 24.9101,
"train_samples_per_second": 8.029
"val_bleu": 26.581,
"val_gen_len": 31.4,
"val_loss": 738.3438,
"val_n_ojbs": 200,
"val_runtime": 33.9329,
"val_samples_per_second": 5.894,
}
```
@sgugger, @patil-suraj, @patrickvonplaten | 12-10-2020 21:56:50 | 12-10-2020 21:56:50 | Unfortunately the naming has been done a long time ago and even if it's not ideal, we can't break it like this as people rely on the names of the keys in their code. I would advocate for the renaming to be done in the script directly and not inside `Trainer`.
If there is really a lot of people asking for it, we can think of a strategy to rename those keys progressively with some kind of deprecation warning, but since it's merely cosmetic, I would leave that for scripts using Trainer.<|||||>I see what you mean that someone relying on "eval_loss" when doing predict would have their code broken. Yes, we can't do that.
I moved this fix back into the finetune trainer as it was originally.
Could we set a target for when we could do breaking changes and fix this bug?
I also find it strange that we use `--n_val` but `eval_`
And then `predict` vs `test_`.
The callbacks are inconsistent too :(
I'd plan a design session where we first collect all the rough edges and inputs on what needs to be polished and then adjust the trainer so that it's not limping for the rest of its life. Can this be done?<|||||>> Could we set a a target for when we could do breaking changes and fix this bug?
Like I said, unless there is strong demand for it, I think we're just going to leave it as it. It's not the ideal naming choice but we have to deal with it now (kind of like PretrainedConfig vs PreTrainedConfig).
> I also find it strange that we use `--n_val` but `eval_`
>
> And then `predict` vs `test_`.
I don't understand that part. Also `predict` could be used for test or evaluation, so `predict` does not mean test.
> The callbacks are inconsistent too
Could you elaborate? If it's the evaluate vs predict you mentioned, there is a reason. `prediction_step` is called both in `predict` and `evaluate` whereas the `on_evaluate` is only called at `evaluate`.
<|||||>> > Could we set a a target for when we could do breaking changes and fix this bug?
>
> Like I said, unless there is strong demand for it, I think we're just going to leave it as it. It's not the ideal naming choice but we have to deal with it now (kind of like PretrainedConfig vs PreTrainedConfig).
I'm not sure how this is similar. I call `trainer.predict()` and get in return `eval_` metrics - this is very confusing.
> > I also find it strange that we use `--n_val` but `eval_`
> > And then `predict` vs `test_`.
>
> I don't understand that part. Also `predict` could be used for test or evaluation, so `predict` does not mean test.
I suppose from the perspective of the existing trainer like finetune they are the same. But surely this is much less of an issue than `val` vs `eval`.
> > The callbacks are inconsistent too
>
> Could you elaborate? If it's the evaluate vs predict you mentioned, there is a reason. `prediction_step` is called both in `predict` and `evaluate` whereas the `on_evaluate` is only called at `evaluate`.
Ah, I see, thank you for clarifying that - then why is there no `on_predict` to match `on_evaluate`? I assumed it was the former.
<|||||>There is no `on_predict` event because the training loop never calls `Trainer.predict`. It does however call `Trainer.evaluate`. I guess we could add the `on_predict` event that would be called at the end of a `Trainer.predict` method.
> But surely this is much less of an issue than `val` vs `eval`.
Could you please clarify that part? I'm not sure what you mean by this.
> I'm not sure how this is similar. I call `trainer.predict()` and get in return `eval_` metrics - this is very confusing.
If we go down that road, `trainer.predict` should only return predictions and not even the metrics (which we won't do either as it's a bigger breaking change but it would definitely make sense to me). Predict and evaluate do not mean test vs evaluation, it's really a matter of getting the predictions of the model vs evaluating the metrics on a given dataset (which could be train/eval/test).
I can get behind adding a prefix argument to those method that defaults to `None` and will be used to prefix the metrics. If one is passed, it's used (so it's easier to get the `test_` prefix you want and does not require ugly post-processing) otherwise `eval_` is used to avoid any breaking changes. Would that work for you?<|||||>>
>
> But surely this is much less of an issue than val vs eval.
>
> Could you please clarifying that part? I'm not sure what you mean by this.
Of course, we have `--n_val` (mnemonic validation), but then we return `eval_(foo|bar)` as the metrics for "validation". But see below.
So now that you have further expanded on eval+predict not being correlated to validation+testing (thank you!), I think I'm not approaching the problem in the right way.
Really, then there is no bug with both `predict` and `evaluate` returning metrics with `eval_`-prefixed keys and the bug is really in the end use in `finetune_reader.py`. Here is what I'm thinking:
1. It shouldn't be `eval_bleu` and `test_bleu`, it should be `val_bleu` and `test_bleu` - because these are both evaluation report on 2 different splits so `--n_val` dataset should lead to `val_bleu` metrics, and `--n_test` to `test_bleu` (not sure of `valid` or `val` - probably `val` to match `--n_val`)
2. Ideally that whole `eval_` prefix should be removed altogether, since it just has a potential at being confused with `val` as in `validation`, and there are no other metrics in that context - the trainer code forcefully adds `eval_` to all metrics - but as we said it's not possible to do w/o a breaking change, and it's not really a problem anyway. these are just evaluation metrics - no problem here.
3. What the interface could use then is getting a `split` argument which it could prepend to the metrics keys, so if someone is doing evaluation on the validation dataset the metrics will be returned could start with `val_eval_`.
So if my logic makes sense practically we can either:
1) leave trainer alone and recode `finetune_reader.py` to prefix `eval_` with the split name - so it'll be `val_eval_bleu` and `test_eval_bleu`
2) add an optional trainer argument `split` for `evaluate` and `predict` and have the trainer arrange the split name prefixed in the metrics as in the option above.
Probably the 1st one is simpler, since it gives the user full flexibility.
The same should happen to the results file naming - we need to choose whether those are `(val|test )_results.json`or `(eval|predict)_results.json` - and not the currently confusing pair `eval_results.json`, but `test_results.json`.
<|||||>If you're happy with `val_eval_bleu` and `test_eval_bleu`, it's fine by me. I'd rather name `split` `prefix` in solution 2 unless I understand badly what you mean by it. It's also fine by me and could be a feature other users find useful (if they don't want `eval_xxx` as names).<|||||>> If you're happy with `val_eval_bleu` and `test_eval_bleu`, it's fine by me. I'd rather name `split` `prefix` in solution 2 unless I understand badly what you mean by it. It's also fine by me and could be a feature other users find useful (if they don't want `eval_xxx` as names).
OK, 3 follow up questions:
1. I suggested `split` since it's typically either `train|val|test`, but `prefix` works just as well. Except it's unclear then in the function API - `prefix` to what? `metrics_key_prefix`?
2. So we are discussing to optionally prefix `eval_bleu`, etc. with something and not replace `eval_`, yes? So the end result is f`"{prefix}_eval_bleu"`, etc.
3. If so, should the prefix include the separator `_` (`test_`) or just be (`test`) and trainer will `"_".join([prefix, key])`? I suppose the latter (see the small sketch below)
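A tiny illustration of the two naming schemes under discussion (hypothetical values, not the final API):
```python
# Hypothetical illustration of the two schemes discussed above
metrics = {"eval_bleu": 26.581, "eval_loss": 738.34}
prefix = "test"

keep_eval_prefix = {f"{prefix}_{k}": v for k, v in metrics.items()}
# -> {"test_eval_bleu": ..., "test_eval_loss": ...}
replace_eval_prefix = {k.replace("eval_", f"{prefix}_", 1): v for k, v in metrics.items()}
# -> {"test_bleu": ..., "test_loss": ...}
```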
What do you think @patrickvonplaten + @patil-suraj? I think @sgugger's priority is the trainer itself, but what do you think about what would be ideal for seq2seq examples domain?<|||||>For 1, yes `metric_key_prefix` sounds better. For 2, I was thinking of replacing the `eval_` actually, which goes with 3, the prefix should not have the `_`. <|||||>@sgugger, please have a look - do we want None as the default and use `eval` in the code or `eval` in the function signature - I suppose the latter, right? I'm a bit confused here with the optional, having non-None default and keeping the API unbroken. Help?<|||||>And so then the only remaining thing that I'm stuck with - do we call the results of `evaluate` in finetune trainer `val` or `eval`? Since we call it `test` for `predict` - so confusing. Are those results run on dataset splits and then should be `val` and `test` or results on functionality they check and then they should be `eval` and `predict` - but `predict` doesn't work, since the results are evaluation results.
I think they should be `val` and `test` since both sets are evaluation results on 2 different splits.<|||||>As the comments issue is unrelated to this PR - how about I just let you edit those comments as you think would be the best, @sgugger. Anything you choose works for me. Thank you.<|||||>It doesn't look like others are going to review this PR. I didn't want to force anybody by asking to review, just tagging. Is it better to ask for a review explicitly?
@sgugger, please let me know if you still want to adjust unrelated to this PR comments or should I merge it and you will deal with it later.
Thank you!<|||||>> Left some nits
Thank you, @patrickvonplaten!
I went on and removed the `optional` word in the docs section as well to match the function signature. You haven't suggested I do that, so just want to make sure I did the right thing.<|||||>> I went on and removed the `optional` word in the docs section as well to match the function signature. You haven't suggested I do that, so just want to make sure I did the right thing.
So that was wrong - thank you for fixing that, @sgugger
- So we are removing `Optional` from the function signature because `Optional == Union[..., None]` and we have no None here
- but we are documenting that the argument is `optional` to the user
|
transformers | 9,041 | closed | google/bert2bert_L-24_wmt_de_en doesn't match official implementation | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-5.4.0-1030-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.8
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten ; maybe @patil-suraj
## Information
I'm trying to run the `transformers` implementation of WMT14 DE->EN translation, using the `google/bert2bert_L-24_wmt_de_en` checkpoint and [instructions](https://huggingface.co/google/bert2bert_L-24_wmt_de_en).
The BLEU scores I get using translations from the `transformers` implementation are substantially lower than those I get from [the official Tensorflow model](https://github.com/google-research/google-research/tree/master/bertseq2seq) -- 24.7 w/ HF vs 34.0 w/ the official implementation.
## To reproduce
The following snippet shows qualitative differences in the output of the models:
```python
import datasets
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# --
# Load dataset
dataset = datasets.load_dataset("wmt14", "de-en", split="test")
sentence = dataset[20]['translation']['de']
target = dataset[20]['translation']['en']
print(target)
# If the street is clear, the pedestrian obtains a green light immediately, if not, there is a delay of around 15 seconds.
# --
# HF model
tokenizer = AutoTokenizer.from_pretrained("google/bert2bert_L-24_wmt_de_en", pad_token="<pad>", eos_token="</s>", bos_token="<s>")
model = AutoModelForSeq2SeqLM.from_pretrained("google/bert2bert_L-24_wmt_de_en")
input_ids = tokenizer(sentence, return_tensors="pt", add_special_tokens=False).input_ids
output_ids = model.generate(input_ids)[0]
output_str = tokenizer.decode(output_ids, skip_special_tokens=True)
print(output_str)
# the road is free, it takes about 15 seconds if not directly for the footganger.
# --
# TF model
import tensorflow.compat.v1 as tf
import tensorflow_hub as hub
import tensorflow_text as tf_text
tf.disable_eager_execution()
# Load model
model = hub.Module('https://tfhub.dev/google/bertseq2seq/bert24_de_en/1')
# Setup session
sess = tf.InteractiveSession()
sess.run(tf.tables_initializer())
sess.run(tf.global_variables_initializer())
# Define graph
src = tf.placeholder(tf.string, shape=[None])
translate = model(src)
# Translate
output_str = sess.run(translate, feed_dict = {
src : [sentence]
})
print(output_str[0])
# "If the road is clear, there is a green area for the pedestrian, if not it takes about 15 seconds."
```
I can also share the (custom) scripts I'm using to run inference on the entire dataset and compute BLEU scores. Note I am using the same BLEU code for both implementations.
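(For reference, the corpus-level BLEU computation is roughly of this shape; this is an illustrative sacrebleu sketch, not the author's exact script:)
```python
# Illustrative only: corpus-level BLEU over decoded outputs vs. references
import sacrebleu

hypotheses = ["the road is free , it takes about 15 seconds if not directly for the footganger ."]
references = ["If the street is clear, the pedestrian obtains a green light immediately, if not, there is a delay of around 15 seconds."]
print(sacrebleu.corpus_bleu(hypotheses, [references]).score)
```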
## Expected behavior
I would expect the BLEU scores and the quality of the translations to be comparable.
Thanks! | 12-10-2020 21:53:54 | 12-10-2020 21:53:54 | Hey @bkj,
Thanks for the very detailed issue. It would be awesome if you could also share your custom scripts here to evaluate on the entire dataset. This indeed seems like a problem, I'll look into it<|||||>@patrickvonplaten Thanks for the quick response.
Code to run inference w/ the two models can be found here:
https://github.com/bkj/hf_bert2bert_debug
By default, it just runs one batch to save time -- you can run on the whole test dataset by setting `QUICKRUN = False` in each of the files.
BLEU scores on this batch are ~ 23 for HF and ~ 35 for TF.
Let me know what you think! I'm not super familiar w/ `transformers`, so it's possible I'm making some pre/post-processing mistake -- so likely a good idea to double check my glue code.<|||||>Hey @bkj,
I'll try to allocate time to solve this problem. I think it is indeed a fundamental difference between the two implementations - will try to investigate. Thanks for your response!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>Sorry for replying that late!
The problem is that the original code for those translation models is not published so that debugging isn't really possible. The original github can be found here: https://github.com/google-research/google-research/tree/master/bertseq2seq and the pretrained weights here: https://tfhub.dev/google/bertseq2seq/roberta24_bbc/1 in case someone is very motivated to take a deeper look.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,040 | closed | Zero Shot Classification Pipeline fails when running in CPU-only Docker container | ## Environment info
- `transformers` version: 4.0.0
- Platform: MacOS 10.15.7 (2018 MacBook Pro 15")
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0 CPU Only
- Tensorflow version (GPU?): N/A
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
### Who can help
Maybe @LysandreJik ?
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-mnli
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Pull and start the official [transformers-pytorch-cpu](https://hub.docker.com/r/huggingface/transformers-pytorch-cpu/dockerfile) container.
2. `docker exec -it huggingface bash`
3. `python3`
4. `from transformers import pipeline`
5. `classifier = pipeline("zero-shot-classification", model='facebook/bart-large-mnli', tokenizer='facebook/bart-large-mnli', device=-1)`
Step 4 above results in the following warning being printed:
```
/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
```
Step 5 above results in the models being downloaded, then `Killed` is printed and the Python interpreter exits.
## Expected behavior
When running locally in a Jupyter Notebook or directly in the terminal (not in a container), the following works correctly and the warning about the CUDA initialization isn't printed:
```
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='facebook/bart-large-mnli', tokenizer='facebook/bart-large-mnli', device=-1)
```
The problem seems to be limited to either the zero shot classification pipeline, or the facebook/bart-large-mnli model, since the following works correctly in the container (though the warning from Step 4 about the CUDA initialization is still printed):
```
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
```
| 12-10-2020 21:35:05 | 12-10-2020 21:35:05 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,039 | closed | BERT outputs are different with the same input in training mode | When the training mode is enabled, BERT model returns different outputs even for the same input, is there any idea on why this happens?
```python
import torch
from transformers import BertModel, BertTokenizer
model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
a = tokenizer.encode("Hello how are you?", return_tensors='pt')
model.train()
torch.mean(model(a)[0])
# tensor(-0.0167, grad_fn=<MeanBackward0>)
torch.mean(model(a)[0])
# tensor(-0.0162, grad_fn=<MeanBackward0>)
torch.mean(model(a)[0])
# tensor(-0.0156, grad_fn=<MeanBackward0>)
``` | 12-10-2020 21:32:25 | 12-10-2020 21:32:25 | Hi there! The [forum](https://discuss.huggingface.co/) is a better place for those kinds of general questions, as we keep the issues for bugs and feature requests only.
To answer your question, this is because most Deep Learning models (including BERT) use a technique called [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) to generalize better, which randomly zeros some activations during training. This randomness is the reason you are getting different results for the same inputs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @lamthuy were you able to investigate and find out the cause?<|||||>Same question here!<|||||>I am also facing the same error.
@sgugger I thought the dropout is deactivated once I call model.eval()
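For reference, a minimal sketch of what deterministic inference looks like once dropout is switched off (building on the snippet above; no new names introduced):
```python
# Minimal sketch: with model.eval() (and no_grad), repeated calls give identical outputs
model.eval()
with torch.no_grad():
    out1 = torch.mean(model(a)[0])
    out2 = torch.mean(model(a)[0])
assert torch.equal(out1, out2)
```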
<|||||>Not sure what your code sample is @lava18 . The code sample above (fixed like below) always returns the same value once the line `model.train()` is removed.
```py
import torch
from transformers import BertModel, BertTokenizer
model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
a = tokenizer.encode("Hello how are you?", return_tensors='pt')
torch.mean(model(a).to_tuple()[0])
```<|||||>You need to call `model.eval()` after training (or before inference). That should deactivate the dropouts and you will always get the same output for the same input. |
transformers | 9,038 | closed | Fix typo #9012 (#1) | There is a tiny typo in the code "transformers/examples/language-modeling/run_mlm_wwm.py" at line 284. [Details.](https://github.com/huggingface/transformers/issues/9012)
# What does this PR do?
Fixes #9012
## Before submitting
- [Y] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Y] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-10-2020 21:13:06 | 12-10-2020 21:13:06 | |
transformers | 9,037 | closed | fix the typo 9012 | # What does this PR do?
Fixes #9012
## Before submitting
- [Y] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Y] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-10-2020 20:30:28 | 12-10-2020 20:30:28 | Hello! I think you reverted your commit. The PR shows no diff.<|||||>> Hello! I think you reverted your commit. The PR shows no diff.
Hi! Yes, unfortunately, I was too quick... The first commit does the fix.<|||||>I opened [PR](https://github.com/huggingface/transformers/pull/9038) with correct changes. |
transformers | 9,036 | closed | [docs] missing info on call back registry | https://huggingface.co/transformers/main_classes/callback.html is missing instructions/examples on how to register a callback. Thanks.
@sgugger | 12-10-2020 18:21:55 | 12-10-2020 18:21:55 | I guess the corresponding test demonstrates the usage: https://github.com/huggingface/transformers/blob/5c0bf39782c9eac8df55b89518f61c430862a7f6/tests/test_trainer_callback.py
<|||||>I don't think this issue needs to be closed, one more example could be added to the documentation! Let's make a good first issue out of it, maybe a contributor could help there :-)<|||||>Hi @sgugger. I'm new and I'd like to start contributing. Can I work on this issue?<|||||>Sure!<|||||>Thanks. Would an example like this be okay? -
```
class MyCallback(TrainerCallback):
"A callback that prints a message at the beginning of training"
def on_train_begin(self, args, state, control, **kwargs):
print("Starting training")
trainer = Trainer(
model,
args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
callbacks=[MyCallback]
)
```
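For completeness, registering the same callback on an existing `Trainer` can also be done with `add_callback` (a minimal sketch, reusing the names from the example above; it accepts either a callback class or an instance):
```python
# Minimal sketch of the add_callback alternative
trainer = Trainer(model, args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.add_callback(MyCallback)      # pass the class ...
# trainer.add_callback(MyCallback())  # ... or an already-instantiated callback
```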
Also, should I add the example to https://huggingface.co/transformers/main_classes/callback.html, or would the [training tutorial](https://huggingface.co/transformers/training.html) be a better place?<|||||>I think is should also show how to use `add_callback` as an alternative too, otherwise, that's the gist of it. The callbacks page is perfect for this I think.<|||||>Thanks, I've added an example for `add_callback` too and opened a PR [here](https://github.com/huggingface/transformers/pull/10928). It's failing some tests right now but I'm not sure why since I've only modified the `callback.rst` file. Could you please help me figure out why this might be happening?<|||||>The CI is flakey at times, I restarted the failing jobs. It looks like network problems, may have to restart again later if it still fails.
edit: no luck, still network issues, will try again later, but do not worry, as long as the docs job passes and it does - you're good.<|||||>Cool, thanks! |
transformers | 9,035 | closed | Improve coverage of the documentation | Currently, some public classes are not documented anywhere because we didn't create the corresponding doc pages. Those missing pages are:
- Benchmark classes
- Bert Japanese
- Data collators
If someone feels like working on one of those, please tag yourself with a comment on this issue. Once the objects are properly documented, they can be removed from the `SHOULD_BE_DOCUMENTED` constant in [this file](https://github.com/huggingface/transformers/blob/1310e1a758edc8e89ec363db76863c771fbeb1de/utils/check_repo.py#L374).
| 12-10-2020 17:00:52 | 12-10-2020 17:00:52 | I added docs for Bertweet [here](https://github.com/huggingface/transformers/pull/9379), first contribution, let me know if there is anything missing<|||||>Reopening the issue are not all of the items are fixed yet!<|||||>Hello @sgugger,
If no one else is working on it yet, I'd like to work on the `Bert Japanese` document.
(I also am interested in working on `Data collators`, but Iβd like to do that one by one. If there is someone else who would like to work on, please give priority to that person.)
<|||||>Go ahead @forest1988 :-)<|||||>Thanks, I'll do my best!<|||||>I deeply apologize for my delay in opening a PR for Bert Japanese.
I've just opened the PR.
https://github.com/huggingface/transformers/pull/11219
If you find any flaws, please let me know. I'll correct it soon.
|
transformers | 9,034 | closed | Refactor FLAX tests | # What does this PR do?
This PR refactors the FLAX models tests in a `test_modeling_flax_common` file and speeds them up by using small random models instead of pretrained ones. It will hopefully speed up the CI and make it less flaky! | 12-10-2020 16:55:24 | 12-10-2020 16:55:24 | |
transformers | 9,033 | closed | Make ProphetNetModel really compatible with EncoderDecoder | The interesting part of ProphetNet is its decoder which can do n-gram causal language modeling. So it could be very interesting to load a pre-trained prophetnet decoder model into an encoder-decoder design with - let's say - a longformer encoder for long-range sequence modeling.
Due to some narrow-minded thinking on my part, this didn't work previously.
```python
from transformers import EncoderDecoderModel
EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-large-4096", "microsoft/prophetnet-large-uncased")
```
As one can see, none of the pre-trained **decoder** weights are loaded into the model. The reason is that `ProphetNetForCausalLM` was badly modularized.
Merging this PR would make it possible to load any ProphetNet decoder into an encoder-decoder model, and fine-tuning a "build-it-yourself" encoder-decoder would become much easier, *e.g.*:
```python
from transformers import EncoderDecoderModel
import torch
model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-large-4096", "microsoft/prophetnet-large-uncased")
input_ids = torch.tensor([10 * [1]])
labels = torch.tensor([10 * [0]])
loss = model(input_ids, decoder_input_ids=labels, labels=labels).loss
loss.backward()
```
The above use-case might also be interesting for @ibeltagy actually.
## Breaking changes
This does introduce a pretty heavy breaking change to `ProphetNetForCausalLM`. However, the only reason this class was created was to make it useable with `EncoderDecoderModel` and this arguably failed a bit the first time since it made it way too difficult to load pretrained ProphetNet models into the `EncoderDecoderModel`. I guess I see this more of solving a bug then "new design". Also there are no pre-trained `ProphetNetForCausalLM` models on the model hub and I highly doubt anybody has really used this class.
I want to use the same pattern for BartForCausalLM and T5ForCausalLM, so it'd be great to get this merged even though there are some breaking changes. | 12-10-2020 16:08:19 | 12-10-2020 16:08:19 | |
transformers | 9,032 | closed | ImportError: cannot import name 'DPRReader' from 'transformers' | Hi, I am trying to run below code, it can be found at this [link](https://huggingface.co/transformers/model_doc/dpr.html#dprreader)
```
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='pt'
)
outputs = model(**encoded_inputs)
start_logits = outputs.stat_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
```
But got error
## Environment info
- `transformers` version: 3.0.2
- Platform: Windows (GCP Instance)
- Python version: 3.8.6
- PyTorch version (GPU?): '1.7.0+cpu'
- Tensorflow version (GPU?): Not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am just trying to run the DPR model.
## To reproduce
Steps to reproduce the behavior:
1. Execute this in your python
```
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='pt'
)
outputs = model(**encoded_inputs)
start_logits = outputs.stat_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
```
The error traceback is below:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-2f646c4dd41f> in <module>
----> 1 from transformers import DPRReader, DPRReaderTokenizer
2 tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
3 model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
4 encoded_inputs = tokenizer(
5 questions=["What is love ?"],
ImportError: cannot import name 'DPRReader' from 'transformers' (C:\<some_path>\env\lib\site-packages\transformers\__init__.py)
```
## Expected behavior
I haven't executed this but I would hope nothing in output as all the results are stored in variables. | 12-10-2020 14:15:28 | 12-10-2020 14:15:28 | The DPR model is part of the 3.1.0 release. Please update your transformers library (4.0.1 is the current release btw :)).
https://github.com/huggingface/transformers/releases/tag/v3.1.0<|||||>Hi @cronoik , Thanks for answering. I tried this
```
pip install transformers==4.0.1
```
but got this error
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
sentence-transformers 0.3.9 requires transformers<3.6.0,>=3.1.0, but you have transformers 4.0.1 which is incompatible.
dpr 0.1.0 requires transformers<3.1.0,>=3.0.0, but you have transformers 4.0.1 which is incompatible.
```
So then I installed version 3.1.0 as follows:
```
pip install transformers==3.1.0
```
But still getting dependency error.
```
Collecting transformers==3.1.0
Using cached transformers-3.1.0-py3-none-any.whl (884 kB)
Requirement already satisfied: requests in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (2.25.0)
Requirement already satisfied: sacremoses in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (0.0.43)
Requirement already satisfied: tqdm>=4.27 in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (4.48.0)
Requirement already satisfied: numpy in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (1.18.5)
Requirement already satisfied: filelock in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (3.0.12)
Requirement already satisfied: packaging in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (20.4)
Requirement already satisfied: sentencepiece!=0.1.92 in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (0.1.94)
Requirement already satisfied: regex!=2019.12.17 in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (2020.11.13)
Requirement already satisfied: six in c:\users\hiteshsom\documents\env\lib\site-packages (from packaging->transformers==3.1.0) (1.15.0)
Requirement already satisfied: pyparsing>=2.0.2 in c:\users\hiteshsom\documents\env\lib\site-packages (from packaging->transformers==3.1.0) (2.4.7)
Requirement already satisfied: chardet<4,>=3.0.2 in c:\users\hiteshsom\documents\env\lib\site-packages (from requests->transformers==3.1.0) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\hiteshsom\documents\env\lib\site-packages (from requests->transformers==3.1.0) (2020.11.8)
Requirement already satisfied: idna<3,>=2.5 in c:\users\hiteshsom\documents\env\lib\site-packages (from requests->transformers==3.1.0) (2.10)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\hiteshsom\documents\env\lib\site-packages (from requests->transformers==3.1.0) (1.25.10)
Requirement already satisfied: regex!=2019.12.17 in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (2020.11.13)
Requirement already satisfied: six in c:\users\hiteshsom\documents\env\lib\site-packages (from packaging->transformers==3.1.0) (1.15.0)
Requirement already satisfied: click in c:\users\hiteshsom\documents\env\lib\site-packages (from sacremoses->transformers==3.1.0) (7.1.2)
Requirement already satisfied: joblib in c:\users\hiteshsom\documents\env\lib\site-packages (from sacremoses->transformers==3.1.0) (0.17.0)
Requirement already satisfied: tqdm>=4.27 in c:\users\hiteshsom\documents\env\lib\site-packages (from transformers==3.1.0) (4.48.0)
Collecting tokenizers==0.8.1.rc2
Using cached tokenizers-0.8.1rc2-cp38-cp38-win_amd64.whl (1.9 MB)
Installing collected packages: tokenizers, transformers
Attempting uninstall: tokenizers
Found existing installation: tokenizers 0.9.4
Uninstalling tokenizers-0.9.4:
Successfully uninstalled tokenizers-0.9.4
Attempting uninstall: transformers
Found existing installation: transformers 4.0.1
Uninstalling transformers-4.0.1:
Successfully uninstalled transformers-4.0.1
Successfully installed tokenizers-0.8.1rc2 transformers-3.1.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
dpr 0.1.0 requires transformers<3.1.0,>=3.0.0, but you have transformers 3.1.0 which is incompatible.
```<|||||>Well, you have a package (`dpr`) installed that requires transformers<3.1.0,>=3.0.0. You can now do what you always do in such a dependency conflict situation:
1. Ask yourself if you need this package, if not uninstall it.
2. Create a virtual environment and install the transformers library there.
3. Force pip to install the package anyway but keep in mind that this might break the `dpr` package.
Do the one that suits your needs the most.
<|||||>Hi, I installed `transformers==3.0.0`, which I think installed `dpr` but gave a dependency error on `sentence transformers`; then I installed `transformers==3.1.0`, which only gives a dependency error for `dpr`, and now when I do `pip freeze` I see both packages.
After this I ran the example script and it gave this output
```
HBox(children=(FloatProgress(value=0.0, description='Downloading', max=231508.0, style=ProgressStyle(descriptiβ¦
HBox(children=(FloatProgress(value=0.0, description='Downloading', max=484.0, style=ProgressStyle(description_β¦
HBox(children=(FloatProgress(value=0.0, description='Downloading', max=437998572.0, style=ProgressStyle(descriβ¦
Some weights of DPRReader were not initialized from the model checkpoint at facebook/dpr-reader-single-nq-base and are newly initialized: ['span_predictor.encoder.bert_model.embeddings.position_ids']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-2f646c4dd41f> in <module>
9 )
10 outputs = model(**encoded_inputs)
---> 11 start_logits = outputs.stat_logits
12 end_logits = outputs.end_logits
13 relevance_logits = outputs.relevance_logits
AttributeError: 'tuple' object has no attribute 'stat_logits'
```
It's still an error, but an AttributeError rather than an ImportError, so maybe we can close this issue.<|||||>That is because the class output objects were introduced in a later transformer version. For 3.1.0 the variable `outputs` is still a tuple and you need to check the documentation of DPRReader to figure out which element of the tuple is `stat_logits`, `end_logits` and `relevance_logits`.
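A minimal sketch of the tuple-style access under transformers 3.1.0 (the ordering below is assumed from the DPRReader output documentation, reusing the names from the snippet above):
```python
# Sketch for transformers 3.1.0, where the model returns a plain tuple
outputs = model(**encoded_inputs)
start_logits, end_logits, relevance_logits = outputs[0], outputs[1], outputs[2]
```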
But I have just checked the installed packages in a virtual environment with 3.1.0 and 4.0.0 and both had no package called `dpr` installed. You probably got it from somewhere else and can remove it.<|||||>`dpr` may come by installing `transformers` version `3.0.0`<|||||>>
>
> That is because the class output objects were introduced in a later transformer version. For 3.1.0 the variable `outputs` is still a tuple and you need to check the documentation of DPRReader to figure out which element of the tuple is `stat_logits`, `end_logits` and `relevance_logits`.
Thanks for this. I will check documentation
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,031 | closed | GPT2 attention mask | I want to use gpt2 to generate a list of options
> an option is a sentence starting with a special token '<option>'.
As I don't want the following options to rely on the previous options, I think I should mask all the previous options.
I could simply implement that during generation by generating one option per time, but I don't know how to do that during training. | 12-10-2020 14:10:36 | 12-10-2020 14:10:36 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>Thanks for reminding!
https://discuss.huggingface.co/t/dynamic-attention-mask-during-gpt-2-training/2789
|
transformers | 9,030 | closed | Initial README for `t5-small-indonesian-summarization-cased` model | Initial README for Indonesian T5 Summarization Small Model | 12-10-2020 11:08:31 | 12-10-2020 11:08:31 | Awesome! @panggi - did you check out the new `mt5` model as well by any chance? It should work better for your use-case I think :-) <|||||>Thanks @patrickvonplaten, i just knew it from you about `mt5` and definitely will check it out! :)<|||||>Thanks for sharing @panggi |
transformers | 9,029 | closed | [TF Bart] Refactor TFBart | # What does this PR do?
Mirror of #8900 for TFBart.
The same improvements are done for Bart except adding torchscript functionality (as it does not exist in tf bart).
- [x] Keep dims consistent within the model -> no switching around between time x batch_size and batch_size x time. We can just stick to batch_size x time throughout the whole forward pass just like other models do too.
- [x] Clean the Attention layer: Replace dict cache by past_key_values tuple (consistency with other models and stateless which is better IMO). Break up complicated if-else cascade and remove unnecessary parameters.
- [x] Correct error with past_key_values/decoder_input_ids/use_cache
- [x] Add input_embeds to Bart
- [x] (very subjectively) better naming
- [x] Check that all slow tests are passing
- [x] Update docstring and final design change check
- [x] Refactor Bart tests
- [x] should solve https://github.com/huggingface/transformers/issues/9048
- [x] Check no speed regression | 12-10-2020 11:08:12 | 12-10-2020 11:08:12 | No speed regression on GPU brutasse in graph mode. PR is ready for review IMO.<|||||>> Awesome!! Thanks for taking care of this part!!
>
> Should we merge #9063 before or after this one?
Let's merge after your PR. I'll take the merge conflicts from you :-)
Also this way I can play around a bit with the new not-existing-cast-bool functionality, yaaay! |
transformers | 9,028 | closed | Initial README for `t5-base-indonesian-summarization-cased` model | Initial README for Indonesian T5 Summarization Base Model | 12-10-2020 10:59:24 | 12-10-2020 10:59:24 | |
transformers | 9,027 | closed | Uber AI plug and play language model (PPLM) | Hi Team,
Thanks for the Hugging Face repo, and I appreciate your great efforts towards adding datasets and models. I was trying to find the PPLM model on the model page, but it showed a 404 error. Could you please check whether the model is available and let me know?
Thanks
Ajay | 12-10-2020 10:07:41 | 12-10-2020 10:07:41 | I don't think we host a specific pplm model in the model hub (cc @julien-c).
Also we don't really plan on continuing to support PPLM in the future<|||||>@ajay01994 are you looking for the code? https://github.com/huggingface/transformers/tree/master/examples/text-generation/pplm
Or you might be referring to https://transformer.huggingface.co/model/pplm - but indeed we don't host the inference anymore (cc @LysandreJik) as it was a bit costly to support.<|||||>Thanks for your quick reply.Actually the PPLM model is having issues due to use of transformers lib - 3.1.0 which is old and thus need certain changes to upgrade. Do you know any better or similar model than PPLM for controlling text in GPT-2 ? that would be of great help
Regards
Ajay <|||||>not at the top of my head, but maybe @mimosavvy or @w4nderlust knows!<|||||>ok....closing this for now ,thanks for your help :)<|||||>> Thanks for your quick reply.Actually the PPLM model is having issues due to use of transformers lib - 3.1.0 which is old and thus need certain changes to upgrade. Do you know any better or similar model than PPLM for controlling text in GPT-2 ? that would be of great help
>
> Regards
>
> Ajay
Can you be more specific about the issues? There has been a PR that should have solved the change to dictionary as returns to the model. |
transformers | 9,026 | closed | Compatibility scripts | # π Feature request
Scripts that translate code in older transformers versions into equivalent code that is compatible with newer versions.
## Motivation
After talking with several other people in my research groups, compatibility issues and getting stuck with old versions have turned out to be pretty common problems. Seeing as many of the things preventing backward compatibility are syntactic (slightly different interfaces for tokenizers, different file paths) I thought it might be possible to add scripts to the package, which translate code to fit, say, one major version higher up (then if a user wanted to step up multiple versions, they could just run several scripts in sequence).
Some rudimentary usage example:
`python transformers-2.11-3.0.py my_project/*`
## Your contribution
I could try to implement such example scripts, but it would probably take me months and result in sub-optimal output, given my limited knowledge of Python parsing and of the interface changes between major and minor versions of transformers.
I did find a [refactoring](https://github.com/python-rope/rope) library for Python, and a snippet showing how to [unparse a Python AST](https://svn.python.org/projects/python/trunk/Demo/parser/unparse.py).
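A very rough sketch of what such a migration script could look like; the rename table and file handling below are purely illustrative assumptions, not an actual transformers tool:
```python
#!/usr/bin/env python
# Illustrative sketch only: regex-based renames between two hypothetical API versions.
import pathlib
import re
import sys

RENAMES = {
    r"\bpytorch_transformers\b": "transformers",                      # hypothetical example
    r"\bWarmupLinearSchedule\b": "get_linear_schedule_with_warmup",   # hypothetical example
}

def migrate(path: pathlib.Path) -> None:
    text = path.read_text(encoding="utf-8")
    for old, new in RENAMES.items():
        text = re.sub(old, new, text)
    path.write_text(text, encoding="utf-8")

if __name__ == "__main__":
    for name in sys.argv[1:]:
        migrate(pathlib.Path(name))
```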
| 12-10-2020 09:10:58 | 12-10-2020 09:10:58 | Hello! Thank you for your proposal. This, however, sounds like a colossal amount of work for limited gain - we try to keep the breaking changes across versions to a minimum. I understand that these do happen from time to time, but only for very good reasons that are thoroughly discussed beforehand.
Would you be able to share what breaking changes have impacted you, and have been a bit too hard to overcome/not documented enough, preventing you from upgrading? Understanding this will help us do better in the future.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,025 | closed | Untranslation of some words from an external dictionary | I use some pre-trained translator models from your library (for example, Helsinki-NLP). During the translation process, I would like to leave some words untranslatable (for example, acronyms, toponyms or names) due to the presence of errors in their translation. I tried adding extra tokens and replacing these words with conditional tokens (<extra_id_0>), but this translation requires raising the num_beams parameter, which significantly slows down the translation. Unfortunately, I can't find any additional mechanisms to perform this task. Is it provided or creating it is a custom task? | 12-10-2020 08:03:20 | 12-10-2020 08:03:20 | Hey @Dmitry-Sn,
It would be great if you could ask these kind of questions on the forum: https://discuss.huggingface.co/ . We try to keep github for issues and less for user-specific use cases. Thanks!<|||||>>
Hi, have you found some way to solve this problem? I am facing the same situation: some proper nouns can't be translated properly, and I want to keep them in their native format.<|||||>Hi @vpegasus! There were no great ideas, generally.
As far as I remember, I decided to try to finetune the model for a special token. In my task, it was simple - since I wanted to keep geographical names in the original language, I replaced only them in the training data with a special token (I didn't have time to check the effectiveness, since I left that company).
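The data preparation step for that idea might look roughly like this (a sketch under the assumption that the protected terms are known in advance; the placeholder token name is made up):
```python
# Rough sketch: swap known terms for a placeholder token before training/translation,
# then swap them back in the output. All names here are illustrative.
PLACEHOLDER = "<keep_{i}>"

def protect(text, terms):
    mapping = {}
    for i, term in enumerate(terms):
        token = PLACEHOLDER.format(i=i)
        mapping[token] = term
        text = text.replace(term, token)
    return text, mapping

def restore(text, mapping):
    for token, term in mapping.items():
        text = text.replace(token, term)
    return text
```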
Colleagues also suggested a solution with special characters (it seems to separate the word with <> characters), but it worked poorly.
All this applies to the models on MarianMT. |
transformers | 9,024 | closed | Use Softmax classifier for run_glue.py example | Hi,
I want to do binary text classification and I'm adapting [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py) script to my task. The current model uses a linear classifier and the predictions are not in the range of [0,1]. Could you please guide me on how I could use softmax classifier instead of linear classifier?
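(For reference, one way to obtain probabilities in [0, 1] without swapping out the classifier head is to apply a softmax on top of the returned logits; a minimal sketch with made-up values, not the run_glue.py code:)
```python
import torch

# logits as returned by a sequence classification head, shape (batch_size, num_labels)
logits = torch.tensor([[1.2, -0.3], [0.1, 0.4]])
probs = torch.softmax(logits, dim=-1)   # each row sums to 1
preds = probs.argmax(dim=-1)
```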
I added the following code after the model is loaded, but I get an error related to the loss function, which I pasted below. Any suggestion on how to fix this?
`model.classifier = torch.nn.Softmax(dim=1)`
```
File "run_glue.py", line 300, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/transformers/trainer.py", line 775, in train
tr_loss += self.training_step(model, inputs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1112, in training_step
loss = self.compute_loss(model, inputs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1136, in compute_loss
outputs = model(**inputs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/transformers/modeling_bert.py", line 1377, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 962, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 2468, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/Users/royaho/PycharmProjects/download_pretrained_model/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 2262, in nll_loss
.format(input.size(0), target.size(0)))
ValueError: Expected input batch_size (768) to match target batch_size (3).
``` | 12-10-2020 07:08:45 | 12-10-2020 07:08:45 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,023 | closed | run_clm.py Issue | MODEL_FOR_CAUSAL_LM_MAPPING is None | When I use latest code of [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py), it raises the following issue:
```
python run_clm.py \
--model_name_or_path gpt2 \
--train_file path_to_train_file \
--validation_file path_to_validation_file \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
```
Traceback (most recent call last):
File "run_clm.py", line 51, in <module>
MODEL_CONFIG_CLASSES = list(MODEL_FOR_CAUSAL_LM_MAPPING.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
I checked and noticed that MODEL_FOR_CAUSAL_LM_MAPPING is None. Any suggestion? | 12-10-2020 07:06:35 | 12-10-2020 07:06:35 | Hi, could you please provide your environment info as asked in the template? `transformers-cli env`<|||||>hey, any solution yet?<|||||>@LysandreJik I am having the same issue and this is my env:
- `transformers` version: 4.1.0.dev0
- Platform: Linux-5.4.0-1029-gcp-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: NO (using TPU)
- Using distributed or parallel set-up in script?: <|||||>I got a similar error trying to run `run_clm.py` on a TPU.
` File "/kaggle/working/transformers/examples/language-modeling/run_clm.py", line 33, in <module>
from transformers import (
ImportError: cannot import name 'MODEL_FOR_CAUSAL_LM_MAPPING' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)`<|||||>@Clickative any solution?
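A quick, illustrative sanity check (a sketch, assuming a transformers 4.x install): `MODEL_FOR_CAUSAL_LM_MAPPING` is only populated when PyTorch is importable, so verifying the environment first points straight at the root cause discussed below.
```python
# Sketch: the causal-LM mapping is only filled in when torch can be imported,
# so check for PyTorch before launching run_clm.py.
import importlib.util

if importlib.util.find_spec("torch") is None:
    raise EnvironmentError("PyTorch is not installed; install it with `pip install torch`.")

from transformers import MODEL_FOR_CAUSAL_LM_MAPPING
print(len(MODEL_FOR_CAUSAL_LM_MAPPING))  # non-zero once PyTorch is available
```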
<|||||>It seems you do not have PyTorch installed? `run_clm.py` is a PyTorch script.<|||||>I encountered this same error and followed the advice from @LysandreJik. I installed PyTorch using `pip3 install torch torchvision` and this resolved the issue.<|||||>On Kaggle TPU, the current docker seems to have old version of transformers and it reads from the conda environment, so new installs are not taken into account (try `pip show`). Even if docker is set to latest. The GPU docker seem to run a version with 0.9.3 of tokenizers and latest transformers need 0.9.4 which is another issue.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,022 | closed | About the input of BERT | Hello, if I want to maintain two different dictionaries, one is BERT's original dictionary and the other is a custom dictionary, and then the input is `[CLS] BERT dictionary corpus [SEP] custom dictionary corpus [SEP]`, how do I handle the input of the model and what part of the source code do I need to change? Thanks! | 12-10-2020 05:38:24 | 12-10-2020 05:38:24 | Hey @BeerTai,
It would be great if you could ask these kinds of questions on the forum: https://discuss.huggingface.co/ . We try to keep GitHub for issues and less for user-specific use cases. Thanks! |
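For reference, the standard two-segment input described in the question above (`[CLS] A [SEP] B [SEP]`) is what the tokenizer already produces for a text pair; a minimal sketch follows (the sentences are placeholders, and handling a second, custom vocabulary would require extra work that is not shown here).
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("text from the BERT vocabulary", "text from the custom corpus")

print(tokenizer.decode(encoded["input_ids"]))  # [CLS] ... [SEP] ... [SEP]
print(encoded["token_type_ids"])               # 0s for segment A, 1s for segment B
```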
transformers | 9,021 | closed | Error tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base") | I get the following error when running the pretrained PhoBERT example:
ValueError Traceback (most recent call last)
<ipython-input-75-d17717702336> in <module>()
3
4 phobert = AutoModel.from_pretrained("vinai/phobert-base")
----> 5 tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
6
7 # INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
323 if tokenizer_class is None:
324 raise ValueError(
--> 325 "Tokenizer class {} does not exist or is not currently imported.".format(tokenizer_class_candidate)
326 )
327 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
ValueError: **Tokenizer class PhobertTokenizerFast does not exist or is not currently imported.** | 12-10-2020 03:43:59 | 12-10-2020 03:43:59 | Hey @trungtruc123,
can you try upgrading your transformers version?
The following code snippet (as stated on the model card: https://github.com/VinAIResearch/PhoBERT) works perfectly fine for me.
```python
import torch
from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-base")
# For transformers v4.x+:
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base", use_fast=False)
```
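A possible follow-up usage, adapted from the same model card (the input sentence must already be word-segmented Vietnamese, and the snippet builds on the `phobert`/`tokenizer` objects defined just above):
```python
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."  # already word-segmented

input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
    features = phobert(input_ids)  # contextual features from the PhoBERT encoder
```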
Version:
- `transformers` version: 4.1.0.dev0
- Platform: Linux-5.4.0-1030-gcp-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201117 (True)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
It should also work with transformers 4.0.0.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,020 | closed | Fix typo in modeling_tf_bart | # What does this PR do?
Fix typo in `modeling_tf_bart`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@patrickvonplaten @sshleifer
| 12-10-2020 03:32:55 | 12-10-2020 03:32:55 | |
transformers | 9,019 | closed | getattr introduces bug when setting booleans with config file | Hi,
In finetune_trainer.py, line 162: https://github.com/huggingface/transformers/blob/5e637e6c690e45d13ebf7296e1ea9dcc188d0f07/examples/seq2seq/finetune_trainer.py#L162
If the user calls this script with a JSON config file and sets one of the attributes to false, then on line 162 the result of `getattr(training_args, p, None)` would be falsy, so the `if` branch would not be executed; this results in a bug when setting booleans. Could you change this line to the following to resolve it:
`if hasattr(training_args, p): `
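A minimal, self-contained illustration of the pitfall (plain Python, not the actual trainer code):
```python
class Args:
    A = False  # boolean explicitly set to False via the config file

args = Args()

# Current pattern: the branch is skipped because the attribute's *value* is falsy,
# even though the attribute exists and should still be propagated.
if getattr(args, "A", None):
    print("update A")  # never reached

# Suggested pattern: only test for the attribute's existence.
if hasattr(args, "A"):
    print("A exists, update it")  # reached
```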
thank you. | 12-09-2020 23:14:16 | 12-09-2020 23:14:16 | Hey @rabeehk,
could you attach a code snippet that we can copy-paste to reproduce the error? Thanks!<|||||>Hi Patrick,
I checked, and in finetune_trainer.py you only consider these parameters, which are all of type float:
` extra_model_params = ("encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout")
`
In this case the issue would not happen, but if one of these parameters were boolean, let's say "A", and the user passes a config file like below to `finetune_trainer.py`
```
//config.json
{
A: false
}
```
and the code then tries to update the value of "A" from the config file as on line 162, this introduces the bug of not setting A.
For now, since the variables you consider are floats, this won't happen, so please feel free to close the bug. It is still safer to change line 162 to `if hasattr(training_args, p):`
thanks.
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,018 | closed | Fix PreTrainedTokenizer.pad when first inputs are empty | # What does this PR do?
Currently, `PreTrainedTokenizer.pad` errors when the first `input_ids` are empty (because it tries to guess the type of the tokens by looking at the first element). This PR slightly changes the behavior to loop until we find a non-empty list.
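An illustrative sketch of the new behavior (not the actual library code): instead of inspecting only the first sequence, scan until a non-empty one is found before inferring the element type.
```python
def first_non_empty_element(batch_input_ids):
    for ids in batch_input_ids:
        if len(ids) > 0:
            return ids[0]
    return None  # every sequence is empty, so there is nothing to inspect

print(first_non_empty_element([[], [101, 2023, 102]]))  # 101
```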
Fixes #8674 (not the initial issue but the one mentioned at the end) | 12-09-2020 22:20:49 | 12-09-2020 22:20:49 | |
transformers | 9,017 | closed | Fix documentation of bbox in LayoutLM | # What does this PR do?
The documentation of the `bbox` argument in the LayoutLM models has some bad copy-paste errors; this PR fixes that.
Fixes #9016
| 12-09-2020 22:02:45 | 12-09-2020 22:02:45 | |
transformers | 9,016 | closed | LayoutLM wrong shape for bbox in docs | ## Environment info
(Colab, 09 December 2020, CPU runtime)
- `transformers` version: 4.0.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
documentation: @sgugger
## Information
LayoutLM documentation indicates that the shape for the bbox input is [B, seq_len]:

However, from the code (https://github.com/huggingface/transformers/blob/master/src/transformers/models/layoutlm/modeling_layoutlm.py#L103) bounding boxes are encoded in the form [tl_col, tl_row, br_col, br_row]. Therefore the accepted shape is [B, seq_len, 4].
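For illustration (not taken from the issue), a dummy `bbox` input with the accepted shape: one `[x0, y0, x1, y1]` box per token, with coordinates scaled to the 0-1000 range LayoutLM expects.
```python
import torch

batch_size, seq_len = 1, 6
bbox = torch.zeros(batch_size, seq_len, 4, dtype=torch.long)
bbox[0, 1] = torch.tensor([637, 773, 693, 782])  # box for the second token, for example
print(bbox.shape)  # torch.Size([1, 6, 4])
```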
## To reproduce
Reproduced on this colab: https://colab.research.google.com/drive/1ZRPKlX8-C41nYq3o6QVS68h1JAj1nQMI?usp=sharing
## Expected behavior
Better explanation in docs
| 12-09-2020 21:46:55 | 12-09-2020 21:46:55 | Sounds right, will fix. |
transformers | 9,015 | closed | MPNet copyright files | # What does this PR do?
MPnet and the copyright PRs were merged around the same time, so MPNet does not have copyright in every files it introduced. This PR fixes that. | 12-09-2020 21:30:01 | 12-09-2020 21:30:01 | |
transformers | 9,014 | closed | Enforce all objects in the main init are documented | # What does this PR do?
Some objects added by contributors or the team are regularly forgotten. This PR changes the script that inspects whether or not models are documented to encompass all objects in the main init (and adds documentation for multiple forgotten objects). | 12-09-2020 20:36:16 | 12-09-2020 20:36:16 | |
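A rough sketch of the idea behind the check described above (illustrative, not the actual repository script): enumerate every public name exposed by the main `__init__` so each one can be matched against the documentation.
```python
import transformers

public_objects = sorted(name for name in dir(transformers) if not name.startswith("_"))
print(len(public_objects), public_objects[:5])
```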
transformers | 9,013 | closed | [model_cards] Migrate cards from this repo to model repos on huggingface.co | Fellow reviewers/contributors, please take a look at the documentation part and let me know your thoughts.
---
#### ⚠️ Still to-do before merging ⚠️
- [x] Post a message on the Forum: https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
- [x] Update the buttons on the model pages
- [x] merge all outstanding model card PRs on the transformers repo
- [x] the actual migration into the hf.co model repos
ETA: I plan on doing this Thursday (Dec 10) or Friday (Dec 11)! | 12-09-2020 20:17:11 | 12-09-2020 20:17:11 | @patrickvonplaten, unless we erase them from the history, it won't make git clone any faster.<|||||>@sgugger it might make a `git checkout` slightly faster. I don't think the model cards were ever an issue in terms of git performance though (8% of number of files in the repo, 7% of total size of the repo)
<|||||>https://github.com/github/git-sizer is an awesome tool by the way
```
$ git-sizer
Processing blobs: 22486
Processing trees: 25585
Processing commits: 7749
Matching commits to trees: 7749
Processing annotated tags: 33
Processing references: 216
| Name | Value | Level of concern |
| ---------------------------- | --------- | ------------------------------ |
| Biggest checkouts | | |
| * Maximum path length [1] | 135 B | * |
[1] 030c0d2cdc80cf8dcf23a6ee55c20e979548a181 (refs/heads/master^{tree})
```
vs on datasets (cc @lhoestq):
```
$ git-sizer
Processing blobs: 9449
Processing trees: 12905
Processing commits: 1245
Matching commits to trees: 1245
Processing annotated tags: 16
Processing references: 110
| Name | Value | Level of concern |
| ---------------------------- | --------- | ------------------------------ |
| Biggest objects | | |
| * Blobs | | |
| * Maximum size [1] | 17.8 MiB | * |
| | | |
| Biggest checkouts | | |
| * Number of directories [2] | 4.89 k | ** |
| * Maximum path depth [3] | 16 | * |
| * Maximum path length [3] | 231 B | ** |
[1] 3fe8eaab7a337ea2a8d06daa5721fc5935ba3098 (75cafce7677d6f66c49c34e43cfbc425e1f50d30:datasets/anli/dummy/plain_text/0.1.0/dummy_data.zip)
[2] 681c565a3d8f63535823a1d33438c2b76ba3c706 (refs/heads/master^{tree})
[3] c94ea70f34be4ed2723fc1c647340792ba03879c (7cd045237bb77f3b32877d31aae87789ec57ffab^{tree})
```<|||||>**Update**: I deployed the new buttons/call to actions on the model pages.
I also created a new Forum topic (@sgugger @Pierrci @patrickvonplaten) titled "Model cards" where users can suggest edits or creations of existing model cards, in case they don't have write access to the corresponding model repo:
https://discuss.huggingface.co/t/about-the-model-cards-category/2777<|||||>All out-standing model card PRs were merged. No more model cards PRs expected!
Will migrate existing ones now. |
transformers | 9,012 | closed | "run_mlm_wwm.py", line 284 AttributeError: 'DataTrainingArguments' object has no attribute 'valid_ref_file' | Hi! There is a tiny typo in the code "transformers/examples/language-modeling/run_mlm_wwm.py" at line 284. It should be:
`if data_args.validation_ref_file is not None:` since at line 103 in `DataTrainingArguments` it is defined as `validation_ref_file:` | 12-09-2020 19:06:58 | 12-09-2020 19:06:58 | Hi! Do you want to submit a PR?<|||||>Sure, I'll be happy to do it. I need permission... `remote: Permission to huggingface/transformers.git denied to NatLun137.`<|||||>Hmmm I think you just tried to push on `huggingface/transformers`? You should fork the repo, apply your changes there and then open a PR here. I see you created your fork already, how did you open a PR then? Did you use the GitHub UI? |
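A tiny illustration of the mismatch reported above (not the actual script; the dataclass here is stripped down): the field is named `validation_ref_file`, so referring to `valid_ref_file` raises the reported `AttributeError`.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataTrainingArguments:
    validation_ref_file: Optional[str] = None  # the real class has more fields

data_args = DataTrainingArguments()
print(hasattr(data_args, "validation_ref_file"))  # True
print(hasattr(data_args, "valid_ref_file"))       # False -> AttributeError when accessed
```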
transformers | 9,011 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-09-2020 17:43:33 | 12-09-2020 17:43:33 | |
transformers | 9,010 | closed | Reorganize examples | # What does this PR do?
This PR reorganizes the examples folder by splitting it in two:
- `examples` that stay in this folder are the base example scripts maintained with the state of the library, expected to work on master. We accept PRs on them and will try our best to fix issues.
- `research-projects` are (often) more complex examples that we don't really maintain. They work on a specific version of the library (sometimes even a specific commit). We don't accept PRs on them except minor typo fixes *or* PRs from the original authors that want to bring an update to those scripts. Issues opened for those are probably less efficient than directly contacting the authors.
Each example/research project lives in a folder of its own, with its particular requirements in a `requirements.txt` file (instead of a global requirements file as before).
The seq2seq subfolder is less organized than the others, so I did my best to split its research project part from its example part. I made sure all tests are passing and duplicated the needed files, but @stas00 and @patil-suraj please tell me if you see something obvious that I missed. We will leave the research-project part as is and clean a bit more the part in the examples in other PRs.
| 12-09-2020 16:09:27 | 12-09-2020 16:09:27 | another bit - `examples/seq2seq/test_data` is also used by `research_projects/seq2seq` - perhaps symlink? <|||||>Maybe a hard copy in that case, just in case the data changes/moves on the examples side.<|||||>> Maybe a hard copy in that case, just in case the data changes/moves on the examples side.
Then need to check which specific sub-dirs are needed - if I'm not mistaken it's only `test_data/wmt_en_ro/`
I'd still use a symlink to avoid git repo bloat and this can always be easily fixed if there is a divergence down the road.<|||||>Does this mean we no longer explicitly support pytorch_lightning? <|||||>> Does this mean we no longer explicitly support pytorch_lightning?
I would rather like it if we ditch the custom transformer trainer and just use lightning. |
transformers | 9,009 | closed | fixes #8968 | **This is the same PR as: [link](https://github.com/huggingface/transformers/pull/8991#issue-534604029). I was asked to create a new one due to a merge mistake.**
# What does this PR do? (Text of the previous PR)
One of the 3.X releases introduced output objects that replaced the previously returned tuples. This PR updates the transformers notebook to reflect that update.
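A minimal before/after illustration of that change (a sketch, assuming a transformers 4.x install where `return_dict` defaults to `True`; the model name is just an example):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]))

print(outputs.loss, outputs.logits.shape)  # new style: attribute access on the output object
print(outputs[0])                          # old-style indexing still works
```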
Fixes #8968
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten
@LysandreJik | 12-09-2020 15:14:19 | 12-09-2020 15:14:19 | |
transformers | 9,008 | closed | [Docs] Fix some typos for group beam search | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-09-2020 14:11:25 | 12-09-2020 14:11:25 |