Dataset columns (type and value ranges):

| column | type | values / lengths |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
6,502
closed
Truncated last sentence after BART fine-tuning on custom dataset.
- `transformers` version: 3.0.2 - Platform: - Python version: 3.6 - PyTorch version (GPU?): 1.4 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Single GPU ### Who can help @sshleifer ## Information Model I am using (Bert, XLNet ...): BART The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Fine-tuning BART-large-xsum on my custom dataset with modified config values: min_length=590, max_length=620. 2. Doing inference on the trained model. 3. The sentences (specifically, the last sentence) that BART produces are oftentimes (~90% of the cases) incomplete. ## Expected behavior I would expect to get a complete and neat output, without truncated sentences. I should mention that when I do inference on the raw bart-large-cnn (or -xsum) checkpoint, I do not see this problem and all the outputs are complete. It seems to me that BART fine-tuned on a custom dataset is not able to emit the <EOS> token. I also checked this thread: https://github.com/huggingface/transformers/issues/5674 which describes the same problem, but I couldn't find the answer.
08-15-2020 16:18:02
08-15-2020 16:18:02
What was your training command? <|||||>I used the `finetune_tiny_bart.sh` script in the seq2seq examples. @sshleifer If that helps to figure out the source of the problem, as I know the position_embeddings of bart-large-cnn model is 1026 (with addition of SOS, and EOS tokens). Since my task is long summarization, I changed it to 2050, and let the model learn the whole on my custom dataset; Additionally, as I mentioned earlier, I have also increased the `min_length` and `max_length` in the BART config class. But the problem still remains.<|||||>I've never really trained with such large length parameters, but we have been seeing similar problems for many models. I think these are the lines causing the issue, will try to get a fix soon. https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L144<|||||>I have been facing this issue in the new versions of finetune.sh as well... even for T5... eg. In only the first quarter century after the breakup of the Soviet Union, Azerbaijan has impressed the Caucasus region and the world with its progress. Although it still must work diligently to enhance citizen inputs into its governance structures, continue to expand its productive capacity beyond the energy sectors, and distribute its new wealth equitably among its entire population, the country has faced the complex challenges of independence with a mostly steady hand. Much has been achieved in rediscovery of a proud national identity, new resource abundance, sound transportation infrastructure, and a thriving capital city that is now a vibrant modern regional hub. Among the most important next steps for policy priority over the coming decades will be in sustaining the progress already made with continuing "greener" approaches to development, and increasing diversification of the economy beyond just the oil and natural gas sectors. Initiatives already in place have started along this road, but will need to be strengthened over<|||||>I would love to replicate (need data) or have one of you test on the branch with my proposed fix: https://github.com/huggingface/transformers/pull/6654 ```bash git fetch git checkout batch-parity-cleaner ```<|||||>That branch is broken right now, I will comment when it's fixed.<|||||>Should work now!<|||||>Hey, Was trying out your branch, the earlier version atleast ran fine. After pulling the latest like you mentioned...getting back this: f"Mbart is using sequence lengths {self.max_source_length}, {self.max_target_length}. " Validation sanity check: 0it [00:00, ?it/s]Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. I assume this has to do with the translation code? Any suggestions how to get around this?<|||||>Fixed on the branch, sorry about that!<|||||>So...should I try now?<|||||>Yah!<|||||>Still the same :(<|||||>OK. Which dataset are you using? 
I can't really debug without being able to see what a batch looks like when it goes into the model.<|||||>I am using a custom dataset but you can try with BillSum as well and you should be able to reproduce the issue. And btw here I was talking about this particular issue: > Hey, > Was trying out your branch, the earlier version atleast ran fine. > After pulling the latest like you mentioned...getting back this: > f"Mbart is using sequence lengths {self.max_source_length}, {self.max_target_length}. " > Validation sanity check: 0it [00:00, ?it/s]Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. > Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. > Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. > Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. > Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. > Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. > Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. > Keyword arguments {'src_lang': None, 'tgt_lang': None} not recognized. > > I assume this has to do with the translation code? Any suggestions how to get around this? I am not able to start training itself with your branch. Let me know if you need anymore info.<|||||>Yes a training command I can paste into my terminal to run on billsum and reproduce your failure.<|||||>I just tried this and got the same: `./finetune.sh \ --data_dir "../../../BillSum" --train_batch_size=2 --eval_batch_size=8 --output_dir="/content/models/t5_narrative_512/" --num_train_epochs 2 --model_name_or_path="t5-base" --n_val 1000 --val_check_interval 0.5 --max_source_length=512 --max_target_length=150 --val_max_target_length=150 --test_max_target_length=150 `<|||||>@patil-suraj we think these should be fixed both for t5 and bart, right?<|||||>Yes, AFAIK these issues are fixed now. @amanpreet692 could you try this with the latest master branch ?<|||||>I used distilbart: `tokenizer_dbart = BartTokenizer.from_pretrained('sshleifer/distilbart-cnn-6-6')` `model_dbart = BartForConditionalGeneration.from_pretrained('sshleifer/distilbart-cnn-6-6')` The last sentence of the summary obtained from the model is sometimes truncated. Is this expected? @sshleifer <|||||>@patil-suraj Sorry I got back to this only now, I checked out the latest from repo today and ran finetune.sh for finetuning and could still see this issue, eg. Research on inventory risk management based on abc analysis. The traditional ABC analysis is a kind of management method from the ABC curve. ABC curve is also called Pareto (Pareto) curve. The basic idea is, " vital few and the majority of the general ". In all the inventory, the cumulative percentage of species ranged from 5% to 15% and the average amount of funds occupied the cumulative percentages of 60% ~ 80% items identified as A class; the cumulative proportion of funds is 20% ~ 30% of the goods, identified as B class; and the rest as class C. The different objects use different management methods and means. In the China's enterprises, The command I used is the same as above, only I removed the fp16 parameter from the script.<|||||>Hi @amanpreet692, Could you post the arguments you are passing to `generate` ? for ex. 
`num_beams, max_length, length_penalty` etc <|||||>Hey, I haven't tinkered with the arguments to generate, so I guess they are the same as in the config for distilbart: "early_stopping": true, "length_penalty": 2.0, "max_length": 142, "min_length": 56, "no_repeat_ngram_size": 3, "num_beams": 4 Let me know if you need anything else. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
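For context, a minimal sketch (not the fix discussed in the thread) of how one can check whether generation stops because the model emits the EOS token or because it simply runs into `max_length`. The `facebook/bart-large-cnn` checkpoint, the input text, and the length values are placeholders for the fine-tuned model and data:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

text = "Some long source document to summarize."
input_ids = tokenizer(text, return_tensors="pt", truncation=True)["input_ids"]

generated = model.generate(
    input_ids,
    num_beams=4,
    min_length=56,   # if min_length/max_length sit close to the typical target length,
    max_length=142,  # outputs can be cut off before the model gets to emit </s>
    early_stopping=True,
)

summary = tokenizer.decode(generated[0], skip_special_tokens=True)
stopped_at_eos = generated[0][-1].item() == tokenizer.eos_token_id
print(summary)
print("stopped at EOS:", stopped_at_eos)
```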
transformers
6,501
closed
Longformer slower than BERT
When I set max_length = 2048, I found that Longformer is slower than common BERT. Why is that?
08-15-2020 14:59:35
08-15-2020 14:59:35
Well, BERT can only process up to 512 tokens, and both models are autoencoding models, which are usually not used for causal language generation (autoregressive models are used for that). You can check out the difference here: https://huggingface.co/transformers/model_summary.html.
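For illustration, a rough timing sketch of the comparison being asked about, with each model run at a typical sequence length (the checkpoints, lengths, and single-forward-pass setup are assumptions). Longformer's windowed attention adds per-token overhead, so at moderate lengths it can be slower than BERT even though it scales far better on long inputs:

```python
import time
import torch
from transformers import BertModel, LongformerModel

bert = BertModel.from_pretrained("bert-base-uncased").eval()
longformer = LongformerModel.from_pretrained("allenai/longformer-base-4096").eval()

def timed_forward(model, seq_len):
    # random token ids are enough for a shape/speed check
    ids = torch.randint(0, 1000, (1, seq_len))
    with torch.no_grad():
        start = time.perf_counter()
        model(ids)
    return time.perf_counter() - start

print("BERT @ 512 tokens:       ", timed_forward(bert, 512))
print("Longformer @ 2048 tokens:", timed_forward(longformer, 2048))
```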
transformers
6,500
closed
Always got RuntimeError while converting ALBERT model to TorchScript (.pt file)
I am trying to convert ALBERT to a `.pt` file from the original albert model from transformers.(I am not very familiar with TorchScript so I want the `.pt` to be clean) The code I ran (following the tutorial from [https://huggingface.co/transformers/torchscript.html](https://huggingface.co/transformers/torchscript.html)): ``` from transformers import AlbertModel, AlbertTokenizer, AlbertConfig import torch enc = AlbertTokenizer.from_pretrained("albert-xxlarge-v2") text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = enc.tokenize(text) masked_index = 8 tokenized_text[masked_index] = '[MASK]' indexed_tokens = enc.convert_tokens_to_ids(tokenized_text) segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) dummy_input = [tokens_tensor, segments_tensors] config = AlbertConfig(vocab_size_or_config_json_file=73000, hidden_size=4096, num_hidden_layers=12, num_attention_heads=64, intermediate_size=16384, torchscript=True) model = AlbertModel(config) model.eval() model = AlbertModel.from_pretrained("albert-xxlarge-v2", torchscript=True) traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors]) torch.jit.save(traced_model, "albert-xxlarge-v2.pt") ``` But the second last line threw out a error: `RuntimeError: The size of tensor a (15) must match the size of tensor b (14) at non-singleton dimension 3` From the tutorial: ``` The trace is created relatively to the inputs’ dimensions. It is therefore constrained by the dimensions of the dummy input, and will not work for any other sequence length or batch size. When trying with a different size, an error such as: The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2 ``` So I tried changing `vocab_size_or_config_json_file` to a larger value, but still got the same error. Am I doing something wrong? Thanks for any advice.
08-15-2020 12:17:53
08-15-2020 12:17:53
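A hedged sketch of one likely cause: the hard-coded list of 14 segment ids copied from the BERT tutorial does not match the 15 pieces that ALBERT's SentencePiece tokenizer produces for that sentence. Deriving the second dummy tensor from the tokenized length avoids the mismatch; whether the traced module then generalizes is still subject to the fixed-shape caveat quoted above:

```python
import torch
from transformers import AlbertModel, AlbertTokenizer

enc = AlbertTokenizer.from_pretrained("albert-xxlarge-v2")
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)

# build the second dummy input with the same length as the token tensor instead of
# hard-coding 14 entries copied from the BERT example
tokens_tensor = torch.tensor([indexed_tokens])
dummy_mask = torch.ones_like(tokens_tensor)

model = AlbertModel.from_pretrained("albert-xxlarge-v2", torchscript=True)
model.eval()
traced_model = torch.jit.trace(model, [tokens_tensor, dummy_mask])
torch.jit.save(traced_model, "albert-xxlarge-v2.pt")
```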
transformers
6,499
closed
Add examples/bert-loses-patience who can help
08-15-2020 11:54:38
08-15-2020 11:54:38
transformers
6,498
closed
Could not output hidden states using TFBertModel
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic (on Google Colab) - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik @jplu ## Information 1. When using TFBertModel, I tried to output the hidden states. Firstly, I tried to set config.output_hidden_states=True, but it gave the "tuple index out of range" error. The code is: from transformers import TFBertModel, BertConfig import tensorflow as tf def single_bert(): id = Input((128,), dtype=tf.int32) mask = Input((128,), dtype=tf.int32) atn = Input((128,), dtype=tf.int32) bert_config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True) bert_model = TFBertModel.from_pretrained('bert-base-uncased', config = bert_config) embedding = bert_model(id, attention_mask=mask, token_type_ids=atn)[2] model = tf.keras.Model(inputs=[id, mask, atn], outputs=embedding) return model model = single_bert() model.summary() 2. I have also tried to pass "output_hidden_states=True", but it still gave "tuple index out of range" error: from transformers import TFBertModel, BertConfig import tensorflow as tf def single_bert(): id = Input((128,), dtype=tf.int32) mask = Input((128,), dtype=tf.int32) atn = Input((128,), dtype=tf.int32) bert_model = TFBertModel.from_pretrained('bert-base-uncased') embedding = bert_model(id, attention_mask=mask, token_type_ids=atn, output_hidden_states=True)[2] model = tf.keras.Model(inputs=[id, mask, atn], outputs=embedding) return model model = single_bert() model.summary() ## To reproduce Steps to reproduce the behavior: 1. 2. ## Expected behavior I need to add some custom layers on top of the output hidden states and fine-tune the whole model, so firstly, I have to get the hidden states of Bert.
08-15-2020 09:19:01
08-15-2020 09:19:01
Sorry for the format of the two codes. I have modified them and posted them here: 1. ```python from transformers import TFBertModel, BertConfig import tensorflow as tf def single_bert(): id = Input((128,), dtype=tf.int32) mask = Input((128,), dtype=tf.int32) atn = Input((128,), dtype=tf.int32) bert_config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True) bert_model = TFBertModel.from_pretrained('bert-base-uncased', config = bert_config) embedding = bert_model(id, attention_mask=mask, token_type_ids=atn)[2] model = tf.keras.Model(inputs=[id, mask, atn], outputs=embedding) return model model = single_bert() model.summary() ``` 2. ```python from transformers import TFBertModel, BertConfig import tensorflow as tf def single_bert(): id = Input((128,), dtype=tf.int32) mask = Input((128,), dtype=tf.int32) atn = Input((128,), dtype=tf.int32) bert_model = TFBertModel.from_pretrained('bert-base-uncased') embedding = bert_model(id, attention_mask=mask, token_type_ids=atn, output_hidden_states=True)[2] model = tf.keras.Model(inputs=[id, mask, atn], outputs=embedding) return model model = single_bert() model.summary() ```<|||||>I have run your code with minor edits: embedding = bert_model(id, attention_mask=mask, token_type_ids=atn, output_hidden_states=True) For the variable embedding, it only outputs 2 elements in a tuple (<tf.Tensor 'tf_bert_model/Identity:0' shape=(None, 128, 768) dtype=float32>, <tf.Tensor 'tf_bert_model/Identity_1:0' shape=(None, 768) dtype=float32>) So I think you would want to extract the last embedding layer with index -1 or just 1 instead of index 2 (non-existent). I have seen other people with index 2 (such as : [https://github.com/huggingface/transformers/issues/4048](url) ), but of course their tuple has length more than 2, so you should investigate more in your code. I even tried to have your code structured like theirs ` embedding = bert_model(input = [id, atn])` But it gives out the same output; maybe it's just because different pre-trained models give out tuples of different lengths, so try to investigate more <|||||>Hello! There are three ways to use `TFBertModel`: with a list, with a dict, or with positional arguments: 1) With a list: you have to explicitly give a list of size 10 corresponding to `[input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training]` 2) With a dict. This is the recommended way, as you can specify only the keys you need. 3) With positional arguments (as proposed by @vuhluu)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
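For reference, a minimal sketch of the config-plus-dict style described above (a sketch only: the exact output indices depend on the transformers version and on whether attentions are also enabled, so it is not guaranteed to resolve the original 3.0.2 error):

```python
import tensorflow as tf
from transformers import BertConfig, TFBertModel

ids = tf.keras.Input((128,), dtype=tf.int32, name="input_ids")
mask = tf.keras.Input((128,), dtype=tf.int32, name="attention_mask")
types = tf.keras.Input((128,), dtype=tf.int32, name="token_type_ids")

config = BertConfig.from_pretrained("bert-base-uncased", output_hidden_states=True)
bert = TFBertModel.from_pretrained("bert-base-uncased", config=config)

# with output_hidden_states enabled, the hidden states are the last element of the
# returned tuple: 13 tensors (embedding output + one per layer)
outputs = bert({"input_ids": ids, "attention_mask": mask, "token_type_ids": types})
hidden_states = outputs[-1]

model = tf.keras.Model(inputs=[ids, mask, types], outputs=hidden_states[-1])
model.summary()
```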
transformers
6,497
open
BERT and SpanBERT for Coreference Resolution
# 🌟 New model addition ## Model description This is a recent approach for co-reference resolution using BERT, implemented from the papers [BERT for Coreference Resolution: Baselines and Analysis](https://arxiv.org/abs/1908.09091) and [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529), which is the current state of the art on OntoNotes (79.6 F1). It uses TensorFlow 1.14.0. The reason why this is interesting is that it achieves strong improvements on the OntoNotes (+3.9 F1) and GAP (+11.5 F1) benchmarks. Also, I think it would be a nice addition to the Hugging Face library, as it currently has only neuralcoref as a coreference resolution module. ## Open source status * [x] the model implementation is available: (https://github.com/mandarjoshi90/coref) * [x] the model weights are available: (https://github.com/facebookresearch/SpanBERT) * [x] who are the authors: (@mandarjoshi90, @jkkummerfeld, @wenyudu)
08-15-2020 06:33:23
08-15-2020 06:33:23
Commenting for visibility - is this available now? I can't seem to find it, and I would love to use this for a question-answering project I'm working on! <|||||>I'd also like to see this model incorporated into the core list of supported models. I did note that you can download it from the community models here though: https://huggingface.co/SpanBERT/spanbert-base-cased<|||||>Are there any translations of the above repository (https://github.com/mandarjoshi90/coref) into the awesome HuggingFace API? That would be very cool to test :D !<|||||>I would like to work on this... but will need some guidance
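A hedged loading sketch for the community checkpoint mentioned above. This only gives the pretrained SpanBERT encoder, not the coreference heads from mandarjoshi90/coref, and it assumes the checkpoint shares BERT's cased WordPiece vocabulary:

```python
from transformers import AutoModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")  # assumed shared vocab
model = AutoModel.from_pretrained("SpanBERT/spanbert-base-cased")

inputs = tokenizer("John told Mary that he would call her later.", return_tensors="pt")
outputs = model(**inputs)
print(outputs[0].shape)  # (batch_size, sequence_length, hidden_size) token representations
```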
transformers
6,496
closed
Add Model Card for electra-base-german-uncased
This adds the model card for electra-base-german-uncased. Could you please also have a look into #6495 because something went wrong with the upload. Thanks Philip
08-15-2020 05:26:16
08-15-2020 05:26:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=h1) Report > Merging [#6496](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **increase** coverage by `0.06%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6496/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6496 +/- ## ========================================== + Coverage 80.37% 80.44% +0.06% ========================================== Files 156 156 Lines 28058 28058 ========================================== + Hits 22552 22571 +19 + Misses 5506 5487 -19 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <ø> (ø)` | | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <ø> (+0.25%)` | :arrow_up: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6496/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (+29.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=footer). Last update [24107c2...a9ce8ff](https://codecov.io/gh/huggingface/transformers/pull/6496?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,495
closed
Model Upload does not show up `german-nlp-group/electra-base-german-uncased`
Hi, yesterday I uploaded a new model to `german-nlp-group/electra-base-german-uncased`: ```bash $ transformers-cli s3 ls --organization german-nlp-group Neither PyTorch nor TensorFlow >= 2.0 have been found.Models won't be available and only tokenizers, configurationand file/data utilities can be used. Filename LastModified ETag Size ------------------------------------------------- ------------------------ ---------------------------------- --------- electra-base-german-uncased/config.json 2020-08-14T17:13:01.000Z "10c75064301189f269b4898d4265cd61" 467 electra-base-german-uncased/pytorch_model.bin 2020-08-14T17:13:37.000Z "a621e1cb07af0a08aaa643af52f9f189" 444881731 electra-base-german-uncased/tokenizer_config.json 2020-08-14T17:43:33.000Z "7f6d7cb22bc6342b9c942da874754264" 86 electra-base-german-uncased/vocab.txt 2020-08-14T17:43:31.000Z "e9fa1e40c556fc02c62ebaa214a52dc4" 275501 ``` But it does not show up. See here: https://huggingface.co/german-nlp-group What happened here? Could you fix that? Thanks Philip
08-15-2020 05:11:55
08-15-2020 05:11:55
The files are there: https://cdn.huggingface.co/german-nlp-group/electra-base-german-uncased/tokenizer_config.json But it simply does not show up...<|||||>Maybe related to #6478<|||||>Having the same problem here. Uploaded a new model (`salti/xlm-roberta-large-arabic_qa`) earlier this morning and it doesn't show up in the model hub, although I can download it and use it with the `from_pretrained` method.<|||||>@julien-c @Pierrci <|||||>Maybe some sync service just needs a restart? :-)<|||||>Not a sync service, but an (uncaught) user error :) Fixed: https://huggingface.co/german-nlp-group/electra-base-german-uncased#german-electra-uncased
transformers
6,494
closed
[testing] a new TestCasePlus subclass + get_auto_remove_tmp_dir()
I present to you a new `TestCasePlus` class, which is an extension of `testutil.TestCase`. Currently it only has one extra feature, but I'm sure there will be more in the future, hence the more generic name. So the intention was to provide: - an easy way to create unique temp dirs in test modules and get them automatically removed at the end of the test, regardless of whether a test succeeded or not. - an easy way not to remove the temp dir for debug purposes - provide a hardcoded temp dir for debug purposes (and secure so that `rm -r /something` won't happen) - optionally, clean up the temp dir right away if a hardcoded path is provided Some ideas were discussed here: https://github.com/huggingface/transformers/issues/6471 So this PR implements this feature and uses it in 2 test modules that currently don't have a complete solution, and removing much much code on the way. Usage: Feature 1: Flexible auto-removable temp dirs which are guaranteed to get removed at the end of test. In all the following scenarios the temp dir will be auto-removed at the end of test, unless `after=False`. 1. create a unique temp dir and delete it at the end, `tmp_dir` will contain the path to the created temp dir ``` def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` 2. create a temp dir of my choice and delete it at the end - useful for debug when you want to monitor a specific directory ``` def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/run/test") ``` or just: ``` tmp_dir = self.get_auto_remove_tmp_dir("./tmp/run/test") ``` 3. create a temp dir of my choice and do not delete it at the end - useful for when you want to look at the temp results ``` def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/run/test", after=False) ``` or just: ``` tmp_dir = self.get_auto_remove_tmp_dir("./tmp/run/test", False) ``` 4. create a temp dir of my choice and ensure to delete it right away - useful for when you disabled deletion in the previous test run and want to make sure the that tmp dir is empty before the new test is run ``` def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/run/test", before=True) ``` Note 1: In order to run the equivalent of `rm -r` safely, only subdirs of the project repository checkout are allowed if an explicit `tmp_dir` is used, so that by mistake no `/tmp` or similar important part of the filesystem will get nuked. i.e. please always pass paths that start with `./` Note 2: Each test can register multiple temp dirs and they all will get auto-removed, unless requested otherwise. So you can see from the 4 main possible scenarios, during debug one needs to tweak only one line of code. There is only one small remaining deficiency: Since the temp dir is pre-created, the tests will not be able to test things like `--output_dir` creation in examples - i.e. the dir will already be there. So if needed, the code can be extended to have a flag to not create the dir, but only register it for deletion. Though it'd be tricky for when `tmp_dir` is not passed explicitly and we rely on `tempfile`- I guess it can create and immediately delete the temp dir and save and reuse its path - I don't know whether there might be a race condition here. But chances are that this is not really needed. Thank you for reading. Ideas and suggestions for improvements are welcome. @JetRunner, @LysandreJik, @sshleifer, @sgugger
08-15-2020 04:47:51
08-15-2020 04:47:51
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=h1) Report > Merging [#6494](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **increase** coverage by `0.17%`. > The diff coverage is `31.81%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6494/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6494 +/- ## ========================================== + Coverage 80.38% 80.55% +0.17% ========================================== Files 156 156 Lines 28058 28079 +21 ========================================== + Hits 22554 22619 +65 + Misses 5504 5460 -44 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `48.80% <33.33%> (-3.13%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6494/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (+29.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=footer). Last update [24107c2...f695e5f](https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
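For readers skimming the PR description, a rough sketch (not the actual PR code) of how such a helper can be built on `unittest`'s `addCleanup`, so the directory is removed whether the test passes or fails; the path-safety checks of the real helper are omitted here:

```python
import os
import shutil
import tempfile
import unittest

class TestCasePlus(unittest.TestCase):
    def get_auto_remove_tmp_dir(self, tmp_dir=None, before=False, after=True):
        if tmp_dir is not None:
            path = os.path.abspath(tmp_dir)
            if before and os.path.isdir(path):
                shutil.rmtree(path)          # start from a clean dir when requested
            os.makedirs(path, exist_ok=True)
        else:
            path = tempfile.mkdtemp()        # unique dir when no explicit path is given
        if after:
            # runs after the test, whether it succeeded or raised
            self.addCleanup(shutil.rmtree, path, ignore_errors=True)
        return path
```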
transformers
6,493
closed
Fixes paths with spaces in seq2seq example
Fixes https://github.com/huggingface/transformers/issues/6477
08-14-2020 19:14:13
08-14-2020 19:14:13
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=h1) Report > Merging [#6493](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **increase** coverage by `0.21%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6493/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6493 +/- ## ========================================== + Coverage 80.38% 80.59% +0.21% ========================================== Files 156 156 Lines 28058 28058 ========================================== + Hits 22554 22613 +59 + Misses 5504 5445 -59 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6493/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (+29.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=footer). Last update [24107c2...057a225](https://codecov.io/gh/huggingface/transformers/pull/6493?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks!
transformers
6,492
closed
Fixed label datatype for STS-B
The STS Benchmark has decimal labels instead of integers. But inside the `glue_convert_examples_to_features` function, when you're using TensorFlow datasets it is casting the label as an integer in the returned TF data generator. With this simple edit, the function changes its casting datatype according to the selected task.
08-14-2020 18:15:16
08-14-2020 18:15:16
Also, the CI wants you to run `make style` :)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=h1) Report > Merging [#6492](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d9cbc0350deaa7e146a8c8dbb6ad4dc9bd6afc4f&el=desc) will **decrease** coverage by `1.18%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6492/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6492 +/- ## ========================================== - Coverage 80.37% 79.19% -1.19% ========================================== Files 156 156 Lines 28058 28059 +1 ========================================== - Hits 22552 22221 -331 - Misses 5506 5838 +332 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <ø> (-0.69%)` | :arrow_down: | | [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `48.91% <0.00%> (-0.18%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <ø> (-3.26%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <0.00%> (ø)` | | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: | | [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6492/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=footer). Last update [24107c2...29e9a98](https://codecov.io/gh/huggingface/transformers/pull/6492?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
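For illustration, a hedged sketch of the idea behind this PR: choose the tf.data label dtype from the GLUE task's output mode instead of hard-coding an integer, since STS-B is a regression task with float labels (the exact integration point inside `glue_convert_examples_to_features` may differ):

```python
import tensorflow as tf
from transformers import glue_output_modes

def label_dtype(task: str) -> tf.DType:
    # STS-B is the one GLUE task whose output mode is "regression"
    return tf.float32 if glue_output_modes[task] == "regression" else tf.int64

print(label_dtype("sts-b"))  # <dtype: 'float32'>
print(label_dtype("mnli"))   # <dtype: 'int64'>
```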
transformers
6,491
closed
Whole Word Masking Implementation
# 🚀 Feature request Currently, training models like RoBERTa from scratch does not support whole word masking (e.g., in the language modeling examples). Only pre-trained models are available. Is it possible to include whole word masking in the input layers? ## Motivation Whole word masking leads to performance boosts. So, adding this feature would be useful if someone wants to train models from scratch.
08-14-2020 17:54:35
08-14-2020 17:54:35
Would be a great improvement :+1: Here's, btw., the commit that introduced WWM in BERT: https://github.com/google-research/bert/commit/0fce551b55caabcfba52c61e18f34b541aef186a<|||||>BERT uses a WordPiece tokenizer, whereas RoBERTa uses a byte-level BPE tokenizer. I think the implementations may be slightly different, if not starkly different (due to different start-of-word indicators).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
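A minimal sketch (not a reference implementation) of the grouping step for whole word masking with a WordPiece tokenizer: sub-tokens starting with "##" are tied to the preceding word, so a word is either fully masked or not masked at all. For byte-level BPE tokenizers such as RoBERTa's, the word-boundary marker differs, as noted in the comment above:

```python
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokens = tokenizer.tokenize("Huggingface transformers implements tokenization")

# group token indices that belong to the same whole word
word_groups = []
for i, tok in enumerate(tokens):
    if tok.startswith("##") and word_groups:
        word_groups[-1].append(i)
    else:
        word_groups.append([i])

random.seed(0)
masked = list(tokens)
for group in word_groups:
    if random.random() < 0.15:  # mask roughly 15% of whole words, BERT-style
        for i in group:
            masked[i] = tokenizer.mask_token
print(masked)
```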
transformers
6,490
closed
[Doc] add more MBart and other doc
This PR 1. adds an example for MBart 2. adds MBart to the pretrained models list and the readme (Pegasus was missing from the readme, so that was added as well). @sshleifer , @sgugger
08-14-2020 16:51:15
08-14-2020 16:51:15
@sgugger do you think it would be a good idea to add more fine-tuning info for MBart, since it requires input processed in a different way than other models as it is multilingual model ?<|||||>@sshleifer ,@sgugger added DPR in readme. <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=h1) Report > Merging [#6490](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/895ed8f4511ce9f2d1475e7f11c776dab87461d1&el=desc) will **decrease** coverage by `0.46%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6490/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6490 +/- ## ========================================== - Coverage 80.38% 79.91% -0.47% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22554 22423 -131 - Misses 5504 5635 +131 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.51%)` | :arrow_down: | | ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6490/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=footer). Last update [895ed8f...e1c522b](https://codecov.io/gh/huggingface/transformers/pull/6490?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Great! Thanks for the PR.
transformers
6,489
closed
GitHub Template: Tag @stefan-it for token classification related bug reports
Hi, this PR adds myself as person to tag for all token classification related bug reports :)
08-14-2020 16:48:31
08-14-2020 16:48:31
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=h1) Report > Merging [#6489](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fe61c05b85f98846779bb490a747875e7d54ec2a&el=desc) will **decrease** coverage by `1.47%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6489/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6489 +/- ## ========================================== - Coverage 80.59% 79.11% -1.48% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22612 22198 -414 - Misses 5446 5860 +414 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `48.80% <0.00%> (-46.43%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.24% <0.00%> (-3.53%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.99% <0.00%> (-1.31%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6489/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=footer). Last update [fe61c05...26634da](https://codecov.io/gh/huggingface/transformers/pull/6489?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@julien-c 🤔
transformers
6,488
closed
Fix TPU Convergence bug introduced by PR#6151
Currently, with the bug introduced, we're taking two optimizer steps per batch: one global one, where `xm.optimizer_step` injects a CRS (cross-replica sum) between all cores in training, and one local one without it. This has been affecting training accuracy (for example, XLNet GLUE on MNLI is not converging).
08-14-2020 16:27:04
08-14-2020 16:27:04
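A hedged sketch of the intended per-batch flow on TPU: exactly one optimizer update, issued through `xm.optimizer_step` so gradients are all-reduced across cores first; a second plain `optimizer.step()` afterwards would apply an extra, un-synchronized update. The sketch assumes the model returns the loss as its first output when labels are included in the batch, as transformers models do:

```python
import torch_xla.core.xla_model as xm

def train_step(model, batch, optimizer):
    optimizer.zero_grad()
    loss = model(**batch)[0]          # loss is the first output when labels are passed
    loss.backward()
    xm.optimizer_step(optimizer)      # cross-replica gradient sum + the only step call
    return loss.detach()
```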
transformers
6,487
closed
about encoder and decoder input when using seq2seq model
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Hello, I'm trying to use a seq2seq model (such as BART and EncoderDecoderModel (bert2bert)), and I'm a little bit confused about input_ids, decoder_input_ids, and tgt in the model inputs. As I understand it, in a seq2seq model the decoder_input should have a special token (\<s> or something) before the sentence and the target should have a special token (\</s> or something) after the sentence. For example, `decoder_input = <s> A B C D E` , `target = A B C D E</s>` So my questions are: 1. Should I put these special tokens in decoder_input_ids and tgt_ids when using a seq2seq model in this library? Or can I just pass the decoder_input_ids and tgt_ids without any special token ids? 2. Also, should I use `add_special_tokens=True` for the encoder input_ids and put a \</s> or \<eos> token after the target ids? For example, `input = a b c d e, decoder_input = <s>A B C D E, target = A B C D E</s>` <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. -->
08-14-2020 16:01:47
08-14-2020 16:01:47
Hi @jungwhank for Bert2Bert, `pad_token` is used as `decoder_start_token_id` and the `input_ids` and `labels` begin with `cls_token_id` (`[CLS]` for bert ) and end with `sep_token_id` (`[SEP]` for bert). For training all you need to do is ```python3 input_text = "some input text" target_text = "some target text" input_ids = tokenizer(input_text, add_special_tokens=True, return_tensors="pt")["input_ids"] target_ids = tokenizer(target_text, add_special_tokens=True, return_tensors="pt")["input_ids"] model(input_ids=input_ids, decoder_input_ids=target_ids, labels=target_ids) ``` The EncoderDecoderModel class takes care adding `pad_token` to the `decoder_input_ids`. for inference ```python3 model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id) ``` Hope this clarifies your question. Also pinging @patrickvonplaten for more info.<|||||>Hi, @patil-suraj Thanks for answering. is it same for BartForConditionalGeneration? Actually, I wanna do kind of translation task and is it same `decoder_inputs_ids` and `labels`?<|||||>@patil-suraj's answer is correct! For the `EncoderDecoder` framework, one should set `model.config.decoder_start_token_id` to the BOS token (which in BERT's case does not exist so that we simply use CLS token). Bart is a bit different: - if you want to generate from a pretrained model, all you have to do is: `model.generate(input_ids)`. `input_ids` always refer to the encoder input tokens for Seq2Seq models and it depends on you if you want to add special tokens or not - this is not done automatically in the generate function. - if you want to have more control and just do one forward pass, you should define both `input_ids` and `decoder_input_ids` and in this case the `decoder_input_ids` should start with Bart's `decoder_start_token_id` `model.config.decoder_start_token_id`: `model(input_ids, decoder_input_ids=decoder_input_ids)`<|||||>@patrickvonplaten thanks for answering! But I have a question that Is there `decoder_start_token_id` in BartConfig? Should I just make my `decoder_input_ids` start with Bart's `model.config.bos_token_id` or set `model.config.decoder_start_token_id` = token_id?<|||||>I think I solved the problem. Thanks <|||||>@jungwhank Great ! Consider joining the awesome[ HF forum ](https://discuss.huggingface.co/), if you haven't already :) It's the best place to ask such questions. The whole community is there to help you and your questions will also help the community.
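To make the single-forward-pass case above concrete, a small sketch for BART: encoder tokens go into `input_ids`, and `decoder_input_ids` start with the model's `decoder_start_token_id` followed by the shifted target. The checkpoint and texts are placeholders, and the eos fallback is only a defensive assumption:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

input_ids = tokenizer("source sentence", return_tensors="pt")["input_ids"]
target_ids = tokenizer("target sentence", return_tensors="pt")["input_ids"]

start_id = model.config.decoder_start_token_id or tokenizer.eos_token_id
decoder_input_ids = torch.cat(
    [torch.tensor([[start_id]]), target_ids[:, :-1]], dim=-1
)  # teacher forcing: prepend the start token, drop the last target token

outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
print(outputs[0].shape)  # (batch, target_len, vocab_size) logits
```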
transformers
6,486
closed
from_pretrained() never works
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux - Python version: 3.6 - PyTorch version (GPU?): 1.5.1 (yes) - Tensorflow version (GPU?): - Using GPU in script?: not relevant - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik , @TevenLeScao , @mfuntowicz <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer tensorflow: @jplu documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): any The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. `import transformers as pt` 2. `pt.AutoModelForSequenceClassification.from_pretrained(<any_valid_model_id>)` 3. Observe the error below <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ```python >>> pt.AutoModelForSequenceClassification.from_pretrained('xlnet-base-cased') I0814 15:00:47.832349 46912496391360 configuration_utils.py:264] loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json from cache at /xxx/torch/transformers/c9cc6e53904f7f3679a31ec4af244f4419e25ebc8e71ebf8c558a31cbcf07fc8.69e5e35e0b798cab5e473f253752f8bf4d280ee37682281a23eed80f6e2d09c6 I0814 15:00:47.832984 46912496391360 configuration_utils.py:300] Model config XLNetConfig { "architectures": [ "XLNetLMHeadModel" ], "attn_type": "bi", "bi_data": false, "bos_token_id": 1, "clamp_len": -1, "d_head": 64, "d_inner": 3072, "d_model": 768, "dropout": 0.1, "end_n_top": 5, "eos_token_id": 2, "ff_activation": "gelu", "initializer_range": 0.02, "layer_norm_eps": 1e-12, "mem_len": null, "model_type": "xlnet", "n_head": 12, "n_layer": 12, "pad_token_id": 5, "reuse_len": null, "same_length": false, "start_n_top": 5, "summary_activation": "tanh", "summary_last_dropout": 0.1, "summary_type": "last", "summary_use_proj": true, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 250 } }, "untie_r": true, "vocab_size": 32000 } Traceback (most recent call last): File "/xxx/.conda/envs/xxx/lib/python3.6/site-packages/transformers/modeling_utils.py", line 655, in from_pretrained raise EnvironmentError OSError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/xxx/.conda/envs/xxx/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1363, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/xxx/.conda/envs/xxx/lib/python3.6/site-packages/transformers/modeling_utils.py", line 662, in from_pretrained raise EnvironmentError(msg) OSError: Can't load weights for 'xlnet-base-cased'. Make sure that: - 'xlnet-base-cased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'xlnet-base-cased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> A pretrained model should be loaded. This worked (and still works) great in `pytorch_transformers`. I switched to `transformers` because XLNet-based models stopped working in `pytorch_transformers`. But surprise surprise in `transformers` no model whatsoever works for me.
08-14-2020 13:23:55
08-14-2020 13:23:55
Hello! This is probably an error with your network. Are you behind a firewall? Does it work on any other machine on the same network?<|||||>Many thanks for a quick response. It is possible. It works on another machine on another network. Is there any way to debug what it tries to download and why it fails? Any idea why the downloads work in pytorch_transformers but not in transformers?<|||||>Hi @sadaszewski , I think you can use the following script for just making a get request to the xlnet configuration file: ```python import requests r = requests.get("https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json") print(r.text) ``` would be interesting to see the response then :)<|||||>Well but doesn't it seem like that's the only file it actually **manages** to get? As you can see in the printout it shows the config of the model... It fails loading weights I guess. How do I check those?<|||||>Oh, I can remember a recent location/CDN change. So the json configuration is loaded from the s3 link, but the model weight is located at `https://cdn.huggingface.co/xlnet-large-cased-pytorch_model.bin` -> could you check if you have access to this file?<|||||>And in `pytorch-transformers` the model was downloaded from: ```bash https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-pytorch_model.bin ```<|||||>I can confirm that it was a problem specific to my setup with trusted certificate for cdn.huggingface.co. Now fixed by specifying REQUESTS_CA_BUNDLE. Nevertheless it was nowhere to be found in the exception thrown by transformers that ultimately it has been caused by requests TLS handshake error. It would be very helpful if you considered adding exception chaining - https://www.python.org/dev/peps/pep-3134/ . Thanks for all your speedy replies!
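For anyone hitting the same symptom behind a corporate proxy or firewall, a hedged sketch of the debugging path that resolved this issue: check that the CDN host serving the weights is reachable, and point `requests` at the trusted CA bundle before calling `from_pretrained` (the bundle path is a placeholder):

```python
import os
import requests

# the config downloads fine from S3; the weights come from the CDN host
# (weight URL quoted earlier in the thread)
url = "https://cdn.huggingface.co/xlnet-large-cased-pytorch_model.bin"
resp = requests.head(url, allow_redirects=True)  # raises SSLError if the cert chain is not trusted
print(resp.status_code)

os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/corporate-ca-bundle.pem"  # placeholder path

from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("xlnet-base-cased")
```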
transformers
6,485
closed
Add tests/test_tokenization_reformer.py
As titled. Addresses issue [#6333](https://github.com/huggingface/transformers/issues/6333).
08-14-2020 12:35:50
08-14-2020 12:35:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=h1) Report > Merging [#6485](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9a8c168f56fe3c0e21d554a577ac03beb004ef89&el=desc) will **increase** coverage by `0.58%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6485/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6485 +/- ## ========================================== + Coverage 80.03% 80.61% +0.58% ========================================== Files 156 156 Lines 28058 28058 ========================================== + Hits 22456 22620 +164 + Misses 5602 5438 -164 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.69% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <0.00%> (+39.06%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6485/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=footer). Last update [b5ba758...66f97dd](https://codecov.io/gh/huggingface/transformers/pull/6485?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,484
closed
Assertion error when training a new RoBERTa from scratch
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-3.10.0-862.14.4.el7.x86_64-x86_64-with-centos-7.5.1804-Core - Python version: 3.6.10 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: <No. ### Who can help Maybe @LysandreJik ? :-) ## Information Model I am using RoBERTa: The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) The dataset is a simple line-by-line dataset. ## To reproduce Steps to reproduce the behavior: 1. Train a tokenizer according to [this](https://huggingface.co/blog/how-to-train#2-train-a-tokenizer) 2. Split line-by-line dataset into train and eval 3. Run below: ```sh python run_language_modeling.py \ --output_dir $MODEL_DIR/myBERT-small-v1 \ --model_type roberta \ --mlm \ --config_name $MODEL_DIR/myBERT-small \ --tokenizer_name $MODEL_DIR/myBERT-small \ --do_train \ --do_eval \ --per_device_train_batch_size 8 \ --learning_rate 1e-4 \ --num_train_epochs 5 \ --save_total_limit 2 \ --save_steps 2000 \ --per_gpu_train_batch_size 16 \ --evaluate_during_training \ --line_by_line \ --train_data_file $HOME/myBERT/train.txt \ --eval_data_file $HOME/myBERT/eval.txt \ --seed 42 ``` ```log 08/13/2020 14:23:20 - INFO - transformers.configuration_utils - loading configuration file /home/erippeth/myBERT/model/myBERT-small/config.json 08/13/2020 14:23:20 - INFO - transformers.configuration_utils - Model config RobertaConfig { "architectures": [ "RobertaForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "type_vocab_size": 1, "vocab_size": 52000 } 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Model name '/home/erippeth/myBERT/model/myBERT-small' not found in model shortcut name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). Assuming '/home/erippeth/myBERT/model/myBERT-small' is a path, a model identifier, or url to a directory containing tokenizer files. 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Didn't find file /home/erippeth/myBERT/model/myBERT-small/added_tokens.json. We won't load it. 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Didn't find file /home/erippeth/myBERT/model/myBERT-small/special_tokens_map.json. We won't load it. 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Didn't find file /home/erippeth/myBERT/model/myBERT-small/tokenizer_config.json. We won't load it. 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - Didn't find file /home/erippeth/myBERT/model/myBERT-small/tokenizer.json. We won't load it. 
08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file /home/erippeth/myBERT/model/myBERT-small/vocab.json 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file /home/erippeth/myBERT/model/myBERT-small/merges.txt 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file None 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file None 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file None 08/13/2020 14:23:20 - INFO - transformers.tokenization_utils_base - loading file None 08/13/2020 14:23:21 - INFO - __main__ - Training new model from scratch /home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_auto.py:709: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. FutureWarning, 08/13/2020 14:23:27 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at /home/erippeth/myBERT/train.txt 08/13/2020 17:40:20 - INFO - transformers.data.datasets.language_modeling - Creating features from dataset file at /home/erippeth/myBERT/eval.txt 08/13/2020 18:56:31 - WARNING - transformers.trainer - You are instantiating a Trainer but Tensorboard is not installed. You should consider installing it. 08/13/2020 18:56:31 - INFO - transformers.trainer - You are instantiating a Trainer but W&B is not installed. To use wandb logging, run `pip install wandb; wandb login` see https://docs.wandb.com/huggingface. 08/13/2020 18:56:31 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred. 08/13/2020 18:56:31 - WARNING - transformers.training_args - Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred. 08/13/2020 18:56:31 - INFO - transformers.trainer - ***** Running training ***** 08/13/2020 18:56:31 - INFO - transformers.trainer - Num examples = 16661098 08/13/2020 18:56:31 - INFO - transformers.trainer - Num Epochs = 5 08/13/2020 18:56:31 - INFO - transformers.trainer - Instantaneous batch size per device = 8 08/13/2020 18:56:31 - INFO - transformers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 16 08/13/2020 18:56:31 - INFO - transformers.trainer - Gradient Accumulation steps = 1 08/13/2020 18:56:31 - INFO - transformers.trainer - Total optimization steps = 5206595 ^MEpoch: 0%| | 0/5 [00:00<?, ?it/s] ^MIteration: 0%| | 0/1041319 [00:00<?, ?it/s]ESC[A ^MIteration: 0%| | 1/1041319 [00:01<508:20:24, 1.76s/it]ESC[A ^MIteration: 0%| | 2/1041319 [00:02<395:24:33, 1.37s/it]ESC[A ^MIteration: 0%| | 3/1041319 [00:02<306:50:22, 1.06s/it]ESC[A/opt/conda/conda-bld/pytorch_1595629416375/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [229,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1595629416375/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [229,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ... 
/opt/conda/conda-bld/pytorch_1595629416375/work/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [275,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed. Iteration: 0%| | 3/1041319 [00:03<332:03:04, 1.15s/it] Epoch: 0%| | 0/5 [00:03<?, ?it/s] Traceback (most recent call last): File "run_language_modeling.py", line 281, in <module> main() File "run_language_modeling.py", line 245, in main trainer.train(model_path=model_path) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 239, in forward output_hidden_states=output_hidden_states, File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 762, in forward output_hidden_states=output_hidden_states, File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 439, in forward output_attentions, File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 371, in forward hidden_states, attention_mask, head_mask, output_attentions=output_attentions, File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 315, in forward hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions, File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 258, in forward context_layer = context_layer.permute(0, 2, 1, 3).contiguous() RuntimeError: CUDA error: device-side assert triggered ``` ## Expected behavior Model should train without failure, but instead it fails with an assertion error. I believe this is related to an embedding dimension issue, but the config's `vocab_size` matches the length of the newly-trained tokenizer and this is the embedding dimension set in the training script.
08-14-2020 12:35:10
08-14-2020 12:35:10
This may be due to an embedding dimension issue, but may also be due to a CUDA OOM error earlier that has been misreported in my experience. To verify that it is an embedding dimension issue, can you try using the `--no_cuda` flag?<|||||>Sure - let me give it a shot. The one issue is that the data is large so featurizing the inputs takes a long time (and isn't cached), so it may take several hours to report back.<|||||>@LysandreJik I confirmed it was indeed an embedding issue: ```log Traceback (most recent call last): File "run_language_modeling.py", line 281, in <module> main() File "run_language_modeling.py", line 245, in main trainer.train(model_path=model_path) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 239, in forward output_hidden_states=output_hidden_states, File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 753, in forward input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_roberta.py", line 68, in forward input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/transformers/modeling_bert.py", line 179, in forward position_embeddings = self.position_embeddings(position_ids) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 126, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/home/erippeth/miniconda3/envs/myBERT/lib/python3.6/site-packages/torch/nn/functional.py", line 1814, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self ``` What's not immediately clear is _why_ it's happening. My understanding is the process goes... 1. Load the tokenizer. 2. Encode each line (forcing the indices to necessarily fall in the range of |vocab|) 3. Train<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
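For anyone hitting the same `IndexError`, a hedged diagnostic sketch along these lines can help narrow down whether the token ids or the sequence lengths are overflowing the embedding tables. The path below is a placeholder for the tokenizer/config directory used above, and the `- 2` reflects RoBERTa's position-id offset, which is an assumption worth verifying against your installed version:

```python
# Hedged diagnostic sketch; the path is a placeholder for the tokenizer/config
# directory used in the training command above.
from transformers import RobertaConfig, RobertaTokenizer

path = "/path/to/myBERT-small"
tokenizer = RobertaTokenizer.from_pretrained(path)
config = RobertaConfig.from_pretrained(path)

# 1) every token id produced by the tokenizer must be < config.vocab_size
print(len(tokenizer), config.vocab_size)

# 2) RoBERTa offsets position ids by the padding index, so sequences longer than
#    max_position_embeddings - 2 overflow the position embedding table
#    (assumption: padding_idx == 1, as in the config printed above).
encoding = tokenizer("a sample line from the training file")
ids = encoding["input_ids"]
print(len(ids), config.max_position_embeddings)
assert max(ids) < config.vocab_size
assert len(ids) <= config.max_position_embeddings - 2
```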
transformers
6,483
closed
Regarding GPU use for LM
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Hi, I am running example given in README.md of language_modeling using following command: export TRAIN_FILE=/path/to/dataset/wiki.train.raw export TEST_FILE=/path/to/dataset/wiki.test.raw python run_language_modeling.py \ --output_dir=output \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE It has started training but it is not using GPU (TITAN X) at all, when I see throug nvidia-smi command I am new to this So Can you please let me know if I'm missing anything here. Thanks.--> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
08-14-2020 11:26:31
08-14-2020 11:26:31
Hi, I am running the example given in the README.md of language_modeling using the following command: export TRAIN_FILE=/path/to/dataset/wiki.train.raw export TEST_FILE=/path/to/dataset/wiki.test.raw python run_language_modeling.py \ --output_dir=output \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE It has started training but it is not using the GPU (TITAN X) at all when I check through the nvidia-smi command. I am new to this, so can you please let me know if I'm missing anything here? Thanks.<|||||>This is probably because torch doesn't detect that you have a GPU. Can you try launching a python console and running the following? ```py import torch print(torch.cuda.is_available()) ```<|||||>Yeah, I also found out about it later after posting this issue. I had to install CUDA 9.1 and reboot the server, then it worked. Thank you for your reply :)
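For reference, a few additional standard PyTorch checks (no transformers-specific code) that can confirm whether the GPU and a matching CUDA runtime are visible before launching the script:

```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version PyTorch was built against (None on CPU-only builds)
print(torch.cuda.is_available())  # must be True for the script to train on GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the TITAN X mentioned above
```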
transformers
6,482
closed
Longformer Memory Consumption query
Hello, apologies if I am misunderstanding this, but suppose I can run RoBERTa with a maximum sequence length of 256 and a batch size of, say, 64 on one GPU for a task. With Longformer, can I use the same batch size with a window length of 256 and a maximum sequence length of 4096?
08-14-2020 10:46:24
08-14-2020 10:46:24
A question that might be of interest to @patrickvonplaten :)<|||||>Hey @PrudhviRaj12, my best answer would be to try it out :-) It can definitely work if you have enough GPU RAM. My best guess would be that in the scenario described by you above the Longformer version would require ca. `num_chunks` * `required_gpu_ram_for_roberta` and `num_chunks` in your case would be 4096 / 256 = 16. So you would need a lot of RAM to run `batch_size=64` and `max_length=4096` with Longformer, most likely not enough for one GPU (even if fp16).<|||||>I would also suggest adding gradient_checkpointing=True when you load your model with from_pretrained. This recent addition to the HF code base allowed me to go from using BERT with a max sequence length of 128-256 before running out of memory to now being able to use Longformer with a max seq length of up to 4096 on the same GPU setup! This thread helped me and may also help you: https://github.com/allenai/longformer/issues/80<|||||>Thanks @patrickvonplaten - I misunderstood the paper then. @HugToDebug thanks for your suggestion - I tried that but I am getting this warning ``` None of the inputs have requires_grad=True. Gradients will be None warnings.warn("None of the inputs have requires_grad=True. Gradients will be None" ``` when calling model.forward(sequence inputs, attention masks) with any model (be it longformer or bert or roberta) and the performance of the model is completely off of the same batch experimental setting with and without gradient checkpointing. Probably I am doing something wrong, I'll check that thread. I am only training the last N layers of the bert/roberta for my task, and I am setting requires grad = False for all the other layers and I am getting that warning. When I remove that condition of setting requires grad = False for some layers and leaving them true for all, I am not getting that warning. Any idea how to get around that issue?<|||||>Update: I was able to get rid of that warning by making one of the embedding weight matrices trainable (in my case - Roberta, token type embedding). It was only adding 768 more trainable parameters, but I am getting OOM. I had to cut down the batch size 4x to get it running on one gpu without OOM. Not sure why adding just 768 trainable params had that much of an impact.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
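For reference, a hedged sketch of the `gradient_checkpointing=True` suggestion from this thread. It assumes a transformers version whose Longformer config exposes the `gradient_checkpointing` and `attention_window` options; the model id and lengths are just examples:

```python
# Sketch of the gradient-checkpointing suggestion from the thread above; assumes
# a transformers version where the Longformer config supports these flags.
from transformers import LongformerForSequenceClassification, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096",
    gradient_checkpointing=True,  # trades extra compute for much lower activation memory
    attention_window=256,         # local attention window discussed above
)

inputs = tokenizer(
    "some long document ...",
    return_tensors="pt",
    truncation=True,
    max_length=4096,
)
outputs = model(**inputs)
```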
transformers
6,481
closed
what's the difference between TFBertOutput and TFBertSelfOutput?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> `TFBertOutput` with `TFBertSelfOutput` seem same in their codes. why did you write two same layers? is there for some reasons?
08-14-2020 10:43:23
08-14-2020 10:43:23
Hey @policeme, fair point, they seem to be exactly the same. There is a logical difference though: `TFBertOutput` is the output of a `TFBertLayer` while `TFBertSelfOutput` is the output of a `TFBertAttention` (self-attention -> thus "SelfOutput"). But yeah, this might seem a bit confusing at first.
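To illustrate the answer above, here is a simplified paraphrase (not the exact library code) of the shared structure of the two layers and where each one sits:

```python
# Simplified paraphrase of the two layers (not the exact library code): both apply
# dense -> dropout -> residual add -> LayerNorm, but on different inputs.
import tensorflow as tf


class SelfOutputLike(tf.keras.layers.Layer):
    """Applied to the self-attention output inside TFBertAttention."""

    def __init__(self, hidden_size, dropout_rate=0.1, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(hidden_size)
        self.layer_norm = tf.keras.layers.LayerNormalization(epsilon=1e-12)
        self.dropout = tf.keras.layers.Dropout(dropout_rate)

    def call(self, hidden_states, input_tensor, training=False):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states, training=training)
        # residual connection around whichever sub-block produced `hidden_states`
        return self.layer_norm(hidden_states + input_tensor)


# TFBertOutput has the same structure, but its `hidden_states` comes from the
# intermediate (feed-forward) layer and its `input_tensor` is the attention
# output, so the two residual connections wrap different sub-blocks of TFBertLayer.
```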
transformers
6,480
closed
Import accuracy_score
08-14-2020 10:42:08
08-14-2020 10:42:08
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=h1) Report > Merging [#6480](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9a8c168f56fe3c0e21d554a577ac03beb004ef89&el=desc) will **decrease** coverage by `0.06%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6480/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6480 +/- ## ========================================== - Coverage 80.03% 79.96% -0.07% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22456 22437 -19 - Misses 5602 5621 +19 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `71.83% <0.00%> (-23.95%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.53% <0.00%> (-22.78%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.97%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6480/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=footer). Last update [9a8c168...ab9eb7f](https://codecov.io/gh/huggingface/transformers/pull/6480?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,479
closed
[TFTrainer] gradient accumulation error
## Environment info - `transformers` version: master (#9a8c168) - Tensorflow version: 2.3.0 ### Who can help Trainer: @sgugger tensorflow: @jplu ## Information When using >1 `gradient_accumulation_steps` with TFTrainer and model inputs which are *not* simple tensors (for example dicts) the trainer fails. Also, to me it looks like there are logic issues in the way the `reduced_features` are computed for the gradient accumulation (not sure though). Issue is here: https://github.com/huggingface/transformers/blob/9a8c168f56fe3c0e21d554a577ac03beb004ef89/src/transformers/trainer_tf.py#L602 ## To reproduce ```python import tensorflow as tf from transformers import TFT5ForConditionalGeneration, TFTrainer, TFTrainingArguments input_ids = [[1, 2, 3], [1, 2, 3]] labels = [1, 2, 3] dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': input_ids}, labels)) model = TFT5ForConditionalGeneration.from_pretrained("t5-base") training_args = TFTrainingArguments( output_dir='./results', # output directory logging_steps=100, max_steps=2, save_steps=2000, per_device_train_batch_size=2, # batch size per device during training per_device_eval_batch_size=8, # batch size for evaluation warmup_steps=0, # number of warmup steps for learning rate scheduler weight_decay=0.0, # strength of weight decay learning_rate=5e-5, gradient_accumulation_steps=2 ) with training_args.strategy.scope(): model = TFT5ForConditionalGeneration.from_pretrained("t5-base") trainer = TFTrainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=dataset, # training dataset ) trainer.train() ``` ## Issues ### Error Produces error `TypeError: unhashable type: 'slice'`. Also `/` produce floats on python3, which I guess is not intended here. Solution in same spirit could be conditional use of ``` reduced_features = { ft: features[ft][:self.args.train_batch_size // self.args.n_replicas] for ft in features } ``` Already mentioned here https://github.com/huggingface/transformers/pull/6038#issuecomment-664706046 ### Logic issue I don't understand what the `n_replicas` has to do with the gradient accumulation here? Shouldn't the denominator rather be `gradient_accumulation_steps`? And shouldn't it actually use the different slices of the features, and not always the "first" slice? Might be totally misunderstanding this. Also this line doesn't seem to have any purpose: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L607 Happy to provide PR if someone can give me a hint on the logic issue.
08-14-2020 10:22:49
08-14-2020 10:22:49
We should wait for @jplu to come back from holiday for this, since he wrote that part of the code.<|||||>Good catch @maurice-g!! It is my fault, I did that part in a hurry and I should have been more careful. This will be fixed in the next PR (currently doing it). `n_replicas` is important here because we have to get the number of tuples (features, labels) corresponding to the batch size per GPU and `self.args.train_batch_size` gives the total batch size (batch size per GPU * number of replicas).<|||||>Should be fixed in https://github.com/huggingface/transformers/pull/6713 :+1: <|||||>thanks for looking into this @jplu One further remark on your PR #6713: The code now works _iff_ the features are a dict, but does not anymore if the features are a raw tensor (which worked before). IMO this should work for both, therefore there needs to be a conditional check on the type and then both situations should be handled. Or do you think that's not a relevant case?<|||||>Keeping open until @jplu answers your question @maurice-g <|||||>You are right, it is not working anymore with list/tuple/raw tensors. This is on purpose because I'm gonna push the usage of dictionaries only in TF at some point. Is it a big issue for you to use dictionaries?<|||||>Ok, works for me, just wanted to point it out.
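For reference, a hedged sketch of the conditional type check proposed in this thread, so that slicing works for dict features as well as raw tensors or tuples. The names mirror the trainer snippet in the issue body, but this is not the actual TFTrainer implementation:

```python
# Hedged sketch of the conditional handling discussed above; `features`,
# `train_batch_size` and `n_replicas` mirror the trainer snippet, but this is
# not the actual TFTrainer code.
def reduce_features(features, train_batch_size, n_replicas):
    per_replica = train_batch_size // n_replicas  # integer division, not `/`
    if isinstance(features, dict):
        return {name: tensor[:per_replica] for name, tensor in features.items()}
    if isinstance(features, (list, tuple)):
        # keep the original container type for tuples/lists of tensors
        return type(features)(tensor[:per_replica] for tensor in features)
    # plain tensor input
    return features[:per_replica]
```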
transformers
6,478
closed
Uploaded model is not indexed
# ❓ Questions & Help Hi guys, I uploaded a model several hours ago (t5-base-finetuned-boolq) and it is not indexed in the model hub search engine yet! Thanks, Manu
08-14-2020 08:52:26
08-14-2020 08:52:26
The model is also not listed on your [page](https://huggingface.co/mrm8488). Can you try re-uploading?<|||||>If you load it in your code as ```mrm8488/t5-base-finetuned-boolq``` it works! Maybe it's a problem with indexing.<|||||>cc @julien-c <|||||>Hi everyone, has there been a way to fix this? I also uploaded a model (t5-podcast-summarisation) that hasn't shown up on the model hub. I am able to load it in my code using `paulowoicho/t5-podcast-summarisation` though.<|||||>Fixed: - https://huggingface.co/mrm8488/t5-base-finetuned-boolq - https://huggingface.co/paulowoicho/t5-podcast-summarisation
transformers
6,477
closed
finetune.py: error: unrecognized arguments
### Who can help examples/distillation: @VictorSanh examples/seq2seq: @sshleifer ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Try running seq2seq/finetune.sh with data_dir or output_dir with escaped spaces in it 2. You'll get a `finetune.py: error: unrecognized arguments` This is bad because Google Drive mounts at `/content/drive/My Drive/` in Colab and thus the example scripts won't work if saving or reading from Drive. I've created a [Colab Notebook](https://colab.research.google.com/drive/1N-8m9FC9GbAywVJZAgSBkLqe24SPRfl8?usp=sharing) with repro. The fix I've found is to change: ``` python finetune.py \ --learning_rate=3e-5 \ --fp16 \ --gpus 1 \ --do_train \ --do_predict \ --n_val 1000 \ --val_check_interval 0.1 \ $@ ``` to ``` python finetune.py \ --learning_rate=3e-5 \ --fp16 \ --gpus 1 \ --do_train \ --do_predict \ --n_val 1000 \ --val_check_interval 0.1 \ "$@" ```
08-14-2020 05:30:39
08-14-2020 05:30:39
transformers
6,476
closed
Question about loss computing in BartForConditionalGeneration
I notice that in [BartForConditionalGeneration](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L1043), the labels and logits are not shifted when computing the cross-entropy loss. Should I provide pre-processed, shifted labels to the model for training?
08-14-2020 03:34:48
08-14-2020 03:34:48
Hi @JamesHujy , yes when training BART you need to shift `labels` and `decoder_input_ids`. ```python3 target_text = "some target text" enc = tokenizer(target_text , return_tensors="pt") target_ids = enc["input_ids"] decoder_input_ids = target_ids[:, :-1].contiguous() labels = target_ids[:, 1:].clone() ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
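For completeness, a hedged sketch of how these shifted tensors might then be fed to the model in a training step. It assumes a transformers version where `BartForConditionalGeneration` accepts a `labels` argument and returns the loss first when labels are given:

```python
# Hedged continuation of the snippet above; assumes a transformers version where
# BartForConditionalGeneration accepts `labels` for the LM loss.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

source = tokenizer("some source text", return_tensors="pt")
target_ids = tokenizer("some target text", return_tensors="pt")["input_ids"]

decoder_input_ids = target_ids[:, :-1].contiguous()  # inputs to the decoder
labels = target_ids[:, 1:].clone()                   # next-token targets

outputs = model(
    input_ids=source["input_ids"],
    attention_mask=source["attention_mask"],
    decoder_input_ids=decoder_input_ids,
    labels=labels,
)
loss = outputs[0]  # cross-entropy over the shifted targets
```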
transformers
6,475
closed
Use hash to clean the test dirs
This one solves it once and for all. What do you think? @sgugger @LysandreJik
08-14-2020 02:49:29
08-14-2020 02:49:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=h1) Report > Merging [#6475](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/05810cd80a5ca83065e0dbe5335c030c4a435ddb&el=desc) will **decrease** coverage by `1.12%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6475/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6475 +/- ## ========================================== - Coverage 80.55% 79.42% -1.13% ========================================== Files 153 153 Lines 28001 28001 ========================================== - Hits 22556 22241 -315 - Misses 5445 5760 +315 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.94% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.01% <0.00%> (+23.16%)` | :arrow_up: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6475/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=footer). Last update [05810cd...646a7dc](https://codecov.io/gh/huggingface/transformers/pull/6475?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Agreed, thanks for the fix!
transformers
6,474
closed
Training Data of xlm-roberta-large-finetuned-conll03-* models
Hi, I'm curious about the training data of the XLM-R models fine-tuned on the CoNLL NER datasets (e.g. xlm-roberta-large-finetuned-conll03-german, xlm-roberta-large-finetuned-conll03-english). Are the models trained on the train+dev sets?
08-14-2020 02:36:51
08-14-2020 02:36:51
Pinging @stefan-it <|||||>Hi @wangxinyu0922 , the models are only trained on the corresponding training data sets; that means development data was not used for training :)<|||||>That's great! Thank you!<|||||>> > > Hi @wangxinyu0922 , > > the models are only trained on the corresponding training data sets; that means development data was not used for training :) @stefan-it By the way, what is the accuracy of the model on the four datasets? Are the models trained on document context or sentence context? I believe a different context will affect the performance.
transformers
6,473
closed
[sched] polynomial_decay_schedule use default power=1.0
As discussed in https://github.com/huggingface/transformers/pull/6361 we weren't sure why fairseq's `polynomial_decay_schedule` `power` default was `1.0`, and decided to go with `2.0` as the latter does something polynomial. I got the devs at fairseq to answer this question: https://github.com/pytorch/fairseq/issues/2466#issuecomment-673146603 > myleott wrote: > This is based on the original BERT code, which implemented a linear decay via a polynomial schedule with power=1.0: https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py#L37 So, perhaps we do the same or we don't. If we don't - then the doc needs to be fixed that the default is `power=2.0` as currently it says `1.0` - my mistake. If we do (this PR), then the doc is already correct. Thanks.
08-14-2020 02:13:31
08-14-2020 02:13:31
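To make the `power` discussion above concrete, here is a minimal sketch of a polynomial learning-rate decay following the general form used by the BERT/fairseq implementations linked in the description (the end learning rate defaults to zero as an assumption). With `power=1.0` it reduces to a plain linear decay, which is why that value was the original default:

```python
# Minimal sketch of a polynomial learning-rate decay; with power=1.0 it is
# exactly a linear decay from `init_lr` down to `end_lr`.
def polynomial_decay(step, total_steps, init_lr, end_lr=0.0, power=1.0):
    step = min(step, total_steps)
    decay = (1 - step / total_steps) ** power
    return (init_lr - end_lr) * decay + end_lr


assert polynomial_decay(50, 100, 1.0, power=1.0) == 0.5   # linear: halfway -> half the LR
assert polynomial_decay(50, 100, 1.0, power=2.0) == 0.25  # quadratic decay
```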
transformers
6,472
closed
"BertEncoder' object has no attribute 'output_hidden_states"
Hi I have trained a Bert token classification model for the Italian language using the "dbmdz/bert-base-italian-uncased". I have trained the model in a machine running Pytorch-1.4.0 and transformer 3.0.2, when I installed it few days back as it's the latest version. I copied the saved best model to a server that runs Pytorch-1.4.0 & transformer version 2.3.0. I sent a request to the model to get the predictions, but I got the following warnings. # Inference code ``` tokenizer = transformers.BertTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased", do_lower_case=False) Assuming I have tokenized the requested text into the variable "tokens" indexed_tokens = tokenizer.convert_tokens_to_ids(tokens) segments_ids = [0] * len(tokens) tokens_tensor = torch.tensor([indexed_tokens]).to(device) segments_tensors = torch.tensor([segments_ids]).to(device) logit = model(tokens_tensor, token_type_ids=None, attention_mask=segments_tensors) ``` # Warnings ``` Model name 'dbmdz/bert-base-italian-uncased' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1). Assuming 'dbmdz/bert-base-italian-uncased' is a path or url to a directory containing tokenizer files. Didn't find file dbmdz/bert-base-italian-uncased/added_tokens.json. We won't load it. Didn't find file dbmdz/bert-base-italian-uncased/special_tokens_map.json. We won't load it. Didn't find file dbmdz/bert-base-italian-uncased/tokenizer_config.json. We won't load it. loading file https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/bert-base-italian-uncased/vocab.txt from cache at /root/.cache/torch/transformers/02b5ab8ef6a3a1d4af18c318bb4c53155a59a3893dd557b922d2467b269cd405.5cbaac66fdfadbe363aad01956dac0be9bf700f2c8c87012dc078b87e2fa4181 loading file None loading file None loading file None ``` ``` ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertForTokenClassification' has changed. Saved a reverse patch to BertForTokenClassification.patch. Run `patch -p0 < BertForTokenClassification.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertModel' has changed. Saved a reverse patch to BertModel.patch. Run `patch -p0 < BertModel.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertEmbeddings' has changed. Saved a reverse patch to BertEmbeddings.patch. Run `patch -p0 < BertEmbeddings.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.normalization.LayerNorm' has changed. Saved a reverse patch to LayerNorm.patch. Run `patch -p0 < LayerNorm.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertEncoder' has changed. 
Saved a reverse patch to BertEncoder.patch. Run `patch -p0 < BertEncoder.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.container.ModuleList' has changed. Saved a reverse patch to ModuleList.patch. Run `patch -p0 < ModuleList.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertLayer' has changed. Saved a reverse patch to BertLayer.patch. Run `patch -p0 < BertLayer.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertAttention' has changed. Saved a reverse patch to BertAttention.patch. Run `patch -p0 < BertAttention.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertSelfAttention' has changed. Saved a reverse patch to BertSelfAttention.patch. Run `patch -p0 < BertSelfAttention.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. Saved a reverse patch to Linear.patch. Run `patch -p0 < Linear.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertSelfOutput' has changed. Saved a reverse patch to BertSelfOutput.patch. Run `patch -p0 < BertSelfOutput.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertIntermediate' has changed. Saved a reverse patch to BertIntermediate.patch. Run `patch -p0 < BertIntermediate.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertOutput' has changed. Saved a reverse patch to BertOutput.patch. Run `patch -p0 < BertOutput.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'transformers.modeling_bert.BertPooler' has changed. Saved a reverse patch to BertPooler.patch. Run `patch -p0 < BertPooler.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ./torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.activation.Tanh' has changed. Saved a reverse patch to Tanh.patch. Run `patch -p0 < Tanh.patch` to revert your changes. warnings.warn(msg, SourceChangeWarning) ``` and finally it ended with the below error. ``` "BertEncoder' object has no attribute 'output_hidden_states". ``` Can someone help me understand Is it because of the Pytorch, transformer version mismatch between the trained model on a machine and the inference on the server? or if "dbmdz/bert-base-italian-uncased" is available in the 2.3.0 version or not? or is there any other way I can make this work instead of retraining the model at a lower version to match the version of the server? Assuming that changing the versions in the server is not quite possible as of now. Appreciate your help.
08-14-2020 01:50:38
08-14-2020 01:50:38
I think you will have to tweak the model here a bit to make it work. Before you pass arguments to the model's call function, can you add this line: ```python model.output_hidden_states = False ``` and see whether the error persists?<|||||>Same issue here. The problem is not solved after setting ``` model.output_hidden_states = False ```<|||||>Solved by upgrading transformers.
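As a general note on the `SourceChangeWarning` messages above: they suggest the whole module was pickled with `torch.save(model)`, which ties the checkpoint to one specific version of the library source. A hedged sketch of the more portable save/load path via `save_pretrained`/`from_pretrained` follows; the output path and `num_labels` are placeholders:

```python
# Hedged sketch: saving config + weights + vocab keeps the checkpoint independent
# of the exact transformers source, unlike pickling the whole module.
from transformers import BertForTokenClassification, BertTokenizer

# on the training machine (num_labels is a placeholder for your tag set size)
model = BertForTokenClassification.from_pretrained("dbmdz/bert-base-italian-uncased", num_labels=9)
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")
# ... fine-tune ...
model.save_pretrained("./italian-ner-model")      # writes config.json + pytorch_model.bin
tokenizer.save_pretrained("./italian-ner-model")

# on the inference server: rebuild the model from the saved files
model = BertForTokenClassification.from_pretrained("./italian-ner-model")
tokenizer = BertTokenizer.from_pretrained("./italian-ner-model")
```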
transformers
6,471
closed
[testing] automatically clean up temp dirs during teardown
Recently, a crucial fix was applied to several tests https://github.com/huggingface/transformers/pull/6453/files. the temp dir wasn't getting cleaned and subsequent tests were unreliable as they were tapping into invalid old data. The remaining issue is that the added fix is not guaranteed to run. And it repeats itself many times. I thought of several ways to ensure the removal of the temp dir and how to make it easier to use in the tests. Here are some ideas I came up with ## Idea 1 Using a simple `tempfile.TemporaryDirectory` context manager: ``` from tempfile import TemporaryDirectory class ExamplesTests(unittest.TestCase): def test_run_pl_glue(self): with TemporaryDirectory() as tmp_dir: testargs = f""" run_pl_glue.py --output_dir {tmp_dir.name} [...] ``` Pros: - generic code - very localized - can be done multiple times in the same test and having a fresh temp dir Cons: - can't pass a specific fixed dir so that it's easier to debug - could write a custom context manager that supports both random and fixed path. - tricky to debug - if one wants the temp dir to not be removed while developing the test - could write a custom version that supports an argument that will not remove the temp dir - have to reach into the object to get the actual path with `obj.name` ## Idea 1.5 This one solves the cons of idea 1 Write a custom context manager that takes a hard-coded path and a flag to clean up or not to make it easy to debug, to be built on top of `tempfile.TemporaryDirectory`. I haven't written it yet. But the core should be similar to `TestsWithTempDir` shown in the next idea, plus a context manager. But here is how it would be used: ``` from transformers.test_utils import temp_dir_ctx class ExamplesTests(unittest.TestCase): def test_run_pl_glue(self): with temp_dir_ctx(cleanup=True, path=None) as tmp_dir: testargs = f""" run_pl_glue.py --output_dir {tmp_dir.name} [...] ``` So that we could have: Most of the time with minimal extra code, which will use a random path and auto-delete: ``` with temp_dir_ctx() as tmp_dir: do_something_with(tmp_dir.name) ``` If we want a specific tmp path: ``` with temp_dir_ctx(path="/use/my/path") as tmp_dir: do_something_with(tmp_dir.name) ``` if we are debugging and don't want the auto-deletion ``` with temp_dir_ctx(cleanup=False) as tmp_dir: do_something_with(tmp_dir.name) ``` the only remaining cons: - have to reach into the object to get the actual path with `obj.name` - can fix with `def __str__(self): return self.name` ## Idea 2 Solve the problem on the test class level, so that the tests don't need to do anything at all to clean up the temp dir. This solution uses `unittest.TestCase`'s `setUp`/`tearDown` fixtures. ``` from pathlib import Path import tempfile import shutil class TestsWithTempDir(unittest.TestCase): """ This class is for tests that need to automatically remove a temp dir at the end of the test regardless of its success or failure. if no `tmp_dir` is passed a unique temp dir is created. if it's passed that passed dir is used instead. In either case that path is created and `self.tmp_dir` is set to the path that was used. 
Example 1: Let the system choose the path class ExamplesTests(TestsWithTempDir): def test_run_something(self): print(f"{self.tmp_dir} will be removed at the end of the test") Example 2: Use the path I supply class ExamplesTests(TestsWithTempDir): def __init__(): super.__init__(tmp_dir="./foo/bar") def test_run_something(self): print(f"{self.tmp_dir} will be removed at the end of the test") """ def __init__(self, tmp_dir=None): self.tmp_dir = tmp_dir self.tmp_dir_obj = None def setUp(self): if self.tmp_dir: Path(self.tmp_dir).mkdir(parents=True, exist_ok=True) else: self.tmp_dir_obj = tempfile.TemporaryDirectory() self.tmp_dir = self.tmp_dir_obj.name def tearDown(self): if self.tmp_dir_obj: del self.tmp_dir_obj else: shutil.rmtree(self.tmp_dir, ignore_errors=True) ``` Pros: - moves the cleaning up responsibility away from the test, leaving the test focused to just what it tests - very flexible - can handle custom and random paths - debug should be relatively easy - just need to add another option or a method to not tear-down (I haven't implemented it yet) Cons: - only supports one tmp dir per test - won't work if multiple executions happen in the same test - the action is far removed from the code - could be hard to see - I'm especially concerned with running `shutil.rmtree` at a distance - it'd be easy to make a mistake of passing `/tmp/foo` instead of `./tmp/foo` or worse. I'd rather not use `shutil.rmtree` at all unless it's right there when the developer can see what they are removing. ----- After contemplating these different solutions, I feel that locality is more important than behind the scenes magic, so I feel the best solution would be Idea 1.5 - i.e. a custom context manager that makes it easy to debug, to be built on top of `tempfile.TemporaryDirectory`, and also supports a hardcoded tmp path. Please, let me know if any of these resonate with you and then I can code a PR that can be seen in action. Thank you!
08-14-2020 00:13:28
08-14-2020 00:13:28
Our @JetRunner has already gone and proposed a fix using hashes: https://github.com/huggingface/transformers/pull/6475 I think #6475 makes sense but your proposals also resonate with me, especially the 1.5. Using `tempfile.TemporaryDirectory` seems cleaner to me than manually removing the folder afterwards. The hardcoded paths are already set-up thanks to @JetRunner's hashes, but it does make it harder to debug as hashes are not understandable from a human point of view. @JetRunner, would love your input on @stas00's proposals!<|||||>Thanks @stas00 and @LysandreJik! Both idea 1 and 1.5 look good to me! Idea 2 is not flexible enough and I am worried about using the same temp dir for all test cases (if my understanding is right). Maybe idea 1 is good enough and idea 1.5 seems to be a little over complicated since people can just quickly change the directory name from `tmp_dir.name` to their local path for debugging and then do some cleaning themselves. Yes, I agree `temporary_dir` looks much better than `rmtree`. `rmtree` looks super scary. Also I wonder how can you trace the temp dir if the test is interrupted? Will it still be cleaned?<|||||>Thank you for looking at my suggestions! > Our @JetRunner has already gone and proposed a fix using hashes: #6475 Neat! A few notes: - it hasn't solved the problem of guaranteed cleanup. if the test asserts half way the clean up will not be run. - I like that it ends up with the same dir name for a given test all the time - it doesn't tell me what that `output_dir` is, have to take extra steps to figure it out - e.g. `ls -lt` - it's a bit too much copy-n-paste - the scaffolding is starting to dominate the test. It can be made into a function in testing_utils and there is no need to manually push `--output_dir` into `testargs`, could just use f" .... {output_dir}" into the existing list of `testargs` > it does make it harder to debug as hashes are not understandable from a human point of view. I concur. Though `tempfile`'s output is cryptic too: `/tmp/tmp0vpwv7ok` <|||||>> Idea 2 is not flexible enough and I am worried about using the same temp dir for all test cases (if my understanding is right). Right. That approach is problematic if you have concurrent tests running with `pytest -n 2+`. Good observation! It could be easily fixed though by for example using the test name as a unique string or a hash of it. While idea 2 is super-smooth - no changes to the test! It's too far removed from where things happen from the perspective of the developer working on the test. > Maybe idea 1 is good enough and idea 1.5 seems to be a little over complicated since people can just quickly change the directory name from tmp_dir.name to their local path for debugging and then do some cleaning themselves. You will have to comment out the `with ` line and re-indent the rest of the code (or replace with `if 1:`) if you want to switch to local path, since `tempfile` doesn't support such override - it's not debug-needs-friendly. > Yes, I agree temporary_dir looks much better than rmtree. rmtree looks super scary. I'm glad we both find it scary > Also I wonder how can you trace the temp dir if the test is interrupted? Will it still be cleaned? I'm not sure what you mean by 'trace'. It does the right thing wrt guaranteed cleanup. 
Testing In ipython: ``` import tempfile try: with tempfile.TemporaryDirectory() as tmp_dir: print(f"{tmp_dir} will be removed at the end of the test") !ls -l $tmp_dir assert False except: pass finally: !ls -l $tmp_dir ``` ``` /tmp/tmp0vpwv7ok will be removed at the end of the test total 0 ls: cannot access '/tmp/tmp0vpwv7ok': No such file or directory ``` it looks like it stringified `tmp_dir` and didn't need `tmp_dir.name`. What I don't like the most about idea 1, is that it'll constantly change the path, so you have to print it out all the time - and it's not an easy feat to find out that print out with the huge dump of std streams and then you have to copy-n-paste the unique string - very inefficient debug-wise. I'd say quite terrible. but as we said replacing it with: ``` - with tempfile.TemporaryDirectory() as tmp_dir: + if 1: tmp_dir="./local/path" ``` will do the trick. hence the idea 1.5, which will do this for you. plus let you control whether to delete or not. ----- One more cons of pre-creating a temp dir, regardless of how it is done is that it'll lead to not testing script's capacity to correctly create a non-existing dir for its outputs. <|||||>> > Yes, I agree temporary_dir looks much better than rmtree. rmtree looks super scary. > > I'm glad we both find it scary If we end up using it in a context manager I wonder whether it'd be a smart idea to protect the developer from wiping out parts of their system, by refusing to delete that dir unless it was created by the context manager - i.e. it'll assert if the dir already exists. And, of course, provide a flag `i_know_what_I_am_doing_dammit=True` which will bypass the baby-gate. I don't know. This isn't great either - it will interfere with testing - I just don't like `rm -r` happening anywhere where I don't explicitly see it, including what it's deleting. <|||||>I am okay with all these solutions and they have their own pros and cons! For Idea 1.5, I still think if the user (i.e., developer in this case) wants to use their own directory, we should not handle the cleaning part. On one hand, cleaning may have a risk of deleting parts of the user's file system by mistake; on the other hand, I don't think it's a good idea to make this function too complicated. Idea 2 LGTM too as long as you solve the contradiction in directory and rmtree is considered acceptable.<|||||>If we aren't cleaning up automatically the hardcoded path, then it defeats the purpose of 1.5 completely, i.e. best then to use 1.0 - i.e. use generic ` tempfile.TemporaryDirectory`. So we start using: ``` from tempfile import TemporaryDirectory [...] with TemporaryDirectory() as tmp_dir: print(f"{tmp_dir} will be removed at the end of the test") ``` and the developer working on the test and wanting a fixed path, will have to re-write this with: ``` from tempfile import TemporaryDirectory [...] 
# with TemporaryDirectory() as tmp_dir: if 1: tmp_dir="./local/path" print(f"{tmp_dir} will be removed at the end of the test") import shutil shutil.rmtree(tmp_dir, ignore_errors=True) ``` That's a lot to type :( <|||||>But with 1.5 we don't have to bother to reindent, right?<|||||>You mean, as in: ``` with temp_dir_ctx() as tmp_dir: do_something_with(tmp_dir.name) ``` vs: ``` with temp_dir_ctx(path="/use/my/path") as tmp_dir: do_something_with(tmp_dir.name) ``` no need to reindent indeed, but it'll be very confusing as it will behave differently if `path` is passed (no clean up)<|||||>Moreover, if we do tell the dev to use `shutil.rmtree(tmp_dir, ignore_errors=True)` we are back at square one - it won't be run if assert will happen before it, so the next test run will be "contaminated". I was thinking that in this particular situation, we actually need to wipe the dir out **before** the test is run. i.e. this is the real need. It's much easier to ensure it happens, because we can do it first things first, so no assert to expect. The after test clean up is a different need.<|||||>Fair enough! I don't really have a preference here so let's go with what you think makes the most sense!<|||||>It's clear that I want to have the cake and eat it too. I want a super-safe solution, yet, with minimal coding inside the test. I think that perhaps I have to choose one or the other. I just feel uncomfortable to take responsibility for creating a gun that someone might shoot their foot with (could be mine). If I were a robot my positronic brain would have melted right now.<|||||>Haha don't worry. All these solutions are better than what we have right now (#6475)<|||||>OK, I thought of something. We use 1.5 as originally proposed in the first comment, but in addition we require that the hardcoded path is a subdir of the cwd dir . Assert if it is not. Then in the worst case scenario something unwanted will get wiped under the cloned git dir, but the system is safe.<|||||>I agree. And we can listen to the community when the PR is done.<|||||># Idea 2.5 ``` class ExamplesTests(TestsWithTempDir): [...] def test_whatever(self): tmp_dir = self.remove_at_teardown("./tmp/dir") # code whatever, and nothing else to write, no extra indent/scope needed ``` This will require subclassing `unittest.TestCase`, to facilitate registry of one or more dirs to clean up via a new method `remove_at_teardown`, and the clean up of those dirs will get run automatically via its `def tearDown(self)`method which will do all the work (needs to be written). This is even simpler and solves most of the deficiencies of the previous ideas. - we still require a sub-dir for safety, will be validated at registry time. - this idea drops the use of temp dir as it's not user-friendly debug wise. So we go back to hardcoded paths. - it's flexible, you can add several tmp dirs to remove. - if you want to keep the dir, just comment out the registry call if we want to ensure the dir is clean from the get-go, we can use another method that will attempt to delete at addition time and during teardown. `self.remove_now_and_at_teardown` or a flag `remove_at_teardown(now=True)`. Thoughts?<|||||>Cool. However, it is not practical to prevent others from copying and pasting the code fragment and the same path will be a problem for parallel testing (as we discussed). In this case, I believe you can use a hash (like #6475). However, temporary dir is still a cool idea that I don't want to give up. Good to hear from @sshleifer @sgugger <|||||>I agree. 
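For concreteness, here is a rough sketch of what the idea-2.5 base class could look like (names and details are purely illustrative, not a final API):

```python
import os
import shutil
import unittest


class TestsWithTempDir(unittest.TestCase):
    """Illustrative sketch: registered dirs get wiped automatically in tearDown."""

    def setUp(self):
        super().setUp()
        self._dirs_to_remove = []

    def remove_at_teardown(self, path, now=False):
        # safety baby-gate: only allow sub-dirs of the current working dir
        resolved = os.path.abspath(path)
        if not resolved.startswith(os.getcwd() + os.sep):
            raise ValueError(f"refusing to auto-remove {path}: not under {os.getcwd()}")
        if now:
            # start from a clean slate so an interrupted previous run can't contaminate this one
            shutil.rmtree(resolved, ignore_errors=True)
        self._dirs_to_remove.append(resolved)
        return path

    def tearDown(self):
        for d in self._dirs_to_remove:
            shutil.rmtree(d, ignore_errors=True)
        super().tearDown()
```

A test would then just do `tmp_dir = self.remove_at_teardown("./tests/tmp/foo", now=True)` and not worry about cleanup, even if it asserts half way.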
I will code something that will support both, so by default we will use a unique tmp dir but for debug it'll allow for a hardcoded path. I will send a PR soonish. Thank you for the wonderful practical feedback, @JetRunner <|||||>Done: https://github.com/huggingface/transformers/pull/6494<|||||>Thanks for taking care of it! Closing this as resolved.
transformers
6,470
closed
Generation doc
Add documentation (and clean docstrings) of `GenerationMixin` and `TFGenerationMixin`.
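For context, a minimal usage sketch of the `generate()` API these mixins provide (the checkpoint name is just an example):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello, my name is", return_tensors="pt")
# GenerationMixin.generate() is what implements greedy/beam search and sampling
output_ids = model.generate(input_ids, max_length=20, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```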
08-13-2020 20:23:50
08-13-2020 20:23:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=h1) Report > Merging [#6470](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/05810cd80a5ca83065e0dbe5335c030c4a435ddb&el=desc) will **decrease** coverage by `0.17%`. > The diff coverage is `96.73%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6470/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6470 +/- ## ========================================== - Coverage 80.55% 80.37% -0.18% ========================================== Files 153 156 +3 Lines 28001 28058 +57 ========================================== - Hits 22556 22552 -4 - Misses 5445 5506 +61 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.87% <ø> (-0.36%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <ø> (ø)` | | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <ø> (ø)` | | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <ø> (+4.05%)` | :arrow_up: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.66% <87.50%> (+0.64%)` | :arrow_up: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `95.31% <95.31%> (ø)` | | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.68% <97.87%> (+0.71%)` | :arrow_up: | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.47% <100.00%> (+0.14%)` | :arrow_up: | | [src/transformers/configuration\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21iYXJ0LnB5) | `100.00% <100.00%> (ø)` | | | ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6470/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=footer). Last update [05810cd...d9cbc03](https://codecov.io/gh/huggingface/transformers/pull/6470?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,469
closed
Fix typo
08-13-2020 18:53:50
08-13-2020 18:53:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=h1) Report > Merging [#6469](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e0b1dc8954b87c18f77a82000e81e02683b8eb1&el=desc) will **increase** coverage by `0.76%`. > The diff coverage is `87.43%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6469/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6469 +/- ## ========================================== + Coverage 79.77% 80.53% +0.76% ========================================== Files 148 153 +5 Lines 27214 28001 +787 ========================================== + Hits 21710 22552 +842 + Misses 5504 5449 -55 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | | | [src/transformers/data/test\_generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (-0.91%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <ø> (ø)` | | | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (-0.13%)` | :arrow_down: | | [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `51.92% <28.57%> (-20.81%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <37.50%> (-0.18%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <50.00%> (+1.79%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `90.90% <52.94%> (-5.68%)` | :arrow_down: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.02% <66.66%> (-1.19%)` | :arrow_down: | | ... and [61 more](https://codecov.io/gh/huggingface/transformers/pull/6469/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=footer). Last update [7bc0056...1e75c22](https://codecov.io/gh/huggingface/transformers/pull/6469?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks!
transformers
6,468
closed
convert_graph_to_onnx not working as expected.
# ❓ Questions & Help not sure if there is a bug. when running ```python from transformers.convert_graph_to_onnx import convert convert(framework="tf", model = my_fine_tuned_bert_model, output="onnx-fine-tuned/model.onnx", opset=11, tokenizer=tokenizer) ``` I got the following log/output ``` ONNX opset version set to: 11 Loading pipeline (model: <__main__.TFBertForMultiClassification object at 0x7f2c37ba9b50>, tokenizer: <transformers.tokenization_bert.BertTokenizerFast object at 0x7f2c37ba9ad0>) Creating folder onnx-fine-tuned /!\ Please note TensorFlow doesn't support exporting model > 2Gb /!\ Using framework TensorFlow: 2.1.0, keras2onnx: 1.7.0 Found input input_ids with shape: {0: 'batch', 1: 'sequence'} Found input token_type_ids with shape: {0: 'batch', 1: 'sequence'} Found input attention_mask with shape: {0: 'batch', 1: 'sequence'} Found output output_0 with shape: {0: 'batch'} WARNING:tensorflow:AutoGraph could not transform <bound method TFBertForMultiClassification.call of <__main__.TFBertForMultiClassification object at 0x7f2c37ba9b50>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Num' WARNING: AutoGraph could not transform <bound method TFBertForMultiClassification.call of <__main__.TFBertForMultiClassification object at 0x7f2c37ba9b50>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Num' WARNING:tensorflow:AutoGraph could not transform <bound method TFBertMainLayer.call of <transformers.modeling_tf_bert.TFBertMainLayer object at 0x7f2c3dc34910>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Num' WARNING: AutoGraph could not transform <bound method TFBertMainLayer.call of <transformers.modeling_tf_bert.TFBertMainLayer object at 0x7f2c3dc34910>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module 'gast' has no attribute 'Num' WARNING:tensorflow:AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f2c371ffe90>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4 WARNING: AutoGraph could not transform <bound method TFBertSelfOutput.call of <transformers.modeling_tf_bert.TFBertSelfOutput object at 0x7f2c371ffe90>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Bad argument number for Name: 3, expecting 4 WARNING:tensorflow:AutoGraph could not transform <bound method TFBertIntermediate.call of <transformers.modeling_tf_bert.TFBertIntermediate object at 0x7f2c3769bb50>> and will run it as-is. Please report this to the TensorFlow team. 
[... the same pair of AutoGraph warnings ("AutoGraph could not transform <bound method TFBert*.call ...> and will run it as-is" / "Cause: Bad argument number for Name: 3, expecting 4") repeats for every TFBertSelfOutput, TFBertIntermediate, TFBertOutput and TFBertPooler layer in the model ...]
tf executing eager_mode: True
tf.keras model eager_mode: False
The ONNX operator number change on the optimization: 2579 -> 1674
```
should I ignore the warning? 
The shape of the exported onnx model is ``` graph_name: tf_bert_for_multi_classification domain: onnxmltools description: input 0: "attention_mask" ["N", 7] Int32 input 1: "input_ids" ["N", 7] Int32 input 2: "token_type_ids" ["N", 7] Int32 output 0: "output_1" ["N", 4404, 1] Float ``` I don't think that's correct. where are "N" and 7 from? when I try to run the model on input ``` {'input_ids': array([ 101, 146, 1169, 1631, 1103, 3974, 117, 1169, 1128, 136, 102]), 'token_type_ids': array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 'attention_mask': array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])} ``` I got error ``` >>> results = session.run(None, inputs_onnx) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/nix/store/xws61xnjc03fjiwfh7ci5cwgg1chmp3l-python3.7-onnxruntime-1.4.0/lib/python3.7/site-packages/onnxruntime/capi/session.py", line 110, in run return self._sess.run(output_names, input_feed, run_options) onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (N11onnxruntime17PrimitiveDataTypeIlEE) , expected: (N11onnxruntime17PrimitiveDataTypeIiEE) ```
08-13-2020 18:25:00
08-13-2020 18:25:00
My model is:
```python
import tensorflow as tf

# imports added for completeness (module paths as of transformers 3.x)
from transformers import TFBertPreTrainedModel
from transformers.modeling_tf_bert import TFBertMainLayer
from transformers.modeling_tf_utils import get_initializer


class TFBertForMultiClassification(TFBertPreTrainedModel):
    '''BERT Model class for multi-label classification using a softmax output layer '''

    def __init__(self, config, *inputs, **kwargs):
        super(TFBertForMultiClassification, self).__init__(config, *inputs, **kwargs)
        self.num_labels = config.num_labels
        self.bert = TFBertMainLayer(config, name='bert')
        self.bert.trainable = False
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
        self.classifier = tf.keras.layers.Dense(
            config.num_labels,
            kernel_initializer=get_initializer(config.initializer_range),
            name='classifier',
            activation='sigmoid')
        self.config = config

    def get_config(self):
        return self.config

    def call(self, inputs, **kwargs):
        outputs = self.bert(inputs, **kwargs)
        pooled_output = outputs[1]
        pooled_output = self.dropout(pooled_output, training=kwargs.get('training', False))
        logits = self.classifier(pooled_output)
        logits = tf.keras.backend.expand_dims(logits, axis=-1)
        outputs = (logits,) + outputs[2:]  # add hidden states and attention if they are here
        return outputs  # logits, (hidden_states), (attentions)
```
<|||||>created a smaller example to reproduce the problem. https://github.com/huggingface/transformers/issues/6503
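For what it's worth, the final `InvalidArgument` error looks like an input dtype/shape mismatch rather than an export failure: the exported graph expects 2-D int32 tensors (`["N", 7]`, where `N` is the dynamic batch axis and the 7 presumably comes from the dummy sequence used at export time), while the arrays passed to `session.run` are 1-D int64. A rough workaround sketch (reusing the `inputs_onnx` and `session` objects from the snippets above):

```python
import numpy as np

# cast to int32 and add the batch dimension the exported graph expects
inputs_onnx = {
    k: np.atleast_2d(np.asarray(v, dtype=np.int32)) for k, v in inputs_onnx.items()
}
results = session.run(None, inputs_onnx)
```

Note that if the sequence axis really was baked to a fixed length of 7 instead of staying dynamic, an 11-token input may still be rejected even after the cast.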
transformers
6,467
closed
Error: 'GPT2Model' object has no attribute '_step' when converting tf-based checkpoint into pytorch
I'm trying to convert a tensorflow-based GPT-2 checkpoint into pytorch using `convert_gpt2_checkpoint_to_pytorch`, and get errors like: ``` INFO:transformers.modeling_gpt2:Converting TensorFlow checkpoint from /content/model.ckpt-220000 INFO:transformers.modeling_gpt2:Loading TF weight global_step with shape [] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/LayerNorm_embed_norm/beta with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/LayerNorm_embed_norm/beta/adafactor_v with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/LayerNorm_embed_norm/gamma with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/LayerNorm_embed_norm/gamma/adafactor_v with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/pos_embed with shape [1024, 1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/pos_embed/adafactor_vc with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/pos_embed/adafactor_vr with shape [1024] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/word_embed with shape [8021, 1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/word_embed/adafactor_vc with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/embeddings/word_embed/adafactor_vr with shape [8021] INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer00/LayerNorm_mlp_ln0/beta with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer00/LayerNorm_mlp_ln0/beta/adafactor_v with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer00/LayerNorm_mlp_ln0/gamma with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer00/LayerNorm_mlp_ln0/gamma/adafactor_v with shape [1536] ... INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer47/value_layer/bias/adafactor_v with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer47/value_layer/kernel with shape [1536, 1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer47/value_layer/kernel/adafactor_vc with shape [1536] INFO:transformers.modeling_gpt2:Loading TF weight newslm/layer47/value_layer/kernel/adafactor_vr with shape [1536] --------------------------------------------------------------------------- ModuleAttributeError Traceback (most recent call last) <ipython-input-38-45b704eacf86> in <module>() 1 from transformers.convert_gpt2_original_tf_checkpoint_to_pytorch import convert_gpt2_checkpoint_to_pytorch ----> 2 convert_gpt2_checkpoint_to_pytorch('./model.ckpt-220000', '', 'pytorch') 2 frames /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 770 return modules[name] 771 raise ModuleAttributeError("'{}' object has no attribute '{}'".format( --> 772 type(self).__name__, name)) 773 774 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None: ModuleAttributeError: 'GPT2Model' object has no attribute '_step' ``` It seems that the program cannot convert `global_step` layer into pytorch. Is there any solution to this?
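For reference, a quick way to list which checkpoint variables are optimizer/bookkeeping state rather than model weights — these are what produce the `_step` lookup failure shown in the traceback, and presumably need to be skipped or remapped during conversion. The `newslm/...` variable names also suggest a Grover-style checkpoint rather than a stock GPT-2 one, so the weight names themselves may need remapping. A hedged sketch:

```python
import tensorflow as tf

ckpt = "/content/model.ckpt-220000"
for name, shape in tf.train.list_variables(ckpt):
    if name == "global_step" or "adafactor" in name:
        # not a model weight: a candidate to skip in the conversion loop
        print("skip:", name, shape)
```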
08-13-2020 17:29:48
08-13-2020 17:29:48
same problem<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,466
closed
add custom datasets tutorial
A tutorial showing examples for working with custom datasets on several tasks. Goals: 1. Keep it general. The point is to show people how to use their own datasets, so don't use any processors or utilities that are dataset-specific. 2. Show several tasks with different data formats. I include sequence classification with IMDb, token classification with W-NUT NER, and question answering with squad 2.0. Also link to how to train a language model blog post. 3. Prepare the data in a way that works with Trainer, TFTrainer, native PyTorch, and native TensorFlow with keras's `fit` method.
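As an illustration of the kind of pattern point 3 refers to (a rough sketch, not the tutorial's exact code — the in-line texts stand in for the real IMDb split):

```python
import torch
from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

train_texts = ["I loved it!", "Terrible movie."]  # stand-ins for the real dataset
train_labels = [1, 0]
train_encodings = tokenizer(train_texts, truncation=True, padding=True)


class IMDbDataset(torch.utils.data.Dataset):
    """Wraps tokenizer output so it can be fed to Trainer or a plain DataLoader."""

    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)


train_dataset = IMDbDataset(train_encodings, train_labels)
```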
08-13-2020 17:21:58
08-13-2020 17:21:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=h1) Report > Merging [#6466](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e0b1dc8954b87c18f77a82000e81e02683b8eb1&el=desc) will **decrease** coverage by `1.34%`. > The diff coverage is `83.53%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6466/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6466 +/- ## ========================================== - Coverage 79.77% 78.42% -1.35% ========================================== Files 148 153 +5 Lines 27214 28001 +787 ========================================== + Hits 21710 21960 +250 - Misses 5504 6041 +537 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | | | [src/transformers/data/test\_generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (-0.91%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.15% <ø> (-0.20%)` | :arrow_down: | | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (-0.13%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <4.00%> (-54.16%)` | :arrow_down: | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <7.14%> (-70.00%)` | :arrow_down: | | [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `51.92% <28.57%> (-20.81%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <37.50%> (-0.18%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <50.00%> (+1.79%)` | :arrow_up: | | ... and [72 more](https://codecov.io/gh/huggingface/transformers/pull/6466/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=footer). Last update [7bc0056...31ea640](https://codecov.io/gh/huggingface/transformers/pull/6466?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Don't mind the failing test, it's been fixed on `master`.
transformers
6,465
closed
Longformer convert error
When I install transformers from source and convert BERT to the "long version", [the conversion notebook fails.](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb)
08-13-2020 15:53:23
08-13-2020 15:53:23
Error(s) in loading state_dict for RobertaLongForMaskedLM: size mismatch for embeddings.position_ids: copying a param with shape torch.Size([1, 512]) from checkpoint, the shape in current model is torch.Size([1, 4096]).<|||||>Hey @Maybewuss, This is a community notebook, so we don't really plan on maintaining this notebook with current library changes. Regarding your question I would suggest to post it on https://discuss.huggingface.co/ and/or to contact the author @ibeltagy - maybe he can help you. Before that it would be nice if you can create a notebook which can be used to re-create your error (replacing RoBERTA with BERT in the above notebook)<|||||>@patrickvonplaten Is there a way of converting existing 'short' models to Longformer? The notebook above (from allennlp) seem not to be useful since you can't automatically convert their 'long' model to Longformer Huggingface's class. The only way I see is to manually remap nodes.<|||||>Yeah, it is not straight-forward to convert *any* HF model to its "long" version. You will need to write some special code for this yourself I think. The notebook should work more as an example for how it can be done with a model like Roberta<|||||>I faced the same error with roberta. Size mismatch was in the position embedding and position ids. Adding the following lines to `create_long_model` helped: ```{python} model.roberta.embeddings.position_embeddings.weight.data = new_pos_embed # add after this line model.roberta.embeddings.position_embeddings.num_embeddings = len(new_pos_embed.data) # first, check that model.roberta.embeddings.position_embeddings.weight.data.shape is correct — has to be 4096 (default) of your desired length model.roberta.embeddings.position_ids = torch.arange( 0, model.roberta.embeddings.position_embeddings.num_embeddings )[None] ``` For some reason number of embeddings didn't change after adding new weight tensor, so we fix it and also add new position ids. I use torch==1.6.0 and transformers==3.4.0<|||||>@NadiaRom Been trying this implementation, but the forward pass in `RobertaLongSelfAttention` gets too many inputs in the forward pass. ```python class RobertaLongSelfAttention(LongformerSelfAttention): def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, output_attentions=False, ): return super().forward(hidden_states, attention_mask=attention_mask, output_attentions=output_attentions) ``` And doesnt work with the current implementation in the transformer library [of the forward pass](https://github.com/huggingface/transformers/blob/c89bdfbe720bc8f41c7dc6db5473a2cb0955f224/src/transformers/models/longformer/modeling_longformer.py#L415) Any thought on how to solve this and use the conversion script in the current transformers release (3.5.1)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>@MarkusSagen, were you able to solve the `forward()` issue? <|||||>@versae I only looked at it for a couple of hours and decided it was easier to roll back to an earlier version of transformers. 
If anyone implements a fix, I would be very interested to hear 😊👌<|||||>@MarkusSagen, [this PR makes it work for 4.2.0](https://github.com/allenai/longformer/pull/166/), and with a couple of changes it also works for 4.9.0.
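For reference, the fix discussed in this thread boils down to resizing the position-embedding table and the `position_ids` buffer together. A minimal sketch, assuming the RoBERTa module layout of transformers 3.x/4.x (`model.roberta.embeddings`); attribute names may differ in other versions, and this is not the official conversion script:

```python
# Sketch of the "convert to long" position-embedding fix discussed above.
# Assumes transformers 3.x/4.x RoBERTa internals; not the official script.
import torch
from transformers import RobertaForMaskedLM

max_pos = 4096  # desired maximum sequence length
model = RobertaForMaskedLM.from_pretrained("roberta-base")
embeddings = model.roberta.embeddings

old_pos_embed = embeddings.position_embeddings.weight.data  # (514, hidden) for roberta-base
hidden = old_pos_embed.shape[1]

# RoBERTa reserves the first two position ids, so copy them and then tile the
# learned positions until the new, longer table is full.
new_pos_embed = old_pos_embed.new_empty(max_pos + 2, hidden)
new_pos_embed[:2] = old_pos_embed[:2]
k = 2
while k < new_pos_embed.shape[0]:
    step = min(old_pos_embed.shape[0] - 2, new_pos_embed.shape[0] - k)
    new_pos_embed[k : k + step] = old_pos_embed[2 : 2 + step]
    k += step

embeddings.position_embeddings = torch.nn.Embedding.from_pretrained(new_pos_embed, freeze=False)
# This is the part the size-mismatch error points at: the position_ids buffer
# must be resized together with the embedding table.
embeddings.position_ids = torch.arange(new_pos_embed.shape[0]).unsqueeze(0)
model.config.max_position_embeddings = max_pos + 2
```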
transformers
6,464
closed
[BartTokenizerFast] add BartTokenizerFast in AutoTokenizer
This PR adds BartTokenizerFast in AutoTokenizer. @sshleifer
08-13-2020 15:14:16
08-13-2020 15:14:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=h1) Report > Merging [#6464](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54c687e97c92efe6eba9e537bd98b47d9005a279&el=desc) will **decrease** coverage by `2.57%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6464/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6464 +/- ## ========================================== - Coverage 79.91% 77.33% -2.58% ========================================== Files 153 153 Lines 28005 28005 ========================================== - Hits 22379 21657 -722 - Misses 5626 6348 +722 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.72% <100.00%> (ø)` | | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-63.30%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6464/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=footer). Last update [a442f87...c1b241e](https://codecov.io/gh/huggingface/transformers/pull/6464?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,463
closed
[LongformerTokenizerFast] add LongformerTokenizerFast in AutoTokenizer
This PR adds LongformerTokenizerFast in AutoTokenizer. Fixes #6459 @patrickvonplaten
08-13-2020 15:08:43
08-13-2020 15:08:43
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=h1) Report > Merging [#6463](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/54c687e97c92efe6eba9e537bd98b47d9005a279&el=desc) will **increase** coverage by `0.18%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6463/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6463 +/- ## ========================================== + Coverage 79.91% 80.09% +0.18% ========================================== Files 153 153 Lines 28005 28005 ========================================== + Hits 22379 22431 +52 + Misses 5626 5574 -52 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.72% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6463/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.58% <0.00%> (+27.51%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=footer). Last update [54c687e...7f1278b](https://codecov.io/gh/huggingface/transformers/pull/6463?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,462
closed
minor typo fix in modeling_utils
08-13-2020 12:44:43
08-13-2020 12:44:43
Thanks!
transformers
6,461
closed
Sort unique_no_split_tokens to make it deterministic
The `unique_no_split_tokens` attribute of tokenizers is not deterministic, and it makes the hashing in the `nlp` lib return different hashes for the same tokenizer over different sessions. To fix that I changed its type to a `set` instead of a `list`. Fix #6460
08-13-2020 12:40:57
08-13-2020 12:40:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=h1) Report > Merging [#6461](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d94aecd516c7540a94b9d781ef28d7375a796bc&el=desc) will **decrease** coverage by `0.47%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6461/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6461 +/- ## ========================================== - Coverage 80.09% 79.62% -0.48% ========================================== Files 153 153 Lines 28005 28005 ========================================== - Hits 22430 22298 -132 - Misses 5575 5707 +132 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-70.95%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=footer). Last update [9d94aec...dfb7549](https://codecov.io/gh/huggingface/transformers/pull/6461?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I've actually switched in the last version of transformers from a `set` to a `list` for the same reason (not deterministic for `nlp`). Are you sure this really solve the problem @lhoestq ? Also regarding backward compatibility, I'm fine with changing this from a list to a set @sgugger <|||||>Maybe we should rather have a sorted list?<|||||>`sorted` should solves the issue. I just tested and a `set` doesn't solve it actually. 
I'll change to `sorted`, thanks @thomwolf <|||||>This is such an important use-case (and potential source of regression) for us that we may want to add a test on that in `nlp` or `transformers` in a not too far future.<|||||>Yes definitely. Not sure how to test consistency across sessions in the CI though. I guess we could have tests with hardcoded hashes for some tokenizers but I'm not sure that's ideal. Or maybe there's a way to do two CI jobs in a row: one to generate the hashes in a first session, and one to verify that they're the same in another session.
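To see why sorting fixes the caching issue, here is a small illustration with a hypothetical fingerprint helper (not the actual hashing code used by nlp): the hash of the raw list depends on its order, while sorting first makes it stable across sessions.

```python
# Hypothetical fingerprint helper, only to illustrate the order dependence;
# nlp's real hashing works on the whole tokenizer object.
import hashlib

def fingerprint(tokens):
    return hashlib.md5("\0".join(tokens).encode("utf-8")).hexdigest()

session_a = ["[CLS]", "[MASK]", "[PAD]", "[SEP]", "[UNK]"]
session_b = ["[MASK]", "[CLS]", "[UNK]", "[PAD]", "[SEP]"]  # same tokens, shuffled

assert fingerprint(session_a) != fingerprint(session_b)                  # cache miss
assert fingerprint(sorted(session_a)) == fingerprint(sorted(session_b))  # deterministic
```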
transformers
6,460
closed
Hashing a tokenizer using the 🤗 nlp lib is not deterministic
In the `nlp` library it is common to use a tokenizer on a dataset. The library takes care of caching the results, so that if you run the tokenization twice, it will reuse the previous results. To make the caching work, we compute a hash of the tokenizer. However the `unique_no_split_tokens` attribute of tokenizers is not deterministic, and it makes the hashing return different hashes for the same tokenizer over different sessions. `unique_no_split_tokens` can be a list like `['[CLS]', '[MASK]', '[PAD]', '[SEP]', '[UNK]']` for example. But it happens that re-loading a tokenizer in another session shuffles the tokens in the list. For example this code doesn't always return the same output over different sessions: ```python from transformers import AutoTokenizer model_name = "distilbert-base-uncased-finetuned-sst-2-english" tokenizer = AutoTokenizer.from_pretrained(model_name) print(tokenizer.unique_no_split_tokens) ``` Reproduce on google colab: https://colab.research.google.com/drive/1nyskaLavcTCkXibZBlYX71bkG476uSzz?usp=sharing
08-13-2020 12:39:35
08-13-2020 12:39:35
transformers
6,459
closed
Autotokenizer not returning instance of LongformerTokenizerFast
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: Google Colab ## Information Model I am using : Longformer Path: **'allenai/longformer-base-4096'** and **'allenai/longformer-large-4096'** The problem arises when trying to load 'Fast' version for Longformer using Autotokenizer, the returned tokenizer instance is an object of LongformerTokenizer and not LongformerTokenizerFast. ![Annotation 2020-08-13 160156](https://user-images.githubusercontent.com/20542313/90124665-68ace300-dd7e-11ea-8cc7-fdd80f070bec.png) I require the offset mappings for a sub task of extracting word embeddings. ## To reproduce Just as in the screenshot i am adding the code below to instantiate the tokenizer object: ``` longformer_tokenizer = AutoTokenizer.from_pretrained( pretrained_model_name_or_path = 'allenai/longformer-base-4096', use_fast=True) print(longformer_tokenizer.is_fast) print(longformer_tokenizer) ``` And since its not an instance of transformers.LongformerTokenizerFast, I cannot `return_offsets_mapping=True` As in the below code throws `NotImplementedError` ``` longformer_encoded_dict = longformer_tokenizer.encode_plus(text=sequence_3, add_special_tokens = True, max_length = 75, truncation = True, pad_to_max_length = False, return_token_type_ids = False, return_attention_mask = True, return_overflowing_tokens = False, return_special_tokens_mask = False, return_offsets_mapping=True) ``` **Error** ` NotImplementedError: return_offsets_mapping is not available when using Python tokenizers.To use this feature, change your tokenizer to one deriving from transformers.PreTrainedTokenizerFast.` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> @mfuntowicz @patrickvonplaten
08-13-2020 10:46:03
08-13-2020 10:46:03
Hi @pratikdk thank you for reporting this! Just made a PR, will be fixed soon. Till then you can use `LongformerTokenizerFast` class
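Until the AutoTokenizer fix lands, a workaround along the lines suggested in the reply is to instantiate the fast class directly, which makes `return_offsets_mapping` available (sketch):

```python
# Workaround sketch: use the fast tokenizer class directly so that
# return_offsets_mapping is supported.
from transformers import LongformerTokenizerFast

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
encoded = tokenizer(
    "Longformer handles long documents.",
    max_length=75,
    truncation=True,
    return_offsets_mapping=True,
)
print(encoded["offset_mapping"])  # (start, end) character spans per token
```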
transformers
6,458
closed
Unknown task zero-shot-classification
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Ubuntu 18 - Python version: 3.7 ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer tensorflow: @jplu documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. I downloaded transformer version 3.0.2 2. From transformer, I imported pipeline 3. And from the pipeline, I was trying to load this task `zero-shot-classification` and then I got the error. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-12-1f0825594ce1> in <module> ----> 1 classifier = pipeline("zero-shot-classification") ~/anaconda3/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs) 1819 # Retrieve the task 1820 if task not in SUPPORTED_TASKS: -> 1821 raise KeyError("Unknown task {}, available tasks are {}".format(task, list(SUPPORTED_TASKS.keys()))) 1822 1823 framework = framework or get_framework(model) KeyError: "Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']" ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
08-13-2020 10:08:55
08-13-2020 10:08:55
Hello! this task is only available on the `master` branch as of now. You can install it as such: `pip install git+https://github.com/huggingface/transformers`. It will be in the next release!<|||||>This is still happening on Databricks even though I re-installed the package several times today. Any thoughts?<|||||>@Tolga28A Can you document which exact command(s) you run on Databricks (and how)?<|||||>pip install git+https://github.com/huggingface/transformers from transformers import pipeline classifier = pipeline('zero-shot-classification') and the output is: KeyError: "Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']" --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <command-2362828626522668> in <module> ----> 1 classifier = pipeline('zero-shot-classification') /local_disk0/.ephemeral_nfs/envs/pythonEnv-a6d3a5c1-2f0b-495b-828f-f792f8695d17/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs) 1819 # Retrieve the task 1820 if task not in SUPPORTED_TASKS: -> 1821 raise KeyError("Unknown task {}, available tasks are {}".format(task, list(SUPPORTED_TASKS.keys()))) 1822 1823 framework = framework or get_framework(model) KeyError: "Unknown task zero-shot-classification, available tasks are ['feature-extraction', 'sentiment-analysis', 'ner', 'question-answering', 'fill-mask', 'summarization', 'translation_en_to_fr', 'translation_en_to_de', 'translation_en_to_ro', 'text-generation']"<|||||>Start from a brand new venv or uninstall transformers before re-installing?
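Once a master install is in place, the pipeline can be used roughly like this (the text and labels below are only illustrative):

```python
# Minimal zero-shot-classification usage sketch; requires a transformers
# version that ships the task (master at the time of this issue).
from transformers import pipeline

classifier = pipeline("zero-shot-classification")
result = classifier(
    "The new GPU drastically reduces training time.",
    candidate_labels=["hardware", "politics", "cooking"],
)
print(result["labels"][0], result["scores"][0])  # highest-scoring label first
```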
transformers
6,457
closed
Add POS tagging and Phrase chunking token classification examples
This PR adds POS tagging and Phrase chunking examples to token classification examples. The current example (NER) is minimally adjusted to allow users to experiment with their token classification model training easily. Although experimenting with token classifications other than NER token classification is already possible for skilled developers, this PR lowers the barrier to entry even further and demonstrates HF extensibility. The adjustments made consist of: - extracting [TokenClassificationTask](https://github.com/vblagoje/transformers/blob/6caa8c1946fb9e3fb76fad081833805b25b182df/examples/token-classification/utils_ner.py#L69) superclass - implementing the specific task particulars (reading of InputExample etc.) in task [subclasses](https://github.com/vblagoje/transformers/blob/6caa8c1946fb9e3fb76fad081833805b25b182df/examples/token-classification/tasks.py) - "dynamic loading" of a task [subclass](https://github.com/vblagoje/transformers/blob/6caa8c1946fb9e3fb76fad081833805b25b182df/examples/token-classification/run_ner.py#L118) depending on the token classification task trained I also noticed that: - [NER dataset](https://github.com/vblagoje/transformers/blob/6caa8c1946fb9e3fb76fad081833805b25b182df/examples/token-classification/run.sh#L1) used is unavailable and should be replaced. I didn't replace it in this PR - PL training needs to be slightly retrofitted to adjust for the latest PL's BaseTransformer master changes. I made the change to make sure my changes work for these new examples If you think adding one rather than two token task classification example is enough (say POS tagging) let me know - I'll remove the other. Also, please let me know if any additional adjustments are needed.
08-13-2020 09:17:44
08-13-2020 09:17:44
Hi @vblagoje , thanks for adding this :+1: GermEval dataset is currently not available - it seems that they've relaunched the shared task website. This dataset removal will also affect libraries such as Flair or `nlp` so I will try to find another mirror, thanks for reporting it! For PoS tagging it would be awesome if you could also report/output accuracy after training - just import `accuracy_score` from the `seqeval` package :)<|||||>Thanks for the review @stefan-it Let me know if there are any additional suggestions. Perhaps we can add appropriate URLs for the GermEval dataset and remove the chunking example if needed. <|||||>This looks great, thanks! Note that there is a big rework of the examples to use the nlp library and Trainer in the pipeline. We're polishing the APIs before we start converting every script. I'll tag you when we get to this one to make sure we don't break anything. In the meantime, could you take care of the styling issue so we can merge?<|||||>Ok @sgugger please do ping me and I'll make sure that all token classification examples work as expected, perhpas I can help with the transition. I am not sure why CI fails for styling, more specifically isort `ERROR: examples/token-classification/tasks.py Imports are incorrectly sorted.` It passes both on my working laptop and training machine. Could you please tell me how imports are incorrectly sorted in [tasks.py](https://github.com/vblagoje/transformers/blob/token_classification_examples/examples/token-classification/tasks.py) ?<|||||>It may be because of the dep you're adding to examples. It should probably be added in the `known_third_party` list [here](https://github.com/huggingface/transformers/blob/master/setup.cfg).<|||||>Ok @sgugger `check_code_quality` passes now, but there are other new failures. On a first look, they seem transient/unrelated to this PR? <|||||>Looks flaky, re-triggered the CI
transformers
6,456
closed
Open-Retrieval Question Answering (ORQA)
# 🌟 New model addition Open-Retrieval Question Answering system (ORQA) was introduced in the paper https://arxiv.org/abs/1906.00300. This approach is very useful for those who work on Open Domain Question Answering. <!-- Important information --> ## Open source status * [x] the model implementation is available: All the implementation code has been released in https://github.com/google-research/language/tree/master/language/orqa * [x] the model weights are available: `gs://orqa-data/` * [x] who are the authors: @kentonl et al.
08-13-2020 09:03:28
08-13-2020 09:03:28
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,455
closed
MASS : A generalization of BERT and GPT
# 🌟 New model addition ## Model description MASS is a novel pre-training method for sequence-to-sequence language generation tasks. It randomly masks a sentence fragment in the encoder, and then predicts it in the decoder. In this way, MASS can jointly train the encoder and decoder to develop the capability of representation extraction and language modeling. This pre-training is very helpful when the encoder and decoder are shared between multiple languages. ## Open source status - [x] the model implementation is available: the model is implemented on top of fairseq [here.](https://github.com/microsoft/MASS) - [x] the model weights are available: pre-trained models for various language pairs, for unsupervised translation, supervised translation and abstractive summarization, are provided on the GitHub repo itself. - [x] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu are the authors: ( @StillKeepTry , @tobyoup , @xutaatmicrosoftdotcom ) This is my first time contributing to this repository, so forgive me for any mistakes. Please let me know whether I should do it or not. Also, if anyone wants to come along and help, please let me know that too! 😀
08-13-2020 07:51:18
08-13-2020 07:51:18
Can also try MP-Net of theirs next. <|||||>Sorry, just saw the request for MP-Net [here](https://github.com/huggingface/transformers/issues/4308). Seems I was behind. So, shall I close this issue, or does anyone still want a separate MASS model here? @RyanHuangNLP<|||||>@RyanHuangNLP @StillKeepTry , @tobyoup , @xutaatmicrosoftdotcom <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,454
closed
Memory Issue while following LM tutorial
(Didn't get answer) https://stackoverflow.com/questions/63387831/memory-issue-while-following-lm-tutorial SPECS: torch==1.5.0 transformers==3.0.2 OS: Windows 10 CUDA: 10.1 GPU: RTX 2060 6G VRAM (x2) RAM: 32GB tutorial: https://huggingface.co/blog/how-to-train Hello I am trying to train my own language model and I have had some memory issues. I have tried to run some of this code in Pycharm on my computer and then trying to replicate in my Collab Pro Notebook. ## First, my code ``` from transformers import RobertaConfig, RobertaTokenizerFast, RobertaForMaskedLM, LineByLineTextDataset from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments config = RobertaConfig(vocab_size=60000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1) tokenizer = RobertaTokenizerFast.from_pretrained("./MODEL DIRECTORY", max_len=512) model = RobertaForMaskedLM(config=config) print("making dataset") dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="./total_text.txt", block_size=128) print("making c") data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) training_args = TrainingArguments(output_dir="./MODEL DIRECTORY", overwrite_output_dir=True, num_train_epochs=1, per_gpu_train_batch_size=64, save_steps=10000, save_total_limit=2) print("Building trainer") trainer = Trainer(model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True) trainer.train() trainer.save_model("./MODEL DIRECTORY") ``` `"./total_text.txt"` being a 1.7GB text file. ## PyCharm Attempt This code on pycharm builds the dataset and then would throw an error saying that my preferred gpu was running out of memory, and that Torch was already using 3.7GiB of memory. I tried: * import gc doing a gc clear to try to flush what ever was going on my gpu * Decreasing my batch size for my gpu (training only happened on a batch size of 8 resulting in 200,000+ epochs that all took 1.17 seconds) * Setting my `os.environ["CUDA_VISIBLE_OBJECTS"] =""` so that torch would have to use my CPU and not my GPU. Still threw same gpu memory error... So succumbing to the fact that torch, for the time being, was forcing itself to use my gpu, I decided to go to Collab. ## Collab Attempt Collab has different issues with my code. It does not have the memory to build the dataset, and crashes due to RAM shortages. I purchased a Pro account and then increased the usable RAM to 25GB, still memory shortages. Cheers!
08-13-2020 07:37:21
08-13-2020 07:37:21
Hi @raceee , GPU: RTX 2060 6G VRAM (x2) is a 6GB GPU, so I don't think you will be able to use `batch_size` 64 with it. Try lowering your batch_size if you are running into OOM. As for the big dataset, take a look at the [nlp](https://github.com/huggingface/nlp) package, it will allow you to load and process data lazily, so you won't face the RAM issue. <|||||>I just shrunk my train data set. Per advice #4668<|||||>> Hi @raceee , > GPU: RTX 2060 6G VRAM (x2) is a 6GB GPU, so I don't think you will be able to use `batch_size` 64 with it. Try lowering your batch_size if you are running into OOM. > > As for the big dataset, take a look at the [nlp](https://github.com/huggingface/nlp) package, it will allow you to load and process data lazily, so you won't face the RAM issue. Hi @patil-suraj, is there a code snippet that I could refer to? LineByLineTextDataset doesn't crash for me but takes forever.
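A rough sketch of the nlp-based suggestion, assuming the generic `text` loading script is available in the installed nlp/datasets version; the tokenizer path and file path are the ones from the issue:

```python
# Sketch: tokenize the 1.7GB corpus with nlp (now `datasets`) so it is
# memory-mapped on disk instead of being built in RAM by LineByLineTextDataset.
from nlp import load_dataset
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("./MODEL DIRECTORY", max_len=512)
dataset = load_dataset("text", data_files={"train": "./total_text.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

# map() writes its results to Apache Arrow files on disk, so the whole corpus
# never has to fit in RAM at once.
dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])
```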
transformers
6,453
closed
Clean directory after script testing
#6421 #6433 This PR cleans the directory after each script test to prevent bugs like these.
08-13-2020 07:05:24
08-13-2020 07:05:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=h1) Report > Merging [#6453](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **decrease** coverage by `2.51%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6453/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6453 +/- ## ========================================== - Coverage 79.89% 77.37% -2.52% ========================================== Files 153 153 Lines 27902 27902 ========================================== - Hits 22291 21588 -703 - Misses 5611 6314 +703 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-63.30%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: | | ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6453/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=footer). Last update [4ffea5c...05950f1](https://codecov.io/gh/huggingface/transformers/pull/6453?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I think cleaning the output dir seems to be a better solution than modifying `output_dir`, since people may still copy those `output_dir` in the future. What do you think? @LysandreJik
transformers
6,452
closed
getting error while training bert language model. "ValueError: Expected input batch_size (8) to match target batch_size (1024)."
from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') from transformers import BertForSequenceClassification model = BertForSequenceClassification.from_pretrained('bert-base-uncased') %%time from transformers import LineByLineTextDataset,TextDataset paths = '/content/drive/My Drive/MyFile.txt' dataset = TextDataset( tokenizer=tokenizer, file_path=paths, block_size=128, ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./EsperBERTo", overwrite_output_dir=True, num_train_epochs=1, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) %%time trainer.train() **'**''''''''''''''''''''''''''''''''''''''Getting error after executing trainer.train() ''''''''''''''''''''''''''''''''''''''''''''''''''''''''''**** ValueError Traceback (most recent call last) <ipython-input-12-0c647bc3a8b8> in <module>() ----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()') 10 frames <decorator-gen-60> in time(self, line, cell, local_ns) <timed eval> in <module>() /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 2214 if input.size(0) != target.size(0): 2215 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' -> 2216 .format(input.size(0), target.size(0))) 2217 if dim == 2: 2218 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) **ValueError: Expected input batch_size (8) to match target batch_size (1024).** please help me to resolve this issue.
08-13-2020 05:09:29
08-13-2020 05:09:29
The same problem? Expected input batch_size (15) to match target batch_size (0).<|||||>Same problem<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I have the same problem.<|||||>Same here! Anyone resolved it yet? @bharathrajcl @chaima-ai @gborodin @RufusGladiuz
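A likely explanation for the shape mismatch in this thread: `DataCollatorForLanguageModeling` produces one label per token (8 x 128 = 1024), while `BertForSequenceClassification` expects one label per example (8). If the goal is masked-language-model training, pairing the collator with a masked-LM head matches the shapes; a sketch, keeping the rest of the Trainer setup from the issue unchanged:

```python
# Sketch: pair the MLM data collator with a masked-LM model instead of a
# sequence-classification model (an assumption about the intent of the issue).
from transformers import BertForMaskedLM, BertTokenizer, DataCollatorForLanguageModeling

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```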
transformers
6,451
closed
ERROR: No matching distribution found for tokenizers==0.8.1.rc1 (from transformers)
``` ERROR: Could not find a version that satisfies the requirement tokenizers==0.8.1.rc1 (from transformers) (from versions: 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.1.0, 0.1.1, 0.2.0, 0.2.1, 0.3.0, 0.4.0, 0.4.1, 0.4.2, 0.5.0, 0.5.1, 0.5.2, 0.6.0, 0.7.0, 0.8.0, 0.8.1) ERROR: No matching distribution found for tokenizers==0.8.1.rc1 (from transformers) ``` The error above occurs when I pip install transformers from an anaconda environment of ![image](https://user-images.githubusercontent.com/8081512/90089485-7b162500-dd5c-11ea-9e0e-16f8044b490e.png)
08-13-2020 02:59:03
08-13-2020 02:59:03
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,450
closed
Error in PyTorch Trainer when used with TPU
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0a0+d6149a7 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer tensorflow: @jplu documentation: @sgugger --> @sgugger ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: SQUaD * [ ] my own task or dataset: (give details below) The following error arises when using the `run_squad_trainer.py` script with TPU: ```python Epoch: 0% 0/2 [00:00<?, ?it/s] Iteration: 0it [00:00, ?it/s]Exception in device=TPU:0: 'NoneType' object cannot be interpreted as an integer Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn fn(gindex, *args) File "/content/transformers/examples/question-answering/run_squad_trainer.py", line 156, in _mp_fn main() File "/content/transformers/examples/question-answering/run_squad_trainer.py", line 145, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 584, in train self.epoch = epoch + (step + 1) / len(epoch_iterator) TypeError: 'NoneType' object cannot be interpreted as an integer ``` ## To reproduce Steps to reproduce the behavior: 1. install transformers from the master branch 2. install pytorch-xla using the following command: ```shell VERSION = "20200325" curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py python pytorch-xla-env-setup.py --version $VERSION ``` 3. run the training script (I'm using 1 tpu core merely to simplify the logs. 
The error is the same (for each core) when using 8 cores): ```shell cd transformers/examples/ python ./xla_spawn.py --num_cores 1 \ question-answering/run_squad_trainer.py \ --model_name_or_path bert-base-multilingual-cased \ --model_type bert \ --data_dir $DATA_DIR \ --do_train \ --per_device_train_batch_size 64 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir $OUT_DIR \ --overwrite_output_dir ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The script runs and trains the model <!-- A clear and concise description of what you would expect to happen. -->
08-13-2020 00:46:35
08-13-2020 00:46:35
I am receiving the same error. Even without using TPU. python run_glue.py --model_name_or_path bert-base-cased --task_name MRPC --do_train --do_eval --data_dir $GLUE_DIR/MRPC/ --max_seq_length 128 --per_device_train_batch_size --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/mrpc_output/ <|||||>Try with the following: ```bash !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version "nightly" !pip install git+https://github.com/huggingface/transformers.git !git clone https://github.com/huggingface/transformers.git !python transformers/examples/xla_spawn.py --num_cores 1 \ question-answering/run_squad_trainer.py \ --model_name_or_path bert-base-multilingual-cased \ --model_type bert \ --data_dir $DATA_DIR \ --do_train \ --per_device_train_batch_size 64 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir $OUT_DIR \ --overwrite_output_dir ``` To run it with all `8 TPU` cores, you most likely need the `35GB RAM` runtime from Google Colab. You can find it in this [notebook](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil).<|||||>Thanks @AliOsm, it works!
transformers
6,449
closed
Trainer automatically drops unused columns in nlp datasets
Here is a basic example of use for evaluation on SST-2: ``` from nlp import load_dataset, load_metric from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments dataset = load_dataset('glue', 'sst2') metric = load_metric('glue', 'sst2') model_name = "distilbert-base-uncased-finetuned-sst-2-english" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name) encoded_dataset = dataset.map(lambda examples: tokenizer(examples['sentence'], padding=True), batched=True) args = TrainingArguments(output_dir = "test") def compute_metrics(eval_pred): predictions, labels = eval_pred return metric.compute(predictions.argmax(axis=-1), labels) trainer = Trainer( model, args, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], compute_metrics=compute_metrics, ) trainer.evaluate() ``` The goal is to then refine this new API by trying to use it in all examples.
08-12-2020 19:42:58
08-12-2020 19:42:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=h1) Report > Merging [#6449](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc820476a5c72060f810f825298befd5ec85da4d?el=desc) will **decrease** coverage by `2.12%`. > The diff coverage is `37.03%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6449/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6449 +/- ## ========================================== - Coverage 79.98% 77.86% -2.13% ========================================== Files 153 153 Lines 28005 28031 +26 ========================================== - Hits 22401 21827 -574 - Misses 5604 6204 +600 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.27% <ø> (ø)` | | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.28% <27.27%> (-0.57%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.16% <80.00%> (-0.03%)` | :arrow_down: | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6449/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=footer). Last update [bc82047...69d3ec5](https://codecov.io/gh/huggingface/transformers/pull/6449?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This is sweet!<|||||>Removed all the changes linked to metrics and moved the column dropping to anywhere we pass a Dataset (init, evaluate and predict). As discussed, we'll propose an API for the metrics once we have changed all examples to use `Trainer` and `nlp`, so we know exactly what the API has to support.
transformers
6,448
closed
[DO NOT SUBMIT] Run TPU examples for PR commits.
Trying out the CircleCI flow. I'll delete this PR after testing.
08-12-2020 17:47:29
08-12-2020 17:47:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=h1) Report > Merging [#6448](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc820476a5c72060f810f825298befd5ec85da4d&el=desc) will **increase** coverage by `0.09%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6448/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6448 +/- ## ========================================== + Coverage 79.98% 80.08% +0.09% ========================================== Files 153 153 Lines 28005 28005 ========================================== + Hits 22401 22429 +28 + Misses 5604 5576 -28 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+7.26%)` | :arrow_up: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6448/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=footer). Last update [bc82047...6567722](https://codecov.io/gh/huggingface/transformers/pull/6448?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>The TPU test succeeded: https://app.circleci.com/pipelines/github/huggingface/transformers/10480/workflows/c669aea7-b861-4b1b-b90c-d8c8b50e60dc/jobs/72547 I'll delete this PR now
transformers
6,447
closed
[TF Longformer] Improve Speed for TF Longformer
This PR: - adds a simple test for all tf models to verify that the forward function can be used in graph mode - optimizes TF Longformer by removing unnecessary calculations, such as `tf.transpose()` (in contrast to PyTorch, `tf.transpose()` allocates a new tensor and thus should be avoided). This also cleans up the code IMO. => These changes lead to a speed-up of 1.03x, which is actually not that much...more details in the benchmark below. After a lot of digging, TF XLA will not be very easy to support, as kernels that are heavily used in this model, such as `tf.where`, are not implemented for XLA (yet). So TF Longformer on TPU will sadly not work for the moment @ibeltagy ### Conclusion For me the PR was also a good exercise to see whether TF can be significantly sped up by removing unnecessary tensor allocations. It seems like it's not really worth it to go through all the tf models if the improvement in speed is only around 2-3%.
08-12-2020 16:41:48
08-12-2020 16:41:48
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=h1) Report > Merging [#6447](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **increase** coverage by `0.84%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6447/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6447 +/- ## ========================================== + Coverage 78.96% 79.81% +0.84% ========================================== Files 157 157 Lines 28486 28479 -7 ========================================== + Hits 22495 22730 +235 + Misses 5991 5749 -242 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <100.00%> (+73.82%)` | :arrow_up: | | [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `98.67% <100.00%> (-0.03%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+2.60%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (+2.75%)` | :arrow_up: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6447/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=footer). Last update [a75c64d...ae3bbe2](https://codecov.io/gh/huggingface/transformers/pull/6447?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>### Speed Benchmarking: Running this command on the master branch: ``` python examples/benchmarking/run_benchmark.py --models allenai/longformer-base-4096 --no_memory --sequence_length 512 1024 ``` on this env: ``` - transformers_version: 3.0.2 - framework: TensorFlow - eager_mode: False - use_xla: False - framework_version: 2.2.0 - python_version: 3.8.5 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2020-08-14 - time: 10:32:09.525696 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: N/A - use_gpu: True - num_gpus: 1 - gpu: TITAN RTX - gpu_ram_mb: 24217 - gpu_power_watts: 280.0 - gpu_performance_state: 0 - use_tpu: False ``` gives: ``` ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- allenai/longformer-base-4096 8 512 0.229 allenai/longformer-base-4096 8 1024 0.463 -------------------------------------------------------------------------------- ``` On this branch the speed is improved to: ``` ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- allenai/longformer-base-4096 8 512 0.223 allenai/longformer-base-4096 8 1024 0.447 -------------------------------------------------------------------------------- ``` So we can see an improvement of ca. 3%, which is not that much actually... I guess it's interesting to see what effect removing some unnecessary `tf.transpose()` has in TF, but it might not be worth to go through all `modeling_tf_...` files trying to remove `tf.transpose()` and similar functions.
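A rough illustration of the graph-mode check the PR describes (not the actual test that was added): wrap the forward pass in `tf.function` and make sure it still executes.

```python
# Sketch of a graph-mode sanity check for TF Longformer; assumes a transformers
# version that ships TFLongformerModel.
import tensorflow as tf
from transformers import LongformerTokenizer, TFLongformerModel

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = TFLongformerModel.from_pretrained("allenai/longformer-base-4096")
inputs = tokenizer("Graph mode check.", return_tensors="tf")

@tf.function  # compiles the call into a graph instead of running eagerly
def forward(input_ids, attention_mask):
    return model(input_ids, attention_mask=attention_mask)

outputs = forward(inputs["input_ids"], inputs["attention_mask"])
```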
transformers
6,446
closed
Get GKE logs via kubectl logs instead of gcloud logging read.
This should be a much faster method of getting logs from GKE back to the CircleCI machine.
08-12-2020 15:44:37
08-12-2020 15:44:37
transformers
6,445
closed
Test model outputs equivalence
Adds a test to check that the model outputs keep the same values and order as the tuple output.
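For illustration, a rough sketch of what such an equivalence check can look like (the tiny config and inputs are made up, and it assumes the model accepts a `return_dict` flag and that the new output objects expose `to_tuple()`):
```python
import torch
from transformers import BertConfig, BertModel

# Tiny randomly-initialized model, just to exercise the check (not a pretrained checkpoint).
config = BertConfig(hidden_size=32, num_hidden_layers=2, num_attention_heads=2, intermediate_size=64)
model = BertModel(config)
model.eval()

inputs = {"input_ids": torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])}

with torch.no_grad():
    dict_outputs = model(**inputs, return_dict=True)    # model output object
    tuple_outputs = model(**inputs, return_dict=False)  # plain tuple

# Same values, same order.
for dict_value, tuple_value in zip(dict_outputs.to_tuple(), tuple_outputs):
    assert torch.allclose(dict_value, tuple_value, atol=1e-5)
```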
08-12-2020 15:43:46
08-12-2020 15:43:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=h1) Report > Merging [#6445](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/96c3329f19f28e47eab7f9f20ed3504619e16722&el=desc) will **increase** coverage by `0.38%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6445/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6445 +/- ## ========================================== + Coverage 79.95% 80.33% +0.38% ========================================== Files 153 153 Lines 27932 27928 -4 ========================================== + Hits 22332 22437 +105 + Misses 5600 5491 -109 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `98.69% <100.00%> (+0.63%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-27.52%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.23% <0.00%> (+0.21%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (+0.37%)` | :arrow_up: | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `96.09% <0.00%> (+0.41%)` | :arrow_up: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `88.13% <0.00%> (+0.48%)` | :arrow_up: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `79.69% <0.00%> (+0.56%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+0.61%)` | :arrow_up: | | ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6445/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=footer). 
Last update [96c3329...400c5ad](https://codecov.io/gh/huggingface/transformers/pull/6445?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome that we can remove the `cast_to_bool` hack here. Maybe we can remove it in `t5_modeling_tf_` as well<|||||>Side note, you should double-check the slow tests `test_saved_model_with_attentions_output` and `test_saved_model_with_hidden_states_output` still pass with the changes for the longformer model, as they are the ones that fail for t5 when we remove the `cast_to_bool` thingy.<|||||>> Side note, you should double-check the slow tests `test_saved_model_with_attentions_output` and `test_saved_model_with_hidden_states_output` still pass with the changes for the longformer model, as they are the ones that fail for t5 when we remove the `cast_to_bool` thingy. They did not pass with Longformer before as discussed with @jplu on the PR: https://github.com/huggingface/transformers/pull/5764#issuecomment-670002430, they should actually pass now I think :-)
transformers
6,444
closed
Can't download 'Helsinki-NLP/opus-mt-hye-eng' model
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-51-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: NO - Using distributed or parallel set-up in script?: not sure ## Information Model I am using: MarianMTModel, AutoModelWithLMHead The problem arises when using the official example scripts (https://huggingface.co/Helsinki-NLP/opus-mt-hye-eng): ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-hye-eng") model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-hye-eng") ``` Gives error ``` /home/sonja/.local/lib/python3.6/site-packages/transformers/modeling_auto.py:798: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. FutureWarning, --------------------------------------------------------------------------- OSError Traceback (most recent call last) ~/.local/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 654 if resolved_archive_file is None: --> 655 raise EnvironmentError 656 except EnvironmentError: OSError: During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-2-04055899a280> in <module> 1 from transformers import AutoTokenizer, AutoModelWithLMHead 2 tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-hye-eng") ----> 3 model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-hye-eng") ~/.local/lib/python3.6/site-packages/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 804 for config_class, model_class in MODEL_WITH_LM_HEAD_MAPPING.items(): 805 if isinstance(config, config_class): --> 806 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) 807 raise ValueError( 808 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n" ~/.local/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 660 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME}.\n\n" 661 ) --> 662 raise EnvironmentError(msg) 663 664 if resolved_archive_file == archive_file: OSError: Can't load weights for 'Helsinki-NLP/opus-mt-hye-eng'. Make sure that: - 'Helsinki-NLP/opus-mt-hye-eng' is a correct model identifier listed on 'https://huggingface.co/models' - or 'Helsinki-NLP/opus-mt-hye-eng' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. ``` Tried to download the model manually from link I got while debugging (https://cdn.huggingface.co/Helsinki-NLP/opus-mt-hye-eng/pytorch_model.bin) but it doesn't return anything relatable. Although for 'hye-rus' model (https://cdn.huggingface.co/Helsinki-NLP/opus-mt-hye-rus/pytorch_model.bin) I can easily download the file. Works fine for "eng-hye" and "rus-hye" too. Hjälp, @sshleifer (sorry if mistagged)
08-12-2020 15:40:22
08-12-2020 15:40:22
Replicated, will fix.<|||||>use ```python AutoModelForSeq2SeqLM.from_pretrained('Helsinki-NLP/opus-mt-hy-en') ``` It performs better than the later hye-eng version for armenian-english. I removed hye-eng.<|||||>Thank you! It works
transformers
6,443
closed
Simple train from the start for translation transformer
Hi, sorry to bother you. I am trying to train a translation transformer; I have seen the documentation, but I am still really lost. I have two datasets: the original messages and the translated messages. Example: dataset_x.txt This is the message. This is another message. Another one. dataset_y.txt This<&>is the message<^>. This is another<&> message. Another one<%>. I would like a simple script that could tokenize these datasets and train any suitable model from scratch. Could anyone help me? Thanks a bunch!
08-12-2020 15:28:31
08-12-2020 15:28:31
I found the repo simpletransformers, which uses this marvelous repo of yours. I got to run a transformer through there, so I'll be closing the issue now. Thanks anyway!
transformers
6,442
closed
Adding PaddingDataCollator
New version of #6398
08-12-2020 15:22:31
08-12-2020 15:22:31
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=h1) Report > Merging [#6442](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/96c3329f19f28e47eab7f9f20ed3504619e16722&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `52.94%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6442/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6442 +/- ## ========================================== - Coverage 79.95% 79.93% -0.02% ========================================== Files 153 153 Lines 27932 27947 +15 ========================================== + Hits 22332 22339 +7 - Misses 5600 5608 +8 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.27% <ø> (ø)` | | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6442/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `90.90% <52.94%> (-5.68%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=footer). Last update [96c3329...a153ed4](https://codecov.io/gh/huggingface/transformers/pull/6442?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,441
closed
MBartForConditionalGeneration
This PR adds MBartForConditionalGeneration. Regarding #6416 @sshleifer
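For context, a rough usage sketch of the new class (the checkpoint name and generation arguments below follow the usual MBart en-ro recipe and are assumptions, not part of this PR):
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "facebook/mbart-large-en-ro"  # assumed checkpoint
tokenizer = MBartTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

batch = tokenizer(["UN Chief Says There Is No Military Solution in Syria"], return_tensors="pt")
generated = model.generate(
    **batch,
    decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"],  # MBart starts decoding from a target language code
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```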
08-12-2020 14:49:14
08-12-2020 14:49:14
The failure is coming from `test_modeling_tf_electra.py`<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=h1) Report > Merging [#6441](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e0b1dc8954b87c18f77a82000e81e02683b8eb1&el=desc) will **increase** coverage by `0.29%`. > The diff coverage is `88.22%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6441/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6441 +/- ## ========================================== + Coverage 79.77% 80.06% +0.29% ========================================== Files 148 156 +8 Lines 27214 28024 +810 ========================================== + Hits 21710 22438 +728 - Misses 5504 5586 +82 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | | | [src/transformers/data/test\_generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Rlc3RfZ2VuZXJhdGlvbl91dGlscy5weQ==) | `0.00% <0.00%> (ø)` | | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (-0.91%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <ø> (ø)` | | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <ø> (+4.22%)` | :arrow_up: | | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (-0.13%)` | :arrow_down: | | [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `51.92% <28.57%> (-20.81%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <37.50%> (-0.18%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <50.00%> (+1.79%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `90.90% <52.94%> (-5.68%)` | :arrow_down: | | ... and [69 more](https://codecov.io/gh/huggingface/transformers/pull/6441/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=footer). Last update [e92efcf...49f74a5](https://codecov.io/gh/huggingface/transformers/pull/6441?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks @sgugger , I've applied the suggestions.
transformers
6,440
closed
Getting Error from Default Data Collator while training Bert on SQUAD 2.0
Hello, I'm a fresher playing with the transformers. I suppose to train the BERT model `bert-base-cased` on SQUAD 2.0, but having an error in `data_collator`: It shows an error of `TypeError: an integer is required` Here is the detail: ``` Epoch: 0%| | 0/2 [00:00<?, ?it/s] Iteration: 0%| | 0/4135 [00:00<?, ?it/s] Epoch: 0%| | 0/2 [00:00<?, ?it/s] Traceback (most recent call last): File "/scratch/yyu/codebase/odqa/odqa/reader/reader_trainer.py", line 138, in <module> trainer.train() File "/scratch/yyu/codebase/odqa/odqa/reader/reader_trainer.py", line 119, in train self._trainer.train() File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/trainer.py", line 456, in train for step, inputs in enumerate(epoch_iterator): File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/tqdm/std.py", line 1130, in __iter__ for obj in iterable: File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 363, in __next__ data = self._next_data() File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 403, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/data/data_collator.py", line 62, in default_data_collator batch[k] = torch.tensor([f[k] for f in features], dtype=torch.long) TypeError: an integer is required (got type dict) ``` Here is my trainer configuration: ``` training_args = TrainingArguments( output_dir=self.output_dir, overwrite_output_dir=True, num_train_epochs=2, per_gpu_train_batch_size=32, # per_device_eval_batch_size=64, warmup_steps=500, weight_decay=0.01, # evaluate_during_training=True, save_steps=10_000, logging_dir='./logs', ) self._trainer = Trainer( model=self._model, args=training_args, compute_metrics=self.compute_metrics, train_dataset=self.train_dataset, eval_dataset=self.test_dataset ) ``` Does anyone know why it happens? And how can I fix this error? I don't know if this is one mistake in my code or from the transformers :-( Thanks for all help. Best
08-12-2020 14:31:00
08-12-2020 14:31:00
First things first, please use ``` when copy-pasting stack traces or code (I've edited your post to use that) otherwise it's not really readable. It's hard to know what's going on without knowing how you built your `train_dataset`. The data collator seems to have problems with it. The items should be dictionaries of list of ints/tensors. It seems there is some nested dictionary here.<|||||>Sorry about that I will remember to use it :-) And regarding the `train_dataset` I used `SquadV2Processor` to get examples for `train` and `eval`, and then convert them using `squad_convert_examples_to_features`: ``` _processor = SquadV2Processor() train_examples = _processor.get_train_examples(squad_dir, filename='SQuAD-v2.0-train.json') train_dataset = squad_convert_examples_to_features(train_examples, self._tokenizer, max_seq_length=384, doc_stride=128, threads=2,max_query_length=64, is_training=True) ``` Is this correct? <|||||>Could you print the result of `self.train_dataset[0]`? It would be helpful to see what the items look like.<|||||>i can't find any clues from the `self.train_dataset[0]`. It is a `SquadFeature` object, like: ``` 2020-08-12 20:48:57,663 -- [__main__:57][INFO]: train_dataset[0] input_ids: [101, 1706, 2292, 1225, 1103, 6567, 2090, 9273, 2845, 1107, 8109, 1107, 10111, 20500, 1699, 136, 102, 22182, 1193, 117, 1103, 1278, 1144, 170, 2336, 1959, 119, 1335, 4184, 1103, 4304, 4334, 112, 188, 2284, 10945, 1110, 170, 5404, 5921, 1104, 1103, 6567, 2090, 119, 13301, 1107, 1524, 1104, 1103, 4304, 4334, 1105, 4749, 1122, 117, 1110, 170, 7335, 5921, 1104, 4028, 1114, 1739, 1146, 14089, 5591, 1114, 1103, 7051, 107, 159, 21462, 1566, 24930, 2508, 152, 1306, 3965, 107, 119, 5893, 1106, 1103, 4304, 4334, 1110, 1103, 19349, 1104, 1103, 11373, 4641, 119, 13301, 1481, 1103, 171, 17506, 9538, 1110, 1103, 144, 10595, 2430, 117, 170, 14789, 1282, 1104, 8070, 1105, 9284, 119, 1135, 1110, 170, 16498, 1104, 1103, 176, 10595, 2430, 1120, 10111, 20500, 117, 1699, 1187, 1103, 6567, 2090, 25153, 1193, 1691, 1106, 2216, 17666, 6397, 3786, 1573, 25422, 13149, 1107, 8109, 119, 1335, 1103, 1322, 1104, 1103, 1514, 2797, 113, 1105, 1107, 170, 2904, 1413, 1115, 8200, 1194, 124, 11739, 1105, 1103, 3487, 17917, 114, 117, 1110, 170, 3014, 117, 2030, 2576, 5921, 1104, 2090, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 2020-08-12 20:48:57,663 -- [__main__:58][INFO]: train_dataset[0] attention_mask: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 2020-08-12 20:48:57,663 -- [__main__:59][INFO]: train_dataset[0] token_type_ids: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 2020-08-12 20:48:57,663 -- [__main__:60][INFO]: train_dataset[0] cls_index: 0 2020-08-12 20:48:57,663 -- [__main__:61][INFO]: train_dataset[0] p_mask: [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 2020-08-12 20:48:57,663 -- [__main__:62][INFO]: train_dataset[0] example_index: 0 2020-08-12 20:48:57,663 -- [__main__:63][INFO]: train_dataset[0] unique_id: 1000000000 2020-08-12 20:48:57,663 -- [__main__:64][INFO]: train_dataset[0] paragraph_len: 163 2020-08-12 20:48:57,663 -- [__main__:65][INFO]: train_dataset[0] token_is_max_context: {17: True, 18: True, 19: True, 20: True, 21: True, 22: True, 23: True, 24: True, 25: True, 26: 
True, 27: True, 28: True, 29: True, 30: True, 31: True, 32: True, 33: True, 34: True, 35: True, 36: True, 37: True, 38: True, 39: True, 40: True, 41: True, 42: True, 43: True, 44: True, 45: True, 46: True, 47: True, 48: True, 49: True, 50: True, 51: True, 52: True, 53: True, 54: True, 55: True, 56: True, 57: True, 58: True, 59: True, 60: True, 61: True, 62: True, 63: True, 64: True, 65: True, 66: True, 67: True, 68: True, 69: True, 70: True, 71: True, 72: True, 73: True, 74: True, 75: True, 76: True, 77: True, 78: True, 79: True, 80: True, 81: True, 82: True, 83: True, 84: True, 85: True, 86: True, 87: True, 88: True, 89: True, 90: True, 91: True, 92: True, 93: True, 94: True, 95: True, 96: True, 97: True, 98: True, 99: True, 100: True, 101: True, 102: True, 103: True, 104: True, 105: True, 106: True, 107: True, 108: True, 109: True, 110: True, 111: True, 112: True, 113: True, 114: True, 115: True, 116: True, 117: True, 118: True, 119: True, 120: True, 121: True, 122: True, 123: True, 124: True, 125: True, 126: True, 127: True, 128: True, 129: True, 130: True, 131: True, 132: True, 133: True, 134: True, 135: True, 136: True, 137: True, 138: True, 139: True, 140: True, 141: True, 142: True, 143: True, 144: True, 145: True, 146: True, 147: True, 148: True, 149: True, 150: True, 151: True, 152: True, 153: True, 154: True, 155: True, 156: True, 157: True, 158: True, 159: True, 160: True, 161: True, 162: True, 163: True, 164: True, 165: True, 166: True, 167: True, 168: True, 169: True, 170: True, 171: True, 172: True, 173: True, 174: True, 175: True, 176: True, 177: True, 178: True, 179: True} 2020-08-12 20:48:57,663 -- [__main__:66][INFO]: train_dataset[0] tokens: ['[CLS]', 'To', 'whom', 'did', 'the', 'Virgin', 'Mary', 'allegedly', 'appear', 'in', '1858', 'in', 'Lou', '##rdes', 'France', '?', '[SEP]', 'Architectural', '##ly', ',', 'the', 'school', 'has', 'a', 'Catholic', 'character', '.', 'At', '##op', 'the', 'Main', 'Building', "'", 's', 'gold', 'dome', 'is', 'a', 'golden', 'statue', 'of', 'the', 'Virgin', 'Mary', '.', 'Immediately', 'in', 'front', 'of', 'the', 'Main', 'Building', 'and', 'facing', 'it', ',', 'is', 'a', 'copper', 'statue', 'of', 'Christ', 'with', 'arms', 'up', '##rai', '##sed', 'with', 'the', 'legend', '"', 'V', '##eni', '##te', 'Ad', 'Me', 'O', '##m', '##nes', '"', '.', 'Next', 'to', 'the', 'Main', 'Building', 'is', 'the', 'Basilica', 'of', 'the', 'Sacred', 'Heart', '.', 'Immediately', 'behind', 'the', 'b', '##asi', '##lica', 'is', 'the', 'G', '##rot', '##to', ',', 'a', 'Marian', 'place', 'of', 'prayer', 'and', 'reflection', '.', 'It', 'is', 'a', 'replica', 'of', 'the', 'g', '##rot', '##to', 'at', 'Lou', '##rdes', ',', 'France', 'where', 'the', 'Virgin', 'Mary', 'reputed', '##ly', 'appeared', 'to', 'Saint', 'Bern', '##ade', '##tte', 'So', '##ubi', '##rous', 'in', '1858', '.', 'At', 'the', 'end', 'of', 'the', 'main', 'drive', '(', 'and', 'in', 'a', 'direct', 'line', 'that', 'connects', 'through', '3', 'statues', 'and', 'the', 'Gold', 'Dome', ')', ',', 'is', 'a', 'simple', ',', 'modern', 'stone', 'statue', 'of', 'Mary', '.', '[SEP]'] 2020-08-12 20:48:57,663 -- [__main__:67][INFO]: train_dataset[0] token_to_orig_map: {17: 0, 18: 0, 19: 0, 20: 1, 21: 2, 22: 3, 23: 4, 24: 5, 25: 6, 26: 6, 27: 7, 28: 7, 29: 8, 30: 9, 31: 10, 32: 10, 33: 10, 34: 11, 35: 12, 36: 13, 37: 14, 38: 15, 39: 16, 40: 17, 41: 18, 42: 19, 43: 20, 44: 20, 45: 21, 46: 22, 47: 23, 48: 24, 49: 25, 50: 26, 51: 27, 52: 28, 53: 29, 54: 30, 55: 30, 56: 31, 57: 32, 58: 33, 59: 34, 60: 35, 61: 36, 62: 37, 63: 38, 64: 
39, 65: 39, 66: 39, 67: 40, 68: 41, 69: 42, 70: 43, 71: 43, 72: 43, 73: 43, 74: 44, 75: 45, 76: 46, 77: 46, 78: 46, 79: 46, 80: 46, 81: 47, 82: 48, 83: 49, 84: 50, 85: 51, 86: 52, 87: 53, 88: 54, 89: 55, 90: 56, 91: 57, 92: 58, 93: 58, 94: 59, 95: 60, 96: 61, 97: 62, 98: 62, 99: 62, 100: 63, 101: 64, 102: 65, 103: 65, 104: 65, 105: 65, 106: 66, 107: 67, 108: 68, 109: 69, 110: 70, 111: 71, 112: 72, 113: 72, 114: 73, 115: 74, 116: 75, 117: 76, 118: 77, 119: 78, 120: 79, 121: 79, 122: 79, 123: 80, 124: 81, 125: 81, 126: 81, 127: 82, 128: 83, 129: 84, 130: 85, 131: 86, 132: 87, 133: 87, 134: 88, 135: 89, 136: 90, 137: 91, 138: 91, 139: 91, 140: 92, 141: 92, 142: 92, 143: 93, 144: 94, 145: 94, 146: 95, 147: 96, 148: 97, 149: 98, 150: 99, 151: 100, 152: 101, 153: 102, 154: 102, 155: 103, 156: 104, 157: 105, 158: 106, 159: 107, 160: 108, 161: 109, 162: 110, 163: 111, 164: 112, 165: 113, 166: 114, 167: 115, 168: 115, 169: 115, 170: 116, 171: 117, 172: 118, 173: 118, 174: 119, 175: 120, 176: 121, 177: 122, 178: 123, 179: 123} ``` <|||||>Ok. You need to remove some keys from it as it has way too many attributes (the failure comes from the fact the data collator is trying to build a tensor from the `token_is_max_context` fields). The easiest way is probably to use the `SquadDataset` in `data.datasets.squad`, or you can just copy its `__getitem__` method and put it on your own dataset class.<|||||>Thanks Sylvain. But I found that my transformers library doesn't contain `data.datasets.squad` at all. I'm using `transformers==3.0.2`. It only contains `glue` and `language_model` two classes. <|||||>Would you mind to explain how the error comes from the data collator? The `token_is_max_context` only contains a list of `True` flags. Is there a certain order (keys) of these features?<|||||>The data collator should receive a dictionary string to list of ints/tensors. The value associated to `token_is_max_context` is neither a list of int or a tensor, hence the error. Note that you dataset should have items that are dictionaries with keys that are argument names your model will accept, which another reason why `token_is_max_context` needs to be removed.<|||||>Okay, it seems to make sense. There would be another question: does `transformers` process the original version of SQUAD dataset? I download the data from the SQUAD website, which shouldn't have any errors if it is the exact same as the one `transformers` used. Can I simply remove `token_is_max_context` from the `SquadFeatures` to solve this error? Also which version of `transformers` contains data.datasets.squad? I can't find it in 3.0.0, 2.9.0 and even 2.5.0. <|||||>You need to only extract : input_ids, attention_mask and token_type_ids as the rest is probably not helpful to your model. AFAICT the file [data.datasets.squad](https://github.com/huggingface/transformers/blob/master/src/transformers/data/datasets/squad.py) has been there for a while, so you should have it in those versions.<|||||>Thanks for help, Sylvain. Let me give a try today. Hopefully, it works fine. <|||||>Hi Sylvain, Sorry for the interruption again. 
I've created a new class that only contains `input_ids`, `attention_mask` and `token_type_ids`, then the system gives an error: ``` Traceback (most recent call last): File "/scratch/yyu/codebase/odqa/odqa/reader/reader_trainer.py", line 190, in <module> trainer.train() File "/scratch/yyu/codebase/odqa/odqa/reader/reader_trainer.py", line 168, in train self._trainer.train() File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/trainer.py", line 375, in train for step, inputs in enumerate(epoch_iterator): File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/tqdm/std.py", line 1130, in __iter__ for obj in iterable: File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 363, in __next__ data = self._next_data() File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 403, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/data/data_collator.py", line 91, in collate_batch batch = self._tensorize_batch(examples) File "/scratch/yyu/codebase/odqa/ENV/lib/python3.6/site-packages/transformers/data/data_collator.py", line 99, in _tensorize_batch length_of_first = examples[0].size(0) AttributeError: 'SimpleSquadFeature' object has no attribute 'size' ``` But I didn't find the `size()` function in the original class (SquadFeature) either. Do you know why it happens like that? And also I've installed transformers from 2.1.0 to 3.0.2 but can't import a class called `SquadDataset`, there is not a path [data.datasets.squad](https://github.com/huggingface/transformers/blob/master/src/transformers/data/datasets/squad.py), but `data.processor.squad`. When did transformers delete the `SquadDataset`? Thanks for help. Yanchao<|||||>Don't create a special class, just return a dictionary with those fields and it should work.<|||||>Hi, I've had nearly the same problem as yy147 (the TypeError related to the default_data_collator), even though my train_dataset had items that were precisely a dictionary mapping the keys: 'input_ids', 'attention_mask', 'token_type_ids', to lists of ints, and 'label' to a torch.LongTensor. Strangely I managed to solve the problem by copying the transformers.data.data_collator.default_data_collator into my own code and letting the Trainer use that. Python version 3.6.8 torch version 1.6.0 transformers version 3.0.2 Hope it helps, Gijs <|||||>> Don't create a special class, just return a dictionary with those fields and it should work. Thanks Sylvain. I've found the `SquadDateset`, which is surprisingly not included in any transformers versions if you run `pip install transfermers==3.0.2`. I can only find it if I install `transformers` from the source. It seems something needs to be fixed. Best, Yanchao <|||||>Thanks for help, Gijs. I will try to copy and paste it later. It is a wired situation. <|||||>Hi @yy147 , I am also getting a similar error: ``` length_of_first = examples[0].size(0) AttributeError: 'dict' object has no attribute 'size ``` Have you managed to fix your error?<|||||>Hi @gungor2, I found another source code extended from transformers example `https://github.com/kamalkraj/BERT-SQuAD/blob/master/utils.py`. It gives a great example to solve the problem of `squad_examples_to_features`. 
It works well for me. Hope it is helpful for you. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi, I have a related issue I am trying to train the [**TrOCR**](https://huggingface.co/microsoft/trocr-large-handwritten) model on my own data what I tried: ``` # the class for loading data has function : def __getitem__(self, idx): file_name = self.df['file_name'][idx] text = self.df['text'][idx] # prepare image (i.e. resize + normalize) image = Image.open(self.root_dir + file_name).convert("RGB") pixel_values = self.processor(image, return_tensors="pt").pixel_values labels = self.processor.tokenizer(text, padding="max_length", max_length=self.max_target_length).input_ids labels = [label if label != self.processor.tokenizer.pad_token_id else -100 for label in labels] encoding = {"pixel_values": pixel_values.squeeze(), "labels": torch.tensor(labels)} return encoding training_args = Seq2SeqTrainingArguments( num_train_epochs=25, learning_rate=5e-5, predict_with_generate=True, evaluation_strategy="steps", per_device_train_batch_size=64, per_device_eval_batch_size=64, fp16=True, output_dir="/1/large/", logging_steps=100, save_steps=2000, eval_steps=5000, ) trainer = Seq2SeqTrainer( model=model, tokenizer=processor.feature_extractor, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=default_data_collator, ) ``` the feed input image to the model has a `height of 64 `fixed for all and different `width` The issue I see is: where the training stops after a few hours ``` Traceback (most recent call last): File "train.py", line 191, in <module> main() File "train.py", line 173, in main trainer.train() File "/home/user/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1521, in train return inner_training_loop( File "/home/user/venv/lib/python3.8/site-packages/transformers/trainer.py", line 1737, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/home/user/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__ data = self._next_data() File "/home/user/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/user/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/home/user/venv/lib/python3.8/site-packages/transformers/trainer_utils.py", line 696, in __call__ return self.data_collator(features) File "/home/user/venv/lib/python3.8/site-packages/transformers/data/data_collator.py", line 67, in default_data_collator return torch_default_data_collator(features) File "/home/user/venv/lib/python3.8/site-packages/transformers/data/data_collator.py", line 129, in torch_default_data_collator batch[k] = torch.stack([f[k] for f in features]) RuntimeError: stack expects each tensor to be equal size, but got [128] at entry 0 and [139] at entry 19 1%|▊ | 1356/166025 [40:59<82:58:15, 1.81s/it] ``` Transformer version: 4.22.2 @sgugger @NielsRogge @ <|||||>Hi, It looks like your target texts aren't having the same length. You need to not only pad but also set `truncation=True` to make sure all texts have 128 tokens.
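For completeness, a standalone version of the fix suggested in that last reply — the only change to the original snippet is the added `truncation=True` (processor name and target length are taken from the snippet above):
```python
from transformers import TrOCRProcessor

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-handwritten")
max_target_length = 128

labels = processor.tokenizer(
    "some target text",
    padding="max_length",
    truncation=True,               # the missing piece: cut over-long targets down to max_length
    max_length=max_target_length,
).input_ids
# replace padding token ids by -100 so they are ignored by the loss
labels = [token if token != processor.tokenizer.pad_token_id else -100 for token in labels]
```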
transformers
6,439
closed
TrainingArguments are ignored?!
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help Trainer: @sgugger ## Information Hey, I'm using `01_how-to-train.ipynb` to get a feeling for the object oriented way of training a bert model from scratch. Until now I've been using the scripts offered by offical bert repository. My target is to train all of my future Transformer models with your Huggingface interface (from scratch and of course fine tuning too). I used `max_steps = 500_000` but it gets completely ignored. After training is started the output says: ``` Iteration: 11639/422095 [1:52:03<70:16:42, 1.62it/s] Epoch 0/2 [00:00<?, ?it/s] ``` **Two epochs and 422095 iterations seems wrong!?** Official docs say _"max_steps = the total number of training steps to perform"_ Am I misinterpreting something? Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: * line by line dataset * training a bert language model from scratch (generating vocab, setting a config, ...) ## To reproduce Use colab "01_how-to-train.ipynb" (https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb ) and change TrainingArguments to the following: ``` training_args = TrainingArguments( output_dir="./smallBERTa", overwrite_output_dir=True, do_train=True, warmup_steps=5000, max_steps=500000, per_gpu_train_batch_size=64, save_steps=10_000, save_total_limit=2, ) ``` Yes, I am passing the `training_args` to the `Trainer()` object. ## Expected behavior I'm expecting to get 500.000 global training steps and just one epoch.
08-12-2020 13:34:48
08-12-2020 13:34:48
I don't see what's wrong here. There are 422,095 iterations in one of your epochs, so to get to 500,000 you'll do one full epoch plus the beginning of a second epoch. Training will stop once it has reached 500,000 steps. You can't have just one epoch, since a single epoch does not contain enough iterations to reach the `max_steps` you have given. That argument overrides the number of epochs. <|||||>Then I can definitely say: I misinterpreted the logs. There was something like "global steps" when using BERT's pretraining script, and its value was identical to the previously set "max_steps" parameter. Now I get it ... Thanks for clearing it up.
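A quick back-of-the-envelope check of the numbers above:
```python
steps_per_epoch = 422_095  # iterations reported for one epoch
max_steps = 500_000

print(max_steps / steps_per_epoch)  # ~1.18 -> one full epoch plus part of a second one
```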
transformers
6,438
closed
Training GPT2 and Reformer from scratch.
Hello, I am looking for an example script/notebook to train GPT-2 and Reformer models from scratch in German. Something similar to: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb I am trying to modify that notebook, but GPT-2 doesn't seem to accept `LineByLineTextDataset` or padding.
08-12-2020 12:59:15
08-12-2020 12:59:15
Hey @VikasRajashekar, We are trying to move "non-bug" related questions to https://discuss.huggingface.co/ - could you post your question there again? :-) <|||||>Btw, for Reformer, you can check out these notebooks: https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb and https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb
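For the GPT-2 half of the original question, a heavily simplified sketch of one possible from-scratch setup — all paths and hyperparameters are placeholders; the key detail is that GPT-2 ships without a padding token, so one has to be assigned before the line-by-line dataset can be batched:
```python
from transformers import (
    DataCollatorForLanguageModeling, GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast,
    LineByLineTextDataset, Trainer, TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("./german-gpt2-tokenizer")  # placeholder tokenizer dir
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

model = GPT2LMHeadModel(GPT2Config(vocab_size=tokenizer.vocab_size))  # fresh, untrained weights

dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="german_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # causal LM, no masking

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./gpt2-german", num_train_epochs=1, per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```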
transformers
6,437
closed
Fix #6428
The `HfArgumentParser` was failing on arguments type-annotated with `Optional[bool]`. This fixes that (and issue #6428 in the process) so we no longer have to remember not to put `Optional` around bools.
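For reference, a minimal repro/usage sketch of the annotation this is about (the dataclass and field names are invented for illustration):
```python
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser


@dataclass
class ExampleArguments:
    # Optional[bool] used to trip up the parser; with this fix it behaves like a plain bool field.
    do_lower_case: Optional[bool] = field(default=None, metadata={"help": "Example optional boolean flag."})


parser = HfArgumentParser(ExampleArguments)
(args,) = parser.parse_args_into_dataclasses(args=["--do_lower_case"])
print(args.do_lower_case)  # True
```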
08-12-2020 12:46:11
08-12-2020 12:46:11
👍
transformers
6,436
closed
Epoch iterator for run_pl_ner.py
Dear all, While training with the run_pl_ner.py code, the number of iterations per epoch grows as more documents are added. With this increase, the epoch progress bar also prints new lines, so it ends up spanning multiple lines as shown below. Is there a way to restrict this progress bar to a single line? Thank you. ![image](https://user-images.githubusercontent.com/45199062/90014027-a6d6d380-dca6-11ea-86d0-1ccc5f820d18.png)
08-12-2020 12:18:26
08-12-2020 12:18:26
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
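For what it's worth, these are the knobs that usually control this behaviour in plain tqdm; whether they apply here depends on how the Lightning progress bar is configured, so treat this as a generic illustration rather than a confirmed fix for run_pl_ner.py:
```python
from tqdm import tqdm

for batch in tqdm(
    range(1000),
    desc="Epoch 0",
    dynamic_ncols=True,  # re-measure the terminal width instead of assuming a fixed one
    leave=False,         # do not keep finished bars around as extra lines
):
    pass  # training step would go here
```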
transformers
6,435
closed
Update README.md
08-12-2020 10:23:29
08-12-2020 10:23:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=h1) Report > Merging [#6435](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6435/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6435 +/- ## ========================================== + Coverage 79.89% 79.94% +0.05% ========================================== Files 153 153 Lines 27902 27902 ========================================== + Hits 22291 22306 +15 + Misses 5611 5596 -15 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.72% <0.00%> (+2.27%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+4.76%)` | :arrow_up: | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6435/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=footer). Last update [4ffea5c...f0507f3](https://codecov.io/gh/huggingface/transformers/pull/6435?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,434
closed
Centralize logging
The goal of this PR is to offer a better way to manage logging to the HuggingFace/transformers users. It's a very simple proposal: implement a single logger that is shared across all files, and implement three helper methods that can be used across the library and by users: ```py def get_logger(): ''' Returns the logger instance for the library, that can be managed as a traditional `logging` logger. ''' def get_verbosity(): ''' Returns the logger instance verbosity level. Used to manage what is printed, for example with tqdm loading bars. Same as doing: hf_logging.get_logger().getEffectiveLevel() ''' def set_verbosity(level: int): ''' Sets the logger instance verbosity level. Used to set the desired verbosity level across the library. Same as doing: hf_logging.get_logger().setLevel(level) ''' ``` Users can use these methods as such: ```py from transformers import hf_logging logger = hf_logging.get_logger() hf_logging.set_verbosity(hf_logging.INFO) # same as doing logger.setLevel(hf_logging.INFO) ``` The noteworthy additions/changes are shown below.
08-12-2020 09:31:53
08-12-2020 09:31:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=h1) Report > Merging [#6434](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/461ae86812f9d75762bbdae2ac5776f9a5d702ea?el=desc) will **increase** coverage by `0.46%`. > The diff coverage is `91.58%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6434/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6434 +/- ## ========================================== + Coverage 79.63% 80.09% +0.46% ========================================== Files 156 157 +1 Lines 28420 28471 +51 ========================================== + Hits 22631 22805 +174 + Misses 5789 5666 -123 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `0.00% <0.00%> (ø)` | | | [src/transformers/commands/run.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9ydW4ucHk=) | `0.00% <0.00%> (ø)` | | | [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (ø)` | | | [src/transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFpbi5weQ==) | `0.00% <0.00%> (ø)` | | | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (-0.30%)` | :arrow_down: | | [src/transformers/data/metrics/squad\_metrics.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3Mvc3F1YWRfbWV0cmljcy5weQ==) | `0.00% <0.00%> (ø)` | | | [src/transformers/benchmark/benchmark\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <66.66%> (ø)` | | | [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `75.00% <75.00%> (ø)` | | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <100.00%> (ø)` | | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <100.00%> (ø)` | | | ... and [132 more](https://codecov.io/gh/huggingface/transformers/pull/6434/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=footer). 
Last update [461ae86...c81c035](https://codecov.io/gh/huggingface/transformers/pull/6434?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Looks good to me as well! I'm thinking that it might be a good idea to create one helper function for each verbosity level: ``` hf_logger.set_info_verbosity() hf_logger.set_warning_verbosity() hf_logger.set_debug_verbosity() ``` These functions might be easier to remember...what do you think @LysandreJik ? <|||||>> Looks good to me as well! I'm thinking that it might be a good idea to create one helper function for each verbosity level: > > ``` > hf_logger.set_info_verbosity() > hf_logger.set_warning_verbosity() > hf_logger.set_debug_verbosity() > ``` > > These functions might be easier to remember...what do you think @LysandreJik ? For simpler autocompletion, I would rather call these: ``` hf_logger.set_verbosity_info() hf_logger.set_verbosity_warning() hf_logger.set_verbosity_debug() hf_logger.set_verbosity_error() # This one is important as well, to basically deactivate all info/warning messages ``` <|||||>h
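A self-contained sketch of what those per-level helpers could look like, built on top of the shared-logger idea described in the PR (illustrative only, not the actual implementation):
```python
import logging

_logger = logging.getLogger("transformers")

DEBUG, INFO, WARNING, ERROR = logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR


def get_logger():
    return _logger


def set_verbosity(level: int):
    _logger.setLevel(level)


# Per-level convenience wrappers, as suggested in the discussion above.
def set_verbosity_info():
    set_verbosity(INFO)


def set_verbosity_warning():
    set_verbosity(WARNING)


def set_verbosity_debug():
    set_verbosity(DEBUG)


def set_verbosity_error():
    set_verbosity(ERROR)
```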
transformers
6,433
closed
Fix PABEE & PL CI failure
#6421
08-12-2020 09:10:53
08-12-2020 09:10:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=h1) Report > Merging [#6433](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **decrease** coverage by `2.51%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6433/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6433 +/- ## ========================================== - Coverage 79.89% 77.37% -2.52% ========================================== Files 153 153 Lines 27902 27902 ========================================== - Hits 22291 21588 -703 - Misses 5611 6314 +703 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-63.30%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: | | ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6433/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=footer). Last update [4ffea5c...cd1ca4c](https://codecov.io/gh/huggingface/transformers/pull/6433?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Ah, PL test still failing!<|||||>@LysandreJik Don't worry. It seems I mistype some parameter name<|||||>Oops! @sshleifer could you have a look at the PL example? I've tried tweaking the parameters but it doesn't seem to work. <|||||>@stas00 Can you take a look? @sshleifer is on a vacation. Lots of thanks!<|||||>Yes, of course, I will be able to investigate in a few hours.<|||||>(we are talking about `examples/test_examples.py::ExamplesTests::test_run_pl_glue`) I'm able to reproduce the problem of low `acc` with those changes proposed in this PR. This PR I get: ``` acc = 0.5 f1 = 0.666 ``` The original pre-PR changes gives acc/f1=1.0 on my machine. If you have a look at https://github.com/huggingface/transformers/pull/6034 I tried various hparams to no avail, it was working fine on my machine, but CI kept on failing. It was just very counterproductive trying to experiment w/o being able to reproduce it locally, so after some time I gave up. So the test is not ideal, but at least it's testing that it runs. @sshleifer said he was able to match the CI's low accuracy on his hardware (pre this PR). <|||||>@stas00 Yes I've already found the problem in #6034 (output_dir) and fixed that in our PR. However the accuracy is still too low compared to the trainer version of run_glue. Since you can now reproduce the low acc, please give it a look! <|||||>Thank you for explaining what is happening, @JetRunner I have no perms to push, so try to use this: ``` testargs = """ run_pl_glue.py --model_name_or_path bert-base-cased --data_dir ./tests/fixtures/tests_samples/MRPC/ --task mrpc --do_train --do_predict --output_dir ./tests/fixtures/tests_samples/pl_temp_dir --train_batch_size=32 --learning_rate=1e-4 --num_train_epochs=4 --warmup_steps=3 --seed=42 --max_seq_length=128 """.split()` ``` I get acc/f1 of 1.0 with this config, the key was more `--num_train_epochs` and some warm-up. So you uncovered that these tests are very unreliable as they don't clean up after themselves and re-runs give invalid results. It's enough to get one run that succeeded, all the subsequent test re-runs will succeed at the moment. At the very least pl_glue needs to support `--overwrite_output_dir`. That explains why I couldn't get CI to work, as mine probably wasn't working all along, other than succeeding once and then always reporting the old success. So I was getting false positives. Should transformers warn a user when a pre-existing dir filled with outdated data is found or plainly refuse to run? <|||||>@stas00 this perm also outputs `0.5`, sadly. I feel maybe there's another bug here in the PL example? cc @LysandreJik @sshleifer <|||||>PABEE's bug is fixed in #6453. The reproducible low acc is still existing for PL. cc @LysandreJik @sshleifer
transformers
6,432
closed
TF2 implementation of LineByLineTextDataset?
Hi, I have a text file which I want to use as input for trainer_tf.py. Since it requires a dataset object as input, is there any implementation of something like the LineByLineTextDataset module in TF2 as well?
08-12-2020 08:18:09
08-12-2020 08:18:09
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
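For reference, a minimal sketch of what a TF2 stand-in for `LineByLineTextDataset` could look like, built with `tf.data`. The file name, tokenizer checkpoint, block size, and batch size are placeholders, and the MLM masking that `DataCollatorForLanguageModeling` performs would still have to be reimplemented separately:

```python
import tensorflow as tf
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Read non-empty lines, one training example per line.
with open("train.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

# Tokenize everything up front so every example has the same fixed length.
batch = tokenizer(lines, truncation=True, max_length=128, padding="max_length")

dataset = (
    tf.data.Dataset.from_tensor_slices(dict(batch))
    .shuffle(10_000)
    .batch(32)
)
```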
transformers
6,431
closed
Disabled pabee test
@JetRunner
08-12-2020 06:48:54
08-12-2020 06:48:54
transformers
6,430
closed
[WIP] QA Loss refactoring
This is a refactoring experiment as suggested at https://github.com/huggingface/transformers/issues/6204 10 models and 1 template have been refactored - I will check for more if this looks promising (the doc and new function's signature are incomplete). Let me know whether to continue or not. the refactoring was done with: ``` perl -0777 -pi -e ' $in = <<END; logits = self.qa_outputs(sequence_output) start_logits, end_logits = logits.split(1, dim=-1) start_logits = start_logits.squeeze(-1) end_logits = end_logits.squeeze(-1) total_loss = None if start_positions is not None and end_positions is not None: # If we are on multi-GPU, split add a dimension if len(start_positions.size()) > 1: start_positions = start_positions.squeeze(-1) if len(end_positions.size()) > 1: end_positions = end_positions.squeeze(-1) # sometimes the start/end positions are outside our model inputs, we ignore these terms ignored_index = start_logits.size(1) start_positions.clamp_(0, ignored_index) end_positions.clamp_(0, ignored_index) loss_fct = CrossEntropyLoss(ignore_index=ignored_index) start_loss = loss_fct(start_logits, start_positions) end_loss = loss_fct(end_logits, end_positions) total_loss = (start_loss + end_loss) / 2 END s/\Q$in\E/start_logits, end_logits, total_loss = self.calc_qa_loss(sequence_output, start_positions, end_positions)\n/msg ' \ ./templates/adding_a_new_model/modeling_* ./src/transformers/modeling_* ``` @sshleifer, I'm not sure how you're going to judge the coverage change as the coverage data is unreliable at the moment: https://github.com/huggingface/transformers/issues/6317
08-12-2020 06:41:13
08-12-2020 06:41:13
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=h1) Report > Merging [#6430](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **increase** coverage by `0.17%`. > The diff coverage is `93.10%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6430/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6430 +/- ## ========================================== + Coverage 79.89% 80.06% +0.17% ========================================== Files 153 153 Lines 27902 27761 -141 ========================================== - Hits 22291 22228 -63 + Misses 5611 5533 -78 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.42% <89.47%> (+0.07%)` | :arrow_up: | | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `81.85% <100.00%> (-0.19%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.99% <100.00%> (+0.23%)` | :arrow_up: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <100.00%> (+0.02%)` | :arrow_up: | | [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.87% <100.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `90.96% <100.00%> (+0.10%)` | :arrow_up: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.50% <100.00%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `95.82% <100.00%> (+0.13%)` | :arrow_up: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.38% <100.00%> (+0.59%)` | :arrow_up: | | [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.15% <100.00%> (+0.12%)` | :arrow_up: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6430/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=footer). Last update [4ffea5c...4e08790](https://codecov.io/gh/huggingface/transformers/pull/6430?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I am unsure about this as it goes against the "all in one files" policy transformers has for the models and has been highlighted as one of the main reason people like transformers in a recent survey. Yes the alternative is duplicate code in several files, yes this is officially considered as bad computer science practice and yes it is harder for us to maintain, but the point is to have everything available in one file a researcher can easily tweak. A similar attempt in #4944 has been ignored for the same reason. Tagging @thomwolf and @julien-c for their thoughts.<|||||>I'd hold off then with completing this until if and when you give a green light to do so. My idea of killing both rabbits with one shot would be writing an easy to maintain refactored code and have a tool that will unfold it for those who want it unfolded (and control the levels of how deep the unfolding goes). Does such a tool exist in python land?<|||||>> an easy to maintain refactored code and have a tool that will unfold it for those who want it unfolded (and control the levels of how deep the unfolding goes). Does such a tool exist in python land? We explored such tools with @aaugustin a few months ago and the conclusion then was to try and build a lightweight, home-built system for this.<|||||>We could add a simple script that copies the code from somewhere into the modeling files if necessary and another to check the consistency. The first could be called during `make style` and the second during `make quality`. I was thinking of doing something similar for the `TrainingArguments` and the examples this week (adding a tweakable training arguments file for each example using Trainer), so let's see how it goes for those and then continue with model refactoring the same way?<|||||>(probably the same script with a different flag, like `black`, but yes, I like this idea)<|||||>this proved to be a failed experiment, closing this down.
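For reference, a sketch of the shared QA-loss helper the PR proposed. Since the PR was closed, this is an illustration of the idea rather than merged code; the PR made it a model method, whereas here it is written as a standalone function that takes the `qa_outputs` head as an argument:

```python
import torch
from torch import nn

def calc_qa_loss(qa_outputs, sequence_output, start_positions=None, end_positions=None):
    logits = qa_outputs(sequence_output)            # (batch, seq_len, 2)
    start_logits, end_logits = logits.split(1, dim=-1)
    start_logits = start_logits.squeeze(-1)
    end_logits = end_logits.squeeze(-1)

    total_loss = None
    if start_positions is not None and end_positions is not None:
        # If we are on multi-GPU, squeeze the extra dimension
        if len(start_positions.size()) > 1:
            start_positions = start_positions.squeeze(-1)
        if len(end_positions.size()) > 1:
            end_positions = end_positions.squeeze(-1)
        # positions outside the model inputs are clamped and then ignored by the loss
        ignored_index = start_logits.size(1)
        start_positions.clamp_(0, ignored_index)
        end_positions.clamp_(0, ignored_index)
        loss_fct = nn.CrossEntropyLoss(ignore_index=ignored_index)
        start_loss = loss_fct(start_logits, start_positions)
        end_loss = loss_fct(end_logits, end_positions)
        total_loss = (start_loss + end_loss) / 2
    return start_logits, end_logits, total_loss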
transformers
6,429
closed
[test schedulers] adjust to test the first step's reading
As I was working on a new scheduler, it was difficult to match numbers since the first step's reading was dropped in `unwrap_schedule` wrappers (they were taking the measurement after stepping). This PR adjusts the wrappers to first take a reading and then step. This PR also makes a small refactoring to move all the unwrapping into the script, so the test just compares 2 lists. (avoiding multiple `[l[0] for l in lrs_1]`) The updated table is: ``` scheds = { get_constant_schedule: ({}, [10.0] * self.num_steps), get_constant_schedule_with_warmup: ( {"num_warmup_steps": 4}, [0.0, 2.5, 5.0, 7.5, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0], ), get_linear_schedule_with_warmup: ( {**common_kwargs}, [0.0, 5.0, 10.0, 8.75, 7.5, 6.25, 5.0, 3.75, 2.5, 1.25], ), get_cosine_schedule_with_warmup: ( {**common_kwargs}, [0.0, 5.0, 10.0, 9.61, 8.53, 6.91, 5.0, 3.08, 1.46, 0.38], ), get_cosine_with_hard_restarts_schedule_with_warmup: ( {**common_kwargs, "num_cycles": 2}, [0.0, 5.0, 10.0, 8.53, 5.0, 1.46, 10.0, 8.53, 5.0, 1.46], ), get_polynomial_decay_schedule_with_warmup: ( {**common_kwargs, "power": 2.0, "lr_end": 1e-7}, [0.0, 5.0, 10.0, 7.656, 5.625, 3.906, 2.5, 1.406, 0.625, 0.156], ), } ``` Unrelated to the changes suggestion in this PR, it exposes 2 minor issues: 1. We definitely have a one off problem there, as the last step's reading is one reading too early (which this change exposes) - it doesn't complete the intended cycle. This is probably unimportant for 100s of steps, but it definitely stands out when developing a new scheduler. To illustrate, see this change in reported number for `get_polynomial_decay_schedule_with_warmup`: ``` - [5.0, 10.0, 7.656, 5.625, 3.906, 2.5, 1.406, 0.625, 0.156, 1e-07], + [0.0, 5.0, 10.0, 7.656, 5.625, 3.906, 2.5, 1.406, 0.625, 0.156], ``` the expected last step of `1e-07` is not there. It never was. 2. Also the first step's reading is `0.0` in all schedulers, except in `get_constant_schedule`, so the first step does nothing. This can be fixed with a potentially added `min_lr=1e-7` to all schedulers, as it was suggested by @sshleifer in one of the recent scheduler-related PRs. Let me know if this better fits into its own issue, as these issues have nothing to do with the PR itself. Or perhaps the 2 issues are just unimportant...
08-12-2020 03:38:31
08-12-2020 03:38:31
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=h1) Report > Merging [#6429](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **increase** coverage by `0.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6429/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6429 +/- ## ========================================== + Coverage 79.89% 79.94% +0.05% ========================================== Files 153 153 Lines 27902 27902 ========================================== + Hits 22291 22307 +16 + Misses 5611 5595 -16 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+0.68%)` | :arrow_up: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.72% <0.00%> (+2.27%)` | :arrow_up: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+5.01%)` | :arrow_up: | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6429/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=footer). Last update [4ffea5c...324dd60](https://codecov.io/gh/huggingface/transformers/pull/6429?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
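A minimal sketch of the adjusted unwrapping described in the PR body: take the learning-rate reading first, then step. The helper name matches the PR description, but the body here is an assumption rather than the exact committed test code:

```python
def unwrap_schedule(scheduler, num_steps=10):
    lrs = []
    for _ in range(num_steps):
        lrs.append(scheduler.get_last_lr()[0])  # reading taken before stepping
        scheduler.step()
    return lrs
```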
transformers
6,428
closed
Error in run_tf_squad.py script
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer tensorflow: @jplu documentation: @sgugger --> @sgugger ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: SQUaD * [ ] my own task or dataset: (give details below) I'm simply trying to train a new question answering model using the TF trainer script, and I get the following error: ```python Traceback (most recent call last): File "run_tf_squad.py", line 244, in <module> main() File "run_tf_squad.py", line 123, in main parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TFTrainingArguments)) File "/usr/local/lib/python3.6/dist-packages/transformers/hf_argparser.py", line 40, in __init__ self._add_dataclass_arguments(dtype) File "/usr/local/lib/python3.6/dist-packages/transformers/hf_argparser.py", line 72, in _add_dataclass_arguments elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List): File "/usr/lib/python3.6/typing.py", line 1154, in __subclasscheck__ return super().__subclasscheck__(cls) File "/usr/lib/python3.6/abc.py", line 209, in __subclasscheck__ ok = cls.__subclasshook__(subclass) File "/usr/lib/python3.6/typing.py", line 890, in __extrahook__ if cls.__extra__ and issubclass(subclass, cls.__extra__): TypeError: issubclass() arg 1 must be a class ``` ## To reproduce Steps to reproduce the behavior: 1.install transformers from the master branch 2.run the example script in question-answering: ``` python run_tf_squad.py \ --model_name_or_path bert-base-uncased \ --output_dir model \ --max_seq_length 384 \ --num_train_epochs 2 \ --per_gpu_train_batch_size 8 \ --per_gpu_eval_batch_size 16 \ --do_train \ --logging_dir logs \ --logging_steps 10 \ --learning_rate 3e-5 \ --doc_stride 128 ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The script should run normally and train the model <!-- A clear and concise description of what you would expect to happen. -->
08-11-2020 23:57:54
08-11-2020 23:57:54
The error seems to be caused by the field `use_tfds` from the `DataTrainingArguments` class. Changing its type from `Optional[bool]` to `bool` and changing the default value to `False` seems to resolve the issue; however, I don't really understand why, and I'm not sure whether this is the right way to fix it. <|||||>Can reproduce, will investigate today.
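A sketch of the workaround described in the comment above, applied to the `DataTrainingArguments` dataclass in `run_tf_squad.py`. The help text is illustrative; the point is replacing `Optional[bool]`, which trips `HfArgumentParser`'s `issubclass` check on Python 3.6, with a plain `bool`:

```python
from dataclasses import dataclass, field

@dataclass
class DataTrainingArguments:
    # before: use_tfds: Optional[bool] = field(...)
    use_tfds: bool = field(
        default=False,
        metadata={"help": "Whether to load the dataset from tensorflow_datasets."},
    )
```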
transformers
6,427
closed
Activate check on the CI
The check that modules are documented and tested was only run by `make quality`, not on CircleCI.
08-11-2020 21:09:19
08-11-2020 21:09:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=h1) Report > Merging [#6427](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/34fabe1697f653dc0f54ac8f510d6ba5578a1a53&el=desc) will **increase** coverage by `2.57%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6427/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6427 +/- ## ========================================== + Coverage 77.38% 79.95% +2.57% ========================================== Files 153 153 Lines 27932 27932 ========================================== + Hits 21614 22332 +718 + Misses 6318 5600 -718 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> (+0.19%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.58% <0.00%> (+0.35%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (+0.83%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.33% <0.00%> (+0.94%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.00%)` | :arrow_up: | | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.87% <0.00%> (+1.06%)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.58% <0.00%> (+1.20%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+1.36%)` | :arrow_up: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6427/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=footer). Last update [34fabe1...333b476](https://codecov.io/gh/huggingface/transformers/pull/6427?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,426
closed
Move prediction_loss_only to TrainingArguments
It didn't make sense to me to have that boolean flag in the init of `Trainer` when all the other ones are in `TrainingArguments`, so I deprecated it and moved it. Let me know if you think it's the wrong move. Unrelated changes: I had to fix `make quality`, which was complaining about undocumented or untested models, and it was easier to fix them than to change my setup (which doesn't let me push if make quality fails).
08-11-2020 20:15:16
08-11-2020 20:15:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=h1) Report > Merging [#6426](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/66fa8ceaeaa6fe12f1bd4a5e6b0a924f59f715d9&el=desc) will **decrease** coverage by `2.62%`. > The diff coverage is `36.36%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6426/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6426 +/- ## ========================================== - Coverage 79.90% 77.28% -2.63% ========================================== Files 153 153 Lines 27877 27884 +7 ========================================== - Hits 22276 21549 -727 - Misses 5601 6335 +734 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.25% <0.00%> (-0.13%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <60.00%> (-0.03%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `80.58% <100.00%> (+0.19%)` | :arrow_up: | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `28.94% <0.00%> (-67.11%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-30.36%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6426/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=footer). Last update [66fa8ce...34cca14](https://codecov.io/gh/huggingface/transformers/pull/6426?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>no strong opinion on this (but as usual consider the BC/cleanliness ratio carefully)
transformers
6,425
closed
[examples] add pytest dependency
Not really a new dependency, since it is already installed by `pip install -e .[testing]`, but some users of the examples just run: ``` pip install -r examples/requirements.txt ``` so they don't have it, and tests break.
08-11-2020 20:14:15
08-11-2020 20:14:15
transformers
6,424
closed
actions CI self-scheduled: run_examples torch even if run_torch_tests fails
![image](https://user-images.githubusercontent.com/6045025/89939113-7f6c0200-dbe5-11ea-984d-54c2ff749daa.png)
08-11-2020 19:14:55
08-11-2020 19:14:55
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,423
closed
Fixes to make life easier with the nlp library
This PR adds two things to make the interface with the `nlp` library easier: - `BatchEncoding` stops enforcing 2 dimensions for every tensor, which caused problems for labels (which should be one vector of shape `[batch_size]`). - `PreTrainedTokenizerBase.pad` accepts tensors as inputs, which makes it easy to use this function for data collation. Added proper documentation and tests on top of @thomwolf's initial work.
08-11-2020 18:46:47
08-11-2020 18:46:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=h1) Report > Merging [#6423](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6cb0f806efecb64df40c946dacaad0adad33d53&el=desc) will **increase** coverage by `2.27%`. > The diff coverage is `95.45%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6423/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6423 +/- ## ========================================== + Coverage 77.51% 79.79% +2.27% ========================================== Files 150 150 Lines 27789 27807 +18 ========================================== + Hits 21542 22188 +646 + Misses 6247 5619 -628 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `79.79% <ø> (+52.80%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.16% <95.45%> (+0.28%)` | :arrow_up: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> (+0.19%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.58% <0.00%> (+0.35%)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.75%)` | :arrow_up: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (+0.83%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.33% <0.00%> (+0.94%)` | :arrow_up: | | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.87% <0.00%> (+1.06%)` | :arrow_up: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6423/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=footer). Last update [f6cb0f8...8edc948](https://codecov.io/gh/huggingface/transformers/pull/6423?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Merging then, we can follow up next week when @thomwolf is back if he has more comments.
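A sketch of what the second change enables: using `tokenizer.pad` directly as a `DataLoader` collate function for features produced ahead of time (e.g. by the `nlp` library). The tokenizer checkpoint and toy features below are placeholders:

```python
from torch.utils.data import DataLoader
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Pre-tokenized examples of different lengths, as the nlp library would yield them.
features = [
    {"input_ids": tokenizer("short example")["input_ids"]},
    {"input_ids": tokenizer("a noticeably longer example sentence")["input_ids"]},
]

def collate_fn(examples):
    # pad batches the list of dicts and pads everything to the longest example
    return tokenizer.pad(examples, padding=True, return_tensors="pt")

loader = DataLoader(features, batch_size=2, collate_fn=collate_fn)
batch = next(iter(loader))
print(batch["input_ids"].shape)
```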
transformers
6,422
closed
[test] replace capsys with the more refined CaptureStderr/CaptureStdout
Now that https://github.com/huggingface/transformers/pull/6231 has been merged, we can do more refined, more tightly scoped std stream captures as shown [here](https://github.com/huggingface/transformers/pull/6231#issuecomment-671789424); otherwise there is no change to test functionality. Any CI failures are unrelated.
08-11-2020 18:37:22
08-11-2020 18:37:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=h1) Report > Merging [#6422](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ffea5ce2f4d154a3696b8fe2fb116fa09235700&el=desc) will **decrease** coverage by `2.51%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6422/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6422 +/- ## ========================================== - Coverage 79.89% 77.37% -2.52% ========================================== Files 153 153 Lines 27902 27902 ========================================== - Hits 22291 21588 -703 - Misses 5611 6314 +703 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `25.55% <0.00%> (-70.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `34.11% <0.00%> (-63.30%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.16% <0.00%> (-14.46%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: | | ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6422/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=footer). Last update [4ffea5c...bece6ba](https://codecov.io/gh/huggingface/transformers/pull/6422?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,421
closed
test_run_glue_with_pabee failing
examples/bert-loses-patience/test_run_glue_with_pabee.py::PabeeTests::test_run_glue https://app.circleci.com/pipelines/github/huggingface/transformers/10373/workflows/0c9f2e61-2732-4857-84f0-71b59ddf10a9/jobs/71646 @JetRunner
08-11-2020 18:20:08
08-11-2020 18:20:08
@sshleifer Thanks for the issue. I noticed that this test fails sometimes. Do you have any idea how long this problem has existed? Was it broken from the beginning, or only recently?<|||||>Started breaking within the last few days. It is breaking fairly consistently at this point.<|||||>I guess it's just flaky; for example, it's not broken on master right now.<|||||>But it's not really a flaky "type of error". It's hitting an IndexError on an embedding lookup. ``` IndexError: index out of range in self ``` <|||||>It also fails on the assert `self.assertGreaterEqual(value, 0.75)` in my case (three times out of four right now).
transformers
6,420
closed
Experiment: ROUGE impact of using pegasus length-penalty implementation
The relevant code is under `length_normalization` in the pegasus [tf repo](https://github.com/google-research/pegasus).
08-11-2020 17:57:08
08-11-2020 17:57:08
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I tried this a bit and did not get any improvements. Meanwhile, our beam search is getting similar scores to pegasus in #6844 , so I am less motivated to push further. Branch with maybe correct beam search implem: https://github.com/sshleifer/transformers_fork/tree/peg-beam<|||||>> > > I tried this a bit and did not get any improvements. Meanwhile, our beam search is getting similar scores to pegasus in #6844 , so I am less motivated to push further. > Branch with maybe correct beam search implem: https://github.com/sshleifer/transformers_fork/tree/peg-beam If it is better how can we install this? Can we do this with pip? Thanks.
transformers
6,419
closed
Add pegasus model cards
08-11-2020 17:56:42
08-11-2020 17:56:42
transformers
6,418
closed
All learning rates are 0 warning
- `transformers` version: 3.0.2 - Platform: Linux - Python version: 3.6 - PyTorch version (GPU?): 1.4 (GPU) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @sshleifer ## Information Model I am using (Bert, XLNet ...): BART The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Running the example script in https://github.com/huggingface/transformers/tree/master/examples/seq2seq (finetune_bart_tiny.sh), I'm getting this warning in the beginning of training. However, the training process is continuing after that. Warning: ``` finetune.py:245: UserWarning: All learning rates are 0 warnings.warn("All learning rates are 0") Epoch 1: 0%| /home/sajad/anaconda3/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:224: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`. warnings.warn("To get the last learning rate computed by the scheduler, " ``` ## Expected behavior While the training seemingly goes well, I'm wondering if this warning would cause problems, leading to deteriorate model's final performance? As a add-on, I've also incorporated the gradient checkpointing to some computational blocks of BART (modifying `modelling_bart.py` script a bit). But even w/o incorporating this module, I'm still getting this warning message? Any thoughts of how to solve it?
08-11-2020 16:55:36
08-11-2020 16:55:36
Duplicate of #5338, ignore it. Sorry for the confusion.<|||||>What you answered at #5338 makes sense, thanks for the clarification. I'm closing this issue!
transformers
6,417
closed
how to fine tune t5 model for summarization task using tensorflow2?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
08-11-2020 16:49:25
08-11-2020 16:49:25
Please see: https://discuss.huggingface.co/t/how-to-train-t5-with-tensorflow/641.
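For anyone landing here, a rough sketch of one way to fine-tune T5 for summarization in TF2 with a manual `GradientTape` loop. The checkpoint, toy data, and hyperparameters are placeholders, and the target shifting and loss masking are written out by hand rather than relying on any built-in `labels` handling:

```python
import tensorflow as tf
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-4)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction="none")

documents = ["summarize: the cat sat on the mat and refused to move all day."]
summaries = ["a stubborn cat stayed on the mat."]

enc = tokenizer(documents, padding=True, truncation=True, return_tensors="tf")
dec = tokenizer(summaries, padding=True, truncation=True, return_tensors="tf")

target_ids = dec["input_ids"]
# T5 uses the pad token as the decoder start token: shift the targets right by one.
start = tf.fill([tf.shape(target_ids)[0], 1], tokenizer.pad_token_id)
decoder_input_ids = tf.concat([tf.cast(start, target_ids.dtype), target_ids[:, :-1]], axis=-1)

with tf.GradientTape() as tape:
    outputs = model(
        enc["input_ids"],
        attention_mask=enc["attention_mask"],
        decoder_input_ids=decoder_input_ids,
        training=True,
    )
    logits = outputs[0]  # (batch, target_len, vocab_size)
    per_token_loss = loss_fn(target_ids, logits)
    mask = tf.cast(dec["attention_mask"], per_token_loss.dtype)
    loss = tf.reduce_sum(per_token_loss * mask) / tf.reduce_sum(mask)

grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```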
transformers
6,416
closed
Docs: Separate documentation for mbart
Currently all mbart documentation is stuffed into docs/ https://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/docs/source/model_doc/bart.rst#L42. mbart should have its own `model_doc/mbart.rst` and an entry in `pretrained_models.rst`. Optionally you can also create a new `src/transformers/modeling_mbart.py` with roughly these contents: ```python from .modeling_bart import BartForConditionalGeneration from .configuration_bart import MbartConfig class MBartForConditionalGeneration(BartForConditionalGeneration): config_class = MbartConfig # this model fully inherits its implementation from bart ```
08-11-2020 16:21:01
08-11-2020 16:21:01
@sshleifer I think adding a new class should make things clearer; should I go ahead with it? I will also need to modify the tests a little bit, I guess.<|||||>yes, thanks! Key tests not to break: ```bash RUN_SLOW=1 pytest tests/test_modeling_mbart.py ```<|||||>You can also make `tokenization_mbart.py`
transformers
6,415
closed
[EncoderDecoder] Add Cross Attention for GPT2
This PR implements **Bert2GPT2** by adding cross-attention layers to GPT2. Note that currently it is not possible to speed up decoder generation with the encoder-decoder framework (by using GPT2's past tensors) since it has to be implemented for all models that are compatible with the encoder/decoder framework (Bert, Roberta) before it can be used within the framework. All GPT2 `RUN_SLOW` tests are verified to pass. **Future PRs TODO**: - [ ] Verify that Bert2GPT2 works by training on CNN Daily Mail summarization - [ ] Add smart caching to Bert and add it to the encoder-decoder framework - [ ] Update encoder-decoder docs - [ ] Add a notebook explaining how to use encoder-decoder models.
08-11-2020 16:03:26
08-11-2020 16:03:26
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=h1) Report > Merging [#6415](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc820476a5c72060f810f825298befd5ec85da4d&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `96.61%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6415/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6415 +/- ## ========================================== - Coverage 79.98% 79.98% -0.01% ========================================== Files 153 153 Lines 28005 28039 +34 ========================================== + Hits 22401 22427 +26 - Misses 5604 5612 +8 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.66% <87.50%> (+0.64%)` | :arrow_up: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.68% <97.87%> (+0.71%)` | :arrow_up: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+7.51%)` | :arrow_up: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `87.73% <0.00%> (+63.19%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=footer). Last update [bc82047...56094e2](https://codecov.io/gh/huggingface/transformers/pull/6415?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
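A sketch of how a Bert2GPT2 model could be instantiated once this is merged. The checkpoints and example strings are placeholders, and it assumes `from_encoder_decoder_pretrained` adds the randomly-initialized cross-attention layers to the GPT2 decoder:

```python
from transformers import BertTokenizer, GPT2Tokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
enc_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
dec_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

article = "A long article that the BERT encoder reads."
summary = "A short summary the GPT2 decoder learns to produce."

input_ids = enc_tokenizer(article, return_tensors="pt")["input_ids"]
decoder_input_ids = dec_tokenizer(summary, return_tensors="pt")["input_ids"]

# Teacher-forced forward pass; a training loss would come from using shifted targets as labels.
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
```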
transformers
6,414
closed
TypeError: forward() got an unexpected keyword argument 'labels'
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-53-generic-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: True - Using distributed or parallel set-up in script?: Distributed Hey there, I've run into this issue and not sure how to fix it: TypeError: forward() got an unexpected keyword argument 'labels' I'm running transformers v3.0.2 installed via pip Please see my code below. There is nothing fancy going on, I'm just trying to train RobertaMLM for a few more epochs on a different dataset. ```python import os import argparse import datetime from torch.utils.tensorboard import SummaryWriter from transformers import RobertaModel, RobertaConfig, RobertaTokenizerFast, LineByLineTextDataset, DataCollatorForLanguageModeling, Trainer, TrainingArguments from configs import model_directory, tensorboard_directory from logger import get_logger log = get_logger(__name__) args = argparse.Namespace( seed=42, model_id="Roberta2", pretrained_model_name_or_path="roberta-base", vocab_file="/data/nlp/roberta_vocabulary/roberta-base-vocab.json", merges_file="/data/nlp/roberta_vocabulary/roberta-base-merges.txt", filename="/data/nlp/trc2.txt", block_size=2**7, epochs=25, ) output_directory = os.path.join(model_directory, args.model_id) os.makedirs(output_directory, exist_ok=True) os.environ["TOKENIZERS_PARALLELISM"] = "false" def build_model(): tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_model_name_or_path=args.pretrained_model_name_or_path, lowercase=True, add_prefix_space=True, max_len=512) config = RobertaConfig.from_pretrained(args.pretrained_model_name_or_path) config.output_hidden_states = False model = RobertaModel.from_pretrained(pretrained_model_name_or_path=args.pretrained_model_name_or_path, config=config, cache_dir=output_directory) dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path=args.filename, block_size=args.block_size, ) data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) training_args = TrainingArguments( seed=args.seed, output_dir=output_directory, overwrite_output_dir=True, num_train_epochs=args.epochs, per_device_train_batch_size=128, save_steps=10_000, # save_total_limit=2, fp16=True, fp16_opt_level="O1" ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) trainer.train() trainer.save_model(output_directory) ``` tag: @sgugger
08-11-2020 14:30:00
08-11-2020 14:30:00
Please copy-paste the entire stack trace, just the error message is not enough to know what's going on :-)<|||||>Hi @sgugger thanks for the reply! Please see below for the full stack-trace ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-2-2942c2ba4004> in <module> 42 ) 43 ---> 44 trainer.train() 45 trainer.save_model(output_directory) /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path) 497 continue 498 --> 499 tr_loss += self._training_step(model, inputs, optimizer) 500 501 if (step + 1) % self.args.gradient_accumulation_steps == 0 or ( /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer) 620 inputs["mems"] = self._past 621 --> 622 outputs = model(**inputs) 623 loss = outputs[0] # model outputs are always tuple in transformers (see doc) 624 /opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), /opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs) 153 return self.module(*inputs[0], **kwargs[0]) 154 replicas = self.replicate(self.module, self.device_ids[:len(inputs)]) --> 155 outputs = self.parallel_apply(replicas, inputs, kwargs) 156 return self.gather(outputs, self.output_device) 157 /opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs) 163 164 def parallel_apply(self, replicas, inputs, kwargs): --> 165 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) 166 167 def gather(self, outputs, output_device): /opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices) 83 output = results[i] 84 if isinstance(output, ExceptionWrapper): ---> 85 output.reraise() 86 outputs.append(output) 87 return outputs /opt/conda/lib/python3.7/site-packages/torch/_utils.py in reraise(self) 393 # (https://bugs.python.org/issue2651), so we work around it. 394 msg = KeyErrorMessage(msg) --> 395 raise self.exc_type(msg) TypeError: Caught TypeError in replica 0 on device 0. Original Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'labels' ```<|||||>Oh, I misread. `RobertaModel` is not something you can use directly with `Trainer` as it doesn't have any objective (it's the base model without head). You should pick a model with head relevant to your task.<|||||>haha, i feel stupid now :) Thanks!<|||||> > Oh, I misread. `RobertaModel` is not something you can use directly with `Trainer` as it doesn't have any objective (it's the base model without head). You should pick a model with head relevant to your task. @sgugger can we finetune models with specific task only (like RobertaForMasekdLm etc) ? is there a way we can pre-train RobertaModel on our data then go for specific tasks?<|||||>I have similar a question as @sanjay23singh. 
Can I train a headless RobertaModel on my corpus and then fine-tune it using RobertaForSequenceClassification? (as below) ``` model = RobertaModel(config=config) training_args = .. trainer = ... trainer.train() trainer.save_model('myRoberta') # fine-tune sentimodel = RobertaForSequenceClassification.from_pretrained("./myRoberta") ``` My ultimate goal is to pretrain on my own corpus with masked language modeling only, then do classification. <|||||>> Oh, I misread. `RobertaModel` is not something you can use directly with `Trainer` as it doesn't have any objective (it's the base model without head). You should pick a model with head relevant to your task. Sorry, could you explain this in more detail? Thank you.<|||||>I'm absolutely confused by this too; it looks to me like there's a big missing piece in the docs.<|||||>It's easy: training_args = TrainingArguments( output_dir="./My_train_BERT", overwrite_output_dir=True, num_train_epochs=5, per_gpu_train_batch_size=64, save_steps=10_000, save_total_limit=2, prediction_loss_only=True, **label_smoothing_factor=0.1** ## adding it is ok )
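A sketch of the pattern being discussed: pretrain with a model that has a masked-LM head (so `Trainer` has a loss to optimize), save it, then load the saved weights into a classification model. `dataset` and `data_collator` are assumed to be the ones built in the issue body above, and the config and hyperparameter values are illustrative:

```python
from transformers import (
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaForSequenceClassification,
    Trainer,
    TrainingArguments,
)

config = RobertaConfig()  # or RobertaConfig.from_pretrained("roberta-base")
mlm_model = RobertaForMaskedLM(config=config)  # has an LM head, so it accepts `labels`

trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="./myRoberta", overwrite_output_dir=True, num_train_epochs=1),
    data_collator=data_collator,  # DataCollatorForLanguageModeling from the issue body
    train_dataset=dataset,        # LineByLineTextDataset from the issue body
    prediction_loss_only=True,
)
trainer.train()
trainer.save_model("./myRoberta")

# The shared base weights are reused; only the new classification head starts from scratch.
clf_model = RobertaForSequenceClassification.from_pretrained("./myRoberta", num_labels=2)
```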
transformers
6,413
closed
Create README.md
Model card for https://huggingface.co/akhooli/gpt2-small-arabic
08-11-2020 13:55:19
08-11-2020 13:55:19
transformers
6,412
closed
Create model card T5-base fine-tuned on event2Mind for Intent Prediction
More funny examples: https://twitter.com/mrm8488/status/1292952742395367424
08-11-2020 13:11:25
08-11-2020 13:11:25
transformers
6,411
closed
[EncoderDecoder] Add encoder-decoder for roberta/ vanilla longformer
This PR adds Roberta to the Encoder Decoder framework. Thus, it automatically makes it possible to use both `Roberta2Roberta` models and `Longformer2Roberta` models:
```python
import torch
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")
input_ids = torch.tensor([10 * [0]])
model(input_ids=input_ids, decoder_input_ids=input_ids)
```
and
```python
import torch
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base")
input_ids = torch.tensor([10 * [0]])
model(input_ids=input_ids, decoder_input_ids=input_ids)
```
Also pinging @ibeltagy and @patil-suraj
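A hedged follow-up sketch (not part of the PR itself): once the two pretrained models are composed, fine-tuning for a seq2seq task is just a forward pass with `labels`. The tokenizer call and the loss convention below assume the current `EncoderDecoderModel` API, and the source/target strings are illustrative. As noted in the discussion below, the cross-attention weights start randomly initialized, so real quality requires fine-tuning.
```python
import torch
from transformers import EncoderDecoderModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")

# toy source/target pair; in practice these come from a summarization dataset
inputs = tokenizer("A long source document to condense.", return_tensors="pt")
targets = tokenizer("A short summary.", return_tensors="pt")

outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    decoder_input_ids=targets["input_ids"],
    labels=targets["input_ids"],
)
loss = outputs[0]  # cross-entropy over the decoder predictions
loss.backward()
```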
08-11-2020 12:40:55
08-11-2020 12:40:55
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=h1) Report > Merging [#6411](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/404782912ad1324592c2d5bb2e88d1ee99a040b6&el=desc) will **decrease** coverage by `1.93%`. > The diff coverage is `92.85%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6411/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6411 +/- ## ========================================== - Coverage 79.77% 77.84% -1.94% ========================================== Files 150 150 Lines 27789 27826 +37 ========================================== - Hits 22170 21660 -510 - Misses 5619 6166 +547 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.25% <ø> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `63.95% <ø> (-14.54%)` | :arrow_down: | | [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.02% <ø> (ø)` | | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <50.00%> (ø)` | | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.22% <50.00%> (-0.36%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.98% <97.36%> (+0.20%)` | :arrow_up: | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `28.94% <0.00%> (-67.11%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/6411/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=footer). Last update [4047829...ed8414a](https://codecov.io/gh/huggingface/transformers/pull/6411?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> LGTM. Have people had good ROUGE with the compose two pretrained glue models and finetune for summarization approach? Hmm, I think it's very new so not sure if many people have tried out the framework yet. @patil-suraj - do you know if people work a lot with EncoderDecoder by chance? <|||||>> do you know if people work a lot with EncoderDecoder by chance? Seems like it, seen quite a few issues and questions (on forum as well) regarding EncoderDecoder, but no one has reported any good results yet<|||||>Looks great. Thanks, @patrickvonplaten. > LGTM. Have people had good ROUGE with the compose two pretrained glue models and finetune for summarization approach? @sshleifer, was thinking about the same thing. My guess is that numbers won't be great because cross-attention is randomly initialized? <|||||>> Looks great. Thanks, @patrickvonplaten. > > > LGTM. Have people had good ROUGE with the compose two pretrained glue models and finetune for summarization approach? > > @sshleifer, was thinking about the same thing. My guess is that numbers won't be great because cross-attention is randomly initialized? Btw, this paper does some great analysis on reusing checkpoints for Seq2Seq models: https://arxiv.org/pdf/1907.12461.pdf
transformers
6,410
closed
Cannot unzip the XNLI-MT 1.0 zip file.
# ❓ Questions & Help Is there anyone who succeeding the unzip of XNLI-MT 1.0 zip file? <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers -->
08-11-2020 09:11:11
08-11-2020 09:11:11
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I am able to unzip the archive from this link: https://dl.fbaipublicfiles.com/XNLI/XNLI-MT-1.0.zip
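For what it's worth, a minimal standard-library sketch of downloading and extracting the archive from the working link above (the local file and output paths are just examples):
```python
import urllib.request
import zipfile

url = "https://dl.fbaipublicfiles.com/XNLI/XNLI-MT-1.0.zip"
archive = "XNLI-MT-1.0.zip"

urllib.request.urlretrieve(url, archive)  # download the zip
with zipfile.ZipFile(archive) as zf:
    zf.extractall(".")                    # extract next to the archive
```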
transformers
6,409
closed
TF2 TPU slow?
## Environment info
- `transformers` version: 3.0.2
- Platform: ubuntu 18.04
- Python version: 3.6
- Tensorflow version (GPU?): TF 2.3.0

### Who can help
@jplu

## Information
Model I am using (Bert, XLNet ...): BERT

The problem arises when using:
* [x] my own modified scripts: (give details below)

The task I am working on is:
* [x] task: MLM

## To reproduce
I am using the TFTrainer from `transformers` for MLM pretraining. However, it seems that even for TPU training each batch is fed separately to the TPU, while it is usually more efficient to feed a bunch of batches to the TPU at once (see https://github.com/tensorflow/models/blob/master/official/nlp/bert/model_training_utils.py#L226). I am not sure that is the only problem, but MLM pretraining of BERT is around 3x slower on TPU with the TFTrainer compared to the official implementation (https://github.com/google-research/bert). For better TPU usage, we probably need something like this: https://github.com/tensorflow/models/blob/master/official/nlp/bert/model_training_utils.py#L345-L361
08-11-2020 07:24:04
08-11-2020 07:24:04
Hello! Training on TPU is indeed slow because the training loop is not optimized for TPU, for two reasons: 1) we don't want a version that is specifically optimized for each device, which would create too much confusion for maintenance; 2) a training loop optimized for TPU would limit the possibility of logging: logging would happen every epoch instead of every X steps, which is not a wanted behavior. Nevertheless, if you have a solution that respects these two points, I will be happy to review your PR :)<|||||>You are right, this would introduce some behavior changes because logging/saving is not possible while the TPU is processing several batches. I actually played around with this a bit and introduced a variable `steps_per_loop` that I set to `200` when using TPU. I then used an iterator over the dataset to only do batch feeding/optimization during this period. However, this only improved the training speed marginally, so I don't think it's worth a PR. What gives me a bigger speedup (around 80%) is actually using Keras compile/fit, where we can set `experimental_steps_per_execution` to different values for the training loop depending on the device. I would be curious though whether we could substitute part of the trainer, or even the whole training loop, with a "Keras style" implementation. However, this would change the current architecture quite significantly and I am not sure it would retain the generic properties of the TFTrainer class. Maybe it becomes clearer if I just show you how I implemented an option to use Keras for the training loop in the TFTrainer (pretty hacky of course): https://gist.github.com/volker42maru/20641970599c27dc9503161f52aa67c9#file-tf_trainer_train-py-L85-L113 (a rough sketch of the same idea follows below). I will close this for now, because it's probably more of a long-term prospect.
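The rough sketch referenced above, heavily hedged: this is neither the TFTrainer nor the gist's code, just a toy Keras model on random data showing the `experimental_steps_per_execution` compile knob with the TF 2.3-era TPU distribution API. The vocab size, shapes, and step counts are arbitrary.
```python
import numpy as np
import tensorflow as tf

# TPU setup (TF 2.3-era API)
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

# dummy stand-in for the real MLM inputs/labels, just to make the example runnable
x = np.random.randint(0, 30522, size=(1024, 128))
y = np.random.randint(0, 30522, size=(1024, 128))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32).repeat()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(30522, 128, input_length=128),
        tf.keras.layers.Dense(30522),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        # run many training steps per host->TPU round trip instead of one batch at a time
        experimental_steps_per_execution=200,
    )

model.fit(dataset, epochs=1, steps_per_epoch=1000)
```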
transformers
6,408
closed
I have used t5_base for abstractive summarization but it is not giving good results. Could you please give me a solution for this?
# 🖥 Benchmarking `transformers` ## Benchmark Which part of `transformers` did you benchmark? ## Set-up What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use? ## Results Put your results here!
08-11-2020 07:17:15
08-11-2020 07:17:15
Hi @gopal354, this depends on a lot of factors. What is the domain of your dataset? There are many other summarization models available on the model hub, trained on different datasets; you can try them as well (a quick sketch follows below). Or, if you have a dataset, you can further fine-tune these models on your domain. <|||||>It would be nice if the detailed question were written in the description box rather than the title, and if the relevant issue topic were used (this should be Questions & Help, not Benchmarking transformers). This will help the team and contributors act faster on the issue :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
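The quick sketch mentioned above: trying an alternative summarization checkpoint from the model hub via the pipeline API. The checkpoint name is only an example, and `long_article` is a placeholder for your own document; pick a model whose training data matches your domain.
```python
from transformers import pipeline

long_article = "..."  # put your document here
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
result = summarizer(long_article, max_length=150, min_length=40, do_sample=False)
print(result[0]["summary_text"])
```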
transformers
6,407
closed
Slow decoding speed when using BertLMHeadModel
I set BertLMHeadModel as the decoder in my Seq2Seq model. It seems to work well during training, but decoding is very slow. I think this is because BertLMHeadModel has no layer_past cache (as used in GPT2 and XLNet), so many attention values are computed repeatedly?
08-11-2020 06:16:53
08-11-2020 06:16:53
Yeah, exactly - we should / could add the layer cache for `BertLMHeadModel`. It's not trivial to do it, but feel free to give it a try.
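To illustrate what the missing cache costs, a hedged sketch using GPT-2, which does expose a past-key/value cache: the `use_cache` flag passed through `generate` is the mechanism the comment above refers to, and turning it off reproduces the "recompute attention over the whole prefix at every step" behavior described in the issue. Timings are machine-dependent.
```python
import time

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
input_ids = tokenizer("The model decodes", return_tensors="pt")["input_ids"]

for use_cache in (True, False):
    start = time.time()
    with torch.no_grad():
        # with use_cache=False every step re-runs self-attention over the full prefix
        model.generate(input_ids, max_length=128, use_cache=use_cache)
    print(f"use_cache={use_cache}: {time.time() - start:.2f}s")
```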
transformers
6,406
closed
RuntimeError: Error while creating shape using tf-xlm-roberta-large
I get the following runtime error after the 2nd fold. this is my model: maxlen = 50 `with strategy.scope(): #bert_encoder = TFBertModel.from_pretrained(model_name) base_model = TFAutoModel.from_pretrained(model_name) input_word_ids = tf.keras.Input(shape = (maxlen, ), dtype = tf.int32, name = "input_word_ids") input_mask = tf.keras.Input(shape = (maxlen, ), dtype = tf.int32, name = "input_mask") input_type_ids = tf.keras.Input(shape = (maxlen, ), dtype = tf.int32, name = "input_type_ids") embedding = base_model([input_word_ids, input_mask, input_type_ids])[0] output = tf.keras.layers.Dense(3, activation = 'softmax')(embedding[:, 0, :]) model = tf.keras.Model(inputs = [input_word_ids, input_mask, input_type_ids], outputs = output) model.compile(tf.keras.optimizers.Adam(lr = 1e-5), loss = 'sparse_categorical_crossentropy', metrics = ['accuracy']) ` And the traceback below: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-27-a45ce90453f2> in <module> 22 23 K.clear_session() ---> 24 model = build_model(maxlen, model_name) 25 checkpoint = tf.keras.callbacks.ModelCheckpoint( 26 'XLMRoberta_fold-%i.h5'%fold, monitor = 'val_loss', verbose = 1, save_best_only = True, <ipython-input-23-9faa2e5f1d9b> in build_model(maxlen, model_name) 2 with strategy.scope(): 3 #bert_encoder = TFBertModel.from_pretrained(model_name) ----> 4 base_model = TFAutoModel.from_pretrained(model_name) 5 input_word_ids = tf.keras.Input(shape = (maxlen, ), dtype = tf.int32, name = "input_word_ids") 6 input_mask = tf.keras.Input(shape = (maxlen, ), dtype = tf.int32, name = "input_mask") /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 421 for config_class, model_class in TF_MODEL_MAPPING.items(): 422 if isinstance(config, config_class): --> 423 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) 424 raise ValueError( 425 "Unrecognized configuration class {} for this kind of TFAutoModel: {}.\n" /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 482 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True) 483 --> 484 model(model.dummy_inputs, training=False) # build the network with dummy inputs 485 486 assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 966 with base_layer_utils.autocast_context_manager( 967 self._compute_dtype): --> 968 outputs = self.call(cast_inputs, *args, **kwargs) 969 self._handle_activity_regularization(inputs, outputs) 970 self._set_mask_metadata(inputs, outputs, input_masks) /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_roberta.py in call(self, inputs, **kwargs) 229 heads. 
230 """ --> 231 outputs = self.roberta(inputs, **kwargs) 232 return outputs 233 /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 966 with base_layer_utils.autocast_context_manager( 967 self._compute_dtype): --> 968 outputs = self.call(cast_inputs, *args, **kwargs) 969 self._handle_activity_regularization(inputs, outputs) 970 self._set_mask_metadata(inputs, outputs, input_masks) /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_bert.py in call(self, inputs, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, training) 604 # head_mask = tf.constant([0] * self.num_hidden_layers) 605 --> 606 embedding_output = self.embeddings([input_ids, position_ids, token_type_ids, inputs_embeds], training=training) 607 encoder_outputs = self.encoder( 608 [embedding_output, extended_attention_mask, head_mask, output_attentions, output_hidden_states], /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs) 962 # Eager execution on data tensors. 963 with backend.name_scope(self._name_scope()): --> 964 self._maybe_build(inputs) 965 cast_inputs = self._maybe_cast_inputs(inputs) 966 with base_layer_utils.autocast_context_manager( /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _maybe_build(self, inputs) 2414 # operations. 2415 with tf_utils.maybe_init_scope(self): -> 2416 self.build(input_shapes) # pylint:disable=not-callable 2417 # We must set also ensure that the layer is marked as built, and the build 2418 # shape is stored since user defined build functions may not be calling /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_bert.py in build(self, input_shape) 144 "weight", 145 shape=[self.vocab_size, self.hidden_size], --> 146 initializer=get_initializer(self.initializer_range), 147 ) 148 super().build(input_shape) /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint, partitioner, use_resource, synchronization, aggregation, **kwargs) 575 synchronization=synchronization, 576 aggregation=aggregation, --> 577 caching_device=caching_device) 578 if regularizer is not None: 579 # TODO(fchollet): in the future, this should be handled at the /opt/conda/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _add_variable_with_custom_getter(self, name, shape, dtype, initializer, getter, overwrite, **kwargs_for_getter) 741 dtype=dtype, 742 initializer=initializer, --> 743 **kwargs_for_getter) 744 745 # If we set an initializer and the variable processed it, tracking will not /opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py in make_variable(name, shape, dtype, initializer, trainable, caching_device, validate_shape, constraint, use_resource, collections, synchronization, aggregation, partitioner) 139 synchronization=synchronization, 140 aggregation=aggregation, --> 141 shape=variable_shape if variable_shape else None) 142 143 /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs) 257 def __call__(cls, *args, **kwargs): 258 if cls is VariableV1: --> 259 return cls._variable_v1_call(*args, **kwargs) 260 elif cls is Variable: 261 return cls._variable_v2_call(*args, **kwargs) 
/opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in _variable_v1_call(cls, initial_value, trainable, collections, validate_shape, caching_device, name, variable_def, dtype, expected_shape, import_scope, constraint, use_resource, synchronization, aggregation, shape) 218 synchronization=synchronization, 219 aggregation=aggregation, --> 220 shape=shape) 221 222 def _variable_v2_call(cls, /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in getter(**kwargs) 64 65 def getter(**kwargs): ---> 66 return captured_getter(captured_previous, **kwargs) 67 68 return getter /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py in creator_with_resource_vars(next_creator, **kwargs) 1765 kwargs["initial_value"] = kwargs["initial_value"].wrapped_value 1766 -> 1767 return self._create_variable(next_creator, **kwargs) 1768 1769 def distributed_getter(getter, *args, **kwargs): /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/tpu_strategy.py in _create_variable(self, next_creator, **kwargs) 670 tpu_values.TPUMirroredVariable, 671 tpu_values.TPUSyncOnReadVariable, --> 672 **kwargs) 673 674 def _reduce_to(self, reduce_op, value, destinations, experimental_hints): /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/values.py in create_mirrored_variable(strategy, real_mirrored_creator, mirrored_cls, sync_on_read_cls, **kwargs) 692 # here. 693 with tape.stop_recording(): --> 694 value_list = real_mirrored_creator(**kwargs) 695 var_cls = sync_on_read_cls if is_sync_on_read else mirrored_cls 696 result = var_cls(strategy, value_list, aggregation) /opt/conda/lib/python3.7/site-packages/tensorflow/python/distribute/tpu_strategy.py in _real_mirrored_creator(**kwargs) 660 661 with context.device_policy(context.DEVICE_PLACEMENT_SILENT): --> 662 v = next_creator(**kwargs) 663 664 assert not isinstance(v, tpu_values.TPUMirroredVariable) /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in <lambda>(**kwargs) 196 shape=None): 197 """Call on Variable class. 
Useful to force the signature.""" --> 198 previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs) 199 for _, getter in ops.get_default_graph()._variable_creator_stack: # pylint: disable=protected-access 200 previous_getter = _make_getter(getter, previous_getter) /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py in default_variable_creator(next_creator, **kwargs) 2596 synchronization=synchronization, 2597 aggregation=aggregation, -> 2598 shape=shape) 2599 else: 2600 return variables.RefVariable( /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/variables.py in __call__(cls, *args, **kwargs) 261 return cls._variable_v2_call(*args, **kwargs) 262 else: --> 263 return super(VariableMetaclass, cls).__call__(*args, **kwargs) 264 265 /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape) 1432 aggregation=aggregation, 1433 shape=shape, -> 1434 distribute_strategy=distribute_strategy) 1435 1436 def _init_from_args(self, /opt/conda/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape) 1568 name="initial_value", dtype=dtype) 1569 if shape is not None: -> 1570 if not initial_value.shape.is_compatible_with(shape): 1571 raise ValueError( 1572 "The initial value's shape (%s) is not compatible with " /opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in shape(self) 1063 # `_tensor_shape` is declared and defined in the definition of 1064 # `EagerTensor`, in C. -> 1065 self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple()) 1066 except core._NotOkStatusException as e: 1067 six.raise_from(core._status_to_exception(e.code, e.message), None) RuntimeError: Error while creating shape
08-11-2020 05:05:27
08-11-2020 05:05:27
Facing the same issue on GCP + TPU, same TF and TPU versions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
6,405
closed
[s2s] wmt download script use less ram
A few enhancements to https://github.com/huggingface/transformers/pull/6403

The main change:
- rewrite not to load 100GB into RAM - wmt19 is huge!

And then a few small things:
- replaced the default dataset with wmt16, as it's much, much smaller than wmt19 to experiment with (also it seems that at least wmt19-ru-en is missing a test dataset, while wmt16-ru-en has it)
- added lang defaults so it's easy to start experimenting
- moved tqdm to where detailed progress can be seen
- added some extra notes

@sshleifer
08-11-2020 03:44:07
08-11-2020 03:44:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=h1) Report > Merging [#6405](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ecd92ee4fcd515f542c73593a4b6fa0b2c81fc&el=desc) will **decrease** coverage by `2.10%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6405/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6405 +/- ## ========================================== - Coverage 80.10% 78.00% -2.11% ========================================== Files 149 149 Lines 27680 27680 ========================================== - Hits 22173 21591 -582 - Misses 5507 6089 +582 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `28.94% <0.00%> (-67.11%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.98% <0.00%> (-52.81%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `63.95% <0.00%> (-14.54%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.68% <0.00%> (-6.16%)` | :arrow_down: | | [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `82.71% <0.00%> (-2.47%)` | :arrow_down: | | [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.19% <0.00%> (-1.64%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.20% <0.00%> (-1.37%)` | :arrow_down: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6405/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=footer). Last update [b9ecd92...de72b0a](https://codecov.io/gh/huggingface/transformers/pull/6405?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>further testing showed that chunking doesn't make much of a difference - writing one record at a time is almost as fast as writing in chunks of 10K records - I think it's the reading that's the bottleneck here, which we can't optimize. So I removed the chunking from this PR. `wmt19-ru-en` with 37M records converted in ~40mins on my machine using this PR.
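A hedged sketch of the low-RAM pattern discussed in this PR: iterate over a split and append one example at a time to `.source`/`.target` files instead of materialising the whole corpus in memory. It assumes the `datasets` (at the time, `nlp`) `load_dataset` API, and the dataset/pair/output names are only examples, not the script's actual defaults.
```python
from pathlib import Path

from datasets import load_dataset


def save_split(dataset_name: str, pair: str, split: str, out_dir: str) -> None:
    src_lang, tgt_lang = pair.split("-")
    out_path = Path(out_dir)
    out_path.mkdir(parents=True, exist_ok=True)
    ds = load_dataset(dataset_name, pair, split=split)
    with open(out_path / f"{split}.source", "w") as src_f, open(out_path / f"{split}.target", "w") as tgt_f:
        for example in ds:  # one record at a time keeps peak memory small
            translation = example["translation"]
            src_f.write(translation[src_lang].replace("\n", " ") + "\n")
            tgt_f.write(translation[tgt_lang].replace("\n", " ") + "\n")


for split in ("train", "validation", "test"):
    save_split("wmt16", "ro-en", split, "wmt16-ro-en")
```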
transformers
6,404
closed
[lightning_base] fix s2s logging, only make train_loader once
`setup` is called many times (including twice by `trainer.test`), creating a dataloader each time. Will only creating a train_loader on the first call cause bad side effects that I don't understand, @nateraw @williamFalcon? I read the docs (https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html), so I think I'm fine, but I am not sure. cc @stas00 (a minimal sketch of the guard is shown below)

Also:
- add a fast test for run_ner
- [pl] centralize the `data_dir` argument in `add_generic_args`, because of the rule of 3

Checks:
- verified xsum distillation trains well and has good LR logs (warmup + linear decay are honored).
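The minimal sketch referenced above, heavily hedged: the class, helper, and attribute names are illustrative stand-ins, not the actual `lightning_base` code; it only shows the "build the train dataloader once" guard.
```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader


class SummarizationModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.train_loader = None  # built lazily, exactly once

    def setup(self, stage):
        # Lightning may call setup() several times (fit, sanity check, trainer.test),
        # so only build the expensive train dataloader on the first "fit" call.
        if stage == "fit" and self.train_loader is None:
            self.train_loader = self.get_dataloader("train", batch_size=8, shuffle=True)

    def get_dataloader(self, type_path, batch_size, shuffle=False):
        dataset = list(range(100))  # stand-in for the real seq2seq dataset
        return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle)

    def train_dataloader(self):
        return self.train_loader
```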
08-11-2020 02:27:10
08-11-2020 02:27:10
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=h1) Report > Merging [#6404](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/72add6c98f2c0607f088fa0c78d40f11e2efa4c4&el=desc) will **decrease** coverage by `0.11%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6404/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6404 +/- ## ========================================== - Coverage 80.38% 80.26% -0.12% ========================================== Files 156 156 Lines 28058 28058 ========================================== - Hits 22554 22521 -33 - Misses 5504 5537 +33 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.91% <0.00%> (-0.69%)` | :arrow_down: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6404/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=footer). Last update [72add6c...e43061d](https://codecov.io/gh/huggingface/transformers/pull/6404?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
6,403
closed
[s2s] Script to save wmt data to disk
08-11-2020 01:55:57
08-11-2020 01:55:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=h1) Report > Merging [#6403](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/00bb0b25ed66a4878f2e0ffdd1ca65b7684db57e&el=desc) will **decrease** coverage by `0.63%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6403/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #6403 +/- ## ========================================== - Coverage 80.24% 79.60% -0.64% ========================================== Files 149 149 Lines 27680 27680 ========================================== - Hits 22211 22035 -176 - Misses 5469 5645 +176 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.95% <0.00%> (-25.22%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.89% <0.00%> (-0.69%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=footer). Last update [00bb0b2...19f2c61](https://codecov.io/gh/huggingface/transformers/pull/6403?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).