| Column | Type | Values / length |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
7,202
closed
Change to use relative imports in some files & Add python prompt symbols to example codes
- Moved transformers package import statements to relative imports in some files, as in https://github.com/huggingface/transformers/pull/5796 - Added Python prompt symbols in front of the example code, as in most of the codebase. This is a resubmitted version of https://github.com/huggingface/transformers/pull/7188; I closed the previous PR because I used the wrong versions of the style libraries.
09-17-2020 11:51:51
09-17-2020 11:51:51
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=h1) Report > Merging [#7202](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0203ad43bcd0b29423dec6ca1a58ed58300f0d61?el=desc) will **decrease** coverage by `2.45%`. > The diff coverage is `71.42%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7202/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7202 +/- ## ========================================== - Coverage 80.86% 78.41% -2.46% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26114 25322 -792 - Misses 6179 6971 +792 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.29% <ø> (ø)` | | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <ø> (+0.67%)` | :arrow_up: | | [src/transformers/modeling\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <ø> (ø)` | | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <ø> (-10.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.92% <ø> (ø)` | | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `72.36% <ø> (ø)` | | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.90% <ø> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.33% <ø> (ø)` | | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `92.25% <ø> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <0.00%> (ø)` | | | ... and [26 more](https://codecov.io/gh/huggingface/transformers/pull/7202/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=footer). 
Last update [0203ad4...da63e64](https://codecov.io/gh/huggingface/transformers/pull/7202?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,201
closed
added multilabel text classification notebook using distilbert to community notebooks
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
09-17-2020 09:36:57
09-17-2020 09:36:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=h1) Report > Merging [#7201](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/45b0b1ff2f28d83c734b7505e1705b799ce2ff84?el=desc) will **increase** coverage by `2.45%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7201/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7201 +/- ## ========================================== + Coverage 78.41% 80.87% +2.45% ========================================== Files 169 169 Lines 32293 32293 ========================================== + Hits 25323 26116 +793 + Misses 6970 6177 -793 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.68%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (-0.36%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.77% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+10.00%)` | :arrow_up: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7201/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=footer). Last update [45b0b1f...6a1e8df](https://codecov.io/gh/huggingface/transformers/pull/7201?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,200
closed
[RAG] PR to save status of previous RAG code
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
09-17-2020 08:56:58
09-17-2020 08:56:58
transformers
7,199
closed
Why does RoBERTa not label custom tokens as special tokens?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Recently I’ve been working on training from scratch (with no pre-training whatsoever) a RoBERTa model starting from the code in [this tutorial][1]. I am working with a specific corpus I prepared according to my own format <s> ID 10 <i> COUNTRY USA <i> CAPITAL Washington DC </s> Here, `<s>` is supposed to be the `<bos>` token, `<i>` is my own `<sep>` token and `</s>` is the `<eos>` token. I noticed that one of the parameters that can be passed to the `tokenizer.encode_plus` function is `add_special_tokens`. If `add_special_tokens=True`, the encoding of the sentence <s> COUNTRY USA <i> CAPITAL Washington DC </s> becomes <s> <s> COUNTRY USA <i> CAPITAL Washington DC </s> </s> and the `special_tokens_mask` is `1 0 0 0 0 0 0 0 0 1` (1, 8 zeros, 1). When I tried `add_special_tokens=False` on the same sentence `<s> COUNTRY USA <i> CAPITAL Washington DC </s>` the result of the encoding was correct: `<s> COUNTRY USA <i> CAPITAL Washington DC </s>` However, the `special_tokens_mask` is `0 0 0 0 0 0 0 0` (8 zeros). I therefore suspected that (for some reason) my token `<i>` was not recognized as a special token, so this time I tried to encode a sentence setting `add_special_tokens=False` and `True`, using only `<s>` and `</s>`. Token 4 is `<s>`, which is both the `<cls>` token and the `<bos>` token, while token 6 is `</s>`, which acts as `sep_token` and `eos_token`. The first case has `add_special_tokens=False` and its special token mask is full of 0’s, the second case has `add_special_tokens=True` and as expected the `<bos>` and `<eos>` tokens were added by the algorithm. The special tokens mask only shows the first and last tokens as “1”, while all the other 4’s and 6’s are missing. # original sentence <s> ID 10 </s><s> NAME Trevor </s> <s> COUNTRY USA </s><s> CAPITAL Washington DC </s> # encoding with add_special_tokens=False [4, 0, 232, 28, 27, 6, 4, 1, 232, 63, 93, 80, 97, 90, 93, 6, 4, 2, 232, 64, 62, 44, 6, 4, 3, 232, 66, 76, 94, 83, 84, 89, 82, 95, 90, 89, 232, 47, 46, 6] # encoding with add_special_tokens=True [4, 4, 0, 232, 28, 27, 6, 4, 1, 232, 63, 93, 80, 97, 90, 93, 6, 4, 2, 232, 64, 62, 44, 6, 4, 3, 232, 66, 76, 94, 83, 84, 89, 82, 95, 90, 89, 232, 47, 46, 6, 6] # special_tokens_mask with add_special_tokens=True [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] After testing both versions, the result I got with `add_special_tokens=True` was very good, while the second with `add_special_tokens=False` failed. This raises a few issues that I wasn’t able to solve on my own: - How can I access the `special_tokens_mask` to correct it to what it should be? - Where does RoBERTa make use of that mask, if it does? - Is there a method for setting the mask to something I want? e.g. the mask for `<s> ID 10 <i> COUNTRY USA </s>` should be `1 0 0 1 0 0 1` if `<s>`, `</s>` and `<i>` are special tokens.
- If RoBERTa is not the correct model to do this, what model should I go for? Thanks a lot! [1]: https://huggingface.co/blog/how-to-train <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**: https://stackoverflow.com/questions/63917142/why-does-roberta-not-label-custom-tokens-as-special-tokens https://discuss.huggingface.co/t/why-does-roberta-behave-differently-if-i-provide-a-corpus-that-contains-special-tokens/1083/2
09-17-2020 08:27:52
09-17-2020 08:27:52
Hi! That's a very detailed issue, very cool! Thanks for providing so much information. I believe you have a misconception about how to use `add_special_tokens` and how the special token mask is created. ## Adding special tokens Firstly, if you use the `add_special_tokens=True` flag, that means you expect the tokenizer to handle the special tokens on its own: you should not provide your special tokens. As you've shown in the example, if you encode a sequence with this flag, you'll get the encoded sequence surrounded by the `<bos>` and `<eos>` tokens: ```py >>> from transformers import RobertaTokenizer >>> tokenizer = RobertaTokenizer.from_pretrained("roberta-base") >>> input_ids_single_sequence = tokenizer.encode("This is a sequence", add_special_tokens=True) >>> tokenizer.decode(input_ids_single_sequence) "<s>This is a sequence</s>" ``` In your case, you're actually using a `<sep>` token as you're encoding two sequences. You can use the tokenizer to do the same, by passing two sequences to it: ```py >>> input_ids_sequence_pair = tokenizer.encode("This is a sequence", "This is another", add_special_tokens=True) >>> tokenizer.decode(input_ids_sequence_pair) '<s>This is a sequence</s></s>This is another</s>' ``` ⚠️ : Please note that the RoBERTa tokenizer is built using only `<s>` (the BOS token) and `</s>` (the SEP token), with two `</s></s>` as the separator. ## Special token mask If you try generating the special token mask here, you'll see that it has no issue generating it, __as long as you use the `already_has_special_tokens=True` flag__. You've already built the sequences with special tokens, so you should set this flag to `True`. ```py >>> tokenizer.get_special_tokens_mask(input_ids_single_sequence, already_has_special_tokens=True) [1, 0, 0, 0, 0, 1] >>> tokenizer.get_special_tokens_mask(input_ids_sequence_pair, already_has_special_tokens=True) [1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1] ``` ## Custom special tokens In your case you want to use different special tokens than what is done with the original RoBERTa implementation. 
That's okay, but then you should specify it to your tokenizer: ```py >>> tokenizer.add_special_tokens({'sep_token': '<i>'}) 1 # Returns the number of added tokens ``` When calling the `encode` method, you'll now correctly have the specified separator token where it should be: ```py >>> input_ids_sequence_pair = tokenizer.encode("This is a sequence", "This is another", add_special_tokens=True) >>> tokenizer.decode(input_ids_sequence_pair) '<s>This is a sequence <i> <i> This is another <i>' ``` ## Setting a custom encoding behavior In your use-case you show different behavior than the original RoBERTa tokenizer implementation, where you want your sequences encoded as: `<s> [SEQ_A] <i> [SEQ_B] </s>` If you still want to rely on the `add_special_tokens=True` flag in `encode` and the accurate special tokens mask even without respecting the original behavior, you can do so by subclassing the `RobertaTokenizer` (or the fast version) and overriding the following methods `build_inputs_with_special_tokens`, `get_special_tokens_mask`, `create_token_type_ids_from_sequences`: ```py class MyRobertaTokenizer(RobertaTokenizer): def build_inputs_with_special_tokens( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: if token_ids_1 is None: return [self.cls_token_id] + token_ids_0 + [self.eos_token_id] cls = [self.cls_token_id] sep = [self.sep_token_id] eos = [self.eos_token_id] return cls + token_ids_0 + sep + token_ids_1 + eos def get_special_tokens_mask( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False ) -> List[int]: if already_has_special_tokens: if token_ids_1 is not None: raise ValueError( "You should not supply a second sequence if the provided sequence of " "ids is already formatted with special tokens for the model." ) return list(map(lambda x: 1 if x in [self.sep_token_id, self.cls_token_id] else 0, token_ids_0)) if token_ids_1 is None: return [1] + ([0] * len(token_ids_0)) + [1] return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1] def create_token_type_ids_from_sequences( self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None ) -> List[int]: sep = [self.sep_token_id] cls = [self.cls_token_id] eos = [self.eos_token_id] if token_ids_1 is None: return len(cls + token_ids_0 + eos) * [0] return len(cls + token_ids_0 + sep + token_ids_1 + eos) * [0] ``` This will then correctly encode your methods (given you initialize the correct special token as is done in the following snippet): ```py tokenizer = MyRobertaTokenizer.from_pretrained("roberta-base") tokenizer.add_special_tokens({'cls_token': '<s>', 'sep_token': '<i>', 'eos_token': '</s>'}) print(tokenizer.decode(tokenizer.encode("This is a sequence", add_special_tokens=True))) # <s>This is a sequence</s> print(tokenizer.decode(tokenizer.encode("This is a sequence", "This is another", add_special_tokens=True))) # <s>This is a sequence <i> This is another</s> ``` ## Conclusion I hope this helps you! If you need a better understanding of what's happening, I encourage you to read the following documentation pages: - [RobertaTokenizer](https://huggingface.co/transformers/model_doc/roberta.html#transformers.RobertaTokenizer) - [Tokenizer main class](https://huggingface.co/transformers/main_classes/tokenizer.html)<|||||>Thanks a lot for the extremely detailed answer! That cleared up a lot of doubts. 
I worked with some of the code you provided and got to the conclusion that RoBERTa as it is right now is probably not the best fit for my problem, so I'll have to work around that. <|||||>What a wonderful question and answer!
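A note on running the `MyRobertaTokenizer` sketch from the answer above: it assumes a couple of imports the comment omits. A minimal, self-contained setup might look like this (the class body is the one given in the answer):

```py
# Imports assumed by the MyRobertaTokenizer sketch above.
from typing import List, Optional

from transformers import RobertaTokenizer


class MyRobertaTokenizer(RobertaTokenizer):
    # ... override build_inputs_with_special_tokens, get_special_tokens_mask,
    # and create_token_type_ids_from_sequences exactly as shown in the answer ...
    pass


tokenizer = MyRobertaTokenizer.from_pretrained("roberta-base")
tokenizer.add_special_tokens({"cls_token": "<s>", "sep_token": "<i>", "eos_token": "</s>"})
```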
transformers
7,198
closed
how to continue training from a checkpoint with Trainer?
# ❓ Questions & Help ## Details I am trying to continue training my model (gpt-2) from a checkpoint, using Trainer. However when I try to do it the model starts training from 0, not from the checkpoint. I share my code because I don't know where I'm making the mistake. ``` import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') from transformers import TextDataset,DataCollatorForLanguageModeling, AutoTokenizer, GPT2LMHeadModel, Trainer, TrainingArguments tokenizer = AutoTokenizer.from_pretrained("gpt2-large") train_dataset = TextDataset( tokenizer=tokenizer, file_path='textfile (1).txt', block_size=128) data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False, ) model = GPT2LMHeadModel.from_pretrained("checkpoint-9500").to(device) ##HERE I LOAD FROM CHECKPOINT training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=4, # total # of training epochs per_device_train_batch_size=1, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, #eval_dataset=validation_dataset, prediction_loss_only=True, ) trainer.train() ``` Thanks a lot for the help.
09-17-2020 03:24:40
09-17-2020 03:24:40
Hi there, you have to pass the checkpoint path to the method `Trainer.train` to resume training: ``` trainer.train("checkpoint-9500") ``` If you set your logging verbosity to the INFO level (`transformers.logging.set_verbosity_info()`) you should then see information about the training resuming and the number of steps skipped.<|||||>Great, thanks a lot for your help Sylvain. Works perfect. <|||||>Hey, @fumpe which version of transformers were you using here? Please do let me know.<|||||>> Hi there, you have to pass the checkpoint path to the method `Trainer.train` to resume training: > > ``` > trainer.train("checkpoint-9500") > ``` > > If you set your logging verbosity to the INFO level (`transformers.logging.set_verbosity_info()`) you should then see information about the training resuming and the number of steps skipped. @sgugger if training is over, `num_train_epochs`, is reached, how do you load the checkpoint and train say for the next n epochs? when I load the checkpoint, it only trains until that epoch is over ``` trainer.num_train_epochs = trainer.num_train_epochs + 5 trainer.train("checkpoint-9500") ``` <|||||>`Trainer` does not have a `num_train_epochs` attribute that is used, you need to set `trainer.args.num_train_epochs`. To be sure everything is updated, you probably need to re-instantiate the `Trainer` with the new `TrainingArguments` though.<|||||>@sgugger: I wanted to fine tune a language model using ```--resume_from_checkpoint``` since I had sharded the text file into multiple pieces. I noticed that the ```_save()``` in Trainer doesn't save the optimizer & the scheduler state dicts and so I added a couple of lines to save the state dicts. And I printed the learning rate from scheduler using ```lr_scheduler.get_last_lr()``` in ```_load_optimizer_and_scheduler()``` right after this [line](https://github.com/huggingface/transformers/blob/ac12a5ae47b352458694194ae7c8b971310015ee/src/transformers/trainer.py#L1678). It always prints 0 (as the last learning rate) although I initialize the learning rate with a non-zero value during the first run and I am also leaving the warmup args as default (which is 0). Here's the command I am using to run the script: ``` CUDA_VISIBLE_DEVICES=2 python lm_scripts/lm_wwm.py --model_name_or_path "./output_dir/demo1/" --train_file \ "input_data_part2.csv" --validation_file "validation.csv" --logging_dir "./output_dir/demo_log2" --logging_steps 1000 \ --save_steps 1000 --save_total_limit 2 --seed 42 --output_dir "./output_dir/demo2" --evaluation_strategy "epoch" \ --per_device_train_batch_size 5 --per_device_eval_batch_size 10 --do_train --do_eval --resume_from_checkpoint True \ --ignore_data_skip --num_train_epochs 2 ``` Here's the [gist](https://gist.github.com/adithya8/7823d111efec9f3ff5313f2438068c2e) of the changes I made to the trainer code (Lines 904, 905, 1602, 1603). It would be super helpful if you could point me in the right direction to fix this?<|||||>@adithya8 The optimizer, learning rate scheduler, state of the scalers and all the RNGs are saved when doing a checkpoint, you should check the list of files you have in your `checkpoint-xxx` folder.<|||||>Gotcha! Thanks<|||||>When I resume training from a checkpoint, I use a new batch size different from the previous training and it seems that the number of the skipped epoch is wrong. For example, I trained a model for 10 epochs with `per_device_train_batch_size=10` and generate a checkpoint. 
Then I resume training from this checkpoint with `per_device_train_batch_size=10`, the trainer will skip 12 epochs. What should I do if I want to resume training with a different batch size?<|||||>That feature is not supported, you should resume training with the exact same hyperparameters or start a new training if you want to change them.<|||||>I have trained a model and saved the checkpoint,Now I want to increase the number of epochs to continue training, but I have a problem.My training loss is not able to match the previous training, there is a very big difference(First up, then down, but not down to the original level).I increased the number of epochs from 20 to 30. `training_args = TrainingArguments( output_dir= models_path + '/' + chain + '_' + modeltype + 'bert/two/log', save_total_limit=5, do_train=True, do_eval=True, evaluation_strategy='epoch', #eval_steps=1, overwrite_output_dir=True, num_train_epochs=30, per_device_train_batch_size=128, learning_rate=9.282980634054497e-05, weight_decay=0.1531103573951345, warmup_ratio=0.05549995455206759, logging_first_step=True, ignore_data_skip=True, save_steps=100, logging_dir = models_path + '/' + chain + '_' + modeltype + 'bert/two/runs/' + datetime.datetime.strftime(datetime.datetime.now(),'%Y-%m-%d_%H:%M:%S'), logging_steps=10, seed = 19930606 )` `trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset = eva_dataset )` ` trainer.train(resume_from_checkpoint=True)` Thanks a lot for the help.<|||||>> have trained a model and saved the checkpoint,Now I want to increase the number of epochs to continue training, but I have a problem.My training loss is not able to match the previous training, there is a very big difference(First up, then down, but not down to the original level).I increased the number of epochs > That feature is not supported, you should resume training with the exact same hyperparameters or start a new training if you want to change them. For example i want to freeze some layers in the checkpointed model. Is it possible?<|||||>I have two bin files like pytorch_model-00001-of-00002.bin and pytorch_model-00002-of-00002.bin in the checkpoint-1000 folder but it won't load! It still says that "ValueError: Can't find a valid checkpoint at checkpoint-1000."<|||||>Make sure you are using a source install of Transformers. This is a bug that was recently fixed @oscar-eaglewatch <|||||>How do I differentiate between, 1. continue pre-training from a checkpoint with the previous optimizer/scheduler state 2. continue pre-training from a checkpoint but reset the optimizer and the scheduler <|||||>@sgugger I have been changing my hyperparameters and got `Can't find a valid checkpoint`. Now it seems I cannot find the original hyperparameter settings (esp. `for` `per_device_train_batch_size`, `gradient_accumulation_steps`, and `max_steps`). I did find the `max_steps` in some json file in the checkpoint, but not the other hyperparameter settings. How can I find out the settings I should pick?<|||||>Hi, there, I got same error with deepspeed. training noramlly and resume got `Can't find a valid checkpoint` first I try resume_from_checkpoint with `out/lora-Vicuna-chat` (output_path) got `Can't find a valid checkpoint` then I send `out/lora-Vicuna-chat/checkpoint-6000` I can not load the lora weights........ 
``` "base_model.model.model.layers.31.self_attn.k_proj.lora_B.default.weight", "base_model.model.model.layers.31.self_attn.v_proj.weight", "base_model.model.model.layers.31.self_attn.v_proj.lora_A.default.weight", "base_model.model.model.layers.31.self_attn.v_proj.lora_B.default.weight", "base_model.model.model.layers.31.self_attn.o_proj.weight", "base_model.model.model.layers.31.self_attn.o_proj.lora_A.default.weight", "base_model.model.model.layers.31.self_attn.o_proj.lora_B.default.weight", "base_model.model.model.layers.31.self_attn.rotary_emb.inv_freq", "base_model.model.model.layers.31.mlp.gate_proj.weight", "base_model.model.model.layers.31.mlp.gate_proj.lora_A.default.weight", "base_model.model.model.layers.31.mlp.gate_proj.lora_B.default.weight", "base_model.model.model.layers.31.mlp.down_proj.weight", "base_model.model.model.layers.31.mlp.down_proj.lora_A.default.weight", "base_model.model.model.layers.31.mlp.down_proj.lora_B.default.weight", "base_model.model.model.layers.31.mlp.up_proj.weight", "base_model.model.model.layers.31.mlp.up_proj.lora_A.default.weight", "base_model.model.model.layers.31.mlp.up_proj.lora_B.default.weight", "base_model.model.model.layers.31.input_layernorm.weight", Unexpected key(s) in state_dict:"base_model.model.model.layers.31.self_attn.q_proj.lora_A.weight", "base_model.model.model.layers.31.self_attn.q_proj.lora_B.weight", "base_model.model.model.layers.31.self_attn.k_proj.lora_A.weight", "base_model.model.model.layers.31.self_attn.k_proj.lora_B.weight", "base_model.model.model.layers.31.self_attn.v_proj.lora_A.weight", "base_model.model.model.layers.31.self_attn.v_proj.lora_B.weight", "base_model.model.model.layers.31.self_attn.o_proj.lora_A.weight", "base_model.model.model.layers.31.self_attn.o_proj.lora_B.weight", "base_model.model.model.layers.31.mlp.gate_proj.lora_A.weight", "base_model.model.model.layers.31.mlp.gate_proj.lora_B.weight", "base_model.model.model.layers.31.mlp.down_proj.lora_A.weight", "base_model.model.model.layers.31.mlp.down_proj.lora_B.weight", ``` the model with some suffix `default`, but samed model didn't have...... I am confused so much<|||||>Transformers have been a few years old, I thought that it should be mature by now, atleast 1 core version that works flawlessly all the way. But man, basic and simple issues like saving checkpoints, weight reload, resume, skip dataset etc... with pure transformers or deepspeed...still bumping around, real pain in the b** :( ... I must say that I admired Yolov5 most in the way they described things, annoucing news, bugs fixed. My 2 cents for this great repo, it deserves much more than this.<|||||>Falling back to 4.28.1 :((
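A condensed sketch of the two patterns discussed in this thread, assuming `model`, `training_args`, `data_collator`, and `train_dataset` are defined as in the original question (paths and epoch counts are illustrative):

```py
from transformers import Trainer, TrainingArguments

# 1) Resume an interrupted run from a checkpoint folder. Recent versions accept the
#    keyword form; older versions took the path positionally: trainer.train("checkpoint-9500").
trainer = Trainer(model=model, args=training_args,
                  data_collator=data_collator, train_dataset=train_dataset)
trainer.train(resume_from_checkpoint="./results/checkpoint-9500")

# 2) Train for more epochs after a run has finished: re-instantiate the Trainer with
#    updated TrainingArguments rather than mutating trainer.num_train_epochs.
more_args = TrainingArguments(output_dir="./results", num_train_epochs=8)
trainer = Trainer(model=model, args=more_args,
                  data_collator=data_collator, train_dataset=train_dataset)
trainer.train(resume_from_checkpoint="./results/checkpoint-9500")
```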
transformers
7,197
closed
[s2s]add wandb summary metric that tracks best val bleu/rouge
Right now, wandb just shows the last value, not the best value (associated with the checkpoint). So if you overtrained something, you can't see on the wandb page that it was good.
09-17-2020 02:46:15
09-17-2020 02:46:15
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
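A sketch of the kind of summary tracking this issue asks for; this is illustrative only (metric names and the loop are stand-ins, not the actual patch):

```py
# Keep a wandb summary key updated with the best validation metric seen so far,
# so the run page shows more than just the last logged value.
import wandb

run = wandb.init(project="seq2seq-finetune")  # project name is an example

best_val_bleu = float("-inf")
for step, val_bleu in enumerate([20.1, 22.3, 21.7]):  # stand-in validation scores
    wandb.log({"val_bleu": val_bleu}, step=step)
    if val_bleu > best_val_bleu:
        best_val_bleu = val_bleu
        run.summary["best_val_bleu"] = best_val_bleu
```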
transformers
7,196
closed
[s2s] fix kwarg typo
09-17-2020 01:58:23
09-17-2020 01:58:23
transformers
7,195
closed
GPT2 Heatmap Error: 'Parameter' object has no attribute 'get_shape'
``` import numpy as np import tensorflow as tf def compute_textual_saliency(model, embedding_matrix, tokenizer, text): token_ids = tokenizer.encode(text, add_special_tokens=True) vocab_size = embedding_matrix.get_shape()[0] heatmap_data = [] for masked_token_index in range(len(token_ids)): # print(f'processing token {masked_token_index + 1} / {len(token_ids)}') if masked_token_index == 0: heatmap_data.append({ 'token': '[CLR]', 'meta': ['', '', ''], 'heat': [1] + [0] * (len(token_ids) - 1) }) elif masked_token_index == len(token_ids) - 1: heatmap_data.append({ 'token': ' ', 'format': True }) heatmap_data.append({ 'token': '[SEP]', 'meta': ['', '', ''], 'heat': [0] * (len(token_ids) - 1) + [1] }) else: # Get the actual token target_token = tokenizer.convert_ids_to_tokens( token_ids[masked_token_index]) if target_token[0:2] == '##': target_token = target_token[2:] else: heatmap_data.append({ 'token': ' ', 'format': True }) # integers are not differentable, so use a one-hot encoding # of the intput token_ids_tensor = tf.constant([ token_ids[0:masked_token_index] + [tokenizer.mask_token_id] + token_ids[masked_token_index + 1:] ], dtype='int32') token_ids_tensor_one_hot = tf.one_hot(token_ids_tensor, vocab_size) # To select, the correct output witch is what the importance # measure targets, create a masking tensor. tf.gather_nd could also # be used, but this is easier. output_mask = np.zeros((1, len(token_ids), vocab_size)) output_mask[0, masked_token_index, token_ids[masked_token_index]] = 1 output_mask_tensor = tf.constant(output_mask, dtype='float32') # Compute gradient of the logits of the correct target, w.r.t. the # input with tf.GradientTape(watch_accessed_variables=False) as tape: tape.watch(token_ids_tensor_one_hot) inputs_embeds = tf.matmul(token_ids_tensor_one_hot,embedding_matrix) predict, = model({"inputs_embeds": inputs_embeds}) predict_mask_correct_token = tf.reduce_sum(predict * output_mask_tensor) # Get the top-3 predictions (_, top_3_indices) = tf.math.top_k(predict[0, masked_token_index, :], 3) top_3_predicted_tokens = tokenizer.convert_ids_to_tokens(top_3_indices) # compute the connectivity connectivity_non_normalized = tf.norm( tape.gradient(predict_mask_correct_token, token_ids_tensor_one_hot), axis=2) connectivity_tensor = ( connectivity_non_normalized / tf.reduce_max(connectivity_non_normalized) ) connectivity = connectivity_tensor[0].numpy().tolist() heatmap_data.append({ 'token': target_token, 'meta': top_3_predicted_tokens, 'heat': connectivity }) return heatmap_data ``` ``` text = ("context the formal study of grammar is an important part of education" " from a young age through advanced learning though the rules taught" " in schools are not a grammar in the sense most linguists use") from transformers import TFDistilBertForMaskedLM, DistilBertTokenizer dbert_model = TFDistilBertForMaskedLM.from_pretrained('distilbert-base-uncased') dbert_tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') dbert_embmat = dbert_model.distilbert.embeddings.word_embeddings from transformers import AutoTokenizer, AutoModelWithLMHead gpt2_model = AutoModelWithLMHead.from_pretrained("gpt2") gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2") gpt2_embmat = gpt2_model.transformer.wte.weight from textualheatmap import TextualHeatmap heatmap = TextualHeatmap(facet_titles = ['BERT', 'Distil BERT'], show_meta=True) heatmap.set_data([ compute_textual_saliency(dbert_model, dbert_embmat, dbert_tokenizer, text), compute_textual_saliency(gpt2_model, gpt2_embmat, 
gpt2_tokenizer, text) ]) ``` `AttributeError: 'Parameter' object has no attribute 'get_shape'`
09-17-2020 01:33:15
09-17-2020 01:33:15
As the error says, `get_shape` does not exist in PyTorch parameters. Use `[...].shape` instead of `[...].get_shape()`.
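A minimal sketch of the suggested change for the snippet above; `.shape` is available on both PyTorch parameters and TensorFlow variables, so the same saliency code can read the vocabulary size for either model:

```py
# Sketch of the fix: use .shape instead of .get_shape().
from transformers import AutoModelWithLMHead

gpt2_model = AutoModelWithLMHead.from_pretrained("gpt2")
gpt2_embmat = gpt2_model.transformer.wte.weight  # torch.nn.Parameter

vocab_size = gpt2_embmat.shape[0]  # works; gpt2_embmat.get_shape() raises AttributeError
print(vocab_size)                  # 50257 for the GPT-2 vocabulary
```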
transformers
7,194
closed
examples/seq2seq/__init__.py mutates sys.path
(**edit**: the PR went through a change; I updated the description here). Fixes part of the issue raised here: https://github.com/huggingface/transformers/pull/7109#issuecomment-693213029 This PR replaces the hack of trying to import modules via `foo` and, if that failed, via `.foo`, depending on where the test is invoked from. Now it's possible to do: ``` pytest ./examples/seq2seq/test_seq2seq_examples.py --collect-only -q cd examples pytest ./seq2seq/test_seq2seq_examples.py --collect-only -q cd seq2seq/ pytest ./test_seq2seq_examples.py --collect-only -q ``` and all the imports just work. (I used `--collect-only -q` just to validate that it works, as it's a fast run.) Script invocations work too: ``` python ./examples/seq2seq/run_eval.py -h cd examples python ./seq2seq/run_eval.py -h cd seq2seq/ python ./run_eval.py -h ``` While `conftest.py` worked for tests, it left the scripts broken. So in the 2nd attempt I replaced `conftest.py` with an `__init__.py` that does the same and works for both types. If it's fitting, we can do the same for the rest of the `examples` sub-dirs. @sshleifer
09-17-2020 00:58:35
09-17-2020 00:58:35
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=h1) Report > Merging [#7194](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4f6e52574248636352a746cfe6cc0b13cf3eb7f9?el=desc) will **increase** coverage by `2.14%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7194/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7194 +/- ## ========================================== + Coverage 78.63% 80.78% +2.14% ========================================== Files 174 174 Lines 33446 33446 ========================================== + Hits 26300 27018 +718 + Misses 7146 6428 -718 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.10% <0.00%> (-29.80%)` | :arrow_down: | | [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.87% <0.00%> (-0.36%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (ø)` | | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7194/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=footer). Last update [4f6e525...2c2aa9e](https://codecov.io/gh/huggingface/transformers/pull/7194?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Have you tested invoking run_eval.py/finetune.py from examples/seq2seq? That is the workflow documented in `seq2seq/README.md` This command is cheap and if it starts running you know you are fine: https://discuss.huggingface.co/t/good-command-to-test-examples-seq2seq-refactors/982<|||||>Ah, we need tests that aren't tests too :) Good catch! ``` cd examples/seq2seq/ python ./run_eval.py -h Traceback (most recent call last): File "./run_eval.py", line 15, in <module> from .utils import calculate_bleu, calculate_rouge, parse_numeric_n_bool_cl_kwargs, use_task_specific_params ImportError: attempted relative import with no known parent package ``` back to the drawing board. Thanks for your vigilance, @sshleifer <|||||>OK, so while `conftest.py` worked for tests, it left the scripts broken. So in the 2nd attempt I replaced `conftest.py` with `__init__.py` that does the same and works for both types. @sshleifer, @LysandreJik #### Header added by sam because what is below is tangentially related to this PR :) p.s. it looks like importing a local module from a script is a very controversial thing from reading SO. If using the retry approach, it's still probably more readable to do: ``` if __name__ == '__main__': from mymodule import aaa else: from .mymodule import aaa ``` which is more telling. <|||||>> This command is cheap and if it starts running you know you are fine: Much faster to test is to: ``` python finetune.py -h ```<|||||>Pycharm users who develop on seq2seq a lot (I can think of 0 others haha) may need to change Preferences/Project Structure/Content Root to accommodate this. I posted [instructions](https://discuss.huggingface.co/t/pycharm-project-structure-seq2seq/1206) on the forums that I'll delete if this doesn't get merged: .<|||||>Merging this with the idea that there is a 10% chance we will have to revert because of some unforeseen dealbreaking issue.
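For context, a minimal sketch of the kind of `sys.path`-mutating `__init__.py` this PR describes (the file actually merged may differ from this):

```py
# examples/seq2seq/__init__.py (sketch): prepend this directory to sys.path so that
# plain `import utils`-style imports resolve whether tests and scripts are invoked
# from the repo root, from examples/, or from examples/seq2seq/.
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
```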
transformers
7,193
closed
Add bos and eos to gpt2 tokenizer
Fix https://github.com/huggingface/transformers/issues/3311 Test: ``` tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') tokenizer.add_special_tokens({'pad_token': '<pad>'}) inputs = tokenizer.batch_encode_plus(["My cute dog", "My cute dog is is"], add_special_tokens=True, padding=True, return_tensors="pt") ``` output: before: ``` {'input_ids': tensor([[ 3666, 13779, 3290, 50257, 50257], [ 3666, 13779, 3290, 318, 318]]), 'attention_mask': tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])} ``` after: ``` {'input_ids': tensor([[50256, 3666, 13779, 3290, 50256, 50257, 50257], [50256, 3666, 13779, 3290, 318, 318, 50256]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1]])} ```
09-17-2020 00:26:22
09-17-2020 00:26:22
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,192
closed
[s2s] run_eval/run_eval_search tweaks
working with @sshleifer on making run_eval/run_eval_search and tests better.
09-17-2020 00:20:55
09-17-2020 00:20:55
sadly I think `pytest.param` was needed or each entry needs to be a list or something: https://circle-production-customer-artifacts.s3.amazonaws.com/picard/forks/5bdabdd888af1f000130874a/278214696/5f62bdadb33de21283aa7519-0-build/artifacts/test_output.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200917T014628Z&X-Amz-SignedHeaders=host&X-Amz-Expires=59&X-Amz-Credential=AKIAJR3Q6CR467H7Z55A%2F20200917%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=4ec6c5af819aab91ed8103865ad61af3617a43c8ffa34fe5192a45e3067e3cc0 ``` __________ ERROR collecting examples/seq2seq/test_seq2seq_examples.py __________ examples/seq2seq/test_seq2seq_examples.py::test_finetune: in "parametrize" the number of names (1): ['model'] must be equal to the number of values (31): patrickvonplaten/t5-tiny-random __________ ERROR collecting examples/seq2seq/test_seq2seq_examples.py __________ ```<|||||>> sadly I think `pytest.param` was needed or each entry needs to be a list or something: right I didn't run all tests, didn't see the other entries used an inefficient use of `parametrize`. Instead of: ``` @pytest.mark.parametrize("model", [BART_TINY, MBART_TINY]) ``` They were using: ``` @pytest.mark.parametrize(["model"], ... ``` notice the argument names are normally a string and not a list. Since it was made into the list, that's why all the extra code. a recent addition: https://huggingface.co/transformers/master/testing.html#parametrization BTW, if we switch to `parametrized` (which is now part of the dependencies for `dev`) it works identically with `unittest` and `pytest` tests - same API. I fixed it and now: ``` pytest --disable-warnings examples/seq2seq/test_seq2seq_examples.py --collect-only -q [...] examples/seq2seq/test_seq2seq_examples.py::test_run_eval_slow[sshleifer/bart-tiny-random] examples/seq2seq/test_seq2seq_examples.py::test_run_eval examples/seq2seq/test_seq2seq_examples.py::test_run_eval_slow[sshleifer/tiny-mbart] [...] ``` <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=h1) Report > Merging [#7192](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/45b0b1ff2f28d83c734b7505e1705b799ce2ff84?el=desc) will **increase** coverage by `1.08%`. > The diff coverage is `n/a`. 
[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7192/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7192 +/- ## ========================================== + Coverage 78.41% 79.50% +1.08% ========================================== Files 169 169 Lines 32293 32293 ========================================== + Hits 25323 25673 +350 + Misses 6970 6620 -350 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.72%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `93.23% <0.00%> (+74.20%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/7192/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=footer). Last update [45b0b1f...d329cf2](https://codecov.io/gh/huggingface/transformers/pull/7192?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
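A condensed sketch of the `parametrize` usage discussed above: argument names go in a comma-separated string, so a single-argument test needs neither list-wrapped names nor `pytest.param` entries (model names taken from the thread):

```py
import pytest

BART_TINY = "sshleifer/bart-tiny-random"
MBART_TINY = "sshleifer/tiny-mbart"

# Correct form: the first argument is the string "model"; each list entry is one value.
@pytest.mark.parametrize("model", [BART_TINY, MBART_TINY])
def test_run_eval_slow(model):
    assert model  # placeholder body; the real test invokes run_eval.py

# The failing form wrapped the name in a list, @pytest.mark.parametrize(["model"], ...),
# which makes pytest expect a tuple of values per entry instead of a bare string.
```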
transformers
7,191
closed
Trainer multi label
This is a follow-up to #7126. The same kinds of models that can output multiple predictions expect multiple labels (not named "labels"), so the evaluation code needs to be changed for this. To support models built by users, I added a `label_names` field in `TrainingArguments` which contains the label names. If the user does not set it, it defaults to `["labels"]` for most models and to `["start_positions", "end_positions"]` for question-answering models, so it works seamlessly for all Transformers models. I ended up writing a few util functions that concat/numpify tensors or nested lists/tuples of tensors to avoid testing everywhere in `Trainer`; I think the design is cleaner this way and it also supports models with crazy outputs (if we set `output_attentions=True`, for instance). I also added a test for multiple label predictions.
09-16-2020 21:48:46
09-16-2020 21:48:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=h1) Report > Merging [#7191](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.45%`. > The diff coverage is `70.83%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7191/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7191 +/- ## ========================================== - Coverage 80.86% 79.41% -1.46% ========================================== Files 169 169 Lines 32293 32322 +29 ========================================== - Hits 26115 25668 -447 - Misses 6178 6654 +476 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `58.88% <56.52%> (-1.41%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.72% <80.00%> (+0.25%)` | :arrow_up: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.74% <100.00%> (+0.07%)` | :arrow_up: | | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/7191/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=footer). Last update [3babef8...29ce623](https://codecov.io/gh/huggingface/transformers/pull/7191?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
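An illustrative sketch of the `label_names` field described in the PR body above (values are examples; per the description it defaults to `["labels"]`, or to `["start_positions", "end_positions"]` for question-answering models):

```py
from transformers import TrainingArguments

# For a custom model whose forward() takes start_positions and end_positions
# instead of a single `labels` argument, tell the Trainer which inputs are labels:
training_args = TrainingArguments(
    output_dir="./out",
    label_names=["start_positions", "end_positions"],
)
```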
transformers
7,190
closed
[build scripts] update
Fill in the missing info for the build script as provided by the searcher. (This is also an experiment in re-using the branch for a follow-up PR: it shows the old commits that have already been merged, but the diff is new. Not sure if there is a better way.)
09-16-2020 21:39:30
09-16-2020 21:39:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=h1) Report > Merging [#7190](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0203ad43bcd0b29423dec6ca1a58ed58300f0d61?el=desc) will **decrease** coverage by `2.21%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7190/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7190 +/- ## ========================================== - Coverage 80.86% 78.65% -2.22% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26114 25399 -715 - Misses 6179 6894 +715 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `33.57% <0.00%> (-65.72%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.85% <0.00%> (-55.15%)` | :arrow_down: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: | | [src/transformers/tokenization\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `57.28% <0.00%> (-15.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/7190/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=footer). Last update [0203ad4...dd95009](https://codecov.io/gh/huggingface/transformers/pull/7190?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I'm going to add it here https://github.com/huggingface/transformers/pull/7153 where it really belongs
transformers
7,189
closed
When trying to train LongformerModel got **forward() got an unexpected keyword argument 'labels'**
@sgugger @patrickvonplaten ## Environment info - `transformers` version: 3.1.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False The model I am using: Longformer The problem arises when using my own modified scripts (based on [How to train a new language model from scratch using Transformers and Tokenizers](https://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb) ) ``` !pip uninstall -y tensorflow !pip install git+https://github.com/huggingface/transformers !pip list | grep -E 'transformers|tokenizers' from pathlib import Path from tokenizers import (ByteLevelBPETokenizer, CharBPETokenizer, SentencePieceBPETokenizer, BertWordPieceTokenizer) tokenizer = ByteLevelBPETokenizer() tokenizer.train(files="/content/drive/My Drive/summa/news_text.txt", vocab_size=5000, min_frequency=5, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", ]) !mkdir LongformerL tokenizer.save_model("./LongformerL") from tokenizers.implementations import CharBPETokenizer from tokenizers.processors import BertProcessing tokenizer = ByteLevelBPETokenizer( "./LongformerL/vocab.json", "./LongformerL/merges.txt", ) tokenizer._tokenizer.post_processor = BertProcessing( ("</s>", tokenizer.token_to_id("</s>")), ("<s>", tokenizer.token_to_id("<s>")), ) tokenizer.enable_truncation(max_length=512) from transformers import LongformerConfig, LongformerModel config = LongformerConfig( vocab_size=5_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1, ) from transformers import LongformerTokenizerFast tokenizer = LongformerTokenizerFast.from_pretrained("./LongformerL", max_len=512) from transformers import LongformerModel model = LongformerModel(config=config) model.num_parameters() from transformers import LineByLineTextDataset dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="/content/drive/My Drive/summa/news_text.txt", block_size=128, ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="/content/drive/My Drive/dataset_contract", overwrite_output_dir=True, per_device_train_batch_size=2, num_train_epochs=5, save_steps=1_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) trainer.train() ``` The tasks I am working on is my own task or dataset. I'd like to train a Longformer from scratch. ``` Epoch: 0% 0/5 [00:00<?, ?it/s] Iteration: 0% 0/50 [00:00<?, ?it/s] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-47-0c647bc3a8b8> in <module>() ----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()') 6 frames <decorator-gen-60> in time(self, line, cell, local_ns) <timed eval> in <module>() /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), **TypeError: forward() got an unexpected keyword argument 'labels'** ```
09-16-2020 21:15:50
09-16-2020 21:15:50
The base models for all architectures are not usable with `Trainer` since they don't have a training objective. You need to pick the model class adapted to your task.<|||||>> The base models for all architectures are not usable with `Trainer` since they don't have a training objective. You need to pick the model class adapted to your task. Thank you! I replaced the model class "LongformerModel" with the "LongformerForSequenceClassification". and got an error. ValueError: Expected input batch_size (2) to match target batch_size (256). Which class of Longformer model is suitable for me (if any)? Actually I'd like to perform a multi-classification task with my dataset. ``` ValueError Traceback (most recent call last) <ipython-input-76-0c647bc3a8b8> in <module>() ----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()') 11 frames <decorator-gen-60> in time(self, line, cell, local_ns) <timed eval> in <module>() /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 2214 if input.size(0) != target.size(0): 2215 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' -> 2216 .format(input.size(0), target.size(0))) 2217 if dim == 2: 2218 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) ValueError: Expected input batch_size (2) to match target batch_size (256). ```<|||||>You can't use a `DataCollatorForLanguageModeling` for a classification task, this data collator prepares the data for language modeling (with masking since you set that option to `True`).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
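For reference, a minimal sketch (not the original poster's code) of what a multi-class setup could look like with `LongformerForSequenceClassification`: the dataset yields one integer label per example, so the default collator can batch them, instead of the token-level labels produced by `DataCollatorForLanguageModeling`. The tokenizer path comes from the issue above; the tiny `examples` list and `num_labels=3` are illustrative assumptions.

```python
# Illustrative sketch only: train a from-scratch Longformer classifier with Trainer.
import torch
from torch.utils.data import Dataset
from transformers import (
    LongformerConfig,
    LongformerForSequenceClassification,
    LongformerTokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = LongformerTokenizerFast.from_pretrained("./LongformerL", max_len=512)

config = LongformerConfig(
    vocab_size=5_000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
    num_labels=3,  # number of classes in the task (illustrative)
)
model = LongformerForSequenceClassification(config=config)

# Hypothetical (text, label) pairs standing in for the real dataset.
examples = [
    ("first news article ...", 0),
    ("second news article ...", 1),
    ("third news article ...", 2),
]

class ClassificationDataset(Dataset):
    """Turns (text, label) pairs into padded encodings plus a single `labels` entry."""

    def __init__(self, pairs, tokenizer):
        self.pairs = pairs
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        text, label = self.pairs[idx]
        enc = self.tokenizer(text, truncation=True, padding="max_length", max_length=512)
        enc = {k: torch.tensor(v) for k, v in enc.items()}
        enc["labels"] = torch.tensor(label)  # one label per example, not per token
        return enc

train_dataset = ClassificationDataset(examples, tokenizer)
training_args = TrainingArguments(output_dir="./clf-output", per_device_train_batch_size=2)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```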
transformers
7,188
closed
Change to use relative imports in some files & Add python prompt symbols to example codes
- Moved transformers package import statements to relative imports in some files as in https://github.com/huggingface/transformers/pull/5796 - Added python prompt symbols in front of the example codes as in most of the codebase
09-16-2020 20:42:50
09-16-2020 20:42:50
Hello, thank you for the review! Now the PR passes all the tests, but I think I messed up many commits by using a wrong version of style libraries. Would it be okay to close and resubmit this PR? (https://github.com/huggingface/transformers/pull/7202)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=h1) Report > Merging [#7188](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0203ad43bcd0b29423dec6ca1a58ed58300f0d61?el=desc) will **increase** coverage by `0.46%`. > The diff coverage is `71.42%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7188/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7188 +/- ## ========================================== + Coverage 80.86% 81.33% +0.46% ========================================== Files 169 169 Lines 32293 32293 ========================================== + Hits 26114 26264 +150 + Misses 6179 6029 -150 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.29% <ø> (ø)` | | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <ø> (+0.67%)` | :arrow_up: | | [src/transformers/modeling\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kcHIucHk=) | `97.83% <ø> (ø)` | | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <ø> (ø)` | | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.92% <ø> (ø)` | | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `72.36% <ø> (ø)` | | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.90% <ø> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.33% <ø> (ø)` | | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `92.25% <ø> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <0.00%> (ø)` | | | ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/7188/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=footer). Last update [0203ad4...4bc0e5f](https://codecov.io/gh/huggingface/transformers/pull/7188?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,187
closed
[s2s-wip] ray instructions
Fixes #7170 ### TODO - test - finish instructions - modify ray_tune_cli to not hardcode all possible lightning args. - probably more things
09-16-2020 20:19:54
09-16-2020 20:19:54
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,186
closed
[s2s] distributed eval cleanup
Fixes #7176 - [x] Fix determinism issue caused by hardcoding num_beams - [x] local_rank 0 logging - [x] local_rank 0 tqdm - [x] same results as `run_eval.py` - [x] deeper investigation of non-determinism. Gens the same. Labels different? - [x] save json to one line.
09-16-2020 19:17:49
09-16-2020 19:17:49
cc @stas00
transformers
7,185
closed
Create README.md for indobert-lite-large-p2
Create README.md for indobert-lite-large-p2
09-16-2020 18:50:37
09-16-2020 18:50:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=h1) Report > Merging [#7185](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.43%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7185/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7185 +/- ## ========================================== - Coverage 80.86% 79.43% -1.44% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26115 25653 -462 - Misses 6178 6640 +462 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7185/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=footer). Last update [3babef8...57bf96a](https://codecov.io/gh/huggingface/transformers/pull/7185?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,184
closed
Create README.md for indobert-lite-large-p1
Create README.md for indobert-lite-large-p1
09-16-2020 18:49:46
09-16-2020 18:49:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=h1) Report > Merging [#7184](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.08%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7184/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7184 +/- ## ========================================== - Coverage 80.86% 79.78% -1.09% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26115 25764 -351 - Misses 6178 6529 +351 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `19.71% <0.00%> (-72.34%)` | :arrow_down: | | [src/transformers/tokenization\_bert\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `60.39% <0.00%> (-34.66%)` | :arrow_down: | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `67.79% <0.00%> (-31.36%)` | :arrow_down: | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `75.00% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `75.91% <0.00%> (-21.17%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `88.17% <0.00%> (-5.02%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.45% <0.00%> (-3.01%)` | :arrow_down: | | ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/7184/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=footer). Last update [3babef8...a932393](https://codecov.io/gh/huggingface/transformers/pull/7184?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,183
closed
Create README.md for indobert-lite-base phase 2 model card
Create README.md for indobert-lite-base phase 2 model card
09-16-2020 18:48:39
09-16-2020 18:48:39
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=h1) Report > Merging [#7183](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.41%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7183/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7183 +/- ## ========================================== - Coverage 80.86% 79.45% -1.42% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26115 25657 -458 - Misses 6178 6636 +458 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-4.77%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7183/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=footer). Last update [3babef8...bac886f](https://codecov.io/gh/huggingface/transformers/pull/7183?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,182
closed
Create README.md for indobert-lite-base-p1
Create README.md for indobert-lite-base-p1
09-16-2020 18:47:24
09-16-2020 18:47:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=h1) Report > Merging [#7182](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `3.82%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7182/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7182 +/- ## ========================================== - Coverage 80.86% 77.04% -3.83% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26115 24880 -1235 - Misses 6178 7413 +1235 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: | | [src/transformers/tokenization\_bert\_generation.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9nZW5lcmF0aW9uLnB5) | `39.28% <0.00%> (-55.36%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.37% <0.00%> (-42.10%)` | :arrow_down: | | [src/transformers/tokenization\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.60% <0.00%> (-20.44%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `47.05% <0.00%> (-13.24%)` | :arrow_down: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7182/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=footer). Last update [3babef8...9f70c36](https://codecov.io/gh/huggingface/transformers/pull/7182?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,181
closed
Create README.md for indobert-large-p2 model card
Create README.md for indobert-large-p2 model card
09-16-2020 18:44:58
09-16-2020 18:44:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=h1) Report > Merging [#7181](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.42%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7181/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7181 +/- ## ========================================== - Coverage 80.86% 79.44% -1.43% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26115 25656 -459 - Misses 6178 6637 +459 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-4.77%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7181/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=footer). Last update [3babef8...9d0a80a](https://codecov.io/gh/huggingface/transformers/pull/7181?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,180
closed
Create README.md for indobert-large-p1 model card
Create README.md for indobert-large-p1 model card
09-16-2020 18:42:36
09-16-2020 18:42:36
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=h1) Report > Merging [#7180](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **increase** coverage by `1.04%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7180/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7180 +/- ## ========================================== + Coverage 80.86% 81.91% +1.04% ========================================== Files 169 169 Lines 32293 32293 ========================================== + Hits 26115 26453 +338 + Misses 6178 5840 -338 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.60% <0.00%> (-20.44%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.69% <0.00%> (-0.54%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.28% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.54% <0.00%> (+0.35%)` | :arrow_up: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.40%)` | :arrow_up: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.67%)` | :arrow_up: | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `94.11% <0.00%> (+1.17%)` | :arrow_up: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7180/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=footer). Last update [3babef8...61fe03d](https://codecov.io/gh/huggingface/transformers/pull/7180?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,179
closed
Create README.md for indobert-base-p1 model card
Create README.md for indobert-base-p1 model card
09-16-2020 18:40:32
09-16-2020 18:40:32
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=h1) Report > Merging [#7179](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3babef815c5372d039202b2247a4ed6afb2a410a?el=desc) will **decrease** coverage by `1.42%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7179/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7179 +/- ## ========================================== - Coverage 80.86% 79.44% -1.43% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26115 25656 -459 - Misses 6178 6637 +459 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.70% <0.00%> (-4.77%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7179/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=footer). Last update [3babef8...c5157d5](https://codecov.io/gh/huggingface/transformers/pull/7179?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for sharing – Next time please open just one PR it will be easier to fix things!
transformers
7,178
closed
Create README.md for indobert-base-p2
Create README.md for indobert-base-p2 model card
09-16-2020 18:40:05
09-16-2020 18:40:05
transformers
7,177
closed
RuntimeError: CUDA out of memory.
I'm trying to run run_language_modeling.py and fine-tune the GPT-2 model with my own text dataset with the following command: %%bash export TRAIN_FILE=Models/Data/train.txt export TEST_FILE=Models/Data/valid.txt export MODEL_NAME=gpt2 export OUTPUT_DIR=output python run_language_modeling.py \ --output_dir=output \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --cache_dir=None I am currently using the Colab environment to run the script on a GPU but encountered the following error: RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 15.90 GiB total capacity; 14.75 GiB already allocated; 225.81 MiB free; 14.81 GiB reserved in total by PyTorch) I noticed that it might be possible to resolve this issue by changing the batch size, but I am unsure how to do this. I would love any pointers ( --train_batch_size 16 wasn't an option with this file)! Thank you so much!
09-16-2020 18:33:26
09-16-2020 18:33:26
Found solution: `--per_gpu_train_batch_size=1`. I will be closing this issue now, thank you so much! Sorry for the trouble!
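For anyone landing here later, a small illustrative sketch (assumed values, not part of run_language_modeling.py itself) of how the same knobs look when building `TrainingArguments` in Python: `per_device_train_batch_size` is the current name behind the `--per_gpu_train_batch_size` flag, and gradient accumulation can recover a larger effective batch size without the memory cost.

```python
# Illustrative only: shrink the per-device batch to fit GPU memory, then use
# gradient accumulation to keep a similar effective batch size.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    do_train=True,
    do_eval=True,
    per_device_train_batch_size=1,   # equivalent to --per_gpu_train_batch_size=1
    gradient_accumulation_steps=16,  # effective batch size of 1 * 16 = 16
)

# Per-step batch size across the visible GPUs (before gradient accumulation).
print(training_args.train_batch_size)
```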
transformers
7,176
closed
distributed eval cleanup
- [x] local_rank 0 logging - [x] local_rank 0 tqdm - [x] same scores as `run_eval.py` - [x] deeper investigation of non-determinism. Gens the same. Labels different? - [x] save json to one line.
09-16-2020 17:49:42
09-16-2020 17:49:42
transformers
7,175
closed
Fix a few countings (steps / epochs) in trainer_tf.py
Mainly for @jplu, but @sgugger might have some comments, especially for issue 2. ## Description This PR fixes a few countings (steps / epochs) in [trainer_tf.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py). Since these issues are tied together, I decided to fix them in one PR. BTW, I didn't find a `test_tf_trainer.py` similar to `test_trainer.py`, so no test added. I just relaunched the commands to make sure the fixed code gives the desired outputs. I put some comments in the code --> Some of them might be removed after the code is reviewed. Here is the list of issues and their causes: 1. Potential `ZeroDivisionError` for lines 498 - 501 (similar to PR #7125 ) epochs_trained = self.global_step // (self.num_train_examples // self.args.gradient_accumulation_steps) steps_trained_in_current_epoch = self.global_step % ( self.num_train_examples // self.args.gradient_accumulation_steps ) 2. The calculation of `epochs_trained` and `steps_trained_in_current_epoch` in 1. should be something like self.global_step // (self.num_train_examples // self.total_train_batch_size) instead, rather than using `self.args.gradient_accumulation_steps`. In the PyTorch trainer, we have (line 609) num_update_steps_per_epoch = len(train_dataloader) // self.args.gradient_accumulation_steps but there, `len(train_dataloader)` is the number of batches, not the number of examples. 3. Lines 492, 498 and 499 self.global_step = iterations.numpy() .... epochs_trained = self.global_step // (self.num_train_examples // self.args.gradient_accumulation_steps) steps_trained_in_current_epoch = self.global_step % ( self.num_train_examples // self.args.gradient_accumulation_steps ) count some training progress before the checkpoint is restored (line 511) ckpt.restore(self.model.ckpt_manager.latest_checkpoint).expect_partial() These countings should be done after the optimizer is restored, otherwise they will always be zero at those lines. 4. Line 517 epochs = 1 if self.args.max_steps > 0 else self.args.num_train_epochs makes epochs `1` if `max_steps` is specified. This won't be a real bug, but when `max_steps` is larger than the optimization steps per epoch, this is somehow confusing, especially when the log is shown. 5. Extra epoch issue: Line 542 for epoch_iter in range(epochs_trained, int(epochs + 1)): Suppose `max_steps` is not specified, `num_train_epochs` is set to 1, and in a first run the model is trained for 1 step. Then when we resume the training, we will get `epochs_trained = 0` (see issue 2. above) and `epochs = 1`, therefore the range becomes `range(0, 2)`, which gives 2 epochs. ## Code showing the bugs 1. First download the GLUE tasks python utils/download_glue_data.py --data_dir ./examples/text-classification/glue/ --tasks all 2. This code python3 run_tf_glue.py \ --task_name wnli \ --data_dir ./glue/WNLI \ --model_name_or_path distilbert-base-uncased \ --output_dir ./glue/WNLI/ \ --max_seq_length 16 \ --per_device_train_batch_size 16 \ --gradient_accumulation_steps 4 \ --max_steps 10 \ --logging_steps 1 \ --save_steps 5 \ --seed 1 \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir gives logs like {'loss': 0.69778687, 'learning_rate': 4.4999997e-05, 'epoch': 0.2, 'step': 1} {'loss': 0.6953752, 'learning_rate': 4e-05, 'epoch': 0.3, 'step': 2} where `'epoch': 0.2, 'step': 1` is not correct for `epoch`, because there are only 10 steps. 3. Remove the checkpoints in the output_dir. 
Launch python3 run_tf_glue.py \ --task_name wnli \ --data_dir ./glue/WNLI \ --model_name_or_path distilbert-base-uncased \ --output_dir ./glue/WNLI/ \ --max_seq_length 16 \ --num_train_epochs 1 \ --per_device_train_batch_size 16 \ --gradient_accumulation_steps 4 \ --max_steps 2 \ --logging_steps 1 \ --save_steps 1 \ --seed 1 \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir but stop it after 1 step. Then, when we resume the training, we get logs like {'loss': 0.7005814, 'learning_rate': 0.0, 'epoch': 2.0, 'step': 1} and {'loss': 0.66841304, 'learning_rate': 0.0, 'epoch': -0.5, 'step': 2} Saving checkpoint for step 2 at ./glue/WNLI/checkpoint/ckpt-2 {'loss': 0.7016809, 'learning_rate': 0.0, 'epoch': 0.5, 'step': 3} Saving checkpoint for step 3 at ./glue/WNLI/checkpoint/ckpt-3 {'loss': 0.693136, 'learning_rate': 0.0, 'epoch': 1.0, 'step': 4} The final step count became 4, and we got something like `'epoch': -0.5`. 4. There are other code snippets showing the issues, but I hope the above two examples already make it clear that something is wrong in the current code. If you want, I can put more code snippets here.
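To make issue 2 concrete, here is a tiny standalone sketch of the arithmetic this PR argues for (illustrative numbers only; the real variable names live inside `trainer_tf.py`): the steps per epoch come from the total train batch size, and a `max(1, ...)` guard avoids the `ZeroDivisionError` from issue 1.

```python
# Standalone illustration of the proposed resume counting (not the actual trainer code).
num_train_examples = 640                # illustrative dataset size
per_device_batch_size = 16
n_replicas = 1                          # number of devices in the tf.distribute strategy
gradient_accumulation_steps = 4

# Number of examples consumed per optimization step.
total_train_batch_size = per_device_batch_size * gradient_accumulation_steps * n_replicas

# Guard against ZeroDivisionError when the dataset is smaller than one full step.
steps_per_epoch = max(1, num_train_examples // total_train_batch_size)

# The global step must be read *after* the checkpoint has been restored.
global_step = 25                        # illustrative value restored from a checkpoint

epochs_trained = global_step // steps_per_epoch
steps_trained_in_current_epoch = global_step % steps_per_epoch
print(epochs_trained, steps_trained_in_current_epoch)  # -> 2 5
```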
09-16-2020 17:43:35
09-16-2020 17:43:35
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=h1) Report > Merging [#7175](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/df165065c37329880f490025972f81b4daa6e5bd?el=desc) will **increase** coverage by `0.27%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7175/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7175 +/- ## ========================================== + Coverage 80.71% 80.99% +0.27% ========================================== Files 169 169 Lines 32293 32305 +12 ========================================== + Hits 26066 26165 +99 + Misses 6227 6140 -87 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.01% <0.00%> (-0.45%)` | :arrow_down: | | [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.72%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.75% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.27%)` | :arrow_up: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (+0.33%)` | :arrow_up: | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `94.11% <0.00%> (+1.17%)` | :arrow_up: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7175/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=footer). Last update [df16506...b72f0d9](https://codecov.io/gh/huggingface/transformers/pull/7175?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Nice catch!! LGTM.<|||||>@LysandreJik I fully agree on adding tests for the trainer. I put this immediately on top of my todo list.<|||||>> > > @LysandreJik I fully agree on adding tests for the trainer. I put this immediately on top of my todo list. @LysandreJik @jplu If you agree, may I help to build tests for the TFTrainer? If so, probably I need to discuss with @jplu how to proceed though.<|||||>@chiapas I will definitely be glad to have your help on this! Can you send me an email (it is displayed on my Github profile) please, and I will let you know how we can proceed.<|||||>If you could copy and adapt the tests in trainer (so that they run quickly) on a regression problem, it would be awesome!<|||||>> > > If you could copy and adapt the tests in trainer (so that they run quickly) on a regression problem, it would be awesome! We are going to build the tests for TFTrainer soon. But the HF team can decide if to merge this PR now or after the tests are built.<|||||>LGTM for merge, what do you think @sgugger?<|||||>Yes, good to merge. Thanks @chiapas!
transformers
7,174
closed
use the correct add_start_docstrings
As discussed at https://github.com/huggingface/transformers/pull/6940#discussion_r489266992, this fixes the code to use the correct decorator add_start_docstrings. @sgugger
09-16-2020 17:38:11
09-16-2020 17:38:11
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=h1) Report > Merging [#7174](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/df165065c37329880f490025972f81b4daa6e5bd?el=desc) will **decrease** coverage by `0.40%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7174/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7174 +/- ## ========================================== - Coverage 80.71% 80.31% -0.41% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26066 25936 -130 - Misses 6227 6357 +130 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.15% <100.00%> (ø)` | | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `96.82% <100.00%> (ø)` | | | [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `95.23% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.72%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.01% <0.00%> (-0.33%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <0.00%> (+10.24%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/7174/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=footer). Last update [df16506...e73ba1f](https://codecov.io/gh/huggingface/transformers/pull/7174?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,173
closed
remove duplicated code
09-16-2020 17:34:19
09-16-2020 17:34:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=h1) Report > Merging [#7173](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/df165065c37329880f490025972f81b4daa6e5bd?el=desc) will **decrease** coverage by `0.21%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7173/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7173 +/- ## ========================================== - Coverage 80.71% 80.50% -0.22% ========================================== Files 169 169 Lines 32293 32291 -2 ========================================== - Hits 26066 25996 -70 - Misses 6227 6295 +68 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.13% <ø> (-0.02%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.37% <0.00%> (-42.10%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.10% <0.00%> (-29.80%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `47.05% <0.00%> (-13.24%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7173/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=footer). Last update [df16506...66eb6ea](https://codecov.io/gh/huggingface/transformers/pull/7173?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,172
closed
Bug in finetuning ALBERT on text-classification in GLUE
Thanks for your beautiful work! I use transformers-3.1.0, and when I run `run_glue.py` in `examples/text-classification/`, it returns the following error: ``` Traceback (most recent call last): File "run_glue.py", line 246, in <module> main() File "run_glue.py", line 127, in main cache_dir=model_args.cache_dir, File "/mnt/dqa/yinheju/env/python3.7_torch1.1/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 220, in from_pretrained return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/mnt/dqa/yinheju/env/python3.7_torch1.1/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1425, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/mnt/dqa/yinheju/env/python3.7_torch1.1/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1531, in _from_pretrained list(cls.vocab_files_names.values()), OSError: Model name 'albert-base-v2' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'albert-base-v2' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url. ``` Please help me solve this problem, thank you!!!
09-16-2020 17:30:07
09-16-2020 17:30:07
I have the same problem. If you have solved it, could you tell me how? Thanks!
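A minimal sketch (not from the thread; model name and calls assume a working network connection) to check whether the failure comes from `run_glue.py` or from downloading the tokenizer files themselves:

```python
# If these lines also raise OSError, the problem is the download/cache of
# `spiece.model`, not the example script.
from transformers import AlbertTokenizer, AutoTokenizer

slow_tok = AlbertTokenizer.from_pretrained("albert-base-v2")
print(slow_tok.tokenize("hello world"))

# Same resolution path the example script takes (AutoTokenizer -> ALBERT tokenizer).
auto_tok = AutoTokenizer.from_pretrained("albert-base-v2", use_fast=False)
print(type(auto_tok).__name__)
```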
transformers
7,171
closed
remove deprecated flag
``` /home/circleci/.local/lib/python3.6/site-packages/isort/main.py:915: UserWarning: W0501: The following deprecated CLI flags were used and ignored: --recursive! "W0501: The following deprecated CLI flags were used and ignored: " ```
09-16-2020 17:10:57
09-16-2020 17:10:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=h1) Report > Merging [#7171](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/df165065c37329880f490025972f81b4daa6e5bd?el=desc) will **decrease** coverage by `1.25%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7171/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7171 +/- ## ========================================== - Coverage 80.71% 79.46% -1.26% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26066 25661 -405 - Misses 6227 6632 +405 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `88.17% <0.00%> (-5.38%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/7171/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=footer). Last update [df16506...fc2d6ab](https://codecov.io/gh/huggingface/transformers/pull/7171?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,170
closed
[s2s] Try to get ray/optuna + examples/seq2seq working
I tried for 2h and failed. My initial attempt just hangs (I put it at `examples/seq2seq/run_ray_tune.py`) ```python from ray.tune.schedulers import ASHAScheduler, PopulationBasedTraining from ray import tune from ray.tune import CLIReporter from ray.tune.schedulers import ASHAScheduler, PopulationBasedTraining from functools import partial from durbango import * from finetune import main as ft_main from pathlib import Path import os def get_ray_slug(cfg): strang = '' for k,v in cfg.items(): strang += f'{k}_{v}' for i in range(10000): test = f'rayruns/run_{i}' try: Path(test).mkdir(exist_ok=True,parents=True) break except Exception: continue return os.path.expanduser(test) def ray_main(args, config): for k,v in config.items(): #assert hasattr(args, k), k setattr(args, k, v) args.n_train = 64 args.output_dir = get_ray_slug(config) args.num_train_epochs = 3 ft_main(args) def tune_helsinki_(args, num_samples=4, num_epochs=3): search_space = { "learning_rate": tune.sample_from(lambda spec: 10**(-10 * np.random.rand())), "gradient_accumulation_steps": tune.choice([1, 8, 32, 128, 256]), "dropout": tune.choice([0, 0.1, 0.2, 0.4]), } scheduler = ASHAScheduler( metric="val_avg_bleu", mode="min", max_t=3, grace_period=1, reduction_factor=2) reporter = CLIReporter( parameter_columns=list(search_space.keys()), metric_columns=["val_avg_loss", "val_avg_bleu", "global_step"]) tune.run( partial( ray_main, args, ), resources_per_trial={"cpu": 0, "gpu": 1}, config=search_space, num_samples=num_samples, scheduler=scheduler, progress_reporter=reporter, name="tune_helsinki_asha") # Make default args args = {'logger': True, 'checkpoint_callback': True, 'early_stop_callback': False, 'default_root_dir': None, 'gradient_clip_val': 0, 'process_position': 0, 'num_nodes': 1, 'num_processes': 1, 'gpus': 1, 'auto_select_gpus': False, 'tpu_cores': 0, 'log_gpu_memory': None, 'progress_bar_refresh_rate': 1, 'overfit_batches': 0.0, 'track_grad_norm': -1, 'check_val_every_n_epoch': 1, 'fast_dev_run': False, 'accumulate_grad_batches': 1, 'max_epochs': 1000, 'min_epochs': 1, 'max_steps': None, 'min_steps': None, 'limit_train_batches': 1.0, 'limit_val_batches': 1.0, 'limit_test_batches': 1.0, 'val_check_interval': 0.25, 'log_save_interval': 100, 'row_log_interval': 50, 'distributed_backend': None, 'precision': 32, 'print_nan_grads': False, 'weights_summary': 'top', 'weights_save_path': None, 'num_sanity_val_steps': 0, 'truncated_bptt_steps': None, 'resume_from_checkpoint': None, 'profiler': None, 'benchmark': False, 'deterministic': False, 'reload_dataloaders_every_epoch': False, 'auto_lr_find': False, 'replace_sampler_ddp': True, 'terminate_on_nan': False, 'auto_scale_batch_size': False, 'prepare_data_per_node': True, 'amp_level': 'O2', 'val_percent_check': None, 'test_percent_check': None, 'train_percent_check': None, 'overfit_pct': None, 'model_name_or_path': 'sshleifer/student_marian_en_ro_6_3', 'config_name': '', 'tokenizer_name': 'sshleifer/student_marian_en_ro_6_3', 'cache_dir': '', 'encoder_layerdrop': None, 'decoder_layerdrop': None, 'dropout': None, 'attention_dropout': None, 'learning_rate': 0.0003, 'lr_scheduler': 'linear', 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'warmup_steps': 500, 'num_workers': 4, 'train_batch_size': 32, 'eval_batch_size': 32, 'output_dir': 'tmp', 'fp16': True, 'fp16_opt_level': 'O1', 'do_train': True, 'do_predict': True, 'seed': 42, 'data_dir': '/home/shleifer/transformers_fork/examples/seq2seq//dbart/wmt_en_ro', 'max_source_length': 128, 'max_target_length': 128, 'val_max_target_length': 
128, 'test_max_target_length': 128, 'freeze_encoder': True, 'freeze_embeds': True, 'sortish_sampler': True, 'logger_name': 'wandb', 'n_train': -1, 'n_val': 500, 'n_test': -1, 'task': 'translation', 'label_smoothing': 0.1, 'src_lang': '', 'tgt_lang': '', 'early_stopping_patience': -1} tune_helsinki_(args) ```
09-16-2020 16:27:49
09-16-2020 16:27:49
It works on a branch, will cleanup+share shortly!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
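In the meantime, a minimal self-contained Ray Tune sketch (hypothetical; `train_and_eval` is a stand-in for a real call into `finetune.main`) showing the search-space/report pattern without the hanging setup above:

```python
import random

from ray import tune


def train_and_eval(config):
    # Stand-in for a real finetuning run; replace with a call into finetune.main(args)
    # after copying the sampled hyperparameters from `config` onto the args namespace.
    return random.random()


def trainable(config):
    bleu = train_and_eval(config)
    tune.report(val_avg_bleu=bleu)  # the metric a scheduler like ASHA would optimize


search_space = {
    "learning_rate": tune.loguniform(1e-5, 1e-3),
    "dropout": tune.choice([0.0, 0.1, 0.2, 0.4]),
}

analysis = tune.run(
    trainable,
    config=search_space,
    num_samples=4,
    resources_per_trial={"cpu": 1, "gpu": 0},
)
print(analysis.get_best_config(metric="val_avg_bleu", mode="max"))
```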
transformers
7,169
closed
BERT Trainer.train() CUDA out of memory error
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0 - Python version: 3.6.10 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Distributed ### Who can help @LysandreJik, @sgugger, @patrickvonplaten ## Information I am using 8xV100s (32GB). The script (`run_training.py`) works when running on a single machine but I am running into the ```CUDA out of memory``` when trying to run distributed training. The behavior is consistent whether or not `fp16` is `True`. I am using the publicly available wikitext data. Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. My code is `run_training.py`: ``` bert_config = BertConfig(hidden_size=768, num_attention_heads=12) tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") model = BertForPreTraining(config=bert_config) train_dataset = TextDatasetForNextSentencePrediction(tokenizer=tokenizer, file_path=args.train_data_file, block_size=args.max_length) data_collator = DataCollatorForNextSentencePrediction(tokenizer=tokenizer, mlm_probability=args.mlm_probability) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset ) global_steps, training_loss = trainer.train() ``` 2. Run the script in distributed mode: ``` python -m torch.distributed.launch \ --nnodes={num_nodes} \ --node_rank={rank} \ --nproc_per_node=8 \ --run_training.py \ --max_steps 50 " \ --train_data_file {train_data_file} " \ --eval_data_file {test_data_file} " \ --logging_dir {output_dir} \ --fp16 False " \ --gradient_accumulation_steps 1 " \ --per_gpu_train_batch_size 1 \ --per_gpu_eval_batch_size 1 ``` ## Expected behavior Given that the script works fine (i.e., not run into the out of memory issue) on a single machine, I would expect multi-node to be the same. Any insight into what might be going on is appreciated!
09-16-2020 16:15:52
09-16-2020 16:15:52
Haven’t we fixed a memory leak on master recently?<|||||>Yes we did, though this might be a different problem. To make sure, can you please check if you have the bug with an install from source?<|||||>@sgugger yeah, I'm installing from source (not sure why the script yielded 3.0.2)<|||||>I encountered the problem too, especially when using the NSP objective; the memory usage is much higher than with MLM.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I encounter this problem too when I try to train on multiple nodes and multiple GPUs. My transformers version is 4.15.0, so does this issue have an answer? @choidongyeon @julien-c @sgugger
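A small diagnostic sketch (not from the thread) for narrowing down multi-node OOMs: log per-rank GPU memory at each step and compare ranks to see whether one process accumulates memory faster than the others.

```python
import torch
import torch.distributed as dist


def log_gpu_memory(step):
    # Current and peak allocation on this process's GPU, in GiB.
    rank = dist.get_rank() if dist.is_available() and dist.is_initialized() else 0
    device = torch.cuda.current_device()
    allocated = torch.cuda.memory_allocated(device) / 1024 ** 3
    peak = torch.cuda.max_memory_allocated(device) / 1024 ** 3
    print(f"step {step} rank {rank}: {allocated:.2f} GiB allocated, {peak:.2f} GiB peak")


# Hypothetical usage: call log_gpu_memory(step) at the end of every training step,
# e.g. from a custom training loop or a Trainer callback, and compare the ranks.
```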
transformers
7,168
closed
DistilBERT for token classification (pytorch) predicts wrong classes for <PAD> tokens
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-47-generic-x86_64-with-debian-buster-sid - Python version: 3.7.4 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @stefan-it @LysandreJik ## Information Model I am using: DistilBERT (pytorch), but I guess, the same is true for BERT. The problem arises when using: * my own modified scripts: (give details below) The tasks I am working on is: * my own task or dataset: (give details below) ## To reproduce This is NER task on specially compiled dataset which I cannot provide, but it is not that much different from CoNLL. I tried DistillBERT tensorflow and pytorch versions. Input is `inputs_ids` and `attention_mask` in both cases. Results for DistillBERT tensorflow: ![image](https://user-images.githubusercontent.com/22475959/93342115-9d0d3680-f837-11ea-8b0b-5001856bd1e7.png) model was compiled as: ``` model = TFDistilBertForTokenClassification.from_pretrained(MODEL, num_labels=len(LABELS)) model.layers[-1].activation = tf.keras.activations.softmax optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=optimizer, loss=loss, metrics=[metric]) ``` code for report (I didn't remove prediction for padding from the model output as I assumed that model should not have any difficulties predicting it as `O`. And it does in this case) ``` predictions_classes = [LABELS_MAP[_id] for _id in predictions_normalized.reshape(-1)] true_classes = [LABELS_MAP[_id] for _id in label_ids_test.reshape(-1)] print( classification_report( true_classes, predictions_classes, labels=sorted(list(set(true_classes) - {'O'})) ) ) ``` Due to some issues with conversion to onnx format, I had to create pytorch version of the same model. Dataset split is the same in both versions. Here is output of DistillBERT pytorch: ![image](https://user-images.githubusercontent.com/22475959/93352453-879e0980-f843-11ea-883f-9207fbfe603a.png) Notice that support is the same as above, because it is the same data. Recall is comparable to tf. Precision is way off. 
Model is compiled as: ``` training_args = TrainingArguments( output_dir='results', num_train_epochs=4, per_device_train_batch_size=4, per_device_eval_batch_size=32, warmup_steps=0, weight_decay=0, logging_dir=None, logging_steps=100, eval_steps=500, save_steps=1e6, learning_rate=1e-5, seed=8, evaluate_during_training=True, do_eval=True, disable_tqdm=True, ) model = DistilBertForTokenClassification.from_pretrained('distilbert-base-multilingual-cased', num_labels=len(LABELS)) adam = Adam(model.parameters(), lr=1e-5) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset, optimizers=(adam, get_constant_schedule(adam)) ) trainer.train() ``` The reason for huge drop in precision is that model predicts classes other than `O` for pad tokens, because their loss is ignored in loss function: ![image](https://user-images.githubusercontent.com/22475959/93343751-8b2c9300-f839-11ea-949b-a7a36f4a1342.png) If I remove predictions for all special tokens then result looks like: ![image](https://user-images.githubusercontent.com/22475959/93352634-b7e5a800-f843-11ea-90a7-a1540c33cbad.png) Now, it looks like results from tf version. If I comment out these lines: https://github.com/huggingface/transformers/blob/9e376e156a78aa08f802d569d829064aff930c58/src/transformers/modeling_distilbert.py#L822-L830 and keep only `loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))` taking into account loss for pad tokens and still using attention mask as the second input than model doesn't predict PAD tokens as classes other than `O` anymore. Output: ![image](https://user-images.githubusercontent.com/22475959/93354973-40654800-f846-11ea-88a1-47203adcaa67.png) So, performance definitely didn't suffer and model stopped predicting classes other than `O` for PAD tokens. ![image](https://user-images.githubusercontent.com/22475959/93355153-6b4f9c00-f846-11ea-8f9a-f75e43da6fb3.png) ## Expected behavior I think, SOTA models shouldn't predict classes other than `O` for PAD tokens (or some special class).
09-16-2020 15:07:56
09-16-2020 15:07:56
The [PAD] tokens are not used for loss calculation, if I remember correctly, so their predictions have no meaning. You should just ignore the results for [PAD] tokens. Putting [PAD] tokens into the loss calculation will bias the model toward [PAD] tokens, which is undesirable.
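A sketch of that filtering step (variable names assumed to roughly match the report-generation snippet in the issue): drop every position where `attention_mask` is 0 before building the classification report, so [PAD] predictions never reach the metrics.

```python
import numpy as np
from sklearn.metrics import classification_report


def report_without_padding(label_ids, predictions, attention_mask, labels_map):
    # Flatten to (batch * seq_len,) and keep only real (non-padding) positions.
    keep = attention_mask.reshape(-1).astype(bool)
    true_classes = [labels_map[i] for i in label_ids.reshape(-1)[keep]]
    pred_classes = [labels_map[i] for i in predictions.reshape(-1)[keep]]
    return classification_report(
        true_classes, pred_classes, labels=sorted(set(true_classes) - {"O"})
    )
```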
transformers
7,167
closed
weights partially missing for CamembertForMaskedLM
## Environment info - `transformers` version: 3.1.0 - Platform: Darwin-18.7.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @louismartin ## Information When loading "camembert-base" with `CamembertForMaskedLM` with: from transformers import CamembertForMaskedLM model = CamembertForMaskedLM.from_pretrained("camembert-base") the bias of the LM head decoder is not loaded: Some weights of CamembertForMaskedLM were not initialized from the model checkpoint at camembert-base and are newly initialized: ['lm_head.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. As I understand `lm_head.decoder.bias` is therefore initialized randomly. I checked the original `camembert-base` model as published by the author, and the lm_head decoder bias is missing too, which is not discussed in the camembert or roberta publication.
09-16-2020 12:15:01
09-16-2020 12:15:01
Hi @raphael0202 , we converted the fairseq checkpoints using [this script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py), maybe the language model head was not transferred during the conversion. You can either use the model in fairseq as per [our website](https://camembert-model.fr/) or try to understand if and why the LM head wasn't transferred from fairseq to transformers.<|||||>Check this tutorial as well for huggingface: https://huggingface.co/camembert-base<|||||>After a bit of searching I found out that the warning comes from the way the decoder bias is defined in the code; the same warning is issued when loading the roberta model (see issue 6193). I checked that the LM head decoder bias is the same as the one in the original pytorch checkpoint. Thank you for your help!<|||||>Ok perfect, thanks for fixing this.
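A small inspection sketch (not from the thread) that lists exactly which parameters were missing from the checkpoint, instead of parsing the warning text:

```python
from transformers import CamembertForMaskedLM

model, loading_info = CamembertForMaskedLM.from_pretrained(
    "camembert-base", output_loading_info=True
)
print(loading_info["missing_keys"])     # expected to contain 'lm_head.decoder.bias'
print(loading_info["unexpected_keys"])  # weights present in the checkpoint but unused
```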
transformers
7,166
closed
Create README.md
Fixes #{issue number}
09-16-2020 10:09:27
09-16-2020 10:09:27
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=h1) Report > Merging [#7166](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/85ffda96fcadf70d2558ba0a59c84b9f5a2d6f0f?el=desc) will **increase** coverage by `0.98%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7166/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7166 +/- ## ========================================== + Coverage 78.44% 79.42% +0.98% ========================================== Files 168 168 Lines 32309 32309 ========================================== + Hits 25346 25663 +317 + Misses 6963 6646 -317 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: | | ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/7166/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=footer). Last update [b00cafb...8fe976a](https://codecov.io/gh/huggingface/transformers/pull/7166?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Can you add a language tag? `fr` I guess or `fr-BE` if you want to use BCP 47 (cc @yjernite 😛) which we are thinking of supporting<|||||>Sure! I'm not sure exactly where to add it, could you enlighten me?<|||||>see https://huggingface.co/docs#what-metadata-can-i-add-to-my-model-card and let me know if it's not clear!<|||||>I guess in that case the BCP-47 code would still just be `fr` since the text it's trained on isn't specifically Belgian afaiu :) (Unless I'm missing something @antoiloui )<|||||>Yes you're right @yjernite, I put the `fr`tag :)
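For reference, a hypothetical sketch of where the tag ends up: model cards carry a YAML front-matter block at the top of README.md, and the `language: fr` line goes in that block.

```python
# Prepend the metadata block (assumes the file does not already have front matter).
front_matter = "---\nlanguage: fr\n---\n"
readme_path = "README.md"  # the model card edited in this PR

with open(readme_path, encoding="utf-8") as f:
    body = f.read()
with open(readme_path, "w", encoding="utf-8") as f:
    f.write(front_matter + "\n" + body)
```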
transformers
7,165
closed
__init__() got an unexpected keyword argument 'cache_dir'
After reviewing issue #7006 and following the steps (updating Transformers to the latest version or even the 3.1.0 branch), I still get the error: TypeError: `__init__()` got an unexpected keyword argument 'cache_dir' when running the latest version of transformers (3.1.0). I'm also running in a Colab environment: Command: !pip3 install transformers (also tried #! pip3 install git+git://github.com/huggingface/transformers/) !wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/language-modeling/run_language_modeling.py %%bash export TRAIN_FILE=train_path export TEST_FILE=valid_path export MODEL_NAME=gpt2 export OUTPUT_DIR=output python run_language_modeling.py --output_dir=output --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --cache_dir=None Output: Traceback (most recent call last): File "run_language_modeling.py", line 313, in main() File "run_language_modeling.py", line 242, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None File "run_language_modeling.py", line 143, in get_dataset cache_dir=cache_dir, TypeError: `__init__()` got an unexpected keyword argument 'cache_dir'
09-16-2020 09:52:36
09-16-2020 09:52:36
Do you have a link to the colab?<|||||>I was following the tutorial by Joey S and tried to include my own dataset instead. This is the notebook that he has provided: https://colab.research.google.com/drive/1odn0VHb4SmQnUqtOwK6XQTLpoTSbneCf?usp=sharing The notebook he provided already has the error. Thank you so much for taking a look!<|||||>The error seems to not be here anymore when doing `!pip3 install git+https://github.com/huggingface/transformers` instead of `pip3 install transformers` and resetting to factory settings before doing the run.<|||||>Thank you so much! It's working now!<|||||>Glad I could help!
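A quick compatibility check (a sketch, not from the thread) that explains why the git install fixes the error: the master version of `run_language_modeling.py` passes `cache_dir` to the dataset classes, so the installed transformers must accept that argument too.

```python
import inspect

import transformers
from transformers import TextDataset

print(transformers.__version__)
# False on the 3.1.0 PyPI release, True after installing from source/master,
# which matches the TypeError above disappearing with the git install.
print("cache_dir" in inspect.signature(TextDataset.__init__).parameters)
```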
transformers
7,164
closed
tf.keras.models.load_model() does not load saved model that includes TFOpenAIGPTLMHeadModel layer
- `transformers` version: 3.1.0 - Platform: linux - Python version: 3 - Tensorflow version: 2.3.0 ## To reproduce Steps to reproduce the behavior: 1. Load the model with TFOpenAIGPTLMHeadModel 2. Add input layers 3. save the model 4. Load saved model ```python from transformers import TFOpenAIGPTLMHeadModel import tensorflow as tf tf_model = TFOpenAIGPTLMHeadModel.from_pretrained('./trans_model', from_pt=True) # ./trans_model is the directory including pre-trained model from pytorch max_len = None input_ids = tf.keras.layers.Input(shape=(max_len,), name='input_ids_layer', dtype='int32') token_type_ids = tf.keras.layers.Input(shape=(max_len,), name='token_type_ids_layer', dtype='int32') keras_input = [input_ids, token_type_ids] qa_output = tf_model(input_ids, token_type_ids=token_type_ids)[0] keras_model = tf.keras.Model(inputs= keras_input, outputs = qa_output) keras_model.summary() keras_model.save("./saved_model") print('**************************') model = tf.keras.models.load_model("./saved_model") ``` ```bash Traceback (most recent call last): File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 395, in assert_same_structure expand_composites) ValueError: The two structures don't have the same nested structure. First structure: type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs') Second structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs')" is not During handling of the above exception, another exception occurred: Traceback (most recent call last): File "interact_test.py", line 208, in <module> run() File "interact_test.py", line 180, in run model = tf.keras.models.load_model("./saved_model") File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/save.py", line 187, in load_model return saved_model_load.load(filepath, compile, options) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 121, in load path, options=options, loader_cls=KerasObjectLoader) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 633, in load_internal ckpt_options) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 194, in __init__ super(KerasObjectLoader, self).__init__(*args, **kwargs) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 130, in __init__ self._load_all() File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 221, in _load_all self._finalize_objects() File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 526, in _finalize_objects _finalize_saved_model_layers(layers_revived_from_saved_model) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 706, in _finalize_saved_model_layers inputs = infer_inputs_from_restored_call_function(call_fn) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 985, in infer_inputs_from_restored_call_function spec = 
nest.map_structure(common_spec, spec, spec2) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 629, in map_structure expand_composites=expand_composites) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 402, in assert_same_structure % (str(e), str1, str2)) ValueError: The two structures don't have the same nested structure. First structure: type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs') Second structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs')" is not Entire first structure: . Entire second structure: {'input_ids': .} ````
09-16-2020 08:54:18
09-16-2020 08:54:18
I tried the resolution in #3627, but the output shape changed from 13088 to 768<|||||>@jplu might be interested in this.<|||||>I cannot really test because I don't have your `trans_model` but as far as I can say it is not working because you are using the high level API (with Keras) to create a saved_model. For models with custom layers it is recommended to use the low level way, like this: ``` from transformers import TFOpenAIGPTLMHeadModel import tensorflow as tf tf_model = TFOpenAIGPTLMHeadModel.from_pretrained('openai-gpt') max_len = None input_ids = tf.keras.layers.Input(shape=(max_len,), name='input_ids_layer', dtype='int32') token_type_ids = tf.keras.layers.Input(shape=(max_len,), name='token_type_ids_layer', dtype='int32') keras_input = [input_ids, token_type_ids] qa_output = tf_model(input_ids, token_type_ids=token_type_ids)[0] keras_model = tf.keras.Model(inputs= keras_input, outputs = qa_output) keras_model.summary() tf.saved_model.save("./saved_model") print('**************************') model = tf.saved_model.load("./saved_model") ``` For me this works well.<|||||>> tf.saved_model.save("./saved_model") you mean `tf.saved_model.save(keras_model, "./saved_model")` ? I try it ```python tf.saved_model.save("./saved_model") print('**************************') model = tf.saved_model.load("./saved_model") tf_logits = model.predict([tf_input_ids, tf_token_type_ids]) ``` and the error: ```bash tf_logits = model.predict([tf_input_ids, tf_token_type_ids]) AttributeError: '_UserObject' object has no attribute 'predict' ```<|||||>> you mean tf.saved_model.save(keras_model, "./saved_model") Yes sorry. The error you get is normal because you are not loading a Keras model, but a Tensorflow model. To get a prediction you have to do something like: ``` model([tf_input_ids, None, tf_token_type_ids]) ``` <|||||>I changed the code as following: ```python tf_logits = model([tf_input_ids, tf_token_type_ids]) ``` and the error: ```bash ValueError: Could not find matching function to call loaded from the SavedModel. Got: Positional arguments (3 total): * [<tf.Tensor 'inputs:0' shape=(1, 5) dtype=int64>, <tf.Tensor 'inputs_1:0' shape=(1, 5) dtype=int64>] * False * None Keyword arguments: {} Expected these arguments to match one of the following 4 option(s): Option 1: Positional arguments (3 total): * [TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids_layer'), TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids_layer')] * False * None Keyword arguments: {} Option 2: Positional arguments (3 total): * [TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/1')] * True * None Keyword arguments: {} Option 3: Positional arguments (3 total): * [TensorSpec(shape=(None, None), dtype=tf.int32, name='input_ids_layer'), TensorSpec(shape=(None, None), dtype=tf.int32, name='token_type_ids_layer')] * True * None Keyword arguments: {} Option 4: Positional arguments (3 total): * [TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/0'), TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs/1')] * False * None Keyword arguments: {} ```<|||||>Can you try: ``` model([tf_input_ids, tf_token_type_ids], False, None) ```<|||||>I try it, and the same error. More information: In face, my ultimate objective is to use the savedmodel for tfserving. 
But I get the error when querying the serving: ```bash Input to reshape is a tensor with 3840 values, but the requested shape has 768\n\t [[{{node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_3}}]]\n\t [[StatefulPartitionedCall/StatefulPartitionedCall]]'} ``` It means getting 3840(768*5) values instead of 768. And considering the original error: ```bash First structure: type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs') Second structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')} More specifically: Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.int32, name='inputs')" is not Entire first structure: . Entire second structure: {'input_ids': .} ``` Does it mean the input is replicated 5 times? And where the shape (None, 5) come from?<|||||>5 is the default input when a None input shape is given. It means there is certainly a bug with handling Keras symbolic Tensors. In order to be sure can you run: ``` saved_model_cli show --dir ./saved_model --tag_set serve --signature_def serving_default ```<|||||>```bash saved_model_cli show --dir ./saved_model --tag_set serve --signature_def serving_default The given SavedModel SignatureDef contains the following input(s): inputs['input_ids_layer'] tensor_info: dtype: DT_INT32 shape: (-1, -1) name: serving_default_input_ids_layer:0 inputs['token_type_ids_layer'] tensor_info: dtype: DT_INT32 shape: (-1, -1) name: serving_default_token_type_ids_layer:0 The given SavedModel SignatureDef contains the following output(s): outputs['tf_open_aigptlm_head_model'] tensor_info: dtype: DT_FLOAT shape: (-1, -1, 13088) name: StatefulPartitionedCall:0 Method name is: tensorflow/serving/predict ``` In the code ```python keras_model = tf.keras.Model(inputs= keras_input, outputs = qa_output) keras_model.summary() keras_model.save("./saved_model") print('**************************') model = tf.keras.models.load_model("./saved_model") ``` When I use the `keras_model` directly, it works perfectly in a dialogue task. But after saving it to savedmodel, everything collapse.<|||||>Ok, the first thing I see is that the names doesn't correspond to the name we use internally, it might be one of the cause that brings the issue.<|||||>Sorry, which names do you mean? Is it a `transformers` library bug or my code mistake?<|||||>> Sorry, which names do you mean? Is it a transformers library bug or my code mistake? In the lib. Also by name I mean that you are using `input_ids_layer` and `token_type_ids_layer` while internally they are `input_ids` and `token_type_ids`.<|||||>So what should I do? Remove the `_layer` string?<|||||>It won't solve your problem. For now there is not a "practical" solution to get a saved model unless you set everything yourself: ``` from transformers import TFOpenAIGPTLMHeadModel, OpenAIGPTTokenizer import tensorflow as tf tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt") model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt") inputs = tokenizer("Put a sentence here", return_tensors="tf") model._saved_model_inputs_spec = None model._set_save_spec(dict(inputs)) tf.saved_model.save(model, "./saved_model") ``` And then using it normally with TF serving. This solution has a big constraint, you have to set manually the size of your input sequence. 
This is for now the only solution I can give you because we haven't made the TF models fully TF Serving compliant yet. This is planned for a future release.<|||||>Thanks! Looking forward to the new release.<|||||>Or this should do the trick: ``` tf_model = TFOpenAIGPTLMHeadModel.from_pretrained('openai-gpt') input_ids = tf.keras.layers.Input(shape=(128,), name='input_ids', dtype='int32') token_type_ids = tf.keras.layers.Input(shape=(128,), name='token_type_ids', dtype='int32') keras_input = [input_ids, token_type_ids] qa_output = tf_model(input_ids, token_type_ids=token_type_ids)[0] keras_model = tf.keras.Model(inputs= keras_input, outputs = [qa_output]) keras_model.trainable = False keras_model.summary() keras_model.save("./saved_model", save_format="tf") ``` With this I can run the saved_model inside the TF serving Docker image, but in all cases you have to set your sequence length yourself.<|||||>Sorry, I need a dynamic input length for the dialogue task.<|||||>As a temporary solution you can set the size of your inputs to the max length of the tokenizer. As you won't be able to get bigger sequences from the tokenizer you can be safe.
WARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor("token_type_ids:0", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 13). logits shape: (1, 13, 13088) Traceback (most recent call last): File "interact_test.py", line 233, in <module> run() File "interact_test.py", line 225, in run out_ids = sample_sequence(history, tokenizer, keras_model, args) File "interact_test.py", line 121, in sample_sequence tf_logits = model.predict([tf_input_ids, tf_token_type_ids]) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 130, in _method_wrapper return method(self, *args, **kwargs) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1599, in predict tmp_batch_outputs = predict_function(iterator) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__ result = self._call(*args, **kwds) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 814, in _call results = self._stateful_fn(*args, **kwds) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2829, in __call__ return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call cancellation_manager=cancellation_manager) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat ctx, args, cancellation_manager=cancellation_manager)) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 550, in call ctx=ctx) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute inputs, attrs, num_outputs) tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 14 values, but the requested shape requires a multiple of 128 [[node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_2 (defined at /home/t9kuser/.local/lib/python3.6/site-packages/transformers/modeling_tf_openai.py:342) ]] [Op:__inference_predict_function_15804] Errors may have originated from an input operation. Input Source operations connected to node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_2: functional_1/Cast_1 (defined at interact_test.py:121) Function call stack: predict_function ``` ```bash >>> 你好 WARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor("input_ids:0", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 5). WARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor("input_ids:0", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 5). WARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor("token_type_ids:0", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 5). WARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor("token_type_ids:0", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 5). 
logits shape: (1, 5, 13088) WARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor("input_ids:0", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 6). WARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor("input_ids:0", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 6). WARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor("token_type_ids:0", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 6). WARNING:tensorflow:Model was constructed with shape (None, 128) for input Tensor("token_type_ids:0", shape=(None, 128), dtype=int32), but it was called on an input with incompatible shape (None, 6). logits shape: (1, 6, 13088) Traceback (most recent call last): File "interact_test.py", line 233, in <module> run() File "interact_test.py", line 225, in run out_ids = sample_sequence(history, tokenizer, keras_model, args) File "interact_test.py", line 121, in sample_sequence tf_logits = model.predict([tf_input_ids, tf_token_type_ids]) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 130, in _method_wrapper return method(self, *args, **kwargs) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1599, in predict tmp_batch_outputs = predict_function(iterator) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__ result = self._call(*args, **kwds) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 814, in _call results = self._stateful_fn(*args, **kwds) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2829, in __call__ return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call cancellation_manager=cancellation_manager) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat ctx, args, cancellation_manager=cancellation_manager)) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 550, in call ctx=ctx) File "/home/t9kuser/.local/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute inputs, attrs, num_outputs) tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 7 values, but the requested shape requires a multiple of 128 [[node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_2 (defined at /home/t9kuser/.local/lib/python3.6/site-packages/transformers/modeling_tf_openai.py:342) ]] [Op:__inference_predict_function_15804] Errors may have originated from an input operation. Input Source operations connected to node functional_1/tf_open_aigptlm_head_model/transformer/Reshape_2: functional_1/Cast_1 (defined at interact_test.py:121) Function call stack: predict_function ```<|||||>And the savedmodel error: ```bash {'error': '{{function_node __inference__wrapped_model_10271}} {{function_node __inference__wrapped_model_10271}} Incompatible shapes: [1,128,768] vs. 
[1,5,768]\n\t [[{{node functional_1/tf_open_aigptlm_head_model/transformer/add}}]]\n\t [[StatefulPartitionedCall/StatefulPartitionedCall]]'} ```<|||||>The pretrained model can be downloaded here: https://drive.google.com/file/d/1Wyr-fD4KuF0gWMtZ7STF09O2Ebtly-yq/view?usp=sharing This is my source code: You can reproduce the error, thanks a lot! ```python # # Copyright (c) 2019-present, HuggingFace Inc. # All rights reserved. # This source code is licensed under the BSD-style license found in the # LICENSE file in the root directory of this source tree. import os import logging import random from itertools import chain from argparse import ArgumentParser from pprint import pformat import torch import tensorflow as tf import torch.nn.functional as F import sys import numpy as np from transformers import OpenAIGPTLMHeadModel, GPT2LMHeadModel, BertTokenizer, TFOpenAIGPTLMHeadModel SPECIAL_TOKENS = ["[CLS]", "[SEP]", "[PAD]", "[speaker1]", "[speaker2]"] def top_filtering(logits, top_k=0, top_p=0.0, threshold=-float('Inf'), filter_value=-float('Inf')): """ Filter a distribution of logits using top-k, top-p (nucleus) and/or threshold filtering Args: logits: logits distribution shape (vocabulary size) top_k: <=0: no filtering, >0: keep only top k tokens with highest probability. top_p: <=0.0: no filtering, >0.0: keep only a subset S of candidates, where S is the smallest subset whose total probability mass is greater than or equal to the threshold top_p. In practice, we select the highest probability tokens whose cumulative probability mass exceeds the threshold top_p. threshold: a minimal threshold to keep logits """ assert logits.dim() == 1 # Only work for batch size 1 for now - could update but it would obfuscate a bit the code top_k = min(top_k, logits.size(-1)) if top_k > 0: # Remove all tokens with a probability less than the last token in the top-k tokens indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None] logits[indices_to_remove] = filter_value if top_p > 0.0: # Compute cumulative probabilities of sorted tokens sorted_logits, sorted_indices = torch.sort(logits, descending=True) cumulative_probabilities = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1) # Remove tokens with cumulative probability above the threshold sorted_indices_to_remove = cumulative_probabilities > top_p # Shift the indices to the right to keep also the first token above the threshold sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone() sorted_indices_to_remove[..., 0] = 0 # Back to unsorted indices and set them to -infinity indices_to_remove = sorted_indices[sorted_indices_to_remove] logits[indices_to_remove] = filter_value indices_to_remove = logits < threshold logits[indices_to_remove] = filter_value return logits def build_input_from_segments(history, reply, tokenizer, with_eos=True): """ Build a sequence of input from 3 segments: persona, history and last reply """ bos, eos, pad, speaker1, speaker2 = tokenizer.convert_tokens_to_ids(SPECIAL_TOKENS) sequence = [[bos]] + history + [reply + ([eos] if with_eos else [])] # print('sequence 1', sequence) sequence = [sequence[0]] + [[speaker2 if i % 2 else speaker1] + s for i, s in enumerate(sequence[1:])] # print('sequence 2', sequence) instance = {} instance["input_ids"] = list(chain(*sequence)) instance["token_type_ids"] = [bos] + [speaker2 if i % 2 else speaker1 for i, s in enumerate(sequence[1:]) for _ in s] return instance, sequence def sample_sequence(history, tokenizer, model, args, current_output=None): 
special_tokens_ids = tokenizer.convert_tokens_to_ids(SPECIAL_TOKENS) # print(special_tokens_ids) if current_output is None: current_output = [] for i in range(args.max_length): instance, sequence = build_input_from_segments(history, current_output, tokenizer, with_eos=False) input_ids = torch.tensor(instance["input_ids"], dtype=torch.long, device=args.device).unsqueeze(0) token_type_ids = torch.tensor(instance["token_type_ids"], dtype=torch.long, device=args.device).unsqueeze(0) # print(type(input_ids)) # print(input_ids.shape) # print('input_ids', input_ids) # print('token_type_ids', token_type_ids) # logits, *_ = model(input_ids, token_type_ids=token_type_ids) # print(type(logits)) # print(logits.shape) # print(logits) tf_input_ids = input_ids.numpy() tf_token_type_ids = token_type_ids.numpy() # tf_input_ids_pad = np.pad(tf_input_ids, ((0, 0), (0, 128 - tf_input_ids.shape[1])), 'constant') # print(tf_input_ids_pad.shape) # print(tf_input_ids_pad) tf_logits = model.predict([tf_input_ids, tf_token_type_ids]) logits = torch.from_numpy(tf_logits) # tf_logits, *_ = model(tf_input_ids, token_type_ids=tf.constant(tf_token_type_ids)) # logits = torch.from_numpy(tf_logits.numpy()) # print(type(tf_logits)) print('logits shape: ', tf_logits.shape) # print(tf_logits) logits = logits[0, -1, :] / args.temperature print('logits tmp shape: ', logits.shape) logits = top_filtering(logits, top_k=args.top_k, top_p=args.top_p) print('logits filter shape: ', logits.shape) probs = F.softmax(logits, dim=-1) print('probs shape: ', probs.shape) prev = torch.topk(probs, 1)[1] if args.no_sample else torch.multinomial(probs, 1) print('prev: ', prev) if i < args.min_length and prev.item() in special_tokens_ids: while prev.item() in special_tokens_ids: prev = torch.multinomial(probs, num_samples=1) if prev.item() in special_tokens_ids: break current_output.append(prev.item()) return current_output def run(): parser = ArgumentParser() parser.add_argument('--gpt2', action='store_true', help="use gpt2") parser.add_argument("--model_checkpoint", type=str, default="./LCCD_GPT", help="Path, url or short name of the model") parser.add_argument("--max_history", type=int, default=2, help="Number of previous utterances to keep in history") parser.add_argument("--device", type=str, default="cpu", help="Device (cuda or cpu)") parser.add_argument("--no_sample", action='store_true', help="Set to use greedy decoding instead of sampling") parser.add_argument("--max_length", type=int, default=30, help="Maximum length of the output utterances") parser.add_argument("--min_length", type=int, default=1, help="Minimum length of the output utterances") parser.add_argument("--seed", type=int, default=42, help="Seed") parser.add_argument("--temperature", type=int, default=0.7, help="Sampling softmax temperature") parser.add_argument("--top_k", type=int, default=0, help="Filter top-k tokens before sampling (<=0: no filtering)") parser.add_argument("--top_p", type=float, default=0.9, help="Nucleus filtering (top-p) before sampling (<=0.0: no filtering)") args = parser.parse_args() logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__file__) logger.info(pformat(args)) if args.model_checkpoint == "": logging.error("Checkpoint needed!") return random.seed(args.seed) torch.random.manual_seed(args.seed) torch.cuda.manual_seed(args.seed) logger.info("Get pretrained model and tokenizer") tokenizer_class = BertTokenizer tokenizer = tokenizer_class.from_pretrained(args.model_checkpoint, do_lower_case=True) tf_model = 
TFOpenAIGPTLMHeadModel.from_pretrained('./trans_model', from_pt=True) max_len = 128 input_ids = tf.keras.layers.Input(shape=(max_len,), name='input_ids', dtype='int32') token_type_ids = tf.keras.layers.Input(shape=(max_len,), name='token_type_ids', dtype='int32') keras_input = [input_ids, token_type_ids] qa_output = tf_model(input_ids, token_type_ids=token_type_ids)[0] print('**************************') print(type(qa_output)) print(qa_output) keras_model = tf.keras.Model(inputs= keras_input, outputs = [qa_output]) keras_model.trainable = False keras_model.summary() # keras_model.save("./saved_model", save_format="tf") # tf.saved_model.save(keras_model, "./saved_model") # model = tf.saved_model.load("./saved_model") # keras_model.save("./saved_model") # print('**************************') # model = tf.keras.models.load_model("./saved_model") def tokenize(obj): if isinstance(obj, str): return tokenizer.convert_tokens_to_ids(tokenizer.tokenize(obj)) if isinstance(obj, dict): return dict((n, tokenize(o)) for n, o in obj.items()) return list(tokenize(o) for o in obj) history = [] while True: raw_text = input(">>> ") while not raw_text: print('Prompt should not be empty!') raw_text = input(">>> ") sys.stdout.flush() # print(raw_text) raw_text = " ".join(list(raw_text.replace(" ", ""))) # print(raw_text) sys.stdout.flush() history.append(tokenize(raw_text)) # print('history', history) with torch.no_grad(): out_ids = sample_sequence(history, tokenizer, keras_model, args) history.append(out_ids) history = history[-(2 * args.max_history + 1):] out_text = tokenizer.decode(out_ids, skip_special_tokens=True) print(out_text) if __name__ == "__main__": run() ```<|||||>For now I don't really have time to check this, but as far as I can see, the issue is that you are not giving a sequence of 128, your sequences have to be padded to 128. If at each step you get one more element to your sequence, you can remove one padding everytime. Example: 1st iteration, shape [1, 128, 13088]: [[ [embed char 1], [embed char 2], [embed char 3], [embed char 4], [embed char 5], [embed padding], [embed padding], [embed padding], ... [embed padding] ]] 2nd iteration, shape [1, 128, 13088]: [[ [embed char 1], [embed char 2], [embed char 3], [embed char 4], [embed char 5], [embed char 6], [embed padding], [embed padding], ... [embed padding] ]] And so on. <|||||>Yes, I tried this before. But the performance declined a lot. Thanks for your patience!<|||||>Same Issue in Electra. i think, the TensorSpec below seems to be a dummy input for building TF2 model in transformers library. `{'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}` but, i don't know why, that dummy is alive after saving and loading.<|||||>All the models are initialized with this input. If you want to change it you have to recompile it with your own input as I shown in my previous posts.<|||||>@wulikai1993 > Yes, I tried this before. But the performance declined a lot. Thanks for your patience! Indeed the perf will decline, but you still get the same issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@jplu any updates? Default exporting or dynamic shapes are still failing for me<|||||>@jplu Even by using your approach, still get the error on XLM models (max_len set to 192 in the `Input`): ``` ValueError: The two structures don't have the same nested structure. 
First structure: type=TensorSpec str=TensorSpec(shape=(None, 192), dtype=tf.int32, name='inputs')

Second structure: type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}

More specifically:

Substructure "type=dict str={'input_ids': TensorSpec(shape=(None, 5), dtype=tf.int32, name='input_ids')}" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, 192), dtype=tf.int32, name='inputs')" is not
```
Here is the model template, which works perfectly before exporting and trying to load it again:
```
def get_model(transformer, num_classes=1, max_len=512):
    input_word_ids = Input(shape=(192,), dtype=tf.int32, name="input_word_ids")
    sequence_output = transformer(input_word_ids)[0]
    cls_token = sequence_output[:, 0, :]
    out = Dense(num_classes, activation='sigmoid')(cls_token)
    model = Model(inputs=input_word_ids, outputs=out)
    model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])
    return model
```<|||||>@dshahrokhian i have the same problem, is there a solution? <|||||>@fikhrimasri gave up on dynamic shapes for now<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
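To make the fixed-length padding suggestion from the thread above concrete, here is a minimal sketch of right-padding the growing decoder prefix to the 128 positions the Keras `Input` layers expect; it assumes the `keras_model`, `tf_input_ids` and `tf_token_type_ids` variables from the snippet above, and the pad id of 0 is an assumption (substitute the tokenizer's real pad token id).
```
import numpy as np

MAX_LEN = 128  # must match the shape passed to tf.keras.layers.Input(shape=(max_len,))
PAD_ID = 0     # assumption: use tokenizer.pad_token_id for the actual model


def pad_to_fixed_length(ids, max_len=MAX_LEN, pad_id=PAD_ID):
    """Right-pad a (1, seq_len) int array to (1, max_len) so the fixed-shape Keras model accepts it."""
    ids = np.asarray(ids)
    return np.pad(ids, ((0, 0), (0, max_len - ids.shape[1])), mode="constant", constant_values=pad_id)


# inside the generation loop, before calling keras_model.predict:
cur_len = tf_input_ids.shape[1]  # real (unpadded) length of the current prefix
tf_logits = keras_model.predict(
    [pad_to_fixed_length(tf_input_ids), pad_to_fixed_length(tf_token_type_ids)]
)
# read the logits at the last real position, not at index -1 of the padded sequence
next_token_logits = tf_logits[0, cur_len - 1, :]
```
Reading the logits at `cur_len - 1` rather than at the padded end is worth checking if generation quality drops after switching to padded inputs.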
transformers
7,163
closed
Pegasus-Arxiv predicts random text
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-47-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @sshleifer ## Information Model I am using (Pegasus-Arxiv): The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Download the pegasus-arxiv model 2. Use the sample script below 3. You will get the result below: ``` import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer src_text ="""In this sequel to the phenomenally popular Harry Potter and the Sorcerer’s Stone, Harry returns to Hogwarts School of Witchcraft and Wizardry for his second year after a miserable summer with his Muggle (nonmagical) relatives. Once again, Harry’s school experiences are colored by encounters with genial ghosts and antagonistic teachers, by the rivalry between good-guy Gryffindor House and slimy Slytherin House, and by an ominous mystery to be solved involving Harry’s archenemy, the dark sorcerer Lord Voldemort. Once again, the attraction of Rowling’s traditional British school story is magnified tenfold by the fantasy elements superimposed upon it. The atmosphere Rowling creates is unique; the story whizzes along; Harry is an unassuming and completely sympathetic hero. But, truth to tell, you may feel as if you’ve read it all before. Rowling clearly hit on a winning formula with the first Harry Potter book; the second book — though still great fun — feels a tad, well, formulaic.""" model_name = 'google/pegasus-arxiv' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) batch = tokenizer.prepare_seq2seq_batch([src_text], truncation=True, padding='longest').to(torch_device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) print(tgt_text) ``` ## Expected behavior I expect a clear summary of the text, but I receive a text with no connection to the input, written as a scientific paper: _['this is the first of a series of papers in which we address the question of whether or not the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom. <n> we show that the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom. <n> this is the first of a series of papers in which we address the question of whether or not the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom. <n> we show that the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom. 
<n> [ theorem]acknowledgement [ theorem]algorithm [ theorem]axiom [ theorem]claim [ theorem]conclusion [ theorem]condition [ theorem]conjecture [ theorem]corollary [ theorem]criterion [ theorem]definition [ theorem]example [ theorem]exercise [ theorem]lemma [ theorem]notation [ theorem]problem [ theorem]proposition [ theorem]remark [ theorem]solution [ theorem]summary this is the first of a series of papers in which we address the question of whether or not the laws of thermodynamics are valid in the limit of infinitely many degrees of freedom.']_ Am I doing something wrong or is it the model? Thanks
09-16-2020 08:11:07
09-16-2020 08:11:07
Seems to be a problem with the 'google/pegasus-arxiv' model, when you use 'google/pegasus-xsum' you get: `Harry Potter and the Philosopher’s Stone is the seventh and final book in JK Rowling’s Harry Potter series.` as output<|||||>Yes I tried different pegasus models (including alot of other models) and pegasus-large e.G. outputs this (which I think is really good1): _In this sequel to the phenomenally popular Harry Potter and the Sorcerer’s Stone, Harry returns to Hogwarts School of Witchcraft and Wizardry for his second year after a miserable summer with his Muggle (nonmagical) relatives. Rowling clearly hit on a winning formula with the first Harry Potter book; the second book — though still great fun — feels a tad, well, formulaic.'_ while pegasus-multinews outputs pretty well generated texts, but unfortunately wrong in the content: _– The seventh and final book in the Harry Potter series, Harry Potter and the Sorcerer\'s Stone, is out today. The sixth book in the series, Harry Potter and the Deathly Hallows, was released in the US in advance of tomorrow\'s release in the UK. Here\'s what critics are saying about the seventh and final book in the series: The plot is still compelling, but the book "feels a tad, well, formulaic," writes James Poniewozik in Time. "The atmosphere Rowling creates is unique; the story whizzes along; Harry is an unassuming and completely sympathetic hero. But, truth to tell, you may feel as if you\'ve read it all before. Rowling clearly hit on a winning formula with the first Harry Potter book; the second book—though still great fun—feels a tad, well, formulaic."'_ Gigaword and billsum are both also outputting non useful texts. Also another question, while pegasus-large and pegasus-cnn_dailymail both only return the most important sentences, pegasus-multinews generates even new text. I was hoping the same for the arxiv model, is there a reason that it differs in that way?<|||||>`pegasus-arxiv` is trained on and expects scientific text. `pegasus-multinews` expects news I presume. If you want to prove a bug, try running an evaluation on a public dataset from the datasets package, and posting the result #6844 .<|||||># Environment info transformers version: 3.1.0 Platform: Windows - 10 Python version: 3.7.6 PyTorch version (GPU?): 1.5.0 (False) Using GPU in script?: no Using distributed or parallel set-up in script?: no ## To Reproduce I found unexpected behaviour when using Pegasus-Pubmed on Pubmed document. ``` import torch from transformers import PegasusForConditionalGeneration, PegasusTokenizer src_text ="""although the association is modest , it is important because of the increasing prevalence of metabolic syndrome and the effect that depression can have on the ability of patients to successfully make lifestyle changes and comply with medication required for hypertension and dyslipidemia . the association is demonstrated here in a general population to our knowledge for the first time , whereas earlier studies ( table 1 ) used subgroups of populations ( 813,17 ) . this distinction is important because many individuals with metabolic syndrome have diabetes , which itself is known to be associated with depression ( 5 ) . metabolic syndrome has been defined in several ways that involve quantitative anthropometric , clinical , and laboratory measurements ( 1,2 ) . 
for the primary assessment , we chose ncep atp iii ( 1 ) criteria , since these criteria were used in most of the previously reported studies ( 8,9,1113,17 ) .""" model_name = 'google/pegasus-pubmed' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device) batch = tokenizer.prepare_seq2seq_batch([src_text], truncation=True, padding='longest').to(torch_device) translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) print(tgt_text) ``` ## Expected behaviour I expect a summary of the input but i received a longer (relative to the input) version of the text, input has length of 929 vs 1129 of predicted summary. In particular Pegasus generate new knowledge (bold text) which isn't inside the input text. Input: although the association is modest , it is important because of the increasing prevalence of metabolic syndrome and the effect that depression can have on the ability of patients to successfully make lifestyle changes and comply with medication required for hypertension and dyslipidemia . the association is demonstrated here in a general population to our knowledge for the first time , whereas earlier studies ( table 1 ) used subgroups of populations ( 813,17 ) . this distinction is important because many individuals with metabolic syndrome have diabetes , which itself is known to be associated with depression ( 5 ) . metabolic syndrome has been defined in several ways that involve quantitative anthropometric , clinical , and laboratory measurements ( 1,2 ) . for the primary assessment , we chose ncep atp iii ( 1 ) criteria , since these criteria were used in most of the previously reported studies ( 8,9,1113,17 ) . Output: ['depression is known to be associated with metabolic syndrome, but its association with metabolic syndrome has not been studied in a general population. <n> we examined the association between depression and metabolic syndrome using ncep atp iii criteria in a population - based sample ( n = 3,018 ). <n> **metabolic syndrome was defined as having three or more of the following : body mass index 25 kg / m2, waist circumference 90 cm, and triglyceride 130 mg / dl.** <n> **depression was assessed using the center for epidemiologic studies depression scale ( cesds ).** <n> **multivariate logistic regression was used to estimate odds ratios ( ors ) and 95% confidence intervals ( cis ) for the association between depression and metabolic syndrome.** <n> we found a significant association between depression and metabolic syndrome in a general population. <n> **after adjustment for age, sex, race / ethnicity, education, smoking, physical activity, alcohol intake, and body mass index, metabolic syndrome was associated with increased odds of depression ( or = 1.16, 95% ci 1.041.32 ).** <n> **the association was stronger in women than in men.**'] Is that behaviour correct?<|||||>Output should be < 256 tokens (not characters). Input should probably be longer (closer to 1024 tokens). Try copying something from the leftmost column of the [dataset](https://huggingface.co/nlp/viewer/?dataset=scientific_papers&config=pubmed)<|||||>We've now replicated that our pegasus port performs similarly well to the authors implementation on 11 datasets, including arxiv. 
![image](https://user-images.githubusercontent.com/6045025/96278960-21bcb300-0fa4-11eb-8ab3-8fb46c721818.png) [Link to Spreadsheet]( https://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit#gid=0)<|||||> https://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit#gid=0
transformers
7,162
closed
Create larger summaries by using Summarization models like T5 or Pegasus
Hi, I want to create summaries of book reviews that should be about half the size of the original text corpus. My goal is to compare how well the summaries are created and whether the important content of the original texts is carried over. I had a look at different models and saw that they have a fixed maximum output length (often about 256 tokens), which corresponds to generating only 1-2 sentences, while my goal is, for example, to produce 8 sentences when the original text corpus has 16. Of course I could split my text corpus into smaller pieces, but then I am afraid that the context of the previous sentences gets lost. (Is there maybe a way to pass the context information on to the next predictions, as can be done in stateful RNNs? At least I don't think so.) Surely many others have already had this issue, since there are tasks such as summarizing whole documents or even books, which cannot be done in one sentence. I already [asked](https://stackoverflow.com/questions/63904821/using-transformer-for-text-summarization) on SO, but unfortunately I didn't get an answer there. So I would appreciate any hint on how I could continue with my task. Thank you!
09-16-2020 07:12:46
09-16-2020 07:12:46
Pinging the summarization expert here @sshleifer <|||||>Can we move [here](https://discuss.huggingface.co/t/summarization-on-long-documents/920/5) I am super happy to help, but want to help everybody at the same time. @ibeltagy is working hard on summarizing longer docs, but besides his efforts we don't have what you are looking for.<|||||>Thank you for posting, I will have a look at it.
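One common workaround for the fixed output length discussed above is to summarize a long review chunk by chunk and join the partial summaries; below is a rough sketch using the summarization pipeline — the chunking is naive and the model name is just an example, not something recommended in the thread.
```
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")


def summarize_long_text(text, max_chunk_chars=2000):
    """Naively split the text on sentence boundaries, summarize each chunk, join the results."""
    chunks, current = [], ""
    for sentence in text.split(". "):
        if current and len(current) + len(sentence) > max_chunk_chars:
            chunks.append(current)
            current = ""
        current += sentence + ". "
    if current:
        chunks.append(current)

    partial_summaries = [
        summarizer(chunk, max_length=120, min_length=30)[0]["summary_text"]
        for chunk in chunks
    ]
    return " ".join(partial_summaries)
```
The obvious trade-off, as the question points out, is that context is lost across chunk boundaries; overlapping the chunks slightly is a cheap way to soften that.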
transformers
7,161
closed
Add empty random document case to DataCollatorForNextSentencePrediction
This PR proposes a simple fix for an issue in `DataCollatorForNextSentencePrediction` that occurs when a document chosen at random for Token B for the NSP job turns out to be an empty document. The issue that occurs in this case shows up here, where we look for a random starting place to start Token B. https://github.com/huggingface/transformers/blob/b00cafbde575b21ff21f2664e297c50b4c5bb63a/src/transformers/data/data_collator.py#L514-L516 Although this issue should not arise if the data has been cleaned and formatted perfectly, it seems like a good precautionary measure to have in place.
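As an illustration of the failure mode this PR guards against (a sketch of the idea, not the actual diff), the random document can be re-drawn until a non-empty one is found before `random.randint(0, len(random_document) - 1)` is called; the `documents` list and the function name below are hypothetical stand-ins for the collator's internals.
```
import random


def pick_random_segment_start(documents, current_doc_index):
    """Choose a random start inside a document other than the current one, skipping empty documents.

    Assumes at least one other non-empty document exists in `documents`.
    """
    random_document = []
    while not random_document:  # an empty document would make randint(0, -1) raise ValueError
        random_document_index = random.randint(0, len(documents) - 1)
        if random_document_index == current_doc_index:
            continue
        random_document = documents[random_document_index]
    return random_document, random.randint(0, len(random_document) - 1)
```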
09-16-2020 07:00:08
09-16-2020 07:00:08
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=h1) Report > Merging [#7161](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/85ffda96fcadf70d2558ba0a59c84b9f5a2d6f0f?el=desc) will **increase** coverage by `2.43%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7161/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7161 +/- ## ========================================== + Coverage 78.44% 80.88% +2.43% ========================================== Files 168 168 Lines 32309 32309 ========================================== + Hits 25346 26134 +788 + Misses 6963 6175 -788 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.54% <100.00%> (+0.35%)` | :arrow_up: | | [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.51%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (-0.36%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <0.00%> (+0.27%)` | :arrow_up: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7161/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=footer). Last update [b00cafb...021caf3](https://codecov.io/gh/huggingface/transformers/pull/7161?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks, @LysandreJik!
transformers
7,160
closed
distributed launch raises an error
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.6.5 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: distributed on a single node with 4 gpu cards ### Who can help Longformer/Reformer: @patrickvonplaten --> ## Information Model I am using (LongformerForQuestionAnswering): The problem arises when using: * [ Y ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ Y ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. run run_squad.py with longformer and enable fp16 and gradient_checkpointing. ``` 09/16/2020 06:04:57 - INFO - __main__ - ***** Running training ***** 09/16/2020 06:04:57 - INFO - __main__ - Num examples = 800 09/16/2020 06:04:57 - INFO - __main__ - Num Epochs = 2 09/16/2020 06:04:57 - INFO - __main__ - Instantaneous batch size per GPU = 6 09/16/2020 06:04:57 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 24 09/16/2020 06:04:57 - INFO - __main__ - Gradient Accumulation steps = 1 09/16/2020 06:04:57 - INFO - __main__ - Total optimization steps = 68 09/16/2020 06:04:57 - INFO - __main__ - Starting fine-tuning. Epoch: 0%| | 0/2 [00:00<?, ?it/s/opt/conda/lib/python3.7/site-packages/transformers/modeling_longformer.py:72: UserWarning: This overload of nonzero is deprecated::00<?, ?it/s] nonzero() Consider using one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/utils/python_arg_parser.cpp:766.) sep_token_indices = (input_ids == sep_token_id).nonzero() /opt/conda/lib/python3.7/site-packages/transformers/modeling_longformer.py:72: UserWarning: This overload of nonzero is deprecated: nonzero() Consider using one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/utils/python_arg_parser.cpp:766.) sep_token_indices = (input_ids == sep_token_id).nonzero() /opt/conda/lib/python3.7/site-packages/transformers/modeling_longformer.py:72: UserWarning: This overload of nonzero is deprecated: nonzero() Consider using one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/utils/python_arg_parser.cpp:766.) sep_token_indices = (input_ids == sep_token_id).nonzero() /opt/conda/lib/python3.7/site-packages/transformers/modeling_longformer.py:72: UserWarning: This overload of nonzero is deprecated: nonzero() Consider using one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/utils/python_arg_parser.cpp:766.) 
sep_token_indices = (input_ids == sep_token_id).nonzero() Traceback (most recent call last): File "run_squad.py", line 839, in <module> main() File "run_squad.py", line 780, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_squad.py", line 213, in train scaled_loss.backward() File "/opt/conda/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet. Exception raised from mark_variable_ready at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/distributed/c10d/reducer.cpp:453 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f7ace01177d in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10d::Reducer::mark_variable_ready(c10d::Reducer::VariableIndex) + 0x4cd (0x7f7b07e1239d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #2: c10d::Reducer::autograd_hook(c10d::Reducer::VariableIndex) + 0xeb (0x7f7b07e12bdb in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #3: <unknown function> + 0xabdd16 (0x7f7b07e12d16 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0xac4dc6 (0x7f7b07e19dc6 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #5: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x4dd (0x7f7b0355693d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #6: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f7b03558401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #7: torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) + 0x25c (0x7f7b035559fc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #8: torch::autograd::python::PythonEngine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) + 0x3c (0x7f7b0787fdcc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #9: torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x803 (0x7f7b03554e53 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #10: 
torch::autograd::python::PythonEngine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x4e (0x7f7b0787fbbe in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #11: THPEngine_run_backward(THPEngine*, _object*, _object*) + 0xa29 (0x7f7b07880889 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #12: _PyMethodDef_RawFastCallKeywords + 0x306 (0x7f7b2b5e2d36 in /opt/conda/bin/python) frame #13: _PyCFunction_FastCallKeywords + 0x21 (0x7f7b2b5e2db1 in /opt/conda/bin/python) frame #14: _PyEval_EvalFrameDefault + 0x52b5 (0x7f7b2b64ea85 in /opt/conda/bin/python) frame #15: _PyEval_EvalCodeWithName + 0x2f9 (0x7f7b2b5922b9 in /opt/conda/bin/python) frame #16: _PyFunction_FastCallKeywords + 0x325 (0x7f7b2b5e2435 in /opt/conda/bin/python) frame #17: _PyEval_EvalFrameDefault + 0x4a59 (0x7f7b2b64e229 in /opt/conda/bin/python) frame #18: _PyEval_EvalCodeWithName + 0x2f9 (0x7f7b2b5922b9 in /opt/conda/bin/python) frame #19: _PyFunction_FastCallDict + 0x1d5 (0x7f7b2b5933e5 in /opt/conda/bin/python) frame #20: _PyEval_EvalFrameDefault + 0x1d4a (0x7f7b2b64b51a in /opt/conda/bin/python) frame #21: _PyEval_EvalCodeWithName + 0x2f9 (0x7f7b2b5922b9 in /opt/conda/bin/python) frame #22: _PyFunction_FastCallDict + 0x1d5 (0x7f7b2b5933e5 in /opt/conda/bin/python) frame #23: _PyObject_Call_Prepend + 0x63 (0x7f7b2b5b1b93 in /opt/conda/bin/python) frame #24: PyObject_Call + 0x6e (0x7f7b2b5a495e in /opt/conda/bin/python) frame #25: torch::autograd::PyNode::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x183 (0x7f7b07888033 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #26: <unknown function> + 0x30d1017 (0x7f7b0355c017 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #27: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7f7b03557860 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #28: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f7b03558401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #29: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7f7b03550579 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #30: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7f7b0787f99a in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #31: <unknown function> + 0xc819d (0x7f7b0a3b619d in /opt/conda/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6) frame #32: <unknown function> + 0x76db (0x7f7b2b03b6db in /lib/x86_64-linux-gnu/libpthread.so.0) frame #33: clone + 0x3f (0x7f7b2ad6488f in /lib/x86_64-linux-gnu/libc.so.6) Iteration: 0%| | 0/34 [00:00<?, ?it/s] Epoch: 0%| | 0/2 [00:00<?, ?it/s] Traceback (most recent call last): File "run_squad.py", line 839, in <module> main() File "run_squad.py", line 780, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_squad.py", line 213, in train scaled_loss.backward() File 
"/opt/conda/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet. Exception raised from mark_variable_ready at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/distributed/c10d/reducer.cpp:453 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f78f667577d in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10d::Reducer::mark_variable_ready(c10d::Reducer::VariableIndex) + 0x4cd (0x7f793047639d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #2: c10d::Reducer::autograd_hook(c10d::Reducer::VariableIndex) + 0xeb (0x7f7930476bdb in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #3: <unknown function> + 0xabdd16 (0x7f7930476d16 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0xac4dc6 (0x7f793047ddc6 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #5: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x4dd (0x7f792bbba93d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #6: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f792bbbc401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #7: torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) + 0x25c (0x7f792bbb99fc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #8: torch::autograd::python::PythonEngine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) + 0x3c (0x7f792fee3dcc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #9: torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x803 (0x7f792bbb8e53 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #10: torch::autograd::python::PythonEngine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x4e (0x7f792fee3bbe in 
/opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #11: THPEngine_run_backward(THPEngine*, _object*, _object*) + 0xa29 (0x7f792fee4889 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #12: _PyMethodDef_RawFastCallKeywords + 0x306 (0x7f7953c46d36 in /opt/conda/bin/python) frame #13: _PyCFunction_FastCallKeywords + 0x21 (0x7f7953c46db1 in /opt/conda/bin/python) frame #14: _PyEval_EvalFrameDefault + 0x52b5 (0x7f7953cb2a85 in /opt/conda/bin/python) frame #15: _PyEval_EvalCodeWithName + 0x2f9 (0x7f7953bf62b9 in /opt/conda/bin/python) frame #16: _PyFunction_FastCallKeywords + 0x325 (0x7f7953c46435 in /opt/conda/bin/python) frame #17: _PyEval_EvalFrameDefault + 0x4a59 (0x7f7953cb2229 in /opt/conda/bin/python) frame #18: _PyEval_EvalCodeWithName + 0x2f9 (0x7f7953bf62b9 in /opt/conda/bin/python) frame #19: _PyFunction_FastCallDict + 0x1d5 (0x7f7953bf73e5 in /opt/conda/bin/python) frame #20: _PyEval_EvalFrameDefault + 0x1d4a (0x7f7953caf51a in /opt/conda/bin/python) frame #21: _PyEval_EvalCodeWithName + 0x2f9 (0x7f7953bf62b9 in /opt/conda/bin/python) frame #22: _PyFunction_FastCallDict + 0x1d5 (0x7f7953bf73e5 in /opt/conda/bin/python) frame #23: _PyObject_Call_Prepend + 0x63 (0x7f7953c15b93 in /opt/conda/bin/python) frame #24: PyObject_Call + 0x6e (0x7f7953c0895e in /opt/conda/bin/python) frame #25: torch::autograd::PyNode::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x183 (0x7f792feec033 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #26: <unknown function> + 0x30d1017 (0x7f792bbc0017 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #27: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7f792bbbb860 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #28: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f792bbbc401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #29: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7f792bbb4579 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #30: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7f792fee399a in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #31: <unknown function> + 0xc819d (0x7f7932a1a19d in /opt/conda/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6) frame #32: <unknown function> + 0x76db (0x7f795369f6db in /lib/x86_64-linux-gnu/libpthread.so.0) frame #33: clone + 0x3f (0x7f79533c888f in /lib/x86_64-linux-gnu/libc.so.6) Traceback (most recent call last): File "run_squad.py", line 839, in <module> main() File "run_squad.py", line 780, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_squad.py", line 213, in train scaled_loss.backward() File "/opt/conda/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Expected to mark a variable ready only once. 
This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet. Exception raised from mark_variable_ready at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/distributed/c10d/reducer.cpp:453 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7f88d10da77d in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10d::Reducer::mark_variable_ready(c10d::Reducer::VariableIndex) + 0x4cd (0x7f890aedb39d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #2: c10d::Reducer::autograd_hook(c10d::Reducer::VariableIndex) + 0xeb (0x7f890aedbbdb in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #3: <unknown function> + 0xabdd16 (0x7f890aedbd16 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0xac4dc6 (0x7f890aee2dc6 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #5: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x4dd (0x7f890661f93d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #6: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f8906621401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #7: torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) + 0x25c (0x7f890661e9fc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #8: torch::autograd::python::PythonEngine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) + 0x3c (0x7f890a948dcc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #9: torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x803 (0x7f890661de53 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #10: torch::autograd::python::PythonEngine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x4e (0x7f890a948bbe in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #11: THPEngine_run_backward(THPEngine*, _object*, _object*) + 0xa29 (0x7f890a949889 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #12: _PyMethodDef_RawFastCallKeywords + 0x306 (0x7f892e6abd36 in /opt/conda/bin/python) frame #13: _PyCFunction_FastCallKeywords + 0x21 (0x7f892e6abdb1 
in /opt/conda/bin/python) frame #14: _PyEval_EvalFrameDefault + 0x52b5 (0x7f892e717a85 in /opt/conda/bin/python) frame #15: _PyEval_EvalCodeWithName + 0x2f9 (0x7f892e65b2b9 in /opt/conda/bin/python) frame #16: _PyFunction_FastCallKeywords + 0x325 (0x7f892e6ab435 in /opt/conda/bin/python) frame #17: _PyEval_EvalFrameDefault + 0x4a59 (0x7f892e717229 in /opt/conda/bin/python) frame #18: _PyEval_EvalCodeWithName + 0x2f9 (0x7f892e65b2b9 in /opt/conda/bin/python) frame #19: _PyFunction_FastCallDict + 0x1d5 (0x7f892e65c3e5 in /opt/conda/bin/python) frame #20: _PyEval_EvalFrameDefault + 0x1d4a (0x7f892e71451a in /opt/conda/bin/python) frame #21: _PyEval_EvalCodeWithName + 0x2f9 (0x7f892e65b2b9 in /opt/conda/bin/python) frame #22: _PyFunction_FastCallDict + 0x1d5 (0x7f892e65c3e5 in /opt/conda/bin/python) frame #23: _PyObject_Call_Prepend + 0x63 (0x7f892e67ab93 in /opt/conda/bin/python) frame #24: PyObject_Call + 0x6e (0x7f892e66d95e in /opt/conda/bin/python) frame #25: torch::autograd::PyNode::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x183 (0x7f890a951033 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #26: <unknown function> + 0x30d1017 (0x7f8906625017 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #27: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7f8906620860 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #28: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7f8906621401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #29: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7f8906619579 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #30: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7f890a94899a in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #31: <unknown function> + 0xc819d (0x7f890d47f19d in /opt/conda/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6) frame #32: <unknown function> + 0x76db (0x7f892e1046db in /lib/x86_64-linux-gnu/libpthread.so.0) frame #33: clone + 0x3f (0x7f892de2d88f in /lib/x86_64-linux-gnu/libc.so.6) Traceback (most recent call last): File "run_squad.py", line 839, in <module> main() File "run_squad.py", line 780, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_squad.py", line 213, in train scaled_loss.backward() File "/opt/conda/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. 
For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet. Exception raised from mark_variable_ready at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/distributed/c10d/reducer.cpp:453 (most recent call first): frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x4d (0x7fa6fe83577d in /opt/conda/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10d::Reducer::mark_variable_ready(c10d::Reducer::VariableIndex) + 0x4cd (0x7fa73863639d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #2: c10d::Reducer::autograd_hook(c10d::Reducer::VariableIndex) + 0xeb (0x7fa738636bdb in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #3: <unknown function> + 0xabdd16 (0x7fa738636d16 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0xac4dc6 (0x7fa73863ddc6 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #5: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x4dd (0x7fa733d7a93d in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #6: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7fa733d7c401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #7: torch::autograd::Engine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) + 0x25c (0x7fa733d799fc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #8: torch::autograd::python::PythonEngine::execute_with_graph_task(std::shared_ptr<torch::autograd::GraphTask> const&, std::shared_ptr<torch::autograd::Node>) + 0x3c (0x7fa7380a3dcc in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #9: torch::autograd::Engine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x803 (0x7fa733d78e53 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #10: torch::autograd::python::PythonEngine::execute(std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, bool, bool, std::vector<torch::autograd::Edge, std::allocator<torch::autograd::Edge> > const&) + 0x4e (0x7fa7380a3bbe in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #11: THPEngine_run_backward(THPEngine*, _object*, _object*) + 0xa29 (0x7fa7380a4889 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #12: _PyMethodDef_RawFastCallKeywords + 0x306 (0x7fa75be06d36 in /opt/conda/bin/python) frame #13: _PyCFunction_FastCallKeywords + 0x21 (0x7fa75be06db1 in /opt/conda/bin/python) frame #14: _PyEval_EvalFrameDefault + 0x52b5 (0x7fa75be72a85 in /opt/conda/bin/python) frame #15: _PyEval_EvalCodeWithName + 0x2f9 (0x7fa75bdb62b9 in /opt/conda/bin/python) frame #16: _PyFunction_FastCallKeywords + 0x325 (0x7fa75be06435 in 
/opt/conda/bin/python) frame #17: _PyEval_EvalFrameDefault + 0x4a59 (0x7fa75be72229 in /opt/conda/bin/python) frame #18: _PyEval_EvalCodeWithName + 0x2f9 (0x7fa75bdb62b9 in /opt/conda/bin/python) frame #19: _PyFunction_FastCallDict + 0x1d5 (0x7fa75bdb73e5 in /opt/conda/bin/python) frame #20: _PyEval_EvalFrameDefault + 0x1d4a (0x7fa75be6f51a in /opt/conda/bin/python) frame #21: _PyEval_EvalCodeWithName + 0x2f9 (0x7fa75bdb62b9 in /opt/conda/bin/python) frame #22: _PyFunction_FastCallDict + 0x1d5 (0x7fa75bdb73e5 in /opt/conda/bin/python) frame #23: _PyObject_Call_Prepend + 0x63 (0x7fa75bdd5b93 in /opt/conda/bin/python) frame #24: PyObject_Call + 0x6e (0x7fa75bdc895e in /opt/conda/bin/python) frame #25: torch::autograd::PyNode::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&) + 0x183 (0x7fa7380ac033 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #26: <unknown function> + 0x30d1017 (0x7fa733d80017 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #27: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&) + 0x1400 (0x7fa733d7b860 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #28: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&) + 0x451 (0x7fa733d7c401 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #29: torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x89 (0x7fa733d74579 in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #30: torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool) + 0x4a (0x7fa7380a399a in /opt/conda/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #31: <unknown function> + 0xc819d (0x7fa73abda19d in /opt/conda/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6) frame #32: <unknown function> + 0x76db (0x7fa75b85f6db in /lib/x86_64-linux-gnu/libpthread.so.0) frame #33: clone + 0x3f (0x7fa75b58888f in /lib/x86_64-linux-gnu/libc.so.6) Traceback (most recent call last): File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 261, in <module> main() File "/opt/conda/lib/python3.7/site-packages/torch/distributed/launch.py", line 257, in main cmd=cmd) subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'run_squad.py', '--local_rank=3', '--model_type', 'longformer', '--do_train', '--model_name_or_path', 'longformer-base-len4K', '--do_eval', '--do_lower_case', '--threads', '30', '--fp16', '--eval_all_checkpoints', '--save_steps', '2500', '--train_file', './data/marco_v1.0/train.json.demo', '--predict_file', './data/marco_v1.0/dev.json', '--per_gpu_train_batch_size', '6', '--learning_rate', '3e-5', '--num_train_epochs', '2', '--max_seq_length', '2048', '--doc_stride', '1024', '--output_dir', 'output/marco_pyramidlocalatt']' returned non-zero exit status 1. ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
09-16-2020 06:18:04
09-16-2020 06:18:04
Hey @xixiaoyao, Could you please copy paste the command you used to run squad so that I can be 100% sure we are running the same command? How did you enable `gradient_checkpointing` ? Did you change the `run_squad.py` script? Would be great if you can copy-paste a runnable code snippet here :-) <|||||>Getting the same issue when using Reformer with pytorch-lightning's distributeddataparallel, although not using one of the official training scripts.<|||||>I am also getting this exact same error with Reformer, but only when I wrap it with DDP and then train across multiple GPUs on the same box. I do not get this error with Longformer. If I don't use DDP with Reformer, then it works fine. Am doing vanilla AR language model training using a custom script. But my script works fine when used on a single GPU with no DDP. The error seems to indicate that there's something about Reformer which DDP does not yet support: "RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet." Am using transformers 3.5.1.<|||||>> I am also getting this exact same error with Reformer, but only when I wrap it with DDP and then train across multiple GPUs on the same box. I do not get this error with Longformer. If I don't use DDP with Reformer, then it works fine. Am doing vanilla AR language model training using a custom script. But my script works fine when used on a single GPU with no DDP. The error seems to indicate that there's something about Reformer which DDP does not yet support: > > "RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet." > > Am using transformers 3.5.1. same with @trias702 <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>> > I am also getting this exact same error with Reformer, but only when I wrap it with DDP and then train across multiple GPUs on the same box. I do not get this error with Longformer. If I don't use DDP with Reformer, then it works fine. Am doing vanilla AR language model training using a custom script. But my script works fine when used on a single GPU with no DDP. The error seems to indicate that there's something about Reformer which DDP does not yet support: > > "RuntimeError: Expected to mark a variable ready only once. 
This error is caused by one of the following reasons: 1) Use of a module parameter outside the `forward` function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple `checkpoint` functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases yet." > > Am using transformers 3.5.1. > > same with @trias702 same with @trias702 @yuanenming any updates? @xixiaoyao
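The traceback above is the known clash between reentrant gradient checkpointing and DDP's unused-parameter detection; one workaround that is often suggested (not guaranteed to fix every setup) is to wrap the model with `find_unused_parameters=False` rather than `find_unused_parameters=True`. A hedged sketch of that change:
```
import torch
from torch.nn.parallel import DistributedDataParallel


def wrap_for_ddp(model: torch.nn.Module, local_rank: int) -> torch.nn.Module:
    """Wrap a gradient-checkpointed model for DDP without unused-parameter detection,
    which is what typically triggers 'Expected to mark a variable ready only once'."""
    return DistributedDataParallel(
        model,
        device_ids=[local_rank],
        output_device=local_rank,
        find_unused_parameters=False,
    )
```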
transformers
7,159
closed
I reduced the Longformer's attention window from 512 to 256, but training speed did not change
# ❓ Questions & Help
According to the Longformer's design, training speed and memory consumption should be sensitive to the local attention window size. However, I find that nothing changes when I halve the attention window size (from [512]*12 to [256]*12). Is this behavior expected, or is there something wrong?
09-16-2020 06:07:31
09-16-2020 06:07:31
Hey @xixiaoyao, the main gain is memory savings for longformer - so I'm not really surprised if you say that training speed does not change when halving the attention window...<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
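To see the memory effect mentioned above (rather than speed), here is a rough sketch comparing peak GPU memory for two window sizes; the measurement approach is approximate and the dummy token id is arbitrary.
```
import torch
from transformers import LongformerConfig, LongformerModel


def peak_memory_for_window(attention_window, seq_len=2048):
    config = LongformerConfig.from_pretrained(
        "allenai/longformer-base-4096", attention_window=attention_window
    )
    model = LongformerModel.from_pretrained("allenai/longformer-base-4096", config=config).cuda()
    input_ids = torch.full((1, seq_len), 100, dtype=torch.long).cuda()  # arbitrary token id

    torch.cuda.reset_peak_memory_stats()
    hidden_states = model(input_ids)[0]
    hidden_states.sum().backward()  # include activation memory of the backward pass
    peak = torch.cuda.max_memory_allocated()

    del model
    torch.cuda.empty_cache()
    return peak


print("window 512:", peak_memory_for_window([512] * 12))
print("window 256:", peak_memory_for_window([256] * 12))
```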
transformers
7,158
closed
OOM Issue when evaluating with Trainer
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Ubuntu 20.04 - Python version: Python 3.8.2 - PyTorch version (GPU?): 1.5.1 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Trainer: @sgugger ## Information Model I am using BartForConditionalGeneration: The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) Using the normal Trainer with trainer.evaluate() on the BillSum Dataset The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) BillSum Dataset ## To reproduce Steps to reproduce the behavior: 1. Load Billsum 2. FineTune Bart on Billsum 3. try to evaluate with trainer.evaluate() ` Traceback (most recent call last): File "main.py", line 211, in <module> main() File "main.py", line 182, in main trainer.evaluate() File "/home/mypc/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1157, in evaluate output = self.prediction_loop(eval_dataloader, description="Evaluation") File "/home/mypc/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1244, in prediction_loop preds = logits if preds is None else torch.cat((preds, logits), dim=0) RuntimeError: CUDA out of memory. Tried to allocate 2.58 GiB (GPU 0; 7.79 GiB total capacity; 2.72 GiB already allocated; 1.94 GiB free; 4.88 GiB reserved in total by PyTorch) ` ## Expected behavior The Tensors for the logits are getting really big (x, 1024, 52000) and will be concatenated every prediction step in the trainer.py in line 1244: ` if logits is not None: preds = logits if preds is None else torch.cat((preds, logits), dim=0) ` One possible solution is to argmax them while iterating through the prediction loop
09-16-2020 05:43:43
09-16-2020 05:43:43
We know there is a limitation on the size of dataset for evaluation/prediction because every tensors are concatenated together (this is necessary for distributed training). We plan to work on it using the work done in the `datasets` library to have the predictions cached in temporary files as the prediction loop go. However this is not going to be ready before a few weeks, so in the meantime, I encourage you to feed your dataset to predictions by small slices.<|||||>Okay, thank you. Currently i'm only debugging on my machine so the: ` if logits is not None: preds = logits.argmax(-1) if preds is None else torch.cat((preds, logits.argmax(-1)), dim=0)` -hack does it for me. But yeah as soon as I train on multiple machines I will feed it in small slices.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> We know there is a limitation on the size of dataset for evaluation/prediction because every tensors are concatenated together (this is necessary for distributed training). We plan to work on it using the work done in the `datasets` library to have the predictions cached in temporary files as the prediction loop go. However this is not going to be ready before a few weeks, so in the meantime, I encourage you to feed your dataset to predictions by small slices. Hi, has this problem been solved :)
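A sketch of the "small slices" suggestion above: run `trainer.predict` over shards of the eval set and keep only the argmaxed token ids, so the full `(num_examples, seq_len, vocab_size)` logits never have to be held at once. `trainer` and `eval_dataset` are assumed to be the objects from the training script, and the sketch assumes the model returns a single logits tensor.
```
import numpy as np
from torch.utils.data import Subset


def predict_in_slices(trainer, eval_dataset, slice_size=64):
    """Run prediction slice by slice and keep only argmaxed predictions."""
    all_pred_ids = []
    for start in range(0, len(eval_dataset), slice_size):
        indices = list(range(start, min(start + slice_size, len(eval_dataset))))
        output = trainer.predict(Subset(eval_dataset, indices))
        # output.predictions is (slice, seq_len, vocab_size); reduce it immediately
        all_pred_ids.append(output.predictions.argmax(-1))
    return np.concatenate(all_pred_ids, axis=0)
```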
transformers
7,157
closed
ProphetNet
# Add [ProphetNet](https://arxiv.org/abs/2001.04063). This PR implements both ProphetNet and XLM-ProphetNet. The model architectures are identical, but each model uses a different tokenizer. ## Description: ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction. ProphetNet is able to predict more future tokens with an n-stream decoder. The original implementation is Fairseq version at [github repo](https://github.com/microsoft/ProphetNet). xProphetNet has the same model structure but is pretrained with wikipedia 100 languages dataset as described in [xGLUE](https://arxiv.org/abs/2004.01401). xGLUE is a benchmark for cross-lingual NLU and NLG tasks. xProphetNet is also served as a baseline model for cross-lingual generation tasks in xGLUE NTG and QG. ## Usage: Take xGLUE NTG task as an example: The cross-lingual pretrained model is finetuned with English news title generation data, but inference with both English and other zero-shot language data. A quick usage is like: ``` from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg') tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/xprophetnet-large-wiki100-cased-xglue-ntg') EN_SENTENCE_TO_QUESTION = "Microsoft Corporation intends to officially end free support for the Windows 7 operating system after January 14, 2020, according to the official portal of the organization. From that day, users of this system will not be able to receive security updates, which could make their computers vulnerable to cyber attacks." RU_SENTENCE_TO_QUESTION = "орпорация Microsoft намерена официально прекратить бесплатную поддержку операционной системы Windows 7 после 14 января 2020 года, сообщается на официальном портале организации . С указанного дня пользователи этой системы не смогут получать обновления безопасности, из-за чего их компьютеры могут стать уязвимыми к кибератакам." ZH_SENTENCE_TO_QUESTION = "根据该组织的官方门户网站,微软公司打算在2020年1月14日之后正式终止对Windows 7操作系统的免费支持。从那时起,该系统的用户将无法接收安全更新,这可能会使他们的计算机容易受到网络攻击。" inputs = tokenizer([EN_SENTENCE_TO_QUESTION, RU_SENTENCE_TO_QUESTION, ZH_SENTENCE_TO_QUESTION], padding=True, max_length=256, return_tensors='pt') summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=100, early_stopping=True) print([tokenizer.decode(g) for g in summary_ids]) ``` Model will generate news titles like: ``` ['[SEP] Microsoft to end Windows 7 free support after January 14, 2020[SEP][PAD][PAD][PAD][PAD]', '[SEP] Microsoft намерена прекратить бесплатную поддержку Windows 7 после 14 января 2020 года[SEP]', '[SEP]微软打算终止对Windows 7操作系统的免费支持[SEP][PAD][PAD][PAD][PAD][PAD][PAD]'] ``` ## Released checkpoints: pretrained: ``` microsoft/prophetnet-large-uncased microsoft/xprophetnet-large-wiki100-cased ``` fine-tuned: ``` microsoft/prophetnet-large-uncased-cnndm microsoft/xprophetnet-large-wiki100-cased-xglue-ntg microsoft/xprophetnet-large-wiki100-cased-xglue-qg ``` ## Notes According to the outputs of original fairseq outputs, integration tests for prophetnet include: 1. encoder hidden states, decoder hidden states, model hidden states of pretrained Prophetnet, xProphetnet checkpoints 2. model hidden states of xProphetnet NTG finetuned model 3. Cross-lingual outputs of xProphetNet NTG finetuned model with different beam sizes 4. 
CNN/DM outputs of the ProphetNet CNN/DM fine-tuned model with different input lengths. The model was implemented so that all of its parts can be used separately. This means that `ProphetNetEncoder` and `ProphetNetDecoder` can be used as stand-alone models. `ProphetNetForCausalLM` can be instantiated easily from pretrained checkpoints and used within the `EncoderDecoderModel` framework.
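As a rough illustration of the last point (not taken from this PR, and assuming the released checkpoint can be plugged into the generic `EncoderDecoderModel` API as the decoder; the BERT encoder pairing is purely illustrative):

```python
from transformers import BertTokenizer, EncoderDecoderModel

# Hypothetical pairing: BERT encoder + ProphetNet causal-LM decoder.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "microsoft/prophetnet-large-uncased"
)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("ProphetNet predicts future n-grams.", return_tensors="pt")
outputs = model(input_ids=inputs.input_ids, decoder_input_ids=inputs.input_ids)
```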
09-16-2020 05:36:08
09-16-2020 05:36:08
I opened the wrong PR yesterday - please help me check this version, thanks! @JetRunner @patrickvonplaten <|||||>@qiweizhen - this looks great! Is this the complete PR? Can we close the "old" PR https://github.com/huggingface/transformers/pull/6187 in favor of this one? <|||||>@qiweizhen the integration tests look great! @JetRunner, I think we can take it from here :-) I saw that there are models, such as "xprophetnet-large-wiki100-cased-xglue-ntg", that are both under microsoft and under weizhen - @qiweizhen, are these models identical? <|||||>This PR is the complete version, as I rebased this branch onto the latest huggingface version following @JetRunner's directions. The models under Microsoft are what we actually used; those under qiweizhen were used for debugging. I will delete the models under qiweizhen. Thank you for your help @patrickvonplaten @JetRunner <|||||>> This PR is the complete version, as I rebased this branch onto the latest huggingface version following @JetRunner's directions. The models under Microsoft are what we actually used; those under qiweizhen were used for debugging. I will delete the models under qiweizhen. Thank you for your help @patrickvonplaten @JetRunner

Awesome! Thanks a million for your work! We will take it from here :-) <|||||>@patrickvonplaten Hi, may I ask when ProphetNet could be added to Transformers? Is there any work I can help with to get it integrated?<|||||>Hey @qiweizhen, sorry for the delay on this. ProphetNet is my number 1 priority next week. It should be merged by the end of next week. You have done your part - I might ping you with some further questions.<|||||>@qiweizhen - the integration tests are awesome! Thanks to them it should be quite straightforward to integrate the model.<|||||>@qiweizhen - would it be OK for you if we add a `ProphetNetModel` and an `XLMProphetNetModel`, each with their respective tokenizers? I think this would be cleaner and is also more in line with `Roberta` and `XLMRoberta`, for example. It should be quite easy to do this. I can take care of it - it would just be great to have your approval on it :-) <|||||>> @qiweizhen - would it be OK for you if we add a `ProphetNetModel` and an `XLMProphetNetModel`, each with their respective tokenizers? I think this would be cleaner and is also more in line with `Roberta` and `XLMRoberta`, for example. It should be quite easy to do this. I can take care of it - it would just be great to have your approval on it :-)

Sure! Thank you!
transformers
7,156
closed
[doc] [testing] improve/expand the Parametrization section
Fixed some issues, plus documented `@pytest.mark.parametrize` which is used extensively in the `examples` tests. @sgugger
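For reference, a minimal illustration of `@pytest.mark.parametrize` (the test name, parameters, and values below are made up, not taken from the documented section):

```python
import pytest

@pytest.mark.parametrize(
    "model_name, min_score",
    [("sshleifer/distilbart-cnn-12-6", 20.0), ("facebook/bart-large-cnn", 21.0)],
)
def test_summarization_scores(model_name, min_score):
    # pytest generates one test per (model_name, min_score) pair,
    # each appearing as a separate item in the test report.
    assert isinstance(model_name, str) and min_score > 0
```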
09-16-2020 02:53:27
09-16-2020 02:53:27
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=h1) Report > Merging [#7156](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/85ffda96fcadf70d2558ba0a59c84b9f5a2d6f0f?el=desc) will **increase** coverage by `0.97%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7156/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7156 +/- ## ========================================== + Coverage 78.44% 79.42% +0.97% ========================================== Files 168 168 Lines 32309 32309 ========================================== + Hits 25346 25662 +316 + Misses 6963 6647 -316 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-5.77%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.04% <0.00%> (+20.43%)` | :arrow_up: | | ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/7156/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=footer). Last update [b00cafb...b347e1c](https://codecov.io/gh/huggingface/transformers/pull/7156?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,155
closed
build/eval/gen-card scripts for fsmt
Here are the various build/eval/gen scripts used for fsmt. We will use them in the future should any updates/corrections need to be done. They are also a great starter for porting other similar architectures. Fixes #7092
09-15-2020 22:49:35
09-15-2020 22:49:35
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=h1) Report > Merging [#7155](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/85ffda96fcadf70d2558ba0a59c84b9f5a2d6f0f?el=desc) will **increase** coverage by `3.11%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7155/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7155 +/- ## ========================================== + Coverage 78.44% 81.56% +3.11% ========================================== Files 168 168 Lines 32309 32309 ========================================== + Hits 25346 26352 +1006 + Misses 6963 5957 -1006 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.69% <0.00%> (-0.54%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (ø)` | | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7155/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=footer). Last update [85ffda9...cdec73c](https://codecov.io/gh/huggingface/transformers/pull/7155?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Good for me too!
transformers
7,154
closed
Inconsistent parameter naming conventions in ModelConfigs
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 (N/A) ### Who can help @LysandreJik ## Information This isn't exactly a bug, but more a suggestion that would be helpful. Across several different models, there are config classes used to initialize a model (e.g. GPT2Config and BertConfig). Some of these configs have parameters that mean the same thing, but use different naming conventions for the parameters, for instance n_layer in GPT2Config is the same, conceptually, as num_hidden_layers for BertConfig. For ease of development, I think it could be worthwhile to make these parameter names consistent across all of the transformer config classes. I'd be happy to help out on this, it seems like it should be a relatively straightforward change, though it may break some backwards compatibility.
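To make the inconsistency concrete, a small illustration (the values shown are the library defaults):

```python
from transformers import BertConfig, GPT2Config

bert = BertConfig()
gpt2 = GPT2Config()

# Same concepts, different attribute names across configs;
# both lines print 12 12 768 with the default settings.
print(bert.num_hidden_layers, bert.num_attention_heads, bert.hidden_size)
print(gpt2.n_layer, gpt2.n_head, gpt2.n_embd)
```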
09-15-2020 22:38:29
09-15-2020 22:38:29
Hi! As you've mentioned, this would break backwards compatibility which is not something we're willing to do for a name change. Furthermore, these values are named as such to be the same as the original implementation, therefore these will not be changed. Thank you for opening an issue!
transformers
7,153
closed
[model cards] ported allenai Deep Encoder, Shallow Decoder models
Once @LysandreJik merges https://github.com/huggingface/transformers/pull/6940, please merge these cards. These are models ported from https://github.com/jungokasai/deep-shallow/. The models are already on S3 under `allenai` - thank you for moving them from my username, @sshleifer. I also added 2 more models from the same author. Fixes #7049
09-15-2020 21:21:49
09-15-2020 21:21:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=h1) Report > Merging [#7153](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0203ad43bcd0b29423dec6ca1a58ed58300f0d61?el=desc) will **decrease** coverage by `1.35%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7153/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7153 +/- ## ========================================== - Coverage 80.86% 79.50% -1.36% ========================================== Files 169 169 Lines 32293 32293 ========================================== - Hits 26114 25675 -439 - Misses 6179 6618 +439 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <0.00%> (-23.38%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.68% <0.00%> (-0.65%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/7153/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=footer). Last update [0203ad4...b4e0ad4](https://codecov.io/gh/huggingface/transformers/pull/7153?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I wasn't sure about the format for multiple languages in the head of the card, I added: ``` language: en, de ``` Is this correct? I couldn't find an example of the same situation. Perhaps this can be documented? @julien-c<|||||>I think you want like this https://s3.amazonaws.com/models.huggingface.co/bert/Helsinki-NLP/opus-mt-en-roa/README.md <|||||>> I think you want like this Do you mean: ``` language: - en - de ``` ?
transformers
7,152
closed
Gradient checkpointing for GPT-2
# 🚀 Feature request
Gradient checkpointing for GPT-2 would be very useful, especially for the larger models in the GPT-2 family. I tried to train the largest release, i.e. "gpt2-xl", on AWS's p3dn.24xlarge instance (8 V100 GPUs with 32 GB VRAM each) with no success. Even setting the batch size to 1 and using mixed-precision training (through torch.cuda.amp) failed with OOM. By the way, I was able to train the second-largest model ("gpt2-large") with that configuration (i.e. batch size of 1 and mixed precision enabled).
09-15-2020 18:03:17
09-15-2020 18:03:17
Did you run into the issue of imbalanced GPU usage? I have been trying to fine-tune the gpt2-xl model myself on two Titan RTX GPUs (24 GB RAM each) but the imbalanced GPU usage seems to be the main culprit to me. Not sure if gpt2-xl would fit even if I wasn't facing this issue. Why don't they provide some details about the memory requirements for fine-tuning the models. <|||||>> Did you run into the issue of imbalanced GPU usage? I have been trying to fine-tune the gpt2-xl model myself on two Titan RTX GPUs (24 GB RAM each) but the imbalanced GPU usage seems to be the main culprit to me. Not sure if gpt2-xl would fit even if this I wasn't facing this issue. Why don't they provide some details about the memory requirements for fine-tuning the models. No, my GPU usage was balanced among multiple GPUs.<|||||>For anyone interested, I was able to train the "gpt2-xl" model by implementing the gradient checkpointing myself by looking at the suggestions in the previous issues. Specifically, I added ``` gradient_checkpointing = kwargs.pop("gradient_checkpointing", None) if gradient_checkpointing is not None: config_dict["gradient_checkpointing"] = gradient_checkpointing ``` to https://github.com/huggingface/transformers/blob/eb074af75e2fc64c9ec2f5f80637885c455ee15b/src/transformers/configuration_utils.py#L358 . Additionally, I replaced the Block output with `return tuple(outputs)` in https://github.com/huggingface/transformers/blob/eb074af75e2fc64c9ec2f5f80637885c455ee15b/src/transformers/modeling_gpt2.py#L322 . Finally, I replaced the block() call with the ``` if getattr(self.config, "gradient_checkpointing", False): def create_custom_forward(module): def custom_forward(*inputs): return module(*inputs, layer_past, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions) return custom_forward outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(block), hidden_states) else: outputs = block( hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask[i], encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, use_cache=use_cache, output_attentions=output_attentions, ) ``` in https://github.com/huggingface/transformers/blob/eb074af75e2fc64c9ec2f5f80637885c455ee15b/src/transformers/modeling_gpt2.py#L611 . To activate the gradient checkpointing, I construct the model by supplying `gradient_checkpointing=True` to the constructor, e.g. `model = GPT2LMHeadModel.from_pretrained(model_checkpoint_directory, gradient_checkpointing=True)`. I made similar changes to `openai-gpt` just to test if checkpointing works since I can train it with and without gradient checkpointing due to its small size. I verified that both the normal and checkpointed training yielded the same losses in the same iterations. Moreover, I checked the memory utilization of the GPUs and verified that the checkpointed training needs considerably lower memory. Thus, I concluded that the gradient checkpointing is working. By the way, I could not train the xlarge model even with the gradient checkpointing if I use AdamW as the optimizer, even when the batch size is 1! Therefore, I switched to the RMSprop since its memory requirement is much smaller. I went ahead and performed a crude memory benchmark for training "gpt2-large" and "gpt2-xl" with sequence length = 1024. You can see the results below. 
![gpt2-memory](https://user-images.githubusercontent.com/32267027/93687651-ed9ac300-fac7-11ea-8f31-175157a52966.png) <|||||>This is great. Will be implementing this. So with gradient checkpointing, gpt2-xl only requires about 32.47 GB of RAM with RMSProp? <|||||>> This is great. Will be implementing this. So with gradient checkpointing, gpt2-xl only requires about 32.47 GB of RAM with RMSProp? That is correct for batch size of 3 according to my manual measurements.<|||||>When I implement this, I get the error : ``` File "hyde/src/transformers/modeling_utils.py", line 912, in from_pretrained TypeError: __init__() got an unexpected keyword argument 'gradient_checkpointing' ``` Did you do anything else besides the steps you mentioned? In the configuration_utils.py file, I have added the lines you mentioned after this piece of code: ``` if resolved_config_file is None: raise EnvironmentError config_dict = cls._dict_from_json_file(resolved_config_file) gradient_checkpointing = kwargs.pop("gradient_checkpointing", None) if gradient_checkpointing is not None: config_dict["gradient_checkpointing"] = gradient_checkpointing ``` Thank you so much. <|||||>Hey, could you check this?<|||||>Hi @cemilcengiz, could you tell me which version of transformers is used here?<|||||>> Hi @cemilcengiz, could you tell me which version of transformers is used here? 3.1.0<|||||>> When I implement this, I get the error : > > ``` > File "hyde/src/transformers/modeling_utils.py", line 912, in from_pretrained > TypeError: __init__() got an unexpected keyword argument 'gradient_checkpointing' > ``` > > Did you do anything else besides the steps you mentioned? > In the configuration_utils.py file, I have added the lines you mentioned after this piece of code: > > ``` > if resolved_config_file is None: > raise EnvironmentError > config_dict = cls._dict_from_json_file(resolved_config_file) > gradient_checkpointing = kwargs.pop("gradient_checkpointing", None) > if gradient_checkpointing is not None: > config_dict["gradient_checkpointing"] = gradient_checkpointing > ``` > > Thank you so much. No, that was all I did.<|||||>Can someone please make a PR and add this feature? I think with this added, finetuning would become a lot better with this library<|||||>Well, apparently this feature was added to the library 2 weeks after I opened the issue. Therefore, I am closing it.
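For completeness, once the built-in support landed, enabling it no longer requires source edits. A sketch, assuming a transformers release where `GPT2Config` exposes the `gradient_checkpointing` flag (it was added shortly after this issue was opened):

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Assumes `gradient_checkpointing` is a supported config attribute;
# `use_cache=False` avoids keeping past key/values during training.
config = GPT2Config.from_pretrained("gpt2-xl", gradient_checkpointing=True, use_cache=False)
model = GPT2LMHeadModel.from_pretrained("gpt2-xl", config=config)
```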
transformers
7,151
closed
fix the warning message of overflowed sequence
This PR fixes the warning message (of _**PreTrainedTokenizerBase.prepare_for_model**_) emitted when tokenizing a sequence that is longer than the model's specified maximum sequence length. The current warning message shows the length of the first sequence's input_ids, which can be misleading when a pair is being encoded and the first input alone doesn't exceed the limit. The warning should instead show the overall length of the encoded ids.
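For context, the warning in question typically fires in a case like the following, where the first sequence alone fits but the encoded pair does not (a sketch; exact lengths depend on the tokenizer):

```python
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
question = "A short question?"
context = "a very long passage " * 200  # pushes the pair past the 512-token limit

pair_ids = tok.encode(question, context)  # triggers the overflow warning
# The first sequence alone is short; only the combined encoding overflows.
print(len(tok.encode(question)), len(pair_ids))
```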
09-15-2020 17:43:03
09-15-2020 17:43:03
transformers
7,150
closed
Refactoring the TF activations functions
This PR groups all the TF activation functions into a common file, as is already done on the PyTorch side.
09-15-2020 15:50:01
09-15-2020 15:50:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=h1) Report > Merging [#7150](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52d250f6aa14844024806e5e4dd1c7882bbd8dd5?el=desc) will **increase** coverage by `0.38%`. > The diff coverage is `88.52%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7150/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7150 +/- ## ========================================== + Coverage 79.12% 79.50% +0.38% ========================================== Files 168 169 +1 Lines 32303 32287 -16 ========================================== + Hits 25560 25671 +111 + Misses 6743 6616 -127 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <50.00%> (-71.33%)` | :arrow_down: | | [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `75.00% <75.00%> (ø)` | | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.92% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.90% <100.00%> (+0.16%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `99.28% <100.00%> (+65.24%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <100.00%> (+<0.01%)` | :arrow_up: | | [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `94.04% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.59% <100.00%> (-23.42%)` | :arrow_down: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `94.55% <100.00%> (+0.43%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `97.06% <100.00%> (+0.15%)` | :arrow_up: | | ... and [30 more](https://codecov.io/gh/huggingface/transformers/pull/7150/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=footer). Last update [52d250f...a639d90](https://codecov.io/gh/huggingface/transformers/pull/7150?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,149
closed
Ignore me!
I did this PR by mistake, and found out late. Sorry Hugging Face!
09-15-2020 14:56:55
09-15-2020 14:56:55
Sorry for the mess-up, was working on a fork of mine (facepalm) and accidentally PR to the repo.<|||||>No worries – but potentially open to a PR adding Azure logging if it makes sense on your end!<|||||>Hi @julien-c , thanks for pointing that out! I am running `transformers` on AzureML (finetuning on classification tasks mainly), as I find it convenient because of the e.g. automatic compute provisioning and downsizing, experiment tracking, model versioning capabilities Azure offers. I don't know what's the best way to contribute with respect to this (i.e. sharing code running finetuning of BERT using AzureML), but here's what I know: - Implementing tensorboard-like capabilities in AzureML experiments can be done with trivial modifications to the current `trainer.py`, along the lines of https://github.com/huggingface/transformers/commit/939f6798acb178e3d8388e4597ef7e3dfcef0bf2#diff-cd3b4dd7d09c0b32ff40ccc1840a4da4 - Those trivial changes allow to see in the AzureML frontend trends like ![image](https://user-images.githubusercontent.com/4547987/93239437-0ed77880-f783-11ea-803e-f674a378f611.png) which can be handy when running experiments. - To launch one experiment, one would use Azure-specific code like that in https://github.com/microsoft/AzureML-BERT/blob/master/finetune/PyTorch/notebooks/Pretrained-BERT-GLUE.ipynb Microsoft had started a pretty cool repo at https://github.com/microsoft/AzureML-BERT/ with solutions for this using https://github.com/huggingface/transformers/tree/v0.6.2 . What I think is happening though is that `transformers` has evolved quite a lot since then and they would need to refresh those examples. I honestly don't know if it'd make sense to PR https://github.com/microsoft/AzureML-BERT (and not clutter the `transformers` with platform-specific code). WDYT? Cheers! <|||||>The idea above is resurrected here https://github.com/huggingface/transformers/pull/8062
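For reference, the kind of AzureML metric logging discussed above could look roughly like this. This is a sketch only: it assumes the `azureml-sdk` package and that the script is launched as an AzureML experiment (otherwise `Run.get_context()` returns an offline run), and the helper name is made up.

```python
from azureml.core import Run

run = Run.get_context()

def log_metrics(logs: dict, step: int):
    """Mirror trainer logs (loss, learning rate, ...) to the AzureML run,
    so they show up as charts in the experiment UI."""
    for name, value in logs.items():
        if isinstance(value, (int, float)):
            run.log(name, value)
```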
transformers
7,148
closed
add new model prophetnet
- prophetnet modified
- modify codes as suggested v1
- add prophetnet test files
09-15-2020 14:49:23
09-15-2020 14:49:23
Hi Patrick, I think @qiweizhen added the integration test. should we take from here?<|||||>yes! That sounds great :-) Thanks a lot @qiweizhen :-)
transformers
7,147
closed
Funnel model cards
Add the model cards for the 10 Funnel Transformer checkpoints
09-15-2020 14:26:38
09-15-2020 14:26:38
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=h1) Report > Merging [#7147](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52d250f6aa14844024806e5e4dd1c7882bbd8dd5?el=desc) will **increase** coverage by `0.12%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7147/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7147 +/- ## ========================================== + Coverage 79.12% 79.24% +0.12% ========================================== Files 168 168 Lines 32303 32303 ========================================== + Hits 25560 25600 +40 + Misses 6743 6703 -40 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `76.19% <0.00%> (-9.78%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.41% <0.00%> (-1.63%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.91% <0.00%> (-0.14%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.44% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.23% <0.00%> (+0.17%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.18% <0.00%> (+0.35%)` | :arrow_up: | | ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7147/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=footer). Last update [52d250f...c666fea](https://codecov.io/gh/huggingface/transformers/pull/7147?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,146
closed
Fine tune with local model raised `torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'config'`
# ❓ Questions & Help I've fined tune distilgpt2 with `run_language_modeling.py` under my local `output_dir` and want to fine tune with the model in `output_dir` again, but it raised `torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'config'`. ## Details <!-- Description of your issue --> I've fined tune distilgpt2 with `run_language_modeling.py` as follows: ```shell python3 run_language_modeling.py --output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir --model_type=gpt2 --model_name_or_path=distilgpt2 --per_device_train_batch_size=10 --do_train --train_data_file=/home/xxx/gpt_model/data_info/data.txt --block_size=64 --save_steps=100 --overwrite_output_dir ``` It works fine and i can get the fine-tuned model after training with my data `data.txt`. Now i want to fine-tune in the `output_dir` so i run `run_language_modeling.py` as follows on my `new_data.txt`: ```shell python3 run_language_modeling.py --output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_new_data --model_type=gpt2 --model_name_or_path=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir/ --per_device_train_batch_size=20 --do_train --train_data_file=/home/xxx/gpt_model/data_info/new_data.txt --block_size=64 --save_steps=1000 --overwrite_output_dir ``` But it raise exception and exit. The stderr is as follows ```shell /home/xxx/gpt_model/pytorch/pytorch/torch/optim/lr_scheduler.py:235: UserWarning: Please also save or load the state of the optimizer when saving or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning) Traceback (most recent call last): File "run_language_modeling.py", line 320, in <module> main() File "run_language_modeling.py", line 284, in main trainer.train(model_path=model_path) File "/home/xxx/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/trainer.py", line 683, in train self.total_flos = getattr(model.config, "total_flos", 0) File "/home/xxx/gpt_model/pytorch/pytorch/torch/nn/modules/module.py", line 779, in __getattr__ type(self).__name__, name)) torch.nn.modules.module.ModuleAttributeError: 'DataParallel' object has no attribute 'config' ``` ##### (1) I'm training on multiple GPUs, but i did'n specify a specific gpu to run. So at first i think this may caused by multiple GPUs, and add the following code under https://github.com/huggingface/transformers/blob/52d250f6aa14844024806e5e4dd1c7882bbd8dd5/src/transformers/trainer.py#L641 ```shell if isinstance(model,torch.nn.DataParallel): model = model.module ``` It has no error but doesn't work. The output is as follows . After that, the process is killed and return code `echo $?` is `0`. ```shell Epoch: 0it [00:00, ?it/s] /home/lenajin/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/trainer.py:1087: FutureWarning: This method is deprecated, use `Trainer.is_world_process_zero()` instead. warnings.warn("This method is deprecated, use `Trainer.is_world_process_zero()` instead.", FutureWarning) ``` ##### (2) Following with https://github.com/huggingface/transformers/issues/1991 , I try to train with following script to train with gpu whose id is `1`. 
```shell CUDA_VISIBLE_DEVICES=1 python3 run_language_modeling.py --output_dir=/home/xxx/gpt_model/transformers/examples/language-modeling/output_dir --model_type=gpt2 --model_name_or_path=distilgpt2 --per_device_train_batch_size=20 --do_train --train_data_file=/home/xxx/gpt_model/data_info/data.txt --block_size=64 --save_steps=100 --overwrite_output_dir ``` it return with return code `echo $?` =`0`. At first i don't know what's the problem, but now i changed `--per_device_train_batch_size=20 ` from 20 to 10. It seems all is well, and now my `GPU-Util` is about `98%`. Maybe it return without training duing to my limit gpu CC?It's wired but at least it works now. Maybe there can give some error message about the return reason?
09-15-2020 13:39:36
09-15-2020 13:39:36
I haven't used the trainer but I think maybe you just need to change the line `self.total_flos = getattr(model.config, "total_flos", 0)` to `_model = model.module if hasattr(model, 'module') else model ` `self.total_flos = getattr(_model.config, "total_flos", 0)`<|||||>This is a breaking change that was not announced - it broke our production script.<|||||>@sgugger might be interested in this. @marrrcin what is the breaking change? This seems to be a bug rather than an intentional change.<|||||>I meat it's a bug that broke our usage of Trainer in production - we're using single-multigpu-machine to train our models and https://github.com/huggingface/transformers/blob/9e68d075a4100906509170498480823e7e61874a/src/transformers/trainer.py#L698 basically breaks it after upgrade to `3.2.0`. Maybe this is enough for the fix: https://github.com/huggingface/transformers/pull/7384 <|||||>I see! Thanks for opening a PR!<|||||>My bad, I think this bug slipped through with one of the style changes at code review on one of my PRs. Thanks for raising the issue!<|||||>Do you think that it will be possible to move this fix forward and release patch over the weekend? We don't want to have custom package installation in our CI.<|||||>We're having a new release (v3.3.0) on Monday, is that soon enough? The fix will be in it.<|||||>That would be great!<|||||>I am facing the same issue. Whenever I continues my training from certain checkpoint instead of training from scratch. I got the exact same error as OP. Should I just wait for the new release on tomorrow?<|||||>Hi, I tested this PR, this does not fully solve the issue, I am getting these error during evaluation of seq2seq_trainer. File "finetune_t5_trainer.py", line 233, in <module> main() File "finetune_t5_trainer.py", line 188, in main result = trainer.evaluate(eval_datasets, compute_metrics_fn) File "/home/rabeeh/internship/seq2seq/t5_trainer.py", line 175, in evaluate prediction_loss_only=True if self.compute_metrics is None else None, # self.compute_metrics[eval_task] File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/trainer.py", line 1452, in prediction_loop metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids)) File "/home/rabeeh/internship/seq2seq/metrics/metrics.py", line 112, in translation_metrics pred_str, label_str = decode_pred(pred) File "/home/rabeeh/internship/seq2seq/metrics/metrics.py", line 99, in decode_pred label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True) File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2966, in batch_decode for seq in sequences File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2966, in <listcomp> for seq in sequences File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 3002, in decode **kwargs, File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 732, in _decode filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens) File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 716, in convert_ids_to_tokens tokens.append(self._convert_id_to_token(index)) File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers/tokenization_t5.py", line 245, in _convert_id_to_token token = self.sp_model.IdToPiece(index) File 
"/opt/conda/envs/internship/lib/python3.7/site-packages/sentencepiece/__init__.py", line 501, in _batched_func return _func(self, arg) File "/opt/conda/envs/internship/lib/python3.7/site-packages/sentencepiece/__init__.py", line 494, in _func raise IndexError('piece id is out of range.') IndexError: piece id is out of range. 42it [01:31, 2.18s/it] <|||||>@rabeehk this seems like a completely different issue. Please open a new issue.
transformers
7,145
closed
Load Pre-Trained Model Using Docker
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> I'm trying to load a pre trained model using transformers lib (by hugging-face): from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium') Using local machine, it starts to download the model. But with docker I get the following: OSError: Model name 'gpt2-medium' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'gpt2-medium' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. Any idea why it happens?
09-15-2020 13:15:39
09-15-2020 13:15:39
Are you sure you have internet access on your machine? Can you do `wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-config.json`?<|||||>I have the same issue. The machine on which I am deploying the container to does have internet access, it can download other models from nltk / spacy<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
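A common workaround when the container has restricted or no internet access is to download the weights where access works and bake the directory into the image; a sketch using only the standard `from_pretrained`/`save_pretrained` calls (the `/models/...` path is illustrative):

```python
from transformers import GPT2Model, GPT2Tokenizer

# Run where internet access is available (e.g. during the image build or on the host):
GPT2Tokenizer.from_pretrained("gpt2-medium").save_pretrained("/models/gpt2-medium")
GPT2Model.from_pretrained("gpt2-medium").save_pretrained("/models/gpt2-medium")

# Inside the container, load from the local path instead of the model identifier:
tokenizer = GPT2Tokenizer.from_pretrained("/models/gpt2-medium")
model = GPT2Model.from_pretrained("/models/gpt2-medium")
```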
transformers
7,144
closed
unexpected keyword argument 'force_fusions' when running the onnx notebook
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.8.0-1-amd64-x86_64-with-debian-bullseye-sid - Python version: 3.7.8 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Information The problem arises when using: * [x] the official example scripts: the notebook "04-onnx-export" * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the notebook's cells. First cell installs onnxruntime in version 1.4.0 The "Benchmarking ONNX quantized model" cell calls transformers/convert_graph_to_onnx.py, which calls the "quantize" function with parameter force_fusions. `Error: quantize() got an unexpected keyword argument 'force_fusions'` The parameter was discarded in onnxruntime 1.4.0 Removing the parameter in convert_graph_to_onnx.py solves the issue. ## Expected behavior No error.
09-15-2020 12:41:49
09-15-2020 12:41:49
The error might be something else, onnxruntime version 1.4.0 does include the `force_fusions` parameter: https://github.com/microsoft/onnxruntime/blob/v1.4.0/onnxruntime/python/tools/quantization/quantize.py#L1457<|||||>Indeed, my bad, somehow the notebook used the python 3.8 related onnxruntime package and not the one associated with the conda environment from which I was launching jupyter notebook. It works perfectly, thanks :+1:
transformers
7,143
closed
Tiny typo fix
Small typo in the `generate` function within `generation_tf_utils.py`
09-15-2020 11:54:31
09-15-2020 11:54:31
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=h1) Report > Merging [#7143](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e29c3f1b1104694889afcce4f13f8c842d6e0d6b?el=desc) will **increase** coverage by `0.75%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7143/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7143 +/- ## ========================================== + Coverage 79.43% 80.19% +0.75% ========================================== Files 168 168 Lines 32303 32303 ========================================== + Hits 25660 25905 +245 + Misses 6643 6398 -245 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <ø> (+4.76%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.49% <0.00%> (-41.42%)` | :arrow_down: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: | | [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `47.05% <0.00%> (-13.24%)` | :arrow_down: | | [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `81.96% <0.00%> (-9.84%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/7143/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=footer). Last update [e29c3f1...2e6aeda](https://codecov.io/gh/huggingface/transformers/pull/7143?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,142
closed
Add quotes to paths in MeCab arguments
Without quotes, directories with spaces in them fail to be processed correctly. This bug was first reported in the fugashi repo [here](https://github.com/polm/fugashi/issues/24). It's not a bug in fugashi - fugashi handles quoted paths correctly, and the relevant dictionary packages quote paths in `MECAB_ARGS` to handle this case. It looks like it was missed when the dictionary handling code was added, since that code doesn't use `MECAB_ARGS` but instead builds the arguments up from component parts.
09-15-2020 10:50:11
09-15-2020 10:50:11
transformers
7,141
closed
Adding Fast tokenizers for SentencePiece based tokenizers - Breaking: remove Transfo-XL fast tokenizer
This pull request adds the "fast" Rust tokenizers for the SentencePiece-based tokenizers as well. Based on the unreleased v0.9.0 of `tokenizers`.

Tokenizers:
- [x] Albert
- [x] Bart
- [x] Bert
- [x] Camembert
- [x] DistilBert
- [x] DPR
- [x] Electra
- [x] Funnel
- [x] GPT2
- [x] LongFormer
- [x] LXMert
- [x] MBart
- [x] MobileBert
- [x] OpenAI GPT
- [x] Pegasus
- [x] Reformer
- [x] RetriBert
- [x] Roberta
- [x] T5
- [x] XLM-Roberta
- [x] XLNet

Breaking:
- The fast version of Transformer-XL (which gave different tokenization results) is removed.

Remaining tokenizers without fast implementations (no fast tokenizers expected in the short/mid-term):
- BertJapanese (special Python libs for multi-linguality)
- CTRL (would require a specific BPE to handle missing merges)
- XLM (uses special Python libs for multi-linguality)
- Flaubert (same as XLM)
- Transformer-XL (same as XLM)

Other fixes:
- Also allows tokenizing Japanese BERT with the MeCab token splitter and fixes https://github.com/huggingface/datasets/issues/665
- Deprecation warnings in tokenizer methods are limited to one occurrence per class instance
09-15-2020 10:29:36
09-15-2020 10:29:36
Ready for review, the remaining failing tests should be ok after the next `tokenizers` RC release<|||||>Great job! I'm not entirely up to date with everything in transformers, but this looks very nice and clean!<|||||>Ok, yes I'll add documentation. We will probably wait to have a clean documentation in `tokenizers` as well so we can do proper cross-linking.
transformers
7,140
closed
Onnx + TensorRT uses CPU not GPU
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @mfuntowicz ## Information Model I am using Bert: The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: https://colab.research.google.com/drive/1mqn99U2NMm-fdw213h0xsqm_uRwRkNzp?usp=sharing ## Expected behavior I am trying to run onnx using tensorrt on the gpu, but it always uses the CPU. I tried to install different version or onnx and cuda with not luck. Any idea why "TensorrtExecutionProvider" runs on the CPU rather than GPU ? and how to fix it ?
09-15-2020 09:47:33
09-15-2020 09:47:33
@mfuntowicz - sorry to ping you here :D Do you have a "high level" idea why this might not work?<|||||>@agemagician IIUC TensorRT needs a GPU with Tensor cores to work correctly. You're using a P100 in the notebook which doesn't have those. So Onnx will fall back to using CUDAExecutionProvider. Now that itself may be slow because of [this](https://github.com/microsoft/onnxruntime/blob/master/docs/ONNX_Runtime_Perf_Tuning.md#why-is-my-model-running-slower-on-gpu-than-cpu). I'd suggest setting verbose logging level and checking. To enable set the following: ``` options = SessionOptions() options.log_verbosity_level=2 options.log_severity_level=0 ``` Then in the console logs if you see something like `2020-10-16 20:25:31.386914373 [W:onnxruntime:Default, fallback_cpu_capability.h:140 GetCpuPreferedNodes] Force fallback to CPU execution for node: Gather_9`, or there's a huge list ``` 2020-10-16 20:25:31.406073394 [V:onnxruntime:, inference_session.cc:869 TransformGraph] Provider: [CPUExecutionProvider]: [Gather (Gather_9), Unsqueeze (Unsqueeze_10), DynamicQuantizeLinear (237_QuantizeLinear), QAttention (Attention_qua... ``` that indicates that some parts of your model are not GPU compatible because Onnx doesn't have the supported operators. In my case it was happening because I was using a quantized model with the CUDA provider, but the quantized model needs TensorRT to run I believe.<|||||>Thanks @bdalal for your reply. I have tested it also on V100 but I had the same issue. Anyway, I think the benefits vs troubles, seems to be not worth it for now. I hope Nvidia, onnx and Pytorch provide a better integration on the future for tensor rt.
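A hedged sketch (the model file name is a placeholder) of how to surface which execution providers onnxruntime actually selected, combining the logging options mentioned in the thread above with an explicit provider list.

```python
import onnxruntime as ort

options = ort.SessionOptions()
options.log_severity_level = 0      # verbose logs name the nodes that fall back to CPU
options.log_verbosity_level = 2

session = ort.InferenceSession(
    "bert.onnx",                    # placeholder path to the exported model
    options,
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())      # shows the providers that were actually enabled
```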
transformers
7,139
closed
generate text function support part of the target inputs
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html?highlight=transformer#torch.nn.Transformer In the PyTorch Transformer class, we can see that the forward function can take a target sentence; so when I want to generate the target sentence from the middle, I can pass part of the target sentence to the model and then generate the rest of the sentence. ## Motivation The generate function is really powerful. Here is my situation: I have the full source sentence and part of the target sentence, and I want to generate the rest of the target sentence using beam search etc. But I found that the interface does not support part of the target sentence as input. <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
09-15-2020 09:34:48
09-15-2020 09:34:48
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>hi, did you solve this problem?
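There is no built-in flag for this in `generate()` as the issue notes; a rough greedy-decoding sketch under that assumption, which continues from a partial target prefix by feeding `decoder_input_ids` to the forward pass (model, sentences, and the 20-step limit are illustrative).

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").eval()

src = tok("the full source sentence", return_tensors="pt")["input_ids"]
prefix = tok("part of the target", return_tensors="pt", add_special_tokens=False)["input_ids"]

# Start the decoder with its start token followed by the known partial target.
decoder_input_ids = torch.cat(
    [torch.tensor([[model.config.decoder_start_token_id]]), prefix], dim=-1
)

with torch.no_grad():
    for _ in range(20):  # greedily extend the partial target token by token
        out = model(input_ids=src, decoder_input_ids=decoder_input_ids, return_dict=True)
        next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_input_ids = torch.cat([decoder_input_ids, next_id], dim=-1)
        if next_id.item() == model.config.eos_token_id:
            break

print(tok.decode(decoder_input_ids[0], skip_special_tokens=True))
```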
transformers
7,138
closed
OSError: Can't load weights for 'nlptown/bert-base-multilingual-uncased-sentiment'.
## Environment info - `transformers` version: 3.1.0 - Platform: Linux-5.4.0-47-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in Question: After installing transformers, I followed the Quick tour document. I typed code as follows: from transformers import pipeline classifier = pipeline('sentiment-analysis', model="nlptown/bert-base-multilingual-uncased-sentiment") After running it, I got this error 2020-09-15 16:41:21.434156: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 Traceback (most recent call last): File "/home/a/.conda/envs/tf23hf/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 580, in from_pretrained raise EnvironmentError OSError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "hf_pipetest.py", line 2, in <module> classifier = pipeline('sentiment-analysis', model='nlptown/bert-base-multilingual-uncased-sentiment') File "/home/a/.conda/envs/tf23hf/lib/python3.7/site-packages/transformers/pipelines.py", line 2629, in pipeline model = model_class.from_pretrained(model, config=config, **model_kwargs) File "/home/a/.conda/envs/tf23hf/lib/python3.7/site-packages/transformers/modeling_tf_auto.py", line 1515, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/home/a/.conda/envs/tf23hf/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 587, in from_pretrained raise EnvironmentError(msg) OSError: Can't load weights for 'nlptown/bert-base-multilingual-uncased-sentiment'. Make sure that: - 'nlptown/bert-base-multilingual-uncased-sentiment' is a correct model identifier listed on 'https://huggingface.co/models' - or 'nlptown/bert-base-multilingual-uncased-sentiment' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin. Please, let me know how to solve this problem.. Thanks in advance
09-15-2020 07:47:15
09-15-2020 07:47:15
Hello! This problem is because the owner of the `nlptown/bert-base-multilingual-uncased-sentiment` model has not uploaded a TensorFlow version. If you're using pipelines, you would only be able to use this model if you have PyTorch installed on your machine. ~cc @mfuntowicz for pipelines' next version, it would be great to include the `from_tf` and `from_pt` options as well so that these models may be loaded directly.~ This would require having both PT and TF installed anyway so not great of a workaround.<|||||>Thanks for the info @LysandreJik! Forgot to clear local cache of models after upgrading 3.0.2->3.1.0, thanks again! ~~However it seems like the default model pull with the pytorch autoclasses are failing as well.~~ ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment") model = AutoModelForSequenceClassification.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment") ``` ~~<details> ~~ <summary>~~Full Error Log (I'm assuming the error is from the issues stated above)~~</summary>~~ --------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) <ipython-input-25-cfac34095092> in <module> 1 from transformers import AutoTokenizer, AutoModelForSequenceClassification 2 ----> 3 tokenizer = AutoTokenizer.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment") 4 model = AutoModelForSequenceClassification.from_pretrained("nlptown/bert-base-multilingual-uncased-sentiment") ~/Documents/github/adaptnlp/venv-adaptnlp/lib/python3.6/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 218 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 219 else: --> 220 return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 221 222 raise ValueError( ~/Documents/github/adaptnlp/venv-adaptnlp/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, *inputs, **kwargs) 1423 1424 """ -> 1425 return cls._from_pretrained(*inputs, **kwargs) 1426 1427 @classmethod ~/Documents/github/adaptnlp/venv-adaptnlp/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1544 if tokenizer_config_file is not None: 1545 with open(tokenizer_config_file, encoding="utf-8") as tokenizer_config_handle: -> 1546 init_kwargs = json.load(tokenizer_config_handle) 1547 saved_init_inputs = init_kwargs.pop("init_inputs", ()) 1548 if not init_inputs: /usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 297 cls=cls, object_hook=object_hook, 298 parse_float=parse_float, parse_int=parse_int, --> 299 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) 300 301 /usr/lib/python3.6/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 352 parse_int is None and parse_float is None and 353 parse_constant is None and object_pairs_hook is None and not kw): --> 354 return _default_decoder.decode(s) 355 if cls is None: 356 cls = JSONDecoder /usr/lib/python3.6/json/decoder.py in decode(self, s, _w) 337 338 """ --> 339 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 340 end = _w(s, end).end() 341 if end != len(s): 
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx) 355 obj, end = self.scan_once(s, idx) 356 except StopIteration as err: --> 357 raise JSONDecodeError("Expecting value", s, err.value) from None 358 return obj, end ~~JSONDecodeError: Expecting value: line 1 column 1 (char 0)~~ ~~</details>~~ ~~Since there already are TF-specific auto classes, should the default auto classes should still be able pull pytorch models even if there is no TF implementation?~~ ~~I haven't checked other models in the repo that only have pytorch models, but would it be safe to assume this is the case for all models in the repo that do not have models uploaded for both deep learning frameworks?~~ ~~PS: This is with torch==1.6.0 with CUDA 10.2 installed~~
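A hedged workaround sketch for the situation described above (the checkpoint ships only PyTorch weights): either run the pipeline on the PyTorch side, or convert on the fly with `from_pt=True`, which requires both torch and TensorFlow to be installed.

```python
from transformers import TFAutoModelForSequenceClassification, pipeline

# Option 1: use the PyTorch backend for the pipeline (requires torch).
classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
    framework="pt",
)
print(classifier("This library is great!"))

# Option 2: build a TensorFlow model from the PyTorch checkpoint (requires torch + TF).
tf_model = TFAutoModelForSequenceClassification.from_pretrained(
    "nlptown/bert-base-multilingual-uncased-sentiment", from_pt=True
)
```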
transformers
7,137
closed
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
## Environment info - `transformers` version: 3.1.0 - Platform: ubuntu 18.04 - Python version: 3.7 - PyTorch version (GPU?): 1.5.1 - Tensorflow version (GPU?): 1.14.0 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ## Information Model I am using: XLM The problem arises when using: When stack two transformes on top of each other and run them on GPUs. (Note that on CPU the following code works fine) ## To reproduce Steps to reproduce the behavior: ``` import torch from torch import nn from transformers import AutoConfig, AutoModel class Model(nn.Module): def __init__(self): super().__init__() config = AutoConfig.from_pretrained('xlm-mlm-tlm-xnli15-1024') self.transformer1 = AutoModel.from_config(config) self.transformer2 = AutoModel.from_config(config) def forward( self, input_ids=None, input_embeds=None, ): outputs = self.transformer1(input_ids) outputs = self.transformer2(inputs_embeds=outputs[0]) return outputs device = "cuda" if torch.cuda.is_available() else "cpu" model = Model().to(device) inps = torch.randint(1,256,[1,256],device=device) model(input_ids=inps) ``` The stack trace: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-35-5ada78c6ff36> in <module> 34 inps = torch.randint(1, 256, [1, 256], device=device) 35 ---> 36 model(input_ids=inps) /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), <ipython-input-35-5ada78c6ff36> in forward(self, input_ids, input_embeds) 26 print(outputs[0].is_cuda) 27 outputs = outputs[0].to(input_ids.device) ---> 28 outputs = self.transformer2(inputs_embeds=outputs) 29 return outputs 30 /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), /opt/conda/lib/python3.8/site-packages/transformers/modeling_xlm.py in forward(self, input_ids, attention_mask, langs, token_type_ids, position_ids, lengths, cache, head_mask, inputs_embeds) 519 # if src_enc is not None: 520 # assert self.is_decoder --> 521 # assert src_enc.size(0) == bs 522 523 # generate masks RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! ```
09-15-2020 07:45:08
09-15-2020 07:45:08
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
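A small diagnostic sketch, not a fix from this thread: listing parameters and buffers that stayed on the CPU after `.to(device)` can help pinpoint tensors that were created outside of `nn.Module` registration.

```python
from torch import nn

def report_cpu_tensors(model: nn.Module) -> None:
    # Prints any parameter or buffer that is not on a CUDA device.
    for name, param in model.named_parameters():
        if not param.is_cuda:
            print("parameter on CPU:", name)
    for name, buf in model.named_buffers():
        if not buf.is_cuda:
            print("buffer on CPU:", name)

# Usage (assuming `model` is the stacked-transformer module from the issue above):
# report_cpu_tensors(model)
```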
transformers
7,136
closed
BertForMaskedLM Loss function
# ❓ Questions & Help https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py 1) In BertForMaskedLM, why doesn't the loss function ignore the non-masked tokens? loss_fct = CrossEntropyLoss() # -100 index = padding token masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) 2) Do I have to set the non-masked tokens to -100? <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
09-15-2020 07:22:56
09-15-2020 07:22:56
torch.nn.CrossEntropyLoss(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean')
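A minimal masking sketch built on that `ignore_index` default: copy the input ids into `labels` and set every position that is not masked to -100 so the loss only scores the masked tokens. The uniform 15% masking below is a simplification that skips special-token handling.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

inputs = tokenizer("The capital of France is Paris.", return_tensors="pt")
labels = inputs["input_ids"].clone()

mask = torch.rand(labels.shape) < 0.15            # randomly pick ~15% of positions
inputs["input_ids"][mask] = tokenizer.mask_token_id
labels[~mask] = -100                              # ignore every token that was not masked

outputs = model(**inputs, labels=labels)
print(outputs[0])                                 # masked LM loss over the masked tokens only
```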
transformers
7,135
closed
Loss mask for fine-tuning GPT2LMHeadModel model
If we use padding for short-sentence fine-tune data, when fine-tuning GPT2LMHeadModel, should we change the code here https://github.com/huggingface/transformers/blob/48ff6d5109d691e3630169962a5052586aaaf659/src/transformers/modeling_gpt2.py#L744 to exclude the loss for padding tokens? @patrickvonplaten @thomwolf
09-15-2020 03:13:52
09-15-2020 03:13:52
It has already been mentioned here https://github.com/huggingface/transformers/issues/2001 (see "Bug: Padded tokens are not excluded from the loss" session). Any plan to fix this?<|||||>Hi GPT-2 has no pad token so you can either introduce new pad token or set the eos toke as pad token ```tokenizer.pad_token_id = tokenizer.eos_token_id``` and then set the pad tokens in `labels` to -100 which is the default ignore index for `CrossEntropyLoss` ``` labels[labels == self.tokenizer.pad_token_id] = -100```<|||||>> Hi GPT-2 has no pad token so you can either introduce new pad token or set the eos toke as pad token > `tokenizer.pad_token_id = tokenizer.eos_token_id` > > and then set the pad tokens in `labels` to -100 which is the default ignore index for `CrossEntropyLoss` > ` labels[labels == self.tokenizer.pad_token_id] = -100` Thanks. Just get aware of the `ignore_index` parameter https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html <|||||>For fine-tuning the GPT2 model, it's necessary to manually prepend the bos_token and append eos_token to the input, as has been established here: #3311 Setting pad_token = eos_token and running `labels[labels == pad_token_id] = -100` would therefore be a problem in my opinion, since we would not only ignore padding tokens, but also eos_tokens at the end of sentences for loss computation. I solved the problem by first converting the attention_mask to boolean values, and then inverting the boolean attention_mask. Then `labels[inv_bool_attention_mask] = -100`, such that padding tokens are ignored, but no eos_tokens. <|||||>Just to save the hassle for some folk ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained("gpt2") tokenizer.pad_token = tokenizer.eos_token print("EOS", tokenizer.convert_tokens_to_ids(tokenizer.eos_token)) print("PAD", tokenizer.convert_tokens_to_ids(tokenizer.pad_token)) string = "Hello World!" string += tokenizer.eos_token # manually append eos since this is not done by GPT2Tokenizer # string = tokenizer.bos_token + string # optionally prepend bos (which is actually the same as eos for GPT2Tokenizer) tokenized = tokenizer(string, padding="max_length", max_length=10, return_tensors="pt") input_ids = tokenized["input_ids"] attention_mask = tokenized["attention_mask"] print("INPUT_IDS BEFORE") print(input_ids) print("ATTENTION_MASK") print(attention_mask) input_ids[~attention_mask.bool()] = -100 # disable loss for padding tokens (i.e., eos tokens meant for padding) print("INPUT_IDS AFTER") print(input_ids) ``` Result: ```txt EOS 50256 PAD 50256 INPUT_IDS BEFORE tensor([[15496, 2159, 0, 50256, 50256, 50256, 50256, 50256, 50256, 50256]]) ATTENTION_MASK tensor([[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]]) INPUT_IDS AFTER tensor([[15496, 2159, 0, 50256, -100, -100, -100, -100, -100, -100]]) ```
transformers
7,134
closed
evaluate_during_training after each epoch
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Hello! I just migrated from `pytorch_pretrained_bert` to `transformers` (3.1.0) and I am having problems understanding how to make the model evaluate at the end of each epoch. For this purpose, it is recommended to use `--evaluate_during_training` as mentioned in the issue [#4617](https://github.com/huggingface/transformers/issues/4617). Looking at the `Trainer` source code, the condition to run an evaluation is the following: ```python if self.args.evaluate_during_training and self.global_step % self.args.eval_steps == 0: self.evaluate() ``` I am not quite following the logic of why the evaluation depends on the number of steps and not on the number of epochs. I would appreciate it if someone could help me with this. In the same issue mentioned above (#4617) someone said that you should set `--save_steps` to `number_of_samples/batch_size`. If this is true (currently testing if this works on my dataset), shouldn't it be mentioned in the documentation? Thank you! <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
09-15-2020 00:34:45
09-15-2020 00:34:45
Hi there. When using `Trainer`, the evaluation loop is run every `args.eval_steps` (which is more consistent across datasets than an evaluation at the end of each epoch since with a small or a large dataset you would get evaluations that don't have the same meaning). If you want it run at the end of each epoch, you can indeed set it to `number_of_samples/batch_size`. I'm not quite sure how this is surprising when the documentation clearly says it runs every `eval_steps` though.<|||||>> Hi there. When using `Trainer`, the evaluation loop is run every `args.eval_steps` (which is more consistent across datasets than an evaluation at the end of each epoch since with a small or a large dataset you would get evaluations that don't have the same meaning). Hi @sgugger . I see! I've been focused on making the `run_multiple_choice.py` work on my dataset using the default parameters. When running the example to get results on SWAG with 3 epochs, I did see the output of the evaluation loop after each epoch. I didn't understand why the same didn't happen with my dataset. Didn't see the problem from your perspective (be more consistent across datasets) but now that you mention it, it makes sense. Thank you for the explanation! > If you want it run at the end of each epoch, you can indeed set it to `number_of_samples/batch_size`. I'm not quite sure how this is surprising when the documentation clearly says it runs every `eval_steps` though. In the `--help` I do see that: --evaluate_during_training Run evaluation during training at each logging step. --eval_steps EVAL_STEPS Run an evaluation every X steps. Guess I got confused because I expected the `--evaluate_during_training` doc to refer to `eval_steps` instead of logging step. It wasn't that straightforward for me that what I need to do is to set `number_of_samples/batch_size`, that's what I meant it would be nice to find in the documentation. It is all clear now though! Thank you again :) <|||||>Note that we can add support for an evaluation strategy that is at the end of each epoch instead of every n steps. That could make things even clearer :-) I'll look into it when I have some time.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
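A hedged sketch of the suggestion above (evaluate once per epoch with the step-based API): compute the number of optimization steps in one epoch and pass it as `eval_steps`. The `train_dataset` and batch size here are assumptions standing in for the user's own setup.

```python
from transformers import TrainingArguments

batch_size = 16
steps_per_epoch = len(train_dataset) // batch_size   # train_dataset is assumed to exist

training_args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=batch_size,
    evaluate_during_training=True,
    eval_steps=steps_per_epoch,      # one evaluation per pass over the data
    logging_steps=steps_per_epoch,
)
```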
transformers
7,133
closed
Update README
This PR reorganizes the README to reflect the current state of transformers a bit more. Scroll down a bit [here](https://github.com/huggingface/transformers/tree/update_readme) for a preview. Here is the plan I followed: **Introduction:** Expand a bit on the first two sentences to explain a bit more generally what this library is about. The idea is to have content aimed at both researchers/NLP experts and data scientists/engineers. Since it gets longer, we could move it after the top contributors badges. **Online demo:** I think it makes more sense to have this right after the introduction, to show what transformers can do. **Code sample:** Then we switch to how to do it, showcasing two short examples: pipeline and a pretrained model loading/example of use. Let's not overwhelm a reader with more than that and link to our tutorials for more examples. **Feature list:** This becomes Why use/not-use transformers but the spirit remains the same. **Installation:** Keeping this simple and to the point, linking to the documentation for more in-depth content. **Model architectures:** I've kept the full list with papers but I'm not sure it makes sense here as we also have it in the index of our doc. An alternative would be to show this in terms of tasks and give examples for each one of the models supporting them (so key names still appear) with links to the docs. **Learn more:** This is where all the content I removed is linked. **Citation**
09-14-2020 20:32:47
09-14-2020 20:32:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=h1) Report > Merging [#7133](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90cde2e938638e64a8696a12b79ee5f52364b162?el=desc) will **increase** coverage by `1.87%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7133/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7133 +/- ## ========================================== + Coverage 79.62% 81.49% +1.87% ========================================== Files 168 168 Lines 32284 32284 ========================================== + Hits 25706 26310 +604 + Misses 6578 5974 -604 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: | | [src/transformers/tokenization\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.55% <0.00%> (-34.28%)` | :arrow_down: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `81.96% <0.00%> (-9.84%)` | :arrow_down: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.26%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.00% <0.00%> (-0.67%)` | :arrow_down: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7133/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=footer). Last update [90cde2e...8faad23](https://codecov.io/gh/huggingface/transformers/pull/7133?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,132
closed
Add tokenizer file save in convert_graph_to_onnx.py
# 🚀 Feature request Allow saving the associated tokenizer files (to be loaded possibly with the tokenizers library) when running `convert_graph_to_onnx.py` ## Motivation Having the right vocab files next to the right ONNX model avoids confusion and eases the loading process for any other framework (because models and their tokenizers are deeply linked). ## Your contribution With [this comment from tokenizer repo](https://github.com/huggingface/tokenizers/issues/59#issuecomment-610645970) and the [convert_graph_to_onnx.py](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py) one can simply add at L.358 ``` if tokenizer is None: download_vocab_files_for_tokenizer(nlp.tokenizer, model, output) ``` and before `if __name__ == "__main__"` ``` def download_vocab_files_for_tokenizer(tokenizer, model_type, output_path): vocab_files_map = tokenizer.pretrained_vocab_files_map vocab_files = {} for resource in vocab_files_map.keys(): download_location = vocab_files_map[resource][model_type] f_path = path.join(output_path.parent, path.basename(download_location)) urllib.request.urlretrieve(download_location, f_path) ``` I can do a PR for it if the interest is big enough. NB: This is a patch I added in my local lib, but feel free to modify it or transform it as you want; it's just to give the idea.
09-14-2020 20:11:59
09-14-2020 20:11:59
cc. @mfuntowicz <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
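A self-contained variant of the snippet proposed in the feature request above, with the imports spelled out and the unused dict removed; the download URLs still come from the tokenizer's `pretrained_vocab_files_map`.

```python
import urllib.request
from pathlib import Path

def download_vocab_files_for_tokenizer(tokenizer, model_type: str, output_path: Path) -> None:
    # Fetch every vocab/merges file registered for this checkpoint next to the ONNX output.
    for resource, locations in tokenizer.pretrained_vocab_files_map.items():
        download_location = locations[model_type]
        f_path = output_path.parent / Path(download_location).name
        urllib.request.urlretrieve(download_location, str(f_path))
```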
transformers
7,131
closed
[EncoderDecoderModel] fix indentation error
A statement regarding the encoder config init in `EncoderDecoderModel` was incorrectly indented, which might lead to errors in very specific cases.
09-14-2020 19:25:58
09-14-2020 19:25:58
You can ignore the failing tests
transformers
7,130
closed
AssertionError with multiple GPU
## System Info Red Hat Server 7.7 Pytorch: 1.6.0 Transformers: 3.0.2 Python: 3.7.6 Number of GPU: 4 ## Question I am trying to finetune a GPT2 model using `Trainer` with multiple GPU installed on my machine. However, I get the following error: ```python Traceback (most recent call last): File "run_finetune_gpt2.py", line 158, in <module> main() File "run_finetune_gpt2.py", line 145, in main trainer.train() File "/path/to/venvs/my-venv/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward return self.gather(outputs, self.output_device) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather return gather(outputs, output_device, dim=self.dim) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map return Gather.apply(target_device, dim, *outputs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 54, in forward assert all(map(lambda i: i.is_cuda, inputs)) AssertionError wandb: Program failed with code 1. Press ctrl-c to abort syncing. wandb: You can sync this run to the cloud by running: wandb: wandb sync wandb/dryrun-20200914_134757-1sih3p0q ``` Any ideas about what might be going on? Thanks in advance!
09-14-2020 19:01:45
09-14-2020 19:01:45
Cryptic error, but Trainer master @sgugger might have a hunch<|||||>Mmm it looks like the inputs are not on a GPU from the error. Are you running the base script or a modified version of it?<|||||>@sgugger It isn't one of the run scripts provided by HF but it is a modified forward method from the `GPT2LMHeadModel`. Here is the majority of my script: https://discuss.huggingface.co/t/finetuning-gpt2-with-user-defined-loss/163<|||||>I can't see how the loss is computed, so can't really help. Are you sure you are always setting the tensors on the right devices?<|||||>@sgugger the loss is calculated in an Ngrams model. My guess is that the tensors are probably not set. Is there some thing I need to set on the pytorch tensors being passed to the Ngrams model?<|||||>Any further thoughts on this? I could provide the whole code or whichever pieces are needed. @sgugger <|||||>I told you I could not help without seeing how you computed your loss. You did not elaborate on that and I'm not a magician, so I don't have any further thoughts on it. If you share the code you're using I could have more insight, yes.<|||||>@sgugger My apologies. Below is the relevant code that shows the loss being calculated as well as the run script that controls the training: ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel, TrainingArguments, Trainer import torch from torch.utils.data import Dataset import sys import numpy as np ZERO = sys.float_info.min class GPT2FinetunedWithNgrams(GPT2LMHeadModel): def __init__(self, config, model_tokenizer=None): super().__init__(config) self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') self.tokenizer.pad_token = self.tokenizer.eos_token def eval_sentence(self, sent: str): vec = self.sentence_vec( sent) # remove punct, lower case, split on space, prepend "<s>", postpend "</s>" start and stop tokens. Returns list of strings. 
last_idx = min(self.max_ngram, len(vec)) log_prob = 0 for i in range(2, last_idx + 1): log_prob += np.log(max(ZERO, self.pkatz(vec[0:i]))) # conditional probability with katz backoff for i in range(1, len(vec) - last_idx + 1): j = i + last_idx log_prob += np.log(max(ZERO, self.pkatz(vec[i:j]))) return log_prob, len(vec) def sentence_loss(self, sent: str): p, l = self.eval_sentence(sent) return -p def generate_text_while_finetuning(self, input_ids=None, past=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, ): transformer_outputs = self.transformer( input_ids, past=past, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) hidden_states = transformer_outputs[0] lm_logits = self.lm_head(hidden_states) outputs = (lm_logits,) + transformer_outputs[1:] return outputs # (loss), lm_logits, presents, (all hidden_states), (attentions) def forward( self, input_ids=None, past=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=True, ): max_length = input_ids.shape[1] + 50 full_generated_gpt2_ids = self.generate(input_ids=input_ids, max_length=max_length, is_finetuning_current_model=True, attention_mask=attention_mask, pad_token_id=50256, do_sample=True, top_k=50, top_p=0.95) decoded_gen_samples = self.tokenizer.batch_decode(full_generated_gpt2_ids, skip_special_tokens=True) tmp_losses = [self.sentence_loss(decoded_sample) for decoded_sample in decoded_gen_samples] losses = torch.tensor(tmp_losses, requires_grad=True) loss = losses.mean() return (loss,) ##The code below is the run script using Trainer class MyDataset(Dataset): def __init__(self, csv_file: str): self.df = pd.read_csv(csv_file, encoding='ISO-8859-1') def __len__(self): return len(self.df) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() text = self.df.iloc[idx, 1] return text def my_data_collator(dataset_samples_list): tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') tokenizer.pad_token = tokenizer.eos_token encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) batch = {} batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']]) batch['past'] = None batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']]) batch['position_ids'] = None batch['head_mask'] = None batch['inputs_embeds'] = None batch['labels'] = None batch['use_cache'] = True return batch dataset_train = MyDataset('/path/to/train_dataset.csv') training_args = TrainingArguments( output_dir='/path/to/out', do_train=True, per_device_train_batch_size=64, logging_dir='/path/to/dir', max_steps=300000 ) model = GPT2FinetunedWithNgrams.from_pretrained('gpt2') trainer = Trainer( model=model, args=training_args, data_collator=my_data_collator, train_dataset=dataset_train ) trainer.train() trainer.save_model('/path/to/model_save_dir') ``` The `is_finetuning_current_model=True` flag in `self.generate` in the `forward` method I had to include to overcome a recursion error (#6105).<|||||>You are using numpy to compute your loss. 
This can't work in PyTorch since it then won't be able to properly compute the gradients of your parameters.<|||||>Ok, I think I see what you are saying. `eval_sentence` needs to rely on pytorch to make it's calculations. I can't take the numpy output and convert it to a pytorch tensor in the `forward` method. Is this correct?<|||||>No you can't. PyTorch needs to see operations applied to your tensors to be able to compute gradients properly.<|||||>Ok. I'll rewrite my loss in pytorch and check to see if that resolves the issue with multiple gpu. Just curious, would the above code work if I were running on a machine with a single gpu? Because I've run it on a single gpu machine (with less training steps and a smaller batch size) and there were no issues. <|||||>There was no error because the tensors were set on the only GPU you add when back from numpy but the gradients were still wrong (basically everything that happened before the numpy part was wiped out).<|||||>@sgugger That is very helpful and would possibly explain why my first training test didn't seem to learn. Looking at the loss that is posted (i.e. `sentence_loss()` and `eval_sentence()`) if I change `log_prob` to a pytorch tensor, is there a flag that needs to be set or anything in order for the gradients to be calculated correctly?<|||||>I went ahead and tried the updated loss on a machine with 3 GPU: ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel, TrainingArguments, Trainer import torch from torch.utils.data import Dataset import sys import pandas as pd #import numpy as np ZERO = sys.float_info.min ZERO_PT = torch.tensor(ZERO) class GPT2FinetunedWithNgrams(GPT2LMHeadModel): def __init__(self, config, model_tokenizer=None): super().__init__(config) self.tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') self.tokenizer.pad_token = self.tokenizer.eos_token def eval_sentence(self, sent: str): vec = self.sentence_vec( sent) # remove punct, lower case, split on space, prepend "<s>", postpend "</s>" start and stop tokens. Returns list of strings. 
last_idx = min(self.max_ngram, len(vec)) log_prob = 0 for i in range(2, last_idx + 1): #log_prob += np.log(max(ZERO, self.pkatz(vec[0:i]))) # conditional probability with katz backoff log_prob += torch.log(max(ZERO_PT, self.pkatz(vec[0:i]))) for i in range(1, len(vec) - last_idx + 1): j = i + last_idx #log_prob += np.log(max(ZERO, self.pkatz(vec[i:j]))) log_prob += torch.log(max(ZERO_PT, self.pkatz(vec[i:j]))) return log_prob, len(vec) def sentence_loss(self, sent: str): p, l = self.eval_sentence(sent) return -p def generate_text_while_finetuning(self, input_ids=None, past=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, ): transformer_outputs = self.transformer( input_ids, past=past, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, ) hidden_states = transformer_outputs[0] lm_logits = self.lm_head(hidden_states) outputs = (lm_logits,) + transformer_outputs[1:] return outputs # (loss), lm_logits, presents, (all hidden_states), (attentions) def forward( self, input_ids=None, past=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=True, ): max_length = input_ids.shape[1] + 50 full_generated_gpt2_ids = self.generate(input_ids=input_ids, max_length=max_length, is_finetuning_current_model=True, attention_mask=attention_mask, pad_token_id=50256, do_sample=True, top_k=50, top_p=0.95) decoded_gen_samples = self.tokenizer.batch_decode(full_generated_gpt2_ids, skip_special_tokens=True) tmp_losses = [self.sentence_loss(decoded_sample) for decoded_sample in decoded_gen_samples] losses = torch.stack(tmp_losses) loss = losses.mean() return (loss,) ##The code below is the run script. 
class MyDataset(Dataset): def __init__(self, csv_file: str): self.df = pd.read_csv(csv_file, encoding='ISO-8859-1') def __len__(self): return len(self.df) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() text = self.df.iloc[idx, 1] return text def my_data_collator(dataset_samples_list): tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') tokenizer.pad_token = tokenizer.eos_token encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) batch = {} batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']]) batch['past'] = None batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']]) batch['position_ids'] = None batch['head_mask'] = None batch['inputs_embeds'] = None batch['labels'] = None batch['use_cache'] = True return batch dataset_train = MyDataset('/path/to/train_dataset.csv') training_args = TrainingArguments( output_dir='/path/to/out', do_train=True, per_device_train_batch_size=64, logging_dir='/path/to/dir', max_steps=300000 ) model = GPT2FinetunedWithNgrams.from_pretrained('gpt2') trainer = Trainer( model=model, args=training_args, data_collator=my_data_collator, train_dataset=dataset_train ) trainer.train() trainer.save_model('/path/to/model_save_dir') ``` However I am still getting this error: ```python Traceback (most recent call last): File "run_finetune_gpt2.py", line 180, in <module> main() File "run_finetune_gpt2.py", line 165, in main trainer.train() File "/path/to/venvs/my-venv/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward return self.gather(outputs, self.output_device) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather return gather(outputs, output_device, dim=self.dim) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map return Gather.apply(target_device, dim, *outputs) File "/path/to/venvs/my-venv/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 54, in forward assert all(map(lambda i: i.is_cuda, inputs)) AssertionError ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
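A tiny standalone reproduction of just the gather assertion discussed above (the module here is hypothetical, not the user's model): under `nn.DataParallel`, every tensor returned from `forward()` must live on that replica's GPU, so a loss built on the CPU triggers exactly this error. It does not address the separate gradient problem of the non-differentiable n-gram loss.

```python
import torch
from torch import nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 1)

    def forward(self, x):
        loss = self.linear(x).mean()           # stays on x.device -> gather succeeds
        # loss = torch.tensor(loss.item())     # CPU tensor -> AssertionError in gather
        return (loss,)

if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    model = nn.DataParallel(Toy().cuda())
    print(model(torch.randn(8, 4).cuda()))
```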
transformers
7,129
closed
[WIP RAG] Finalize RAG parallel
finalizing RAG in parallel to https://github.com/huggingface/transformers/pull/6813
09-14-2020 18:59:40
09-14-2020 18:59:40
https://github.com/huggingface/transformers/blob/60c8defa01049b559933ad4e2ab3cc32c8b7ef27/src/transformers/tokenization_rag.py#L58 Hello, thank you for the great work! It seems like the first argument passed to the method, `self`, needs to be deleted.<|||||>Closing this PR as it served its purpose. Continue PR together with @ola13 here: https://github.com/huggingface/transformers/pull/6813
transformers
7,128
closed
Fix the HF logger
Hello. The current HF logger does not behave as expected. As an example, let's take the `run_tf_ner.py` script. With the following header: ``` import logging import os from dataclasses import dataclass, field from importlib import import_module from typing import Dict, List, Optional, Tuple import numpy as np from seqeval.metrics import classification_report, f1_score, precision_score, recall_score from transformers import logging as hf_logging handler = logging.StreamHandler() formatter = logging.Formatter('[%(levelname)s|%(filename)s:%(lineno)s] %(asctime)s >> %(message)s') handler.setFormatter(formatter) logger = hf_logging.get_logger(__name__) logger.handlers.clear() logger.addHandler(handler) hf_logging.enable_propagation() hf_logging.set_verbosity_info() hf_logging.enable_default_handler() from transformers import ( AutoConfig, AutoTokenizer, EvalPrediction, HfArgumentParser, TFAutoModelForTokenClassification, TFTrainer, TFTrainingArguments, ) from utils_ner import Split, TFTokenClassificationDataset, TokenClassificationTask ``` The formatter is fully ignored and takes the default logging format of TensorFlow. This PR fixes the problem with the following new header: ``` import logging import os from dataclasses import dataclass, field from importlib import import_module from typing import Dict, List, Optional, Tuple import numpy as np from seqeval.metrics import classification_report, f1_score, precision_score, recall_score import tensorflow as tf from transformers import logging as hf_logging handler = logging.StreamHandler() formatter = logging.Formatter('[%(levelname)s|%(filename)s:%(lineno)s] %(asctime)s >> %(message)s') handler.setFormatter(formatter) logger = hf_logging.get_logger() logger.handlers.clear() logger.addHandler(handler) logger.setLevel(logging.INFO) from transformers import ( AutoConfig, AutoTokenizer, EvalPrediction, HfArgumentParser, TFAutoModelForTokenClassification, TFTrainer, TFTrainingArguments, ) from utils_ner import Split, TFTokenClassificationDataset, TokenClassificationTask ``` But now the logger is set at library level and not at the class level anymore. Then I wanted to know if this slight update is a problem or if it is ok.
09-14-2020 18:37:31
09-14-2020 18:37:31
I think the issue stems from the fact that this example is not relying on the base `transformers` logging. The logger is defined by the `__name__` from which you initialize it, and when you're initializing the logger within `transformers` you're doing the following: ```py # file_utils.py file_utils_logger = logging.get_logger(__name__) # initializes the logger with `transformers.file_utils` ``` ```py # modeling_utils.py modeling_utils_logger = logging.get_logger(__name__) # initializes the logger with `transformers.modeling_utils` ``` Therefore when you're getting the base `transformers` logger and changing its level, you're also changing all these loggers' levels. ```py file_utils_logger.getEffectiveLevel() # 30 modeling_utils_logger.getEffectiveLevel() # 30 logger = logging.get_logger() logger.setLevel(logging.INFO) file_utils_logger.getEffectiveLevel() # 20 modeling_utils_logger.getEffectiveLevel() # 20 ``` However, for the `run_tf_ner.py` script, the `__name__` is not dependent on `transformers.[module_name]`, but is instead `__main__`. When you're updating the root HF logger's level, it's therefore not updating that logger level. You can run the following script to see what happens: ```py from transformers import logging file_utils_logger = logging.get_logger('transformers.file_utils') modeling_utils_logger = logging.get_logger('transformers.modeling_utils') ner_script = logging.get_logger('__main__') print(file_utils_logger.getEffectiveLevel(), modeling_utils_logger.getEffectiveLevel(), ner_script.getEffectiveLevel()) # 30 30 30 main = logging.get_logger() main.setLevel(logging.INFO) print(file_utils_logger.getEffectiveLevel(), modeling_utils_logger.getEffectiveLevel(), ner_script.getEffectiveLevel()) # 20 20 30 ``` I think the same thing is happening with handlers. What do you think?<|||||>You are totally right. I fully reverted the changes in `logging.py` and replaced all the missing usage of the HF wrapper in the lib.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=h1) Report > Merging [#7128](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52d250f6aa14844024806e5e4dd1c7882bbd8dd5?el=desc) will **increase** coverage by `1.73%`. > The diff coverage is `100.00%`. 
[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7128/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7128 +/- ## ========================================== + Coverage 79.12% 80.86% +1.73% ========================================== Files 168 168 Lines 32303 32303 ========================================== + Hits 25560 26121 +561 + Misses 6743 6182 -561 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `90.76% <100.00%> (+20.74%)` | :arrow_up: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `94.11% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.51%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (-0.36%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/7128/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=footer). Last update [52d250f...977b295](https://codecov.io/gh/huggingface/transformers/pull/7128?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>It should be ok from my side. 
@LysandreJik you can review whenever you want.
transformers
7,127
closed
only fine tune the encoder part of BART
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> May I ask how I can fine-tune only the BART encoder? from transformers import BartTokenizer, BartModel tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model = BartModel.from_pretrained('facebook/bart-base', return_dict=True) inputs = tokenizer(["i am a girl", "i am from germany"], return_tensors="pt", padding=True) outputs = model(**inputs).encoder_last_hidden_state I load the pretrained BART and get the encoder_last_hidden_state. However, during fine-tuning, what about the decoder part? How could I load only the encoder part and fine-tune it? Thanks a lot <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
09-14-2020 18:34:34
09-14-2020 18:34:34
What task are you working on? Or to rephrase, what do you do once you have ` encoder_output = model(**inputs).encoder_last_hidden_state`?<|||||>Thanks for your feedback. In my case, I only want the bart encoder and combine it to my own decoder.  Thanks for further feedback<|||||>How do you want to fine-tune the encoder? You'll either need to add a task specific head or pair it with your decoder and train both encoder and decoder. I think you should be able to Initialize only the encoder using following snippet ```python from torch import nn from transformers.modeling_bart import BartEncoder, PretrainedBartModel, PretrainedBartModel from transformers import BartConfig class Encoder(PretrainedBartModel): def __init__(self, config: BartConfig): super().__init__(config) padding_idx, vocab_size = config.pad_token_id, config.vocab_size self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx) self.encoder = BartEncoder(config, self.shared) def forward( self, input_ids, attention_mask=None, output_attentions=False, output_hidden_states=False, return_dict=False ): output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) return encoder_outputs enc = Encoder.from_pretrained("facebook/bart-base") ``` @sshleifer does this makes sense ?<|||||>The text makes sense. Does that code run!?<|||||>> How do you want to fine-tune the encoder? You'll either need to add a task specific head or pair it with your decoder and train both encoder and decoder. I think you should be able to Initialize only the encoder using following snippet > > ```python > from torch import nn > > from transformers.modeling_bart import BartEncoder, PretrainedBartModel, PretrainedBartModel > from transformers import BartConfig > > class Encoder(PretrainedBartModel): > def __init__(self, config: BartConfig): > super().__init__(config) > > padding_idx, vocab_size = config.pad_token_id, config.vocab_size > self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx) > > self.encoder = BartEncoder(config, self.shared) > > def forward( > self, input_ids, attention_mask=None, output_attentions=False, output_hidden_states=False, return_dict=False > ): > > output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions > output_hidden_states = ( > output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states > ) > return_dict = return_dict if return_dict is not None else self.config.use_return_dict > > encoder_outputs = self.encoder( > input_ids=input_ids, > attention_mask=attention_mask, > output_attentions=output_attentions, > output_hidden_states=output_hidden_states, > return_dict=return_dict, > ) > > return encoder_outputs > > enc = Encoder.from_pretrained("facebook/bart-base") > ``` > > @sshleifer does this makes sense ? Thanks for your code @patil-suraj . Really appreciate!!<|||||>> The text makes sense. Does that code run!? In my case, the code runs without problems, so I think it"s correct! Thanks a lot!
transformers
7,126
closed
Multi predictions trainer
This allows the `Trainer` to properly return predictions when the model has several outputs (for instance, all models `XxxForMultipleChoice`). This should unlock progress in #7032 where the start and end logits are both required.
09-14-2020 17:36:54
09-14-2020 17:36:54
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=h1) Report > Merging [#7126](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/52d250f6aa14844024806e5e4dd1c7882bbd8dd5?el=desc) will **increase** coverage by `1.71%`. > The diff coverage is `76.92%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7126/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7126 +/- ## ========================================== + Coverage 79.12% 80.84% +1.71% ========================================== Files 168 168 Lines 32303 32305 +2 ========================================== + Hits 25560 26117 +557 + Misses 6743 6188 -555 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.04% <70.00%> (+0.13%)` | :arrow_up: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.29% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.76%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.77% <0.00%> (-0.51%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <0.00%> (-0.36%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.77% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.84% <0.00%> (-0.25%)` | :arrow_down: | | ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/7126/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=footer). Last update [52d250f...6eb9b5b](https://codecov.io/gh/huggingface/transformers/pull/7126?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Yes this doesn't work if the model has `output_attentions` or `output_hiddens` in the config set to True: it needs the outputs to be a tuple of tensor with optional loss at the beginning. Documenting this is next when I make a template/protocol for models that `Trainer` supports. I can add a clean error message if one output is detected to be a tuple of tensors and add asserts that `output_attention` and `output_all_hiddens` are False in the config (if those arguments can be found).<|||||>@sgugger are you saying that the eval in Trainer-backed https://github.com/huggingface/transformers/blob/master/examples/multiple-choice/run_multiple_choice.py is not currently working? <|||||>I'm pretty sure he meant `XxxForQuestionAnswering`, not multiple choice<|||||>> I'm pretty sure he meant `XxxForQuestionAnswering`, not multiple choice Oh yes, makes sense now. Thanks @LysandreJik ;)<|||||>Yes @LysandreJik reads my thoughts right, sorry about the typo ;-)
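For illustration, a `compute_metrics` function can then unpack several prediction arrays. A hedged sketch for a QA-style model, assuming `label_ids` is likewise returned as a matching tuple of arrays; names and shapes are illustrative:

```python
import numpy as np
from transformers import EvalPrediction


def compute_metrics(p: EvalPrediction):
    # p.predictions can now be a tuple of arrays, e.g. (start_logits, end_logits)
    start_logits, end_logits = p.predictions
    start_preds = np.argmax(start_logits, axis=-1)
    end_preds = np.argmax(end_logits, axis=-1)
    start_labels, end_labels = p.label_ids  # assumed to be a matching tuple
    return {
        "start_acc": float((start_preds == start_labels).mean()),
        "end_acc": float((end_preds == end_labels).mean()),
    }
```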
transformers
7,125
closed
fix ZeroDivisionError and epoch counting
@sgugger This PR fixes 2 minor bugs in `trainer.py`.

First, the two lines

```python
num_train_epochs = (
    self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1
)
```

and

```python
epochs_trained = self.global_step // (len(train_dataloader) // self.args.gradient_accumulation_steps)
```

give a `ZeroDivisionError` when `len(train_dataloader) < self.args.gradient_accumulation_steps`. The fix takes into account the code below the comment

```python
# last step in epoch but step is always smaller than gradient_accumulation_steps
```

therefore when `len(train_dataloader) < self.args.gradient_accumulation_steps`, we still have 1 step in each epoch.

The second bug is due to the `+ 1` in

```python
num_train_epochs = (
    self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1
)
```

In the example below, we have

```
len(train_dataloader) == 40
self.args.gradient_accumulation_steps == 4
self.args.max_steps == 10
```

However, we get `num_train_epochs == 2` from the original code, which should be `1`.

```bash
python utils/download_glue_data.py --data_dir ./examples/text-classification/glue/ --tasks all
python3 run_glue.py \
  --task_name wnli \
  --data_dir ./glue/WNLI \
  --model_name_or_path distilbert-base-uncased \
  --output_dir ./glue/WNLI/ \
  --max_seq_length 16 \
  --num_train_epochs 1 \
  --per_device_train_batch_size 16 \
  --gradient_accumulation_steps 4 \
  --max_steps 10 \
  --logging_steps 1 \
  --save_steps 5 \
  --seed 1 \
  --do_train \
  --do_eval \
  --do_predict \
  --overwrite_output_dir
```

```
(Pdb) len(train_dataloader)
40
(Pdb) self.args.gradient_accumulation_steps
4
(Pdb) self.args.max_steps
10
(Pdb) p num_train_epochs
2
```
09-14-2020 17:33:59
09-14-2020 17:33:59
Looks great! Would you mind adding a few tests so we don't accidentally make changes that will resurface this bug?<|||||>> > > Looks great! Would you mind adding a few tests so we don't accidentally make changes that will resurface this bug? @sgugger I did a few simple tests on my side. Could you guide me how to write a test that is usually done by HF members? BTW, I don't have powerful machine, so hope the tests could be lightweight.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=h1) Report > Merging [#7125](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4d3914841932065f19a61ad46e178f94dedeff5a?el=desc) will **increase** coverage by `1.60%`. > The diff coverage is `66.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7125/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7125 +/- ## ========================================== + Coverage 80.07% 81.68% +1.60% ========================================== Files 168 168 Lines 32257 32259 +2 ========================================== + Hits 25831 26351 +520 + Misses 6426 5908 -518 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `52.90% <66.66%> (+0.29%)` | :arrow_up: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.55% <0.00%> (-34.28%)` | :arrow_down: | | [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.80% <0.00%> (-0.25%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <0.00%> (+0.16%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `93.54% <0.00%> (+0.71%)` | :arrow_up: | | ... 
and [3 more](https://codecov.io/gh/huggingface/transformers/pull/7125/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=footer). Last update [4d39148...c8a542c](https://codecov.io/gh/huggingface/transformers/pull/7125?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Look at the `test_trainer.py` file in the tests directory. There are utility functions to build a small `Trainer` on a simple regression problem. You can just add a few tests mimicking the ones that count the number of steps during training. To check they pass, you can run the command:

```
pytest tests/test_trainer.py
```

which should go pretty quickly.<|||||>Thanks. I will do it and report it when ready.<|||||>I added a test `test_num_train_epochs_in_training` for the case `len(train_dataloader) < self.args.gradient_accumulation_steps`. For the off-by-one bug below, due to the `+ 1` in

```python
num_train_epochs = (
    self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1
)
```

I didn't figure out a way to test it, because this information is not returned in the output, and its only actual usage is in the line

```python
logger.info(" Num Epochs = %d", num_train_epochs)
```

because the training loop is actually controlled by `self.args.max_steps` if it is given. (I assume `adding a few tests` means to add them in `test_trainer.py` and push.)<|||||>Yes adding the test in `test_trainer.py` is exactly what I wanted, thanks!
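For reference, the arithmetic being fixed, as a standalone sketch; this is a hypothetical helper mirroring the behaviour described above, not the actual `trainer.py` code:

```python
import math


def compute_schedule(steps_in_dataloader: int, gradient_accumulation_steps: int, max_steps: int):
    # never divide by zero when the dataloader is shorter than the accumulation window,
    # and don't add a spurious extra epoch when max_steps divides evenly
    steps_per_epoch = max(steps_in_dataloader // gradient_accumulation_steps, 1)
    num_train_epochs = math.ceil(max_steps / steps_per_epoch)
    return steps_per_epoch, num_train_epochs


print(compute_schedule(40, 4, 10))  # (10, 1): one epoch, not two
print(compute_schedule(3, 8, 10))   # (1, 10): no ZeroDivisionError
```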
transformers
7,124
closed
[s2s] distributed eval in one command
### Issue

One GPU command:

```bash
python run_eval.py Helsinki-NLP/opus-mt-en-ro wmt_en_ro/test.source enro_test_translations.txt --reference_path wmt_en_ro/test.target --task translation --score_path mar_test_bleu.json --fp16 --bs 64
# {'bleu': 27.6865, 'n_obs': 1999, 'runtime': 85, 'seconds_per_sample': 0.0425}
```

Multi GPU:

```bash
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name $opus --input_path wmt_en_ro --type_path test --fp16 --save_dir tmp_gen --fp16 --bs 64
python aggregate_distributed_results.py tmp_gen tmp_gen --calc_bleu
cat tmp_gen/metrics.json
# "bleu": 27.7772
```

### New command

```bash
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name Helsinki-NLP/opus-mt-en-ro --data_dir wmt_en_ro --type_path test --save_dir tmp_gen4 --fp16 --bs 64 --task translation
```

### Future PRs

The actual generations differ slightly vs the single gpu implementation, likely because of sortish sampler/leading spaces.
09-14-2020 16:46:38
09-14-2020 16:46:38
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=h1) Report > Merging [#7124](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/206b78d4850d3c6fe85a015654293fc4b803ed7b?el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7124/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7124 +/- ## ========================================== + Coverage 80.84% 80.86% +0.02% ========================================== Files 168 168 Lines 32284 32284 ========================================== + Hits 26099 26108 +9 + Misses 6185 6176 -9 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.36%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (+0.16%)` | :arrow_up: | | [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `94.11% <0.00%> (+1.17%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (+1.25%)` | :arrow_up: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7124/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <0.00%> (+4.00%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=footer). Last update [206b78d...be6badf](https://codecov.io/gh/huggingface/transformers/pull/7124?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,123
closed
Seq2SeqDataset experiment: try to use arrow datasets
### High Level Goal

See if [`datasets/arrow_dataset.py`](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L172) can improve `example/seq2seq/utils.py`'s [`Seq2SeqDataset`](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L165). This is more experimental/proof of concept.

### Spec

+ Generate a PR that adds `Seq2SeqArrowDataset` with a similar signature to `Seq2SeqDataset` but uses the arrow dataset as a backend.
+ This means that wherever `linecache` is used, arrow should be used.
+ You can flex the API as much as you'd like, but if you don't call `prepare_seq2seq_batch` you will have to implement lots of collate fns. (If you go this route, you can just implement 1.)
+ Feel free to reduce the scope to either translation or summarization.

The PR description should answer the following questions:

- Can we generate similar batches with an ArrowDataset?
- Is the dataset faster?
- Does it consume as little RAM?
- Does it simplify the code?

Let me know if this is impossible or unclear!
09-14-2020 15:41:50
09-14-2020 15:41:50
@sshleifer could you please assign me to this issue? I would love to take a stab at this alongside `Seq2SeqTrainer` or once it's merged. ;)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
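A very rough sketch of how the arrow backend could replace the `linecache` lookups, using the `datasets` library; the in-memory toy data, column names, and truncation lengths are placeholders rather than part of the spec:

```python
from datasets import Dataset  # pip install datasets
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

# toy in-memory data; in practice this would be loaded from the train.source / train.target files
raw = Dataset.from_dict(
    {
        "source": ["I am a small frog.", "Tom asked his teacher for advice."],
        "target": ["A small frog am I.", "Advice was what Tom asked."],
    }
)


def convert(examples):
    model_inputs = tokenizer(examples["source"], max_length=1024, truncation=True)
    targets = tokenizer(examples["target"], max_length=56, truncation=True)
    model_inputs["labels"] = targets["input_ids"]
    return model_inputs


dataset = raw.map(convert, batched=True, remove_columns=["source", "target"])
dataset.set_format(type="torch")  # memory-mapped arrow table instead of linecache lookups
print(dataset[0]["input_ids"])
```

Padding would then move into a collate function, which keeps the dynamic-batching behaviour of the current dataset.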
transformers
7,122
closed
backtranslation script
+ Wait for #7106, then resume in https://github.com/huggingface/transformers/pull/7121.
09-14-2020 15:34:39
09-14-2020 15:34:39
https://discuss.huggingface.co/t/marian-language-discovery-questions/739/4?u=sshleifer
transformers
7,121
closed
[blk] backtranslation script
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
09-14-2020 15:17:58
09-14-2020 15:17:58
transformers
7,120
closed
modeling_xlnet.py:283: UserWarning: Mixed memory format inputs detected while calling the operator.
## Environment info

- `transformers` version: 3.0.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help
@TevenLeScao

## Information

Model I am using (Bert, XLNet ...): xlnet-base-uncased

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)

## To reproduce

Steps to reproduce the behavior:

1. Simply invoke XLNet using the following

```python
last_hidden_state, mems = lm(
    input_ids=input_ids,
    token_type_ids=token_type_ids,
    output_hidden_states=True,
)
```

I receive a warning as follows:

```
...\anaconda3\envs\newstsc2\lib\site-packages\transformers\modeling_xlnet.py:283: UserWarning: Mixed memory format inputs detected while calling the operator. The operator will output contiguous tensor even if some of the inputs are in channels_last format. (Triggered internally at ..\aten\src\ATen\native\TensorIterator.cpp:918.)
  attn_score = (ac + bd + ef) * self.scale
```

## Expected behavior

no warning
09-14-2020 14:47:34
09-14-2020 14:47:34
Hey! I haven't been able to reproduce this with what you've given, maybe it is linked to your 'token_type_ids' ? I don't really have enough info on what you're trying to do with only this line. In any case: - The warning text seems innocuous. - I don't see this line in the current (master) codebase, so I'd suggest upgrading and seeing if this issue still crops up.<|||||>@TevenLeScao FYI: I have the same warning message, on Colab,using Xlnet-base-cased (custom script , custom dataset)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,119
closed
Fix reproducible tests in Trainer
This fixes the tests that were using hardcoded values for reproducible training by testing instead the training yields the same result as one run with the same seed during setup. As a result, it can also run in a multigpu-environment (which is why I removed the decorator). I also fixed the batch size to count the number of steps, which should also fix those tests for multigpu-environent. @stas00, when you get a chance to run the tests on that file, it would be great to know if the fix worked as intended.
09-14-2020 14:12:25
09-14-2020 14:12:25
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=h1) Report > Merging [#7119](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5636cbb25d248b61bb9027d026dddcd6d1599b0b?el=desc) will **decrease** coverage by `1.26%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7119/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7119 +/- ## ========================================== - Coverage 79.48% 78.22% -1.27% ========================================== Files 168 168 Lines 32281 32281 ========================================== - Hits 25660 25251 -409 - Misses 6621 7030 +409 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.24% <0.00%> (-55.76%)` | :arrow_down: | | [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: | | [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `57.28% <0.00%> (-15.08%)` | :arrow_down: | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `74.18% <0.00%> (-12.29%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.33% <0.00%> (-11.97%)` | :arrow_down: | | ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7119/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=footer). Last update [5636cbb...5d7c2ae](https://codecov.io/gh/huggingface/transformers/pull/7119?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>with both gpus seen: ``` pytest tests/test_trainer.py Test session starts (platform: linux, Python 3.8.5, pytest 6.0.1, pytest-sugar 0.9.4) Using --randomly-seed=3075405078 rootdir: /mnt/nvme1/code/huggingface/transformers-master plugins: xdist-2.1.0, forked-1.3.0, hydra-core-1.0.0, pspec-0.0.4, sugar-0.9.4, randomly-3.4.1, cov-2.10.1, flakefinder-1.0.0 collecting ... tests/test_trainer.py ✓✓✓✓✓ 45% ████▋ ――――――――――――――――――――――――――――――――――――――――――――――― TrainerIntegrationTest.test_train_and_eval_dataloaders ――――――――――――――――――――――――――――――――――――――――――――――― self = <tests.test_trainer.TrainerIntegrationTest testMethod=test_train_and_eval_dataloaders> def test_train_and_eval_dataloaders(self): trainer = get_regression_trainer(learning_rate=0.1, per_device_train_batch_size=16) > self.assertEqual(trainer.get_train_dataloader().batch_size, 16) E AssertionError: 32 != 16 tests/test_trainer.py:144: AssertionError ``` with zero or 1 gpu it passes. <|||||>Could you retest now?<|||||>Success!<|||||>Thanks for checking @stas00 !
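The idea in miniature, as a toy stand-in for the regression setup in `tests/test_trainer.py` (not the actual test code): instead of asserting hardcoded values, run the same training twice with the same seed and compare.

```python
import torch


def train_once(seed: int = 42):
    torch.manual_seed(seed)
    x = torch.randn(64, 1)
    y = 2.0 * x + 1.0 + 0.1 * torch.randn(64, 1)
    model = torch.nn.Linear(1, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return [p.detach().clone() for p in model.parameters()]


# instead of comparing against hardcoded values, compare two runs with the same seed
run_a, run_b = train_once(42), train_once(42)
assert all(torch.equal(a, b) for a, b in zip(run_a, run_b))
```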
transformers
7,118
closed
Temporarily skip failing tests due to dependency change
cc @sgugger @stas00
09-14-2020 11:37:00
09-14-2020 11:37:00
@julien-c wrote that test https://github.com/huggingface/transformers/pull/3800/files#diff-d524f2b47fd9e3af8aea82b9fac55079R88-R90 so it's probably a good idea to ping him instead, as I'm not yet familiar with that part of `transformers`.
transformers
7,117
closed
Feature request: State the goals and none goals of the library in the README
# 🚀 Feature request

I'd like the library's README to say what its goals and non-goals are. For example, as a (new) user I really like [TypeScript's non-goals](https://github.com/microsoft/TypeScript/wiki/TypeScript-Design-Goals) because they made it clear what to expect from TypeScript and thus how to best use it. I'd like the same for transformers.

## Motivation

As a new user I look at the README and think this library can and wants to do everything that I do. But I think it has a tighter, more well-defined (implicit) scope, and that peripheral use cases are better handled elsewhere, such as aligning [offset annotations](https://github.com/huggingface/transformers/issues/7019#issuecomment-691965153) or having trainers [support other data structures](https://github.com/huggingface/transformers/issues/6860). I wish that was stated (as clearly as possible) so that I could make the best use of transformers instead of forcing it to be something it doesn't want to be.

## Your contribution

I think it's up to the Hugging Face team to decide what transformers is and what it will be. After digging around for a few days I would suggest something like

> Transformers makes pre-trained language models available to everyone through clear, consistent and cross-platform APIs. Transformers' goals are to make pretrained models accessible and plug into end users' existing infrastructure, as well as support common academic tasks.

> Transformers does not aim to
> * Implement end-to-end applications of NLP
> * Implement domain-specific models (custom entities in Swahili, joint classification and entity annotation)
> * ....
09-14-2020 10:42:39
09-14-2020 10:42:39
That's a good idea! The closest document we have to this right now is our [philosophy](https://huggingface.co/transformers/philosophy.html), have you checked it out?<|||||>No, I just saw it now. It in fact has exactly what I would have wanted to find. Particularly the sentence:

> As a consequence, this library is NOT a modular toolbox of building blocks for neural nets. If you want to extend/build-upon the library, just use regular Python/PyTorch/TensorFlow/Keras modules and inherit from the base classes of the library to reuse functionalities like model loading/saving.
<|||||>We could clean up the readme indeed. It's probably time to remove the migration sections about `pytorch-pretrained-bert` and `pytorch-transformers`, and the quick tour of the examples could be shortened.<|||||>Cleaning the README has been on my TODO for a long time, but I have been actively procrastinating. Will do that this week.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,116
closed
fix link to paper
Change the URL of the cited paper (the previous one linked to a different paper).
09-14-2020 10:05:21
09-14-2020 10:05:21
transformers
7,115
closed
can i self-define the decoder when i user EncoderDecoderModel?
# ❓ Questions & Help

In my task, I have to define the decoder myself, because the vocabulary consists of Chinese words. But it seems the class method does not support that: I have to init the decoder from a pretrained model.
09-14-2020 09:45:35
09-14-2020 09:45:35
Hey @lonelydancer, you can define which decoder you want to use. It can be either a pretrained model or a randomly initialized model. The model just has to be part of `AutoModelForCausalLM` to be loaded into the EncoderDecoderModel. If you give me some more details on your use-case I can probably help you :-) <|||||>@patrickvonplaten Hi, here is my code. It seems OK.

```python
tokenizer = BertTokenizer.from_pretrained(model_name)
tokenizer.bos_token = tokenizer.cls_token
tokenizer2 = BertTokenizer(vocab_file=voc_file, tokenize_chinese_chars=False)  # initialize a tokenizer with my own vocab
tokenizer2.bos_token = tokenizer2.cls_token
decoder_model = AutoModelForCausalLM.from_config(config=decoder_config)
config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)
```
<|||||>Why would you need two tokenizers? Should the model encode in English and decode to Chinese?<|||||>@patrickvonplaten In my task, the encoder tokenizer is char level, and the decoder works on a word sequence.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
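For reference, a hedged sketch of pairing a pretrained encoder with a randomly initialized decoder that has its own (e.g. word-level) vocabulary; the checkpoint name and `vocab_size` are placeholders:

```python
from transformers import AutoModelForCausalLM, BertConfig, BertModel, EncoderDecoderModel

encoder = BertModel.from_pretrained("bert-base-chinese")  # pretrained, char-level encoder

decoder_config = BertConfig(
    vocab_size=50000,          # size of your own word-level vocabulary (placeholder)
    is_decoder=True,
    add_cross_attention=True,
)
decoder = AutoModelForCausalLM.from_config(decoder_config)  # randomly initialized decoder

model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```

The decoder then trains from scratch on the custom vocabulary while the encoder starts from pretrained weights.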
transformers
7,114
closed
How to return the word embeddings and how to understand the hidden_states in return?
# ❓ Questions & Help

## Details

I am new here; transformers is amazing. What I want is to get the embeddings of words with context. "logits" is obviously a bit thin, and I think the `hidden_states` may represent the word embeddings, but I cannot make sense of "the output of the embeddings" versus "the output of each layer". Which one should I choose? Thanks very much!

> hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
09-14-2020 07:56:43
09-14-2020 07:56:43
Well, if you want the embeddings of words with their context, your best bet is to take the entire output of the model. If you take only the word embeddings, these are contextless. You should use the model with no head, i.e., the base model (`BertModel`, `GPT2Model`, etc.), and take its output to get this value.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
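A small sketch of the difference (BERT is just used as an example here):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", return_dict=True)

inputs = tokenizer("Time flies like an arrow", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

contextual = outputs.last_hidden_state      # (batch, seq_len, hidden): one vector per token, in context
all_layers = outputs.hidden_states          # tuple: embedding output + the output of each layer
static = model.get_input_embeddings()(inputs["input_ids"])  # context-free lookup-table embeddings
```

For contextual word representations, `last_hidden_state` (or an average of the last few entries of `hidden_states`) is the usual choice; the first entry of `hidden_states` is the contextless embedding output.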
transformers
7,113
closed
Generate coherent text with T5
Can we use T5 to generate coherent text given an input line, as in the case of GPT-2? If yes, how do we fine-tune T5 for such a task?
09-14-2020 06:32:39
09-14-2020 06:32:39
@parthplc Did you find an answer to this question?<|||||>@jsrozner yeah, I was able to finetune t5 for text generation.<|||||>> @jsrozner yeah, I was able to finetune t5 for text generation. Is your work publicly visible anywhere? (colab or a repository) I was looking for some examples to adapt to my use case. Although I got a helpful response here: https://discuss.huggingface.co/t/t5-for-conditional-generation-getting-started/1284/2<|||||>I used the same @jsrozner <|||||>@parthplc @jsrozner can someone share a nb of using finetune.py + t5, would love to see some examples for reference. <|||||>Check out https://discuss.huggingface.co/t/t5-seq2seq-custom-fine-tuning/1497/6 (details on tweaking your vocab if you need it) The link above (https://discuss.huggingface.co/t/t5-for-conditional-generation-getting-started/1284) has links to other useful T5 posts on huggingface. And I am looking for answers to this post for some other tips and tricks: https://discuss.huggingface.co/t/t5-tips-for-finetuning-on-crossword-clues-clue-answer/1514 To actually run, you should do this (called "installing from source): - clone transformers repository - create new conda or other env - (from top level directory), pip install -e . - cd examples && pip install -r requirements.txt - cd seq2seq - ./finetune_t5_bart_tiny.sh (or whatever the name is) -- will run two test epochs After that it's just a matter of choosing the model, tokenizer, setting up your data, and then tweaking params. Most of that is in the posts I linked!
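To sketch what such fine-tuning looks like in code (a toy single example; the "continue:" prefix is an arbitrary choice and `t5-small` is just a convenient checkpoint, so this is not the exact recipe used above):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# one fine-tuning step on a single (prompt, continuation) pair
inputs = tokenizer("continue: The old house at the end of the street", return_tensors="pt")
targets = tokenizer("had been empty for as long as anyone could remember.", return_tensors="pt")

outputs = model(input_ids=inputs["input_ids"], labels=targets["input_ids"], return_dict=True)
outputs.loss.backward()  # in a real script, an optimizer step would follow

# at inference time, generation works much like GPT-2 style prompting
generated = model.generate(inputs["input_ids"], max_length=40, num_beams=4, early_stopping=True)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```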
transformers
7,112
closed
prepare for the label for EncoderDecoderModel
# ❓ Questions & Help

When using an EncoderDecoderModel for a translation task, do I have to right-shift the target tokens to generate the labels? The [tutorial](https://github.com/huggingface/transformers/tree/master/model_cards/patrickvonplaten/bert2bert-cnn_dailymail-fp16) does not do that, and neither does the source code of the forward function.
09-14-2020 03:30:46
09-14-2020 03:30:46
In `BertLMHeadModel`, the labels are shifted internally:

```python
prediction_scores = prediction_scores[:, :-1, :].contiguous()
labels = labels[:, 1:].contiguous()
```

so you can pass the unshifted target ids as `labels`.
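In other words, the unshifted target ids can be fed to `labels` directly. A minimal sketch following the model-card pattern referenced in the question; checkpoints and sentences are placeholders, and padded label positions would normally be set to -100:

```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

src = tokenizer("How are you doing today?", return_tensors="pt")
tgt = tokenizer("I am doing fine.", return_tensors="pt")

# no manual right shift: the decoder head shifts scores/labels internally
outputs = model(
    input_ids=src["input_ids"],
    attention_mask=src["attention_mask"],
    decoder_input_ids=tgt["input_ids"],
    labels=tgt["input_ids"],
    return_dict=True,
)
print(outputs.loss)
```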
transformers
7,111
closed
MBART/Marian for low resource/backtranslation
Are there any fine-tuned checkpoints of mBART on any low-resource language released by Hugging Face? I can see English-Romanian, but was wondering if there is a way to access other languages. Did Facebook actually release them? @sshleifer
09-14-2020 03:22:12
09-14-2020 03:22:12
Nope, use Helsinki-NLP!<|||||>LMK if you have trouble finding your language.<|||||>**Swahili / Uzbek / Telugu / Gujrati** any of them will do. Also, I just need to do back-translation, do u have a code snippet to do translation using Helsinki-NLP<|||||>```python mname = 'Helsinki-NLP/opus-mt-en-sw' src_text = ['I am a small frog with tiny legs.'] torch_device = 'cpu'#'cuda' if torch.cuda.is_available() else 'cpu' model = AutoModelForSeq2SeqLM.from_pretrained(mname) tok = AutoTokenizer.from_pretrained(mname) translated = model.generate(**tok(src_text, return_tensors='pt')) tok.batch_decode(translated, skip_special_tokens=True) # ['Mimi ni chura mdogo mwenye miguu midogo.'] ``` Full list of models [here](https://huggingface.co/Helsinki-NLP) let's move to [here](https://discuss.huggingface.co/t/marian-language-discovery-questions/739/3) for further language discovery questions! Thanks!<|||||>Backtranslation snippet: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer mname_fwd = 'Helsinki-NLP/opus-mt-en-ceb' mname_bwd = 'Helsinki-NLP/opus-mt-ceb-en' src_text = ['I am a small frog with tiny legs.'] torch_device = 'cpu'#'cuda' if torch.cuda.is_available() else 'cpu' fwd = AutoModelForSeq2SeqLM.from_pretrained(mname_fwd).to(torch_device) fwd_tok = AutoTokenizer.from_pretrained(mname_fwd) bwd_tok = AutoTokenizer.from_pretrained(mname_bwd) bwd = AutoModelForSeq2SeqLM.from_pretrained(mname_bwd).to(torch_device) if torch_device == 'cuda': fwd = fwd.half() bwd = bwd.half() fwd_batch = fwd_tok(src_text, return_tensors='pt').to(torch_device) translated = fwd.generate(**fwd_batch) translated_txt = fwd_tok.batch_decode(translated, skip_special_tokens=True) bwd_batch = bwd_tok(translated_txt, return_tensors='pt').to(torch_device) backtranslated = bwd.generate(**bwd_batch) bwd_tok.batch_decode(backtranslated, skip_special_tokens=True) # ['I am a small toad with small feet.'] ```<|||||>``` import ast import torch import os from transformers import AutoModelForSeq2SeqLM, AutoTokenizer os.environ["CUDA_VISIBLE_DEVICES"]="2" model1 = 'Helsinki-NLP/opus-mt-en-ceb' model2 = 'Helsinki-NLP/opus-mt-ceb-en' torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' fwd = AutoModelForSeq2SeqLM.from_pretrained(model1).to(torch_device) fwd_tok = AutoTokenizer.from_pretrained(model1) bwd_tok = AutoTokenizer.from_pretrained(model2) bwd = AutoModelForSeq2SeqLM.from_pretrained(model2).to(torch_device) if torch_device == 'cuda': fwd = fwd.half() bwd = bwd.half() for line in open('train_rep.txt'): line = ast.literal_eval(line.strip()) fwd_batch = fwd_tok(line, return_tensors='pt').to(torch_device) translated = fwd.generate(**fwd_batch) translated_txt = fwd_tok.batch_decode(translated, skip_special_tokens=True) bwd_batch = bwd_tok(translated_txt, return_tensors='pt').to(torch_device) backtranslated = bwd.generate(**bwd_batch) orginal_text = bwd_tok.batch_decode(backtranslated, skip_special_tokens=True) print(orginal_text) break ``` got ``` File "/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 555, in convert_to_tensors tensor = as_tensor(value) ValueError: expected sequence of length 27 at dim 1 (got 47) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "bt.py", line 25, in <module> fwd_batch = fwd_tok(line, return_tensors='pt').to(torch_device) File "/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1954, in 
__call__ **kwargs, File "/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 2139, in batch_encode_plus **kwargs, File "/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 548, in _batch_encode_plus verbose=verbose, File "/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 614, in _batch_prepare_for_model batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors) File "/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 186, in __init__ self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis) File "/nas/home/tuhinc/miniconda3/envs/advattack/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 572, in convert_to_tensors "Unable to create tensor, you should probably activate truncation and/or padding " ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. ``` It works for single sentences but fails for an array or sentences<|||||>Got it had to do padding=True<|||||>`padding='longest'` is the best setting IMO.
transformers
7,110
closed
[s2s] distributed eval cleanup
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
09-14-2020 03:08:13
09-14-2020 03:08:13
Failure is spurious, merging.
transformers
7,109
closed
[s2s run_eval] new features
this PR adds several features to `run_eval.py` and adds a new script `run_eval_search.py` * adding 2 new args: ``` --dump-args print the custom hparams with the results --info [INFO] use in conjunction w/ --dump-args to print with the results whatever other info you'd like, e.g. lang=en-ru. If no value is passed, the current datetime string will be used. ``` * changed `parse_numeric_n_bool_cl_kwargs` to support bool args, so now we can pass ` --early_stopping true` * added a new wrapper script `run_eval_search.py` that performs parametric search using `run_eval.py`. Here is the new section in `README.md` that explains all the additions: #### run_eval tips and tricks When using `run_eval.py`, the following features can be useful: * if you running the script multiple times and want to make it easier to track what arguments produced that output, use `--dump-args`. Along with the results it will also dump any custom params that were passed to the script. For example if you used: `--num_beams 8 --early_stopping true`, the output will be: ``` {'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True} ``` `--info` is an additional argument available for the same purpose of tracking the conditions of the experiment. It's useful to pass things that weren't in the argument list, e.g. a language pair `--info "lang:en-ru"`. But also if you pass `--info` without a value it will fallback to the current date/time string, e.g. `2020-09-13 18:44:43`. If using `--dump-args --info`, the output will be: ``` {'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True, 'info': '2020-09-13 18:44:43'} ``` If using `--dump-args --info "pair:en-ru chkpt=best`, the output will be: ``` {'bleu': 26.887, 'n_obs': 10, 'runtime': 1, 'seconds_per_sample': 0.1, 'num_beams': 8, 'early_stopping': True, 'info': 'pair=en-ru chkpt=best'} ``` * if you need to perform a parametric search in order to find the best ones that lead to the highest BLEU score, let `run_eval_search.py` to do the searching for you. The script accepts the exact same arguments as `run_eval.py`, plus an additional argument `--search`. The value of `--search` is parsed, reformatted and fed to ``run_eval.py`` as additional args. The format for the ``--search`` value is a simple string with hparams and the values to try, e.g.: ``` --search "num_beams=5:10 length_penalty=0.8:1.0:1.2 early_stopping=true:false" ``` which will generate `12` `(2*3*2)` searches for a product of each hparam. For example the example that was just used will invoke `run_eval.py` repeatedly with: ``` --num_beams 5 --length_penalty 0.8 --early_stopping true --num_beams 5 --length_penalty 0.8 --early_stopping false [...] --num_beams 10 --length_penalty 1.2 --early_stopping false ``` On completion this function prints a markdown table of the results sorted by the best BLEU score and the winning arguments. ``` bleu | num_beams | length_penalty | early_stopping ----- | --------- | -------------- | -------------- 26.71 | 5 | 1.1 | 1 26.66 | 5 | 0.9 | 1 26.66 | 5 | 0.9 | 0 26.41 | 5 | 1.1 | 0 21.94 | 1 | 0.9 | 1 21.94 | 1 | 0.9 | 0 21.94 | 1 | 1.1 | 1 21.94 | 1 | 1.1 | 0 Best score args: stas/wmt19-en-ru data/en-ru/val.source data/en-ru/test_translations.txt --reference_path data/en-ru/val.target --score_path data/en-ru/test_bleu.json --bs 8 --task translation --num_beams 5 --length_penalty 1.1 --early_stopping True ``` @sshleifer
09-14-2020 02:14:36
09-14-2020 02:14:36
After signature discussion is complete, lets add some test coverage. Can copy/modify [`test_run_eval`](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/test_seq2seq_examples.py#L287)<|||||>> * Important that we can `utils.json_load` the dumped files. Do you see a reason why it should fail to do so? but the point is moot if we merge into one dict as discussed above.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=h1) Report > Merging [#7109](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90cde2e938638e64a8696a12b79ee5f52364b162?el=desc) will **increase** coverage by `1.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7109/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7109 +/- ## ========================================== + Coverage 79.62% 80.63% +1.00% ========================================== Files 168 168 Lines 32284 32284 ========================================== + Hits 25706 26031 +325 + Misses 6578 6253 -325 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: | | [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: | | [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.52% <0.00%> (-3.52%)` | :arrow_down: | | 
[src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.26%)` | :arrow_down: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7109/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=footer). Last update [90cde2e...875eac8](https://codecov.io/gh/huggingface/transformers/pull/7109?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>(after we sort out these few points you pointed out) I have discovered more work/functionality is needed for the new script - need to be able to recover from CUDA OOM, want to tap into `--info`, etc. So probably, let's merge this first round, and I will continue working on improving it in the future PRs. The changes of this PR for run_eval are all good to go and can be used right away. unless you want to merge all the return values from `run_generate` into one dict, instead of returning a tuple of 2 separate ones. Let me know and I will attend to that first then. And, yes, all the tests will be added too - it's on my short todo list.<|||||>Yes one dict please!<|||||>OK, test added. `test_run_eval_search` is about 3 times slower than `test_run_eval` (it runs eval 4 times). Still OK to leave it as a normal test (no slow)? `test_run_eval`: ``` Results (15.45s): 3 passed real 0m16.193s user 0m13.079s sys 0m1.010s ``` `test_run_eval_search` ``` Results (51.18s): 3 passed real 0m51.962s user 0m42.695s sys 0m1.688s ``` and currently supporting: ``` task_score_names = { "translation": ["bleu"], "translation_en_to_de": ["bleu"], "summarization": ["rouge1", "rouge2", "rougeL"], } ``` <|||||>How do I go about creating a `TINY_FSMT` model, @sshleifer?<|||||>this looks like a hack: ``` try: from .utils import calculate_bleu, calculate_rouge, parse_numeric_n_bool_cl_kwargs, use_task_specific_params except ImportError: from utils import calculate_bleu, calculate_rouge, parse_numeric_n_bool_cl_kwargs, use_task_specific_params ``` would it be better to use `dirname(__file__)` and insert that into `sys.path` so it works no matter where it gets invoked from? Actually adding `examples/seq2seq/conftest.py` that will insert a resolved `examples/seq2seq` into `sys.path` is probably a better fix as it'll solve this issue for all tests under that folder. (see `examples/conftest.py`). and then copy that into each of the examples folders that will tweak their corresponding path. (`pytest` always runs any `conftest.py` it finds first) If you're in agreement I will make another PR to fix those.<|||||> Please mark your test `@slow` mine should probably be `@slow` too. Ideally we have one fast one. `conftest` great idea! 
If you change imports, its important to verify that scripts actually run afterwards (you can get the unittests passing without try/except).<|||||>Tiny model code (roughly): ```python tok = FSMTTokenizer.from_pretrained('facebook/wmt19-en-de') config = FMSTConfig(decoder_layers=1, encoder_layers=1, vocab_size=vocab_size, d_model=4, encoder_ffn_dim=4, decoder_ffn_dim=4, encoder_attention_heads=1, decoder_attention_heads=1) tiny_model = FSMTModel(config) print(tiny_model.num_parameters()) # Test it src_text, tgt_text = ['I am a small frog.'], ['Ich bin ein kleiner frosch.'] batch = tok.prepare_seq2seq_batch(src_text, tgt_texts=tgt_text) outputs = tiny_model(**batch, return_dict=True) print(outputs.loss) tiny_model.save_pretrained('tiny_fsmt_en_de') # transformers-cli upload tiny_fsmt_en_de stas ```<|||||>> Please mark your test `@slow` mine should probably be `@slow` too. Ideally we have one fast one. Probably having just one of of the models is good enough for a fast test, since it checks that the script works. And then do all the others slow? > `conftest` great idea! If you change imports, its important to verify that scripts actually run afterwards (you can get the unittests passing without try/except). Will do. <|||||>👍 <|||||>> Tiny model code (roughly): Thank you! don't we want to: 1. include the tokenizer files in the s3 model? (vocabs and merges?) 2. would it make sense to create a tiny vocab and use that for tokenizer? Otherwise with these small settings I still get 336552 params. <|||||>1) yes 2) tiny vocab would definitely be better, but is not required! tiny_mbart vocab size is around 300K also.<|||||>OK, here is the full script: ``` from transformers import FSMTTokenizer, FSMTConfig, FSMTForConditionalGeneration mname = "facebook/wmt19-en-de" tokenizer = FSMTTokenizer.from_pretrained(mname) # get the correct vocab sizes, etc. from the master model config = FSMTConfig.from_pretrained(mname) config.update(dict( d_model=4, encoder_layers=1, decoder_layers=1, encoder_ffn_dim=4, decoder_ffn_dim=4, encoder_attention_heads=1, decoder_attention_heads=1)) tiny_model = FSMTForConditionalGeneration(config) print(f"num of params {tiny_model.num_parameters()}") # Test it batch = tokenizer.prepare_seq2seq_batch(["Making tiny model"]) outputs = tiny_model(**batch, return_dict=True) # Save tiny_model.save_pretrained('tiny-wmt19-en-de') tokenizer.save_pretrained('tiny-wmt19-en-de') ``` I end up with 3.1MB of files: ``` l tiny-wmt19-en-de/ total 3.1M -rw-rw-r-- 1 stas stas 2.0K Sep 17 11:30 config.json -rw-rw-r-- 1 stas stas 308K Sep 17 11:30 merges.txt -rw-rw-r-- 1 stas stas 1.4M Sep 17 11:30 pytorch_model.bin -rw-rw-r-- 1 stas stas 85 Sep 17 11:30 special_tokens_map.json -rw-rw-r-- 1 stas stas 111 Sep 17 11:30 tokenizer_config.json -rw-rw-r-- 1 stas stas 747K Sep 17 11:30 vocab-src.json -rw-rw-r-- 1 stas stas 747K Sep 17 11:30 vocab-tgt.json ``` Is this reasonable for non-slow tests? > tiny vocab would definitely be better, but is not required! tiny_mbart vocab size is around 300K also. Or should I make a custom small vocab? It'd shave off about half the size. Probably letter-long bpe codes, so that it could tokenize any input still. Here currently we have 1.5M in dict files.<|||||>I would try it and give up if it's hard. I've given up on trying to shrink sentencepiece vocabs.
transformers
7,108
closed
[QOL] add signature for prepare_seq2seq_batch
This PR adds the signature, but no implementation, of `prepare_seq2seq_batch` to `PreTrainedTokenizer` to allow IDE autocompletion when `AutoTokenizer` is used.

+ The signature is quite large and it is nice to see what needs to be supplied.
+ The signature is enforced by the unit tests, so it won't be misleading.

On this branch, where `self.tokenizer = AutoTokenizer(...)`:

![image](https://user-images.githubusercontent.com/6045025/93033269-8ed6d480-f603-11ea-9cc4-1947d40de134.png)
09-14-2020 00:54:22
09-14-2020 00:54:22
Didn't know docstring inheritance worked like that. Very cool!<|||||>Docstring looks great to me, thanks for adding!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=h1) Report > Merging [#7108](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/206b78d4850d3c6fe85a015654293fc4b803ed7b?el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7108/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7108 +/- ## ========================================== + Coverage 80.84% 80.87% +0.02% ========================================== Files 168 168 Lines 32284 32285 +1 ========================================== + Hits 26099 26109 +10 + Misses 6185 6176 -9 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <ø> (ø)` | | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `96.82% <ø> (ø)` | | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.04% <ø> (ø)` | | | [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.15% <100.00%> (ø)` | | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.88% <100.00%> (+0.03%)` | :arrow_up: | | [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.83% <0.00%> (-0.36%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.08% <0.00%> (+0.24%)` | :arrow_up: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7108/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=footer). Last update [206b78d...6adf419](https://codecov.io/gh/huggingface/transformers/pull/7108?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
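A minimal usage sketch of the method whose signature this adds; the Marian checkpoint is just an example, and the exact keys in the returned batch depend on the tokenizer:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-ro")

# depending on the tokenizer/version you may need to pass return_tensors="pt" explicitly
batch = tokenizer.prepare_seq2seq_batch(src_texts=["I am a small frog."])
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```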
transformers
7,107
closed
Update xsum length penalty to better values
+ Improves ROUGE by 0.4 points.
+ Already reflected on S3, so this is just a unit-test fix.
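For context, the length penalty is just a `generate()` kwarg whose default is read from the model config; a rough usage sketch (the model name and the other generation settings are illustrative, not necessarily what this PR touches):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "sshleifer/distilbart-xsum-12-3"  # example xsum model, used here only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "The local council has approved plans for a new bridge across the river."
batch = tokenizer([article], return_tensors="pt", truncation=True)

# length_penalty defaults to the value stored in the model config (the value updated on S3),
# but it can also be overridden per call; the 0.6 below is just an example value.
summary_ids = model.generate(batch["input_ids"], length_penalty=0.6, num_beams=6, max_length=62)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```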
09-14-2020 00:46:50
09-14-2020 00:46:50
transformers
7,106
closed
One command to run+aggregate distributed evaluation results
### Current Situation

In https://github.com/huggingface/transformers/pull/7105, I wrote a three-command combo to run distributed eval. The three commands are:
```
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --fp16 --bs 16
python aggregate_distributed_results.py tmp_gen tmp_gen2
rm -rf tmp_gen
```
+ The first command splits up the data and runs `generate` on a chunk for each GPU, saving results to `rank_{rank}.json`.
+ The second command combines the json results, either just calculating metrics or resaving the generations to disk as `{save_dir}.pred_target`, `{save_dir}.source`, and (optionally) `{save_dir}.target`.
+ The third command deletes the rank.json files.
+ The saving of these independent files to disk in the second command is useful for pseudolabeling, where we train a small model on the predictions of a big model. We have to do more book-keeping than in run_eval.py because I haven't yet determined how to reorder predictions to match the original data, so I just save the original data (roughly, there might be truncation issues) and then write it back to disk.

Goal: one command that uses multiple GPUs, saves **aggregated** files, and computes metrics to `metrics.json`. If this command cannot guarantee the ordering of the generations, it must save the necessary data to disk. The design choices in my dirty first attempt do not need to be continued, as long as we can compute metrics and train a second model with the predictions as the new labels.

There are many ways to accomplish this; here are a few ideas (not mutually exclusive). Ideally,
```
python run_eval.py (existing_args) --gpus 2
```
would just work, with the order "figured out" somehow, possibly by returning ids from `Seq2SeqDataset`. No need to save source/labels since they are in the correct order. To me this sounds hard to implement; I tried briefly and gave up. The goal now is one command, but it doesn't need to be the `run_eval.py` command.

### Figuring out ordering by having Seq2SeqDataset return ids
- Ideally, this would be one command with the order "figured out" somehow, possibly by returning ids from `Seq2SeqDataset`. Then you don't need to save labels/source documents to disk, you can just reorder the predictions.

### Launch n processes in the code rather than from the command line
If we call `torch.multiprocessing.spawn` ourselves, as in https://github.com/facebookresearch/ParlAI/blob/00efcbebb49524918692638ab580cadeebe70cf8/parlai/scripts/multiprocessing_eval.py#L49, we can wait for the results, join them, and do the reordering in one command.
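A minimal sketch of the `torch.multiprocessing.spawn` idea, assuming nothing about the real `run_eval.py`/`Seq2SeqDataset` internals (the function names are hypothetical and the worker fakes generation by upper-casing its inputs; only the spawn/gather/reorder pattern is the point):

```python
import torch.multiprocessing as mp

def generate_chunk(rank, world_size, examples, return_dict):
    # Each worker handles every world_size-th example, keyed by the original index,
    # so the parent process can reassemble predictions in the original order.
    preds = {}
    for idx in range(rank, len(examples), world_size):
        preds[idx] = examples[idx].upper()  # stand-in for model.generate(...) on GPU `rank`
    return_dict[rank] = preds

def distributed_generate(examples, world_size=2):
    manager = mp.Manager()
    return_dict = manager.dict()
    # spawn blocks until every worker finishes, so gathering and reordering
    # happen inside the same command, with no rank_*.json files on disk.
    mp.spawn(generate_chunk, args=(world_size, examples, return_dict), nprocs=world_size, join=True)
    merged = {}
    for rank_preds in return_dict.values():
        merged.update(rank_preds)
    return [merged[i] for i in range(len(examples))]

if __name__ == "__main__":
    print(distributed_generate(["first source document", "second source document"]))
```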
09-13-2020 21:24:18
09-13-2020 21:24:18
@stas00 let me know if this interests you, no pressure<|||||>Yes, please. Especially since I asked for it ;)<|||||>Cleaned up a bit [here](https://github.com/huggingface/transformers/pull/7110). It is super fast! This is the current workflow though :(
```
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --model_name sshleifer/distilbart-xsum-12-3 --save_dir tmp_gen --input_path xsum --type_path test --max_source_length 1024 --length_penalty 0.6
python aggregate_distributed_results.py tmp_gen tmp_gen --just_metrics
mv tmp_gen/metrics.json test_rouge.json
rm -rf tmp_gen
```
<|||||>Also, metrics diverge a bit from 1 GPU, hopefully because `DistributedSortishSampler` adds extra examples here: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L258. That issue is out of scope for this PR, just a note; I may separately PR a kwarg to the sampler to not add extra examples, in a non-conflicting way. Also, the code is still 4/10 clean, feel free to rename variables/improve readability as you see fit.<|||||>Still todo:
- [ ] a better solution than writing a bunch of json files and hoping they all complete within 5 minutes of each other.
- [ ] diagnosing differences in generations.

I will flesh these out later on.
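To illustrate why padding in a distributed sampler can shift metrics, here is a toy sketch (not the real `DistributedSortishSampler`; the class name and the `add_extra_examples` kwarg are hypothetical):

```python
from torch.utils.data import Sampler

class ToyDistributedSampler(Sampler):
    """Pads the index list so every rank gets the same number of examples.
    The padded (duplicated) examples get scored twice, which is one way
    aggregate metrics can drift away from the single-GPU numbers."""

    def __init__(self, dataset, num_replicas, rank, add_extra_examples=True):
        self.dataset = dataset
        self.num_replicas = num_replicas
        self.rank = rank
        self.add_extra_examples = add_extra_examples  # hypothetical opt-out flag

    def __iter__(self):
        indices = list(range(len(self.dataset)))
        if self.add_extra_examples:
            # round up to a multiple of num_replicas by repeating the first few indices
            total = -(-len(indices) // self.num_replicas) * self.num_replicas
            indices += indices[: total - len(indices)]
        return iter(indices[self.rank :: self.num_replicas])

    def __len__(self):
        if self.add_extra_examples:
            return -(-len(self.dataset) // self.num_replicas)
        return len(range(self.rank, len(self.dataset), self.num_replicas))
```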
transformers
7,105
closed
[s2s] two stage run_distributed_eval.py
```bash
python -m torch.distributed.launch --nproc_per_node=2 run_distributed_eval.py --fp16 --bs 16
python cleanup_distributed.py tmp_gen tmp_gen2
```
09-13-2020 19:30:43
09-13-2020 19:30:43
transformers
7,104
closed
[s2s distill] allow pegasus-12-12
Fixes #{issue number}
09-13-2020 19:27:08
09-13-2020 19:27:08
transformers
7,103
closed
ValueError: Wrong shape for input_ids (shape torch.Size([18])) or attention_mask (shape torch.Size([18]))
## Environment info

- `transformers` version: 3.1.0
- Platform: Darwin-18.0.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

(Running on a Macbook Pro, MacOS Mojave 10.14)

### Who can help
albert, bert, GPT2, XLM: @LysandreJik

## Information

Model I am using (Bert, XLNet ...): Bert Multilingual

The problem arises when using:
* [ ] the official example scripts: (give details below)

Hi, this is an error I'm getting while running a package someone else created, called [simalign](https://github.com/cisnlp/simalign/). I believe their package works with transformers 2.3.0 (I tried it and it worked), but the error appears after upgrading to the newest transformers, which I did because I really wanted the fill-mask feature that was released more recently. (Actually, I wanted a way to get the probability of a word in a certain position given the words around it (left and right) in a sentence, and have yet to find a way to do that.) At the same time I also want to run simalign in the same application, so I can't just downgrade to the previous transformers to make it work again. I'm not exactly sure what is causing the error (as I'm not experienced enough), but I'll share the traceback below:

```
>>> import simalign
>>>
>>> source_sentence = "Sir Nils Olav III. was knighted by the norwegian king ."
>>> target_sentence = "Nils Olav der Dritte wurde vom norwegischen König zum Ritter geschlagen ."
>>> model = simalign.SentenceAligner()
2020-09-13 18:02:40,806 - simalign.simalign - INFO - Initialized the EmbeddingLoader with model: bert-base-multilingual-cased
I0913 18:02:40.806071 4394976704 simalign.py:47] Initialized the EmbeddingLoader with model: bert-base-multilingual-cased
>>> result = model.get_word_aligns(source_sentence.split(), target_sentence.split())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simalign/simalign.py", line 181, in get_word_aligns
    vectors = self.embed_loader.get_embed_list(list(bpe_lists))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simalign/simalign.py", line 65, in get_embed_list
    outputs = [self.emb_model(in_ids.to(self.device)) for in_ids in inputs]
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simalign/simalign.py", line 65, in <listcomp>
    outputs = [self.emb_model(in_ids.to(self.device)) for in_ids in inputs]
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/modeling_bert.py", line 806, in forward
    extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/modeling_utils.py", line 248, in get_extended_attention_mask
    input_shape, attention_mask.shape
ValueError: Wrong shape for input_ids (shape torch.Size([18])) or attention_mask (shape torch.Size([18]))
```

I'd love it if anyone could tell me if I could replace a line in the code somewhere to fix this.
Here's a sample of the code region where the error occurs:

```
def get_embed_list(self, sent_pair):
    if self.emb_model is not None:
        sent_ids = [self.tokenizer.convert_tokens_to_ids(x) for x in sent_pair]
        inputs = [self.tokenizer.prepare_for_model(sent, return_token_type_ids=True, return_tensors='pt')['input_ids'] for sent in sent_ids]
        outputs = [self.emb_model(in_ids.to(self.device)) for in_ids in inputs]

        # use vectors from layer 8
        vectors = [x[2][self.layer].cpu().detach().numpy()[0][1:-1] for x in outputs]
        return vectors
    else:
        return None
```

The tasks I am working on is:
* [ ] my own task or dataset: (give details below)

I'm trying to use simalign in conjunction with finding the probability of a word in a sentence given its position in that sentence (meaning I want to assess the probability of the words 'to' and 'too' in this sentence: "I went to the store"). Given a function like this:

```
find_probability_of_word_in_given_sentence('to', f'I went {given_word} the store')
find_probability_of_word_in_given_sentence('too', f'I went {given_word} the store')
```

I'd want an output like this:

```
to: 0.849283
too: 0.021412
```

And I don't know if there's a way to do that with transformers version 2.3.0, since it does not have the `fill-mask` feature (although that feature returns word predictions rather than the probability of a given word).

## To reproduce

Steps to reproduce the behavior:

1. Install simalign
2. Upgrade transformers
3. Run the simalign example here: https://github.com/cisnlp/simalign/blob/master/examples/align_example.py

## Expected behavior

Simalign works as it does with transformers 2.3.0 and returns a list of tuples with numbers.
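For what it's worth, here is a minimal sketch of one possible workaround, assuming the error comes from the model being called with a 1-D `input_ids` tensor while recent versions expect a `(batch_size, sequence_length)` shape. The snippet only paraphrases the simalign code, and adding the batch dimension with `unsqueeze(0)` is an assumption, not a confirmed fix for the upstream package:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased", output_hidden_states=True)

tokens = tokenizer.tokenize("Sir Nils Olav was knighted .")
sent_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = tokenizer.prepare_for_model(sent_ids, return_token_type_ids=True, return_tensors="pt")["input_ids"]

# input_ids comes back 1-D here (shape: sequence_length), matching the torch.Size([18])
# in the traceback; add a batch dimension before the forward pass.
with torch.no_grad():
    outputs = model(input_ids.unsqueeze(0))
hidden_states = outputs[2]  # per-layer hidden states, available because output_hidden_states=True
```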
09-13-2020 17:04:26
09-13-2020 17:04:26
I had the same error; I downgraded with `pip install transformers==3.0.2` to get it to work.<|||||>My problem was not that I needed to downgrade, but that I needed new functionality from the newest transformers while also using the features of simalign. I solved this bug here: https://github.com/cisnlp/simalign/issues/10#issuecomment-694407502